Stefan Henneken: IEC 61131-3: SOLID – Five principles for better software

In addition to the syntax of a programming language and an understanding of the most important libraries and frameworks, other methodologies – such as design patterns – belong to the fundamentals of software development. Besides design patterns, design principles are also a helpful tool in the development of software. SOLID is an acronym for five such design principles, which help developers design software that is more understandable, flexible, and maintainable.

In larger software projects, a great number of function blocks exist that are connected to each other via inheritance and references. These units interact through calls to the function blocks and their methods. If designed wrongly, this interaction of code units can unnecessarily complicate extending the software or finding errors. In order to develop sustainable software, the function blocks should be modeled in such a way that they are easy to extend.

Many design patterns apply the SOLID principles to suggest an architectural approach for the respective task. The SOLID principles are not to be understood as rules, but rather as advice. They are a subset of many principles that the American software engineer and lecturer Robert C. Martin (also known as Uncle Bob) presented in his book (Amazon advertising link *) Clean Architecture: A Craftsman’s Guide to Software Structure and Design. The SOLID principles are:

  • Single Responsibility Principle
  • Open Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

The principles shown here are hints that make it easier for a developer to improve code quality. The effort pays for itself after a short time, because changes become easier and tests and debugging become faster. Thus, knowledge of these five design principles should be part of every software developer’s basic knowledge.

Single Responsibility Principle

A function block should have only one responsibility. If the functionality of a program is changed, this should affect only a few function blocks. Many small function blocks are better than a few large ones. The code appears more extensive at first sight, but it is easier to organize. A program with many smaller function blocks, each for a specific task, is easier to maintain than a few large function blocks that claim to cover everything.

Open Closed Principle

According to the Open Closed Principle, function blocks should be open for extensions but closed for changes. Extensions should be implemented only by adding code, not by changing existing code. A good example of this principle is inheritance: a new function block inherits from an existing function block. New functions can thus be added without having to change the existing function block. It is not even necessary to have its program code.

Liskov Substitution Principle

The Liskov Substitution Principle requires that derived function blocks must always be usable in place of their base function blocks. Derived function blocks must behave like their base function blocks. A derived function block may extend the base function block, but not restrict it.

Interface Segregation Principle

Many client-specific interfaces are better than one universal interface. Accordingly, an interface should only contain functions that really belong closely together. Comprehensive interfaces create couplings between otherwise independent parts of a program. Thus, the Interface Segregation Principle has a similar goal to the Single Responsibility Principle, but there are different approaches to implementing these two principles.

Dependency Inversion Principle

Function blocks are often linearly dependent on each other in one direction. A function block for logging messages calls methods of another function block to write data to a database. There is a fixed dependency between the function block for logging and the function block for accessing the database. The Dependency Inversion Principle resolves this fixed dependency by defining a common interface, which is implemented by the block for the database access.

Conclusion

In the following posts, I will introduce the individual SOLID principles in more detail and try to explain them using an example. The sample program from my post IEC 61131-3: The Command Pattern serves as a basis. With each SOLID principle, I will try to optimize the program further.

I will start shortly with the Dependency Inversion Principle.

Jürgen Gutsch: ASP.NET Core in .NET 6 - Shadow-copying in IIS

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to explore shadow-copying in IIS.

Since .NET locks the assemblies that are in use by a running process, it is impossible to replace them during an update. This is especially a problem in scenarios where you self-host an IIS server or need to update a running application via FTP.

To solve this, Microsoft added a new feature to the ASP.NET Core module for IIS to shadow copy the application assemblies to a specific folder.

Exploring Shadow-copying in IIS

To enable shadow-copying, you need to install the latest preview version of the ASP.NET Core module.

On a self-hosted IIS server, this requires a new version of the hosting bundle. On Azure App Services, you will be required to install a new ASP.NET Core runtime site extension (https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-6-preview-3/#shadow-copying-in-iis).

If you have the requirements ready, you should add a web.config to your project or edit the web.config that is created during the publish process (dotnet publish). Since most of us use continuous integration and can't touch the web.config after it gets created automatically, you should add it to the project. Just copy the one that got created by dotnet publish. Continuous integration will not override an existing web.config.

To enable it, you will need to add some new handlerSettings to the web.config:

<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
    <handlerSettings>
        <handlerSetting name="experimentalEnableShadowCopy" value="true" />
        <handlerSetting name="shadowCopyDirectory" value="../ShadowCopyDirectory/" />
    </handlerSettings>
</aspNetCore>

This enables shadow-copying and specifies the shadow copy directory.

After the changes are deployed, you should be able to update the assemblies of a running application.

What's next?

In the next part, I'm going to look into the support for BlazorWebView controls for WPF & Windows Forms in ASP.NET Core.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Hot Reload

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look at the .NET 6 support for Hot Reload.

In preview 3, Microsoft started to add support for Hot Reload, which automatically gets started when you run dotnet watch. Preview 4 also includes support for Hot Reload in Visual Studio. Currently, I'm using preview 5 to try Hot Reload.

Playing around with Hot Reload

To play around with it and see how it works, I create a new MVC project using the following commands:

dotnet new mvc -o HotReload -n HotReload
cd HotReload
code .

These commands create an MVC app, change into the project folder, and open VSCode.

dotnet run will not start the application with Hot Reload enabled, but dotnet watch does.

Run the command dotnet watch and see what happens if you change some C#, HTML, or CSS files. It immediately updates the browser and shows you the results. You can see what's happening in the console as well.
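
For example, changing the heading in Views/Home/Index.cshtml of the default MVC template is reflected in the browser right after saving (the greeting text is just an arbitrary edit of mine):

<div class="text-center">
    <h1 class="display-4">Hello from Hot Reload!</h1>
    <p>Learn about <a href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>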

Hot Reload in action

As mentioned initially, Hot Reload is enabled by default if you use dotnet watch. If you don't want to use Hot Reload, you need to add the option --no-hot-reload to the command:

dotnet watch --no-hot-reload

Hot Reload should also work with WPF and Windows Forms projects, as well as with .NET MAUI projects. I had a quick try with WPF, and it didn't really work with XAML files. Sometimes it also got stuck in an infinite build loop.

More about Hot Reload in this blog post: https://devblogs.microsoft.com/dotnet/introducing-net-hot-reload/

What's next?

In the next part, I'm going to look into the support for shadow-copying in IIS.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - HTTP/3 endpoint TLS configuration

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into HTTP/3 endpoint TLS configuration.

In preview 3, Microsoft started to add support for HTTP/3, which brings a lot of improvements to the web. HTTP/3 brings a faster connection setup as well as improved performance on low-quality networks.

Microsoft now adds support for HTTP/3 as well as support for configuring TLS (HTTPS) for HTTP/3.

More about HTTP/3

HTTP/3 endpoint TLS configuration

Let's see how you can configure HTTP/3 in a small MVC app created using the following commands:

dotnet new mvc -o Http3Tls -n Http3Tls
cd Http3Tls
code .

These commands create an MVC app, change into the project folder, and open VSCode.

In the Program.cs we need to configure HTTP/3 as shown in Microsoft's blog post:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder
                    .ConfigureKestrel((context, options) =>
                    {
                        options.EnableAltSvc = true;
                        options.Listen(IPAddress.Any, 5001, listenOptions =>
                        {
                            // Enables HTTP/3
                            listenOptions.Protocols = HttpProtocols.Http3;
                            // Adds a TLS certificate to the endpoint
                            listenOptions.UseHttps(httpsOptions =>
                            {
                                httpsOptions.ServerCertificate = LoadCertificate();
                            });
                        });
                    })
                    .UseStartup<Startup>();
            });
}

The flag EnableAltSvc adds an Alt-Svc header to responses, telling browsers that there are alternative services to the existing HTTP/1 or HTTP/2 endpoints. This is needed so that browsers treat the alternative service - HTTP/3 in this case - like the existing ones. This requires an HTTPS connection to be secure and trusted.
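
Note that LoadCertificate() is left undefined in Microsoft's snippet. A minimal sketch of what it could look like, assuming a certificate in a local PFX file (the file name and password are placeholders):

using System.Security.Cryptography.X509Certificates;

// ...

private static X509Certificate2 LoadCertificate()
{
    // Placeholder file name and password; in production the certificate
    // would come from a certificate store or a secrets provider instead.
    return new X509Certificate2("localhost.pfx", "password");
}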

What's next?

In the next part, I'm going to look into the .NET Hot Reload support in ASP.NET Core.

Golo Roden: On our own behalf: The tech:lounge Summer Edition

The tech:lounge Summer Edition is a series of 12 webinars covering architecture, code quality, containerization, and modern development, aimed at beginners and advanced developers alike.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Preserve prerendered state in Blazor apps

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into preserving prerendered state in Blazor apps.

Blazor apps can be prerendered on the server to optimize the load time: the app gets rendered immediately in the browser and is available to the user. Unfortunately, the state that is used while prerendering on the server is lost on the client and needs to be recreated once the page is fully loaded, and the UI may flicker while the state is recreated and the prerendered HTML is replaced by the HTML that is rendered again on the client.

To solve that, Microsoft added support for persisting the state into the prerendered page using the <persist-component-state /> tag helper. This helps to set up a state that is identical on the server and on the client.

Actually, I have no idea why this isn't implemented as the default behavior in case the app gets prerendered. It could be done easily and wouldn't break anything, I guess.

Try to preserve prerendered states

I tried it with a new Blazor app and it worked quite well on the FetchData page. The important part is to add the persist-component-state tag helper after all used components in the _Host.cshtml. I placed it right before the script reference to blazor.server.js:

<body>
    <component type="typeof(App)" render-mode="ServerPrerendered" />

    <div id="blazor-error-ui">
        <environment include="Staging,Production">
            An error has occurred. This application may no longer respond until reloaded.
        </environment>
        <environment include="Development">
            An unhandled exception has occurred. See browser dev tools for details.
        </environment>
        <a href="" class="reload">Reload</a>
        <a class="dismiss">🗙</a>
    </div>

    <persist-component-state /> <!-- <== relevant tag helper -->
    <script src="_framework/blazor.server.js"></script>
</body>

The next snippet is more or less the same as in Microsoft's blog post, except that the forecasts variable is missing there, and System.Text.Json should be added to the usings as well:

@page "/fetchdata"
@implements IDisposable

@using PrerenderedState.Data
@using System.Text.Json
@inject WeatherForecastService ForecastService
@inject ComponentApplicationState ApplicationState

...

@code {
    private WeatherForecast[] forecasts;
    protected override async Task OnInitializedAsync()
    {
        ApplicationState.OnPersisting += PersistForecasts;
        if (!ApplicationState.TryTakePersistedState("fetchdata", out var data))
        {
            forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
        }
        else
        {
            var options = new JsonSerializerOptions
            {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
                PropertyNameCaseInsensitive = true,
            };
            forecasts = JsonSerializer.Deserialize<WeatherForecast[]>(data, options);
        }
    }

    private Task PersistForecasts()
    {
        ApplicationState.PersistAsJson("fetchdata", forecasts);
        return Task.CompletedTask;
    }

    void IDisposable.Dispose()
    {
        ApplicationState.OnPersisting -= PersistForecasts;
    }
}

What is the tag helper doing?

It renders an HTML comment to the page that contains the state in an encoded format:

<!--Blazor-Component-State:CfDJ8IZEzFk/KP1DoDRucCE
6nSjBxhfV8XW7LAhH9nkG90KnWp6A83ylBVm+Fkac8gozf2hBP
DSQHeh/jejDrmtDEesKaoyjBNs9G9EDDyyOe1o1zuLnN507mK0
Bjkbyr82Mw83mIVl21n8mxherLqhyuDH3QoHscgIL7rQKBhejP
qGqQLj0WvVYdvYNc6I+FuW4v960+1xiF5XZuEDhKJpFODIZIE7
tIDHJh8NEBWAY5AnenqtydH7382TaVbn+1e0oLFrrSWrNWVRbJ
QcRUR5xpa+yWOZ7U52iudA27ZZr5Z8+LrU9/QVre3ehO+WSW7D
Z/kSnvSkpSnGRMjFDUSgWJp3WE/y9ZKIqzmnOymihJARThmUUM
ewmU2oKkb6alKJ9SabJ0Dbj/ZLwJiDpIt1je5RpZGQvEp7SWJy
VMGieHgGL9lp2UIKwCX2HMiVB+b7UpYSby5+EjLW6FB8Yh5yY3
7IK90KVzl/45UDIJWWXpltHMhJqX2eiFxT7QS3p7tbG08jeBBf
6d74Bb7q6yxfgfRuPigERZhM1MEpqYvkHsugj7TC/z1mN2RF2l
yqjbF3VG/bpATkQyVkcZq4ll/zg+98PcXS18waisz7gntG3iwM
u/sf8ugqaFWQ1hS8CU3+JtvINC7bRDfg4g4joJjlutmmlMcttQ
GCCkt+hkGKxeAyMzHbnRkv8pVyPr4ckCjLdW02H5QhgebOWGGZ
etGlFih1Dtr5cidHT0ra72pgWNoSb7jqk4wVE+E5gmEOiuX0N2
/avvuwAnAifY9Sha1cY27ZxcNJQ5ZOejTXwquuitAdotatdk89
id3WDiTt6T0LvUywvMoga8qWIPqeZw+0VmBKJjFOwQRqx1dy9E
qq4zpTBOECcinKTsbnSb5KkRLQkrCQi4MJCkh/JzvKXP+/bksd
8B3ife7ad1aFgYwX/jvAtO8amzGiMaQvgYQyHsOQwqfrYUSFZm
9hGsdXUmWlE/g8VejWlSUiforHpVjPJojsfYfmeLOjRoSPBTQZ
Q0LL4ie/QFmKXY/TI7GjJCs5UuPM=-->

(I added some line breaks here)

This reminds me of the ViewState we had in ASP.NET WebForms. Does this make Blazor Server the successor of ASP.NET WebForms? Just kidding.

Actually, it is not really the ViewState, because it doesn't get sent back to the server. It just helps the client restore the state that was initially created on the server while the page was prerendered.

What's next?

In the next part, I'm going to look into the HTTP/3 endpoint TLS configuration in ASP.NET Core.

Golo Roden: 12 aspects for better software

Teams and companies that want to improve the quality of their software development will find valuable help in the 12 aspects of the "Joel Test".

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Infer component generic types from ancestor components

This is the ninth part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into inferring generic types from ancestor components.

Until now, generic components in Blazor needed the generic type to be defined explicitly in the markup code. With preview 2 of .NET 6, child components can infer the generic type from an ancestor component.

In the announcement post, Microsoft shows a quick demo with the Grid component. Let's have a quick look at the snippets:

<Grid Items="@people">
    <Column TItem="Person" Name="Full name">@context.FirstName @context.LastName</Column>
    <Column TItem="Person" Name="E-mail address">@context.Email</Column>
</Grid>

In this snippet, the Column component specifies the generic type via the TItem attribute. This is no longer needed, as they showed with this sample:

<Grid Items="@people">
    <Column Name="Full name">@context.FirstName @context.LastName</Column>
    <Column Name="E-mail address">@context.Email</Column>
</Grid>

Since I don't like grids at all, I would like to try to build a SimpleList component that uses a generic ListItem child component to render the items in the list.

Try to infer generic types

As usual, I have to create a project first. This time I'm going to use a Blazor Server project:

dotnet new blazorserver -n ComponentGenericTypes -o ComponentGenericTypes
cd ComponentGenericTypes
code .

This creates a new Blazor Server project called ComponentGenericTypes, changes into the project directory, and opens VSCode to start working on the project.

To generate some meaningful dummy data, I'm going to add my favorite NuGet package GenFu:

dotnet add package GenFu

In the Index.razor, I replaced the existing code with the following:

@page "/"
@using ComponentGenericTypes.Components
@using ComponentGenericTypes.Data
@using GenFu

<h1>Hello, world!</h1>

<SimpleList Items="@people">
    <ListItem>
        <p>
            Hallo <b>@context.FirstName @context.LastName</b><br />
            @context.Email
        </p>
    </ListItem>
</SimpleList>

@code {
    public IEnumerable<Person> people = A.ListOf<Person>(15);    
}

This will not work yet, but let's quickly go through it to get the idea. Since this code uses two components that are located in the Components folder, we need to add a using for ComponentGenericTypes.Components, as well as a using for ComponentGenericTypes.Data, because we want to use the Person class. Neither the components nor the class exists yet.

At the bottom of the file, we create a list of 15 persons using GenFu and assign it to a variable that is bound to the SimpleList component. The ListItem component is the direct child component of the SimpleList and behaves like a template for the items. It also contains markup code to display the values.

For the Person class I created a new C# file in the Data folder and added the following code:

namespace ComponentGenericTypes.Data
{
    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
}

This is a pretty simple class, but the property names are important. If such a class is instantiated by GenFu, it automatically writes first names into the FirstName property and last names into the LastName property, and it also writes valid email addresses into the Email property. It also works with streets, addresses, ZIP codes, phone numbers, and so on. This is why GenFu is my favorite NuGet package.
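
As a quick sketch of that convention-based filling: besides A.ListOf<Person>(15) from the snippet above, GenFu can also fill a single instance via A.New<T>():

var person = A.New<Person>();
// FirstName, LastName, and Email now contain plausible values,
// derived purely from the property names - no configuration needed.

var people = A.ListOf<Person>(15);
// The same conventions, applied to a whole list of 15 persons.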

Now let's create a Components folder and place the SimpleList component inside. The code looks like this:

@typeparam TItem
@attribute [CascadingTypeParameter(nameof(TItem))]

<CascadingValue IsFixed="true" Value="Items">@ChildContent</CascadingValue>

@code {
    [Parameter] public IEnumerable<TItem> Items { get; set; }
    [Parameter] public RenderFragment ChildContent { get; set; }
}

It defines the generic type parameter TItem and a property called Items that is of type IEnumerable of TItem. That makes the component generic enough to work with almost any kind of IEnumerable. To support child components, the SimpleList also contains a RenderFragment property called ChildContent.

The attribute in the second line does the magic: it cascades the generic type parameter to the child components. This is why we don't need to specify the generic type on the child component. In the third line, we also cascade the property Items to the child component.

Now it's time to create the ListItem component:

@typeparam TItem

@foreach (var item in Items)
{
    <div>@ChildContent(item)</div>
}

@code {
    [CascadingParameter] public IEnumerable<TItem> Items { get; set; }
    [Parameter] public RenderFragment<TItem> ChildContent { get; set; }
}

This component iterates through the list of items and renders the ChildContent, which in this case is a generic RenderFragment. The generic RenderFragment provides a context variable of type TItem that can be used to bind the passed value to child components or HTML markup. As seen in the Index.razor, the context variable will be of type Person:

<ListItem>
    <p>
        Hallo <b>@context.FirstName @context.LastName</b><br />
        @context.Email
    </p>
</ListItem>

That's it! The index page now will show a list of 15 persons:

Generic List Component

Since I'm not really a Blazor expert, the way I implemented the components might not be completely right, but it's working and shows the idea of the topic of this blog post.

What's next?

In the next part, I'm going to look into the support for preserving prerendered state in Blazor apps in ASP.NET Core.

Code-Inside Blog: Today I learned (sort of) 'fltmc' to inspect the IO request pipeline of Windows

The headline is obviously a big lie, because I followed this twitter conversation last year, but it’s still interesting to me and I wanted to write it down somewhere.

The starting point was that Bruce Dawson (Google programmer) noticed that building Chrome on Windows is slow for various reasons:

Trentent Tye told him to disable the “filter driver”:

If you have never heard of a “filter driver” (like me :)), you might want to take a look here.

To see the loaded filter drivers on your machine, run fltmc (fltmc.exe) as admin.

This makes more or less sense to me. I’m not really sure what to do with that information, but it’s cool (nerd cool, but anyway :)).

Holger Schwichtenberg: .NET 6 will be released on November 9, 2021

Microsoft has now announced the exact release date of .NET 6.

Holger Schwichtenberg: Build 2021 conference starts today: online and free of charge

The Microsoft Build takes place for the eleventh time, from May 25 to 27, 2021.

Holger Schwichtenberg: Software developer update for .NET and web developers on June 8, 2021 (online)

The info day on June 8, 2021 covers .NET 6, C# 10, WinUI 3, cross-platform development with MAUI and Blazor Desktop, as well as Visual Studio 2022.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - CSS isolation for MVC Views and Razor Pages

This is the eighth part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a quick look at the support for CSS isolation for MVC Views and Razor Pages.

Blazor components already support CSS isolation. MVC Views and Razor Pages now do the same. Since the official blog post shows it on Razor Pages, I'd like to try it in an MVC application.

Trying CSS isolation for MVC Views

At first, I'm going to create a new MVC application project using the .NET CLI:

dotnet new mvc -n CssIsolation -o CssIsolation
cd CssIsolation
code .

These commands create the project, change the directory into the project folder, and open VSCode.

After VSCode opens, create an Index.cshtml.css file in the Views/Home folder. In Visual Studio, this file will be nested under Index.cshtml; VSCode doesn't support this kind of nesting yet.

Like in Microsoft's blog post, I just add a small CSS snippet to the new CSS file to change the color of the H1 element:

h1 {
    color: red;
}

This actually doesn't have any effect yet. Unlike in Blazor, we need to add a reference to a CSS resource that bundles all the isolated CSS. Open the _Layout.cshtml that is located in the Views/Shared folder and add the following line right after the reference to site.css:

<link rel="stylesheet" href="CssIsolation.styles.css" />

Ensure the first part of the URL is the name of your application; it is CssIsolation in my case. If you named your application FooBar, the CSS reference is FooBar.styles.css.

We'll now have a red H1 header:

Isolated CSS: red header

How is this solved?

I had a quick look at the sources to see how the CSS isolation is solved. Every element of the rendered view gets an autogenerated empty attribute that identifies the view:

<div b-zi0vwlqhpg class="text-center">
    <h1 b-zi0vwlqhpg class="display-4">Welcome</h1>
    <p b-zi0vwlqhpg>Learn about <a b-zi0vwlqhpg href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>

Calling the CSS bundle resource in the browser (https://localhost:5001/cssisolation.styles.css) we can see how the CSS is structured:

/* _content/CssIsolation/Views/Home/Index.cshtml.rz.scp.css */
h1[b-zi0vwlqhpg] {
  color: red;
}
/* _content/CssIsolation/Views/Home/Privacy.cshtml.rz.scp.css */
h1[b-tqxfxf7tqz] {
  color: blue;
}

I did the same for Privacy.cshtml to see how the isolation is done in the CSS resource; this is why you see two different files listed here. The autogenerated attribute is appended to every CSS selector used here. This creates unique CSS selectors per view.

I assume this works the same with Razor Pages since both MVC and Razor Pages use the same technique.

This is pretty cool and helpful.

What's next?

In the next part, I'm going to look into the support for inferring component generic types from ancestor components in ASP.NET Core.

Holger Schwichtenberg: End of support for .NET Framework 4.5.2, 4.6, and 4.6.1 as early as April 2022

Microsoft has announced that it will end support for versions 4.5.2, 4.6, and 4.6.1 of the classic .NET Framework prematurely, in just one year.

Jürgen Gutsch: ASP.NET Core in .NET 6 - Support for custom event arguments in Blazor

This is the seventh part of the ASP.NET Core on .NET 6 series. In this post, I want to have a quick look at the support for custom event arguments in Blazor.

In Blazor you can create custom events, and Microsoft now added support for custom event arguments for those custom events as well. Microsoft added a sample in the blog post about preview 2 that I'd like to try in a small Blazor project.

Exploring custom event arguments in Blazor

At first, I'm going to create a new Blazor WebAssembly project using the .NET CLI:

dotnet new blazorwasm -n BlazorCustomEventArgs -o BlazorCustomEventArgs
cd BlazorCustomEventArgs
code .

These commands create the project, change the directory into the project folder, and open VSCode.

After VSCode opens, I create a new folder called CustomEvents and place a new C# file called CustomPasteEventArgs.cs in it. This file contains the first snippet:

using System;
using Microsoft.AspNetCore.Components;

namespace BlazorCustomEventArgs.CustomEvents
{
    [EventHandler("oncustompaste", typeof(CustomPasteEventArgs), enableStopPropagation: true, enablePreventDefault: true)]
    public static class EventHandlers
    {
        // This static class doesn't need to contain any members. It's just a place where we can put
        // [EventHandler] attributes to configure event types on the Razor compiler. This affects the
        // compiler output as well as code completions in the editor.
    }

    public class CustomPasteEventArgs : EventArgs
    {
        // Data for these properties will be supplied by custom JavaScript logic
        public DateTime EventTimestamp { get; set; }
        public string PastedData { get; set; }
    }
}

Additionally, I added a namespace to make it complete.

In the Index.razor in the Pages folder, we add the next snippet of the blog post:

@page "/"
@using BlazorCustomEventArgs.CustomEvents

<p>Try pasting into the following text box:</p>
<input @oncustompaste="HandleCustomPaste" />
<p>@message</p>

@code {
    string message;

    void HandleCustomPaste(CustomPasteEventArgs eventArgs)
    {
        message = $"At {eventArgs.EventTimestamp.ToShortTimeString()}, you pasted: {eventArgs.PastedData}";
    }
}

I need to add the using to match the namespace of the CustomPasteEventArgs. This creates an input element and outputs a message that will be generated in the CustomPaste event handler.

At the end, we need to add some JavaScript in the index.html that is located in the wwwroot folder. This file hosts the actual WebAssembly application. Place this script directly after the script tag for the blazor.webassembly.js:

<script>
    Blazor.registerCustomEventType('custompaste', {
        browserEventName: 'paste',
        createEventArgs: event => {
            // This example only deals with pasting text, but you could use arbitrary JavaScript APIs
            // to deal with users pasting other types of data, such as images
            return {
                eventTimestamp: new Date(),
                pastedData: event.clipboardData.getData('text')
            };
        }
    });
</script>

This binds the default paste event to the custompaste event and adds the pasted text data, as well as the current date, to the CustomPasteEventArgs. The JavaScript object literal should match the CustomPasteEventArgs to get it working properly, except for the casing of the properties.

Blazor doesn't keep you from writing some JavaScript ;-)

Let's try it out. I run the application by calling the dotnet run command or the dotnet watch command in the console:

dotnet run

If the browser doesn't start automatically, copy the displayed HTTPS URL into the browser. It should look like this:

custom event args 1

Now I paste some text into the input element. Et voilà:

custom event args 2

Don't be confused about the date. Since it is created via JavaScript using new Date(), it is a UTC date, which means minus two hours within the CET time zone during daylight saving time.

What's next?

In the next part, I'm going to look into the support for CSS isolation for MVC Views and Razor Pages in ASP.NET Core.

Code-Inside Blog: How to self host Google Fonts

Google Fonts are really nice and widely used. Typically, a Google Font consists of the actual font files (e.g. woff, ttf, eot) and some CSS, which points to those font files.

In one of our applications, we used an HTML/CSS/JS Bootstrap-like theme, and the theme linked some Google Fonts. The problem was that we wanted to self-host everything.

After some research we discovered this tool: Google-Web-Fonts-Helper

Pick your font, select your preferred CSS option (e.g. if you need to support older browsers), and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)

The project site is on GitHub.

Hope this helps!

Christina Hirth: What Does Continuous Delivery Do to a Team

Tl;dr: Continuous integration and delivery are not about a pipeline; they are about trust, psychological safety, a common goal, and real teamwork.

What is needed for CI/CD – and how do you achieve it?

  • No feature branches, but trunk-based development and feature toggles: feature branches mean discontinuous development. CI/CD works with only one temporary branch: the local copy on your machine, which gets integrated the moment you push. “No feature branches” also means pushing your changes at least once a day (see the feature-toggle sketch after this list)
  • A feeling of safety to commit and push your code: trust in yourself and trust in your environment to help you if you fall – or steady you to not fall at all.
  • Quality gates to keep the customer safe
  • Observing and reducing the outcome of your work (as a team, of course)
  • Resilience: accept that errors will happen and make sure that they are not fatal, that you can live with them. This also means being aware of the risk involved in your changes
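
To make the first point more concrete, here is a minimal feature-toggle sketch in C#; the FeatureToggles class and the flag name are hypothetical. Unfinished work can be merged to trunk daily, as long as it stays behind a disabled flag:

public static class FeatureToggles
{
    // Hypothetical flag; in a real system this would be read from
    // configuration or a toggle service rather than hard-coded.
    public static bool UseNewCheckoutFlow => false;
}

public class CheckoutService
{
    public void Checkout()
    {
        if (FeatureToggles.UseNewCheckoutFlow)
        {
            // New, still-unfinished code path: on trunk, but invisible
            // to customers until the flag is switched on.
        }
        else
        {
            // Current, stable code path.
        }
    }
}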

What happens in the team, in the team-work:

  • It enables growing maturity and autonomy due to fast feedback and failing fast and early
  • It makes us real team-workers, “we fail together, we succeed together”
  • It leads to better programmers due to the need for XP practices and the need to know how to deliver backwards-compatible software
  • It has an impact on the architecture and the design (see Accelerate)
  • Psychological safety: eliminates the fear of coding, of making decisions, of having code reviews
  • It gives a common goal, valuable for everybody: customers, devs, testers, PO, company
  • It makes everybody involved happy because of much faster feedback from customers instead of only the feedback of the PO => it allows validating the assumption that the new feature is valuable
  • It drives new ideas and new capabilities because it allows experiments
  • It sets the right priorities: not jumping straight to code, but thinking about how to deliver new capabilities and solve problems (sometimes even by deleting code)

How to start:

  • Agree upon setting CI/CD as a goal for the whole team: focus on how to get there, not on the reasons why it cannot work out
  • Consider all requirements (safety net, coding and review practices, creating the pipeline and the quality gates) as necessary steps and work on them, one after another
  • Agree upon team rules making CI/CD a team responsibility (monitoring errors, fixing them, flaky tests, processes to improve leaks in the safety net, blameless post-mortems)
  • Learn to give and get feedback in a professional manner (“I am not my work”), for example by reading the book Agile Conversations and/or practicing it in the meetup

– – – – –

This bullet-point list was born during this year’s CITCON, a great un-conference on continuous improvement. I am aware that these points can trigger questions and a need for explanations – and I would be happy to answer them 🙂

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Nullable Reference Type Annotations

This is the sixth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a quick look at the new nullable reference type annotations in some ASP.NET Core APIs.

Microsoft added nullable reference types in C# 8, and this is why they applied nullability annotations to parts of ASP.NET Core. This provides additional compile-time safety when using reference types and protects against possible null reference exceptions.

This is not only a new thing with preview 1 but an ongoing change for the next releases. Microsoft will add more and more nullability annotations to the ASP.NET Core APIs in the next versions. You can see the progress in this GitHub Issue: https://github.com/aspnet/Announcements/issues/444

Exploring Nullable Reference Type Annotations

I'd quickly like to see whether this change is already visible in a newly created MVC project.

dotnet new mvc -n NullabilityDemo -o NullabilityDemo
cd NullabilityDemo

This creates a new MVC project and changes the directory into it.

Projects that enable nullable annotations may see new build-time warnings from ASP.NET Core APIs. To enable nullable reference types, you should add the following property to your project file:

<PropertyGroup>
    <Nullable>enable</Nullable>
</PropertyGroup>

In the following screenshot you'll see the build result before and after enabling nullable annotations:

null warnings on build

Actually, there is no new warning. It just shows a warning for the RequestId property in the ErrorViewModel because it might be null. After changing it to a nullable string, the warning disappears.

public class ErrorViewModel
{
    public string? RequestId { get; set; }

    public bool ShowRequestId => !string.IsNullOrEmpty(RequestId);
}

However, how can I try the changed APIs?

I need to have a look into the already mentioned GitHub Issue to choose an API to try.

I'm going with the Microsoft.AspNetCore.WebUtilities.QueryHelpers.ParseQuery method:

using Microsoft.AspNetCore.WebUtilities;

// ...

private void ParseQuery(string queryString)
{
    QueryHelpers.ParseQuery(queryString);
}

If you now set the queryString variable to null, you'll get yellow squiggles that tell you that queryString may be null:

null hints

You get the same message if you mark the input parameter with a nullable annotation:

private void ParseQuery(string? queryString)
{
    QueryHelpers.ParseQuery(queryString);
}

nullable hints

It's working, and it is quite cool to prevent null reference exceptions when working with ASP.NET Core APIs.

What's next?

In the next part, I'm going to look into the support for custom event arguments in Blazor in ASP.NET Core.

Golo Roden: The Luca app versus open source

The Luca app contains code from an open-source project but violated its license. The corresponding news dominated the headlines in recent weeks. But what exactly is open source, how does it differ from free software, and what can be learned from open source?

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Input ElementReference in Blazor

This is the fifth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the input ElementReference in Blazor that is exposed on relevant components.

Microsoft now exposes the ElementReference of the underlying HTML input element on the Blazor input components. This affects the following components: InputCheckbox, InputDate, InputFile, InputNumber, InputSelect, InputText, and InputTextArea.

Exploring the ElementReference

To test it, I created a Blazor Server project using the dotnet CLI:

dotnet new blazorserver -n ElementReferenceDemo -o ElementReferenceDemo

cd into the project folder and call dotnet watch.

I will reuse the Index.razor to try the form ElementReference:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

At first, add the following code block at the end of the file:

@code{
    Person person = new Person{
      FirstName = "John",
      LastName = "Doe"
    };

    InputText firstNameReference;
    InputText lastNameReference;

    public class Person
    {
        public string FirstName { get; set; }

        public string LastName { get; set; }
    }
}

This creates a Person type and initializes it. We will use it later as a model in the EditForm. There are also two variables added that will reference the actual InputText elements in the form. We will add some more code later on, but let's add the form first:

<EditForm Model=@person>
    <InputText @bind-Value="person.FirstName" @ref="firstNameReference" /><br>
    <InputText @bind-Value="person.LastName" @ref="lastNameReference" /><br>

    <input type="submit" value="Submit" class="btn btn-primary" /><br>
    
    <input type="button" value="Focus FirstName" class="btn btn-secondary" 
        @onclick="HandleFocusFirstName" />
    <input type="button" value="Focus LastName" class="btn btn-secondary" 
        @onclick="HandleFocusLastName" />
</EditForm>

This form has the person object assigned as a model. It contains two InputText elements, the default input button as well as two input buttons that will be used to test the ElementReference.

The reference variables are assigned to the @ref attribute of the InputText elements. We will use these variables later on.

The buttons have @onclick methods assigned that we need to add to the code section:

private async Task HandleFocusFirstName()
{
}

private async Task HandleFocusLastName()
{
}

As described by Microsoft, the input elements now expose the ElementReference. This can be used to set the focus on an element. Add the following lines to focus the InputText elements:

private async Task HandleFocusFirstName()
{
   await firstNameReference.Element.Value.FocusAsync();
}

private async Task HandleFocusLastName()
{
   await lastNameReference.Element.Value.FocusAsync();
}

This might be pretty useful. Instead of playing around with JavaScript Interop, you can use C# completely.
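
For comparison, here is a rough sketch of how the same focus call looked with JS interop; focusElement is a hypothetical JavaScript helper you would have to register yourself:

@inject IJSRuntime JS

@code {
    private ElementReference firstNameInput; // set via @ref on a plain <input>

    private async Task FocusFirstNameViaJs()
    {
        // 'focusElement' is a hypothetical helper, e.g. registered in
        // _Host.cshtml as: window.focusElement = (el) => el.focus();
        await JS.InvokeVoidAsync("focusElement", firstNameInput);
    }
}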

On the other hand, it would be great if Microsoft exposed many more features via the ElementReference, instead of just focusing an element.

What's next?

In the next part, I'm going to look into the support for nullable reference type annotations in ASP.NET Core.

Norbert Eder: Learning Python #2: Installation / Tools

The second part of this series is about the installation. You can find the required setup at https://www.python.org/downloads/. All common operating systems are supported. I am installing Python 3.9.2 for Windows.

During the installation I choose the standard installation. Beforehand, I recommend having the installation path added to the PATH environment variable:

There are no further intermediate steps.

The setup ran successfully. Let's now start the console and run a first test to check whether everything actually worked:

D:\>python
Python 3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 23:44:55) [MSC v.1928 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

That looks good, and we are already in the Python shell.

Now a quick, simple test with the print command, which lets us write information to standard output:

>>> print('visit norberteder.com')
visit norberteder.com

Of course, you can also work with files. Next, we write that line of code into a Python file:

D:\>echo "print('visit norberteder.com') > hello.py
"print('visit norberteder.com') > hello.py

And run it:

D:\>python hello.py
visit norberteder.com

Tools

IDLE is installed together with Python. It is a simple tool for running the Python shell as well as for writing/editing and debugging Python files.

IDLE

For the first steps, IDLE is certainly sufficient; for larger projects I will use a different editor.

For private use and for the development of open-source software, JetBrains offers a free PyCharm Community Edition. Since I am already familiar with other language-specific editors from JetBrains, I will switch to PyCharm in the next parts of this series.

No other tools will be used for now. That may change over the course of the series – we will see :)

With that, the basics are covered and we have everything we need for now. The next part will be about the fundamentals of the language: which naming conventions exist, and how to define variables and functions.

In Learning Python #1: Getting started you will find a list of all available articles in my Python series. I am happy about your feedback, so that I can keep improving this series.

The post Learning Python #2: Installation / Tools first appeared on Norbert Eder.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - DynamicComponent in Blazor

This is the fourth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the DynamicComponent in Blazor.

What does Microsoft say about it?

DynamicComponent is a new built-in Blazor component that can be used to dynamically render a component specified by type.

That sounds nice. It is a component that dynamically renders any other component. Unfortunately, there is no documentation available yet, except a comment in the blog post. So let's create a small example:

Trying the DynamicComponent

To test it, I created a Blazor Server project using the dotnet CLI:

dotnet new blazorserver -n BlazorServerDemo -o BlazorServerDemo

cd into the project folder and call dotnet watch.

Now let's try the DynamicComponent on the index.razor:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

My idea is to render the SurveyPrompt component dynamically with a different title:

@code {
    Type someType = typeof(SurveyPrompt);
    Dictionary<string, object> myDictionaryOfParameters = new Dictionary<string, object>
    {
        { "Title", "Foo Bar" }
    };
}

<DynamicComponent Type="@someType" Parameters="@myDictionaryOfParameters" />

First, I needed to define the type of the component I want to render. Second, I needed to define the parameters I want to pass to that component. In this case, it is just the Title property.

DynamicComponent

Why could this be useful?

This is great in case you want to render components dynamically based on data inputs or whatever.

Think about a timeline of news, a newsfeed, or similar on a web page that can render different kinds of content like text, videos, or pictures. You can now just loop through the news list, render the DynamicComponent, and pass the type of the actual component to it, as well as the attribute values the component needs.
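
A rough sketch of that idea; NewsItem, TextNews, and VideoNews are hypothetical types used only for illustration:

@foreach (var item in newsItems)
{
    <DynamicComponent Type="@item.ComponentType" Parameters="@item.Parameters" />
}

@code {
    // Each news item carries the component type that renders it
    // and the parameters that component expects.
    record NewsItem(Type ComponentType, Dictionary<string, object> Parameters);

    List<NewsItem> newsItems = new()
    {
        new(typeof(TextNews), new() { { "Text", "Some breaking news" } }),
        new(typeof(VideoNews), new() { { "Url", "https://example.com/news.mp4" } })
    };
}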

What's next?

In the next part, I'm going to look into the support for the input ElementReference in Blazor.

Holger Schwichtenberg: Upcoming developer events: on-site and/or online

A list of upcoming developer events in the German-speaking region through May 2022.

Stefan Henneken: IEC 61131-3: Different versions of the same library in a TwinCAT project

Library placeholders make it possible to reference multiple versions of the same library in a PLC project. This can be helpful if a library has to be updated in an existing project because of new functions, but the update turns out to give an older FB a changed behavior.

The problem mentioned can be solved by including different versions of the same library in the project using placeholders. Placeholders for libraries are comparable to references: instead of adding libraries directly to a project, they are referenced indirectly via placeholders. Each placeholder is linked to a library, either to a specific version or in such a way that the latest library is always used. If libraries are added via the standard dialog, placeholders are always used automatically.

In the following short post, I want to show how to add several versions of the same library to a project. In our example, I will add two different versions of the Tc3_JsonXml library to a project. There are currently three different versions of the library on my computer.

V3.3.7.0 and V3.3.14.0 will be used in parallel in the example.

Open the dialog for adding a library and switch to the Advanced view.

Switch to the Placeholder tab and enter a unique name for the new placeholder.

Select the library that will be referenced by the placeholder. Either a specific version can be selected, or, using the ‘*’, always the latest version.

If you then select the placeholder in the project tree under References and switch to the properties window, the properties of the placeholder are displayed there.

The namespace still has to be adjusted here. The namespace is used later in the PLC program to address elements of the two libraries via different names. I presented the basic concept of namespaces in IEC 61131-3: Namespaces. I chose the same identifiers for the namespaces as for the placeholders.

After performing the same steps for the V3.3.14.0 version of the library, both placeholders should be available with a unique name and customized namespace.

The Library Manager, which is opened by double-clicking on References, provides a good overview.

Here you can clearly see how the placeholders are resolved. Usually, the placeholders have the same name as the libraries they reference. The ‘*’ means that the newest version of the library available on the development computer is always used. The right column shows the version referenced by the placeholder. For the two placeholders of the Tc3_JsonXml library, the names have been adapted.

FB_JsonSaxWriter will be used as an example in the PLC program. If the FB is specified without a namespace when the instance is declared,

PROGRAM MAIN
VAR
  fbJsonSaxWriter    : FB_JsonSaxWriter;
END_VAR

the compiler will output an error message:

The name FB_JsonSaxWriter cannot be uniquely resolved, because two different versions of the Tc3_JsonXml library (V3.3.7.0 and V3.3.14.0) are available in the project, and thus FB_JsonSaxWriter is contained twice in the project.

By using the namespaces, targeted access to the individual elements of the desired library is possible:

PROGRAM MAIN
VAR
  fbJsonSaxWriter_Build7           : Tc3_JsonXml_Build7.FB_JsonSaxWriter;
  fbJsonSaxWriter_Build14          : Tc3_JsonXml_Build14.FB_JsonSaxWriter;
  sVersionBuild7, sVersionBuild14  : STRING;
END_VAR
 
fbJsonSaxWriter_Build7.AddBool(TRUE);
fbJsonSaxWriter_Build14.AddBool(FALSE);
 
sVersionBuild7 := Tc3_JsonXml_Build7.stLibVersion_Tc3_JsonXml.sVersion;
sVersionBuild14 := Tc3_JsonXml_Build14.stLibVersion_Tc3_JsonXml.sVersion;

In this short example, the current version number is also read out via the global structure that is contained in every library.

Both libraries can now be used in parallel in the same PLC project. However, it must be ensured that both libraries are available in exactly the required versions (V3.3.7.0 and V3.3.14.0) on the development computer.

Norbert Eder: Learning Python #1: Getting started

You want to learn Python, just like me? Then walk this path together with me. In this article series I will share my questions, answers, and insights with you.

To start, here is a small list of information sources that I picked out in advance and that I will be using going forward. I am happy about pointers to other interesting websites, books, and videos.

At the end of this article you will find a list of all articles, which will be extended continuously and for now only contains this one – but hopefully keeps growing.

Motivation

Lately I have increasingly been coming into contact with topics that practically scream for Python, first and foremost machine learning.

In addition, my kids want to get started with software development. Python is perfectly suited for tinkering with the Raspberry Pi and the like. It is also said to be easy to learn, which is an advantage especially for beginners.

Books

The book Einstieg in Python serves as my foundation and guide.

You can find more books with good reviews here:

Even if this makes me seem a bit old-fashioned, I still like to rely on books for the structured learning of a new language. But of course everyone can handle that however they like.

Links

Over time, many helpful links will certainly accumulate. For now, however, I am sticking to the official websites for the necessary downloads and information:

At https://www.python.org/ you will also find a lot of learning material, videos, and of course a large community.

At https://github.com/python you will find all the official repositories. Of course, you can also take a look at what else exists on GitHub around Python.

Videos

If you prefer watching videos, you will find plenty on the topic, especially on YouTube. This 29-part series on Python 3 looks quite good:

Python series

  1. Learning Python #1: Getting started [this article]
  2. Learning Python #2: Installation and setup
  3. Learning Python #3: Fundamentals of the language (naming conventions, declaring variables, data types, functions, etc.)
  4. Learning Python #4: Modules and structuring projects
  5. Learning Python #5: A first example

More articles will follow shortly; I will keep extending this list.

I am happy about any feedback, support, wishes, questions, and the like. Please leave me a comment here or get in touch via the contact form.


Jürgen Gutsch: ASP.​NET Core in .NET 6 - Support for IAsyncDisposable in MVC

This is the third part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the Support for IAsyncDisposable in MVC.

IAsyncDisposable has been a thing since .NET Core 3.0. If I remember correctly, we got it together with async streams, to release those kinds of streams asynchronously. Now MVC supports this interface as well, and you can use it anywhere in your code, on controllers, classes, etc., to release async resources.

When should I use IAsyncDisposable?

When you work with asynchronous enumerators, like in async streams, and when you work with instances of unmanaged resources which need resource-intensive I/O operations to be released.

When implementing this interface, you can use the DisposeAsync method to release those kinds of resources.

Let's try it

Let's assume we have a controller that creates and uses a Utf8JsonWriter, which is an IAsyncDisposable resource as well:

public class HomeController : Controller, IAsyncDisposable
{
    private Utf8JsonWriter _jsonWriter;

    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
        _jsonWriter = new Utf8JsonWriter(new MemoryStream());
    }
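    // ... (DisposeAsync and DisposeAsyncCore, shown below, complete the class)
}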

The interface requires us to implement the DisposeAsync method. This should be done like this:

public async ValueTask DisposeAsync()
{
    // Perform async cleanup.
    await DisposeAsyncCore();
    // Dispose of unmanaged resources.
    Dispose(false);
    // Suppress finalization; cleanup is already done.
    GC.SuppressFinalize(this);
}

This is a higher-level method that calls a DisposeAsyncCore method which actually does the async cleanup. It also calls the regular Dispose method with false to release unmanaged resources, and it tells the garbage collector that it doesn't need to call the finalizer anymore, since the cleanup is already done at this point.

This requires us to add another method called DisposeAsyncCore():

protected virtual async ValueTask DisposeAsyncCore()
{
    if (_jsonWriter is not null)
    {
        await _jsonWriter.DisposeAsync();
    }

    _jsonWriter = null;
}

This will actually dispose the async resource.
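
The synchronous part of the dispose pattern isn't shown in the snippets above. Since the MVC Controller base class already implements IDisposable and exposes a protected virtual Dispose(bool), a minimal sketch for this controller could look like this (an assumption following the common dispose pattern, using the _jsonWriter field from above):

protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        // Release managed resources synchronously, in case the
        // synchronous dispose path is taken instead of DisposeAsync.
        _jsonWriter?.Dispose();
        _jsonWriter = null;
    }
    base.Dispose(disposing);
}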

Further reading

Microsoft has some really detailed docs about it.

What's next?

In the next part, I'm going to look into the support for DynamicComponent in Blazor.

Holger Schwichtenberg: Console Window and Windows Window in a .NET 5.0 App

To be able to use Windows Forms or Windows Presentation Foundation (WPF) in a console application, a special setting is necessary.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Update on dotnet watch

This is the second part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look into the updates on dotnet watch. The announcement post from February 17th mentioned that dotnet watch now does dotnet watch run by default.

Actually, this doesn't work in preview 1 because the feature accidentally didn't make it into this release: https://github.com/dotnet/aspnetcore/issues/30470

BTW: this feature isn't mentioned anymore. The team changed the post and didn't add it to the preview 2 announcement either.

The idea is to just use dotnet watch without specifying the run command that should be executed after a file is changed. run is now the default command:

dotnetwatch.png

This is just a small thing but might save some time.
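
Once the feature is in place, the two commands below should behave the same; a small sketch based on the announcement (in preview 1 you still need the explicit run):

# explicit command, works today
dotnet watch run

# implicit, run as the default command
dotnet watch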

What's next?

In the next part, I'm going to look into the support for IAsyncDisposable in MVC.

Golo Roden: Writing Tests: the Basics

Tests are an essential building block for high-quality and sustainable software development. But why exactly are tests so important, what are their advantages, and which reasons speak for using them? Couldn't you alternatively test by hand?

Jürgen Gutsch: How to suppress dotnet watch run from opening a browser

An interesting question on Twitter led me to write this small post. The question was how to suppress opening a browser when you run dotnet watch run.

The thing is that you might not want to open a browser when you run dotnet watch run on a Web API project. Since Web API projects have Swagger enabled by default, opening a browser might make sense, but often you just want to run your backend project while your frontend project is open in a browser, or whatever frontend you have.

Using an environment variable

There are two options to change that behavior. You can set an environment variable, which sets the behavior globally or in a console session:

SET DOTNET_WATCH_SUPPRESS_LAUNCH_BROWSER=1

This will override the default behavior.
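
The SET syntax above is for the Windows command prompt. On PowerShell or on Linux/macOS shells, the equivalents would be the following (a sketch assuming the usual shell conventions):

# PowerShell
$env:DOTNET_WATCH_SUPPRESS_LAUNCH_BROWSER = "1"

# bash/zsh
export DOTNET_WATCH_SUPPRESS_LAUNCH_BROWSER=1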

Using the launchSettings.json

The better option is to change it per project for all the projects where you want to suppress it. This can be done in the launchSettings.json that you will find in the Properties folder of each project. The launchSettings.json contains iisSettings and two or more profiles that configure how the application will be launched:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:32265",
      "sslPort": 44369
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "MyProject": {
      "commandName": "Project",
      "dotnetRunMessages": "true",
      "launchBrowser": true, 
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

The launchBrowser property in the profiles defines whether the browser should be opened or not. Set it to false in case you want to suppress it.
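
For the MyProject profile shown above, the relevant change is just this one flag; a minimal sketch of the modified profile:

"MyProject": {
  "commandName": "Project",
  "dotnetRunMessages": "true",
  "launchBrowser": false,
  "launchUrl": "swagger",
  "applicationUrl": "https://localhost:5001;http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}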

In case you set the environment variable, it will override the setting in the launchSettings.json.

Conclusion

In many cases you might want to see the Swagger UI in your browser to test your API, but there are cases as well where you just want to spin up your backend and work on your frontend using the running API.

Albert Weinert: AUA: Gregor and Albert Live on Air on March 19, 2021

Welcome to the big Ask Us Anything session on Gregor Biswanger's Twitch channel. On March 19, 2021, starting at 8:30 p.m., Gregor and I will put our time, our knowledge, and our cluelessness at your disposal. We will solve your programming challenges.

Everything around .NET, ASP.NET Core, JavaScript, databases, the weather, architecture, ideas, tooling, testing, security, shoe sizes, Git, or whatever. Ask whatever you want, stay friendly, and we will decide spontaneously whether we tackle the problem live. We guarantee nothing, not even the solution. We are hoping for a nice and entertaining evening with lots of participation.

Gregor Biswanger
Gregor is a current Microsoft MVP who prefers to develop on Windows with Visual Studio and Visual Studio Code. He does things with Angular, .NET, MongoDB, and much more.
Albert Weinert
Albert is a former Microsoft MVP who prefers to develop on the Mac with Rider and WebStorm. He does things with Vue, .NET, IdentityServer, and much more.

Cliché Alert

So come in numbers. When in doubt, we know nothing. The evening is free of charge, but certainly not for nothing. If you are not there, you will miss it. But you can watch it later on YouTube, without being able to ask your own questions.

Holger Schwichtenberg: Blazor WebAssembly Tutorial Now Complete

The five-part tutorial on web programming with Blazor is now fully available online, including the accompanying source code.

Golo Roden: RTFM #5: Structure and Interpretation of Computer Programs (SICP)

The RTFM series presents timeless and recommendable books for developers at irregular intervals. It primarily covers technical books, but occasionally novels are among them. Today it's about "Structure and Interpretation of Computer Programs" by Hal Abelson, Gerald Jay Sussman, and Julie Sussman.

Golo Roden: What Are Interfaces?

Interfaces are one of the most important constructs in programming for structuring code cleanly, which is why they also serve as the foundation for many design patterns. But what exactly are interfaces, and why are they so relevant?

Holger Schwichtenberg: Microsoft Is Building a New Upgrade Assistant from .NET Framework to .NET 5 and .NET 6

With the .NET Upgrade Assistant, Microsoft is making a new attempt at a tool that is meant to support developers in migrating from .NET Framework to .NET 5 and .NET 6.

Golo Roden: Database Types Compared

Relational databases used to be the measure of all things, but over the past 15 years numerous other database types have established themselves. How do they differ, and do relational databases still play a role at all today?

Jürgen Gutsch: Trying the REST Client extension for VSCode

I recently stumbled upon a tweet by Lars Richter, who mentioned and linked to a REST client extension for VSCode. I had a more detailed look and was pretty impressed by this extension.

I can now get rid of Fiddler and Postman.

Let's start at the beginning

The REST Client Extension for VSCode was developed by Huachao Mao from China. You will find the extension on the visual studio marketplace or in the extensions explorer in VS Code:

  • https://marketplace.visualstudio.com/items?itemName=humao.rest-client

If you follow this link, you will find a really great documentation about the extension, how it works, and how to use it. This also means this post is pretty useless, unless you want to read a quick overview ;-)

rest client extension

The source code of the REST Client extension is hosted on GitHub:

  • https://github.com/Huachao/vscode-restclient

This extension is actively maintained, has almost one and a half million installations, and an awesome rating (5.0 out of 5) from more than 250 people.

What does it solve?

Compared to Fiddler and Postman, it is absolutely minimalistic. There is no overloaded and full-blown UI. While Fiddler is completely overloaded but full of features, Postman's UI is nicer, easier, and more intuitive. The REST Client doesn't need a UI at all, except for the VSCode shell and a plain text editor.

While Fiddler and Postman cannot easily share the request configurations, the REST Client stores the request configurations in text files using the *.http or *.rest extension that can be committed to the source code repository and shared with the entire team.

Let's see how it works

To test it out in a demo, let's create a new Web API project, change to the project directory, and open VSCode:

dotnet new webapi -n RestClient -o RestClient
cd RestClient
code .

This project already contains a Web API controller. I'm going to use this for the first small test of the REST Client. I will create and use a more complex controller later in the blog post.

To have the *.http files in one place, I created an ApiTest folder and placed a WeatherForecast.http file in it. I'm not yet sure if it makes sense to put such files into the project, because these files won't go into production. I think, in a real-world project, I would place the files somewhere outside the actual project folder, but inside the source code repository. Let's keep it there for now:

http file

I already put the following line into that file:

GET https://localhost:5001/WeatherForecast/ HTTP/1.1

This is just a simple line of text in a plain text file with the file extension *.http but the REST Client extension does some cool magic with it while parsing it:

On the top border, you can see that the REST Client extension supports navigating the file structure. This is cool. Above the line, it also adds an actionable CodeLens link to send the configured request.

First, start the project by pressing F5 or by using dotnet run in the shell.

If the project is running, you can click the Send Request CodeLens link and see what happens.

result

It opens the response in a new tab group in VSCode and shows you the response headers as well as the response content.

A more complex sample

I created another API controller that handles persons. The PersonController uses GenFu to create fake users. The POST, PUT, and DELETE methods don't really do anything, but the controller is good enough for testing for now.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

using GenFu;

using RestClient.Models;

namespace RestClient.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class PersonController : ControllerBase
    {

        [HttpGet]
        public ActionResult<IEnumerable<Person>> Get()
        {
            return A.ListOf<Person>(15);
        }

        [HttpGet("{id:int}")]
        public ActionResult<Person> Get(int id)
        {
            var person = A.New<Person>(new Person { Id = id });
            return person;
        }

        [HttpPost]
        public ActionResult Post(Person person)
        {
            return Ok(person);
        }

        [HttpPut("{id:int}")]
        public ActionResult Put(int id, Person person)
        {
            return Ok(person);

        }

        [HttpDelete("{id:int}")]
        public ActionResult Delete(int id)
        {
            return Ok(id);
        }
    }
}

The Person model is simple:

namespace RestClient.Models
{
    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
        public string Telephone { get; set; }
        public string Street { get; set; }
        public string Zip { get; set; }
        public string City { get; set; }
    }
}

If you now start the project you will see the new endpoints in the Swagger UI that is already configured in the Web API project. Call the following URL to see the Swagger UI: https://localhost:5001/swagger/index.html

swaggerui

The Swagger UI will help you to configure the REST Client files.

Ok. Let's start. I created a new file called Person.http in the ApiTest folder. You can add more than one REST Client request configuration to a single file.

We don't need the Swagger UI for the two GET endpoints and the DELETE endpoint, since they are the easy ones and look the same as in the WeatherForecast.http:

GET https://localhost:5001/Person/ HTTP/1.1

###

GET https://localhost:5001/Person/2 HTTP/1.1

### 

DELETE https://localhost:5001/Person/2 HTTP/1.1

The POST request is just a little more complex.

If you now open the POST /Person section in the Swagger UI and try the request, you'll get all the information you need for the REST Client:

swagger details

In the http file it will look like this:

POST https://localhost:5001/Person/ HTTP/1.1
content-type: application/json

{
  "id": 0,
  "firstName": "Juergen",
  "lastName": "Gutsch",
  "email": "juergen@example.com",
  "telephone": "08150815",
  "street": "Mainstr. 2",
  "zip": "12345",
  "city": "Smallville"
}

You can do the same with the PUT request:

PUT https://localhost:5001/Person/2 HTTP/1.1
content-type: application/json

{
  "id": 2,
  "firstName": "Juergen",
  "lastName": "Gutsch",
  "email": "juergen@example.com",
  "telephone": "08150815",
  "street": "Mainstr. 2",
  "zip": "12345",
  "city": "Smallville"
}

This is how it looks in VSCode, if you click the CodeLens link for the GET request:

results

You are now able to test all the API endpoints this way.

Conclusion

Actually, it is not only about REST. You can test any kind of HTTP request this way. You can even send binary data, like images, to your endpoint.
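
For example, sending a file as the request body works with the extension's file-reference syntax. A sketch assuming a hypothetical /upload endpoint (the demo project doesn't have one):

POST https://localhost:5001/upload HTTP/1.1
content-type: image/png

< ./my-image.png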

This is a really great extension for VSCode, and I'm sure I will use Fiddler or Postman only in environments where I don't have VSCode installed.

Stefan Henneken: IEC 61131-3: Different Versions of the Same Library in a TwinCAT Project

Library placeholders make it possible to reference several versions of the same library in a PLC project. This can be helpful if a library in an existing project is to be updated because of new functions, but the update turns out to change the behavior of an older FB.

The problem can be solved by including different versions of the same library in the project via placeholders. Placeholders for libraries are comparable to references. Instead of adding libraries directly to a project, they are referenced indirectly via placeholders. Each placeholder is linked to a library, either to a specific version or in such a way that the latest library is always used. If libraries are added via the standard dialog, placeholders are always used automatically.

In the following short post, I want to show how several versions of the same library can be included in one project. In our example, I will add two different versions of the Tc3_JsonXml library to a project. On my machine, three different versions of the library are currently available.

In the example, V3.3.7.0 and V3.3.14.0 are to be used in parallel.

Open the dialog for adding a library and then switch to the advanced view.

Switch to the Placeholder section and enter a unique name for the new placeholder.

Select the library that the placeholder should reference. Either a specific version can be selected or, via the '*', always the latest version.

If you then select the placeholder in the project tree under References and switch to the properties window, the properties of the placeholder are displayed there.

Here, the namespace still has to be adjusted. The namespace is used later in the PLC program and serves to address elements of both libraries via different identifiers. I presented the basic concept of namespaces in IEC 61131-3: Namespaces. For the namespaces, I chose the same identifiers as for the placeholders.

After the same steps have also been carried out for version V3.3.14.0 of the library, both placeholders should be available, each with a unique name and an adjusted namespace.

The library manager, which is opened by double-clicking References, provides a good overview.

Here you can see clearly how the placeholders are resolved. As a rule, the placeholders have the same name as the libraries they refer to. The '*' means that the latest version of the library available on the development computer is always used. The right-hand column shows the version the placeholder refers to. For the two placeholders of the Tc3_JsonXml library, the names of the placeholders were adjusted.

As an example, FB_JsonSaxWriter is to be used in the PLC program. If the FB is specified without a namespace when declaring the instance,

PROGRAM MAIN
VAR
  fbJsonSaxWriter    : FB_JsonSaxWriter;
END_VAR

the compiler outputs an error message:

The name FB_JsonSaxWriter cannot be uniquely resolved, because two different versions of the Tc3_JsonXml library (V3.3.7.0 and V3.3.14.0) are present in the project. Thus, FB_JsonSaxWriter is also contained twice in the project.

By using the namespaces, targeted access to the individual elements of the desired library is possible:

PROGRAM MAIN
VAR
  fbJsonSaxWriter_Build7           : Tc3_JsonXml_Build7.FB_JsonSaxWriter;
  fbJsonSaxWriter_Build14          : Tc3_JsonXml_Build14.FB_JsonSaxWriter;
  sVersionBuild7, sVersionBuild14  : STRING;
END_VAR

fbJsonSaxWriter_Build7.AddBool(TRUE);
fbJsonSaxWriter_Build14.AddBool(FALSE);

sVersionBuild7 := Tc3_JsonXml_Build7.stLibVersion_Tc3_JsonXml.sVersion;
sVersionBuild14 := Tc3_JsonXml_Build14.stLibVersion_Tc3_JsonXml.sVersion;

Furthermore, in this short example, the current version number is read out via a global structure that is contained in every library.

Both libraries can now be used in parallel in the same PLC project. However, it must be ensured that both libraries are present on the development computer in exactly the required versions (V3.3.7.0 and V3.3.14.0).

Golo Roden: RTFM #4: Common Lisp

The RTFM series presents timeless and recommendable books for developers at irregular intervals. It primarily covers technical books, but occasionally novels are among them. Today it's about "Common Lisp: A Gentle Introduction to Symbolic Computation" by David S. Touretzky.

Golo Roden: Algorithms for Artificial Intelligence

In the field of artificial intelligence (AI), there are numerous algorithms for all kinds of problems. Which fundamental algorithms should you be able to place in this context?

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Overview

.NET 5 was released just about 3 months ago, and Microsoft announced the first preview of .NET 6 last week. This is really fast. Actually, they already started working on .NET 6 before version 5 was released. But it is cool anyway to have a preview available to start playing around with. Also, the ASP.NET team wrote a new blog post. It is about the ASP.NET Core updates in .NET 6.

I will take the chance to have a more detailed look into the updates and the new features. I'm going to start a series about those updates and features. This is also a chance to learn what I need to rewrite, if I need to update my book that was recently published by Packt.

Install .NET 6 preview

First, I'm going to download the .NET 6 preview from https://dotnet.microsoft.com/download/dotnet/6.0 and install it on my machine.

download.png

I chose the x64 installer for Windows and started the installation:

install01.png

After the installation is done, the new SDK is available. Type dotnet --info in a terminal:

dotnetinfo.png

Be careful

Since I didn't add a global.json yet, the .NET 6 preview is the default SDK. This means I need to be careful if I want to create a .NET 5 project. I need to add a global.json every time I want to create a .NET 5 project:

dotnet new globaljson --sdk-version 5.0.103

This creates a small JSON file that contains the SDK version number in the current folder.

{
  "sdk": {
    "version": "5.0.103"
  }
}

Now this folder and all its subfolders will use this SDK version.
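
To quickly verify which SDK is picked up in that folder, you can ask the CLI; with the global.json above, it should print the pinned version:

dotnet --version
5.0.103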

Series posts

This series will start with the following topics:

Preview 1

ASP.NET Core Updates in .NET 6 preview 1

Preview 2

ASP.NET Core Updates In .NET 6 preview 2

Preview 3

ASP.NET Core updates in .NET 6 Preview 3

Preview 4

https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-6-preview-4/

  • Introducing minimal APIs
  • Async streaming
  • HTTP logging middleware
  • Use Kestrel for the default launch profile in new projects
  • IConnectionSocketFeature
  • Improved single-page app (SPA) templates
  • .NET Hot Reload updates
  • Generic type constraints in Razor components
  • Blazor error boundaries
  • Blazor WebAssembly ahead-of-time (AOT) compilation

Preview 5

https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-6-preview-5/

  • .NET Hot Reload updates for dotnet watch
  • ASP.NET Core SPA templates updated to Angular 11 and React 17
  • Use Razor syntax in SVG foreignObject elements
  • Specify null for Action and RenderFragment component parameters
  • Reduced Blazor WebAssembly download size with runtime relinking
  • Configurable buffer threshold before writing to disk in Json.NET output formatter
  • Subcategories for better filtering of Kestrel logs
  • Faster get and set for HTTP headers
  • Configurable unconsumed incoming buffer size for IIS

(I will update this list as soon as I add a new post or as soon as Microsoft adds a new release.)

Christian Dennig [MS]: Getting started with KrakenD on Kubernetes / AKS

If you develop applications in a cloud-native environment and, for example, rely on the “microservices” architecture pattern, you will sooner or later have to deal with the topic of “API gateways”. There is a wide range of offerings available “in the wild”, both as managed versions from various cloud providers and from the open source domain. Many people think of the well-known OSS projects such as “Kong”, “tyk” or “gloo” when it comes to API gateways. The same was true for me. However, when I took a closer look at those projects, I wasn’t always satisfied with the feature set. I was always looking for a product that can be hosted in your Kubernetes cluster, is flexible and easy to configure (“desired state”), and offers good performance. During my work as a cloud solution architect at Microsoft, I became aware of the OSS API gateway “KrakenD” during a project about 1.5 years ago.

KrakenD API Gateway

krakend logo
KrakenD logo

KrakenD is an API gateway implemented in Go that relies on the ultra-fast GIN framework under the hood. It offers an incredible number of features out of the box that can be used to implement just about any gateway requirement:

  • request proxying and aggregation (merge multiple responses)
  • decoding (from JSON, XML…)
  • filtering (allow- and block-lists)
  • request & response transformation
  • caching
  • circuit breaker pattern via configuration, timeouts…
  • protocol translation
  • JWT validation / signing
  • SSL
  • OAuth2
  • Prometheus/OpenCensus integration

As you can see, this is quite an extensive list of features, which is nevertheless far from being “complete”. On their homepage and in the documentation, you can find much more information about what the product offers in its entirety.

The creators also recently published an Azure Marketplace offer, a container image that you can directly push/integrate into your Azure Container Registry. So I thought it is an appropriate time to publish a blog post about how to get started with KrakenD on Azure Kubernetes Service (AKS).

Getting Started with KrakenD on AKS

Ok, let’s get started then. First, we need a Kubernetes cluster on which we can roll out a sample application that we want to expose via KrakenD. So, as with all Azure deployments, let’s start with a resource group and then add a corresponding AKS service. We will be using the Azure Command Line Interface for this, but you can also create the cluster via the Azure Portal.

# create an Azure resource group

$ az group create --name krakend-aks-rg \
   --location westeurope

{
  "id": "/subscriptions/xxx/resourceGroups/krakend-aks-rg",
  "location": "westeurope",
  "managedBy": null,
  "name": "krakend-aks-rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

# create a Kubernetes cluster

$ az aks create -g krakend-aks-rg \
   -n krakend-aks \
   --enable-managed-identity \
   --generate-ssh-keys

After a few minutes, the cluster has been created and we can download the access credentials to our workstation.

$ az aks get-credentials -g krakend-aks-rg \
   -n krakend-aks 

# in case you don't have kubectl on your 
# machine, there's a handy installer coming with 
# the Azure CLI:

$ az aks install-cli

Let’s check, if we have access to the cluster…

$ kubectl get nodes

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-34625029-vmss000000   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000001   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000002   Ready    agent   24h   v1.18.14

Looks great and we are all set from an infrastructure perspective. Let’s add a service that we can expose via KrakenD.

Add a sample service

We are now going to deploy a very simple service implemented in .NET Core that is capable of creating/storing “contact” objects in a MS SQL Server 2019 (Linux) instance that is running, for convenience reasons, on the same Kubernetes cluster as a single container/pod. After the services have been deployed, the in-cluster situation looks like this:

In-cluster architecture without KrakenD

Let’s deploy everything. First, the MS SQL server with its service definition:

# content of sql-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: 'Developer'
            - name: ACCEPT_EULA
              value: 'Y'
            - name: SA_PASSWORD
              value: 'Ch@ngeMe!23'
---
apiVersion: v1
kind: Service
metadata:
  name: mssqlsvr
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: ClusterIP

Create a file called sql-server.yaml and apply it to the cluster.

$ kubectl apply -f sql-server.yaml

deployment.apps/mssql-deployment created
service/mssqlsvr created

Second, the contacts API plus a service definition:

# content of contacts-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ca-deploy
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: scmcontacts
      service: contactsapi
  template:
    metadata:
      labels:
        application: scmcontacts
        service: contactsapi
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
              cpu: '500m'
          image: ghcr.io/azuredevcollege/adc-contacts-api:3.0
          env:
            - name: ConnectionStrings__DefaultConnectionString
              value: "Server=tcp:mssqlsvr,1433;Initial Catalog=scmcontactsdb;Persist Security Info=False;User ID=sa;Password=Ch@ngeMe!23;MultipleActiveResultSets=False;Encrypt=False;TrustServerCertificate=True;Connection Timeout=30;"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contacts
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  type: ClusterIP
  selector:
    application: scmcontacts
    service: contactsapi
  ports:
    - port: 8080
      targetPort: 5000

Create a file called contacts-app.yaml and apply it to the cluster.

$ kubectl apply -f contacts-app.yaml

deployment.apps/ca-deploy created
service/contacts created

To check whether the contacts pods can communicate with the MSSQL server, let’s quickly spin up an interactive pod and issue a few requests from within the cluster. As you can see in the YAML manifests, the services have been added as type ClusterIP, which means they don’t get an external IP address. Exposing the contacts service to the public will be the responsibility of KrakenD.

$ kubectl run -it --rm --image csaocpger/httpie:1.0 http --restart Never -- /bin/sh
If you don't see a command prompt, try pressing enter.

$ echo '{"firstname": "Satya", "lastname": "Nadella", "email": "satya@microsoft.com", "company": "Microsoft", "avatarLocation": "", "phone": "+1 32 6546 6545", "mobile": "+1 32 6546 6542", "description": "CEO of Microsoft", "street": "Street", "houseNumber": "1", "city": "Redmond", "postalCode": "123456", "country": "USA"}' | http POST http://contacts:8080/api/contacts

HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 10:58:57 GMT
Location: http://contacts:8080/api/contacts/ee176782-a767-45ad-a7df-dbcefef22688
Server: Kestrel
Transfer-Encoding: chunked

{
    "avatarLocation": "",
    "city": "Redmond",
    "company": "Microsoft",
    "country": "USA",
    "description": "CEO of Microsoft",
    "email": "satya@microsoft.com",
    "firstname": "Satya",
    "houseNumber": "1",
    "id": "ee176782-a767-45ad-a7df-dbcefef22688",
    "lastname": "Nadella",
    "mobile": "+1 32 6546 6542",
    "phone": "+1 32 6546 6545",
    "postalCode": "123456",
    "street": "Street"
}

$ http GET http://contacts:8080/api/contacts
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 11:00:07 GMT
Server: Kestrel
Transfer-Encoding: chunked

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]

As you can see, we can create new contacts by POSTing a JSON payload to the endpoint http://contacts:8080/api/contacts (first request) and also retrieve what has been added to the database by GETting data from the http://contacts:8080/api/contacts endpoint (second request).

Create a KrakenD Configuration

So far, everything works as expected, and we have a working API in the cluster that is storing its data in a MSSQL server. As discussed in the previous section, we did not expose the contacts service to the internet on purpose. We will do this later by adding KrakenD in front of that service, giving the API gateway a public IP so that it is externally reachable.

But first, we need to create a KrakenD configuration (a plain JSON file) where we configure the endpoints, backend services, how requests should be routed, and so on. Fortunately, KrakenD has a very easy-to-use designer that gives you a head start when creating that configuration file: it’s simply called the KrakenDesigner.

kraken designer
KrakenDesigner – sample service
kraken designer logging config
KrakenDesigner – logging configuration

When creating such a configuration, it comes down to these simple steps:

  1. Adjust “common” configuration for KrakenD like service name, port, CORS, exposed/allowed headers etc.
  2. Add backend services, in our case just the Kubernetes service for our contacts API (http://contacts:8080)
  3. Expose endpoints (/contacts) at the gateway and define which backend to route them to (http://contacts:8080/api/contacts). Here you can also define if a JWT token should be validated, which headers to pass to the backend, etc. A lot of options, which we obviously don’t need in our simple setup.
  4. Add the logging configuration. It’s optional, but you should do it. We simply enable stdout logging, but you can also use OpenCensus and, for example, even expose metrics to a Prometheus instance (nice!).

As a last step, you can export the configuration you have created in the UI to a JSON file. For our sample here, this file looks like this:

{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }

We simply expose two endpoints, one that lets us create (POST) contacts and one that retrieves (GET) all contacts from the database, so basically the same sample we did when calling the contacts service from within the cluster.

Save that file above to your local machine (name it krakend.json) as we need to add it later to Kubernetes as a ConfigMap.

Add the KrakenD API Gateway

So, now we are ready to deploy KrakenD to the cluster: we have an API that we want to expose and we have the KrakenD configuration. To dynamically add the configuration (krakend.json) to our running KrakenD instance, we will use a Kubernetes ConfigMap object. This gives us the ability to decouple configuration from our KrakenD application instance/pod – if you are not familiar with the concepts, have a look at the official documentation here.

During the startup of KrakenD we will then use this ConfigMap and mount the content of it (krakend.json file) into the container (folder /etc/krakend) so that the KrakenD process can pick it up and apply the configuration.

In the folder where you saved the config file, issue the following commands:

$ kubectl create configmap krakend-cfg --from-file=./krakend.json

configmap/krakend-cfg created

# check the contents of the configmap

$ kubectl describe configmap krakend-cfg

Name:         krakend-cfg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
krakend.json:
----
{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }

Events:  <none>

That looks great. We are finally ready to spin up KrakenD in the cluster. We therefore apply the following Kubernetes manifest file, which creates a deployment and a Kubernetes service of type LoadBalancer, which gives us a public IP address for KrakenD via the Azure load balancer.

# content of api-gateway.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: krakend-deploy
  labels:
    application: apigateway
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: apigateway
  template:
    metadata:
      labels:
        application: apigateway
    spec:
      automountServiceAccountToken: false
      volumes:
        - name: krakend-cfg
          configMap:
            name: krakend-cfg
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '1024Mi'
              cpu: '1000m'
          image: devopsfaith/krakend:1.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          volumeMounts:
          - name: krakend-cfg
            mountPath: /etc/krakend

---
apiVersion: v1
kind: Service
metadata:
  name: apigateway
  labels:
    application: apigateway
spec:
  type: LoadBalancer
  selector:
    application: apigateway
  ports:
    - port: 8080
      targetPort: 8080

Let me highlight the two important parts here that mount the configuration file into our pod. First, we create a volume named krakend-cfg (under spec.template.spec.volumes) referencing the ConfigMap we created before, and second, we mount that volume into the container via volumeMounts (mountPath /etc/krakend).

Save the manifest file and apply it to the cluster.

$ kubectl apply -f api-gateway.yaml

deployment.apps/krakend-deploy created
service/apigateway created

The resulting architecture within the cluster is now as follows:

Architecture with krakend
Architecture with KrakenD API gateway

As a last step, we just need to retrieve the public IP of our “LoadBalancer” service.

$ kubectl get services

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
apigateway   LoadBalancer   10.0.26.150   104.45.73.37   8080:31552/TCP   4h53m
contacts     ClusterIP      10.0.155.35   <none>         8080/TCP         3h47m
kubernetes   ClusterIP      10.0.0.1      <none>         443/TCP          26h
mssqlsvr     ClusterIP      10.0.192.57   <none>         1433/TCP         3h59m

So, in our case here, we got 104.45.73.37. Let’s issue a few requests (either with a browser or a tool like httpie, which I use all the time) against the resulting URL http://104.45.73.37:8080/contacts.

$ http http://104.45.73.37:8080/contacts

HTTP/1.1 200 OK
Content-Length: 337
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 12:10:20 GMT
Server: Kestrel
Vary: Origin
X-Krakend: Version 1.2.0
X-Krakend-Completed: false

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]

Works like a charm! Also, have a look at the logs of the KrakenD container:

$ kubectl logs krakend-deploy-86c44c787d-qczjh -f=true

Parsing configuration file: /etc/krakend/krakend.json
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ ERROR unable to create the GELF writer: getting the extra config for the krakend-gelf module
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ INFO Listening on port: 8080
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN influxdb: unable to load custom config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN opencensus: no extra config defined for the opencensus module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN building the etcd client: unable to create the etcd client: no config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN bloomFilter: no config for the bloomfilter
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN no config present for the httpsecure module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.747 ▶ INFO registering usage stats for cluster ID '293C0vbu4hqE6jM0BsSNl/HCzaAKsvjhSbHtWo9Hacc='
[GIN] 2021/02/17 - 10:01:44 | 200 |    4.093438ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:46 | 200 |    5.397977ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:56 | 200 |    6.820172ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:57 | 200 |    5.911475ms |      10.244.1.1 | GET      "/contacts"

As mentioned before, KrakenD logs its events to stdout, and we can see how the requests are coming in, their destination, and the time each request needed to complete at the gateway level.

Wrap-Up

In this brief article, I showed you how you can deploy KrakenD to an AKS/Kubernetes cluster on Azure and how to set up a first, simple sample of exposing an API running in Kubernetes via the KrakenD API gateway. The project has so many useful features that this post only covers the very, very basic stuff. I really encourage you to have a look at the product when you consider hosting an API gateway within your Kubernetes cluster. The folks at KrakenD do a great job, are open, and accept pull requests if you want to contribute to the project.

As mentioned in the beginning of this article, they recently published a version of their KrakenD container image to the Azure Marketplace. This gives you the ability to directly push their current and future image to your own Azure Container Registry, enabling scenarios like static image scanning, Azure Security Center integration, geo-replication etc. You can find their offering here: KrakenD API Gateway

Hope you enjoyed this brief introduction…happy hacking, friends! 🖖

Golo Roden: Basic Terms of Artificial Intelligence

Artificial intelligence (AI) has been one of the most important topics of recent years. An at least basic understanding is therefore helpful to put certain topics into the right perspective. Which basic terms of artificial intelligence should you know?

Golo Roden: How to Estimate Effort

Every developer knows the challenge of estimating the effort of developing code. Very few like doing it. Why is estimating so unpopular, why is it necessary at all, and what should you pay attention to?

Golo Roden: RTFM #3: Game Engine Black Book: Doom

The RTFM series presents timeless and recommendable books for developers at irregular intervals. It primarily covers technical books, but occasionally novels are among them. Today it's about "Game Engine Black Book: Doom" by Fabien Sanglard.

Golo Roden: Five Measures for More Code Quality

Improving code quality is an important concern for many teams. There are a few fundamental measures that can be applied with relatively manageable effort. What are they?

Jürgen Gutsch: Working inside a Docker container using Visual Studio Code

As mentioned in the last post, I want to write about working remotely inside a Docker container. But first, we should get an idea of why we would ever want to work remotely inside a Docker container.

Why should I do that?

One of our customers is running an OpenShift/Kubernetes cluster and also likes to have the technology-specific development environments in a container that runs in Kubernetes. We had a NodeJS development container, a Python development container, and so on... All the containers had an SSH server installed, Git, the specific SDKs, and all the stuff that is needed to develop. Using VSCode we connected to the containers via SSH and developed inside the container.

Having the development environment in a container is one reason, maybe not the most popular one. But trying stuff inside a container because the local environment isn't the same makes a lot of sense. And debugging an application in a production-like environment makes absolute sense, too.

How does it work?

VSCode has a great set of tools to work remotely. I installed Remote WSL (used in the last post), Remote SSH was the one we used with OpenShift (maybe I will write about it, too), and in this post, I'm gonna use Remote Containers. All three of them will work inside the Remote Explorer within VSCode. All three add-ins work pretty similarly.

If the remote machine doesn't have the VSCode Server installed, the remote tool will install and start it. The VSCode Server is like a full VSCode without a user interface. It also needs to have the add-ins installed to work with the specific technologies. The local VSCode connects to the remote VSCode Server and mirrors it in the user interface of your locally installed VSCode. It is like a remote session to the other machine, but it feels local.

Setup the demo

I created a small ASP.NET Core MVC project:

dotnet new mvc -n RemoteDocker -o RemoteDocker
cd RemoteDocker

Then I added a Dockerfile to it:

FROM mcr.microsoft.com/dotnet/sdk:5.0

COPY . /app

WORKDIR /app

EXPOSE 5000 5001

# ENTRYPOINT ["dotnet", "run"] not needed to just work in the container

If you don't have the Docker extension installed, VSCode will ask you to install it as soon as you have the Dockerfile open. If it's installed, you can just right-click the Dockerfile in the VSCode Explorer and select "Build image...".

image-20210203220213602

This will prompt you for an image name. You can use the proposed name, which is "remotedocker:latest" in my case. It seems to use the project name or the folder name, which makes sense:

image-20210203220356005

Select the Docker tab in VSCode and you will find your newly built image in the list of images:

image-20210203220705183

You can now right-click the latest tag and choose "Run Interactive". If you just choose "Run", the container stops immediately, because we commented out the entry point; we need an interactive session. This will start up the container, and it will now appear as a running container in the container list:

image-20210203220954330
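
If you prefer the command line over the VSCode context menus, the same two steps can be done with the plain Docker CLI; a minimal sketch (the port mappings are an assumption matching the EXPOSE line in the Dockerfile):

# build the image from the folder containing the Dockerfile
docker build -t remotedocker:latest .

# start an interactive container from the image
docker run -it -p 5000:5000 -p 5001:5001 remotedocker:latest /bin/bash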

You can browse and open the files inside the container from this container list, but editing will not work. This is not what we want to do. We want to connect VSCode to this Docker container remotely.

Connecting to Docker

This can be done in two different ways:

  1. Just right-click the running container and choose "Attach Visual Studio Code"

image-20210203221956964

  2. Or select the Remote Explorer tab, ensure the Remote Containers add-in is selected in the upper-right dropdown box, and wait for the containers to load. If all the containers are visible, choose the one you want to connect to, right-click it, and choose "Attach to container" or "Attach in New Window". It does the same thing as the previous way.

image-20210203221546976

Now you have a VSCode instance open that is connected to the container. You can now see the files in the project, use the terminal inside the container, and edit the files inside the project.

image-20210203222354886

You can see that this is a different VSCode than your local instance by having a look at the tabs on the left side. Not all the add-ins are installed on that instance. In my case, the database tools are missing as well as the Kubernetes tools and some others.

Working inside the Container

Since we disabled the entry point in the Dockerfile, we are now able to start debugging by pressing F5.

image-20210204221414743

This also opens the local browser and shows the application that is running inside the container. This is really awesome. It really feels like local development:

image-20210204222047723

Let's change something to see that this is really working. Like in the last demo, I'm going to change the page title. I would like to see the name "Remote Docker demo":

image-20210204222347623

Just save and restart debugging in VSCode:

image-20210204222617177

That's it.

Conclusion

Isn't this cool?

You can easily start Docker containers to test, debug, and develop in a production-like environment. You can configure a production-like environment with all the Docker containers you need using docker-compose on your machine. Then add your development or testing container to the composition and start it all up. Now you can connect to this container and start playing around within this environment. It is all fast, accessible, and on your machine.

This is cool!

I'd like to see whether this also works if the containers are running on Azure. I will try it within the next few weeks, and maybe I can put the results into a new blog post.

Golo Roden: Service Models in the Cloud

The term "cloud" has long since entered everyday language, but only a few people can explain in detail what exactly it means. In fact, there is a definition by NIST that describes four service models. What are they?

Jürgen Gutsch: Finally - My first book got published

I always had the idea to write a book. Twelve or thirteen years ago, Stefan Falz told me not to do it, because it is a lot of effort and takes a lot of your time. Even if my book is just a small one and smaller than Stefan's books for sure, now I know what he meant, I guess :-)

How it started

My journey of writing a book started in fall 2018 when I started the "Customizing ASP.NET Core" series. A reader asked me to bundle the series as a book. I took my time to think about it and started to work on it in July 2019. The initial idea to use LeanPub and create the book the open source way was good, but there was no pressure, no timeline, and the project had a lower priority besides life and other stuff. The release of ASP.NET Core 5.0 was a good occasion to put some more pressure on it. From September last year on, I started to update all the contents and samples to ASP.NET Core 5.0. I also updated the text in a way that it matches a book more than a blog series.

Actually, my very first book is a compilation of the old blog series, updated to ASP.NET Core 5.0, and it includes an additional thirteenth chapter that wasn't part of the original series.

I was almost done by the end of October and ready to publish it around .NET Conf 2020, when .NET 5 and ASP.NET Core 5.0 were announced. Then I decided to try an experiment:

How it went

At that time, I was doing a technical review of a book about Blazor for Packt, and I decided to ask Packt whether my book was worth being published by them. They said yes and wanted to publish it. That was awesome. My idea was to improve the quality of the book, to have professional editors and reviewers, and, most importantly, to not do the publishing and the marketing by myself.

The downside of this decision: I wasn't able to publish the book around the .NET Conf 2020. Packt started to work on it and it was a really impressive experience:

  • An editor worked on it to make the texts more "booky" than "bloggy", and I had to review and rework some texts
  • A fellow MVP Toi B. Wright did the technical review, and I had a lot more to fix.
  • Another technical reviewer executed all the samples and snippets, and I had to fix some small issues.
  • A copy editor went through all the chapters and had feedback about formatting.
  • In the meanwhile I had to work on the front matter and the preface.

I had never thought about a foreword for my book until I worked on the preface. I didn't want to write the foreword myself and had the right person in mind.

I asked Damien Bowden, the smartest and coolest ASP.NET Core security guru I know. He is also a fellow MVP and a famous blogger. His posts get shared many times and are often mentioned in the ASP.NET Community Standup. It's always a pleasure to talk to him, and we had a lot of fun at the MVP summits in Redmond and Bellevue.

Thanks Damien for writing this awesome foreword :-)

How it is right now

Sure, my very first book is just a compilation of the old blog series, but updated to ASP.NET Core 5.0 and it includes an additional thirteenth chapter that wasn't part of the original series:

  1. Customizing Logging
  2. Customizing App Configuration
  3. Customizing Dependency Injection
  4. Configuring and Customizing HTTPS
  5. Using IHostedService and BackgroundService
  6. Writing Custom Middleware
  7. Content negotiation using custom OutputFormatter
  8. Managing inputs with custom ModelBinders
  9. Creating custom ActionFilter
  10. Creating custom TagHelpers
  11. Configuring WebHostBuilder
  12. Using different Hosting models
  13. Working with Endpoint Routing

This book also contains details about ASP.NET Core 3.1. I mention 3.1 whenever it differs from 5.0, because ASP.NET Core 3.1 is an LTS version and some companies will definitely stay on LTS.

Packt helped me to raise the quality of the contents, and it is now a compact cookbook with 13 recipes you should know about ASP.NET Core.

It is definitely a book for ASP.NET Core beginners who already know C# and the main concepts of ASP.NET Core.

Where to get it

Last Saturday, Packt published it on Amazon as a Kindle edition and as a paperback.

Damien, do you see your name below the title? ;-)

I guess it will be available on Packt as well soon, for those of you who have a Packt subscription.

It would be awesome if you dropped a review as soon as you've read it.

Thanks

I would like to say thanks to some persons, who helped me do this.

  • At first I say thanks to my family, friends, and colleagues who supported me and motivated me to finish the work.

  • I also say thanks to Packt. They did a great job supporting me and they added a lot more value to the book. I also like the cover design.

  • I say thanks again to Damien for that great foreword

  • Also thanks to the developer community and the readers of my blog, since this book is mainly powered by the community.

What's next?

My plan is to keep this book up-to-date. I will update the samples and concepts with every new major version.

For now, I will focus on my blog again. I've written almost nothing in the past six months. In any case, I already have an idea for another book :-)
