Code-Inside Blog: How to fix: 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine

In our product we can interact with different data sources, and one of these data sources was a Microsoft Access DB connected via OLEDB. This is really, really old, but it still works; however, on one customer machine we had this issue:

'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine

Solution

If you face this issue, you need to install the provider from here.

Be aware: If you have a different error, you might need to install the newer provider - it is labeled as “2010 Redistributable”, but still works with all those fancy Office 365 apps out there.

Important: You need to install the provider with the correct bitness, e.g. if your application runs as x64, install the x64.msi.
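
To illustrate the combination of provider and bitness, here is a minimal sketch (not from the original post; the .accdb path is made up): opening an Access database with the ACE provider from a small console app. On .NET Core/.NET 5+ the System.Data.OleDb NuGet package is needed; on .NET Framework the namespace is built in.

using System;
using System.Data.OleDb;

class Program
{
    static void Main()
    {
        // The bitness of this process must match the installed ACE provider (x86 vs. x64).
        Console.WriteLine($"Running as 64-bit process: {Environment.Is64BitProcess}");

        using var connection = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\example.accdb");

        // Throws the "provider is not registered" error if the matching ACE provider is missing.
        connection.Open();
        Console.WriteLine("ACE provider is available.");
    }
}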

The solution comes from this Stack Overflow question.

Helper

The best tip from Stack Overflow was these PowerShell commands to check whether the provider is there or not:

(New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION 

Get-OdbcDriver | select Name,Platform

This will return something like this:

PS C:\Users\muehsig> (New-Object system.data.oledb.oledbenumerator).GetElements() | select SOURCES_NAME, SOURCES_DESCRIPTION

SOURCES_NAME               SOURCES_DESCRIPTION
------------               -------------------
SQLOLEDB                   Microsoft OLE DB Provider for SQL Server
MSDataShape                MSDataShape
Microsoft.ACE.OLEDB.12.0   Microsoft Office 12.0 Access Database Engine OLE DB Provider
Microsoft.ACE.OLEDB.16.0   Microsoft Office 16.0 Access Database Engine OLE DB Provider
ADsDSOObject               OLE DB Provider for Microsoft Directory Services
Windows Search Data Source Microsoft OLE DB Provider for Search
MSDASQL                    Microsoft OLE DB Provider for ODBC Drivers
MSDASQL Enumerator         Microsoft OLE DB Enumerator for ODBC Drivers
SQLOLEDB Enumerator        Microsoft OLE DB Enumerator for SQL Server
MSDAOSP                    Microsoft OLE DB Simple Provider


PS C:\Users\muehsig> Get-OdbcDriver | select Name,Platform

Name                                                   Platform
----                                                   --------
Driver da Microsoft para arquivos texto (*.txt; *.csv) 32-bit
Driver do Microsoft Access (*.mdb)                     32-bit
Driver do Microsoft dBase (*.dbf)                      32-bit
Driver do Microsoft Excel(*.xls)                       32-bit
Driver do Microsoft Paradox (*.db )                    32-bit
Microsoft Access Driver (*.mdb)                        32-bit
Microsoft Access-Treiber (*.mdb)                       32-bit
Microsoft dBase Driver (*.dbf)                         32-bit
Microsoft dBase-Treiber (*.dbf)                        32-bit
Microsoft Excel Driver (*.xls)                         32-bit
Microsoft Excel-Treiber (*.xls)                        32-bit
Microsoft ODBC for Oracle                              32-bit
Microsoft Paradox Driver (*.db )                       32-bit
Microsoft Paradox-Treiber (*.db )                      32-bit
Microsoft Text Driver (*.txt; *.csv)                   32-bit
Microsoft Text-Treiber (*.txt; *.csv)                  32-bit
SQL Server                                             32-bit
ODBC Driver 17 for SQL Server                          32-bit
SQL Server                                             64-bit
ODBC Driver 17 for SQL Server                          64-bit
Microsoft Access Driver (*.mdb, *.accdb)               64-bit
Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb) 64-bit
Microsoft Access Text Driver (*.txt, *.csv)            64-bit

Hope this helps! (And I hope you don’t need to deal with these ancient technologies for too long 😅)

Sebastian Seidel: App Design in 3 steps - an appealing UI design for a better UX

Mobile apps thrive on an appealing and well-structured design that helps users achieve their goals. In this article, I'll show you how an old app design can be reworked to look modern and clean. You'll learn more about the design process and the design decisions made to provide users with a user-centric UI and UX.

Golo Roden: Jobs in IT: Job ads are all the same (and all equally bad)

Job ads usually resemble each other like two peas in a pod, instead of conveying that first individual impression one gets of a company.

Code-Inside Blog: Resource type is not supported in this subscription

I was playing around with some Visual Studio tooling and noticed this error during the creation of an “Azure Container Apps” app:

Resource type is not supported in this subscription


Solution

The solution is quite strange at first, but in the super-configurable world of Azure it makes sense: you need to activate the resource provider for this feature on your subscription. For Azure Container Apps you need the Microsoft.ContainerRegistry resource provider registered:

[Screenshot: registering the resource provider on the subscription in the Azure Portal]

It seems that you can create such resources via the portal, but if you go via the API (which Visual Studio seems to do), the provider needs to be registered first.

Some resource providers are “enabled by default”, other providers need to be turned on manually. Check out this list of all resource providers and the related Azure services.
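
If you prefer the command line over the portal, the registration can also be checked and performed with the Azure CLI. A small sketch (assuming the CLI is installed and the right subscription is selected; use the provider your resource actually needs):

az provider show --namespace Microsoft.ContainerRegistry --query registrationState
az provider register --namespace Microsoft.ContainerRegistry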

Be careful: I guess you should only enable the resource providers that you really need, otherwise your attack surface will get larger.

To be honest: this was completely new to me - I've been doing Azure for ages and never had to deal with resource providers. Always learning ;)

Hope this helps!

Holger Schwichtenberg: New in .NET 7 [8]: Static abstract properties & methods in interfaces in C#

In the new version of Microsoft's programming language C#, interface declarations of properties and methods may be declared as static abstract.

Stefan Henneken: IEC 61131-3: SOLID – The Open/Closed Principle

Inheritance is a popular way to reuse existing function blocks. It allows methods and properties to be added or existing methods to be overridden, without the need to have the source code of the base FB available. Designing software so that it can be extended without modifying the existing software is the basic idea of the Open/Closed Principle (OCP). However, using inheritance for this also has drawbacks. Using interfaces minimizes these drawbacks and offers additional advantages.

In other words: the behavior of software should be extensible without having to modify it. Based on the example from the previous posts, a function block is to be developed that manages sequences for controlling lamps. The function block is then extended with additional functions. This example is used to take a closer look at the basic idea of the Open/Closed Principle (OCP).

Starting situation

The central starting point is the function block FB_SequenceManager. It provides the individual steps of a sequence via the property aSequence. The list can be sorted by different criteria via the method Sort().

The property aSequence is an array and contains elements of type ST_SequenceItem.

PROPERTY PUBLIC aSequence : ARRAY [1..5] OF ST_SequenceItem

To keep the example manageable, fixed array bounds from 1 to 5 are used. The array elements are of type ST_SequenceItem and contain a unique id (nId), the output value (nValue) for the lamps, and the duration (nDuration) until switching to the next output value.

TYPE ST_SequenceItem :
STRUCT
  nId         : UINT;
  nValue      : USINT(0..100);
  nDuration   : UINT;
END_STRUCT
END_TYPE

For this example, all methods for editing the sequence have been omitted. However, the example contains the method Sort() for sorting the list by different criteria.

METHOD PUBLIC Sort
VAR_INPUT
  eSortedOrder  : E_SortedOrder;
END_VAR

The list can be sorted in ascending order by nId or nValue.

TYPE E_SortedOrder :
(
  Id,
  Value
);
END_TYPE

In the method Sort(), the input parameter eSortedOrder decides whether to sort by nId or by nValue.

CASE eSortedOrder OF
  E_SortedOrder.Id:
    // Sort the list by nId
    // …
  E_SortedOrder.Value:
    // Sort the list by nValue
    // …
END_CASE

The example is a simple monolithic application that can be created in a short time to meet the desired requirements.

The UML diagram clearly shows the monolithic structure of the application:

However, no thought was given to how much effort future extensions would require.

Sample 1 (TwinCAT 3.1.4024) on GitHub

Extension of the implementation

The application is to be extended so that the list can be sorted not only by nId and nValue, but also by nDuration. So far, the list has always been sorted in ascending order; sorting in descending order is also desired.

How can our example be adapted so that these two customer requests are fulfilled?

Approach 1: Quick & dirty

One approach is to simply extend the existing method Sort() so that it can also sort by nDuration. For this purpose, E_SortedOrder is extended by the value Duration.

TYPE E_SortedOrder :
(
  Id,
  Value,
  Duration
);
END_TYPE

In addition, a parameter is needed that specifies whether to sort in ascending or descending order:

TYPE E_SortedDirection :
(
  Ascending,
  Descending
);
END_TYPE

The method Sort() now has two parameters:

METHOD PUBLIC Sort
VAR_INPUT
  eSortedOrder      : E_SortedOrder;
  eSortedDirection  : E_SortedDirection;
END_VAR

The method Sort() now contains two nested CASE statements: the outer one selects the sort direction, the inner one selects the element to sort by.

CASE eSortedDirection OF
  E_SortedDirection.Ascending:
    CASE eSortedOrder OF
      E_SortedOrder.Id:
        // Sort the list by nId in ascending order
        // …
      E_SortedOrder.Value:
        // Sort the list by nValue in ascending order
        // …
      E_SortedOrder.Duration:
        // Sort the list by nDuration in ascending order
        // …
    END_CASE
  E_SortedDirection.Descending:
    CASE eSortedOrder OF
      E_SortedOrder.Id:
        // Sort the list by nId in descending order
        // …
      E_SortedOrder.Value:
        // Sort the list by nValue in descending order
        // …
      E_SortedOrder.Duration:
        // Sort the list by nDuration in descending order
        // …
    END_CASE
END_CASE

This approach is quick to implement. For a small application where the source code is not very extensive, it can be a perfectly good approach. However, the source code must be available for these extensions to be possible at all. It must also be ensured that FB_SequenceManager is not shared with other projects, e.g. via a PLC library that contains FB_SequenceManager. Since a parameter has been added to the method Sort(), its signature has changed. Program parts that call the method with one parameter can therefore no longer be compiled.

The UML diagram shows clearly that the structure has not changed. It is still a very monolithic application:

Sample 2 (TwinCAT 3.1.4024) on GitHub

Approach 2: Inheritance

Another approach to extend the application with the desired functions is the use of inheritance. This allows function blocks to be extended without having to modify the existing function block.

To do this, a new function block is created first, which inherits from FB_SequenceManager:

FUNCTION_BLOCK PUBLIC FB_SequenceManagerEx EXTENDS FB_SequenceManager

The new function block receives the method SortEx() with the two parameters that specify the desired sorting:

METHOD PUBLIC SortEx : BOOL
VAR_INPUT
  eSortedOrder      : E_SortedOrderEx;
  eSortedDirection  : E_SortedDirection;
END_VAR

Here, too, the data type E_SortedDirection is added, which specifies whether to sort in ascending or descending order:

TYPE E_SortedDirection :
(
  Ascending,
  Descending
);
END_TYPE

Instead of extending E_SortedOrder, a new data type is created:

TYPE E_SortedOrderEx :
(
  Id,
  Value,
  Duration
);
END_TYPE

The desired sortings can now be implemented in the method SortEx().

For sorting in ascending order, the method Sort() of the base FB (FB_SequenceManager) can be accessed. This means that the already existing sorting algorithms do not have to be implemented again; only the additional sorting has to be added:

CASE eSortedOrder OF
  E_SortedOrderEx.Id:
    SUPER^.Sort(E_SortedOrder.Id);
  E_SortedOrderEx.Value:
    SUPER^.Sort(E_SortedOrder.Value);
  E_SortedOrderEx.Duration:
    // Sort the list by nDuration in ascending order
    // …
END_CASE

Sorting in descending order, however, has to be programmed completely, since no existing methods can be reused here.

If a function block inherits from another function block, the new function block receives the full functionality of the base FB. It can be extended with additional methods and properties without the need to modify the base FB (open for extension). By using libraries, the source code can even be completely protected against modification (closed for modification).

Inheritance is thus one way of implementing the Open/Closed Principle (OCP).

Sample 3 (TwinCAT 3.1.4024) on GitHub

However, this approach has two disadvantages:

Excessive use of inheritance can lead to complex hierarchies. A derived FB is tightly bound to its base FB. If the base FB is extended by further methods or properties, every derived FB inherits these elements as well (if they are PUBLIC), even if the derived FB does not want to expose them at all.

An extension through inheritance may only be possible if the derived function blocks have access to the internal state of the base FB. Access to these internal elements can be marked as PROTECTED, so that only derived function blocks can access them.

In the example above, the sorting algorithms could only be added because the setter of the property aSequence was declared as PROTECTED. If write access to the property aSequence were not possible, the derived function block could not modify the list and therefore could not sort it.

This means, however, that the developer of such a function block always has to consider two use cases: on the one hand the user who uses the public methods and properties, and on the other hand the user who uses the function block as a base FB and adds new functionality via the PROTECTED elements. But which internal elements should be marked as PROTECTED? These elements also have to be documented so that they can be used at all.

Approach 3: Additional interface

Another solution approach is the use of interfaces instead of inheritance. However, this has to be taken into account right from the design stage.

If FB_SequenceManager is to be designed so that the user of the function block can add arbitrary sorting algorithms, the code for sorting the list should be removed from FB_SequenceManager. Instead, the sorting algorithm should access the list via an interface.

For our example, the interface I_SequenceSortable is added. This interface contains the method SortList(), which receives a reference to the list to be sorted.

METHOD SortList
VAR_INPUT
  refSequence  : REFERENCE TO ARRAY [1..5] OF ST_SequenceItem;
END_VAR

Next, the function blocks containing the respective sorting algorithms are created. Each of these function blocks implements the interface I_SequenceSortable. As an example, the function block that sorts by nId in ascending order is shown here.

FUNCTION_BLOCK PUBLIC FB_SequenceSortedByIdAscending IMPLEMENTS I_SequenceSortable

The name of the function block is arbitrary; what matters is the implementation of the interface I_SequenceSortable. This ensures that FB_SequenceSortedByIdAscending contains the method SortList(). The actual sorting algorithm is implemented in the method SortList().

METHOD SortList
VAR_INPUT
  refSequence  : REFERENCE TO ARRAY [1..5] OF ST_SequenceItem;
END_VAR
// Sort the list by nId in ascending order
// …

In its method Sort(), FB_SequenceManager receives a parameter of type I_SequenceSortable. When Sort() is called, a function block (e.g. FB_SequenceSortedByIdAscending) is passed that implements the interface I_SequenceSortable and therefore contains the method SortList(). In the method Sort() of FB_SequenceManager, SortList() is called and a reference to the list aSequence is passed.

METHOD PUBLIC Sort
VAR_INPUT
  ipSequenceSortable  : I_SequenceSortable;
END_VAR
IF (ipSequenceSortable <> 0) THEN
  ipSequenceSortable.SortList(THIS^._aSequence);
END_IF

In this way, the function block with the implemented sorting algorithm receives the reference to the list to be sorted.

A function block is created for each desired sorting algorithm. So on the one hand we have FB_SequenceManager with the method Sort(), and on the other hand the function blocks that implement the interface I_SequenceSortable and contain the sorting algorithms.

When the method Sort() of FB_SequenceManager is called, a function block is passed (here FB_SequenceSortedByIdAscending). This function block implements the interface I_SequenceSortable, through which the method SortList() is then called.

PROGRAM MAIN
VAR
  fbSequenceManager              : FB_SequenceManager;
  fbSequenceSortedByIdAscending  : FB_SequenceSortedByIdAscending;
  // …
END_VAR
fbSequenceManager.Sort(fbSequenceSortedByIdAscending);
// …

No inheritance is used in this approach. The function blocks for the sorting algorithms could use their own inheritance hierarchy if required. The function blocks could also implement further interfaces, since implementing multiple interfaces is possible.

Data storage (the list) and data processing (the sorting) are clearly separated from each other by the use of the interface. The property aSequence does not need write access, and access to internal variables of FB_SequenceManager is not necessary either.

The two data types E_SortedOrder and E_SortedDirection are not needed either. The choice of sorting is determined solely by the function block that is passed to Sort().

If a new sorting is added, it is not necessary to change or adapt existing elements.

Sample 4 (TwinCAT 3.1.4024) on GitHub

Optimization analysis

There are different techniques for functionally extending an existing function block without having to modify it. Besides inheritance, one of the main features of object-oriented programming (OOP), interfaces may be the better alternative.

When using interfaces, the decoupling is greater. However, the individual interfaces have to be provided as part of the software design. It must therefore be considered in advance which areas are to be abstracted by interfaces and which are not.

But even when using inheritance, it has to be considered during the development of a function block which internal elements are offered to derived function blocks via PROTECTED.

The definition of the Open/Closed Principle

The Open/Closed Principle (OCP) was formulated by Bertrand Meyer in 1988 and states:

A software entity should be open for extension, but at the same time closed for modification.

Software entity: this means a class, function block, module, method, service, …

Open: the behavior of software modules should be extensible.

Closed: extensibility should not be achieved by modifying existing software.

When the Open/Closed Principle (OCP) was defined by Bertrand Meyer at the end of the 1980s, the focus was on the programming language C++. He used inheritance, which was well known in the object-oriented world. The then still young discipline of object orientation promised great improvements in reusability and maintainability by allowing concrete classes to be used as base classes for new classes.

When Robert C. Martin adopted Bertrand Meyer's principle in the 1990s, he implemented it differently from a technical point of view. C++ allows multiple inheritance, whereas multiple inheritance is rather rare in newer programming languages. For this reason, Robert C. Martin focused on the use of interfaces. Further information can be found in the book (Amazon affiliate link *) Clean Architecture: Das Praxis-Handbuch für professionelles Softwaredesign.

Summary

Adhering to the Open/Closed Principle (OCP), however, carries the risk of overengineering. The possibility for extension should only be implemented where it is concretely needed. Software cannot be designed in such a way that every conceivable extension can be implemented without any changes to the source code.

This concludes my series on the SOLID principles. Besides the SOLID principles, there are further principles such as Keep It Simple, Stupid (KISS), Don't Repeat Yourself (DRY), the Law of Demeter (LOD) or You Ain't Gonna Need It (YAGNI). All these principles share the common goal of improving the maintainability and reusability of software.

Jürgen Gutsch: Play with Playwright

What is Playwright?

Playwright is a web UI testing framework that supports different languages and is maintained by Microsoft. Playwright can be used with JavaScript/TypeScript, Python, Java and, of course, C#. It comes with headless browser support for various browsers. It is used together with unit testing frameworks, and because of this you can just run it within your CI/CD pipeline. The syntax is pretty intuitive and I actually love it. Besides that, the documentation is really good and makes it easy to start working with it.

In this blog post, I don't want to introduce Playwright. Actually, the website and the documentation are a much better resource to learn about it. Instead, I would like to play around with it and use it differently: instead of testing a pre-hosted web application, I'd like to test a web application that is self-hosted in the test project using the WebApplicationFactory. This way you have really isolated UI tests that don't rely on any other infrastructure and won't fail because of network problems.

Does it work?

Let's try it:

Setting up the solution

The following lines create an ASP.NET Core MVC project and an NUnit test project. After that, a solution file will be created and the projects will be added to the solution. The last command adds the NUnit implementation of Playwright to the test project:

dotnet new mvc -n PlayWithPlaywright
dotnet new nunit -n PlayWithPlaywright.Tests
dotnet new sln -n PlayWithPlaywright
dotnet sln add .\PlayWithPlaywright\
dotnet sln add .\PlayWithPlaywright.Tests\

dotnet add .\PlayWithPlaywright.Tests\ package Microsoft.Playwright.NUnit

Run those commands and build the solution:

dotnet build

The build is needed to copy a PowerShell script to the output directory of the test project. This PowerShell script is the command line interface to control Playwright.

Next, we need to install the required browsers to execute the tests, using that PowerShell script:

.\PlayWithPlaywright.Tests\bin\Debug\net7.0\playwright.ps1 install

Generating test code

Using the codegen command helps you to autogenerate test code that can be copied to the test project:

.\PlayWithPlaywright.Tests\bin\Debug\net7.0\playwright.ps1 codegen https://asp.net-hacker.rocks/

This command opens the Playwright Inspector where you can record your test case. While you click through your application, the test code is generated on the right-hand side:

Playwright codegen

Instead of testing an external website like I did, you can also call codegen with a locally running application.

Just copy the generated code into the NUnit test project and fix the namespace and class name to match the namespace of your project.

Using the generated code as an example, you will be able to write more tests manually.
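
The result might look roughly like this - a minimal sketch, not the literal generated code (class name, URL and the expected title are placeholders):

using System.Text.RegularExpressions;
using System.Threading.Tasks;
using Microsoft.Playwright.NUnit;
using NUnit.Framework;

namespace PlayWithPlaywright.Tests;

public class RecordedTests : PageTest
{
    [Test]
    public async Task HomePage_HasExpectedTitle()
    {
        // Navigate to the recorded page and assert on its title.
        await Page.GotoAsync("https://asp.net-hacker.rocks/");
        await Expect(Page).ToHaveTitleAsync(new Regex("blog"));
    }
}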

Once this is done, just run dotnet test to execute the generated test and verify that Playwright is working.

Start playing

Usually Playwright tests applications that are running somewhere on a server. This has one simple problem: if the test cannot connect to the running application because of network issues, the test will fail. A test should only have one single reason to fail: it should fail because the expected behavior didn't occur.

The solution would be to test a web application that is hosted on the same infrastructure and within the same process as the actual test.

Microsoft already provides the possibility to write integration tests against a web application using the WebApplicationFactory. My idea was to use this WebApplicationFactory to host an application that can be tested with Playwright.

Since the WebApplicationFactory also provides an HttpClient, I would expect to have a URL to connect to. That HttpClient would have a BaseAddress that I can pass to Playwright.
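
Just to illustrate the idea (this is not yet the solution; it relies on the Microsoft.AspNetCore.Mvc.Testing package and the partial Program class that both get added later):

// Illustration only: this client talks to the in-memory test server, not to a real HTTP endpoint.
var factory = new WebApplicationFactory<Program>();
var client = factory.CreateClient();
var baseAddress = client.BaseAddress; // e.g. http://localhost/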

Would this really work?

WebApplicationFactory and Playwright

Actually, we can't combine them by default because the WebApplicationFactory doesn't really host a web application over HTTP. That means it doesn't use Kestrel to expose an endpoint over HTTP. The WebApplicationFactory creates a test server that hosts the application in memory and just simulates an actual HTTP server.

We need to find a way to start an HTTP server, like Kestrel, to host the application. Actually, we could start a WebApplicationBuilder, but the idea was to reuse the configuration of the Program.cs of the application we want to test, like it is done with the WebApplicationFactory.

Daniel Donbavand actually found a solution for overriding the WebApplicationFactory to actually host the application over HTTP and to get an endpoint that can be used with Playwright. I used Daniel's solution but made it a little more generic.

Let's see how this works together with Playwright.

First, add a project reference to the web project within the Playwright test project and add a package reference to Microsoft.AspNetCore.Mvc.Testing.

dotnet add .\PlayWithPlaywright.Tests\ reference .\PlayWithPlaywright\

dotnet add .\PlayWithPlaywright.Tests\ package Microsoft.AspNetCore.Mvc.Testing

The first one is needed to use the Program.cs with the WebApplicationFactory. The second one adds the WebApplicationFactory and the test server to the test project.

To use the Program class that is defined in a Program.cs using the minimal API, you can simply add an empty partial Program class to the Program.cs.

I just put the following line at the end of the Program.cs:

public partial class Program { }

To make the Playwright tests as generic as possible, I created an abstract SelfHostedPageTest class that inherits from the PageTest class that comes with Playwright, uses the CustomWebApplicationFactory internally, and exposes the server address to the test class that inherits from SelfHostedPageTest:

public abstract class SelfHostedPageTest<TEntryPoint> : PageTest where TEntryPoint : class
{
    private readonly CustomWebApplicationFactory<TEntryPoint> _webApplicationFactory;

    public SelfHostedPageTest(Action<IServiceCollection> configureServices)
    {
        _webApplicationFactory = new CustomWebApplicationFactory<TEntryPoint>(configureServices);
    }

    protected string GetServerAddress() => _webApplicationFactory.ServerAddress;
}

The actual Playwright test then inherits from SelfHostedPageTest instead of PageTest, as follows:

public class PlayWithPlaywrightHomeTests : SelfHostedPageTest<Program>
{
    public PlayWithPlaywrightHomeTests() :
        base(services =>
        {
			// configure needed services, like mocked db access, fake mail service, etc.
        }) { }
        
	// ...
}

As you can see, I pass the Program type as a generic argument to the SelfHostedPageTest. The CustomWebApplicationFactory that is used inside is almost the same implementation as Daniel's. I just added the generic argument for the Program class and added the possibility to pass the service configuration via the constructor:

internal class CustomWebApplicationFactory<TEntryPoint> :
   WebApplicationFactory<TEntryPoint> where TEntryPoint : class
{
    private readonly Action<IServiceCollection> _configureServices;
    private readonly string _environment;

    public CustomWebApplicationFactory(
        Action<IServiceCollection> configureServices,
        string environment = "Development")
    {
        _configureServices = configureServices; 
        _environment = environment;
    }

    protected override void ConfigureWebHost(IWebHostBuilder builder)
    {
        builder.UseEnvironment(_environment);
        base.ConfigureWebHost(builder);

        // Add mock/test services to the builder here
        if(_configureServices is not null)
        {
        	builder.ConfigureServices(_configureServices);
        }
    }
    
    // ...
    
}

Now we can use GetServerAddress() to get the server address and to pass it to the Page.GotoAsync() method:

[Test]
public async Task TestWithWebApplicationFactory()
{
    var serverAddress = GetServerAddress();

    await Page.GotoAsync(serverAddress);
    await Expect(Page).ToHaveTitleAsync(new Regex("Home Page - PlayWithPlaywright"));

    Assert.Pass();
}

That's it.

To try it out, just call dotnet test on the command line or in PowerShell, or run the relevant test in a test explorer.

Conclusion

With my test project, the result of running all the tests while I was offline looks like this:

test result

One failing test is the recorded test session of my blog on https://asp.net-hacker.rocks/ and the other one is the demo test I found on https://playwright.dev. The passing test is the one that uses the CustomWebApplicationFactory.

This is exactly the result I expected.

You'll find the example on my GitHub repository.

Holger Schwichtenberg: New in .NET 7 [7]: Auto-default structs in C# 11.0

The latest version of Microsoft's programming language initializes the fields and properties of structs with default values in parameterless constructors.

Stefan Henneken: IEC 61131-3: SOLID – The Interface Segregation Principle

The basic idea of the Interface Segregation Principle (ISP) has strong similarities with the Single Responsibility Principle (SRP): Modules with too many responsibilities can negatively influence the maintenance and maintainability of a software system. The Interface Segregation Principle (ISP) focuses on the module’s interface. A module should implement only those interfaces that are needed for its task. The following shows how this design principle can be implemented.

Starting situation

In the last post (IEC 61131-3: SOLID – The Liskov Substitution Principle), the example was extended by another lamp type (FB_LampSetDirectDALI). The special feature of this lamp type is the scaling of the output value. While the other lamp types output 0-100%, the new lamp type outputs a value from 0 to 254.

Just like all other lamp types, the new lamp type (DALI lamp) has an adapter (FB_LampSetDirectDALIAdapter). The adapters have been added during the implementation of the Single Responsibility Principle (SRP) and ensure that the function blocks of the individual lamp types are only responsible for a single function (see IEC 61131-3: SOLID – The Single Responsibility Principle).

The sample program was last adapted so that the output value from the new lamp type (FB_LampSetDirectDALI) is scaled within the adapter from 0-254 to 0-100 %. This makes the DALI lamp behave exactly like the other lamp types without violating the Liskov Substitution Principle (LSP).

This will serve as a starting point for explaining the Interface Segregation Principle (ISP).

Extension of the implementation

This time, too, the application has to be extended. However, instead of defining a new lamp type, an existing lamp type is extended with new functionality: the DALI lamp should be able to count its operating hours. For this purpose, the function block FB_LampSetDirectDALI is extended with the property nOperatingTime.

PROPERTY PUBLIC nOperatingTime : DINT

The setter can be used to set the operating hours counter to any value, while the getter returns the current state of the operating hours counter.

Since FB_Controller represents the individual lamp types, this function block is also extended by nOperatingTime.

The operating hours are recorded in the FB_LampSetDirectDALI function block. If the output value is > 0, the operating hours counter is incremented by 1 every second:

IF (nLightLevel > 0) THEN
  tonDelay(IN := TRUE, PT := T#1S);
  IF (tonDelay.Q) THEN
    tonDelay(IN := FALSE);
    _nOperatingTime := _nOperatingTime + 1;
  END_IF
ELSE
  tonDelay(IN := FALSE);
END_IF

The variable _nOperatingTime is the backing variable for the new property nOperatingTime and is declared in the function block.

What possibilities are there to transfer the value of nOperatingTime from FB_LampSetDirectDALI to the property nOperatingTime of FB_Controller? Here, too, there are now various approaches of integrating the required extension into the given software structure.

Approach 1: Extension of I_Lamp

The property for the new feature is integrated into the I_Lamp interface. Thus, the abstract function block FB_Lamp also receives the nOperatingTime property. Since all adapters inherit from FB_Lamp, the adapters of all lamp types receive this property, regardless of whether the lamp type supports an operating hours counter or not.

The getter and the setter of nOperatingTime in FB_Controller can thus directly access nOperatingTime of the individual adapters of the lamp types. The getter of FB_Lamp (abstract function block from which all adapters inherit) returns the value -1. The absence of the operating hours counter can thus be detected.

IF (fbController.nOperatingTime >= 0) THEN
  nOperatingTime := fbController.nOperatingTime;
ELSE
  // service not supported
END_IF

Since FB_LampSetDirectDALI supports the operating hours counter, the adapter (FB_LampSetDirectDALIAdapter) overrides the nOperatingTime property. The getter and the setter of the adapter access nOperatingTime of FB_LampSetDirectDALI. In this way, the value of the operating hours counter is passed on to FB_Controller.

(abstract elements are displayed in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub

This approach implements the feature as desired. Also, none of the SOLID principles shown so far are violated.

However, the central interface I_Lamp is extended only to add another feature for one lamp type. All other adapters of the lamp types, even those that do not support the new feature, also receive the nOperatingTime property via the abstract base FB_Lamp.

With each feature that is added in this way, the interface I_Lamp grows, and so does the abstract base FB_Lamp.

Approach 2: Additional Interface

In this approach, the I_Lamp interface is not extended, but a new interface (I_OperatingTime) is added for the desired functionality. I_OperatingTime contains only the property necessary for providing the operating hours counter:

PROPERTY PUBLIC nOperatingTime : DINT

This interface is implemented by the adapter FB_LampSetDirectDALIAdapter.

FUNCTION_BLOCK PUBLIC FB_LampSetDirectDALIAdapter EXTENDS FB_Lamp IMPLEMENTS I_OperatingTime

Thus, FB_LampSetDirectDALIAdapter receives the property nOperatingTime not via FB_Lamp or I_Lamp, but via the new interface I_OperatingTime.

If FB_Controller accesses the active lamp type in the getter of nOperatingTime, it is checked before the access whether the selected lamp type implements the I_OperatingTime interface. If this is the case, the property is accessed via I_OperatingTime. If the lamp type does not implement the interface, -1 is returned.

VAR
  ipOperatingTime  : I_OperatingTime;
END_VAR
IF (__ISVALIDREF(_refActiveLamp)) THEN
  IF (__QUERYINTERFACE(_refActiveLamp, ipOperatingTime)) THEN
    nOperatingTime := ipOperatingTime.nOperatingTime;
  ELSE
    nOperatingTime := -1; // service not supported
  END_IF
END_IF

The setter of nOperatingTime is structured similarly. After successfully checking whether I_OperatingTime is implemented by the active lamp, the property is accessed via the interface.

VAR
  ipOperatingTime  : I_OperatingTime;
END_VAR
IF (__ISVALIDREF(_refActiveLamp)) THEN
  IF (__QUERYINTERFACE(_refActiveLamp, ipOperatingTime)) THEN
    ipOperatingTime.nOperatingTime := nOperatingTime;
  END_IF
END_IF
(abstract elements are displayed in italics)

Sample 2 (TwinCAT 3.1.4024) on GitHub

Optimization analysis

The use of a separate interface for the additional feature corresponds to the ‘optionality’ from IEC 61131-3: SOLID – The Liskov Substitution Principle. In the above example, it can be checked at runtime of the program (with __QUERYINTERFACE()) whether a specific interface is implemented and thus the respective feature is supported. Further features, like bIsDALIDevice from the ‘Optionality’ example, are not necessary with this solution approach.

If a separate interface is offered for each feature or functionality, other lamp types can also implement this in order to implement the desired feature. If FB_LampSetDirect also has to receive an operating hours counter, FB_LampSetDirect must be extended by the property nOperatingTime. In addition, FB_LampSetDirectAdapter must implement the I_OperatingTime interface. All other function blocks, including FB_Controller, remain unchanged.

If the functionality of the operating hours counter changes and I_OperatingTime receives additional methods, only the function blocks that also support the feature must be adapted.

Examples of the Interface Segregation Principle (ISP) can also be found in .NET. For example, .NET has the interface IList. This interface contains methods and properties for creating, modifying and reading lists. Depending on the use case, however, it may be sufficient for the user to only read a list. Passing a list as IList in this case would also expose methods to modify it. For these use cases the IReadOnlyList interface can be used instead: with this interface, a list can only be read, so accidental modification of the data is not possible.
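
A small C# sketch of that idea (type and method names are just illustrative):

using System.Collections.Generic;

public static class SequenceReport
{
    // Read-only access: the caller's list cannot be modified here.
    public static int TotalDuration(IReadOnlyList<int> durations)
    {
        var sum = 0;
        foreach (var duration in durations) sum += duration;
        return sum;
    }

    // Full access: only ask for IList<T> when the method really has to modify the list.
    public static void Reset(IList<int> durations)
    {
        for (var i = 0; i < durations.Count; i++) durations[i] = 0;
    }
}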

Dividing functionalities into individual interfaces thus increases not only the maintainability but also the security of a software system.

The definition of the Interface Segregation Principle

This brings us to the definition of the Interface Segregation Principle (ISP):

A module that uses an interface should only be presented with the methods it really needs.

Or to put it another way:

Clients should not be forced to depend on methods they do not need.

A common argument against the Interface Segregation Principle (ISP) is the increased number of interfaces. A software design can still be adapted at any time during its development cycles. So, if you feel that an interface contains too many functionalities, check whether segregation is possible. Of course, overengineering should always be avoided. A certain amount of experience can be helpful here.

Abstract function blocks also represent an interface (see FB_Lamp). An abstract function block can contain basic functions to which the user only adds the necessary details. It is not necessary to implement all the methods or properties yourself. Here also it is important not to burden the user with technicalities which are not necessary for his tasks. The set of abstract methods and properties should be as small as possible.

Adherence to the Interface Segregation Principle (ISP) keeps the interfaces between function blocks as small as possible, which reduces the coupling between the individual function blocks.

Summary

If a software system has to cover additional features, reflect on the new requirements and do not hastily extend existing interfaces. Check whether separate interfaces are not the better choice. The reward is a software system that is easier to maintain, test and extend.

In the last remaining part, the Open/Closed Principle (OCP) will be explained in more detail.

Holger Schwichtenberg: New in .NET 7 [6]: Required members with C# 11.0

A new keyword in Microsoft's programming language C# specifies that properties or fields must be set.

Sebastian Seidel: 3 alternatives to Xamarin.UITest

UI testing is an essential part of mobile app development to ensure that the app delivers the best possible user experience and meets the needs and expectations of its users. But how do we do that in .NET MAUI when Xamarin.UITest is not fully compatible anymore? Let's look at 3 alternatives to Xamarin.UITest.

Holger Schwichtenberg: New in .NET 7 [5]: List pattern and slice pattern with C# 11

In C# 11, pattern matching works with lists. Subsets can also be extracted.

Code-Inside Blog: Azure DevOps Server 2022 Update

Azure DevOps Server 2022 - OnPrem?

Yes, I know - you can get everything from the cloud nowadays, but we are still using our OnPrem hardware and were running the “old” Azure DevOps Server 2020. Azure DevOps Server 2022 was released last December, so an update was due.

Requirements

If you are running an Azure DevOps Server 2020, the requirements for the new 2022 release are “more or less” the same, except for the following important parts:

  • Supported server operating systems: Windows Server 2022 & Windows Server 2019, whereas the old Azure DevOps Server 2020 could still run on Windows Server 2016
  • Supported SQL Server versions: Azure SQL Database, SQL Managed Instance, SQL Server 2019 and SQL Server 2017, whereas the old Azure DevOps Server still supported SQL Server 2016.

Make sure you have a backup

The last requirement was a surprise for me, because I thought the update would run smoothly, but the installer removed the previous version and I couldn't update, because our SQL Server was still on SQL Server 2016. Fortunately we had a VM backup and could roll back to the previous version.

Step by step

The update process itself was straightforward: Download the installer and run it.

[Screenshots: the individual steps of the Azure DevOps Server 2022 setup wizard]

The screenshots are from two different sessions. If you look carefully at the clock, you might see that the date is different; that is because of the SQL Server 2016 problem.

As you can see - everything worked as expected, but after we updated the server, the search, which is powered by Elasticsearch, was not working. The Elasticsearch Windows service just crashed on startup, and I'm not a Java guy, so… we fixed it by removing the search feature and reinstalling it. We had tried to clean the cache, but it was still not working. After the reinstall of this feature the issue went away.

Features

Azure DevOps Server 2022 is just a minor update (at least from a typical user perspective). The biggest new feature might be “Delivery Plans”, which are nice, but not a huge benefit for small teams. Check out the release notes.

A nice, nerdy enhancement that is not mentioned in the release notes: “mermaid.js” is now supported in the Azure DevOps Wiki, yay!
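
A tiny example in plain mermaid syntax (note that the exact code-fence syntax of the Azure DevOps wiki may differ, so treat this only as an illustration of the diagram syntax itself):

graph TD;
    Build-->Test;
    Test-->Deploy;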

Hope this helps!

Jürgen Gutsch: Creating a circuit breaker health check using Polly CircuitBreaker

Finally! After months of not writing a blog post, here it is:

A GitHub issue on the ASP.NET Core docs pointed me to Polly's CircuitBreaker, which is really great. Before that, I didn't even know that "circuit breaker" is a term in the software industry. Actually, I had implemented mechanisms that work like that, but never called them circuit breakers. Maybe that's the curse of never having visited a university :-D

http://www.thepollyproject.org/

What is a circuit breaker?

Let's assume you have a connection to an external resource that breaks from time to time, which doesn't break your application but degrades its health. If you check that broken connection, your application will be in a degraded state from time to time. What if those connection issues increase? When does it become a broken state? One broken connection out of one thousand might be okay. One out of ten might look quite unhealthy, right? If so, it makes sense to count the number of issues within a period of time and to throw an exception if the number of failures exceeds the allowed number. Exactly this is what a circuit breaker does.

Please excuse the amateurish explanation, I'm sure Martin Fowler can do it much better.

With Polly's circuit breaker, you can define how many exceptions of a specific type are allowed to happen before the circuit breaks, and for how long the circuit stays open afterwards.

The following snippet shows the usage of Polly's circuit breaker:

var policy = Policy.Handle<HttpRequestException>()
    .CircuitBreakerAsync(
        exceptionsAllowedBeforeBreaking: 2,
        durationOfBreak: TimeSpan.FromMinutes(1)
    );
await policy.ExecuteAsync(async () =>
{
    var client = new HttpClient();
    var response = await client.GetAsync("http://localhost:5259/api/dummy");
    if (!response.IsSuccessStatusCode)
    {
        throw new HttpRequestException();
    }
});

This creates an AsyncCircuitBreakerPolicy that breaks the circuit after two consecutive exceptions and keeps it open for one minute.

Actually, I wanted to see what this would look like in an ASP.NET Core health check. The health check I'm going to show here isn't perfect, but it shows the concept:

Creating a circuit breaker health check

Adding a circuit breaker to a health check or in general into a web application requires you to persist the state of that circuit breaker over multiple scopes or requests. This means we need to store instances of the AsyncCircuitBreakerPolicy as singletons in the service collection. See here.

Preparing the test application

To test the implementation I created a minimal endpoint that fails randomly within a new web application:

app.MapGet("/api/dummy", () =>
{
    var rnd = Random.Shared.NextInt64(0, 1000);
    if ((rnd % 5) == 0)
    {
        throw new Exception("new exception");
    }
    return rnd;
});

This endpoint returns a random number and fails in case the random number is divisible by five. This exception is meaningless, but the endpoint is good enough to test the health check we will implement.

We also need to create a health check endpoint that we will call to see the current health state. This endpoint executes the health check every time we call it. When calling it, the health check will call the dummy API and may get a randomly generated error.

app.UseHealthChecks("/health");

Implementing the health check

Next, we are going to write a health check that gets an AsyncCircuitBreakerPolicy via the service provider and executes a web request against the dummy breaking endpoint:

using Microsoft.Extensions.Diagnostics.HealthChecks;

namespace CircuitBreakerChecks;

public class ApiCircuitBreakerHealthCheck<TPolicy> : IHealthCheck where TPolicy : ApiCircuitBreakerContainer
{
    private readonly ApiCircuitBreakerHealthCheckConfig _config;
    private readonly IServiceProvider _services;

    public ApiCircuitBreakerHealthCheck(
        ApiCircuitBreakerHealthCheckConfig config, 
        IServiceProvider services)
    {
        _config = config;
        _services = services;
    }

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        var policy = _services.GetService<TPolicy>()?.Policy;

        try
        {
            if (policy is not null)
            {
                await policy.ExecuteAsync(async () =>
                {
                    var client = new HttpClient();
                    var response = await client.GetAsync(_config.Url);
                    if (!response.IsSuccessStatusCode)
                    {
                        throw new HttpRequestException();
                    }
                });
            }
        }
        catch (Exception)
        {
            return HealthCheckResult.Unhealthy("Unhealthy");
        }

        return HealthCheckResult.Healthy("Healthy");
    }
}

This health check is generic and receives a container for the AsyncCircuitBreakerPolicy as a generic type argument. We'll see later why.

In the CheckHealthAsync method, we get the specific container from the service provider to access the actual Polly AsyncCircuitBreakerPolicy, and we use it as shown in the first snippet. That part is quite common.

The container is a really simple object that just stores the Policy:

using Polly.CircuitBreaker;

namespace CircuitBreakerChecks;

public class ApiCircuitBreakerContainer
{
    private readonly AsyncCircuitBreakerPolicy _policy;
    public ApiCircuitBreakerContainer(AsyncCircuitBreakerPolicy policy)
    {
        _policy = policy;
    }
    public AsyncCircuitBreakerPolicy Policy => _policy;
}

This container gets registered as a singleton to persist the policy for a longer period of time.

The health check also uses a configuration class that passes the configuration arguments to the health check. Currently, it just contains the URL of the API to test:

namespace CircuitBreakerChecks;

public class ApiCircuitBreakerHealthCheckConfig
{
    public string Url { get; set; }
}

This configuration class gets registered as transient.

Now let's puzzle all that together to get it running. We could do all of that in the Program.cs, but that would mess up the file.

Instead of messing up the Program.cs, I would like to have a configuration like this:

builder.Services.AddApiCircuitBreakerHealthCheck(
    "http://localhost:5259/api/dummy", // URL to check
    "AddApiCircuitBreakerHealthCheck", // Name of the health check registration
    Policy.Handle<HttpRequestException>() // Polly CircuitBreaker Async Policy
        .CircuitBreakerAsync(
            exceptionsAllowedBeforeBreaking: 2,
            durationOfBreak: TimeSpan.FromMinutes(1)
        ));

In your project, you might need to change the URL to match your local port.

The call in this snippet registers and configures the ApiCircuitBreakerHealthCheck. To make this work, we create an extension method on the IServiceCollection that sticks it all together:

using Polly.CircuitBreaker;

namespace CircuitBreakerChecks;

public static class IServiceCollectionExtensions
{
    public static IServiceCollection AddApiCircuitBreakerHealthCheck(
        this IServiceCollection services,
        string url,
        string name,
        AsyncCircuitBreakerPolicy policy)
    {
        services.AddTransient(_ => new ApiCircuitBreakerHealthCheckConfig { Url = url });
        services.AddSingleton(new ApiCircuitBreakerContainer(policy));
        services.AddHealthChecks()
        	.AddCheck<ApiCircuitBreakerHealthCheck<ApiCircuitBreakerContainer>>(name);
        return services;
    }
}

That's it.

Trying it out

To try it out, run the application and call the health check endpoint in the browser.

The first calls should display a healthy state. The check stays healthy until the dummy endpoint fails twice in a row; then the circuit opens and the check reports unhealthy for the configured break duration of one minute, after which the circuit allows a trial call and can become healthy again.

Play around with it. Debug the minimal endpoint or debug the health check. It is kind of fun.

I published the demo code to GitHub: https://github.com/JuergenGutsch/aspnetcore-circuitbreaker-healthcheck

One issue left

With this implementation, we can only use one registration of this health check, so creating a generic health check that way didn't really pay off yet. The reason is the singleton instance of the circuit breaker policy: this instance would be shared across multiple health check registrations. To solve this, we need to find a way to register a unique singleton policy per health check registration. But this is a different story.
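
One possible direction - just a sketch under the assumption that a dedicated container type is defined per API, not part of the demo code - would be to derive a container per registration so that each health check gets its own singleton policy:

// Hypothetical: one derived container type per API keeps the singleton policies separate.
public class DummyApiCircuitBreakerContainer : ApiCircuitBreakerContainer
{
    public DummyApiCircuitBreakerContainer(AsyncCircuitBreakerPolicy policy) : base(policy) { }
}

// The registration would then use the derived type as the generic argument:
// services.AddSingleton(new DummyApiCircuitBreakerContainer(policy));
// services.AddHealthChecks()
//     .AddCheck<ApiCircuitBreakerHealthCheck<DummyApiCircuitBreakerContainer>>("dummy-api");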

Code-Inside Blog: Use ASP.NET Core and React with Vite.js

The CRA Problem

In my previous post I showed a simple setup with ASP.NET Core & React. The React part was created with the “CRA” tooling, which is kind of problematic. The “new” state-of-the-art React tooling seems to be vite.js, so let's take a look at how to use it.


Step by step

Step 1: Create a “normal” ASP.NET Core project

(I like the ASP.NET Core MVC template, but feel free to use something else - same as in the other blogpost)


Step 2: Install vite.js and init the template

Now move to the root directory of your project with a shell and execute this:

npm create vite@latest clientapp -- --template react-ts

This will install the latest & greatest vite.js-based React app in a folder called clientapp, using the react-ts template (React with TypeScript). Vite itself isn't focused on React and supports many different frontend frameworks.


Step 3: Enable HTTPS in your vite.js

Just like in the “CRA” setup, we need to make sure that the environment is served over HTTPS. In the “CRA” world we needed to copy files from the original ASP.NET Core & React template, but with vite.js there is a much simpler option available.

Execute the following command in the clientapp directory:

npm install --save-dev vite-plugin-mkcert

Then in your vite.config.ts use this config:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import mkcert from 'vite-plugin-mkcert'

// https://vitejs.dev/config/
export default defineConfig({
    base: '/app',
    server: {
        https: true,
        port: 6363
    },
    plugins: [react(), mkcert()],
})

Be aware: The base: '/app' will be used as a sub-path.

The important part for the HTTPS setting is that we use the mkcert() plugin and configure the server part with a port and set https to true.

Step 4: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package

Same as in the other blog post, we need to add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package to glue the ASP.NET Core and React worlds together. If you use .NET 7, use version 7.x.x; if you use .NET 6, use version 6.x.x, etc.

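If you prefer the command line over the NuGet package manager, the package can also be added like this (run from the directory of the web project; add a version argument if you need to pin it to your .NET version):

dotnet add package Microsoft.AspNetCore.SpaServices.Extensions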

Step 5: Enhance your Program.cs

Back to the Program.cs - this is more or less the same as with the “CRA” setup:

Add the SpaStaticFiles to the service collection like this in your Program.cs - be aware that vite.js builds everything into a folder called dist:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllersWithViews();

// ↓ Add the following lines: ↓
builder.Services.AddSpaStaticFiles(configuration => {
    configuration.RootPath = "clientapp/dist";
});
// ↑ these lines ↑

var app = builder.Build();

Now we need to use the SpaServices like this:

app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");

// ↓ Add the following lines: ↓
var spaPath = "/app";
if (app.Environment.IsDevelopment())
{
    app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>
    {
        client.UseSpa(spa =>
        {
            spa.UseProxyToSpaDevelopmentServer("https://localhost:6363");
        });
    });
}
else
{
    app.Map(new PathString(spaPath), client =>
    {
        client.UseSpaStaticFiles();
        client.UseSpa(spa => {
            spa.Options.SourcePath = "clientapp";

            // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)
            // .js and other static resources are still cached by the browser
            spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions
            {
                OnPrepareResponse = ctx =>
                {
                    ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();
                    headers.CacheControl = new CacheControlHeaderValue
                    {
                        NoCache = true,
                        NoStore = true,
                        MustRevalidate = true
                    };
                }
            };
        });
    });
}
// ↑ these lines ↑

app.Run();

Just like in the original blogpost: in development mode we use the UseProxyToSpaDevelopmentServer method to proxy all requests to the vite.js dev server, and in production we serve the static files from the dist folder.

Step 6: Invoke npm run build during publish

The last step completes the setup: we want to build both the ASP.NET Core app and the React app when we run dotnet publish:

Add this to your .csproj-file and it should work:

	<PropertyGroup>
		<SpaRoot>clientapp\</SpaRoot>
	</PropertyGroup>

	<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
		<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
		<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
		<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />

		<!-- Include the newly-built files in the publish output -->
		<ItemGroup>
			<DistFiles Include="$(SpaRoot)dist\**" />  <!-- Changed to dist! -->
			<ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
				<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->
				<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
				<ExcludeFromSingleFile>true</ExcludeFromSingleFile>
			</ResolvedFileToPublish>
		</ItemGroup>
	</Target>

Result

You should now be able to use Visual Studio Code (or something like it) and start the frontend project with npm run dev. If you open a browser and go to https://127.0.0.1:6363/app you should see something like this:

[Screenshot]

Now start the ASP.NET Core app and go to /app and it should look like this:

[Screenshot]

Ok - this looks broken, right? Well - this is more or less a “known” problem, but it can easily be avoided. If we import the logo from the assets folder (instead of referencing it via an absolute public path) it works as expected, so this shouldn’t be a general problem:

[Screenshot]

Code

The sample code can be found here.

Video

I made a video about this topic (in German, sorry :-/) as well - feel free to subscribe ;)

Hope this helps!

Holger Schwichtenberg: New in .NET 7 [4]: An additional scope with file-local types in C# 11

The current release of Microsoft's programming language C# introduces, with file, a new scope for classes, structs, interfaces and the like.
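
For illustration only (this example is mine, not from the article): a file-local type in C# 11 is declared with the file modifier and is visible only inside the source file that contains it.

// Visible only within this one .cs file - another file can declare its own Parser without a conflict.
file class Parser
{
    public static int[] ParseLine(string line) =>
        Array.ConvertAll(line.Split(';'), int.Parse);
}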

Thorsten Hans: VPN Austria – Unlock ORF and ServusTV

How you can unlock the popular free TV channels of ORF and ServusTV with a VPN for Austria. With a VPN like the one from NordVPN* you can unlock sports events like Formula 1, matches of the UEFA Champions League, the UEFA Europa League, the World Cup and European Championship, DFB-Pokal, as well as a lot of winter sports.

How can you enjoy the free content of ORF and ServusTV, without having to be in Austria? I’ll show you here.
With a reliable VPN, such as our test winner NordVPN* or the second-placed CyberGhost*, you can virtually transfer to Austria. Here is the explanation in detail:

How to unlock ORF and ServusTV Austria with VPN

Time needed: 15 minutes.

Unblock ORF and ServusTV in four simple steps with a VPN for Austria.

  1. Choose a VPN provider

    Play it safe if you care about your data and choose a reputable provider like NordVPN* or CyberGhost*. They have proven themselves to me.

  2. Install and launch the VPN software

    NordVPN and CyberGhost offer clients for Android, Windows, iOS, macOS, Amazon FireTV Stick, Linux and even Raspberry Pi. Install the right software for your system.

  3. Choose a server

    Choose a server from Austria and connect. With our favorites, you have several servers at your disposal, which means you always have fallback options. This is important, because streaming providers sometimes uncover and block servers. In this case, only a change of server will help.

  4. Start streaming

    Now you can watch ORF or ServusTV Austria completely protected from anywhere in the world. Once you are connected to the server, you can go to the ORF or ServusTV website and start streaming.

NordVPN is our top choice – unblock ServusTV and ORF Austria

To access ORF and ServusTV from anywhere in the world, you have to bypass the providers’ geoblocking. And the best way to do that is with a VPN service that is reliable and reputable. In my opinion, NordVPN is the best VPN provider and the best choice for streaming ORF and ServusTV Austria worldwide. Of course, you can also choose another provider. For NordVPN, I gladly pay about 3 Euros per month for all its advantages. The strict no-logs policy alone is worth it to me. This means that my data is never collected, stored or shared. So my ISP or the government can never access my personal data.

The best 3 VPNs for Austria under the magnifying glass

Since choosing the right VPN service can become a doctoral thesis due to the unmanageable oversupply, we have listed the most important criteria. After all, not every VPN is suitable for the Austrian programs of ORF and ServusTV. The provider must be able to reliably and seamlessly bypass geo-blocking without being exposed and deliver the required data speed so that even sports events can be streamed optimally.

In my opinion, these 3 are the best VPN services for Austria:

  • NordVPN* the best all-round
  • CyberGhost* the runner-up – great for streaming
  • PIA* the inexpensive classic

The all-rounder NordVPN – in detail

NordVPN* has the best price-performance ratio. It offers clients for Android, Windows, iOS, macOS, Amazon FireTV Stick, Linux and even for Raspberry Pi. Connection on 6 devices at the same time is possible. Moreover, it has a kill switch. You can choose from a whole 61 servers in Austria. Best of all, benefit from a money-back guarantee – during this phase, you can ask for your money back via NordVPN Support if you don’t like it.

The top streaming VPN CyberGhost – in detail

You can use CyberGhost* for up to 7 devices simultaneously. It supports Android, Windows, iOS, macOS, Amazon FireTV Stick, and Linux. It offers browser add-ons for Firefox and Chrome, has an adblocker, and includes malware, tracking, and phishing protection. CyberGhost also has a kill switch and offers a trial period with a money-back policy.

PIA the VPN classic for Austria – in detail

With the VPN from PIA*, 10 parallel connections are possible. It offers a client for Windows, iOS, macOS, and Android. It has browser extensions with which you can unlock Austrian programs as well. PIA also offers a money-back guarantee.

FAQ – Frequently asked questions and answers about VPN usage for Austria

What to do if the streaming via VPN Austria does not work properly or not at all?

Check your VPN connection. If you are sure that your VPN is switched on, delete the cache and cookies of the browser. Change the server manually and not via the automatic quick selection. If that doesn’t help, use the VPN of the popular providers via a browser extension on Chrome, Firefox and Safari. If it still doesn’t work, contact customer service, which is available 24/7 with reputable providers.

Why shouldn’t I stream ServusTV and ORF with a free VPN service?

The free VPN services are mostly throttled versions of the renowned providers. The inadequate server selection in Austria can prevent you from being able to unblock ORF and ServusTV. Lack of data volume and data speed can be more than a nuisance when it comes to the streaming experience, especially for sports events.

How can I watch ORF TVthek from abroad?

With a VPN, the geoblocking of ORF TVthek can also be unlocked.

Is VPN usage legal?

Using a VPN is completely legal in Germany, and also in many other countries like Switzerland or Austria. Using a VPN for illegal actions, such as downloading copyrighted media, is of course not legal.

Which sports events can I stream for free through VPN Austria?

Formula 1, UEFA Champions League matches, UEFA Europa League matches, World Cup and European Championship football matches, Ice Hockey World Championship matches and many more winter sports.


Holger Schwichtenberg: New in .NET 7 [3]: UTF-8 string literals in C# 11

The new version of Microsoft's programming language can create byte sequences in UTF-8 encoding from string literals.
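
For illustration only (my example, not from the article): the new u8 suffix produces the UTF-8 bytes of a literal at compile time as a ReadOnlySpan<byte>.

// "application/json" as UTF-8 bytes, produced at compile time (no Encoding.UTF8.GetBytes at runtime).
ReadOnlySpan<byte> contentType = "application/json"u8;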

Code-Inside Blog: Use ASP.NET Core & React together

The ASP.NET Core React template

[Screenshot]

Visual Studio (at least VS 2019 and the newer 2022) ships with an ASP.NET Core React template, which is “ok-ish”, but has some really bad problems:

The React part of this template is scaffolded via “CRA” (which seems to be problematic as well, but that is not the point of this post) and uses JavaScript instead of TypeScript. Another huge pain point (from my perspective) is that the template uses some special configuration to host just the React part for users - if you want to mix in some “MVC”/”Razor” stuff, you need to change some of this “magic”.

The good parts:

Both worlds can live together: during development the ASP.NET Core part is hosted via Kestrel and the React part is hosted by the webpack dev server. The lovely hot reload works as expected and is really powerful. If you are doing a release build, the project will take care of the npm magic.

But because the “bad problems” outweigh the benefits, we will try to integrate a typical React app into a “normal” ASP.NET Core app.

Step by step

Step 1: Create a “normal” ASP.NET Core project

(I like the ASP.NET Core MVC template, but feel free to use something else)

[Screenshot]

Step 2: Create a react app inside the ASP.NET Core project

(For this blogpost I use the “Create React App”-approach, but you can use whatever you like)

Execute this in the root of your ASP.NET Core project (Node.js & npm must be installed!):

npx create-react-app clientapp --template typescript

Step 3: Copy some stuff from the React template

The original ASP.NET Core React template ships with some scripts and settings that we want to preserve:

[Screenshot]

The aspnetcore-https.js and aspnetcore-react.js files are needed to set up the ASP.NET Core SSL dev certificate for the webpack dev server. You should also copy the .env & .env.development files into the root of your clientapp folder!

The .env file only has this setting:

BROWSER=none

A more important setting is in the .env.development file (change the port to something different!):

PORT=3333
HTTPS=true

The port number 3333 and the https=true will be important later, otherwise our setup will not work.

Also, add this line to the .env-file (in theory you can use any name - for this sample we keep it spaApp):

PUBLIC_URL=/spaApp

Step 4: Add the prestart to the package.json

In your project open the package.json and add the prestart-line like this:

  "scripts": {
    "prestart": "node aspnetcore-https && node aspnetcore-react",
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  },

Step 5: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package

[Screenshot]

We need the Microsoft.AspNetCore.SpaServices.Extensions NuGet package. If you use .NET 7, use version 7.x.x; if you use .NET 6, use version 6.x.x - etc.

Step 6: Enhance your Program.cs

Add the SpaStaticFiles to the services collection like this in your Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllersWithViews();

// ↓ Add the following lines: ↓
builder.Services.AddSpaStaticFiles(configuration => {
    configuration.RootPath = "clientapp/build";
});
// ↑ these lines ↑

var app = builder.Build();

Now we need to use the SpaServices like this:

app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");

// ↓ Add the following lines: ↓
var spaPath = "/spaApp";
if (app.Environment.IsDevelopment())
{
    app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>
    {
        client.UseSpa(spa =>
        {
            spa.UseProxyToSpaDevelopmentServer("https://localhost:3333");
        });
    });
}
else
{
    app.Map(new PathString(spaPath), client =>
    {
        client.UseSpaStaticFiles();
        client.UseSpa(spa => {
            spa.Options.SourcePath = "clientapp";

            // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)
            // .js and other static resources are still cached by the browser
            spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions
            {
                OnPrepareResponse = ctx =>
                {
                    ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();
                    headers.CacheControl = new CacheControlHeaderValue
                    {
                        NoCache = true,
                        NoStore = true,
                        MustRevalidate = true
                    };
                }
            };
        });
    });
}
// ↑ these lines ↑

app.Run();

As you can see, we run in two different modes. In our development world we just use the UseProxyToSpaDevelopmentServer method to proxy all requests that point to spaApp to the React webpack dev server (or whatever else is running there). The huge benefit is that you can use the React ecosystem with all its tools. Normally we use Visual Studio Code to run our React frontend and use the ASP.NET Core app as the “backend for frontend”. In production we use the build artefacts of the React build and make sure that they are not cached. To make the deployment easier, we need to invoke npm run build when we publish this ASP.NET Core app.

Step 7: Invoke npm run build during publish

Add this to your .csproj-file and it should work:

	<PropertyGroup>
		<SpaRoot>clientapp\</SpaRoot>
	</PropertyGroup>

	<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
		<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
		<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
		<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />

		<!-- Include the newly-built files in the publish output -->
		<ItemGroup>
			<DistFiles Include="$(SpaRoot)build\**" />
			<ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
				<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->
				<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
				<ExcludeFromSingleFile>true</ExcludeFromSingleFile>
			</ResolvedFileToPublish>
		</ItemGroup>
	</Target>

Be aware that these instructions are copied from the original ASP.NET Core React template and are slightly modified, otherwise the paths wouldn’t match.

Result

With this setup you can add any SPA app you like to your “normal” ASP.NET Core project.

If everything works as expected you should be able to start the React app in Visual Studio Code like this:

[Screenshot]

Be aware of the URL https://localhost:3333/spaApp. The port and the name are important for our sample!

Start your hosting ASP.NET Core app in Visual Studio (or in any IDE that you like) and all requests that point to spaApp will use the webpack dev server in the background:

[Screenshot]

With this setup you can mix client- and server-side approaches as you like - mission accomplished - and you can use any client setup (CRA or anything else) that you would like to.

Code

The code (with slightly modified values, e.g. another port) can be found here. Be aware that npm i needs to be run first.

Video

I uploaded a video on my YouTube channel (in German) about this setup:

Hope this helps!

Holger Schwichtenberg: New in .NET 7 [2]: Line breaks in interpolation expressions in C# 11.0

String interpolation expressions enclosed in curly braces may now contain comments and line breaks.
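
For illustration only (my example, not from the article): inside the braces of an interpolated string, C# 11 allows the expression to span multiple lines and to contain comments.

var numbers = new[] { 1, -2, 3, -4, 5 };
var summary = $"Positive numbers: {numbers
    .Where(n => n > 0)   // line breaks and comments are now allowed inside the braces
    .Count()}";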

Thorsten Hans: Stream Lucifer Netflix Series with a VPN

Lucifer is a fictional Netflix series based on the DC Comics character of the same name. The show follows Lucifer Morningstar, the Devil, as he abandons his throne in Hell for Los Angeles, where he starts his own nightclub called Lux. While in LA, Lucifer becomes involved with the LAPD and helps solve crimes.

The show is available to watch on Netflix in most countries. However, due to licensing restrictions, it is not available in some countries. If you are unable to watch Lucifer on Netflix in your country, you can use a VPN to bypass these restrictions.

A VPN is a service that allows you to connect to the internet via a server run by a third party. This server acts as a middleman between you and the websites you visit. By connecting to a VPN server in a different country, you can make it appear as if you are located in that country. This allows you to access websites that are blocked in your country.

There are many VPNs available, and each has its own benefits and drawbacks. Here are some of the most popular VPNs:

– NordVPN: NordVPN is one of the most popular VPNs available. It offers high speeds and strong security features, making it a good choice for streaming Lucifer.

– ExpressVPN: ExpressVPN is another popular VPN with high speeds and strong security features. It is also easy to use, making it a good choice for beginners.

– IPVanish: IPVanish is a good option for those who want a VPN with a large selection of servers. It also offers strong security features.

– PureVPN: PureVPN is a good choice for those who want a low-cost VPN with good security features.

How to Watch Using a VPN?

To watch Lucifer using a VPN, you will need to connect to a VPN server that is located in the United States. This will allow you to bypass geographic restrictions and watch the series.

What Are the Benefits of Using a VPN?

The benefits of using a VPN include:

– increased security and privacy

– bypass geographic restrictions

– unblock websites and content

– anonymous browsing.

Using a VPN is a great way to watch content if it is blocked in your country. It allows you to bypass restrictions and access content from anywhere in the world. Additionally, VPNs provide a level of security and privacy that is not available when using public Wi-Fi networks. So, if you are looking for a way to stream your favourite shows, a VPN is the best option available.

The first two seasons of Lucifer are available on Netflix US, but due to licensing agreements, the third season is not available until May 8th. However, there is a way to watch Lucifer Season 3 on Netflix US using a VPN.

To watch Lucifer Season 3 on Netflix US using ExpressVPN, follow these steps:

1. Sign up for an ExpressVPN account here.

2. Download the ExpressVPN app for your device.

3. Connect to a US server.

4. Open Netflix and start watching Lucifer!


Holger Schwichtenberg: New in .NET 7 [1]: Raw string literals in C# 11.0

C# 11, released with the latest .NET release, offers a new, simple way to create strings with line breaks and indentation.
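
For illustration only (my example, not from the article): a raw string literal starts and ends with at least three double quotes; nothing inside needs to be escaped, and the indentation of the closing quotes is stripped from every line.

var json = """
    {
      "name": "dotnet",
      "version": 7
    }
    """;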

Sebastian Seidel: 7 steps to migrate from Xamarin Forms to .NET MAUI

With the end of support for Xamarin approaching, developers are busy migrating existing Xamarin Forms projects to .NET MAUI as its successor. So are we, of course. In this article, I'll show 7 steps we've always had to take during the transition to make your transition easier.

Thorsten Hans: UEFA Champions League Free TV abroad

You want to watch the UEFA Champions League on free TV abroad? Then I’ll show you how it works with a good VPN like the one from NordVPN. Below are the foreign channels and streams that still broadcast the UEFA Champions League free and legally on free TV:

Watching Champions League free TV abroad is possible, but only with a good VPN

Every year the best soccer teams in Europe compete against each other to be crowned as Champions League winners. Of course, you’ll see the best clubs in the competition competing against each other. For example, Bayern Munich, Juventus Turin, Manchester United, Paris St. Germain, Manchester City, Real Madrid, Borussia Dortmund and FC Barcelona. Teams from England, Spain and Germany in particular regularly get very far in the tournament. It is always the most exciting soccer tournament of the year, where no soccer fan wants to miss the games of their stars.

Soccer has become a big business. If you want to watch top soccer as a soccer fan, you have to pay a lot of money for pay TV and stadium visits. That’s why I did a little research on where you can still watch UEFA Champions League abroad on free TV. With the free TV channels listed above, you can watch it legally and for free with a good VPN.

Guide: How to watch Champions League abroad for free

Time needed: 15 minutes.

Here’s a quick guide on how to watch Champions League matches on free TV abroad.

  1. Get a reliable VPN

    First of all, you need to know which streaming provider you want to use. Choose from the list of free TV channels above. Before that, you should choose a reliable VPN with which you want to watch the Champions League. I advise you to use NordVPN or CyberGhost. I use both myself all the time and it works perfectly.

  2. Install the VPN on your devices and connect

    Decide on which devices you want to watch Champions League on free TV abroad. Install the VPN on all your devices – whether Android, Windows, macOS, iOS or Linux.

  3. Connect to the VPN server

    Once the VPN software is installed on your device, you can connect to your destination country. In the list, the VPN server countries are in brackets after the free TV channels.

  4. Visit the streaming provider

    Once you have dialed in to the right country, simply visit the streaming provider’s website.

Champions League abroad for free on free TV

In some countries, Champions League matches continue to be shown free of charge. ServusTV Austria (servustv.com), for example, shows certain matches on Wednesdays. Especially in the preliminary round, the channel focuses on teams from its own country, but also on those with Austrian players or coaches. However, the broadcasts are subject to geoblocking and you need an Austrian IP address. With a good VPN like NordVPN or CyberGhost, this is no problem. You can use it to unblock the channel.

However, Champions League free TV abroad is also possible via other broadcasters. RTL Luxembourg or Belgium also stream various games for free. However, you need an IP address in Luxembourg or Belgium.

Why is it so complicated?

OK, it’s actually not as complicated as it sounds. The software of the best VPN providers is very user-friendly and even technically less experienced people can cope with it.

The problem at this point is called geoblocking. The broadcasters and streaming providers in the individual countries only have the license to broadcast the respective Champions League game in a certain region.

However, based on your IP address, the streaming providers know in which country you are located. If you come from England, for example, and want to stream ServusTV Austria, the service will block you, stating that the desired broadcast cannot be transmitted for legal reasons.

If you are on vacation in Austria, Luxembourg, Belgium or another country with free Champions League broadcasts on free TV, the situation is of course different. If you use the WLAN in your Airbnb, hotel or a SIM card from the corresponding country, you will also get a correct IP address and the broadcasters will not block you anyway.

However, if you are not in the corresponding country and connect to a server in Austria, for example, and then visit the website of the Austrian streaming provider, the provider thinks you are physically there. Now the regional block is lifted and you can stream Champions League on free TV.

What can I do if it does not work right away?

There are a few reasons why streaming does not work despite VPN and also a few solutions.

If you are using Android or iOS, the mobile apps of the respective streaming providers often work better than browsers.

With various streaming providers, some browsers have problems and in this case just try another internet browser. It can also help to delete cookies and cache or to use the incognito mode. In Firefox this is called private window.

It also happens that your server is unmasked. Disconnect and connect to another server. This has also helped me.

The best VPN services also provide browser extensions that act as a proxy. This often works better. Proxies aren’t as secure, but they are faster and when streaming you want speed.

The VPN protocol can also play a role. Change it if it doesn’t work at all. You can change it with the best services directly in the app with a few clicks.

The fact is, if you’re smart and look around a bit, you can watch a lot of interesting Champions League games for free – it’s very easy to save money here!

FAQ – Questions and answers about streaming

You still have some questions about the Champions League on free TV? Maybe you’ll find the right answers in this section.

Can I watch Champions League for free?

Yes, this is possible, but not everywhere anymore. There are several foreign broadcasters that stream UEFA Champions League matches on free TV. At the beginning of the article I have put a list of the channels that I have found.

Which channels show CL for free?

ZDF (Champions League final only) (VPN Germany)
ServusTV Austria (VPN Austria)
RTL Luxemburg (VPN Luxembourg)
RTL Zwee (VPN Luxembourg)
Canale 5 (VPN Italy)
Club RTL (VPN Belgium)

Is it legal to watch Champions League with a VPN for free?

Many believe that it is not illegal to bypass geoblocking and watch Champions League on free TV via foreign countries. You are almost certainly violating the terms of use of the streaming providers. It’s best to find out for yourself how this is regulated in your location.


Thorsten Hans: Watch Formula 1 for free – abroad via Free TV

In Belgium, Luxembourg, Austria and Switzerland, Formula 1 on free TV is still possible. With a reliable VPN, you can access these channels from anywhere to stream Formula 1 abroad on free TV for free. NordVPN* is my favorite here, because with it the free F1 streaming always works for me.

These channels show Formula 1 on free TV:

You can watch Formula 1 for free – for example via ORF / ServusTV or SRF

Perhaps a few more important notes. ServusTV and ORF take turns showing the races. SRF, on the other hand, shows all races, but you have to look up on which channel. This can be SRF 2 or SRF Info. However, all websites offer a TV program, which you can quickly find out about. RTL Luxembourg also broadcasts the races and even records them. You can stream the last race for a week as a repeat. RTL Play in Belgium shows the F1 races with French commentary.

If you are traveling and on vacation in Switzerland, Luxembourg, Belgium or Austria, you can simply stream the races. The above mentioned broadcasters have the license to officially broadcast the races in their country on free TV. If you are not on location, you will get a message that the respective content may not be broadcast for legal reasons – this is the so-called geoblocking, which you can bypass; I use NordVPN* or CyberGhost* for this.

How to stream Formula 1 abroad for free on free TV?

Time needed: 15 minutes.

Follow my simple instructions and you will be able to watch the races you want online for free abroad without any problems.

  1. Get a VPN

    Subscribe to a VPN with servers in Austria, Luxembourg, Belgium or Switzerland. It depends on what you prefer to watch over. The best VPN services offer servers in these countries anyway. The providers I recommend, NordVPN* and CyberGhost*, provide apps for Android, Windows, macOS, iOS, and Linux.

  2. Install the VPN on your devices

    Once you have decided, download the appropriate VPN apps and install them on your device. Open the app and log in with your user data.

  3. Connect to one of the servers

    At this point, it depends on whether you want to stream RTL / RTL Play / ORF / ServusTV or SRF. Connect to the appropriate server in the country where you want to stream Formula 1 for free on free TV.

  4. Open the streaming provider

    Now open one of the possible F1 streams. On SRF, the races are usually broadcast on SRF 2, possibly also on SRF Info. Sometimes you have to change the channel during a Grand Prix, but the commentators announce this in the live stream.

Pro tip: You can also watch the races on free TV via RTL Luxembourg (rtl.lu). There you can watch the repeat of the last Formula 1 race for a whole week. So if you missed a race or it was broadcast at an inconvenient time for you, you can watch the Formula 1 replay there for free. However, you need an IP address in Luxembourg and you have to connect to a corresponding server.

Is watching Formula 1 abroad for free legal?

Various specialist lawyers are convinced that you are not doing anything illegal when circumventing geoblocking. You are most likely violating the terms of use of the respective streaming provider. However, you don’t have to register with any of these broadcasters, and you probably won’t be prosecuted anyway.

Instead, the streaming providers rely on so-called geographical blockades and try to detect and block your VPN. That’s why you need a reliable VPN provider with many good servers.

However, there are countries where VPNs are prohibited. What I would like to say at this point: Find out for yourself what is allowed in your location and what is not. The fact is that you can stream Formula 1 abroad via free TV, as long as you are in one of these countries.

The F1 stream with VPN does not work – solutions

Here are some tips. If the F1 stream of Formula 1 on Free TV abroad does not work, this can have several causes.

Possibly, the streaming provider has a problem. This rarely happens, but it is a possibility. Maybe there is a technical problem that you are powerless against. In this case, you can only wait or switch to another channel.

Sometimes certain streams do not work with various browsers. Try another browser and maybe that will solve your problem.

Providers are always eager to detect VPNs. It happens that a server is unmasked and then the broadcast is blocked. If this is the case, simply change the server. Disconnecting and reconnecting often solves the problem.

The best VPNs offer multiple protocols. It may be that some VPN protocols don’t work well, so try to change the protocol in the app’s settings. You can also install the browser extension of the service. These are mostly proxies and they are great for streaming.

FAQ – Frequently asked questions and answers about streaming F1 for free

The bottom line is that it’s pretty easy to watch Formula 1 on free TV from anywhere if you know the right trick with VPN.

Where can I stream Formula 1 for free?

You can watch Formula 1 abroad for free on all the channels mentioned above. All races are geoblocked and therefore you need a VPN with servers in the corresponding country.

Where can I watch Formula 1 replays for free?

RTL Luxembourg shows the replay of the last race, but only for one week – still. This is quite useful when races are broadcast at night or very early in the morning. You don’t have to stay awake or get up early, just watch the Grand Prix when it suits you.

Can I watch F1 with a free VPN?

I highly doubt it. All free VPNs have limitations. Some offer only a few Mbytes of data volume per month and with that you can’t stream a complete race. Others don’t have the servers you need. Rather take a cheap premium service and you don’t have to be annoyed.


Thorsten Hans: Stream Six Nations Rugby for free abroad

As a rugby fan, you can’t miss the Six Nations, the most important rugby tournament of the year. I’ll show you how you can stream the Six Nations for free abroad. For this you need a good VPN like the one from NordVPN. Licensing rights, pay-TV and geoblocking are the reasons why you can’t easily stream the Six Nations at home for free. More about this later.

These foreign streams broadcast the Six Nations on free TV:

Country (VPN server)    Channel                Language
England                 BBC iPlayer + IPTV     English
Italy                   DMAX                   Italian
France                  France 2               French
Ireland                 RTE or Virgin          English

How to stream Six Nations Rugby for free abroad

You want to stream the rugby tournament of the year for free? I’ll explain it to you with the solution via BBC and IPTV in England:

Time needed: 15 minutes.

Watch all Six Nations rugby matches – it’s free:

  1. You need a VPN that can bypass geoblocking

    Get a reliable VPN like NordVPN or CyberGhost – these two services are known to work around geoblocking very well. ITV and BBC broadcast the games only in England and that’s why you need a VPN with local servers. Analogous for Italy, France and Ireland.

  2. Connect to a server

    The next step is to connect to one of these servers in England. This will give you a local English IP address and it will look like you are on site.

  3. Find out which channel shows which game

    BBC (https://www.bbc.com/) and ITV (https://www.itv.com/) usually show the matches in rotation. Find out in time, who broadcasts which match of the Six Nations. With both broadcasters you have to create an account and register for free. Registration requires you to enter a zip code in England… pick one!

  4. Finished!

    Now you can start the free stream of the Six Nations.

Which countries participate in the Six Nations?

The Six Nations brings together the best teams from Europe. England, Wales, Ireland, Scotland, France and Italy fight each year for the coveted rugby crown. Italy is the biggest underdog in this tournament, but they are increasingly causing big surprises. Italy is improving year by year and I am curious to see who they will upset this year.

The rules of the Six Nations – Rugby rules explained in brief

You are interested in the tournament, but you have some gaps in the rules? Here, briefly, is the scoring system of the rugby tournament, as it can cause some confusion:

  • Four points are awarded for a win.
  • In case of a draw, each team gets two points.
  • A team gets a try bonus point if it scores 4 or more tries in a game.
  • If a team loses by 7 or fewer points, the team receives a losing bonus point.
  • If a team manages a Grand Slam, i.e. wins all 5 games, then it gets 3 extra points.

The bonus points system is designed to make teams play as offensively as possible and not just give up when victory seems hopeless.

The best VPNs to watch Six Nations for free

In principle, any VPN that can successfully bypass geoblocking of one of the channels listed above will work. However, the service should deliver high speeds, otherwise Six Nations streaming is no fun. Below are two VPN providers that I have had excellent experiences with when streaming Six Nations for free.

NordVPN

The provider is perfect for streaming the Six Nations for free via the channels listed above abroad. The VPN unlocks all of the above channels and geoblocking is no longer a problem. I tested it myself with all variants and it works flawlessly.

NordVPN allows the connection of 6 devices at the same time. But it also allows you to use it on a router, which allows you to connect devices like smart TVs and game consoles to the VPN.

The service supports all popular operating systems: Windows, Android, iOS, macOS and even Linux – including Raspberry Pi.

NordVPN has an adblocker that also protects against malware, phishing and trackers.

Another special feature of NordVPN is the cloaking servers (Obfuscate). This allows the service to work even in countries with VPN blocks, such as China, Turkey, Egypt, and Russia.

The Kill Switch protects your devices in case the connection to the VPN fails accidentally. The app will immediately disconnect your Internet connection until a connection with a VPN server is restored.

You can even try NordVPN for free and risk-free because it comes with a 30-day money-back guarantee.

CyberGhost

CyberGhost is cheaper than NordVPN, but can bypass geoblocking just as well. You can also use it to unblock the above mentioned channels to watch Six Nations for free abroad without any problems.

CyberGhost allows 7 simultaneous device connections. Of course, you may also use this provider on a router to connect your Playstation, Xbox or Smart TV to it.

CyberGhost offers one of the best and most user-friendly Android apps I know of. Besides WireGuard, CyberGhost also offers OpenVPN and if you use the latter VPN protocol as TCP, stealth mode is automatically enabled.

Otherwise, there are apps for all popular operating systems: Android, iOS, Windows, macOS and Linux. There is even a GUI for the latter, and that is rather rare.

CyberGhost also has an adblocker that protects against other cyber threats – phishing, trackers and malware.

CyberGhost also offers a money-back guarantee. This is valid for 45 days.

FAQ – frequently asked questions about the Six Nations

Can I stream Six Nations Rugby for free?

Yes, you can. This works with the TV channels listed above abroad.

Can I watch Six Nations Rugby with a free VPN?

Probably not, and if so, then in an extremely poor quality. We run into several problems here. Most free VPNs limit the data volume and it is not enough to watch a complete game. Other free VPNs throttle the bandwidth and that’s why streaming is not possible without annoying interruptions to load the data. Another hurdle is that free VPNs only offer servers in a few countries.

Is it legal to stream the Six Nations with a VPN?

You are most likely violating the terms of use of some streaming providers. In the worst case, this will lead to an exclusion from the broadcaster. However, with a VPN you are anonymous on the Internet.
VPNs are completely legal in Germany, Austria, Switzerland and Luxembourg. However, they are not a license to break the law. Even if you use a VPN, you must comply with the respective legislation.
However, there are countries where VPNs are illegal or restricted. These include China, Egypt, Turkey, Russia, Iran and so on. If you’re traveling abroad, check the laws before you go – they’re known to change.

Where will the Six Nations be broadcast for free?

I only found free legal streams in England, France, Ireland and Italy. I have listed the respective channels of the free TV countries in the list at the beginning of the article.

Can I stream Six Nations on my phone or tablet?

Of course it works. Either you open the websites of the services, which are of course optimized for mobile devices, or you get the corresponding apps.
With a VPN, you can watch Six Nations for free on the go and never miss a game.


Thorsten Hans: Modern Family – How to Watch Abroad?

Modern Family is a modern-day family sitcom that originally aired on ABC in 2009. The show follows the lives of three generations of a fictional family, the Dunphys.

However, Modern Family is not available on Netflix in all countries. If you’re located in a country where Modern Family is not available on Netflix, you can use a VPN to watch it.

VPNs are online services that allow you to conceal your real IP address and encrypt your traffic. This means that you can bypass Netflix’s geographic restrictions and watch Modern Family no matter where you are located.

Netflix released the Modern Family series that has become very popular. However, some people are not able to watch it because their location does not allow them to access the content. A VPN can be used to change a person’s IP address so they can watch Modern Family from anywhere. There are many different types of VPNs, and each one has its own benefits.

A VPN is a great way to keep your information safe when you are online. It can also help you get around content restrictions. Some of the benefits of using a VPN include:

– Increased privacy and security – When you use a VPN, your traffic is encrypted, which means that it is much harder for someone to track your online activities.

– Access to blocked content – If you are trying to watch Modern Family from a location that doesn’t allow it, a VPN can help you get around those restrictions.

– Reduced risk of being hacked – A VPN can help protect your devices from being compromised by hackers.

– Improved connection speeds – Some VPNs can improve your connection speeds, which can be helpful if you are trying to stream content or play games online.

When choosing a VPN, it is important to consider the different options available to you. There are many different providers, and each one offers a unique set of features. Some of the things you should consider include:

– Price – VPNs can be expensive, so it is important to find one that fits your budget.

– Number of devices supported – Not all VPNs support the same number of devices, so you will want to make sure the one you choose can be used on all of your devices.

– Location – Some VPNs are only available in certain locations, so you will want to make sure the one you choose covers the area you need it to.

– Bandwidth – The amount of bandwidth that a VPN provides can vary, so you will want to make sure you have enough bandwidth to meet your needs.

If you are looking for a VPN to watch Modern Family, there are a few things to keep in mind. First, decide what features are important to you and then find a provider that offers those features. Second, make sure the provider is trustworthy and has a good reputation. Finally, read the terms of service carefully to make sure you understand what you are getting into. By following these tips, you can find the perfect VPN for your needs.

What VPNs Are There?

There are many different VPN providers out there, each with its own unique features and benefits. Some of the most popular VPN providers include ExpressVPN, NordVPN, and CyberGhost.

How to watch Modern Family using a VPN?

Once you have signed up for a VPN service, you will need to download and install the VPN software. Then, open the VPN software and connect to a server in the US. Once you are connected, you can open Netflix and watch Modern Family. Note that some VPN providers may slow down your internet connection, so you may want to test out a few different servers before settling on one.

So if you’re looking for a way to watch Modern Family outside of the United States, using a VPN is the best option. VPNs are easy to use and provide a lot of benefits, such as privacy and security. So don’t miss out on Modern Family – sign up for a VPN today!


Code-Inside Blog: Your URL is flagged as malware/phishing, now what?

Problem

On my last working day in 2022 - Friday, December 23 - I received a support ticket from a customer saying that our software seemed to be offline and that our servers were not responding. I checked our monitoring and the customer's server side and everything was fine. My first thought: maybe a misconfiguration on the customer side. But after a remote support session with the customer I saw that it “should work”, yet something in the customer's network was blocking the requests to our services. Next thought: firewall or proxy stuff. Always nasty, but we only use port 443, so nothing too special.

After a while I received a phone call from the customer's firewall team and they had discovered the problem: they are using a firewall solution from “Check Point” and our domain was flagged as “phishing”/”malware”. What the… They even created an exception so that Check Point doesn’t block our requests, but the next problem occurred: the customer's “Windows Defender for Office 365” has the same “flag” for our domain, so they reverted everything, because they didn’t want to change their settings too much.

[Screenshot]

Be aware that from our end everything was working “fine”: I could access the customer's services and our own Windows Defender didn’t have any problems with this domain.

Solution

Somehow our domain was flagged as malware/phishing and we needed to get this false positive listing changed. I guess there are tons of services that “track” “bad” websites and maybe they are all connected somehow. From this incident I can only suggest:

If you have trouble with Check Point:

Go to “URLCAT”, register an account and try to change the category of your domain. After you submit the “change request” you will get an email like this:

Thank you for submitting your category change request.
We will process your request and notify you by email (to: xxx.xxx@xxx.com ).
You can follow the status of your request on this page.
Your request details
Reference ID: [GUID]
URL: https://[domain].com
Suggested Categories: Computers / Internet,Business / Economy
Comment: [Given comment]

After ~1-2 days the change was done. Not sure if this is automated or not, but it was during Christmas.

If you have trouble with Windows Defender:

Go to “Report submission” in your Microsoft 365 Defender setting (you will need an account with special permissions, e.g. global admin) and add the URL as “Not junk”.

[Screenshot]

I’m not really sure if this helped or not, because we didn’t have any issues with the domain ourselves, and I’m not sure if those “false positive” tickets bubble up into a “global Defender catalog” or if this only affects our own tenant.

Result

Anyway - after those tickets were “resolved” by Check Point / Microsoft the problem on the customer side disappeared and everyone was happy. This was my first experience with such a “false positive malware report”. I’m not sure how we ended up on such a list and why only one customer was affected.

Hope this helps!

Christina Hirth : (Data) Ownership, Boundaries, Contexts – what do these words mean?

In the last months, we have started to use these terms more and more at my company without discussing the concepts behind them. One day I was asked, “What do you mean by data ownership?” 🤔 The question made me realise that I don’t know how well these concepts are understood.

These terms refer to sociotechnical concepts (some originating from Domain-driven design). They refer to one possible answer to the question: how can a product be improved and maintained in the long term? How can we avoid hunting for weeks for bugs, understanding what the code does, finding out what it should do, and hoping that fixing one issue does not lead to a new problem? How can we continue having fun instead of becoming more and more frustrated?

Real digital products address needs that were previously fulfilled manually. Companies which survived the first years of testing the product are often innovators in their market. They have a chance to stay ahead of the others, but they carry the burden of solving all questions themselves. I don’t mean the technical questions; nowadays, we have a considerable toolbox we can use. But all the competitors have that toolbox too. The questions to answer are how to organise in teams and how to organise the software to reach a steady pace without creating an over-complicated, over-engineered or over-simplified solution.

How to get a grip on the increasing complexity built up in those years when the only KPI that mattered was TTM (Time-to-Market)?

Years ago, the companies creating software to help automate work answered this question with silos around the architecture: frontend, backend, processing, etc. In the meantime, it became clear that this was not good enough.

Engineers are not hired to type code but to advise and help to solve problems. 

This means they should not belong to an engineering department anymore but be part of teams around different topics to handle: marketing, search, checkout, you name it. These are sub-domains or bounded contexts (depending on the importance of the subject, more than one bounded context can build the solution for the same sub-domain). These contexts and their boundaries are not fixed forever because the context changes, the market around the company changes, and the needs change. The people involved change and, finally, the effort needed changes. The best way also to define them is to take a look at how the business is organised (sales, marketing, finance, platform, developer experience, etc.) and how the companies using the product are organised (client setup, client onboarding, employee onboarding, payroll period, connected services, etc.). By aligning the software and – to get the most significant benefit – the teams to these sub-domains, you can ensure that the cognitive load for each team is smaller than the sum of all.

What are the benefits?

  • The domain experts and the engineers speak the same language, the ubiquitous language of their bounded context, to use the DDD terms.
  • The teams can become experts in their sub-domain to make innovation and progress easier as the problems are uncovered one after another. They can and will become responsible and accountable about their domain because they are the only ones enabled to do so.
  • Each team knows who to contact and with whom to collaborate because the ownership and the boundaries are clear. (No long-running meetings and RFCs anymore by hoping to have reached everyone involved).

What does data ownership mean in this case? Data ownership is not only about which team is the only one controlling how data is created and changed but also the one controlling which data is shared and which remains implementation detail. This way, they stay empowered and autonomous: they can decide about their experiments, reworks, and changes inside their boundaries.

Data ownership also means process ownership. 

It means the team which owns the data around “expenses”, for example, owns the features around this topic, what is implemented and when so that they are involved in each improvement or change regarding expenses from the beginning. This is the only way to respect the boundaries, take responsibility, and be accountable for all decisions around the software the engineers create.
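
To make this more concrete, here is a purely illustrative C# sketch (the type and member names are invented for this example, not taken from the article or any real codebase): the team owning the “expenses” context keeps its entity as an internal implementation detail and shares only a small, explicit contract with other contexts.

// Inside the "expenses" bounded context: the entity is an implementation detail
// and can be reworked freely by the owning team.
internal class Expense
{
    public Guid Id { get; set; }
    public decimal Amount { get; set; }
    public string? ApprovalNote { get; set; }   // never leaves the context
}

// Only this narrow contract is shared with other teams/contexts.
public record ExpenseSummary(Guid Id, decimal Amount);

public interface IExpenseQueries
{
    IReadOnlyList<ExpenseSummary> GetSummaries(DateOnly from, DateOnly to);
}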

Applying these concepts can’t be done overnight, mainly because it is not only about finding the (currently) good boundaries but also shifting the mindset from “let me build something” towards “I feel responsible for this part of my product”. It needs knowledge about the product and a lot of coaching and care. But finding the boundaries to start with should be doable in case of a product already established on the market and with a clear strategy. The alternatives are silos, continuously increasing cognitive load or the loss of an overview and local optimisations.

Martin Richter: PTZControl now supports up to 3 cameras

It surprises and pleases me that my little helper tool is being used so well and that there are actually still feature requests coming in.

I published a new version today.

  • Support for a maximum of 3 cameras now.
  • The layout adapts to the number of cameras so that as little screen space as possible is used.
  • Supported cameras are now: Logitech PTZ 2 Pro, PTZ Pro, Logitech Rally and the ConferenceCam CC3000e cameras.

Here is the corresponding link to my repository with the latest version:
https://github.com/xMRi/PTZControl



Christina Hirth : KanDDDinsky 2022 Watch-List

This is the list of the sessions I watched, some with additional insights, others as a resource. All of them are recommended if the topic is interesting to you.

All sessions recorded during the conference can be viewed on the KanDDDinsky YouTube channel.

Keynote By Mathias Verraes about Design & Reality

Thought-provoking, like all talks I saw from Mathias.

Connascence: beyond Coupling and Cohesion (Marco Consolaro)

An interesting old concept regarding cohesion and good developer practices. Fun fact: I had never heard of Connascence before, but it came up twice at this conference 😀.

Learn more about this from Jim Weirich’s “Grand Unified Theory of Software Design” (YouTube). It is a clear recommendation for programmers wanting to learn how to reduce coupling.

Architect(ure) as Enabler of Organization’s Flow of Change (Eduardo da Silva)

The evolution of the rate of change in time

“The level and speed of innovation has exploded, but we still have old mental models when it comes to organisations” – Taylorism says hello 🙁

Evolution pattern depends on architectural and team maturity.

“There is no absolute wrong or right in the organisational model of the architecture owners; it is contextual and depends on the maturity.”

This talk is highly recommended if you work in or with big organisations.

Systems Thinking by combining Team Topologies with Context Maps (Michael Plöd)

A lot of overlapping between Team Topologies and DDD

💯 recommended! (The slides are on speakerdeck.)

Road-movie architectures – simplifying our IT landscapes (Uwe Friedrichsen)

There will always be multiple architectures.

“The architecture is designed for 80-20% of the teams, and it is ignored by 80-20% of them.”

The complexity trap

Uwe describes his concept-in-evolution of a desirable solution that could help avoid the different traps. They should be

  • collaborative and inclusive,
  • allowing to travel light with the architecture,
  • topical and flexible

The concept is fascinating, with a lot of good heuristics. A clear recommendation 👍

How to relate your OKRs to your technical real-estate (Marijn Huizenveld)

Common causes of failure with OKRs
Combine OKRs with Wardley Maps

The slides are on speakerdeck. Marijn is a great speaker; the talk is recommended if you work with OKRs.

Improving Your Model by Understanding the Persona Behind the User (Zsofia Herendi)

Salesforce study: 76% of customers expect companies to understand their needs and expectations.

😱 What about the remaining 24%?! Do they not even expect to get what they need?

Zsofia gives a lot of good tips about visualising and understanding the personas.

Balancing Coupling in Software Design (Vladik Khononov)

Maths meet physics meet software development – yet again, a talk from Vladik, which must be seen more than once.

The function for calculating the pain due to coupling.

By reducing one of these factors (strength, volatility, distance) to 0, the maintenance pain due to coupling can be reduced to (almost) 0 - the pain being, roughly, the product of the three. Now we know what we have to do 😁.

Culture – The Ultimate Context (Avraham Poupko)

Why doesn’t the DDD community have any actual conflicts? Because our underlying concept is to collaborate – to discuss, challenge, decide, agree, commit (even if we disagree) and act.

 

This talk is so “beautiful” (I know, it is a curious thing to say), so overwhelming (because of this extraordinary speaker 💚), it would be a failure even to try to describe it! It is available, go and watch it if you want to understand the DDD community.


This list is just a list. It won’t give you any hints about the hallway conversations which happen everywhere, about the feeling of “coming home to meet friends!” which I got each year, and I won’t even try 🙂

Code-Inside Blog: SQLLocalDb update

Short Intro

SqlLocalDb is a “developer” SQL server, without the “full” SQL Server (Express) installation. If you just develop on your machine and don’t want to run a “full blown” SQL Server, this is the tooling that you might need.

From the Microsoft Docs:

Microsoft SQL Server Express LocalDB is a feature of SQL Server Express targeted to developers. It is available on SQL Server Express with Advanced Services.

LocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. Once LocalDB is installed, you can initiate a connection using a special connection string. When connecting, the necessary SQL Server infrastructure is automatically created and started, enabling the application to use the database without complex configuration tasks. Developer Tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server.

Problem

(I’m not really sure how I ended up with this problem, but after I solved it, I put it on my “To Blog” bucket list.)

From time to time there is a new SQLLocalDb version, but upgrading an existing installation is a bit “weird”.

Solution

If you have installed an older SQLLocalDb version, you can manage it via the sqllocaldb command line tool. If you want to update, you must first delete the “current” MSSQLLocalDB instance.

To do this, use:

sqllocaldb stop MSSQLLocalDB
sqllocaldb delete MSSQLLocalDB

Then download the newest version from Microsoft. If you choose “Download Media”, you can download the LocalDB installer package (SqlLocalDB.msi).

Download it, run it, and restart your PC. After that you should be able to connect to SQLLocalDb again.
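To check what is installed afterwards, the same CLI can be used (the exact output depends on the installed version):

sqllocaldb versions
sqllocaldb info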

We solved this issue with the help of this blogpost.

Hope this helps! (and I can remove it now from my bucket list \o/ )

Christina Hirth: Paper on Event-Driven Architecture

Programming Without a Call Stack – Event-driven Architectures
by Gregor Hohpe (2006), shared by Indu Alagarsamy at KanDDDinsky 2019.

Holger Schwichtenberg: On a personal note: Books on C# 11.0, Blazor 7.0 and Entity Framework Core 7.0

The Dotnet-Doktor has already brought his book series up to date with the final version of .NET 7.0.

Jürgen Gutsch: Windows Terminal, PowerShell, oh-my-posh, and Winget

I'm thinking about changing the console setup I use for some development tasks on Windows. The readers of this blog already know that I'm a console guy. I'm using git and docker in the console only. I'm navigating my folders using the console. I even used the console to install, update, or uninstall tools using Chocolatey (https://chocolatey.org/).

This post is not a tutorial on how to install and use the tools I'm going to mention here. It is just a small portrait of what I'm going to use. Follow the links to learn more about the tools.

PowerShell and oh-my-posh

Actually, working in the console doesn't work for me with the regular cmd.exe, and I completely understand why developers on Windows still prefer using Windows-based tools for git, docker, and so on. Because of that, I was using cmder (https://cmder.app/), a great terminal with useful Linux commands and great support for git. The git support not only integrates the git CLI, it also shows the current branch in the prompt:

Cmder in action

The latter is a great help when working with git; I missed that in other terminals. Cmder also supports adding different shells like git bash, WSL, or PowerShell, but I used the cmd shell, which has been enriched with a lot more useful commands. This worked great for me.

For a couple of weeks now, I've been playing around with the Windows Terminal a little more. The reason I looked into the Windows Terminal is that I like its more lightweight settings.

The Windows Terminal (download it from the Windows Store) and oh-my-posh (https://ohmyposh.dev/) have been out for a while, and I followed Scott Hanselman's blog posts about them for a long time but wasn't able to get them running on my machine. Two weeks ago I got some help from Jan De Dobbeleer to get it running. It turned out that I had too many posh versions installed on my machine, and the PATH environment variable was messed up. After cleaning my system and reinstalling oh-my-posh by following the installation guide, it is working quite well:

Terminal and posh in action

I still need to configure the prompt a little bit to match my needs 100%, but the current theme is great for now and does more than cmder did. I'd like to display the latest tag of the current git repository and the currently used dotnet SDK version, but this will be another story.
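For reference, the oh-my-posh setup boils down to one line in the PowerShell profile (a sketch based on the official installation guide; the theme file is just an example and POSH_THEMES_PATH assumes the default installation):

# In $PROFILE, e.g. Microsoft.PowerShell_profile.ps1
oh-my-posh init pwsh --config "$env:POSH_THEMES_PATH\jandedobbeleer.omp.json" | Invoke-Expression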

Windows Terminal

In the Windows Terminal, I configured oh-my-posh for both Windows PowerShell 5 and the new PowerShell 7, and set PowerShell 7 as my default console. I also added configurations to use PowerShell 5, WSL (both Ubuntu 18 and Ubuntu 20), git bash, and the Azure Cloud Shell. I did almost the same with cmder, but I like the way it gets configured in Windows Terminal.

Winget

Winget is basically an apt-get for Windows, and I like it.

As mentioned, Chocolatey is the tool I used to install the tools I need, like git, cmder, etc. I tried winget for a while after it was mentioned on Twitter (unfortunately, I forgot the link). Actually, it is much better than Chocolatey because it uses the application registry used by Windows, which means it can update and uninstall programs that were installed without using winget.

Winget is the console way of installing and managing installed programs on Windows, and it is natively installed on Windows 10 and Windows 11.
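A few typical winget calls, just as an illustration (the package id is an example):

winget search git
winget install Git.Git
winget list
winget upgrade --all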

Conclusion

So I'm going to change my setup from this ...

  • cmder
    • cmd
    • chocolatey

... to that ...

  • Windows Terminal
    • Powershell7
    • oh-my-posh
    • Winget

... and it seems to work great for me.

Any other tools that I should have a look at? Just drop me a comment :-)

Holger Schwichtenberg: .NET 7 will be released on November 8 during the .NET Conf 2022

During the .NET Conf 2022 developer conference next week, Microsoft plans to release the production-ready versions of .NET 7.0 and C# 11.0.

Jürgen Gutsch: ASP.NET Core Globalization and a custom RequestCultureProvider [Updated]

In this post, I'm going to write about how to enable and use Globalization in ASP.NET Core. Since you can't change the culture depending on route values by default, I'll show you how to create and register a custom RequestCultureProvider that does this job.

UPDATE:

Hisham Bin Ateya [pointed me to the fact via Twitter](TWITTER STATUS) that there already is a RequestCultureProvider that can change the culture depending on route values in ASP.NET Core. Because of that, please see the last section of this blog post just as an example of how to create a custom RequestCultureProvider.

I also restructured the post a little bit to separate general information about Globalization from the RequestCultureProvider part. If you are familiar with Globalization, just skip the first sections and jump to the second-to-last section.

About Globalization

Resources Files

Like in the old days of the .NET Framework, the resources (strings, images, icons, etc.) for different languages are stored in so-called resource files that end with .resx and are, by default, stored in a folder called Resources.

Unlike in the good old days of the .NET Framework, the right resource files are fetched automatically by the implementation of the specific Localizer, as long as you follow some naming conventions.

  • If you inject the Localizer into a controller, the resource file should be named like Controllers.ControllerClassName.[Culture].resx or placed in a subfolder called Controllers and named ControllerClassName.[Culture].resx.
  • If you inject the Localizer into a view, it is almost the same as for the controllers. The difference is just to have a view name in the resource path instead of a controller name: Views.ControllerName.ViewName.[culture].resx or Views/ControllerName/ViewName.[culture].resx.

It is up to you to decide how you want to structure your resource files. Personally, I prefer the folder option. Also, an autogenerated code file, as you might know from the past, is no longer needed, since you use a localizer to access the resources.

Unfortunately, there is no way yet to add a resource file via the .NET CLI. Maybe there will be a template in the future. I created the resource file with Visual Studio 2022 and copied it to create the other files needed.

Localizers

You no longer need to use the resource manager to read the actual localized strings from the resource files. You can now use an IStringLocalizer or an IHtmlLocalizer. The latter doesn't HTML-encode the strings that are stored in the resource files and can be used to localize strings that contain HTML code if needed:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Localization;

namespace Internationalization.Controllers;

public class HomeController : Controller
{
    private readonly IStringLocalizer<HomeController> _localizer;

    public HomeController(IStringLocalizer<HomeController> localizer)
    {
        _localizer = localizer;
    }

    public IActionResult Index()
    {
        return View(new { Title = _localizer["About Title"] });
    }
}

Neither the resource key named "About Title" nor even the resource file needs to exist. If the Localizer doesn't find the key, the key itself gets returned as the string. You can use any kind of string as a key. This can help you to develop the application without having the resource files in place.

You can even inject a localizer in the Razor View like this:

@using Microsoft.AspNetCore.Mvc.Localization
@inject IViewLocalizer Localizer

@model HomeIndexViewModel
@{
    ViewData["Title"] = Localizer["Title"];
}

<h1>@ViewData["Title"]</h1>

In this case, it is an HtmlLocalizer, so the resource value can also contain HTML that doesn't get encoded when it is written out to the view. Even if it's not recommended to store HTML in resource files, it might be needed in some cases. In general you shouldn't do it, because HTML should be part of the frontend templates like Razor, Blazor, etc.

Instead of using the ViewLocalizer in the Razor templates, you can also localize the entire view. To do that, you need to suffix the view name with the needed culture or put the view in a subfolder named after the culture. How localized views are handled needs to be configured when enabling Localization and Globalization.

Enabling Globalization in ASP.NET Core

As usual, you need to add the required services to enable localization:

builder.Services.AddLocalization(options =>
{
    options.ResourcesPath = "Resources";
});
builder.Services.AddControllersWithViews()
    .AddViewLocalization(LanguageViewLocationExpanderFormat.Suffix)
    .AddDataAnnotationsLocalization();

The first line adds general localization to be used in the C# code, like controllers, etc. Setting the ResourcesPath in the options is optional and only added to the snippet to show you that you can change the path where the resources are stored.

After that, ViewLocalization as well as DataAnnotationsLocalization are added to the service collection. The LanguageViewLocationExpanderFormat tells the view localizer that, in the case of localized views, the culture is added as a suffix to the filename instead of being part of the folder structure.

After adding the needed services to the service collection the required middleware needs to be added as well:

app.UseRequestLocalization(options =>
{
    var culture = new List<CultureInfo> {
        new CultureInfo("en-en"),
        new CultureInfo("fr-fr"),
        new CultureInfo("de-de")
    };
    options.DefaultRequestCulture = new RequestCulture("en-en");
    options.SupportedCultures = culture;
    options.SupportedUICultures = culture;
});

This middleware uses the pre-configured RequestCultureProviders to set the culture and the UI culture for the current request. With this culture set, the localizers can select the right resource files or the right localized views.

That's it for enabling Localization and Globalization. With this information, you should already be able to create multilanguage applications.

Culture vs. UI Culture

Setting the culture sets the application to a specific language and, optionally, a region. If you also set the UI culture, you can make a distinction between translating texts and the way numbers, dates, and currencies are displayed. The UI culture is used to load resources from the corresponding resource file, and the culture is used to change the way numbers, dates, and currencies are formatted and displayed.

In some cases, it makes sense to handle these separately. If you only want to translate your page without taking care of number and date formats, etc., you should only change the UI culture; a small illustration follows below.
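A minimal sketch (not part of the sample project) to illustrate the difference:

using System.Globalization;

// The culture controls formatting of numbers, dates, and currencies ...
CultureInfo.CurrentCulture = new CultureInfo("de-DE");
Console.WriteLine(1234.56.ToString("C")); // e.g. "1.234,56 €"

// ... while the UI culture controls which resource file (*.fr.resx, *.fr-FR.resx, ...) is used.
CultureInfo.CurrentUICulture = new CultureInfo("fr-FR");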

Localize ViewModels

While enabling view localization, we also enabled DataAnnotationsLocalization. This helps you translate labels for form fields in case you use the @Html.LabelFor() method. You don't need to specify the ResourceType anymore; since there is no longer an autogenerated C# file, there is also no ResourceType to specify. Inside the ViewModel you just need to add the DisplayAttribute:

public class EmployeeViewModel
{
    [Display(Name = "Number")]
    public int Number { get; set; }

    [Display(Name = "Firstname")]
    public string? Firstname { get; set; }

    [Display(Name = "Lastname")]
    public string? Lastname { get; set; }

    [Display(Name = "Department")]
    public string? Department { get; set; }

    [Display(Name = "Phone")]
    public string? Phone { get; set; }

    [Display(Name = "Email")]
    public string? Email { get; set; }

    [Display(Name = "Date of birth")]
    public DateTime DateOfBirth { get; set; }

    [Display(Name = "Size")]
    public decimal Size { get; set; }

    [Display(Name = "Salery")]
    public decimal Salery { get; set; }
}

The DataAnnotationsLocalizer will automatically use the string that is set in the Name property as a key to search for the relevant resource. This also works for the Description and the ShortName properties.

The resource file that is used to translate the display names has to be placed inside subfolders called ViewModels/ControllerName. Example: /Resources/ViewModels/Home/EmployeeModel.de-DE.resx

Creating a custom RequestCultureProvider

RequestCultureProviders

As mentioned, RequestCultureProviders retrieve the culture from somewhere and prepare it so the current request can work with it. A RequestCultureProvider returns a ProviderCultureResult that carries both the culture and the UI culture. The two can differ if needed, but in most cases they will be the same.

There are three preconfigured RequestCultureProviders:

  • QueryStringRequestCultureProvider This provider extracts the culture and UI culture from query string values if there are any. This means you can switch the language by just setting the query strings: ?culture=de-DE&ui-culture=de-DE

  • CookieRequestCultureProvider This provider extracts the culture information from a specific cookie. The cookie-name is .AspNetCore.Culture and the value of the cookie might look like this: c=es-MX|uic=es-MX (c is the culture and uic is the ui-culture)

  • AcceptLanguageHeaderRequestCultureProvider This provider extracts the language information from the Accept-Language header that is sent by the browser. Every browser has preferred languages configured and sends those languages to the server. With this information, you can localize your application specifically to the user's language.

As you have seen in the previous section, not every language sent by the Accept-Language header, cookie, or query string gets accepted by your application. You need to define a list of supported cultures and a default request culture that is used if the language sent by the client isn't supported by your application.

The custom RequestCultureProvider

UPDATE:

Actually, there is an existing RequestCultureProvider in ASP.NET Core that can change the culture depending on route values. Since it isn't in the default list of registered RequestCultureProviders, I assumed there was none. That was wrong.

Since there is one already, just see the following section as an example of how to create a custom RequestCultureProvider.
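For completeness, registering the built-in provider looks roughly like this (a sketch; the provider lives in the Microsoft.AspNetCore.Localization.Routing namespace and uses the route value keys "culture" and "ui-culture" by default):

app.UseRequestLocalization(options =>
{
    // supported cultures and default culture as shown above ...
    options.RequestCultureProviders.Insert(0, new RouteDataRequestCultureProvider
    {
        RouteDataStringKey = "culture",
        UIRouteDataStringKey = "ui-culture"
    });
});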

What I was missing in the list of RequestCultureProviders is a RouteValueRequestCultureProvider: a provider that gets the culture information from a route value in case it is part of the route, like this: /en-US/Home/Index/

Let's assume we have a route configured like this:

app.MapControllerRoute(
    name: "default",
    pattern: "{culture=en-us}/{controller=Home}/{action=Index}/{id?}");

This adds the culture as part of the route.

Actually, I built a RouteValueRequestCultureProvider that handles the route values:

using Microsoft.AspNetCore.Localization;

namespace Internationalization.Providers;

/// <summary>
/// Determines the culture information for a request via values in the route values.
/// </summary>
public class RouteValueRequestCultureProvider : RequestCultureProvider
{
    /// <summary>
    /// The key that contains the culture name.
    /// Defaults to "culture".
    /// </summary>
    public string RouteValueKey { get; set; } = "culture";

    /// <summary>
    /// The key that contains the UI culture name. If not specified or no value is found,
    /// <see cref="RouteValueKey"/> will be used.
    /// Defaults to "ui-culture".
    /// </summary>
    public string UIRouteValueKey { get; set; } = "ui-culture";

    public override Task<ProviderCultureResult?> DetermineProviderCultureResult(HttpContext httpContext)
    {
        if (httpContext == null)
        {
            throw new ArgumentNullException(nameof(httpContext));
        }

        var request = httpContext.Request;
        if (!request.RouteValues.Any())
        {
            return NullProviderCultureResult;
        }

        string? queryCulture = null;
        string? queryUICulture = null;

        if (!string.IsNullOrWhiteSpace(RouteValueKey))
        {
            queryCulture = request.RouteValues[RouteValueKey]?.ToString();
        }

        if (!string.IsNullOrWhiteSpace(UIRouteValueKey))
        {
            queryUICulture = request.RouteValues[UIRouteValueKey]?.ToString();
        }

        if (queryCulture == null && queryUICulture == null)
        {
            // No values specified 
            return NullProviderCultureResult;
        }

        if (queryCulture != null && queryUICulture == null)
        {
            // Value for culture but not for UI culture so default to culture value for both
            queryUICulture = queryCulture;
        }
        else if (queryCulture == null && queryUICulture != null)
        {
            // Value for UI culture but not for culture so default to UI culture value for both
            queryCulture = queryUICulture;
        }

        var providerResultCulture = new ProviderCultureResult(queryCulture, queryUICulture);

        return Task.FromResult<ProviderCultureResult?>(providerResultCulture);
    }
}

This RouteValueRequestCultureProvider reads the culture and the ui-culture out of the route values and returns a ProviderCultureResult that will be used by the Localizers.
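To actually use it, the custom provider needs to be registered with the request localization options, for example at the top of the provider list (a sketch based on the configuration shown earlier):

app.UseRequestLocalization(options =>
{
    // supported cultures and default culture as shown above ...
    options.RequestCultureProviders.Insert(0, new RouteValueRequestCultureProvider());
});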

The route engine handles the generation of the route URLs for us if we use the MVC mechanisms to create links and tags. We'll now have the selected language and region everywhere in the routes.

To create a language changer, we just need to change the culture in the route value like this:

<ul class="navbar-nav flex-grow-1 justify-content-end">
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" 
           asp-controller="@Context.GetRouteValue("Controller")" 
           asp-action="@Context.GetRouteValue("Action")" 
           asp-route-culture="en-US">EN</a>
    </li>
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" 
           asp-controller="@Context.GetRouteValue("Controller")" 
           asp-action="@Context.GetRouteValue("Action")" 
           asp-route-culture="de-DE">DE</a>
    </li>
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" 
           asp-controller="@Context.GetRouteValue("Controller")" 
           asp-action="@Context.GetRouteValue("Action")" 
           asp-route-culture="fr-FR">FR</a>
    </li>
</ul>

Changing the culture and the UI culture also changes the way dates, numbers, and currencies are displayed. This means the language changer also changes the region and will, for example, display the currency in Euro if you switch to a region that uses the Euro as its local currency. You need to keep this in mind when working with financial data, because just changing the currency symbol doesn't make sense if you don't convert the actual amounts to the local currency as well. If you don't want to change the currency, you should hard-code the way currency is formatted and displayed. On the other hand, fixing the culture to one region and only changing the UI culture would also fix the number and date formats, which is not what we want either.

This is the start page of the sample project in French.

French localized UI

(I apologize for any wrong translations. Unfortunately, it is more than 25 years since I learned French in school.)

Sample application and Conclusion

This actually works, and I created a small application to demonstrate it. The sample includes all the topics of this post. You will find the sample project on GitHub.

Microsoft reduced the complexity a lot. On the other hand, if you were used to working with more complex resource handling in the past, you will, like me, stumble upon small things you don't expect. However, adding Globalization and Localization in .NET 7 is easy, and I like the way it works.

Holger Schwichtenberg: Renamed again: 18 months is now the "Standard Support" period for .NET

Microsoft is dropping the "Short-term Support" label it had introduced for .NET in the meantime and now speaks of "Standard Support".

Code-Inside Blog: Azure DevOps & Azure Service Connection

Today I needed to set up a new release pipeline on our Azure DevOps Server installation to deploy some stuff automatically to Azure. The UI (at least on Azure DevOps Server 2020 (!)) is not really clear about how to connect those two worlds, and that's why I'm writing this short blogpost.

First - under project settings - add a new service connection and use the Azure Resource Manager service type. You will then be asked for a subscription id, a service principal id/key, and a tenant id.

Be aware: You will need to register an app inside your Azure AD and need permissions to do so. If you are not able to follow these instructions, you might need to talk to your Azure subscription owner.

Subscription id:

Copy the id of your subscription here. It can be found in the subscription details in the Azure Portal.

Keep this tab open, because we need it later!

Service principal id/key & tenant id:

Now, this wording about “Service principal” is technically correct, but really confusing if you are not familiar with Azure AD. A “Service principal” is like a “service user”/”app” that you need to register before you can use it. The easiest route is to create such an app via the Azure CLI in Bash:

az ad sp create-for-rbac --name DevOpsPipeline

If this command succeeds you should see something like this:

{
  "appId": "[...GUID..]",
  "displayName": "DevOpsPipeline",
  "password": "[...PASSWORD...]",
  "tenant": "[...Tenant GUID...]"
}

This creates a “Service principal” with a random password inside your Azure AD. The next step is to give this “Service principal” a role on your subscription, because it currently has no permissions to do anything (e.g. deploy a service etc.).

Go to the subscription details page and then to Access control (IAM). There you can add your “DevOpsPipeline” app as “Contributor” (be aware that this is a powerful role!).
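If you prefer the CLI over the portal, the role assignment can also be done with the Azure CLI (a sketch; replace the placeholders with the appId from above and your subscription id):

az role assignment create --assignee "[...appId GUID..]" --role "Contributor" --scope "/subscriptions/[...Subscription GUID..]"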

After that, use the "appId": "[...GUID..]" from the command output as the Service Principal Id, the "password": "[...PASSWORD...]" as the Service principal key, and the "tenant": "[...Tenant GUID...]" as the Tenant id.

Now you should be able to “Verify” this connection and it should work.

Links: This blogpost helped me a lot. Here you can find the official documentation.

Hope this helps!

Jürgen Gutsch: ASP.NET Core 7 updates

Release candidate 1 of ASP.NET Core 7 has been out for around two weeks, and the release date isn't far away. The beginning of November is usually the time when Microsoft releases the new version of .NET. Please find the announcement post here: ASP.NET Core updates in .NET 7 Release Candidate 1. I will not repeat this post but pick some personal highlights to write about.

ASP.NET Core Roadmap for .NET 7

First of all, a look at the ASP.NET Core roadmap for .NET 7 shows us that there are only a few issues still open and planned for the upcoming release. That means the release is almost complete and mostly only bugfixes will be pushed to it. Many other open issues are already struck through and probably assigned to a later release. I guess we'll have a published roadmap for ASP.NET Core on .NET 8 soon, at the latest at the beginning of November.

What are the updates of this RC 1?

A lot of Blazor

This release, too, is full of Blazor improvements. Those working a lot with Blazor will be happy about improved JavaScript interop, debugging improvements, handling location-changing events, and dynamic authentication requests coming with this release.

However, there are some quite interesting improvements within this release that might be great for almost every ASP.NET Core developer:

Faster HTTP/2 uploads and HTTP3 performance improvements

The team increased the default upload connection window size for HTTP/2, resulting in a much faster upload time. Stream handling is always tricky and needs a lot of fine-tuning to find the right balance; improving the upload speed by more than five times is awesome and really helpful when uploading bigger files. HTTP/3 performance was improved as well by reducing HTTP/3 allocations, and HTTP/3 gains feature parity with HTTP/1 and HTTP/2, such as Server Name Indication (SNI) when configuring connection certificates.

Rate limiting middleware improvements

The rate-limiting middleware got some small improvements to make it easier and more flexible to configure. You can now add attributes to controller actions to enable or disable rate limiting on specific endpoints. To do the same on Minimal API endpoints and endpoint groups you can use methods to enable or disable rate limiting. This way you can enable rate-limiting for an endpoint group, but disable it for a specific one inside this group.

You can specify the rate-limiting policy on the attributes as well as on the endpoint and endpoint-group methods. While the attributes only support named policies, the Minimal API methods can also take an instance of a policy. A small sketch follows below.
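A rough sketch of what this looks like in code (policy name and limits are made up; the attribute equivalent for controller actions is [EnableRateLimiting("fixed")]):

using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRateLimiter(options =>
    options.AddFixedWindowLimiter("fixed", limiter =>
    {
        limiter.PermitLimit = 10;
        limiter.Window = TimeSpan.FromSeconds(10);
    }));

var app = builder.Build();
app.UseRateLimiter();

// Enable the named policy for a whole endpoint group ...
var api = app.MapGroup("/api").RequireRateLimiting("fixed");
// ... but opt a single endpoint out again.
api.MapGet("/health", () => "OK").DisableRateLimiting();

app.Run();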

Experimental stuff added to this release

WebTransport is a new draft specification for HTTP/3 that works similarly to WebSockets but supports multiple streams per connection. Support for WebTransport has now been added as an experimental feature in RC 1.

One of the new features in .NET 7 is gRPC JSON transcoding to turn gRPC APIs into RESTful APIs. Any RESTful API should have OpenAPI documentation, and so should gRPC JSON transcoding. This release now contains experimental support for adding Swashbuckle Swagger to gRPC to render OpenAPI documentation.

Conclusion

ASP.NET Core on .NET 7 seems to be complete now, and I'm really looking forward to the .NET Conf 2022 at the beginning of November, which will be the launch event for .NET 7.

And exactly this reminds me to start thinking about the next edition of my book "Customizing ASP.NET Core" which needs to be updated to .NET 8 and enhanced by probably three more chapters next year.

Stefan Henneken: IEC 61131-3: SOLID – The Interface Segregation Principle

The basic idea of the Interface Segregation Principle (ISP) is very similar to the Single Responsibility Principle (SRP): modules with too many responsibilities can negatively affect the maintenance and maintainability of a software system. The Interface Segregation Principle (ISP) puts the focus on the module's interface. A module should only implement the interfaces that are needed for its task. The following shows how this design principle can be implemented.

Starting situation

In the last post (IEC 61131-3: SOLID – The Liskov Substitution Principle), the example was extended by a further lamp type (FB_LampSetDirectDALI). The special feature of this lamp type is the scaling of the output value. While the other lamp types output 0-100 %, the new lamp type outputs a value from 0 to 254.

Like all other lamp types, the new lamp type (the DALI lamp) also has an adapter (FB_LampSetDirectDALIAdapter). The adapters were introduced when implementing the Single Responsibility Principle (SRP) and ensure that the function blocks of the individual lamp types are responsible for only a single concern (see IEC 61131-3: SOLID – The Single Responsibility Principle).

The sample program was last adapted so that the output value of the new lamp type (FB_LampSetDirectDALI) is scaled inside the adapter from 0-254 to 0-100 %. As a result, the DALI lamp behaves exactly like the other lamp types without violating the Liskov Substitution Principle (LSP).

This sample program serves as the starting point for explaining the Interface Segregation Principle (ISP).

Extension of the implementation

This time, too, the application is to be extended. However, instead of defining a new lamp type, an existing lamp type is extended by a new feature. The DALI lamp should be able to count its operating hours. To do this, the function block FB_LampSetDirectDALI is extended by the property nOperatingTime.

PROPERTY PUBLIC nOperatingTime : DINT

The setter can be used to set the operating hours counter to any value, while the getter returns the current state of the operating hours counter.

Since FB_Controller represents the individual lamp types, this function block is also extended by nOperatingTime.

The operating hours are recorded in the function block FB_LampSetDirectDALI. If the output value is > 0, the operating hours counter is incremented by 1 every second:

IF (nLightLevel > 0) THEN
  tonDelay(IN := TRUE, PT := T#1S);
  IF (tonDelay.Q) THEN
    tonDelay(IN := FALSE);
    _nOperatingTime := _nOperatingTime + 1;
  END_IF
ELSE
  tonDelay(IN := FALSE);
END_IF

The variable _nOperatingTime is the backing variable for the new property nOperatingTime and is declared in the function block.

What options are there for transferring the value of nOperatingTime from FB_LampSetDirectDALI to the property nOperatingTime of FB_Controller? Here, too, there are several approaches to integrating the required extension into the given software structure.

Approach 1: Extending I_Lamp

The property for the new feature is integrated into the interface I_Lamp. As a result, the abstract function block FB_Lamp also receives the property nOperatingTime. Since all adapters inherit from FB_Lamp, the adapters of all lamp types receive this property, regardless of whether the lamp type supports an operating hours counter or not.

The getter and setter of nOperatingTime in FB_Controller can thus directly access nOperatingTime of the individual lamp type adapters. The getter of FB_Lamp (the abstract function block from which all adapters inherit) returns the value -1, which makes it possible to detect that no operating hours counter is available.

IF (fbController.nOperatingTime >= 0) THEN
  nOperatingTime := fbController.nOperatingTime;
ELSE
  // service not supported
END_IF

Since FB_LampSetDirectDALI supports the operating hours counter, the adapter (FB_LampSetDirectDALIAdapter) overrides the property nOperatingTime. The getter and setter of the adapter access nOperatingTime of FB_LampSetDirectDALI. The value of the operating hours counter is thus passed all the way up to FB_Controller.

(abstract elements are displayed in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub

This approach implements the feature as desired, and none of the SOLID principles shown so far are violated.

However, the central interface I_Lamp is extended just to add one more feature to a single lamp type. All other lamp type adapters, including those that do not support the new feature, also receive the property nOperatingTime via the abstract base FB FB_Lamp.

With each feature added this way, the interface I_Lamp grows, and with it the abstract base FB FB_Lamp.

Approach 2: An additional interface

In this approach, the interface I_Lamp is not extended; instead, a new interface (I_OperatingTime) is added for the desired functionality. I_OperatingTime contains only the property that is necessary to provide the operating hours counter:

PROPERTY PUBLIC nOperatingTime : DINT

This interface is implemented by the adapter FB_LampSetDirectDALIAdapter.

FUNCTION_BLOCK PUBLIC FB_LampSetDirectDALIAdapter EXTENDS FB_Lamp IMPLEMENTS I_OperatingTime

Thus, FB_LampSetDirectDALIAdapter receives the property nOperatingTime not via FB_Lamp or I_Lamp, but via the new interface I_OperatingTime.

When FB_Controller accesses the active lamp type in the getter of nOperatingTime, it first checks whether the selected lamp type implements the interface I_OperatingTime. If this is the case, the property is accessed via I_OperatingTime. If the lamp type does not implement the interface, -1 is returned.

VAR
  ipOperatingTime  : I_OperatingTime;
END_VAR
IF (__ISVALIDREF(_refActiveLamp)) THEN
  IF (__QUERYINTERFACE(_refActiveLamp, ipOperatingTime)) THEN
    nOperatingTime := ipOperatingTime.nOperatingTime;
  ELSE
    nOperatingTime := -1; // service not supported
  END_IF
END_IF

The setter of nOperatingTime is structured similarly. After successfully checking whether I_OperatingTime is implemented by the active lamp, the property is accessed via the interface.

VAR
  ipOperatingTime  : I_OperatingTime;
END_VAR
IF (__ISVALIDREF(_refActiveLamp)) THEN
  IF (__QUERYINTERFACE(_refActiveLamp, ipOperatingTime)) THEN
    ipOperatingTime.nOperatingTime := nOperatingTime;
  END_IF
END_IF
(abstract elements are displayed in italics)

Sample 2 (TwinCAT 3.1.4024) on GitHub

Analysis of the optimization

Using a separate interface for the additional feature corresponds to the 'optionality' approach from IEC 61131-3: SOLID – The Liskov Substitution Principle. In the example above, it can be checked at program runtime (with __QUERYINTERFACE()) whether a specific interface is implemented and thus whether the respective feature is supported. Additional properties such as bIsDALIDevice from the 'optionality' example are not necessary with this approach.

If a separate interface is provided per feature or functionality, other lamp types can implement it as well in order to realize the desired feature. If FB_LampSetDirect is also to get an operating hours counter, FB_LampSetDirect must be extended by the property nOperatingTime. In addition, FB_LampSetDirectAdapter must implement the interface I_OperatingTime. All other function blocks, including FB_Controller, remain unchanged.

If the way the operating hours counters work changes and I_OperatingTime receives additional methods, only the function blocks that actually support the feature need to be adapted.

Examples of the Interface Segregation Principle (ISP) can also be found in .NET. For instance, .NET has the interface IList. This interface contains methods and properties for creating, modifying, and reading collections. Depending on the use case, however, it may be sufficient for the consumer to only read a collection. Passing a collection as IList would in this case also offer methods for modifying it. For these use cases, there is the interface IReadOnlyList: with this interface, a collection can only be read, so accidental modification of the data is not possible.
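A small C# sketch to illustrate the point (method and names are made up):

using System;
using System.Collections.Generic;

public static class Statistics
{
    // The parameter type IReadOnlyList<T> makes it explicit that the method
    // only reads the collection; the caller does not have to fear modifications.
    public static double Average(IReadOnlyList<double> values)
    {
        if (values.Count == 0) throw new ArgumentException("empty list", nameof(values));

        double sum = 0;
        foreach (var value in values)
            sum += value;
        return sum / values.Count;
    }
}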

Splitting responsibilities into separate interfaces thus increases not only the maintainability but also the safety of a software system.

The definition of the Interface Segregation Principle

This brings us to the definition of the Interface Segregation Principle (ISP):

A module that uses an interface should only be presented with those methods that it actually needs.

Or, put slightly differently:

Clients should not be forced to depend on methods they do not use.

A common argument against the Interface Segregation Principle (ISP) is the increased number of interfaces. A software design can still be adapted at any time during its development cycles. So if you have the feeling that an interface contains too many functionalities, check whether it can be split. Of course, over-engineering should always be avoided; a certain amount of experience is helpful here.

Abstract function blocks also represent an interface (see FB_Lamp). An abstract function block can contain basic functions that the user only supplements with the necessary details; it is not necessary to implement all methods or properties yourself. But here, too, it is important not to burden the user with concerns that are not necessary for their tasks. The set of abstract methods and properties should be as small as possible.

Observing the Interface Segregation Principle (ISP) keeps the interfaces between function blocks as small as possible, which reduces the coupling between the individual function blocks.

Summary

If a software system is to cover additional features, reflect on the new requirements and do not hastily extend existing interfaces. Check whether separate interfaces are not the better decision. As a reward, you get a software system that is easier to maintain, better to test, and simpler to extend.

In the last remaining part, the Open/Closed Principle (OCP) will be explained in more detail.

Stefan Henneken: IEC 61131-3: SOLID – The Liskov Substitution Principle

„The Liskov Substitution Principle (LSP) requires that derived function blocks (FBs) are always compatible to their base FB. Derived FBs must behave like their respective base FB. A derived FB may extend the base FB, but not restrict it.” This is the core statement of the Liskov Substitution Principle (LSP), which Barbara Liskov formulated already in the late 1980s. Although the Liskov Substitution Principle (LSP) is one of the simpler SOLID principles, its violation is very common. The following example shows why the Liskov Substitution Principle (LSP) is important.

Starting situation

I use once again the example, which was already developed and optimized in the two previous posts. The core of the example are three lamp types, which are mapped by the function blocks FB_LampOnOff, FB_LampSetDirect and FB_LampUpDown. The interface I_Lamp and the abstract function block FB_Lamp secure a clear decoupling between the respective lamp types and the higher-level controller FB_Controller.

FB_Controller no longer accesses specific instances, but only a reference of the abstract function block FB_Lamp. The IEC 61131-3: SOLID – The Dependency Inversion Principle is used to break the fixed coupling.

To realize the required functionality, each lamp type provides its own methods. For this reason, each lamp type also has a corresponding adapter function block (FB_LampOnOffAdapter, FB_LampSetDirectAdapter and FB_LampUpDownAdapter), which is responsible for mapping between the abstract lamp (FB_Lamp) and the concrete lamp types (FB_LampOnOff, FB_LampSetDirect and FB_LampUpDown). This optimization is supported by the IEC 61131-3: SOLID – The Single Responsibility Principle.

Extension of the implementation

The three required lamp types can be mapped well by the existing software design. Nevertheless, it can happen that extensions, which seem simple at first sight, lead to difficulties later. The new lamp type FB_LampSetDirectDALI will serve as an example here.

DALI stands for Digital Addressable Lighting Interface and is a protocol for controlling lighting devices. Basically, the new function block behaves like FB_LampSetDirect, but with DALI the output value is not given in 0-100 % but in 0-254.

Optimization and analysis of the extensions

Which approaches are available to implement this extension? The different approaches will also be analyzed in more detail.

Approach 1: Quick & Dirty

High time pressure can tempt to realize the Quick & Dirty implementation. Since FB_LampSetDirect behaves similarly to the new DALI lamp type, FB_LampSetDirectDALI inherits from FB_LampSetDirect. To enable the value range of 0-254, the SetLightLevel() method of FB_LampSetDirectDALI is overwritten.

METHOD PUBLIC SetLightLevel
VAR_INPUT
  nNewLightLevel : BYTE(0..254);
END_VAR
nLightLevel := nNewLightLevel;

The new adapter function block (FB_LampSetDirectDALIAdapter) is also adapted so that the methods regard the value range 0-254.

As an example, the methods DimUp() and On() are shown here:

METHOD PUBLIC DimUp
IF (fbLampSetDirectDALI.nLightLevel <= 249) THEN
  fbLampSetDirectDALI.SetLightLevel(fbLampSetDirectDALI.nLightLevel + 5);
END_IF
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(fbLampSetDirectDALI.nLightLevel);
END_IF
METHOD PUBLIC On
fbLampSetDirectDALI.SetLightLevel(254);
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(fbLampSetDirectDALI.nLightLevel);
END_IF

The simplified UML diagram shows the integration of the function blocks for the DALI lamp into the existing software design:

(abstract elements are displayed in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub

This approach implements the requirements quickly and easily through a pragmatic strategy. But this also added some specifics that complicate the use of the blocks in an application.

For example, how should a user interface behave when it connects to an instance of FB_Controller and FB_AnalogValue outputs a value of 100? Does 100 mean that the current lamp is at 100 % or does the new DALI lamp output a value of 100, which would be well below 100 %?

The user of FB_Controller must always know the active lamp type in order to interpret the current output value correctly. FB_LampSetDirectDALI inherits from FB_LampSetDirect, but changes its behavior. In this example, the behavior is changed by overwriting the SetLightLevel() method. The derived FB (FB_LampSetDirectDALI) behaves differently to the base FB (FB_LampSetDirect). FB_LampSetDirect can no longer be replaced (substituted) by FB_LampSetDirectDALI. The Liskov Substitution Principle (LSP) is violated.

Approach 2: Optionality

In this approach, each lamp type contains a property that returns information about the exact function of the function block.

In .NET, for example, this approach is used in the abstract class System.IO.Stream. The Stream class serves as the base class for specialized streams (e.g., FileStream and NetworkStream) and specifies the most important methods and properties. This includes the methods Write(), Read() and Seek(). Since not every stream can provide all functions, the properties CanRead, CanWrite and CanSeek provide information about whether the corresponding method is supported by the respective stream. For example, NetworkStream can check at runtime whether writing to the stream is possible or whether it is a read-only stream.

In our example, I_Lamp is extended by the property bIsDALIDevice.

This means that FB_Lamp and therefore every adapter function block also receives this property. Since the functionality of bIsDALIDevice is the same in all adapter function blocks, bIsDALIDevice is not declared as abstract in FB_Lamp. This means that it is not necessary for all adapter function blocks to implement this property themselves. The functionality of bIsDALIDevice is inherited by FB_Lamp to all adapter function blocks.

For FB_LampSetDirectDALIAdapter, the backing variable of the property bIsDALIDevice is set to TRUE in the method FB_init().

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains  : BOOL;
  bInCopyCode   : BOOL;
END_VAR
SUPER^._bIsDALIDevice := TRUE;

For all other adapter function blocks, _bIsDALIDevice retains its initialization value (FALSE). The use of the FB_init() method is not necessary for these adapter function blocks.

The user of FB_Controller (MAIN block) can now query at program runtime whether the current lamp is a DALI lamp or not. If this is the case, the output value is scaled accordingly to 0-100 %.

IF (__ISVALIDREF(fbController.refActiveLamp) AND_THEN fbController.refActiveLamp.bIsDALIDevice) THEN
  nLightLevel := TO_BYTE(fbController.fbActualValue.nValue * 100.0 / 254.0);
ELSE
  nLightLevel := fbController.fbActualValue.nValue;
END_IF

Note: It is important to use the AND_THEN operator instead of AND. This way, the expression to the right of AND_THEN is only evaluated if the first operand (to the left of AND_THEN) is TRUE. This is important here because otherwise the expression fbController.refActiveLamp.bIsDALIDevice would abort the execution of the program in case of an invalid reference to the active lamp (refActiveLamp).

The UML diagram shows how FB_Lamp receives the property bIsDALIDevice via the interface I_Lamp and is thus inherited by all adapter function blocks:

(abstract elements are displayed in italics)

Sample 2 (TwinCAT 3.1.4024) on GitHub

This approach still violates the Liskov Substitution Principle (LSP). FB_LampSetDirectDALI still behaves differently from FB_LampSetDirect. The user has to take this difference into account (querying bIsDALIDevice) and correct it (scaling to 0-100 %). This is easy to overlook or to implement incorrectly.

Approach 3: Harmonization

In order not to violate the Liskov Substitution Principle (LSP) any further, the inheritance between FB_LampSetDirect and FB_LampSetDirectDALI is resolved. Even if both function blocks appear very similar at first glance, inheritance should be avoided at this point.

The adapter function blocks ensure that all lamp types can be controlled using the same methods. However, there are still differences in the representation of the output value.

In FB_Controller, the output value of the active lamp is represented by an instance of FB_AnalogValue. A new output value is transmitted by the Update() method. To ensure that the output value is also displayed uniformly, it is scaled to 0-100 % before the Update() method is called. The necessary adjustments are made exclusively in the methods DimDown(), DimUp(), Off() and On() of FB_LampSetDirectDALIAdapter.

The On() method is shown here as an example:

METHOD PUBLIC On
fbLampSetDirectDALI.SetLightLevel(254);
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(TO_BYTE(fbLampSetDirectDALI.nLightLevel * 100.0 / 254.0));
END_IF

The adapter function block contains all the necessary instructions, which makes the DALI lamp behave to the outside world as expected. FB_LampSetDirectDALI remains unchanged with this solution approach.

(abstract elements are displayed in italics)

Sample 3 (TwinCAT 3.1.4024) on GitHub

Optimization analysis

Through various techniques, it is possible to implement the desired extension without violating the Liskov Substitution Principle (LSP). Inheritance is a precondition for violating the LSP. If the LSP is violated, this may be an indication of a bad inheritance hierarchy within the software design.

Why is it important to follow the Liskov Substitution Principle (LSP)? Function blocks can also be passed as parameters. If a POU expects a parameter of the type FB_LampSetDirect, then FB_LampSetDirectDALI could also be passed when using inheritance. However, the behavior of the SetLightLevel() method is different for the two function blocks. Such differences can lead to undesirable behavior within a system.

The definition of the Liskov Substitution Principle

Let q(x) be a property provable about objects x of type T. Then q(y) should be true for objects y of type S where S is a subtype of T.

This is the more formal definition of the Liskov Substitution Principle (LSP) by Barbara Liskov. As mentioned above, this principle was already defined at the end of the 1980s. The complete elaboration was published under the title Data Abstraction and Hierarchy.

Barbara Liskov was one of the first women to earn a doctorate in computer science in 1968. In 2008, she was also one of the first women to receive the Turing Award. Early on, she became involved with object-oriented programming and thus also with the inheritance of classes (function blocks).

Inheritance places two function blocks in a specific relationship to each other. Inheritance here describes an is-a relationship. If FB_LampSetDirectDALI inherits from FB_LampSetDirect, the DALI lamp is a (normal) lamp extended by special (additional) functions. Wherever FB_LampSetDirect is used, FB_LampSetDirectDALI could also be used. FB_LampSetDirect can be substituted by FB_LampSetDirectDALI. If this is not ensured, the inheritance should be questioned at this point.

Robert C. Martin has included this principle in the SOLID principles. In the book (Amazon advertising link *) Clean Architecture: A Craftsman’s Guide to Software Structure and Design, this principle is explained further and extended to the field of software architecture.

Summary

By extending the above example, you have learned about the Liskov Substitution Principle (LSP). Complex inheritance hierarchies in particular are prone to violating this principle. Although the formal definition of the Liskov Substitution Principle (LSP) sounds complicated, the key message of this principle is simple to understand.

In the next post, our example will be extended again. The Interface Segregation Principle (ISP) will play a central role in it.

Norbert Eder: Signing Git commits – No secret key

By signing a commit, you personally sign off on it and confirm that the submitted code comes from you. Only someone who has the private key can do that, and as a rule that is exclusively you. Someone could still create and push a commit with your name and email address and pretend to be you (given access to the repository), but they could not sign it with your signature.

After configuring this on Windows, however, this error often occurs:

gpg: signing failed: No secret key

In this case, you are probably just missing one piece of Git configuration: Git cannot find the GPG application.

git config --global gpg.program [GPG-Pfad]

Simply replace [GPG-Pfad] with the direct path to gpg.exe and signing will work immediately.

How to sign Git commits?

In case the general question comes up of how to sign Git commits at all, a few short words on that. If you use GPG, for example, you can create a new key pair (if you don't have one yet). With

gpg --list-keys

you can list your key pairs. From the desired key pair, copy the key id and store it in your Git configuration as the default signing key. To do so, simply replace [KEYID] below with the key id.

git config --global user.signingkey [KEYID]

On Windows, also set the path to gpg.exe as shown above, and commits can then be signed with the additional -S option of the git commit command. Pay close attention to the capitalization. Example:

git commit -S -am "Test commit"

It's that easy to add your personal signature to a commit. I personally recommend this approach.
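If you don't want to pass -S every time, you can also tell Git to sign all commits by default:

git config --global commit.gpgsign true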

The post Git Commit signieren – No secret key appeared first on Norbert Eder.

Norbert Eder: Keeping dependencies and vulnerabilities under control

The bigger development projects get, the more dependencies they have. Keeping track of all dependencies is a challenge in itself, not to mention the all-too-common practice of adding dependencies indiscriminately without checking licenses, vulnerabilities, etc. beforehand. But how do you get all of these topics under control?

Checking for vulnerabilities

Most package managers now offer corresponding features. In the .NET world, for example, dotnet list package --vulnerable lists all vulnerable packages. With npm audit, a similar list can be produced for NPM.

What these variants have in common, however, is that they only reflect the status at the time of the call. Nothing more and nothing less. And you may want a bit more:

  • tracking of dependencies across versions of your own software
  • an overview of all licenses of the dependencies
  • a list and risk assessment of all vulnerabilities per version of your own software
  • the ability to audit vulnerabilities and document decisions
  • automatic updates/evaluation through integration into the build system

Dependency-Track from OWASP

Many people may already know the Open Web Application Security Project (OWASP for short), as the foundation regularly publishes the Top 10 Web Application Security Risks. In web development, these should definitely be kept in mind alongside the Secure Coding Practices [PDF] and the Web Security Testing Guide.

With Dependency-Track, OWASP provides a tool into which lists of dependencies can be imported via a CycloneDX BOM (Bill of Materials) and checked against vulnerability databases. VulnDB, GitHub Advisories, and numerous other sources are available for this.

Numerous tools for different development platforms are available for generating the required BOM, so integrating this into the build environment is straightforward; a sketch for .NET follows below.
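For .NET, for example, there is a CycloneDX global tool; roughly like this (the exact parameters are an assumption, check the tool's documentation):

dotnet tool install --global CycloneDX
dotnet-CycloneDX MySolution.sln -o ./bom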

Installing Dependency-Track is as simple as it gets, since it is delivered as a Docker container, among other options.

Below are some screenshots from the vendor.

Dependency-Track: overview of the components
Dependency-Track: audit of the vulnerabilities found

In addition, a clear dashboard provides an overview of the entire software infrastructure and an assessment of the current risk.

Dependency-Track: Dashboard

Dependency-Track is available on GitHub and enriches the development environment free of charge.

Active dependency and vulnerability management is required

Merely using this tool does not improve the situation. Rather, there must be a clearly responsible person who, on the one hand, manages the dependencies (limiting uncontrolled growth, keeping an overview, licenses) and, on the other hand, audits the risks found and initiates their remediation (updating the dependency, replacing it, etc.).

The more central this topic is in the development process, the better and faster you can react to vulnerabilities.

The post Abhängigkeiten und Vulnerabilities im Griff appeared first on Norbert Eder.

Holger Schwichtenberg: Is a big naming chaos looming for the OR mapper Entity Framework Core?

With or without "Core" in the name? That is the question that arises when looking at the history of Entity Framework (Core) 7.

Code-Inside Blog: 'error MSB8011: Failed to register output.' & UTF8-BOM files

Be aware: I’m not a C++ developer and this might be an “obvious” problem, but it took me a while to resolve this issue.

In our product we have very few C++ projects. We use these projects for some very special Microsoft Office COM stuff, and because of COM we need to register some components during the build. Everything worked as expected, but after we renamed a few files, our build broke with:

C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(2302,5): warning MSB3075: The command "regsvr32 /s "C:/BuildAgentV3_1/_work/67/s\_Artifacts\_ReleaseParts\XXX.Client.Addin.x64-Shims\Common\XXX.Common.Shim.dll"" exited with code 5. Please verify that you have sufficient rights to run this command. [C:\BuildAgentV3_1\_work\67\s\XXX.Common.Shim\XXX.Common.Shim.vcxproj]
C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(2314,5): error MSB8011: Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions. [C:\BuildAgentV3_1\_work\67\s\XXX.Common.Shim\XXX.Common.Shim.vcxproj]

(xxx = redacted)

The crazy part was: Using an older version of our project just worked as expected, but all changes were “fine” from my point of view.

After many, many attempts I remembered that our diff tool doesn’t show us everything - so I checked the file encodings: UTF8-BOM

Somehow, if a UTF8-BOM encoded file is used by your C++ project to register COM stuff, the registration will fail. I changed the encoding to UTF8 and everything worked as expected.

What a day… lessons learned: Be aware of your file encodings.

Hope this helps!

Code-Inside Blog: Which .NET Framework Version is installed on my machine?

If you need to know which .NET Framework version (the “legacy” .NET Framework) is installed on your machine, try this handy one-liner:

Get-ItemProperty "HKLM:SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"

Result:

CBS           : 1
Install       : 1
InstallPath   : C:\Windows\Microsoft.NET\Framework64\v4.0.30319\
Release       : 528372
Servicing     : 0
TargetVersion : 4.0.0
Version       : 4.8.04084
PSPath        : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework
                Setup\NDP\v4\Full
PSParentPath  : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4
PSChildName   : Full
PSDrive       : HKLM
PSProvider    : Microsoft.PowerShell.Core\Registry

The version should give you more than enough information.
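If you only care about the raw number, the Release value maps to a specific .NET Framework version; 528040 or higher, for example, indicates .NET Framework 4.8 or later. A quick check could look like this:

$release = (Get-ItemProperty "HKLM:SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full").Release
if ($release -ge 528040) { ".NET Framework 4.8 or later is installed" }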

Hope this helps!

Christian Dennig [MS]: ASP.NET Custom Metrics with OpenTelemetry Collector & Prometheus/Grafana

Every now and then, I am asked which tracing, logging, or monitoring solution you should use in a modern application, as the number of options seems to grow every month. To be as flexible as possible and to rely on open standards, a closer look at OpenTelemetry is recommended. It is becoming more and more popular because it offers a vendor-agnostic way to work with telemetry data in your services and send it to the backend(s) of your choice (Prometheus, Jaeger, etc.). Let’s have a look at how you can use OpenTelemetry custom metrics in an ASP.NET service in combination with probably the most popular monitoring stack in the cloud-native space: Prometheus/Grafana.

TL;DR

You can find the demo project on GitHub. It uses a local Kubernetes cluster (kind) to setup the environment and deploys a demo application that generates some sample metrics. Those metrics are sent to an OTEL collector which serves as a Prometheus metrics endpoint. In the end, the metrics are scraped by Prometheus and displayed in a Grafana dashboard/chart.

Demo Setup

OpenTelemetry – What is it and why should you care?

OpenTelemetry

OpenTelemetry (OTEL) is an open-source CNCF project that aims to provide a vendor-agnostic solution for generating, collecting and handling telemetry data of your infrastructure and services. It is able to receive, process, and export traces, logs, and metrics to different backends like Prometheus, Jaeger or other commercial SaaS offerings without the need for your application to have a dependency on those solutions. While OTEL itself doesn’t provide a backend or even analytics capabilities, it serves as the “central monitoring component” and knows how to send the received data to different backends by using so-called “exporters”.

So why should you even care? In today’s world of distributed systems and microservices architectures where developers can release software and services faster and more independently, observability becomes one of the most important features in your environment. Visibility into systems is crucial for the success of your application as it helps you in scaling components, finding bugs and misconfigurations etc.

If you haven’t decided what monitoring or tracing solution you are going to use for your next application, have a look at OpenTelemetry. It gives you the freedom to try out different monitoring solutions or even replace your preferred one later in production.

OpenTelemetry Components

OpenTelemetry currently consists of several components like the cross-language specification (APIs/SDKs and the OpenTelemetry Protocol OTLP) for instrumentation and tools to receive, process/transform and export telemetry data. The SDKs are available in several popular languages like Java, C++, C#, Go etc. You can find the complete list of supported languages here.

Additionally, there is a component called the “OpenTelemetry Collector” which is a vendor-agnostic proxy that receives telemetry data from different sources and can transform that data before sending it to the desired backend solution.

Let’s have a closer look at the components of the collector…receivers, processors and exporters:

  • Receivers – A receiver in OpenTelemetry is the component that is responsible for getting data into a collector. It can be used in a push- or pull-based approach and can support the OTLP protocol or even scrape a Prometheus /metrics endpoint.
  • Processor – Processors are components that let you batch-process, sample, transform and/or enrich your telemetry data that is being received by the collector before handing it over to an exporter. You can add or remove attributes, like for example “personally identifiable information” (PII) or filter data based on regular expressions. A processor is an optional component in a collector pipeline.
  • Exporter – An exporter is responsible for sending data to a backend solution like Prometheus, Azure Monitor, DataDog, Splunk etc.

In the end, it comes down to configuring the collector service with receivers, (optionally) processors and exporters to form a fully functional collector pipeline – official documentation can be found here. The configuration for the demo here is as follows:

receivers:
  otlp:
    protocols:
      http:
      grpc:
processors:
  batch:
exporters:
  logging:
    loglevel: debug
  prometheus:
    endpoint: "0.0.0.0:8889"
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, prometheus]

The configuration consists of:

  • one OpenTelemetry Protocol (OTLP) receiver, enabled for http and gRPC communication
  • one processor that is batching the telemetry data with default values (like e.g. a timeout of 200ms)
  • two exporters piping the data to the console (logging) and exposing a Prometheus /metrics endpoint on 0.0.0.0:8889 (remote-write is also possible)

ASP.NET OpenTelemetry

To demonstrate how to send custom metrics from an ASP.NET application to Prometheus via OpenTelemetry, we first need a service that is exposing those metrics. In this demo, we simply create two custom metrics called otel.demo.metric.gauge1 and otel.demo.metric.gauge2 that will be sent to the console (AddConsoleExporter()) and via the OTLP protocol to a collector service (AddOtlpExporter()) that we’ll introduce later on. The application uses the ASP.NET Minimal API and the code is more or less self-explanatory:

using System.Diagnostics.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Metrics;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetryMetrics(metricsProvider =>
{
    metricsProvider
        .AddConsoleExporter()
        .AddOtlpExporter()
        .AddMeter("otel.demo.metric")
        .SetResourceBuilder(ResourceBuilder.CreateDefault()
            .AddService(serviceName: "otel.demo", serviceVersion: "0.0.1")
        );
});

var app = builder.Build();
var otel_metric = new Meter("otel.demo.metric", "0.0.1");
var randomNum = new Random();
// Create two metrics
var obs_gauge1 = otel_metric.CreateObservableGauge<int>("otel.demo.metric.gauge1", () =>
{
    return randomNum.Next(10, 80);
});
var obs_gauge2 = otel_metric.CreateObservableGauge<double>("otel.demo.metric.gauge2", () =>
{
    return randomNum.NextDouble();
});

app.MapGet("/otelmetric", () =>
{
    return "Hello, Otel-Metric!";
});

app.Run();

We are currently dealing with custom metrics. Of course, ASP.NET also provides out-of-the-box metrics that you can utilize. Just use the ASP.NET instrumentation feature by adding AddAspNetCoreInstrumentation() when configuring the metrics provider – more on that here.
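A rough sketch of what that could look like, assuming the OpenTelemetry.Instrumentation.AspNetCore package is referenced (this is not part of the demo project):

builder.Services.AddOpenTelemetryMetrics(metricsProvider =>
{
    metricsProvider
        .AddAspNetCoreInstrumentation() // built-in ASP.NET Core request metrics
        .AddMeter("otel.demo.metric")   // keep the custom meter from above
        .AddOtlpExporter();
});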

Demo

Time to connect the dots. First, let’s create a Kubernetes cluster using kind where we can publish the demo service, spin up the OTEL collector instance and run a Prometheus/Grafana environment. If you want to follow along with the tutorial, clone the repo from https://github.com/cdennig/otel-demo and switch to the otel-demo directory.

Create a local Kubernetes Cluster

To create a kind cluster that is able to host a Prometheus environment, execute:

$ kind create cluster --name demo-cluster \ 
        --config ./kind/kind-cluster.yaml

The YAML configuration file (./kind/kind-cluster.yaml) adjusts some settings of the Kubernetes control plane so that Prometheus is able to scrape the endpoints of the controller services. Next, create the OpenTelemetry Collector instance.

OTEL Collector

In the manifests directory, you’ll find two Kubernetes manifests. One contains the configuration for the collector (otel-collector.yaml). It includes the ConfigMap for the collector configuration (which will be mounted as a volume into the collector container), the deployment of the collector itself and a service exposing the ports for data ingestion (4318 for http and 4317 for gRPC) and the metrics endpoint (8889) that will be scraped later on by Prometheus. It looks as follows:

apiVersion: v1

kind: ConfigMap
metadata:
  name: otel-collector-config
data:
  otel-collector-config: |-
    receivers:
      otlp:
        protocols:
          http:
          grpc:
    exporters:
      logging:
        loglevel: debug
      prometheus:
        endpoint: "0.0.0.0:8889"
    processors:
      batch:
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, prometheus]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: collector
          image: otel/opentelemetry-collector:latest
          args:
            - --config=/etc/otelconf/otel-collector-config.yaml
          ports:
            - name: otel-http
              containerPort: 4318
            - name: otel-grpc
              containerPort: 4317
            - name: prom-metrics
              containerPort: 8889
          volumeMounts:
            - name: otel-config
              mountPath: /etc/otelconf
      volumes:
        - name: otel-config
          configMap:
            name: otel-collector-config
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: otel-collector
spec:
  type: ClusterIP
  ports:
    - name: otel-http
      port: 4318
      protocol: TCP
      targetPort: 4318
    - name: otel-grpc
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: prom-metrics
      port: 8889
      protocol: TCP
      targetPort: prom-metrics
  selector:
    app: otel-collector

Let’s apply the manifest.

$ kubectl apply -f ./manifests/otel-collector.yaml

configmap/otel-collector-config created
deployment.apps/otel-collector created
service/otel-collector created

Check that everything runs as expected:

$ kubectl get pods,deployments,services,endpoints

NAME                                  READY   STATUS    RESTARTS   AGE
pod/otel-collector-5cd54c49b4-gdk9f   1/1     Running   0          5m13s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/otel-collector   1/1     1            1           5m13s

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP                      22m
service/otel-collector   ClusterIP   10.96.194.28   <none>        4318/TCP,4317/TCP,8889/TCP   5m13s

NAME                       ENDPOINTS                                         AGE
endpoints/kubernetes       172.19.0.9:6443                                   22m
endpoints/otel-collector   10.244.1.2:8889,10.244.1.2:4318,10.244.1.2:4317   5m13s

Now that the OpenTelemetry infrastructure is in place, let’s add the workload exposing the custom metrics.

ASP.NET Workload

The demo application has been containerized and published to the GitHub container registry for your convenience. So to add the workload to your cluster, simply apply ./manifests/otel-demo-workload.yaml, which contains the Deployment manifest and sets two environment variables to configure the OTEL collector endpoint and the OTLP protocol to use – in this case gRPC.

Here’s the relevant part:

spec:
  containers:
  - image: ghcr.io/cdennig/otel-demo:1.0
    name: otel-demo
    env:
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: "http://otel-collector.default.svc.cluster.local:4317"
    - name: OTEL_EXPORTER_OTLP_PROTOCOL
      value: "grpc"

Apply the manifest now.

$ kubectl apply -f ./manifests/otel-demo-workload.yaml

Remember that the application also logs to the console. Let’s query the logs of the ASP.NET service (note that the pod name will differ in your environment).

$ kubectl logs po/otel-workload-69cc89d456-9zfs7

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app/
Resource associated with Metric:
    service.name: otel.demo
    service.version: 0.0.1
    service.instance.id: b84c78be-49df-42fa-bd09-0ad13481d826

Export otel.demo.metric.gauge1, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4260064Z, 2022-08-20T11:40:51.3451557Z] LongGauge
Value: 10

Export otel.demo.metric.gauge2, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4274763Z, 2022-08-20T11:40:51.3451863Z] DoubleGauge
Value: 0.8778815716262417

Export otel.demo.metric.gauge1, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4260064Z, 2022-08-20T11:41:01.3387999Z] LongGauge
Value: 19

Export otel.demo.metric.gauge2, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4274763Z, 2022-08-20T11:41:01.3388003Z] DoubleGauge
Value: 0.35409627617124295

Also, let’s check if the data will be sent to the collector. Remember it exposes its /metrics endpoint on 0.0.0.0:8889/metrics. Let’s query it by port-forwarding the service to our local machine.

$ kubectl port-forward svc/otel-collector 8889:8889

Forwarding from 127.0.0.1:8889 -> 8889
Forwarding from [::1]:8889 -> 8889

# in a different session, curl the endpoint
$  curl http://localhost:8889/metrics

# HELP otel_demo_metric_gauge1
# TYPE otel_demo_metric_gauge1 gauge
otel_demo_metric_gauge1{instance="b84c78be-49df-42fa-bd09-0ad13481d826",job="otel.demo"} 37
# HELP otel_demo_metric_gauge2
# TYPE otel_demo_metric_gauge2 gauge
otel_demo_metric_gauge2{instance="b84c78be-49df-42fa-bd09-0ad13481d826",job="otel.demo"} 0.45433988869946285

Great, both components – the metric producer and the collector – are working as expected. Now, let’s spin up the Prometheus/Grafana environment, add the service monitor to scrape the /metrics endpoint and create the Grafana dashboard for it.

Add Kube-Prometheus-Stack

The easiest way to add the Prometheus/Grafana stack to your Kubernetes cluster is to use the kube-prometheus-stack Helm chart. We will use a custom values.yaml file to automatically add a static Prometheus target for the OTEL collector called demo/otel-collector (the kubeEtcd config is only needed in the kind environment):

kubeEtcd:
  service:
    targetPort: 2381
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: "demo/otel-collector"
      static_configs:
      - targets: ["otel-collector.default.svc.cluster.local:8889"]

Now, add the helm chart to your cluster by executing:

$ helm upgrade --install --wait --timeout 15m \
  --namespace monitoring --create-namespace \
  --repo https://prometheus-community.github.io/helm-charts \
  kube-prometheus-stack kube-prometheus-stack --values ./prom-grafana/values.yaml

Release "kube-prometheus-stack" does not exist. Installing it now.
NAME: kube-prometheus-stack
LAST DEPLOYED: Mon Aug 22 13:53:58 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"

Let’s have a look at the Prometheus targets to see whether Prometheus can scrape the OTEL collector endpoint – again, port-forward the service to your local machine and open a browser at http://localhost:9090/targets.

$ kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090
Prometheus targets

That looks as expected. Now over to Grafana to create a dashboard that displays the custom metrics. As before, port-forward the Grafana service to your local machine and open a browser at http://localhost:3000. Because you need a username/password combination to log in to Grafana, we first need to grab that information from a Kubernetes secret:

# Grafana admin username
$ kubectl get secret -n monitoring kube-prometheus-stack-grafana -o jsonpath='{.data.admin-user}' | base64 --decode

# Grafana password
$ kubectl get secret -n monitoring kube-prometheus-stack-grafana -o jsonpath='{.data.admin-password}' | base64 --decode

# port-forward Grafana service
$ kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80

Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

After opening a browser at http://localhost:3000 and a successful login, you should be greeted by the Grafana welcome page.

Grafana Welcome Page

Add a Dashboard for the Custom Metrics

Head to http://localhost:3000/dashboard/import and upload the precreated dashboard from ./prom-grafana/dashboard.json (or simply paste its content into the textbox). After importing the definition, you should be redirected to the dashboard and see our custom metrics being displayed.

Add preconfigured dashboard
OTEL metrics gauge1 and gauge2
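
If you prefer to build a panel yourself instead of importing the JSON, the queries are simply the metric names exposed by the collector, for example (matching the names from the curl output above):

otel_demo_metric_gauge1{job="otel.demo"}
otel_demo_metric_gauge2{job="otel.demo"}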

Wrap-Up

This demo showed how to use OpenTelemetry custom metrics in an ASP.NET service, sending telemetry data to an OTEL collector instance that is scraped by a Prometheus instance. To close the loop, those custom metrics are eventually displayed in a Grafana dashboard. The advantage of this approach is that you use a common, vendor-agnostic layer like OpenTelemetry to generate and collect metrics. Which service the data is ultimately sent to, and which solution is used to analyze it, can easily be changed via the OTEL exporter configuration – if you don’t want to use Prometheus, you simply adapt the OTEL pipeline and export the telemetry data to e.g. Azure Monitor, DataDog, Splunk etc.

I hope the demo has given you a good introduction to the world of OpenTelemetry. Happy coding! 🙂

Jürgen Gutsch: ASP.NET Core on .NET 7.0 - Output caching

Finally, Microsoft added output caching in ASP.NET Core 7.0 Preview 6.

Output caching is a middleware that caches the entire output of an endpoint instead of executing the endpoint every time it gets requested. This will make your endpoints a lot faster.

This kind of caching is useful for APIs that provide data that doesn't change a lot or that is accessed pretty frequently. It is also useful for more or less static pages, e.g. CMS output, etc. Different caching options will help you fine-tune your output cache or vary the cache based on headers or query parameters.

For more dynamic pages or APIs that serve data that change a lot, it would make sense to cache more specifically on the data level instead of the entire output.

Trying output caching

To try output caching I created a new empty web app using the .NET CLI:

dotnet new web -n OutputCaching -o OutputCaching
cd OutputCaching
code .

This creates the new project and opens it in VS Code.

In the Program.cs you now need to add output caching to the ServiceCollection as well as use the middleware on the app:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOutputCache();

var app = builder.Build();

app.UseOutputCache();

app.MapGet("/", () => "Hello World!");

app.Run();

This enables output caching in your application.

Let's use output caching with the classic example that displays the current date and time.

app.MapGet("/time", () => DateTime.Now.ToString());

This creates a new endpoint that displays the current date and time. Every time you refresh the result in the browser, a new time is displayed. No magic here. Now we are going to add some caching magic to another endpoint:

app.MapGet("/time_cached", () => DateTime.Now.ToString())
	.CacheOutput();

If you access this endpoint and refresh it in the browser, the time will not change. The initial output got cached and you'll receive the cached output every time you refresh the browser.

This is good for more or less static output that doesn't change a lot. But what if you have a frequently used API that just needs a short cache to reduce calculation effort or database access? You can reduce the caching time to, let's say, 10 seconds:

 builder.Services.AddOutputCache(options =>
 {
     options.DefaultExpirationTimeSpan = TimeSpan.FromSeconds(10);
 });

This reduces the default cache expiration timespan to 10 seconds.

If you now start refreshing the endpoint we created previously, you'll get a new time every 10 seconds. This means the cache gets released every 10 seconds. Using the options you can also define the size of the cached body or the overall cache size, as sketched below.
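
A rough sketch of what that could look like (MaximumBodySize and SizeLimit are the OutputCacheOptions properties I'd expect here; double-check them against the preview version you are using):

builder.Services.AddOutputCache(options =>
{
    options.DefaultExpirationTimeSpan = TimeSpan.FromSeconds(10);
    options.MaximumBodySize = 32 * 1024 * 1024; // don't cache responses larger than 32 MB
    options.SizeLimit = 200 * 1024 * 1024;      // overall size limit of the output cache
});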

If you provide a more dynamic API that receives parameters via query strings, you can vary the cache by the query string:

app.MapGet("/time_refreshable", () => DateTime.Now.ToString())
    .CacheOutput(p => p.VaryByQuery("time"));

This adds another endpoint that varies the cache by the query string argument called "time". This means the query string ?time=now caches a different result than the query string ?time=later or ?time=before.

The VaryByQuery function allows you to add more than one query string:

app.MapGet("/time_refreshable", () => DateTime.Now.ToString())
    .CacheOutput(p => p.VaryByQuery("time", "culture", "format"));

In case you'd like to vary the cache by HTTP headers, you can do this the same way using the VaryByHeader function:

app.MapGet("/time_cached", () => DateTime.Now.ToString())
    .CacheOutput(p => p.VaryByHeader("content-type"));

Further reading

If you'd like to explore more complex examples of output caching, have a look at the samples project:

https://github.com/dotnet/aspnetcore/blob/main/src/Middleware/OutputCaching/samples/OutputCachingSample/Startup.cs

Code-Inside Blog: How to run an Azure App Service WebJob with parameters

We are using WebJobs in our Azure App Service deployment and they are pretty “easy” for the most part. Just register a WebJob or deploy your .exe/.bat/.ps1/... under the \site\wwwroot\app_data\Jobs\triggered folder and it should execute as described in the settings.job.
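
For a triggered WebJob, the settings.job next to your files controls when it runs; a minimal sketch with a CRON expression (run every 15 minutes) could look like this:

{
  "schedule": "0 */15 * * * *"
}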

If you put any executable in this WebJob folder, it will be executed as planned.

Problem: Parameters

If you have a my-job.exe, it will be invoked by the runtime. But what if you need to invoke it with a parameter, like my-job.exe -param "test"?

Solution: run.cmd

The WebJob environment is “greedy”: it searches for a run.cmd (or run.exe), and if one is found, it will be executed, regardless of any other .exe files that are there. Stick to the run.cmd and use it to invoke your actual executable like this:

echo "Invoke my-job.exe with parameters - Start"

..\MyJob\my-job.exe -param "test"

echo "Invoke my-job.exe with parameters - Done"

Be aware that the path must “match”. We use this run.cmd approach in combination with the is_in_place option (see here) and are happy with the results.
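
To illustrate the relative path, the layout could look roughly like this (just an assumption that matches the ..\MyJob\my-job.exe call above, not the exact structure of our deployment):

\site\wwwroot\app_data\Jobs\triggered
    MyWebJob\
        run.cmd          (found and executed by the WebJob runtime)
        settings.job
    MyJob\
        my-job.exe       (invoked from run.cmd via ..\MyJob\my-job.exe)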

A more detailed explanation can be found here.

Hope this helps!
