Norbert Eder: Scratch – Kids Learn to Program

Nothing runs without computers these days. That makes it all the more important to understand how computers and the software running on them work. To foster this essential understanding, children should come into contact with programming early on.

There are all kinds of tools for this. One that I can highly recommend from my own experience is Scratch.

Scratch is a great tool for beginners, especially for children and teenagers. Programs consist of interactive components that can be assembled and brought to "life". Using different building blocks, the components can be moved around, react to events, play sounds, and much more.

The block system prevents syntax errors. Instead of frustration there are quick successes, which encourage further "playing around". Within a very short time, small games can be developed, for example.

In this playful way children get to know some basic programming concepts and can then move on to more complex languages in a short time and keep developing their skills.

The requirements for Scratch are minimal: a computer and a browser. Development takes place entirely in the browser. Programs can be saved and loaded and are thus immediately available. It is also possible to develop offline; for this, Scratch Desktop is available for Windows 10 and macOS 10.13+.

Scratch – learning to program

So that you don't have to start all alone, there is a large community and plenty of help for getting started. Maybe there is a CoderDojo near you; here in Austria there are the CoderDojo Linz and the CoderDojo Graz. There you can get support if, as a parent, you are not quite so familiar with these things.

The list of exercises from CoderDojo Linz for Scratch and HTML is particularly helpful.

With that in mind, I wish you happy coding and interesting, instructive hours with your kids.


Holger Schwichtenberg: Detecting .NET Framework 4.8

As with its predecessors, an installed .NET Framework 4.8 is detected via a registry entry.
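
A minimal C# sketch of this check (assuming the usual "NDP\v4\Full" key and the documented minimum Release value of 528040 for .NET Framework 4.8) could look like this:

using System;
using Microsoft.Win32;

class Program
{
    static void Main()
    {
        // The Release DWORD under this key encodes the installed 4.x version;
        // 528040 is the documented minimum value for .NET Framework 4.8.
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (var ndpKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32)
                                       .OpenSubKey(subkey))
        {
            var release = (int)(ndpKey?.GetValue("Release") ?? 0);
            Console.WriteLine(release >= 528040
                ? ".NET Framework 4.8 (or later) is installed."
                : ".NET Framework 4.8 is not installed.");
        }
    }
}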

Stefan Henneken: IEC 61131-3: Passing Parameters via FB_init

Depending on the task, function blocks may require parameters that are used only once, for initialization. One elegant way to pass them is the FB_init() method.

Before TwinCAT 3, initialization parameters were very often passed via input variables.

(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
  nDatabits   : BYTE(7..8);
  eParity     : E_Parity;
  nStopbits   : BYTE(1..2);	
END_VAR

This had the disadvantage that the function blocks became unnecessarily large in the graphical representations. It was also not possible to prevent the parameters from being changed at runtime.

The FB_init() method is very helpful here. It is executed implicitly once before the PLC task starts and can be used to perform initialization tasks.

The dialog for adding methods offers a ready-made template for this.

Pic01

The method contains two input variables that indicate under which conditions the method is executed. These variables must be neither deleted nor modified. However, FB_init() can be extended with additional input variables.
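
The template generated by the dialog consists of exactly these two implicit parameters; a sketch of it (matching the declarations used further below) looks like this:

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
END_VAR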

Example

A function block for communication over a serial interface (FB_SerialCommunication) will serve as an example. This block is also supposed to initialize the serial interface with the necessary parameters. For this reason, three variables are added to FB_init():

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);		
END_VAR

The serial interface is not initialized directly in FB_init(); therefore, the parameters must be copied into variables that reside in the function block.

FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
  nInternalDatabits    : BYTE(7..8);
  eInternalParity      : E_Parity;
  nInternalStopbits    : BYTE(1..2);
END_VAR

The values from FB_init() are copied into these three variables during initialization.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR

THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

When an instance of FB_SerialCommunication is declared, these three additional parameters must also be specified. The values are given in parentheses directly after the name of the function block:

fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
                                               eParity := E_Parity.None,
                                               nStopbits := 1);

Even before the PLC task starts, the FB_init() method is called implicitly, so that the internal variables of the function block receive the desired values.

Pic02 

With the start of the PLC task and the call of the FB_SerialCommunication instance, the serial interface can now be initialized.

It is always necessary to specify all parameters. A declaration without a complete list of the parameters is not allowed and produces a compile error:

Pic03 

Arrays

If FB_init() is used with arrays, the complete set of parameters must be specified for each element (with square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
                 (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                 (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];

If all elements are to receive the same initialization values, it is sufficient to specify the parameters once (without square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
                                                             eParity := E_Parity.None,
                                                             nStopbits := 1);

Multidimensional arrays are also possible. Here, too, all initialization values must be specified:

aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
                      (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
                      (nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];

Inheritance

When inheritance is used, the FB_init() method is always inherited as well. FB_SerialCommunicationRS232 will serve as an example:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication

If an instance of FB_SerialCommunicationRS232 is declared, the FB_init() parameters inherited from FB_SerialCommunication must also be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1);

It is also possible to override FB_init(). In this case, the same input variables must be present in the same order and with the same data types as in the base FB (FB_SerialCommunication). Additional input variables can be added, though, so that the derived function block (FB_SerialCommunicationRS232) receives extra parameters:

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
  nBoudrate    : UDINT;	
END_VAR

THIS^.nInternalBoudrate := nBoudrate;

When an instance of FB_SerialCommunicationRS232 is declared, all parameters must be specified, including those of FB_SerialCommunication:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1,
                                                         nBoudRate := 19200);

In the FB_init() method of FB_SerialCommunicationRS232, only the new parameter (nBoudrate) needs to be copied. Because FB_SerialCommunicationRS232 inherits from FB_SerialCommunication, the FB_init() of FB_SerialCommunication is also executed implicitly before the PLC task starts. Both FB_init() methods are always called implicitly, the one of FB_SerialCommunication as well as the one of FB_SerialCommunicationRS232. With inheritance, FB_init() is always called from 'bottom' to 'top': first FB_SerialCommunication, then FB_SerialCommunicationRS232.

Forwarding parameters

The function block FB_SerialCommunicationCluster, in which several instances of FB_SerialCommunication are declared, will serve as an example:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
  fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  nInternalDatabits       : BYTE(7..8);
  eInternalParity         : E_Parity;
  nInternalStopbits       : BYTE(1..2);	
END_VAR

So that the parameters of the instances can be set from the outside, FB_SerialCommunicationCluster also gets the FB_init() method with the necessary input variables.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR

THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

There are a few things to watch out for here, however. In this case, the call order of FB_init() is not clearly defined. In my test environment, the calls happen from the 'inside' out: first fbSerialCommunication01.FB_init() and fbSerialCommunication02.FB_init() are called, and only then fbSerialCommunicationCluster.FB_init(). It is therefore not possible to pass the parameters through from the 'outside' in, so the parameters are not available in the two inner instances of FB_SerialCommunication.

The order of the calls changes as soon as FB_SerialCommunication and FB_SerialCommunicationRS232 are derived from the same base FB. In that case, FB_init() is called from the 'outside' in. This approach cannot always be used, for two reasons:

  1. If FB_SerialCommunication is located in a library, the inheritance cannot easily be changed.
  2. The call order of FB_init() with nested function blocks is not further defined, so it cannot be ruled out that it will change in future versions.

One way to solve the problem is to call FB_SerialCommunication.FB_init() explicitly from FB_SerialCommunicationCluster.FB_init():

fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);

All parameters, including bInitRetains and bInCopyCode, are passed on directly.

Caution: Calling FB_init() always causes all local variables of the instance to be reinitialized. This must be kept in mind as soon as FB_init() is called explicitly from the PLC task instead of implicitly before the PLC task starts.

Access via properties

Because the parameters are passed via FB_init(), they can neither be read nor changed from the outside at runtime. The only exception would be an explicit call of FB_init() from the PLC task. This should generally be avoided, though, since it reinitializes all local variables of the instance.

If access is nevertheless required, corresponding properties can be created for the parameters:

Pic04

The setters and getters of the respective properties access the corresponding local variables in the function block (nInternalDatabits, eInternalParity and nInternalStopbits). This way, the parameters can be set both at declaration time and at runtime.
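
A sketch of such a property (shown here for the data bits; the property name Databits matches the declaration examples below, the accessor bodies are an assumption):

PROPERTY PUBLIC Databits : BYTE(7..8)

// Get accessor: return the internal value
Databits := THIS^.nInternalDatabits;

// Set accessor: store the assigned value
THIS^.nInternalDatabits := Databits;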

Removing the setters prevents the parameters from being changed at runtime. If the setters are present, however, FB_init() can be omitted, since properties can also be initialized directly in the declaration of an instance.

fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
                                                   Parity := E_Parity.Odd,
                                                   Stopbits := 1);

The FB_init() parameters and the properties can also be specified at the same time:

fbSerialCommunication  : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
                                               (Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);

In this case, the initialization values of the properties take precedence. Passing values via both FB_init() and properties has the disadvantage that the declaration of the function block becomes unnecessarily long, and implementing both does not seem necessary to me. If all parameters are also writable via properties, the initialization via FB_init() can be omitted. As a rule of thumb: if parameters must not be changeable at runtime, consider FB_init(); if write access should be possible, properties are the better fit.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Code-Inside Blog: Build Windows Server 2016 Docker Images under Windows Server 2019

Since the rise of Docker on Windows we have also invested some time into it and now package our OneOffixx server-side stack in a Docker image.

Windows Server 2016 situation:

We rely on Windows Docker images, because we still have some "legacy" parts that require the full .NET Framework. That's why we are using this base image:

FROM microsoft/aspnet:4.7.2-windowsservercore-ltsc2016

As you can already guess: this is based on Windows Server 2016, and besides the "legacy" parts of our application we need to support Windows Server 2016, because Windows Server 2019 is currently not available on our customers' systems.

In our build pipeline we could easily invoke Docker and build our images based on the LTSC 2016 base image, and everything was "fine".

Problem: Move to Windows Server 2019

Some weeks ago my colleague updated our Azure DevOps build servers from Windows Server 2016 to Windows Server 2019, and our builds began to fail.

Solution: Hyper-V isolation!

After some internet research this page popped up: Windows container version compatibility.

Microsoft made some great enhancements to Docker in Windows Server 2019, but if you need to "support" older versions, you need to take care of it yourself, which means:

If you have a Windows Server 2019, but want to use Windows Server 2016 base images, you need to activate Hyper-V isolation.

Example from our own cake build script:

var exitCode = StartProcess("Docker", new ProcessSettings { Arguments = "build -t " + dockerImageName + " . --isolation=hyperv", WorkingDirectory = packageDir});
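
If you call Docker directly instead of through Cake, the equivalent plain CLI call (the image name is just a placeholder) would be roughly:

docker build -t my-image . --isolation=hyperv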

Hope this helps!

Holger Schwichtenberg: Many Breaking Changes in Entity Framework Core 3.0

There is now a fourth preview version of Entity Framework Core 3.0, but it does not yet contain any of the new features mentioned below. Instead, Microsoft has built in a considerable number of breaking changes. The question is: why?

Johnny Graber: Book Review of "Java by Comparison"

"Java by Comparison" by Simon Harrer, Jörg Lenhard and Linus Dietz was published in 2018 by The Pragmatic Programmers. The book takes on a big challenge: how can expert knowledge acquired over years be made accessible to programming beginners in a simple form? The authors use 70 examples in which a working first attempt is turned into a maintainable and well-thought-out … Continue reading "Book Review of Java by Comparison"

Holger Schwichtenberg: How to Make Entity Framework Core Use the Class Names Instead of the DbSet Names as Table Names

Microsoft's object-relational mapper Entity Framework Core has an unpleasant default: the database tables are not named after the entity class names, but after the property names used in the DbSet declarations of the context class.
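
A sketch of one common way to switch this around (the Person and AppDbContext names are hypothetical; the loop targets the EF Core 2.x model-building API):

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    // By default the table would be named "People" after this property.
    public DbSet<Person> People { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Rename every table to the CLR class name, e.g. "Person" instead of "People".
        foreach (var entityType in modelBuilder.Model.GetEntityTypes().ToList())
        {
            modelBuilder.Entity(entityType.ClrType).ToTable(entityType.ClrType.Name);
        }
    }
}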

Code-Inside Blog: Update OnPrem TFS 2018 to AzureDevOps Server 2019

We recently updated our OnPrem TFS 2018 installation to the newest release: Azure DevOps Server

The product has the same core features as TFS 2018, but with a new UI and other improvements. For a full list you should read the Release Notes.

Be aware: this is the on-premises solution, despite the slightly misleading name "Azure DevOps Server". If you are looking for the cloud solution, you should read the Migration Guide.

“Updating” a TFS 2018 installation

Our setup is quite simple: one server for the "Application Tier" and another SQL database server for the "Data Tier". The "Data Tier" was already running SQL Server 2016 (or above), so we only needed to touch the "Application Tier".

Application Tier Update

In our TFS 2018 world the "Application Tier" was running on Windows Server 2016, but we decided to create a new (clean) server with Windows Server 2019 and do a "clean" Azure DevOps Server install, pointing it to the existing "Data Tier".

In theory it would be quite possible to update the actual TFS 2018 installation, but because "new is always better", we also switched the underlying OS.

Update process

The actual update was really easy. We did a "test run" with a copy of the database and everything worked as expected, so we reinstalled Azure DevOps Server and ran the update on the production data.

Summary

If you are running a TFS installation, don’t be afraid to do an update. The update itself was done in 10-15 minutes on our 30GB-ish database.

Just download the setup from the Azure DevOps Server site (“Free trial…”) and you should be ready to go!

Hope this helps!

Jürgen Gutsch: Customizing ASP.NET Core Part 12: Hosting

In this 12th part of the series, I'm going to write about how to customize hosting in ASP.NET Core. We will look into the hosting options, the different kinds of hosting, and take a quick look at hosting on IIS. And while writing this post, it again seems to be getting quite long.

This will change in ASP.NET Core 3.0. I decided to write this post about ASP.NET Core 2.2 anyway, because it will still take some time until ASP.NET Core 3.0 is released.

This post is just an overview of the different kinds of application hosting. It is certainly possible to go into much more detail on each topic, but that would increase the size of this post a lot, and I need some more topics for future blog posts ;-)

This series topics

Quick setup

For this series we just need to set up a small, empty web application.

dotnet new web -n ExploreHosting -o ExploreHosting

That's it. Open it with Visual Studio Code:

cd ExploreHosting
code .

And voila, we get a simple project open in VS Code:

WebHostBuilder

Like in the last post, we will focus on the Program.cs. The WebHostBuilder is our friend. This is where we configure and create the web host. The next snippet shows the default configuration of every new ASP.NET Core web application created using File => New => Project in Visual Studio or dotnet new with the .NET CLI:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
        	.UseStartup<Startup>();
}

As we already know from the previous posts, the default builder has all the needed stuff pre-configured. Everything you need to run an application successfully on Azure or on an on-premise IIS is configured for you.

But you are able to override almost all of these default configurations, including the hosting configuration.

Kestrel

After the WebHostBuilder is created, we can use various methods to configure it. Here we already see one of them, which specifies the Startup class to be used. In the last post we saw the UseKestrel method to configure the Kestrel options:

.UseKestrel((host, options) =>
{
    // ...
})

Reminder: Kestrel is one option to host your application. Kestrel is a web server built in .NET and based on .NET socket implementations. Previously it was built on top of libuv, the same library that NodeJS uses. Microsoft removed the dependency on libuv and created its own web server implementation based on .NET sockets.

Docs: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel

The first argument is a WebHostBuilderContext that gives access to the already configured hosting settings and the configuration itself. The second argument is an object to configure Kestrel. This snippet shows what we did in the last post to configure the socket endpoints the host needs to listen on:

.UseKestrel((host, options) =>
{
    var filename = host.Configuration.GetValue("AppSettings:certfilename", "");
    var password = host.Configuration.GetValue("AppSettings:certpassword", "");
    
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps(filename, password);
    });
})

This overrides the default configuration, where you are able to pass in URLs, e.g. using the applicationUrl property of the launchSettings.json or an environment variable.
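
For comparison, the URL-based default configuration could come from the launchSettings.json (a sketch; the profile name and ports are just examples) or from the ASPNETCORE_URLS environment variable:

{
  "profiles": {
    "ExploreHosting": {
      "commandName": "Project",
      "applicationUrl": "https://localhost:5001;http://localhost:5000"
    }
  }
}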

HTTP.sys

Did you know that there is another hosting option, a different web server implementation? It is HTTP.sys. This is a pretty mature component deep within Windows that can be used to host your ASP.NET Core application.

.UseHttpSys(options =>
{
    // ...
})

HTTP.sys is different from Kestrel. It cannot be used in IIS because it is not compatible with the ASP.NET Core Module for IIS.

The main reason to use HTTP.sys instead of Kestrel is Windows Authentication, which cannot be used with Kestrel alone. Another reason is if you need to expose the application to the internet without IIS.

IIS has also been running on top of HTTP.sys for years, which means UseHttpSys() and IIS use the same web server implementation. To learn more about HTTP.sys, please read the docs.
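
As a sketch of how Windows Authentication could be configured (an extension of the UseHttpSys() snippet above, using the options from Microsoft.AspNetCore.Server.HttpSys; the URL prefix is just an example, not from the original post):

.UseHttpSys(options =>
{
    // Let HTTP.sys handle Windows Authentication (NTLM/Kerberos).
    options.Authentication.Schemes =
        AuthenticationSchemes.NTLM | AuthenticationSchemes.Negotiate;
    options.Authentication.AllowAnonymous = false;

    // Listen on an explicit URL prefix instead of the defaults.
    options.UrlPrefixes.Add("http://localhost:5005");
})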

Hosting on IIS

An ASP.NET Core application shouldn't be directly exposed to the internet, even though both Kestrel and HTTP.sys support that. It is best to have something like a reverse proxy in between, or at least a service that watches the hosting process. For ASP.NET Core, IIS isn't only a reverse proxy; it also takes care of the hosting process and restarts it in case it breaks because of an error or whatever. On Linux, Nginx may be used as a reverse proxy that also takes care of the hosting process.

To host an ASP.NET Core web application on IIS or on Azure, you need to publish it first. Publishing doesn't only compile the project; it also prepares the project to be hosted on IIS, on Azure, or behind a web server on Linux like Nginx.

dotnet publish -o ..\published -r win-x64

This produces an output that can be mapped in IIS. It also creates a web.config to add settings for IIS or Azure, and it contains the compiled web application as a DLL.

If you publish a self-contained application, the output also contains the runtime itself. A self-contained application brings its own .NET Core runtime, but the size of the delivery increases a lot.
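
A self-contained publish could look roughly like this (the runtime identifier and output folder are just examples):

dotnet publish -c Release -r win-x64 --self-contained true -o ..\published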

And on IIS? Just create a new website and map it to the folder where you placed the published output:

It gets a little more complicated if you need to change the security settings, if you have some database connections, and so on. That would be a topic for a separate blog post. But in this small sample it simply works:

This is the output of the small middleware in the Startup.cs of the demo project:

app.Run(async (context) =>
{
    await context.Response.WriteAsync("Hello World!");
});

Nginx

Unfortunately I cannot write about Nginx, because I currently don't have a Linux machine running to play around with. This is one of my many future projects. So far I have only got ASP.NET Core running on Linux using the Kestrel web server.

Conclusion

ASP.NET Core and the .NET CLI already contain all the tools to get applications running on various platforms and to set them up for Azure and IIS, as well as Nginx. This is super easy and well described in the docs.

BTW: What do you think about the new docs experience compared to the old MSDN documentation?

I'll definitely go deeper into some of the topics and in ASP.NET Core there are some pretty cool hosting features that make it a lot more flexible to host your application:

Currently we have the WebHostBuilder that creates the hosting environment of the application. In 3.0 we get the HostBuilder, which is able to create a hosting environment that is completely independent of any web context. I'm going to write about the HostBuilder in one of the next blog posts.

Holger Schwichtenberg: Magdeburger Developer Days from May 20 to 22, 2019

The developer community conference "Magdeburger Developer Days" is entering its fourth round.

Jürgen Gutsch: Sharpcms.Core - Migrating an old ASP.NET CMS to ASP.NET Core on Twitch

On my Twitch stream I planned to show how to migrate a legacy ASP.NET application to ASP.NET Core, to start a completely new ASP.NET Core project, and to share some news about the .NET developer community. When I did the first stream and introduced the plans to the audience, it somehow turned toward migrating the legacy application. So I chose the old Sharpcms project to show the migration, which is maybe not the best choice because this CMS doesn't use the common ASP.NET patterns.

About the sharpcms

Initially the Sharpcms was built by a Danish developer. When he stopped maintaining it, my friend Thomas Huber and I asked him whether we could take over the project and continue maintaining it. He said yes, and since then we have been the main contributors and coordinators of this project.

This is where my Twitter handle was born. Initially I planned to use this Twitter account to promote the sharpcms, but I used it off-topic: I promoted blog posts and community events with this account and had some interesting discussions on Twitter. I used it so much, it got linked everywhere, and it didn't make sense to change it anymore. Anyway, the priorities changed. The sharpcms wasn't the main hobby project anymore, but I still use this Twitter handle. It still kind of makes sense to me, because I work with CSharp and I'm something of a CMS expert. (I developed on two different ones for years and used a lot more.)

We had huge plans for this project, but as always, plans and priorities change with new family members and new jobs. We haven't done anything on that CMS for years. Actually, I'm not sure whether this CMS is still used or not.

Anyway, this is one of the best CMS systems from my perspective: easy to set up, lightweight, fast to run, and easy to use for people without a technical background. Creating templates for this CMS requires a good knowledge of XML and XSLT, because XML is the base of this CMS and XSLT is used for the templates. This was super fast with the .NET Framework; caching wasn't really needed for the sharpcms.

Juergen.IO.Stream

In the first show on Twitch I introduced the plan to migrate the sharpcms and the alternative plan to start a brand new ASP.NET Core project. It turned out that the audience wanted to see the migration project. I introduced the sharpcms, showed the original sources, and started to create .NET Standard libraries to show the difficulties.

I wasn't as pessimistic as the audience, because I still knew that CMS. There weren't too many dependencies on classic ASP.NET and the System.Web stuff. And as expected, it wasn't that hard.

The rendering of the output in the sharpcms is completely based on XML and XSLT. The sharpcms creates an XML structure that gets interpreted and rendered using XSLT templates.

XSLT is an XML-based programming language that navigates through XML data and creates arbitrary output. It actually is a programming language: you are able to create decision statements, loops, functions, and variables. It is limited, but just like Razor, ASP, or PHP, you mix the programming language with the output you want to create, which makes it easy and intuitive to use.

This means there is no rendering logic in the C# code. All the C# code does is work on the request and create the XML data containing the data to show. At the end it transforms the XML using the XSLT templates.

The main work I needed to do to get the Sharpcms running was to wrap the ASP.NET Core request context into a request context that looks similar to the System.Web version used inside the Sharpcms, because it heavily uses the ASP.NET WebForms page object and its properties.

The migration strategy was to get it running, even if it is kind of hacky, and to clean it up later on. Now we are in this state: the old Sharpcms sources are working on ASP.NET Core using .NET Standard libraries.

The Sharpcms.Core running on Azure: https://sharpcms.azurewebsites.net

Performance

Albert Weinert (a community guy, former MVP and a Twitch streamer as well) told me during the first stream that XSLT isn't that fast in .NET Core. Unfortunately he was right: the transformation speed and the speed of reading the XML data aren't that great.

We'll need to have a look into the performance and find a way to speed it up, maybe by creating an alternative view engine to replace the XML and XSLT based one at some point. It would also be possible to have multiple view engines; Razor, Handlebars, or Liquid would be options. All of these already have .NET implementations that could be used here.

Next steps

Even though the CMS is now running on ASP.NET Core, there's still a lot to do. Here are the next issues I need to work on:

  • Build on Azure DevOps #8

  • Performance:

    • Get rid of the physical XML data and move the data to a database #4
    • Speed up the XSL transformation #3
    • Find another way to render the UI, maybe using razor, handlebars or liquid #2
    • Add caching #1
  • Cleanup the codes #9

  • User password encryption #5

  • Provide NuGet packages to easily use the sharpcms #6

    • Provide a package for the frontend as well #7
  • Map the Middleware as a routed one, like it should work in ASP.NET Core 3.0

Join me

If you would like to join me in the stream to work on the Sharpcms.Core together, feel free to tell me. I would be super happy to do a pair programming session to work on a specific problem. It would be great to have experts on these topics in the stream:

  • Razor or Handlebars to create an alternative view engine
  • Security and Encryption to make this CMS more secure
  • DevOps to create a build and release pipeline

Summary

Migrating the old Sharpcms to ASP.NET Core was fun, but it's not done yet. There is a lot more to do. I'll continue working on it on my stream, but I will also do some other stuff in the streams.

If you would like to work on the Sharpcms, help me solve some issues, or start creating a modern documentation, feel free. That would help a lot.

David Tielke: [Webcast] Software Quality Part 2 - Process Quality

It's webcast time again. After covering the basics of software quality in the first part, we turn to process quality in the second part. What should a good software development process look like, and what should it not look like? What do you have to pay attention to, and what should you do or rather stay away from? All these questions occupy us in this second part on software quality. Have fun with it!

Christina Hirth : Continuous Delivery Is a Journey – Part 3

In the first part I described why I think that continuous delivery is important for an adequate developer experience, and in the second part I drew a rough picture of how we implemented it in a product development organization with five teams. Now it is time to discuss the big impact, and the biggest benefits, regarding the development of the product itself.

Why do more and more companies, technical and non-technical people, want to change towards an agile organisation? Maybe because the decision makers have understood that waterfall is rarely effective? There are a lot of motives, besides the rather dumb one "because everybody else does this", and I think there are two intertwined reasons for it: the speed at which the digital world changes and the ever increasing complexity of the businesses we try to automate.

Companies and people have finally started to accept that they don't know what their customers need. They have started to feel that the customer, i.e. the market, has become more and more demanding regarding the quality of the solutions they get. This means that until Skynet is born (sorry, I couldn't resist 😁) we software developers, product owners, UX designers, etc. have to decide which solution is best suited to solve the problems of that specific business, and we have to decide fast.

We have to deliver fast, get feedback fast, learn and adapt to the consequences even faster. We have to do all this without downtime, without breaking existing features and, very important for most of us, without getting a heart attack every time we deploy to production.

IMHO these are the most important reasons why every product development team should invest in CI/CD.

The last missing piece of the jigsaw, which allows us to deliver features fast (and continuously) without disturbing anybody and without losing control over how and when features are released, is called a feature toggle.

A feature toggle[1] (also feature switch, feature flag, feature flipper, conditional feature, etc.) is a technique in software development that attempts to provide an alternative to maintaining multiple source-code branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. A feature toggle is used to hide, enable or disable the feature during run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.[2]

Wikipedia

The concept is really simple: a feature should be hidden until somebody or something decides that it is allowed to be used.

// 'config' stands for whatever provides the toggle state (a JSON file, a config service, ...)
function useNewFeature(featureId) {
  const e = document.getElementById(featureId);
  const feat = config.getFeature(featureId);
  // hide the entry point of the feature as long as the toggle is disabled
  if (!feat.isEnabled)
    e.style.display = 'none';
  else
    e.style.display = 'block';
}

As you can see, implementing a feature toggle really is that simple. Adopting the concept will need some effort though:

  • Strive for only one toggle (one if) per feature. At the beginning it will be hard or even impossible to achieve this, but it is very important to define it as a mid-term goal. Having only one toggle per feature means the code is highly decoupled and well structured.
  • Place this (main) toggle at the entry point (a button, a new form, a new API endpoint), the first interaction point with the user (person or machine); in the disabled state it should hide this entry point.
  • The enabled state of the toggle should lead to new services (in a microservice world), new arguments or new functions, all of them implementing the behavior for feature.enabled == true. This will lead to code duplication: yes, and that is totally OK. I look at it as a very careful refactoring without changing the initial code. Implementing a new feature should not break or eliminate existing features. The tests too (all kinds of them) should be organized similarly: in different files, as duplicated versions, implemented for each state.
the different states of the toggle lead to clearly separated paths
  • Through the toggle you gain real freedom to make mistakes or to build just the wrong feature. At the same time you can always enable the feature and show it to the product owner or the stakeholders. This means the feedback loop is reduced to a minimum.
  • This freedom has a price of course: after the feature is implemented, the feedback is collected and the decision to enable the feature has been made, the source code must be cleaned up: all code for feature.enabled == false must be removed. This is why it is so important to create the different paths in such a way that the risk of introducing a bug is virtually zero. We want to reduce the workload, not increase it.
  • Toggles don't have to be temporary; business toggles (e.g. some premium features or a "maintenance mode") can stay forever. It is important to define beforehand what kind of toggle will be needed, because business toggles will always be part of your source code. The default value for this kind of toggle should be false.
  • The default value for temporary toggles should be true; they should be deactivated in production and activated during development.

One piece of advice regarding the tooling: start small; a config map in Kubernetes, a database table or a JSON file somewhere will suffice. Later on new requirements will appear, like notifying the client UI when a toggle changes or allowing the product owner to decide when a feature will be enabled. That will be the moment to think about the next steps. For now it is more important to adopt this workflow, adopt this mindset of discipline to keep the source code clean, learn the techniques for organizing the code base and ENJOY HAVING THE CONTROL over the impact of deployments, feature decisions and stress!
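
Such a toggle configuration can really start as small as a JSON file (a sketch; the feature names are made up):

{
  "features": {
    "new-checkout":    { "isEnabled": false, "temporary": true  },
    "premium-reports": { "isEnabled": true,  "temporary": false }
  }
}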

That’s it, I shared all of my thoughts regarding this subject: your journey of delivering continuously can start or continued 😉) now.

p.s. It is time for the one sentence about feature branches:
Feature toggles will never work with feature branches. Period. This means you have to decide: move to trunk based development or forget continuous development.

p.p.s. For the most languages exist feature toggle libraries, frameworks, even platforms, it is not necessary to write a new one. There are libraries for different complexities how the state can be calculated (like account state, persons, roles, time settings), just pick one.

Update:

As pointed out by Gergely on Twitter, there is a very good article on Martin Fowler's blog that describes the different feature toggles and the power of this technique extensively: Feature Toggles (aka Feature Flags)

David Tielke: [Webcast] Software Quality Part 1 - Introduction

It's webcast time again. After numerous requests over the last few days, I set up my studio equipment again today and recorded my .NET Day Franken talk as a webcast. Since the allotted 70 minutes were already very tight at the conference, I have split the whole thing into several episodes, which will be released over the coming days and weeks. Have fun with it!

David Tielke: .NET Day Franken 2019 - Content of My Session "Software Quality"

Earlier than in previous years, conference season started for me this time already in April, and right away with a new conference, the .NET Day Franken 2019. The community conference with almost 200 attendees was organized by the community for the tenth time in Nuremberg and offered, besides a great program and a super organization, above all a sensational location. I contributed a 70-minute talk on the topic of "software quality", which focused not only on the basics but above all on the various problems and their supposed solutions. At this point I would like to thank all attendees and of course the organizers for a first-class event. It was a lot of fun; I hope we will see each other again next year. Here are the slides of my talk.

Jürgen Gutsch: Implement Middlewares using Endpoint Routing in ASP.NET Core 3.0

If you have a middleware that needs to work on a specific path, you should implement it by mapping it to a route in ASP.NET Core 3.0, instead of just checking the path names. This post doesn't cover regular middlewares that need to work on all requests, or on all requests inside a Map or MapWhen branch.

At the Global MVP Summit 2019 in Redmond I attended the hackathon, where I worked on my GraphQL middlewares for ASP.NET Core. I asked Glen Condron for a review of the API and the way the middleware gets configured. He told me that we did it all right; we followed the proposed way to provide and configure an ASP.NET Core middleware. But he also told me that there is a new way in ASP.NET Core 3.0 to use this kind of middleware.

Glen asked James Newton-King, who works on the new Endpoint Routing, to show me how this needs to be done in ASP.NET Core 3.0. James pointed me to the ASP.NET Core Health Checks and explained the new way to go.

BTW: That kind of closes the loop: four summits ago, Damien Bowden and I were working on the initial drafts of the ASP.NET Core Health Checks together with Glen Condron. Awesome that this is now in production ;-)

The new ASP.NET Core 3.0 implementation of the GraphQL Middlewares is in the aspnetcore30 branch of the repository: https://github.com/JuergenGutsch/graphql-aspnetcore

About Endpoint Routing

My MVP fellow Steve Gordon had an early look into Endpoint Routing. His great post may help you to understand Endpoint Routing.

How it worked before:

Until now you used MapWhen() to map the middleware to a specific condition defined in a predicate:

Func<HttpContext, bool> predicate = context =>
{
    return context.Request.Path.StartsWithSegments(path, out var remaining) &&
                            string.IsNullOrEmpty(remaining);
};

return builder.MapWhen(predicate, b => b.UseMiddleware<GraphQlMiddleware>(schemaProvider, options));

(ApplicationBuilderExtensions.cs)

In this case the path is checked. Mapping is not only about paths, though; you can map based on any other kind of criteria derived from the HttpContext.
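
For example, the predicate could just as well look at a request header instead of the path (a variation of the snippet above; the header name is made up):

Func<HttpContext, bool> predicate = context =>
    context.Request.Headers.ContainsKey("X-GraphQL-Request");

return builder.MapWhen(predicate, b => b.UseMiddleware<GraphQlMiddleware>(schemaProvider, options));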

The much simpler Map() was also an option:

builder.Map(path, branch => branch.UseMiddleware<GraphQlMiddleware>(schemaProvider, options));

How this should be done now

In ASP.NET Core 3.0 this kind of mapping, where you listen on a specific endpoint, should be done using the IEndpointRouteBuilder. If you create a new ASP.NET Core 3.0 web application, MVC is now added to the Startup.cs a little differently than before:

app.UseRouting(routes =>
{
    routes.MapControllerRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
    routes.MapRazorPages();
});

The method MapControllerRoute() adds controller-based MVC and Web API. The new ASP.NET Core Health Checks, which also provide their own endpoint, are added like this as well. This means we now have Map() methods as extension methods on the IEndpointRouteBuilder instead of Use() methods on the IApplicationBuilder. It is still possible to use the Use() methods.

In the case of the GraphQL middleware it looks like this:

var pipeline = routes.CreateApplicationBuilder()
    .UseMiddleware<GraphQlMiddleware>(schemaProvider, options)
    .Build();

return routes.Map(pattern, pipeline)
    .WithDisplayName(_defaultDisplayName);

(EndpointRouteBuilderExtensions.cs)

Based on the current IEndpointRouteBuilder, a new IApplicationBuilder is created, on which we use the GraphQL middleware as before. We pass the ISchemaProvider and the GraphQlMiddlewareOptions as arguments to the middleware. The result is a RequestDelegate stored in the pipeline variable.

The configured endpoint pattern and the pipeline then get mapped to the IEndpointRouteBuilder. The small extension method WithDisplayName() sets the configured display name on the endpoint.

I needed to copy this extension method from the ASP.NET Core repository into my code base, because the current development build of ASP.NET Core didn't contain this method two weeks ago. I need to check the latest version ASAP.

In ASP.NET Core 3.0, the GraphQL and the GraphiQL middlewares can now be added like this:

app.UseRouting(routes =>
{
    if (env.IsDevelopment())
    {
        routes.MapGraphiQl("/graphiql");
    }
    
    routes.MapGraphQl("/graphql");
    
    routes.MapControllerRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
    routes.MapRazorPages();
});

Conclusion

The new ASP.NET Core 3.0 implementation of the GraphQL Middlewares is on the aspnetcore30 branch of the repository: https://github.com/JuergenGutsch/graphql-aspnetcore

This approach feels a bit different. In my opinion it clutters the Startup.cs a little. Previously we added one middleware after another, line by line, to the IApplicationBuilder. With this approach some middlewares are still registered on the IApplicationBuilder and others on the IEndpointRouteBuilder inside a lambda expression on a new IApplicationBuilder.

The other thing is that the order isn't really clear anymore. When will the middlewares inside UseRouting() be executed, and in which direction? I will dig deeper into this over the next months.

Jürgen Gutsch: #MVPSummit2019 - Impressions...

Also this year I was invited to attend the yearly Global MVP Summit in Redmond and Bellevue. It ran from Sunday until Thursday last week. As last year, I added two days before and after the summit to get some time to explore Seattle. This is a small summary of the eight days in the Seattle area.

Just two weeks before the summit started, there was the so-called #snowmageddon2019 in the north-west of the US: cold and a lot of snow, at least from the US perspective. But I was sure that when I arrived in Seattle it would be sunny and warm. And it was. I have never had a rainy day in Seattle. In Bellevue and Redmond I have, but never in Seattle. Last year I also stayed two nights before and two nights after the summit in downtown Seattle, and it was sunny then too, but rainy while staying in Bellevue. Anyway, Seattle is always sunny, and people are happy and friendly because of that.

Pre-Summit days in Seattle

As last year, I stayed the first two nights in the Green Tortoise Hostel in downtown Seattle near Pike Place. This is a cheap hostel; you need to share the room with six to eight other people. But it is impressive anyway. The weekend I arrived it was ComiCon in Seattle and Saint Patrick's Day, so the hostel was full of ComiCon attendees, people wearing green things, backpackers, and some MVPs.

I again met the South Korean Azure MVP in this hostel, like last year; he gave me the sticker of his Korean Azure user group. I also met him the two nights after the summit in the same hostel, as well as during the summit.

Even if the hostel is cheap compared with the hotels in Seattle, the location is absolutely awesome. If you leave the hostel, you will stumble into the only Starbucks restaurant that serves the Pike Place Special Reserve outside the Pike Place. Leaving the restaurant, you will stumble into the public market of Pike Place, where you can grab some pastries for breakfast. Then you can leave Pike Place and have breakfast in the sun in Victor Steinbrueck Park.

I arrived on Friday, took the Light Rail to downtown Seattle, checked in to the Green Tortoise, went for a walk through Pike Place, and had the first awesome burger at Lowell's Restaurant while enjoying the nice view of Puget Sound. Saturday started slowly with the breakfast described in the last paragraph. Later I joined some MVPs for the guided Market Experience tour, where I learned a lot about the market.

Did you know that the first Starbucks isn't really the first one, but the oldest one? Did you know that you need to have founded your business at Pike Place to get a spot to sell your stuff? Everything you want to sell on the market needs to be produced by yourself (except meat, sausage and fish, I think).

Later I joined some MVP friends for lunch and for a walk to the Space Needle. We had lunch at the Pike Place Brewery before, where I found sausages, sauerkraut and mashed potatoes on the menu: beer-braised sausages with fine apple sauerkraut. Seattle meets Bavaria. I needed to try it, and it was really yummy.

In the evening we had free beer at the hostel. With free beer and my laptop I started to merge almost all of the pull requests to the ASP.NET Core GraphQL Middlewares, answered almost all open issues and updated the dependencies of the project.

The Summit days in Bellevue and Redmond

Sunday also started slowly, before I took the express bus to Redmond where the summit hotels are located. I checked in to the Marriott Bellevue, where I shared the room with the famous Alex Witkowski. The room was awesome, with a great view of the Space Needle and a super modern, stylish sliding door to the bathroom that cannot be locked and was never really closed. It felt strange while sitting on the toilet, but that must be super modern for a $599 room ;-)

Sunday is the day when most of the MVPs register for the summit at the biggest summit hotel. Some soft-skill talks were held there too. The first parties organized by MVPs or tool vendors were on Saturday, so we joined them and met the first Microsofties and other famous MVPs. It got late, and the Monday got hard. Anyway, the actual summit started on Monday with a lot of technical sessions.

From Monday to Wednesday there were a lot of interesting technical sessions. Many of them really had a lot of value. Some others didn't contain new information for me, because most of the stuff in my area is openly discussed on GitHub, but they clarified some rumors anyway.

I really got into Razor Components, which is not about Blazor as I initially thought. Scott Hanselman also did a clarification post about it. [link] Razor Components is component-based development using Razor. It looks similar to React, even if it may be rendered on the server side as well as on the client side using Blazor. Awesome stuff.

Thursday was also a highlight for me: Thursday is hackathon day. I joined Jeff Fritz, who showed us his mobile streaming setup. I got a chance to talk to Jeff and to other Twitch streamers, like Emanuele Bartolesi. Besides that, I worked on the ASP.NET Core GraphQL middlewares and had a chance to get a review from Glen Condron. He also told me that the way a middleware is created changed in 3.0 for middlewares that handle a specific path; I'll write about it in one of the next posts. Glen and James Newton-King, who works on the new ASP.NET Core routing, supported me in getting it running for ASP.NET Core 3.0.

Post-Summit days in Seattle

On Thursday after the hackathon I moved back to Seattle into the Green Tortoise and again met the South Korean Azure MVP at the check-in. I used the night to work on the ASP.NET Core GraphQL middleware, to finish the GraphQL middleware registration using the route mapping.

Friday was shopping day. My wife always needs some pants from her favorite store in Seattle, and I needed to buy some souvenirs for the kids (usually some t-shirts). After this was done I decided to explore the International District and Chinatown, where I also had a quick lunch in one of the Asian restaurants. Chinatown was less colorful than expected, but nice anyway. An awesome detail: you know you are in Chinatown when the street names are printed in two languages.

I left Chinatown and unexpectedly stumbled into the old part of Seattle. Pioneer Square was surprisingly nice: old houses, small shops and pubs. One of the pubs sells a German stout beer, "Köstritzer", as well as "Biers" and "Brats".

I also found the "Berliner" döner and kebab restaurant, which is (as far as I know) the very first and the only real döner restaurant in the US:

In the evening I decided to go to the Hard Rock Cafe across the street for dinner. I was there for the first time. I don't get why this is a popular place: pretty loud, uncomfortable, and the food is good but not really special. Anyway, I continued to get the GraphiQL middleware (the GraphQL UI) running using the new route mapping and cleaned up all the changes. Free beer at the Green Tortoise and coding go together pretty well.

Saturday was the day to fly back home. The morning started with the annual JustCommunity summit at Lowell's Restaurant in the public market area of Pike Place. Kostia and I had breakfast and talked about the plans of INETA Germany and JustCommunity. Our goal: to have a strategy for JustCommunity by the end of the year. We also need to align the INETA tasks with the community support of the .NET Foundation.

Leaving Seattle

This was my fifth time in Seattle, which is one of the most impressive cities: pretty diverse, fascinating, and quite different from any other city in the US I've been to (not that many, unfortunately).

Leaving Seattle is a little bit like leaving home. In past years I didn't know why. Now I'm pretty sure it is because I always meet friends, community members and many other nice people at the summit. The summit is a little bit like an annual family meetup.

But one week without the family is hard as well, and it is time to go home to my lovely wife and the three boys :-)

Holger Schwichtenberg: Visual Studio 2019 Is Released Today

Microsoft will release version 2019 of its IDE this evening at 6 p.m.

Jürgen Gutsch: Git Flow - About, installing and using

People who know me also know that I'm a huge fan of consoles and CLIs. I use the dotnet CLI as well as the Angular CLI and the create-react-app CLI. Yeoman is also a tool I like. I own a Mac, but cannot really work with the Mac UI; I really prefer the terminal on the Mac. Git, too, is used in the console most of the time. The only situation where I don't use Git in the console is while resolving merge conflicts; I configured KDiff3 as the merge tool. I don't really need a graphical user interface for all the other Git tasks.

And so I also follow the Git Flow process in the console.

About Git Flow

In general Git Flow is a branching concept over Git. It is pretty clear and intuitive, but following this concept manually in Git is a bit hard and needs some time. Git Flow is now implemented in many graphical user interfaces like SourceTree. This reduces the overhead.

Git Flow is mainly about merging and branching. It defines two main branches: "master" as the production/release branch and "develop" as the working branch. The actual work is done in different types of supporting branches:

  • "feature" a branch created based on "develop" to implement new featues
    • will be merged back to "develop"
    • branch name pattern: feature/<name|ticket|#123-my-feature>
  • "release" a branch created based on "develop" to create a new release
    • the branch name gets the tag name
    • will create a tag
    • will be merged to "master" and "develop"
    • branch name pattern: release/<tag|version|1.2.0>
  • "hotfix" a branch created based on "master"
    • the branch name gets the tag name
    • will create a tag
    • will merge to "master" and "develop"
    • branch name pattern: hotfix/<tag|version|1.2.3>
  • "bugfix" less popular. We use "feature" to create bug fixes
    • not available in all tools
    • behaves like "feature"
  • "support" much less popular. We don't use it
    • not available in all tools
    • almost behaves like hotfixes

I propose to have a look into the Git Flow cheat sheet documentation to see how the branching concept works: http://danielkummer.github.io/git-flow-cheatsheet/

Git Flow is also available as a Git extension. It reduces branching, merging, releasing and tagging to just a single command and does all the needed tasks in the background for you. This CLI makes it super easy to follow Git Flow.

Install Git Flow as Git Extension

The installation is a bit annoying, because it needs some additional tools and a few extra steps for just a small Git extension.

To install it you need Cygwin, which is a console that gives you Linux-like tools on Windows. The easiest way to install Cygwin is to use Chocolatey, which is a package manager for Windows (think apt-get for Windows). You can also install it manually by running the installer, but you need to ensure that you also install cyg-get, wget and util-linux, which is much easier using Chocolatey.

To install Chocolatey follow the instructions on https://chocolatey.org.

Open a console and type the following commands:

choco install cygwin
choco install cyg-get

Once this is done you can use cyg-get to install the needed extensions for the Cygwin console.

Open the console and type the following commands:

cyg-get install wget
cyg-get install util-linux

Now Cygwin is ready to be used to install Git Flow. Type

cygwin

This will open the cygwin bash inside the current console.

Now you are able to run the installation of Git Flow. Copy the following command to the cygwin bash and press enter:

wget -q -O - --no-check-certificate https://raw.github.com/petervanderdoes/gitflow-avh/develop/contrib/gitflow-installer.sh install stable | bash

When this is done, exit the bash by typing exit and close the console by typing exit again. Closing the console and opening it again ensures that all the needed environment variables are available.

Open a new console and type git flow. You should now see the Git Flow CLI help like this:

Every time you check out or create a new repository you need to run git flow init to enable Git Flow.

Using this command you set up Git Flow on an existing repository by configuring the different branch prefixes and specifying the two main branches. I would propose to choose the default prefixes and names:
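The following is just a rough sketch of what the interactive setup looks like; the exact prompts depend on the installed Git Flow version, and the -d switch (if available in your version) accepts all defaults without asking:

git flow init

Branch name for production releases: [master]
Branch name for "next release" development: [develop]
Feature branches? [feature/]
Release branches? [release/]
Hotfix branches? [hotfix/]
Support branches? [support/]
Version tag prefix? []

git flow init -d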

Working with Git Flow

Using Git Flow is pretty easy with this CLI. Let's assume we need to start working on a feature called "Implement validation". We can now write a command like this:

git flow feature start implement-validation

This will work as expected:

Since most of us are using a planning tool like Jira or TFS, it makes more sense to use the ticket number as the feature name here. If you use TFS, I would propose to add the work item type to the number, as shown in the commands after this list:

  • Jira: PROJ-101
  • TFS: Task-34212
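Starting the feature branches based on these ticket numbers could then look like this (the numbers are just the placeholders from the list above):

git flow feature start PROJ-101
git flow feature start Task-34212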

This helps to keep the branch names clean and you don't start messing around with long or wrong branch names. Git Flow usually deletes the feature branch after merging it back, so the list of branches will never grow too long. Anyway, I learned in the past few years that it is much easier to follow ticket numbers than weirdly named branches, because we talk about the current tickets every day in the daily scrum meeting.

All the commands that are not related to branches can be done using the regular Git CLI. That means commands to commit, to push and so on.

Git Flow will merge the branches when you finish them. It doesn't work with rebase or other approaches, which means it'll take over the entire history of the feature branch. Because of this I would also propose to add the ticket number to the commit messages like this: "PROJ-101: adds validation to the form". This makes it easy to follow the history in case it is needed.
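Such a commit could simply look like this (the message is just the example from above):

git commit -m "PROJ-101: adds validation to the form"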

To finish a feature you should first merge the latest changes of the development branch in:

git fetch --all
git merge develop
git flow feature finish

If you don't add the feature name to the git flow feature finish command, Git Flow will try to close the current feature branch and will write out a message in case the current branch is not a feature branch.

I would propose to always merge the latest changes of develop into the current feature branch, to solve possible conflicts within the feature branch instead of in the develop branch. This way the merge to develop will almost never have a conflict.

I showed how to work with Git Flow using a feature branch, but it works the same way with the other branch types, except that for the release and hotfix branches you need to pass the tag name instead of a feature name. This should be the version number of the release or the hotfix.
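A short sketch of a release cycle, assuming 1.2.0 as the version and therefore as the tag name:

git flow release start 1.2.0
# ... final changes, version bumps, etc. ...
git flow release finish 1.2.0

The finish command merges the release branch to master and develop and creates the tag.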

While finishing these two branches, Git Flow will ask you for a tag message. After finishing, you need to push both the master and the develop branch, as well as the tags:

git push --all
git push --tags

For more information about the Git Flow commands please follow the documentation on Daniel Kummer's Git Flow cheat sheet: http://danielkummer.github.io/git-flow-cheatsheet/. (Which is BTW the best Git Flow documentation ever.)

Conclusion

I really love the CLI help of this tool. It is not only descriptive but also explanatory, in the same way the Git CLI explains things. It also provides suggestions in case a command is misspelled.

Git Flow helps me to speed up the branching and merging flows and to follow the Git Flow process. I proposed to use Git Flow in the company and it works pretty well there. And I learned a lot about how this process works in production.

As written sometime in the past, it also helps me to write my blog. I really use Git Flow to organize the posts I'm working on. I create a feature per post and a hotfix in case I need to fix a post or something else on the blog. I use SemVer to version my releases and hotfixes: every post increases the feature number and a hotfix increases the patch number. The feature number is also the number of posts in my blog, and the number of open feature branches is the number of posts I'm working on. This way I can work on many posts separately and I'm able to release the posts separately.

Code-Inside Blog: Load hierarchical data from MSSQL with recursive common table expressions

Scenario

We have a pretty simple scenario: We have a table with a simple Id + ParentId schema and some demo data in it. I have seen this design quite a lot in the past and in the relational database world this is the obvious choice.

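The original post shows the demo table and its data as a screenshot. A minimal sketch of what such a table could look like, matching the column names used in the queries below (the concrete demo data is an assumption, not the author's original data set):

CREATE TABLE Demo (
    [Id]       INT NOT NULL PRIMARY KEY,
    [ParentId] INT NULL,           -- NULL marks a root entry
    [Name]     NVARCHAR(100) NOT NULL
);

INSERT INTO Demo ([Id], [ParentId], [Name])
VALUES (1, NULL, 'Root'),
       (2, 1, 'Child'),
       (7, 2, 'Grandchild');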

Problem

Each data entry is really simple to load or manipulate: just load the target element and change the ParentId for a move action, etc. A more complex problem is how to load a whole "data tree". Let's say I want to load all children or parents of a given Id. You could load everything, but if your dataset is large enough, this operation will perform poorly and might kill your database.

Another naive way would be to query this with code from a client application, but if your “tree” is big enough, it will consume lots of resources, because for each “level” you open a new connection etc.

Recursive Common Table Expressions!

Our goal is to load the data in one go, as effectively as possible - without using stored procedures(!). In the Microsoft SQL Server world we have this handy feature called "common table expressions (CTE)". A common table expression can be seen as a function inside a SQL statement. This function can invoke itself, and then we call it a "recursive common table expression".

The syntax itself is a bit odd, but works well and you can enhance it with JOINs from other tables.

Scenario A: From child to parent

Let’s say you want to go the tree upwards from a given Id:

WITH RCTE AS
    (
    SELECT anchor.Id as ItemId, anchor.ParentId as ItemParentId, 1 AS Lvl, anchor.[Name]
    FROM Demo anchor WHERE anchor.[Id] = 7
    
    UNION ALL
    
    SELECT nextDepth.Id  as ItemId, nextDepth.ParentId as ItemParentId, Lvl+1 AS Lvl, nextDepth.[Name]
    FROM Demo nextDepth
    INNER JOIN RCTE recursive ON nextDepth.Id = recursive.ItemParentId
    )
                                    
SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE as hierarchie

The anchor.[Id] = 7 is our starting point and should be given as a SQL parameter. The WITH statement starts our function description, which we called "RCTE". In the first SELECT we just load everything from the target element. Note that we add a "Lvl" column, which starts at 1. The UNION ALL is needed (at least we were not 100% sure if there are other options). In the next SELECT we are doing a join based on the Id = ParentId schema and we increase the "Lvl" column for each level. The last line inside the common table expression uses the "recursive" feature.

Now we are done and can use the CTE like a normal table in our final statement.

Result:


We now only load the “path” from the child entry up to the root entry.

If you ask why we introduce the "Lvl" column: with this column it is really easy to see each "step" and it might come in handy in your client application.

Scenario B: From parent to all descendants

With a small change we can go the other way around and load all descendants of a given Id.

The logic itself is more or less identical, we changed only the INNER JOIN RCTE ON …

WITH RCTE AS
    (
    SELECT anchor.Id as ItemId, anchor.ParentId as ItemParentId, 1 AS Lvl, anchor.[Name]
    FROM Demo anchor WHERE anchor.[Id] = 2
    
    UNION ALL
    
    SELECT nextDepth.Id  as ItemId, nextDepth.ParentId as ItemParentId, Lvl+1 AS Lvl, nextDepth.[Name]
    FROM Demo nextDepth
    INNER JOIN RCTE recursive ON nextDepth.ParentId = recursive.ItemId
    )
                                    
SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE as hierarchie

Result:


In this example we only load all children from a given id. If you point this to the “root”, you will get everything except the “alternative root” entry.

Conclusion

Working with trees in a relational database might not "feel" as good as in a document database, but that doesn't mean that such scenarios need to perform badly. We use this code at work for some bigger datasets and it works really well for us.

Thanks to my colleague Alex - he discovered this wild T-SQL magic.

Hope this helps!

Albert Weinert: Free live ASP.NET Core authentication and authorization deep dive on March 31, 2019

Free of charge, but not for nothing!

On my Twitch live coding channel I announced a follower goal: at one hundred followers I would do a live ASP.NET Core authentication and authorization deep dive. This goal has now been reached, and now I have to follow up with action.

The action starts on Sunday, March 31, 2019 at 11 am, when the deep dive goes live on air together with Jürgen Gutsch, who kindly makes himself available as moderator, question asker and link to the chat.

What can you expect?

2-3 hours of possibilities, dos and don'ts around the topic; you will also be able to raise questions, problems and wishes, either in advance or live in the chat. From cookie authentication to OpenID Connect, from protection against common attacks from the net to the building blocks that are available. Many hints about what you can do wrong, why, and how to do it right instead.

It will not be a pure lecture, but a casual dialogue between Jürgen, the chat and me, and of course I will also show and write a lot of code.

Do you have questions about the topic?

Then the best way is to leave them at the corresponding GitHub issue, or via Twitter with the hashtag #deepdivealbert. Alternatively, post them here as a comment. Of course you can also get involved during the stream; for that you need a Twitch account and have to be logged in.

Do you have to sign up at Twitch?

No, you can watch the stream without an account, but then you cannot take part in the chat.

The recording is now online.

Uli Armbruster: Freigrenze vs. Freibetrag

Today my tax advisor pointed out to me the important distinction between Freigrenze (exemption limit) and Freibetrag (tax-free allowance), two terms I had used synonymously until now.

An example:

  • €40 is the respective limit
  • The purchase exceeds the limit by €1, i.e. the total cost is €41

Freigrenze (exemption limit)

In the case of a Freigrenze, as soon as the limit is exceeded the whole amount has to be taxed, i.e. the full €41 is taxable.

Freibetrag (tax-free allowance)

In the case of a Freibetrag, only the amount exceeding the limit has to be taxed, i.e. €1.

Promotional giveaways (up to €10 net), gifts to business partners (€35 per person and year) and benefits in kind for employees (€44 per employee and month, not transferable to the following month) are Freigrenzen. Unfortunately, most limits in a business context are Freigrenzen.

A Freibetrag is, for example, the so-called Rabattfreibetrag (employee discount allowance), where the employer grants its employees discounts on its own goods or services.

Michael Schwarz: After a longer break - now about Apple topics on Twitter

After a longer break I have now switched to Apple topics on Twitter. You can follow me at https://twitter.com/DieApfelFamilie.


Christina Hirth: Continuous Delivery Is a Journey – Part 2

After describing the context a little bit in part one, it is time to look at the individual steps the source code must pass in order to be delivered to the customers. (I'm sorry, but it is quite a long part 🙄)

The very first step starts with pushing all the current commits to master (if you work with feature branches you will probably encounter a new level of self-made complexity which I don't intend to discuss here).

This action triggers the first checks and quality gates like licence validation and unit tests. If all checks are “green” the new version of the software will be saved to the repository manager and will be tagged as “latest”.

Successful push leads to a new version of my service/pkg/docker image

At this moment the continuous integration is done, but the features are far from being used by any customer. I have a first feedback that I didn't break any tests or other basic constraints, but that's all, because nobody can use the features; they are not deployed anywhere yet.

Well let Jenkins execute the next step: deployment to the Kubernetes environment called integration (a.k.a. development)

Continuous delivery to the first environment including the execution of first acceptance tests

At this moment all my changes are tested to see whether they work together with the currently integrated features developed by my colleagues and whether the new features are evolving in the right direction (or are done and ready for acceptance).

This is not bad, but what if I want to be sure that I didn't break the "platform", what if I don't want to disturb everybody else working on the same product because I made some mistakes – but I still want to be a human, ergo be able to make mistakes 😉? This means that the behavioral and structural changes introduced by my commits should be tested before they land on integration.

These must obviously be a different set of tests. They should test whether the whole system (composed of a few microservices, each having its own data persistence, and one or more UI apps) is working as expected, is resilient, is secure, etc.

At this point the power of Kubernetes (k8s) and ksonnet came as a huge help. Having k8s in place (and having the infrastructure as code), it is almost a no-brainer to set up a new environment to wire up the single systems in isolation and execute the system tests against it. This needs not only the k8s part as code but also the resources deployed and running on it. With ksonnet, every service, deployment, ingress configuration (which manages external access to the services in a cluster) or config map can be defined and configured as code. ksonnet not only supports deploying to different environments but also offers the possibility to compare them. There are a lot of tools offering these possibilities, not only ksonnet. It is important to choose a fitting tool, and it is even more important to invest the time and effort to configure everything as code. This is a must-have in order to achieve real automation and continuous deployment!

Good developer experience also means simplified continuous deployment

I will not include any ksonnet examples here; they have a great documentation. What is important to realize is the opportunity offered by such an approach: if everything is code, then every change can be checked in. Everything checked in can be observed/monitored, can trigger pipelines and/or events, can be reverted, can be commented on – and, the feature that helped us in our solution, can be tagged.

What happens in continuous delivery? Some change in the VCS triggers a pipeline, the fitting version of the source code is loaded (either as source code like ksonnet files, or as a package or docker image), the configured quality gate checks are verified (the runtime environment is wired up, the specs with the referenced version are executed) and in case of success the artifact is tagged as "thumbs up" and promoted to the next environment. We started doing this manually to gather enough experience to automate the process.

Deploy manually the latest resources from integration to the review stage

If you have all this working, you have finished the part with the biggest effort. Now it is time to automate and generalize the single steps. After continuous integration the only changes occur in the ksonnet repo (all other source code changes are done before), which is called the deployment repo here.

Roll out, test and eventually roll back the system ready for review

I think this post is already too long. In the next part (I think it will be the last one) I would like to write about the last essential method: how to deploy to production without annoying anybody (no secret here, this is what feature toggles were invented for 😉), and about some open questions and decisions we encountered on our journey.

Every graphic is realized with PlantUML, thank you very much!

to be continued …

Golo Roden: Introduction to Node.js, Episode 26: Let's code (comparejs)

Like other programming languages, JavaScript has operators for comparing values. Unfortunately, the way they work often runs counter to intuition. So why not rewrite the comparison operators as a module and pay attention to predictable behavior?

Christina Hirth: Continuous Delivery Is a Journey – Part 1

Last year my colleagues and I had the pleasure to spend 2 days with @hamvocke and @diegopeleteiro from @thoughtworks reviewing the platform we created. One essential part of our discussions was about CI/CD described like this: “think about continuous delivery as a journey. Imagine every git push lands on production. This is your target, this is what your CD should enable.”

Even if (or maybe because) this thought scared the hell out of us, it became our vision for the next few months, because we saw the great opportunities we would gain if we were able to work this way.

Let me describe the context we were working in:

  • Four business teams, 100% self-organized, owning 1…n Self-contained Systems, creating microservices running as Docker containers orchestrated with Kubernetes, hosted on AWS.
  • Boundaries (as in Domain Driven Design) defined based on the business we were in.
  • Each team having full ownership and full accountability for their part of business (represented by the SCS).
  • Basic heuristics regarding source code organisation: “share nothing” about business logic, “share everything” about utility functions (in OSS manner), about experiences you made, about the lessons you learned, about the errors you made.
  • Ensuring the code quality and the software quality is 100% team responsibility.
  • You build it, you run it.
  • One Platform-as-a-Service team to enable these business teams to deliver features fast.
  • GitLab as VCS, Jenkins as build server, Nexus as package repository
  • Trunk-based development, no cherry picking, “roll fast forward” over roll back.
Teams
4 Business Teams + 1 Platform-as-a-Service Team = One Product

The architecture we have chosen was meant to support our organisation: independent teams able to work and deliver features fast and independently. They should decide themselves when and what they deploy. In order to achieve this we defined a few rules regarding inter-system communication. The most important ones are:

  • Event-driven Architecture: no synchronous communication only asynchronous via the Domain Event Bus
  • Non-blocking systems: every SCS must remain (reduced) functional even if all the other systems are down

We had only a couple of exceptions to these rules. As an example: authentication doesn't really make sense in an asynchronous manner.

Working in self-organized, independent teams is a really cool thing. But

with great power there must also come great responsibility

Uncle Ben to his nephew

Even though we set some guard rails regarding the overall architecture, the teams still had the ownership of the internal architecture decisions. Since at the beginning we didn't have continuous delivery in place, every team alone was responsible for deploying its systems. Due to the missing automation we were not only predestined to make human errors, but we were also blind to the couplings between our services. (And of course we spent a lot of time doing stuff manually instead of letting Jenkins or GitLab or some other tool do this stuff for us 🤔 )

One example: every one of our systems had at least one React app and a GraphQL API as the main communication (read/write/subscribe) channel. One of the best things about GraphQL is the possibility to include the GraphQL schema in the React app and this way have the API interface definition included in the client application.

Is this not cool? It can be. Or it can lead to some very smelly behavior, to really tight coupling and to the inability to deploy the app and the API independently. And just like my friend @etiennedi says: "If two services cannot be deployed independently they aren't two services!"

This was the first lesson we have learned on this journey: If you don’t have a CD pipeline you will most probably hide the flaws of your design.

One can surely ask "what is the problem with manual deployment?" – nothing, if you have only a few services to handle, if everyone in your team knows about these couplings and dependencies and is able to execute the very precise deployment steps to minimize the downtime. But otherwise? This method doesn't scale, this method is not very professional – and the biggest problem: this method ignores the possibilities offered by Kubernetes to safely roll out, take down, or scale everything you have built.

Having an automated, standardized CD pipeline as described at the beginning – with the goal that every commit lands on production in a few seconds – forces everyone to think about the consequences of his or her commit, to write backwards compatible code, and to become a more considerate developer.

to be continued …

Stefan Henneken: MEF Part 3 – Life cycle management and monitoring

Part 1 took a detailed look at binding of composable parts. In an application, however, we sometimes need to selectively break such bindings without deleting the entire container. We will look at interfaces which tell parts whether binding has taken place or whether a part has been deleted completely.

The IPartImportsSatisfiedNotification interface

For parts, it can be helpful to know when binding has taken place. To achieve this, we implement an interface called IPartImportsSatisfiedNotification. This interface can be implemented in both imports and exports.

[Export(typeof(ICarContract))]
public class BMW : ICarContract, IPartImportsSatisfiedNotification
{
    // ...
    public void OnImportsSatisfied()
    {
        Console.WriteLine("BMW import is satisfied.");
    }
}
class Program : IPartImportsSatisfiedNotification
{
    [ImportMany(typeof(ICarContract))]
    private IEnumerable<Lazy<ICarContract>> CarParts { get; set; }
 
    static void Main(string[] args)
    {
        new Program().Run();
    }
    void Run()
    {
        var catalog = new DirectoryCatalog(".");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
        foreach (Lazy<ICarContract> car in CarParts)
            Console.WriteLine(car.Value.StartEngine("Sebastian"));
        container.Dispose();
    }
    public void OnImportsSatisfied()
    {
        Console.WriteLine("CarHost imports are satisfied.");
    }
}

Sample 1 (Visual Studio 2010) on GitHub

When the above program is run, the method OnImportsSatisfied() of the host will be executed after container.ComposeParts() has been called. If this is the first time an export has been accessed, the export will first run the constructor, then its OnImportsSatisfied() method, and finally its StartEngine() method.

If we don’t use the Lazy<T> class, the sequence in which the methods are called is somewhat different. In this case, after executing the container.ComposeParts() method, the constructor, and then the OnImportsSatisfied() method will first be executed for all exports. Only then the OnImportsSatisfied() method of the host will be called, and finally the StartEngine() method for all exports.

Using IDisposable

As usual in .NET, the IDisposable interface should also be implemented by exports. Because the Managed Extensibility Framework manages the parts, only the container containing the parts should call Dispose(). If the container calls Dispose(), it also calls the Dispose() method of all of the parts. It is therefore important to call the container’s Dispose() method once the container is no longer required.

Releasing exports

If the creation policy is defined as NonShared, multiple instances of the same export will be created. These instances will then only be released when the entire container is destroyed by using the Dispose() method. With long-lived applications in particular, this can lead to problems. Consequently, the CompositionContainer class possesses the methods ReleaseExports() and ReleaseExport(). ReleaseExports() destroys all parts, whilst ReleaseExport() releases parts individually. If an export has implemented the IDisposable interface, its Dispose() method is called when you release the export. This allows selected exports to be removed from the container, without having to destroy the entire container. The ReleaseExports() and ReleaseExport() methods can only be used on exports for which the creation policy is set to NonShared.

In the following example, the IDisposable interface has been implemented in each export.

using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarBMW
{
    [Export(typeof(ICarContract))]
    public class BMW : ICarContract, IDisposable
    {
        private BMW()
        {
            Console.WriteLine("BMW constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the BMW.", name);
        }
        public void Dispose()
        {
            Console.WriteLine("Disposing BMW.");
        }
    }
}

The host first binds all exports to the import. After calling the StartEngine() method, we use the ReleaseExports() method to release all of the exports. After re-binding the exports to the import, this time we remove the exports one by one. Finally, we use the Dispose() method to destroy the container.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.NonShared)]
        private IEnumerable<Lazy<ICarContract>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
 
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
 
            Console.WriteLine("");
            Console.WriteLine("ReleaseExports.");
            container.ReleaseExports<ICarContract>(CarParts);
            Console.WriteLine("");
 
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
 
            Console.WriteLine("");
            Console.WriteLine("ReleaseExports.");
            foreach (Lazy<ICarContract> car in CarParts)
                container.ReleaseExport<ICarContract>(car);
 
            Console.WriteLine("");
            Console.WriteLine("Dispose Container.");
            container.Dispose();
        }
    }
}

The program output therefore looks like this:

(Screenshot: CommandWindowSample02)

Sample 2 (Visual Studio 2010) on GitHub

Golo Roden: Introduction to Node.js, Episode 25: Let's code (is-subset-of)

If you want to know in JavaScript whether an array or an object is a subset of another array or object, there is no easy way to find out – especially not if a recursive analysis is desired. So why not develop a module for exactly that purpose?

André Krämer: Looking for reinforcements at Quality Bytes GmbH in Sinzig (software developer .NET, software developer Angular, Xamarin, ASP.NET Core)

This could be your new desk. In the summer of 2018, together with a partner, I founded Quality Bytes GmbH in Sinzig am Rhein, located between Bonn and Koblenz. Since then we have been developing exciting solutions in the web and mobile space in a team of four developers. We rely on modern technologies and tools such as ASP.NET Core, Angular, Xamarin, Azure DevOps, git, TypeScript and C#. We currently have several positions to fill.

Code-Inside Blog: Check Scheduled Tasks with Powershell

Task Scheduler via Powershell

Let’s say we want to know the latest result of the “GoogleUpdateTaskMachineCore” task and the corresponding actions.


All you have to do is this (in a Run-As-Administrator PowerShell console):

Get-ScheduledTask | where TaskName -EQ 'GoogleUpdateTaskMachineCore' | Get-ScheduledTaskInfo

The result should look like this:

LastRunTime        : 2/26/2019 6:41:41 AM
LastTaskResult     : 0
NextRunTime        : 2/27/2019 1:02:02 AM
NumberOfMissedRuns : 0
TaskName           : GoogleUpdateTaskMachineCore
TaskPath           : \
PSComputerName     :

Be aware that the “LastTaskResult” might be displayed as an integer. The full “result code list” documentation only lists the hex value, so you need to convert the number to hex.
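One way to do the conversion is PowerShell's format operator; the error code below is just an example value, not output from the task above:

PS C:\> '0x{0:X8}' -f 2147942402
0x80070002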

Now, if you want to access the corresponding actions you need to work with the “actual” task like this:

PS C:\WINDOWS\system32> $task = Get-ScheduledTask | where TaskName -EQ 'GoogleUpdateTaskMachineCore'
PS C:\WINDOWS\system32> $task.Actions


Id               :
Arguments        : /c
Execute          : C:\Program Files (x86)\Google\Update\GoogleUpdate.exe
WorkingDirectory :
PSComputerName   :

If you want to dig deeper, just checkout all the properties:

PS C:\WINDOWS\system32> $task | Select *


State                 : Ready
Actions               : {MSFT_TaskExecAction}
Author                :
Date                  :
Description           : Keeps your Google software up to date. If this task is disabled or stopped, your Google
                        software will not be kept up to date, meaning security vulnerabilities that may arise cannot
                        be fixed and features may not work. This task uninstalls itself when there is no Google
                        software using it.
Documentation         :
Principal             : MSFT_TaskPrincipal2
SecurityDescriptor    :
Settings              : MSFT_TaskSettings3
Source                :
TaskName              : GoogleUpdateTaskMachineCore
TaskPath              : \
Triggers              : {MSFT_TaskLogonTrigger, MSFT_TaskDailyTrigger}
URI                   : \GoogleUpdateTaskMachineCore
Version               : 1.3.33.23
PSComputerName        :
CimClass              : Root/Microsoft/Windows/TaskScheduler:MSFT_ScheduledTask
CimInstanceProperties : {Actions, Author, Date, Description...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties

If you have worked with PowerShell in the past this blog post should be "easy", but it took me a while to find the result code and to check whether the action was correct or not.

Hope this helps!

Golo Roden: Introduction to Node.js, Episode 24: Let's code (typedescriptor)

JavaScript's typeof operator has some weaknesses: for example, it cannot distinguish between objects and arrays and it incorrectly identifies null as an object. A custom module that reliably identifies and describes types can help.

Jürgen Gutsch: Problems using a custom Authentication Cookie in classic ASP.​NET

A customer of mine created their own authentication service that combines various login mechanisms in their on-premise application environment. This central service combines authentication via Active Directory, classic ASP.NET Forms Authentication and a custom login via the number of an access card.

  • Active Directory (For employees only)
  • Forms Authentication (against a user store in the database for extranet users and against the AD for employees via extranet)
  • Access badges (for employees, this authentication results in lower access rights)

This worked pretty nicely in their environment, until I created a new application which needs to authenticate against this service and was built using ASP.NET 4.7.2.

BTW: Unfortunately I couldn't use ASP.NET Core here because I needed to reuse specific MVC components that are shared between all the applications.

I also wrote "classic ASP.NET" which feels a bit wired. I worked with ASP.NET for a long time (.NET 1.0) and still work with ASP.NET for specific customers. But it really is kinda classic since ASP.NET Core is out and since I worked with ASP.NET Core a lot as well.

How the customer solution works

I cannot go into the deep details, because this is the customer's code; you only need to get the idea.

The reason why it didn't work with the new ASP.NET Framework is that they use a custom authentication cookie that is based on ASP.NET Forms Authentication. I'm pretty sure that when the authentication service was created they didn't know about ASP.NET Identity, or it didn't exist yet. They created a custom Identity that stores all the user information as properties. They build an authentication ticket out of it and use Forms Authentication to encrypt and store that cookie. The cookie name is customized in the web.config, which is not an issue. All the apps share the same encryption information.
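A minimal sketch of what the cookie creation on the service side might look like, assuming the custom user model is serialized into the UserData field of a forms authentication ticket (the variable names, the expiration and the user name are placeholders, not the customer's actual code):

// sketch: serialize the custom user model into the UserData field of the ticket
var serializer = new JavaScriptSerializer();
var userData = serializer.Serialize(serMod); // serMod: instance of CustomUserSerializeModel

var ticket = new FormsAuthenticationTicket(
    1,                        // ticket version
    "john.doe",               // user name (placeholder)
    DateTime.Now,             // issue date
    DateTime.Now.AddHours(8), // expiration (placeholder)
    false,                    // isPersistent
    userData);                // the serialized custom user data

var encryptedTicket = FormsAuthentication.Encrypt(ticket);
Response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket));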

The client applications that use the central authentication service read that cookie, decrypt the information using Forms Authentication and deserialize the data into that custom authentication ticket that contains the user information. The user then gets created, stored into the User property of the current HttpContext and is authenticated in the application.

This sounds pretty straightforward and it is working well, except in newer ASP.NET versions.

How it should work

The best way to use the authentication cookie would be to use the ASP.NET Identity mechanisms to create that cookie. After the authentication happened on the central service, the needed user information should have been stored as claims inside the identity object, instead of as properties in a custom Identity object. The authentication cookie should have been stored using the forms authentication mechanism only, without a custom authentication ticket. Forms authentication is able to create that ticket including all the claims.

On the client applications, forms authentication would have read the cookie and created a new Identity including all the claims defined in the central authentication service. The forms authentication module would have stored the user in the current HttpContext as well.

Less code, much easier. IMHO.

What is the actual problem?

The actual problem is that the client applications read the authentication cookie from the cookie collection in Application_PostAuthenticateRequest:

// removed logging and other overhead

protected void Application_PostAuthenticateRequest(Object sender, EventArgs e)
{
    var serMod = default(CustomUserSerializeModel);

	var authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
	if (authCookie != null || Request.IsLocal)
	{
		var ticket = FormsAuthentication.Decrypt(authCookie.Value); 
		var serializer = new JavaScriptSerializer();
		serMod = serializer.Deserialize<CustomUserSerializeModel>(ticket.UserData);
    }
    
    // some fallback code ...
    
    if (serMod != null)
	{
		var user = new CustomUser(serMod);
		var cultureInfo = CultureInfo.GetCultureInfo(user.Language);

		HttpContext.Current.User = user;
        Thread.CurrentThread.CurrentCulture = cultureInfo;
        Thread.CurrentThread.CurrentUICulture = cultureInfo;
	}
    
    // some more code ...
}

In newer ASP.NET Frameworks the authentication cookie gets removed from the cookie collection after the user was authenticated.

Actually I have no idea since which version the cookie gets removed, but it is a good thing for security reasons anyway; there is just no information about it in the release notes since ASP.NET 4.0.

Anyway the cookie collection doesn't contain the authentication cookie anymore and the cookie variable is null if I try to read it out of the collection.

BTW: The cookie is still in the request headers and could be read manually. But because of the encryption it would be difficult to read it that way.

I tried to solve this problem by reading the cookie in Application_AuthenticateRequest. This is also not working, because the FormsAuthenticationModule has already read the cookie before that.

The next try was to read it in Application_BeginRequest. This generally works: I get the cookie and I can read it. But because the cookie is configured as the authentication cookie, the FormsAuthenticationModule tries to read it afterwards and fails. It'll set the User to null, because there is an authentication cookie available which doesn't contain valid information from its point of view. Which also kinda makes sense.

So this is not the right solution as well.

I worked on that problem for almost four months. (Not the complete four months, but for many hours within those four months.) I compared applications and other solutions. Because there was no hint about the removal of the authentication cookie and because it was working in the old applications, I was pretty confused about the behavior.

I studied the source code of ASP.NET to get the solution. And there is one.

And finally the solution

The solution is to read the cookie in FormsAuthentication_OnAuthenticate in the global.asax and not to store the user in the current context, but in the User property of the event arguments. The user then gets stored in the context by the FormsAuthenticationModule, which also executes this event handler.

// removed logging and other overhead

protected void FormsAuthentication_OnAuthenticate(Object sender, FormsAuthenticationEventArgs args)
{
	AuthenticateUser(args);
}

public void AuthenticateUser(FormsAuthenticationEventArgs args)
{    
	var serMod = default(CustomUserSerializeModel);

	var authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
	if (authCookie != null || Request.IsLocal)
	{
		var ticket = FormsAuthentication.Decrypt(authCookie.Value); 
		var serializer = new JavaScriptSerializer();
		serMod = serializer.Deserialize<CustomUserSerializeModel>(ticket.UserData);
    }
    
    // some fallback code ...
    
    if (serMod != null)
	{
		var user = new CustomUser(serMod);
		var cultureInfo = CultureInfo.GetCultureInfo(user.Language);

		args.User = user; // <<== this does the thing!
        Thread.CurrentThread.CurrentCulture = cultureInfo;
        Thread.CurrentThread.CurrentUICulture = cultureInfo;
	}
    
    // some more code ...
}

That's it.

Conclusion

Please don't create custom authentication cookies; try the forms authentication and ASP.NET Identity mechanisms first. This is much simpler and won't break because of future changes.

Also, please don't write a custom authentication service, because there is already a good one out there that is almost the standard. Have a look at IdentityServer, which also provides the option to handle different authentication mechanisms using common standards and technologies.

If you really need to create a custom solution, be careful and know what you are doing.

Jürgen Gutsch: Thoughts about repositories and ORMs or why we love rocket science!

The last architectural brain dump I did in my blog was more than three years ago. At that time it was my German blog, which was shut down unfortunately. Anyway, this is another brain dump. This time I want to write about the sense of the repository pattern in combination with an object relational mapper (ORM) like Entity Framework (EF) Core.

A Brain Dump is a blog post where I write down a personal opinion about something. Someone surely has a different opinion and it is absolutely fine. I'm always happy to learn about the different opinions and thoughts. So please tell me afterwards in the comments.

In the past years I had some great discussions about the sense and nonsense of the repository pattern. Mostly offline, but also online on twitter or in the comments of this blog. I also followed discussions on twitter, in Jeff Fritz's stream. (Unfortunately I can't find all the links to the online discussions anymore.)

My point is that you don't need to use repositories if you use a unit of work. It is not only my idea, it also makes absolute sense to me, and I prefer not to use repositories in case I use EF or EF Core. There are many reasons. Let us look at them.

BTW: One of the leads of the .NET user group Bern was one of the first persons who pointed me to this thought many years ago, while I was complaining about an early EF version in my old blog.

YAGNI - you ain't gonna need it

In the classic architecture pattern you had three layers: UI, business and data. That made kind of sense in the past, in a world of monolithic applications without an ORM. At that time you wrapped all of the SQL and data mapping stuff into the data layer. The actual work with the data was done in the business layer and the user interacted with the data in the UI. Later these data layers became more and more generic or turned into repositories.

BTW: I too created generic data layers in the past, which generated the SQL based on the type the data needed to be mapped to. I used the property information of the types as well as attributes, just before ORMs were a thing in .NET. Oh... yes... I created OR mappers, but I didn't really care back then ;-)

Wait... What is a data layer for, if you already use an ORM?

To encapsulate the ORM? Why would you do that? To change the underlying ORM, if needed? When did you ever change the ORM, and why?

These days you don't need to change the ORM in case you change the database. You only need to change the data provider of the ORM, because the ORM is generic and able to access various database systems.

To not have ORM dependencies in the business and UI layers? You'll ship your app including all dependencies anyway.

To test the business logic more easily in an isolated way, without the ORM? This is possible anyway, and you would need to test the repositories in an isolated way as well. Just mock the DbContext.
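With EF Core, for example, that can be as simple as running the tests against the InMemory provider. A minimal sketch, assuming AppDbContext has a constructor that accepts DbContextOptions and using the AwesomeService shown further down in this post:

// sketch: Microsoft.EntityFrameworkCore.InMemory provides UseInMemoryDatabase
var options = new DbContextOptionsBuilder<AppDbContext>()
    .UseInMemoryDatabase(databaseName: "AwesomenessTests")
    .Options;

using (var dbContext = new AppDbContext(options))
{
    dbContext.Awesomenesses.Add(new Awesomeness { Id = 1 });
    dbContext.SaveChanges();

    var service = new AwesomeService(dbContext);
    service.UpdateAwesomeness(1);
}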

You ain't gonna need an additional layer that you also have to maintain and test. In most cases this is additional, senseless code. It just increases the lines of code and only makes sense if you get paid for code instead of solutions (IMHO).

KISS - keep it simple and stupid

In almost all cases, the simplest solution is the best one. Why? Because it is a solution and because it is simple ;-)

Simple to understand, simple to test and simple to maintain. For most of us it is hard to create a simple solution, because our brains aren't working that way. That's the crux we have as software developers: we are able to understand complex scenarios, write complex programs and build software for self-driving cars, video games and space stations.

In reality our job is to make complex things as simple as possible. Most of us do that by writing business software that helps a lot of people do their work in an efficient way and saves a lot of time and money. But often we use rocket science, or skyscraper technology, to just build a tiny house.

Why? Because we are developers, we think in a complex way and we really, really love rocket science.

But sometimes we should look for a slightly simpler solution. Don't write code you don't need. Don't create a complex architecture if you just need a tiny house. Don't use rocket science to build a car. Keep it simple and stupid. Your application just needs to work for the customer. This way you'll save your customers' money and your own, and you'll save time to make more customers happy. Happy customers also mean more money for you and your company in the mid term.

SRP - Single responsibility principle

I think the SRP principle has been confused a little in the past. What kind of responsibilities are we talking about? Should business logic not fetch data, or should a product service not create orders? Do you see the point? In my opinion we should split the responsibilities by topic first, and later, inside the service classes, we are able to split on method level by abstraction or whatever we need to separate.

This should keep the dependencies as small as possible and every service is a single isolated module, which is responsible for a specific topic instead of a specific technology or design pattern.

BTW: What is a design pattern for? IMO it is needed to classify a piece of code, to talk about it and to get a common language. Don't think in patterns and write patterns. Think about features and write working code instead.

Let me write some code to describe what I mean

Back to the repositories: let's write some code and compare some code snippets. This is just some kind of fake code, but I saw something like this a lot in the past. In the first snippet we have a business layer which needs three repositories to update some data. Mostly a repository is created per database table or per entity. This is why this business layer needs to use two more repositories just to check for additional fields:

public class AwesomeBusiness
{
    private readonly AwesomeRepository _awesomeRepository;
    private readonly CoolRepository _coolRepository;
    private readonly SuperRepository _superRepository;

    public AwesomeBusiness(
        AwesomeRepository awesomeRepository,
        CoolRepository coolRepository,
        SuperRepository superRepository)
    {
        _awesomeRepository = awesomeRepository;
        _coolRepository = coolRepository;
        _superRepository = superRepository;
    }
    
    public void UpdateAwesomeness(int id)
    {
        var awesomeness = _awesomeRepository.GetById(id);
        awesomeness.IsCool = _coolRepository.HasCoolStuff(awesomeness.Id);
        awesomeness.IsSuper = _superRepository.HasSuperStuff(awesomeness.Id);
        awesomeness.LastCheck = DateTime.Now;
        _awesomeRepository.UpdateAwesomeness(awesomeness);
    }
}

public class AwesomeRepository
{
    private readonly AppDbContext _dbContext;

    public AwesomeRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal void UpdateAwesomeness(Awesomeness awesomeness)
    {
        var aw = _dbContext.Awesomenesses.FirstOrDefault(x => x.Id == awesomeness.Id);
        aw.IsCool = awesomeness.IsCool;
        aw.IsSuper = awesomeness.IsSuper;
        aw.LastCheck = awesomeness.LastCheck;
        _dbContext.SaveChanges();
    }
}

public class SuperRepository
{
    private readonly AppDbContext _dbContext;

    public SuperRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal bool HasSuperStuff(int id)
    {
        return _dbContext.SuperStuff.Any(x => x.AwesomenessId == id);
    }
}

public class CoolRepository
{
    private readonly AppDbContext _dbContext;

    public CoolRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal bool HasCoolStuff(int id)
    {
        return _dbContext.CoolStuff.Any(x => x.AwesomenessId == id);
    }
}

public class Awesomeness
{
    public int Id { get; set; }
    public bool IsCool { get; set; }
    public bool IsSuper { get; set; }
    public DateTime LastCheck { get; set; }
}

Usually the repositories are much bigger than these small classes; they provide functionality for the default CRUD operations on the object and sometimes some more.

I've seen a lot of repositories in the last 15 years; some were kind of generic or planned to be generic. Most of them are pretty individual, depending on the needs of the object they work on. This is so much overhead for such a simple feature.

BTW: I remember a Clean Code training I did in Romania for an awesome company with great developers that were highly motivated. I worked with them for years and it was always a pleasure. Anyway, at the end of that training I did a small code kata that everyone should know: the FizzBuzz kata. It was awesome. These great developers used all the patterns and practices they learned during the training to try to solve that kata. After an hour they had a non-working enterprise FizzBuzz application. It was rocket science just to iterate through a list of numbers. They completely forgot about the most important Clean Code principles: KISS and YAGNI. At the end I was the bad trainer when I wrote FizzBuzz in just a few lines of code in a single method, without any interfaces, factories or repositories.

Why not writing the code just like this?

public class AwesomeService
{
    private readonly AppDbContext _dbContext;

    public AwesomeService(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void UpdateAwesomeness(int id)
    {
        var awesomeness = _dbContext.Awesomenesses.First(x => x.Id == id);
        
        awesomeness.IsCool = _dbContext.CoolStuff.Any(x => x.AwesomenessId == id);
        awesomeness.IsSuper = _dbContext.SuperStuff.Any(x => x.AwesomenessId == id);
        awesomeness.LastCheck = DateTime.Now;

        _dbContext.SaveChanges();
    }
}

public class Awesomeness
{
    public int Id { get; set; }
    public bool IsCool { get; set; }
    public bool IsSuper { get; set; }
    public DateTime LastCheck { get; set; }
}

This is simple: less code, fewer dependencies, easy to understand and working anyway. Sure, it uses EF directly. If there is really, really, really the need to encapsulate the ORM, why don't you create repositories by topic instead of per entity? Let's have a look:

public class AwesomeService
{
    private readonly AwesomeRepository _awesomeRepository;

    public AwesomeService(AwesomeRepository awesomeRepository)
    {
        _awesomeRepository = awesomeRepository;
    }     

    public void UpdateAwesomeness(int id)
    {
        _awesomeRepository.UpdateAwesomeness(id);
    }   
}

public class AwesomeRepository
{
    private readonly AppDbContext _dbContext;

    public AwesomeRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal void UpdateAwesomeness(int id)
    {
        var awesomeness = _dbContext.Awesomenesses.First(x => x.Id == id);
        awesomeness.IsCool = _dbContext.CoolStuff.Any(x => x.AwesomenessId == id);
        awesomeness.IsSuper = _dbContext.SuperStuff.Any(x => x.AwesomenessId == id);
        awesomeness.LastCheck = DateTime.Now;
        _dbContext.SaveChanges();
    }
}

public class Awesomeness
{
    public int Id { get; set; }
    public bool IsCool { get; set; }
    public bool IsSuper { get; set; }
    public DateTime LastCheck { get; set; }
}

This looks clean as well and encapsulates EF pretty well. But why do we really need the AwesomeService in that case? It just calls the repository. It doesn't contain any real logic, but needs to be tested and maintained. I also saw this kind of service a lot in the last 15 years. This doesn't make sense to me anymore either. In the end I always end up with the second solution.

We don't need to have a three-layered architecture just because we had one for the last 20, 30 or 40 years.

In software architecture, it always depends.

Architectural point of view

My architectural point of view has changed over the last years. I don't look at data objects anymore. If I do architecture, I don't wear the OOP glasses anymore. I try to explore the data flows inside the solution first: where do the data come from, how do the data transform on the way to the target and where do the data go? I don't think in layers anymore. I try to figure out the best way to let the data flow in the right direction. I also try to take the user's perspective to find an efficient way for the users as well.

I'm looking for the main objects the application is working on. In that case an object isn't a .NET object or any .NET type; it is just a specification. If I'm working on a shopping cart, the main object is the order that produces money. This is the object that produces most of the actions and contains and produces most of the data.

Depending on the size and the kind of the application, I end up using different kinds of architectural patterns.

BTW: Pattern in the sense of an idea of how to solve the current problem, not in the sense of patterns I need to use. I'll write about these architectural patterns in a separate blog post soon.

No matter which pattern is used, there's no repository anymore. There are services that provide the data in the way I need them in the UI. Sometimes the services are called handlers, depending on which architectural pattern is used, but they work the same way. Mostly they are completely independent of each other. There's no such thing as a UserService or GroupService, but there's an AuthService or a ProfileService. There is no ProductService, CategoryService or CheckoutService, but an OrderService.

What do you think?

Does this make sense to you? What do you think?

I know this is a topic that is always discussed in a controversial way, but it shouldn't be. Tell me your opinion about this topic. I'm curious about your thoughts and would really like to learn more from you.

For me it worked quite well this way. I reduced a lot of overhead and a lot of code I need to maintain.

Stefan Henneken: IEC 61131-3: The ‘Decorator’ Pattern

With the help of the decorator pattern, new function blocks can be developed on the basis of existing function blocks without overstraining the principle of inheritance. In the following post, I will introduce the use of this pattern using a simple example.

The example should calculate the price (GetPrice()) for different pizzas. Even if this example has no direct relation to automation technology, the basic principle of the decorator pattern is described quite well. The pizzas could just as well be replaced by pumps, cylinders or axes.

First variant: The ‘Super Function Block’

In the example, there are two basic kinds of pizza: American style and Italian style. Each of these basic sorts can have salami, cheese and broccoli as a topping.

The most obvious approach could be to place the entire functionality in one function block.

Properties determine the ingredients of the pizza, while a method performs the desired calculation.

Picture01

Furthermore, FB_init() is extended in such a way that the ingredients are already defined during the declaration of the instances. Thus different pizza variants can be created quite simply.

fbAmericanSalamiPizza : FB_Pizza(ePizzaStyle := E_PizzaStyle.eAmerican,
                                 bHasBroccoli := FALSE,
                                 bHasCheese := TRUE,
                                 bHasSalami := TRUE);
fbItalianVegetarianPizza : FB_Pizza(ePizzaStyle := E_PizzaStyle.eItalian,
                                    bHasBroccoli := TRUE,
                                    bHasCheese := FALSE,
                                    bHasSalami := FALSE);

The GetPrice() method evaluates this information and returns the requested value:

METHOD PUBLIC GetPrice : LREAL
 
IF (THIS^.eStyle = E_PizzaStyle.eItalian) THEN
  GetPrice := 4.5;
ELSIF (THIS^.eStyle = E_PizzaStyle.eAmerican) THEN
  GetPrice := 4.2;
ELSE
  GetPrice := 0;
  RETURN;
END_IF
IF (THIS^.bBroccoli) THEN
  GetPrice := GetPrice + 0.8;
END_IF
IF (THIS^.bCheese) THEN
  GetPrice := GetPrice + 1.1;
END_IF
IF (THIS^.bSalami) THEN
  GetPrice := GetPrice + 1.4;
END_IF

Actually, it’s a pretty solid solution. But as is so often the case in software development, the requirements change. So the introduction of new pizzas may require additional ingredients. The FB_Pizza function block is constantly growing and so is its complexity. The fact that everything is contained in one function block also makes it difficult to distribute the final development among several people.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Second variant: The ‘Hell of Inheritance’

In the second approach, a separate function block is created for each pizza variant. In addition, an interface (I_Pizza) defines all common properties and methods. Since the price has to be determined for all pizzas, the interface contains the GetPrice() method.

The two function blocks FB_PizzaAmericanStyle and FB_PizzaItalianStyle implement this interface. Thus the function blocks replace the enumeration E_PizzaStyle and are the basis for all further pizzas. The GetPrice() method returns the respective base price for these two FBs.

Based on this, different pizzas are defined with the different ingredients. For example, the pizza Margherita has cheese and tomatoes. The salami pizza also needs salami. Thus, the FB for the salami pizza inherits from the FB of the pizza Margherita.

The GetPrice() method always uses the super pointer to access the underlying method and adds the amount for its own ingredients, given that they are available.

METHOD PUBLIC GetPrice : LREAL
 
GetPrice := SUPER^.GetPrice();
IF (THIS^.bSalami) THEN
  GetPrice := GetPrice + 1.4;
END_IF

This results in an inheritance hierarchy that reflects the dependencies of the different pizza variants.

Picture02

This solution also looks very elegant at first glance. One advantage is the common interface. Each instance of one of the function blocks can be assigned to an interface pointer of type I_Pizza. This is helpful, for example, with methods, since each pizza variant can be passed via a parameter of type I_Pizza.

Different pizzas can also be stored in an array and the total price can be calculated:

PROGRAM MAIN
VAR
  fbItalianPizzaPiccante     : FB_ItalianPizzaPiccante;
  fbItalianPizzaMozzarella   : FB_ItalianPizzaMozzarella;
  fbItalianPizzaSalami       : FB_ItalianPizzaSalami;
  fbAmericanPizzaCalifornia  : FB_AmericanPizzaCalifornia;
  fbAmericanPizzaNewYork     : FB_AmericanPizzaNewYork;
  aPizza                     : ARRAY [1..5] OF I_Pizza;
  nIndex                     : INT;
  lrPrice                    : LREAL;
END_VAR
 
aPizza[1] := fbItalianPizzaPiccante;
aPizza[2] := fbItalianPizzaMozzarella;
aPizza[3] := fbItalianPizzaSalami;
aPizza[4] := fbAmericanPizzaCalifornia;
aPizza[5] := fbAmericanPizzaNewYork;
 
lrPrice := 0;
FOR nIndex := 1 TO 5 DO
  lrPrice := lrPrice + aPizza[nIndex].GetPrice();
END_FOR

Nevertheless, this approach has several disadvantages.

What happens if the menu is adjusted and the ingredients of a pizza change as a result? Assuming the salami pizza should also get mushrooms, the pizza Piccante also inherits the mushrooms, although this is not desired. The entire inheritance hierarchy must be adapted. The solution becomes inflexible because of the firm relationship through inheritance.

How does the system handle individual customer wishes? For example, double cheese or ingredients that are not actually intended for a particular pizza.

If the function blocks are located in a library, these adaptations would be only partially possible.

Above all, there is a danger that existing applications compiled with an older version of the library will no longer behave correctly.

Sample 2 (TwinCAT 3.1.4022) on GitHub

Third variant: The Decorator Pattern

Some design principles of object-oriented software development are helpful to optimize the solution. Adhering to these principles should help to keep the software structure clean.

Open-closed Principle

Open for extensions: This means that the original functionality of a module can be changed by using extension modules. The extension modules only contain the adaptations of the original functionality.

Closed for changes: This means that no changes to the module are necessary to extend it. The module provides defined extension points to connect to the extension modules.

Identify those aspects that change and separate them from those that remain constant

How are the function blocks divided so that extensions are necessary in as few places as possible?

So far, the two basic pizza varieties, American style and Italian style, have been represented by function blocks. So why not also define the ingredients as function blocks? This would enable us to comply with the Open Closed Principle. Our basic varieties and ingredients are constant and therefore closed to change. However, we must ensure that each basic variety can be extended with any number of ingredients. The solution would therefore be open to extensions.

The decorator pattern does not rely on inheritance when behaviour is extended. Rather, each side order can also be understood as a wrapper. This wrapper covers an already existing dish. To make this possible, the side orders also implement the interface I_Pizza. Each side order also contains an interface pointer to the underlying wrapper.

The basic pizza type and the side orders are thereby nested into each other. If the GetPrice() method is called from the outer wrapper, it delegates the call to the underlying wrapper and then adds its price. This goes on until the call chain has reached the basic pizza type that returns the base price.

Picture03

The innermost wrapper returns its base price:

METHOD GetPrice : LREAL
 
GetPrice := 4.5;

Each further decorator adds the requested surcharge to the underlying wrapper:

METHOD GetPrice : LREAL
 
IF (THIS^.ipSideOrder <> 0) THEN
  GetPrice := THIS^.ipSideOrder.GetPrice() + 0.9;
END_IF

So that the underlying wrapper can be passed to the function block, the method FB_init() is extended by an additional parameter of type I_Pizza. Thus, the desired ingredients are already defined during the declaration of the FB instances.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains  : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode   : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  ipSideOrder   : I_Pizza;
END_VAR
 
THIS^.ipSideOrder := ipSideOrder;

To make it easier to see how the individual wrappers run through, I have provided the GetDescription() method. Each wrapper adds a short description to the existing string.

Picture04

In the following example, the ingredients of the pizza are specified directly in the declaration:

PROGRAM MAIN
VAR
  // Italian Pizza Margherita (via declaration)
  fbItalianStyle : FB_PizzaItalianStyle;
  fbTomato       : FB_DecoratorTomato(fbItalianStyle);
  fbCheese       : FB_DecoratorCheese(fbTomato);
  ipPizza        : I_Pizza := fbCheese;
 
  fPrice         : LREAL;
  sDescription   : STRING;  
END_VAR
 
fPrice := ipPizza.GetPrice(); // output: 6.5
sDescription := ipPizza.GetDescription(); // output: 'Pizza Italian Style: - Tomato - Cheese'

There is no fixed connection between the function blocks. New pizza types can be defined without having to modify existing function blocks. The inheritance hierarchy does not determine the dependencies between the different pizza variants.

Picture05

In addition, the interface pointer can also be passed by property. This makes it possible to combine or change the pizza at run-time.

PROGRAM MAIN
VAR
  // Italian Pizza Margherita (via runtime)
  fbItalianStyle  : FB_PizzaItalianStyle;
  fbTomato        : FB_DecoratorTomato(0);
  fbCheese        : FB_DecoratorCheese(0);
  ipPizza         : I_Pizza;
 
  bCreate         : BOOL;
  fPrice          : LREAL;
  sDescription    : STRING;
END_VAR
 
IF (bCreate) THEN
  bCreate := FALSE;
  fbTomato.ipDecorator := fbItalianStyle;
  fbCheese.ipDecorator := fbTomato;
  ipPizza := fbCheese;
END_IF
IF (ipPizza <> 0) THEN
  fPrice := ipPizza.GetPrice(); // output: 6.5
  sDescription := ipPizza.GetDescription(); // output: 'Pizza Italian Style: - Tomato - Cheese'
END_IF

Special features can also be integrated in each function block. These can be additional properties, but also further methods.

The function block for the tomatoes is to be offered optionally also as organic tomato. One possibility, of course, is to create a new function block. This is necessary if the existing function block cannot be extended (e.g., because it is in a library). However, if this requirement is known before the first release, it can be directly taken into account.

The function block receives an additional parameter in the method FB_init().

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains      : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode       : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  ipSideOrder       : I_Pizza;
  bWholefoodProduct : BOOL;
END_VAR
 
THIS^.ipSideOrder := ipSideOrder;
THIS^.bWholefood := bWholefoodProduct;

This parameter could also be changed at run-time using a property. When the price is calculated, the option is taken into account as required.

METHOD GetPrice : LREAL
 
IF (THIS^.ipSideOrder <> 0) THEN
  GetPrice := THIS^.ipSideOrder.GetPrice() + 0.9;
  IF (THIS^.bWholefood) THEN
    GetPrice := GetPrice + 0.3;
  END_IF
END_IF

A further optimization can be the introduction of a basic FB (FB_Decorator) for all decorator FBs.

Picture06

Sample 3 (TwinCAT 3.1.4022) on GitHub

Definition

In the book „Design Patterns. Elements of Reusable Object-Oriented Software” by Gamma, Helm, Johnson and Vlissides, it is expressed as follows:

„The decorator patterns provide a flexible alternative to subclassing for […] extending functionality.”

Implementation

The crucial point with the decorator pattern is that when extending a function block, inheritance is not used. If the behaviour is to be supplemented, function blocks are nested into each other; they are decorated.

The central component is the IComponent interface. The function blocks to be decorated (Component) implement this interface.

The function blocks that serve as decorators (Decorator) also implement the IComponent interface. In addition, they also contain a reference (interface pointer component) to another decorator (Decorator) or to the basic function block (Component).

The outermost decorator thus represents the basic function block, extended by the functions of the decorators. The method Operation() is passed through all function blocks, whereby each function block may add its own functionality.

This approach has some advantages:

  • The original function block (component) does not know anything about the add-ons (decorator). It is not necessary to extend or adapt it.
  • The decorators are independent of each other and can also be used for other applications.
  • The decorators can be combined with each other at any time.
  • A function block can therefore change its behaviour either by declaration or at run-time.
  • A client that accesses the function block via the IComponent interface can handle a decorated function block in the same way. The client does not have to be adapted; it becomes reusable.

But also some disadvantages have to be considered:

  • The number of function blocks can increase significantly, which makes the integration into an existing library more complex.
  • The client does not recognize whether it is the original base component (if accessed via the IComponent interface) or whether it has been enhanced by decorators. This can be an advantage (see above), but can also lead to problems.
  • The long call sequences make troubleshooting more difficult. The long call sequences can also have a negative effect on the performance of the application.

UML Diagram

Picture07

Related to the example above, the following mapping results:

Client: MAIN
IComponent: I_Pizza
Operation(): GetPrice(), GetDescription()
Decorator: FB_DecoratorCheese, FB_DecoratorSalami, FB_DecoratorTomato
AddedBehavior(): bWholefoodProduct
component: ipSideOrder
Component: FB_PizzaItalianStyle, FB_PizzaAmericanStyle

Application examples

The decorator pattern is very often found in classes that are responsible for processing data streams. This applies both to the Java standard library and to the Microsoft .NET Framework.

Thus, there is the class System.IO.Stream in the .NET Framework. System.IO.FileStream and System.IO.MemoryStream inherit from this class. Both subclasses also contain an instance of Stream. Many methods and properties of FileStream and MemoryStream access this instance. You can also say: The subclasses FileStream and MemoryStream decorate Stream.
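As a small C# sketch of this kind of nesting (using BufferedStream and GZipStream, which also take another Stream instance in their constructor and delegate to it), the same wrapping idea as the pizza decorators above; the file name is just a placeholder:

using System.IO;
using System.IO.Compression;

class StreamDecoratorSample
{
    static void Main()
    {
        // The inner stream is wrapped ("decorated") by further streams.
        using (var file = new FileStream("data.bin.gz", FileMode.Create))
        using (var gzip = new GZipStream(file, CompressionMode.Compress))
        using (var buffered = new BufferedStream(gzip))
        {
            var payload = new byte[] { 1, 2, 3, 4 };
            // The call runs through BufferedStream and GZipStream
            // down to the FileStream, each layer adding its behaviour.
            buffered.Write(payload, 0, payload.Length);
        }
    }
}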

Further use cases are libraries for the creation of graphical user interfaces. These include WPF from Microsoft as well as Swing for Java.

A text box and a border are nested into each other; the text box is decorated with the border. The border (with the text box) is then passed to the page.

Stefan Lieser: TDD vs. Test-first

Again and again I find myself in discussions about the question of how to really test software properly. The realisation that automated tests are necessary seems to have become widely accepted by now. I no longer hear developers seriously claiming that automated tests are a waste of time, too complicated, simply impossible in their project, or whatever the arguments ... Read more

The post TDD vs. Test-first appeared first on Refactoring Legacy Code.

Jürgen Gutsch: WPF and WinForms will run on .NET Core 3

Maybe you already heard or read about the fact that Microsoft brings WinForms and WPF to .NET Core 3.0. Maybe you already saw the presentations at the Connect conference, or at any other conference or recording, where Scott Hanselman shows how to run a pretty old WPF application on .NET Core. I saw a demo where he ran BabySmash on .NET Core.

BTW: My oldest son really loved BabySmash when he was a baby :-)

You haven't heard, read or seen anything about it yet?

WPF and WinForms on .NET Core?

I was really wondering about this step, especially because I wrote an article for a German .NET magazine some months before in which I mentioned that Microsoft won't build a UI stack for .NET Core. There were some other UI stacks built by the community. The most popular one is Avalonia.

But this step makes sense anyway. Since .NET Standard moves the API of .NET Core closer to the level of the .NET Framework, it was only a question of time until the APIs were almost equal. Because WPF and WinForms are based on .NET libraries, they should basically also run on .NET Core.

Does this mean it runs on Linux and Mac?

Nope! Since WinForms and WPF use Windows-only technology in the background, they cannot run on Linux or Mac. They really depend on Windows. The point of running them on .NET Core is performance and being independent of any installed framework. .NET Core is optimized for performance, to run super-fast web applications in the cloud. .NET Core is also independent of the framework installed on the machine: just deploy the runtime together with your application.
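If you want to try the self-contained deployment, a publish command along these lines should do it (just a sketch; the runtime identifier win-x64 is only an example):

dotnet publish -c Release -r win-x64 --self-contained true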

You are now able to run fast and self-contained Windows desktop applications. That's awesome, isn't it?

Good thing I wrote that article some months before ;-)

Anyways...

The .NET CLI

Every time I install a new version of the .NET Core runtime I try dotnet new, and I was positively surprised by what I saw this time:

You are now able to create a Windows Forms or a WPF application using the .NET CLI. This is cool. And for sure I needed to try it out:

dotnet new wpf -n WpfTest -o WpfTest
dotnet new winforms -n WinFormsTest -o WinFormsTest

And yes, it is working as you can see here in Visual Studio Code:

And this is the WinForms project in VS Code

Running dotnet run on the WPF project:

And again on the WinForms GUI:

IDE

Visual Studio Code isn't the right editor for this kind of project. If you know XAML pretty well, it will work, but WinForms definitely won't work well. You need to write the designer code manually, and you don't have any designer support yet. Maybe there will be a designer in the future, but I'm not sure.
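Just to illustrate what writing the designer code manually means, here is a minimal hand-written WinForms sketch (my own example, not the code the template generates):

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();

        // Everything the designer would normally generate is written by hand.
        var button = new Button { Text = "Click me", Dock = DockStyle.Top };
        button.Click += (sender, e) => MessageBox.Show("Hello from .NET Core 3!");

        var form = new Form { Text = "WinForms on .NET Core" };
        form.Controls.Add(button);

        Application.Run(form);
    }
}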

The best choice to work with WinForms and WPF on .NET Core is Visual Studio 2017 or newer.

Last words

I don't think I will now start to write desktop apps on .NET Core 3, because I'm a web guy. But it is a really nice option to build apps like this on .NET Core.

BTW: Even EF 6 will work on .NET Core 3, which means you also don't need to rewrite the database access part of your desktop application.

As I wrote, you can now use this super-fast framework and the option to create self-contained apps. I would suggest trying it out and playing around with it. Do you have an older desktop application based on WPF or WinForms? I would be curious whether you can run it on .NET Core 3. Tell me how easy it was to get it running.

Code-Inside Blog: Office Add-ins with ASP.NET Core

The “new” Office Add-ins

Most people might associate Office Add-ins with “old school” COM add-ins, but for a couple of years Microsoft has been pushing a new add-in application model powered by HTML, JavaScript and CSS.

The cool thing is that these add-ins will run under Windows, macOS, online in a browser and on the iPad. If you want to read more about the general aspects, just check out the Microsoft Docs.

In Microsoft Word you can find those add-ins under the “Insert” ribbon:


Visual Studio Template: Urgh… ASP.NET

Because of the “new” nature of the add-ins you could actually use your favorite text editor and create a valid Office Add-in. There is some great tooling out there, including a Yeoman generator for Office Add-ins.

If you want to stick with Visual Studio you might want to install the “Office/SharePoint development” workload. After the installation you should see a couple of new templates appear in your Visual Studio:


Sadly, those templates still use ASP.NET and not ASP.NET Core.


ASP.NET Core Sample

If you want to use ASP.NET Core, you might want to take a look at my ASP.NET Core sample. It is not a VS template - it is meant to be a starting point, but feel free to create one if that would help!


The structure is very similar. I moved all the generated HTML/CSS/JS stuff into a separate area, and the Manifest.xml points to those files.

Result should be something like this:


Warning:

In the “ASP.NET” Office-Add-in development world there is one feature that is kinda cool, but it doesn't seem to work with ASP.NET Core projects. The original Manifest.xml generated by the Visual Studio template uses a placeholder called “~remoteAppUrl”. Visual Studio was able to replace this placeholder during startup with the correct URL of the ASP.NET application. This is not possible with an ASP.NET Core application.

The good news is that this feature is not really needed. You just need to point to the correct URL and everything is fine; debugging works as well.

Hope this helps!

Jürgen Gutsch: Customizing ASP.​NET Core Part 11: WebHostBuilder

In my post about Configuring HTTPS in ASP.NET Core 2.1, a reader asked how to configure the HTTPS settings using user secrets.

"How would I go about using user secrets to pass the password to listenOptions.UseHttps(...)? I can't fetch the configuration from within Program.cs no matter what I try. I've been Googling solutions for like a half hour so any help would be greatly appreciated." https://github.com/JuergenGutsch/blog/issues/110#issuecomment-441177441

In this post I'm going to answer this question.


WebHostBuilderContext

It is about this Kestrel configuration in the Program.cs. In that post I wrote that you should use user secrets to configure the certificate's password:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
        	.UseKestrel(options =>
            {
                options.Listen(IPAddress.Loopback, 5000);
                options.Listen(IPAddress.Loopback, 5001, listenOptions =>
                {
                    listenOptions.UseHttps("certificate.pfx", "topsecret");
                });
            })
        	.UseStartup<Startup>();
}

The reader wrote that he couldn't fetch the configuration inside this code. And he is right, if we are only looking at this snippet. You need to know that the method UseKestrel() is overloaded:

.UseKestrel((host, options) =>
{
    // ...
})

This first argument is a WebHostBuilderContext. Using this you are able to access the configuration.

So let's rewrite the lambda a little bit to use this context:

.UseKestrel((host, options) =>
{
    var filename = host.Configuration.GetValue("AppSettings:certfilename", "");
    var password = host.Configuration.GetValue("AppSettings:certpassword", "");
    
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps(filename, password);
    });
})

In this sample I chose to write the keys using the colon divider because this is the way you need to read nested configurations from the appsettings.json:

{
    "AppSettings": {
        "certfilename": "certificate.pfx",
        "certpassword": "topsecret"
    },
    "Logging": {
        "LogLevel": {
            "Default": "Warning"
        }
    },
    "AllowedHosts": "*"
}

You are also able to read from the user secrets store with these keys:

dotnet user-secrets init
dotnet user-secrets set "AppSettings:certfilename" "certificate.pfx"
dotnet user-secrets set "AppSettings:certpassword" "topsecret"

As well as environment variables:

SET AppSettings__certfilename=certificate.pfx
SET AppSettings__certpassword=topsecret

Why does it work?

Do you remember the days when you needed to configure the app configuration in the Startup.cs of ASP.NET Core? That was done in the constructor of the Startup class and looked similar to this, if you added user secrets:

 var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json")
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

    if (env.IsDevelopment())
    {
        builder.AddUserSecrets();
    }

    builder.AddEnvironmentVariables();
    Configuration = builder.Build();

This code is now wrapped inside the CreateDefaultBuilder method (see it on GitHub) and looks like this:

builder.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
        if (appAssembly != null)
        {
            config.AddUserSecrets(appAssembly, optional: true);
        }
    }

    config.AddEnvironmentVariables();

    if (args != null)
    {
        config.AddCommandLine(args);
    }
})

It is almost the same code, and it is one of the first things that gets executed when building the WebHost. It needs to be one of the first things because Kestrel is configurable via the app configuration. Maybe you know that you are able to specify ports and URLs and so on using environment variables or the appsettings.json:

I found these lines in the WebHost.cs:

builder.UseKestrel((builderContext, options) =>
{
    options.Configure(builderContext.Configuration.GetSection("Kestrel"));
})

That means you are able to add these lines to the appsettings.json to configure the Kestrel endpoints:

"Kestrel": {
  "EndPoints": {
  "Http": {
  "Url": "http://localhost:5555"
 }}}

Or to use environment variables like this to configure the endpoint:

SET Kestrel__EndPoints__Http__Url=http://localhost:5555

Also this configuration isn't executed

Conclusion

Inside the Program.cs you are able to use the app configuration inside the lambdas of the configuration methods, if you have access to the WebHostBuilderContext. This way you can use any configuration you like to configure the WebHostBuilder.

I just realized that this post could be placed between Customizing ASP.NET Core Part 02: Configuration and Customizing ASP.NET Core Part 04: HTTPS. So I made this the eleventh part of the Customizing ASP.NET Core series.

Holger Schwichtenberg: Windows-10-Apps per PowerShell löschen

With a PowerShell script you can elegantly uninstall any number of unwanted Windows 10 Store apps at once.

Stefan Henneken: MEF Part 2 – Metadata and creation policies

Part 1 dealt with fundamentals, imports and exports. Part 2 follows on from part 1 and explores additional features of the Managed Extensibility Framework (MEF). This time the focus is on metadata and creation policies.

Metadata

Exports can use metadata to expose additional information. To query this information, we use the class Lazy<>, which avoids the composable part being instantiated prematurely.

For our example application, we will go back to the example from part 1. We have an application (CarHost.exe) which uses imports to bind different cars (BMW.dll and Mercedes.dll). There is a contract (CarContract.dll) which contains the interface via which the host accesses the exports.

The metadata consist of three values. Firstly, a string containing the name (Name). Secondly, an enumeration indicating a colour (Color). Lastly, an integer containing the price (Price).

There are a number of options for how exports can make these metadata available to imports:

  1. non-type-safe
  2. type-safe via an interface
  3. type-safe via an interface and user-defined export attributes
  4. type-safe via an interface and enumerated user-defined export attributes

Option 1: non-type-safe

In this option, metadata are exposed using the ExportMetadata attribute. Each item of metadata is described using a name-value pair. The name is always a string type, whilst the value is an Object type. In some cases, it may be necessary to use the cast operator to explicitly convert a value to the required data type. In this case, the Price value needs to be converted to the uint data type.

We create two exports, each of which exposes the same metadata, but with differing values.

using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarMercedes
{
    [ExportMetadata("Name", "Mercedes")]
    [ExportMetadata("Color", CarColor.Blue)]
    [ExportMetadata("Price", (uint)48000)]
    [Export(typeof(ICarContract))]
    public class Mercedes : ICarContract
    {
        private Mercedes()
        {
            Console.WriteLine("Mercedes constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the Mercedes.", name);
        }
    }
}
 
using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarBMW
{
    [ExportMetadata("Name", "BMW")]
    [ExportMetadata("Color", CarColor.Black)]
    [ExportMetadata("Price", (uint)55000)]
    [Export(typeof(ICarContract))]
    public class BMW : ICarContract
    {
        private BMW()
        {
            Console.WriteLine("BMW constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the BMW.", name);
        }
    }
}

The ICarContract interface exposes the method, so that it is then available to the import. It represents the ‘contract’ between the imports and the exports. The enumeration CarColor is also defined in the same namespace.

using System;
namespace CarContract
{
    public interface ICarContract
    {
        string StartEngine(string name);
    }
    public enum CarColor
    {
        Unkown,
        Black,
        Red,
        Blue,
        White
    }
}

Metadata for the import can be accessed using the class Lazy<T, TMetadata>. This class exposes the Metadata property. Metadata is of type Dictionary<string, object>.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract, Dictionary<string, object>>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract, Dictionary<string, object>> car in CarParts)
            {
                if (car.Metadata.ContainsKey("Name"))
                    Console.WriteLine(car.Metadata["Name"]);
                if (car.Metadata.ContainsKey("Color"))
                    Console.WriteLine(car.Metadata["Color"]);
                if (car.Metadata.ContainsKey("Price"))
                    Console.WriteLine(car.Metadata["Price"]);
                Console.WriteLine("");
            }
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            container.Dispose();
        }
    }
}

If we want to access a specific item of metadata, we need to verify that the export has indeed defined the required item. It may well be that different exports expose different metadata.

On running the program, we can clearly see that accessing the metadata does not initialise the export parts. Only once we access the StartEngine() method do we create an instance, thereby calling the constructor.

CommandWindowSample01

Since the metadata is stored in a class of type Dictionary<string, object>, it can contain any number of items of metadata. This has advantages and disadvantages. The advantage is that all metadata are optional and the information they expose is entirely arbitrary – the value is of type Object. However, this also entails a loss of type safety. This is a major disadvantage. When accessing metadata, we always need to check that the metadata is actually present. Failure to do so can lead to some nasty runtime errors.
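A small sketch of such a defensive check, based on the CarParts property from above: TryGetValue avoids both the double lookup and a possible KeyNotFoundException if an export does not provide the requested item.

object price;
foreach (Lazy<ICarContract, Dictionary<string, object>> car in CarParts)
{
    // TryGetValue returns false instead of throwing if the key is missing
    if (car.Metadata.TryGetValue("Price", out price))
        Console.WriteLine("Price: " + price);
    else
        Console.WriteLine("No price defined for this export.");
}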

Sample 1 (Visual Studio 2010) on GitHub

Option 2: type-safe via an interface

Just as the available methods and properties of an export can be specified using an interface (ICarContract), it is also possible to define metadata using an interface. In this case, the individual values which will be available are specified using properties. You can only define properties which can be accessed using a get accessor. (If you try to define a set accessor, this will cause a runtime error.)

For our example, we will create three properties of the required type. We define an interface for the metadata as follows:

public interface ICarMetadata
{
    string Name { get; }
    CarColor Color { get; }
    uint Price { get; }
}

The interface for the metadata is used during verification between import and export. All exports must expose the defined metadata. If metadata are not present, this again results in a runtime error. If a property is optional, you can use the DefaultValue attribute.

[DefaultValue((uint)0)]
uint Price { get; }

To avoid having to define all metadata in an export, all properties in this example will be decorated with the DefaultValue attribute.

using System;
using System.ComponentModel;
namespace CarContract
{
    public interface ICarMetadata
    {
        [DefaultValue("NoName")]
        string Name { get; }
 
        [DefaultValue(CarColor.Unkown)]
        CarColor Color { get; }
 
        [DefaultValue((uint)0)]
        uint Price { get; }
    }
}

The ICarContract interface and the exports are created in exactly the same way as in the first example.

To access the metadata, the interface for the metadata is used as the value for TMetadata in the Lazy<T, TMetadata> class. In this example, this is the ICarMetadata interface. Individual items of metadata are therefore available via the Metadata property.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        IEnumerable<Lazy<ICarContract, ICarMetadata>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract, ICarMetadata> car in CarParts)
            {
                Console.WriteLine(car.Metadata.Name);
                Console.WriteLine(car.Metadata.Color);
                Console.WriteLine(car.Metadata.Price);
                Console.WriteLine("");
            }
            // invokes the method only of black cars
            var blackCars = from lazyCarPart in CarParts
                            let metadata = lazyCarPart.Metadata
                            where metadata.Color == CarColor.Black
                            select lazyCarPart.Value;
            foreach (ICarContract blackCar in blackCars)
                Console.WriteLine(blackCar.StartEngine("Sebastian"));
            Console.WriteLine(".");
            // invokes the method of all imports
             foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            container.Dispose();
        }
    }
}

Since the ICarMetadata interface specifies the name and type of the metadata, it can be accessed directly. This type-safety brings with it a small but useful advantage – it is now possible to access the CarParts property using LINQ. This means that it is possible to filter by metadata, so that only specific imports are used.

The first foreach loop outputs the metadata from all exports. The second uses LINQ to create a query which produces a list containing only those exports where the metadata has a specific value – in this case where Color has the value CarColor.Black. The StartEngine() method of these exports only is called. The final foreach loop calls this method for all exports.

CommandWindowSample02

Once again, we can clearly see that neither outputting all metadata nor the LINQ query initialises an export. A new instance is only created (and the constructor therefore called) on calling the StartEngine() method.

Sample 2 (Visual Studio 2010) on GitHub

In my opinion, interfaces should be used to work with metadata wherever possible. Sure, it may be a little more work, but this approach does avoid unwanted runtime errors.

Option 3: type-safe via an interface and user-defined export attributes

Defining metadata in the export has one further disadvantage. The name has to be supplied in the form of a string. With long names in particular, it’s easy for typos to creep in. Any typos will not be recognised by the compiler, producing errors which only become apparent at runtime. Of course things would be a lot easier if Visual Studio listed all valid metadata whilst typing and if the compiler noticed any typos. This happy state can be achieved by creating a separate attribute class for the metadata. To achieve this, all we need to do to our previous example is add a class.

using System;
using System.ComponentModel.Composition;
namespace CarContract
{
    [MetadataAttribute]
    public class CarMetadataAttribute : Attribute
    {
        public string Name { get; set; }
        public CarColor Color { get; set; }
        public uint Price { get; set; }
    }
}

This class needs to be decorated with the MetadataAttribute attribute and derived from the Attribute class. The individual values to be exported via the metadata are specified using properties. The type and name of the properties must match those specified in the interface for the metadata. We previously defined the ICarMetadata interface as follows:

using System;
using System.ComponentModel;
namespace CarContract
{
    public interface ICarMetadata
    {
        [DefaultValue("NoName")]
        string Name { get; }
 
        [DefaultValue(CarColor.Unkown)]
        CarColor Color { get; }
 
        [DefaultValue((uint)0)]
        uint Price { get; }
    }
}

We can now decorate an export with metadata by using this newly defined attribute.

[CarMetadata(Name="BMW", Color=CarColor.Black, Price=55000)]
[Export(typeof(ICarContract))]
public class BMW : ICarContract
{
    // ...
}

Now Visual Studio can give the developer a helping hand when entering metadata. All valid metadata are displayed during editing. In addition, the compiler is now in a position to verify that all of the entered metadata are valid.

VisualStudioSample03

Sample 3 (Visual Studio 2010) on GitHub

Option 4: type-safe via an interface and enumerated user-defined export attributes

Up to this point, it has not been possible to have multiple entries for a single item of metadata. However, there could be situations where we want an enumeration containing options which we wish to be able to combine together. We’re now going to extend our car example to allow us to additionally define the audio system with which the car is equipped. To do this, we first define an enum containing all of the possible options:

public enum AudioSystem
{
    Without,
    Radio,
    CD,
    MP3
}

Now we add a property of type AudioSystem[] to the ICarMetadata interface.

using System;
using System.ComponentModel;
namespace CarContract
{
    public interface ICarMetadata
    {
        [DefaultValue("NoName")]
        string Name { get; }
 
        [DefaultValue(CarColor.Unkown)]
        CarColor Color { get; }
 
        [DefaultValue((uint)0)]
        uint Price { get; }
 
        [DefaultValue(AudioSystem.Without)]
        AudioSystem[] Audio { get; }
    }
}

Because a radio can also include a CD player, we need to be able to specify multiple options for specific items of metadata. In the export, the metadata is declared as follows:

using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarBMW
{
    [CarMetadata(Name="BMW", Color=CarColor.Black, Price=55000)]
    [CarMetadataAudio(AudioSystem.CD)]
    [CarMetadataAudio(AudioSystem.MP3)]
    [CarMetadataAudio(AudioSystem.Radio)]
    [Export(typeof(ICarContract))]
    public class BMW : ICarContract
    {
        private BMW()
        {
            Console.WriteLine("BMW constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the BMW.", name);
        }
    }
}
 
using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarMercedes
{
    [CarMetadata(Name="Mercedes", Color=CarColor.Blue, Price=48000)]
    [CarMetadataAudio(AudioSystem.Radio)]
    [Export(typeof(ICarContract))]
    public class Mercedes : ICarContract
    {
        private Mercedes()
        {
            Console.WriteLine("Mercedes constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the Mercedes.", name);
        }
    }
}

Whilst the Mercedes has just a radio, the BMW also has a CD player and MP3 player.

To achieve this, we create an additional attribute class. This attribute class represents the metadata for the audio equipment (CarMetadataAudio).

using System;
using System.ComponentModel.Composition;
namespace CarContract
{
    [MetadataAttribute]
    [AttributeUsage(AttributeTargets.Class, AllowMultiple=true)]
    public class CarMetadataAudioAttribute : Attribute
    {
        public CarMetadataAudioAttribute(AudioSystem audio)
        {
            this.Audio = audio;
        }
        public AudioSystem Audio { get; set; }
    }
}

To allow us to specify multiple options for this attribute, this class has to be decorated with the AttributeUsage attribute and AllowMultiple needs to be set to true for this attribute. Here, the attribute class has been provided with a constructor, which takes the value directly as an argument.

Multiple metadata are output via an additional loop (the inner foreach loop over car.Metadata.Audio):

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract, ICarMetadata>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract, ICarMetadata> car in CarParts)
            {
                Console.WriteLine("Name: " + car.Metadata.Name);
                Console.WriteLine("Price: " + car.Metadata.Price.ToString());
                Console.WriteLine("Color: " + car.Metadata.Color.ToString());
                foreach (AudioSystem audio in car.Metadata.Audio)
                    Console.WriteLine("Audio: " + audio);
                Console.WriteLine("");
            }
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            container.Dispose();
        }
    }
}

Running the program yields the expected result:

CommandWindowSample04

Sample 4 (Visual Studio 2010) on GitHub

There is one further option, but this I will leave for a later post in which I will talk about inherited exports. It allows both the export and metadata to be decorated with an attribute simultaneously.

Creation Policies

In the previous examples, we used the Export and ImportMany attributes to bind multiple exports to a single import. But what does MEF do when multiple imports are available to a single export? This requires us to adapt the above example somewhat. The exports and the contract remain unchanged. In the host, instead of one list, we create two. Both lists take the same exports.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract>> CarPartsA { get; set; }
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract>> CarPartsB { get; set; }
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarPartsA)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            Console.WriteLine("");
            foreach (Lazy<ICarContract> car in CarPartsB)
                Console.WriteLine(car.Value.StartEngine("Michael"));
            container.Dispose();
        }
    }
}

This change means that two lists (imports) are assigned to each export. The program output, however, implies that each export is instantiated only once.

CommandWindowSample05

Sample 5 (Visual Studio 2010) on GitHub

If Managed Extensibility Framework finds a matching export for an import, it creates an instance of the export. This instance is shared with all other matching imports. MEF treats each export as a singleton.

We can modify this default behaviour both for exports and for imports by using the creation policy. Each creation policy can have the value Shared, NonShared or Any. The default setting is Any. An export for which the policy is defined as Shared or NonShared is only deemed to match an import if the creation policy of the import matches that of the export or is Any. To be considered matching, imports and exports must have compatible creation policies. If both imports and exports are defined as Any (or are undefined), both parts will be specified as Shared.

The creation policy for an export is defined using the PartCreationPolicy attribute.

[Export(typeof(ICarContract))]
[PartCreationPolicy(CreationPolicy.NonShared)]

In the case of the Import or ImportMany attribute, the creation policy is defined by using the RequiredCreationPolicy property.

[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.NonShared)]
private IEnumerable<Lazy<ICarContract>> CarPartsA { get; set; }

The following output illustrates the case where the creation policy is set to NonShared. There are now two instances of each export.

CommandWindowSample06

Sample 6 (Visual Studio 2010) on GitHub

It is also possible to combine creation policies. For the import, I have decorated one list with NonShared and two further lists with Shared.

[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.NonShared)]
private IEnumerable<Lazy<ICarContract>> CarPartsA { get; set; }
 
[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.Shared)]
private IEnumerable<Lazy<ICarContract>> CarPartsB { get; set; }
 
[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.Shared)]
private IEnumerable<Lazy<ICarContract>> CarPartsC { get; set; }

The output shows how MEF creates the instances and assigns them to the individual imports:

CommandWindowSample06b

The first list has its own independent instances of the exports. Lists two and three share the same instances.

Outlook

It is very encouraging that a framework of this kind has been standardised. Some Microsoft teams are already successfully using MEF; the best-known example is Visual Studio. Let's hope that more products will follow suit, and that this ensures that MEF continues to undergo further development.

Part 3 deals with the life cycle of composable parts.

Golo Roden: Eine Referenz für Domain-Driven Design

Getting started with Domain-Driven Design is often difficult because the terminology of DDD is hard to understand in itself and seems confusing at first. A clear introduction and a good reference provide a remedy.

Stefan Lieser: Unit Tests are your Friends

Prompted by the discussion with participants of a Clean Code Developer workshop, I have once again looked into the question of which strategies I apply and recommend for automated testing. The discussion revolved mainly around the question of whether private methods should be tested by unit tests and how this is best realised technically ... Read more

The post Unit Tests are your Friends appeared first on Refactoring Legacy Code.

Jürgen Gutsch: Integration testing data access in ASP.​NET Core

In the last post, I wrote about unit testing data access in ASP.NET Core. This time I'm going to go into integration tests. This post shows you how to write an end-to-end test using a WebApplicationFactory and how to write more specific integration tests.

Unit tests vs. Integration tests

I'm sure most of you already know the difference. But in a few discussions I learned that some developers don't have a clear idea about it. In the end it doesn't really matter, because every test is a good test. Both unit tests and integration tests are coded tests; they look similar and use the same technology. The difference lies in the concept of how and what to test and in the scope of the test:

  • A unit test tests a logical unit, a single isolated component, a function or a feature. A unit test isolates this component to test it without any dependencies, like I did in the last post. First I tested the actions of a controller, without testing the actual service behind it. Then I tested the service methods in an isolated way with a faked DbContext. Why? Because unit tests shouldn't break because of a failing dependency. A unit test should be fast in development and in execution. It is a development tool, so it shouldn't cost a lot of time to write one. And in fact, setting up a unit test is much cheaper than setting up an integration test. Usually you write a unit test during or immediately after implementing the logic. In the best case you'll write a unit test before implementing the logic. This would be the TDD way: test-driven development or test-driven design.

  • An integration test does a lot more. It tests the composition of all units. It ensures that all units are working together in the right way. This means it may need a lot more effort to set up a test, because you need to set up the dependencies. An integration test can test a feature from the UI to the database. It integrates all the dependencies. On the other hand, an integration test can be focused on the hot path of a feature. It is also legitimate to fake or mock aspects that don't need to be tested in this special case. For example, if you test a user input from the UI to the database, you don't need to test the logging. Also, an integration test shouldn't fail because of an error outside its context. This also means isolating an integration test as much as possible, maybe by using an in-memory database instead of a real one.

Let's see how it works:

Setup

I'm going to reuse the solution created for the last post to keep this section short.

I only need to create another XUnit test project, to add it to the existing solution and to add a reference to the WebToTest and some NuGet packages:

dotnet new xunit -o WebToTest.IntegrationTests -n WebToTest.IntegrationTests
dotnet sln add WebToTest.IntegrationTests
dotnet add WebToTest.IntegrationTests reference WebToTest

dotnet add WebToTest.IntegrationTests package GenFu
dotnet add WebToTest.IntegrationTests package moq
dotnet add WebToTest.IntegrationTests package Microsoft.AspNetCore.Mvc.Testing

At the next step I create a test class for a web integration test. This means I set up a web host for the application under test and call the web via a web client. This is kind of a UI test then, not based on UI events, but I'm able to get and analyze the HTML result of the page under test.

Since version 2.0, ASP.NET Core has the possibility to set up a test host to run the web application in the test environment. This is pretty cool. You don't need to set up an actual web server to run a test against the web. This gets done automatically by using the generic WebApplicationFactory. You just need to specify the type of the Startup class of the web application to test:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

namespace WebToTest.IntegrationTests
{
    public class PersonTests : IClassFixture<WebApplicationFactory<Startup>>
    {
        private readonly WebApplicationFactory<Startup> _factory;

        public PersonTests(WebApplicationFactory<Startup> factory)
        {
            _factory = factory;
        }
        
        // put test methods here
    }
}

Also the xUnit IClassFixture is special here. This generic interface tells xUnit to create an instance of the generic argument per test run. In this case I get a new instance of the WebApplicationFactory of Startup per test. So this test class creates its own web server every time a test method gets executed. This is an isolated test environment per test.

End-to-end tests

Our first integration tests will ensure the MVC routes are working. These tests create a web host and call the web via HTTP. They test parts of the application from the UI to the database. This is an end-to-end test.

Instead of an xUnit Fact, we create a Theory this time. A Theory marks a test method which is able to retrieve input data via attributes. The InlineDataAttribute defines the data we want to pass in, in this case the MVC route URLs:

[Theory]
[InlineData("/")]
[InlineData("/Home/Index")]
[InlineData("/Home/Privacy")]
public async Task BaseTest(string url)
{
    // Arrange
    var client = _factory.CreateClient();

    // Act
    var response = await client.GetAsync(url);

    // Assert
    response.EnsureSuccessStatusCode(); // Status Code 200-299
    Assert.Equal("text/html; charset=utf-8",
        response.Content.Headers.ContentType.ToString());
}

Let's try it

dotnet test WebToTest.IntegrationTests

This actually creates 3 test results as you can see in the output window:

We'll now need to do the same thing for the API routes. Why in a separate method? Because the first integration test also checks the content type, which is the type of an HTML document. The content type of the API results is application/json:

[Theory]
[InlineData("/api/person")]
[InlineData("/api/person/1")]
public async Task ApiRouteTest(string url)
{
    // Arrange
    var client = _factory.CreateClient();

    // Act
    var response = await client.GetAsync(url);

    // Assert
    response.EnsureSuccessStatusCode(); // Status Code 200-299
    Assert.Equal("application/json; charset=utf-8",
        response.Content.Headers.ContentType.ToString());
}

This also works and we have two more successful tests now:

This isn't completely isolated, because it uses the same database as the production or the test web. At least it is the same file-based SQLite database as in the test environment. Because a test should be as fast as possible, wouldn't it make sense to use an in-memory database instead?

Usually it would be possible to override the service registration of the Startup.cs with the WebApplicationFactory we retrieve in the constructor. It should be possible to add the ApplicationDbContext and to configure an in-memory database:

public PersonTests(WebApplicationFactory<Startup> factory)
{
    _factory = factory.WithWebHostBuilder(config =>
    {
        config.ConfigureServices(services =>
        {
            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseInMemoryDatabase("InMemory"));
        });
    });
}

Unfortunately, I didn't get the seeding running for the in-memory database using the current preview version of ASP.NET Core 3.0. This will result in a failing test for the route URL /api/person/1, because the Person with the Id 1 isn't available. This is a known issue on GitHub: https://github.com/aspnet/EntityFrameworkCore/issues/11666

To get this running we need to ensure seeding explicitly every time we create an instance of the DbContext.

public PersonService(ApplicationDbContext dbContext)
{
    _dbContext = dbContext;
    _dbContext.Database?.EnsureCreated();
}

This hopefully gets fixed, because it is kinda bad to add this line only for the integration tests. Anyway, it works this way. Maybe you find a way to call EnsureCreated() in the test class.
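One possible way to do that in the test class (a sketch only; I haven't verified it against the current preview bits): build a temporary service provider from the configured services, resolve the ApplicationDbContext in a scope and call EnsureCreated() there, so the in-memory database is created and seeded before the first request.

_factory = factory.WithWebHostBuilder(config =>
{
    config.ConfigureServices(services =>
    {
        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseInMemoryDatabase("InMemory"));

        // build a provider just to create and seed the in-memory database once
        var provider = services.BuildServiceProvider();
        using (var scope = provider.CreateScope())
        {
            var dbContext = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
            dbContext.Database.EnsureCreated();
        }
    });
});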

Specific integration tests

Sometimes it makes sense to test more specific parts of the application, without starting a web host and without accessing a real database. Just to be sure that the individual units are working together. This time I'm testing the PersonController together with the PersonService. I'm going to mock the DbContext, because the database access isn't relevant for the test. I just need to ensure the service provides the data to the controller in the right way and to ensure the controller is able to handle these data.

At first I create a simple test class that is able to create the needed test data and the DbContext mock:

public class PersonIntegrationTest
{
    // put the tests here

    private Mock<ApplicationDbContext> CreateDbContextMock()
    {
        var persons = GetFakeData().AsQueryable();

        var dbSet = new Mock<DbSet<Person>>();
        dbSet.As<IQueryable<Person>>().Setup(m => m.Provider).Returns(persons.Provider);
        dbSet.As<IQueryable<Person>>().Setup(m => m.Expression).Returns(persons.Expression);
        dbSet.As<IQueryable<Person>>().Setup(m => m.ElementType).Returns(persons.ElementType);
        dbSet.As<IQueryable<Person>>().Setup(m => m.GetEnumerator()).Returns(persons.GetEnumerator());

        var context = new Mock<ApplicationDbContext>();
        context.Setup(c => c.Persons).Returns(dbSet.Object);
        
        return context;
    }

    private IEnumerable<Person> GetFakeData()
    {
        var i = 1;
        var persons = A.ListOf<Person>(26);
        persons.ForEach(x => x.Id = i++);
        return persons.Select(_ => _);
    }
}

Next I wrote the tests, which look similar to the PersonControllerTests I wrote in the last blog post. Only the arrange part differs a little bit: this time I don't pass in a mocked service, but an actual one that uses a mocked DbContext:

[Fact]
public void GetPersonsTest()
{
    // arrange
    var context = CreateDbContextMock();

    var service = new PersonService(context.Object);

    var controller = new PersonController(service);

    // act
    var results = controller.GetPersons();

    var count = results.Count();

    // assert
    Assert.Equal(26, count);
}

[Fact]
public void GetPersonTest()
{
    // arrange
    var context = CreateDbContextMock();

    var service = new PersonService(context.Object);

    var controller = new PersonController(service);

    // act
    var result = controller.GetPerson(1);
    var person = result.Value;

    // assert
    Assert.Equal(1, person.Id);
}

Let's try it by using the following command:

dotnet test WebToTest.IntegrationTests

Et voilà.

At the end we should run all the tests of the solution at once, to be sure we didn't break the existing tests or the existing code. Just type dotnet test and see what happens.

Conclusion

I wrote that integration tests cost a lot more effort than unit tests. This isn't completely true, since we are able to use the WebApplicationFactory. In many other cases it will be a little more expensive, depending on how you want to test and how many dependencies you have. You need to figure out how you want to isolate an integration test. More isolation sometimes means more effort; less isolation means more dependencies that may break your test.

Anyway, from my point of view writing integration tests is even more important than writing unit tests, because they verify that the parts of the application work together. And it is not that hard and doesn't cost that much.

Just do it. If you have never written tests in the past: try it. It feels great to be on the safe side, to be sure the code is working as expected.

Jürgen Gutsch: Unit testing data access in ASP.​NET Core

I really like to be in contact with the dear readers of my blog. I get a lot of positive feedback about my posts via Twitter or within the comments. That's awesome, and it really pushes me to write more posts like this. Some folks also create PRs for my blog posts on GitHub to fix typos and other errors in my posts. You can do this too, by clicking the link to the related markdown file on GitHub at the end of every post.

Many thanks for this kind of feedback :-)

The reader Mohammad Reza recently asked me via Twitter to write about unit testing a controller that connects to a database, and about how to fake the data for the unit tests.

@sharpcms Hello jurgen, thank you for your explanation of unit test : Unit Testing an ASP.NET Core Application. it's grate. can you please explain how to use real data from entity framwork and fake data for test in a controller?

@Mohammad: First of all: I'm glad you like this post and I would be proud to write about that. Here it is:

Setup the solution using the .NET CLI

First of all, let's create the demo solution using the .NET CLI

mkdir UnitTestingAspNetCoreWithData & cd UnitTestingAspNetCoreWithData
dotnet new mvc -n WebToTest -o WebToTest
dotnet new xunit -n WebToTest.Tests -o WebToTest.Tests
dotnet new sln -n UnitTestingAspNetCoreWithData

dotnet sln add WebToTest
dotnet sln add WebToTest.Tests

These lines create a solution directory and add a web project to test as well as an XUnit test project. A solution file also gets created, and the two projects are added to it.

dotnet add WebToTest.Tests reference WebToTest

This command won't work in the current version of .NET Core, because the XUnit project still targets netcoreapp2.2. You cannot reference a project with a higher target version; it should be equal to or lower than the target version of the referencing project. You should change the target to netcoreapp3.0 in the csproj of the test project before executing this command:

<TargetFramework>netcoreapp3.0</TargetFramework>
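For orientation, the relevant part of WebToTest.Tests.csproj could then look roughly like this (just a sketch; the package references generated by the xunit template stay untouched):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- raised from netcoreapp2.2 so the WebToTest project can be referenced -->
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <IsPackable>false</IsPackable>
  </PropertyGroup>

  <!-- the ItemGroup with the test packages created by the template
       (Microsoft.NET.Test.Sdk, xunit, xunit.runner.visualstudio) stays as it is -->

</Project>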

Now we need to add some NuGet references:

dotnet add WebToTest package GenFu
dotnet add WebToTest.Tests package GenFu
dotnet add WebToTest.Tests package moq

At first we add GenFu, which is a dummy data generator. We need it in the web project to initially seed some dummy data into the database, and we need it in the test project to generate test data. We also need Moq to create fake objects, e.g. a fake data access, in the test project.

Because the web project is a plain web project, it doesn't contain any data access libraries yet. We need to add Entity Framework Core to the project.

dotnet add WebToTest package Microsoft.EntityFrameworkCore.Sqlite -v 3.0.0-preview.18572.1
dotnet add WebToTest package Microsoft.EntityFrameworkCore.Tools -v 3.0.0-preview.18572.1
dotnet add WebToTest package Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore -v 3.0.0-preview.18572.1

I'm currently using the preview version of .NET Core 3.0. The version number will change later on.

Now we can start Visual Studio Code

code .

In the same console window we can call the following command to execute the tests:

dotnet test WebToTest.Tests

Creating the controller to test

The controller we want to test is an API controller that only includes two GET actions. This is only about the concepts; testing additional actions like POST and PUT works almost the same way. This is the complete controller to test:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using WebToTest.Data.Entities;
using WebToTest.Services;

namespace WebToTest.Controllers
{
    [Route("api/[controller]")]
    [ApiController()]
    public class PersonController : Controller
    {
        private readonly IPersonService _personService;

        public PersonController(IPersonService personService)
        {
            _personService = personService;
        }
        // GET: api/Person
        [HttpGet]
        public IEnumerable<Person> GetPersons()
        {
            return _personService.AllPersons();
        }

        // GET: api/Person/5
        [HttpGet("{id}")]
        public ActionResult<Person> GetPerson(int id)
        {
            var todoItem = _personService.FindPerson(id);

            if (todoItem == null)
            {
                return NotFound();
            }

            return todoItem;
        }
    }
}

As you can see, we don't use Entity Framework directly in the controller. I propose encapsulating the data access in service classes, which prepare the data exactly as you need it.

Some developers prefer to encapsulate the actual data access in an additional repository layer. From my perspective this is not needed if you use an O/R mapper like Entity Framework. One reason is that EF already is an additional layer that encapsulates the actual data access, and a repository layer would be yet another layer to test and to maintain.

So the service layer contains all the EF code and is what the controller uses. This also makes testing much easier, because we don't need to mock the EF DbContext. The service gets passed in via dependency injection.

Let's have a quick look into the Startup.cs where we need to configure the services:

public void ConfigureServices(IServiceCollection services)
{
    // [...]

    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

    services.AddTransient<IPersonService, PersonService>();
    
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
}

What I added to the ConfigureServices method is one line to register and configure the DbContext and one line to register the PersonService used in the controller. Neither type exists yet. Before we create them, we also need to add a few lines to the config file. Open appsettings.json and add the connection string for the SQLite database:

{
  "ConnectionStrings": {
    "DefaultConnection": "DataSource=app.db"
  },
  // [...]
}

That's all about the configuration. Let's go back to the implementation. The next step is the DbContext. To keep the demo simple, I just use one Person entity here:

using GenFu;
using Microsoft.EntityFrameworkCore;
using WebToTest.Data.Entities;

namespace WebToTest.Data
{
    public class ApplicationDbContext : DbContext
    {
        public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
            : base(options)
        {}
        
        public ApplicationDbContext() { }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // seeding
            var i = 1;
            var personsToSeed = A.ListOf<Person>(26);
            personsToSeed.ForEach(x => x.Id = i++);
            modelBuilder.Entity<Person>().HasData(personsToSeed);
        }

        public virtual DbSet<Person> Persons { get; set; }
    }
}

We only have one DbSet of Person here. In the OnModelCreating method we use the new seeding method HasData() to ensure we have some data in the database. Usually you would seed real data into the database; in this case I use GenFu to generate a list of 26 persons. Afterwards I need to ensure the IDs are unique, because by default GenFu generates random numbers for the IDs, which may result in a duplicate key exception.

The person entity is simple as well:

using System;

namespace WebToTest.Data.Entities
{
    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime DateOfBirth { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string Address { get; set; }
        public string Telephone { get; set; }
        public string Email { get; set; }
    }
}

Now let's add the PersonService, which uses the ApplicationDbContext to fetch the data. The DbContext also gets injected into the constructor via dependency injection:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using WebToTest.Data;
using WebToTest.Data.Entities;

namespace WebToTest.Services
{
    public class PersonService : IPersonService
    {
        private readonly ApplicationDbContext _dbContext;
        public PersonService(ApplicationDbContext dbContext)
        {
            _dbContext = dbContext;
        }

        public IEnumerable<Person> AllPersons()
        {
            return _dbContext.Persons
            	.OrderBy(x => x.DateOfBirth)
            	.ToList();
        }

        public Person FindPerson(int id)
        {
            return _dbContext.Persons
            	.FirstOrDefault(x => x.Id == id);
        }
    }

    public interface IPersonService
    {
        IEnumerable<Person> AllPersons();
        Person FindPerson(int id);
    }
}

We need the interface to register the service using a contract and to create a mock service later on in the test project.

If this is done, don't forget to create an initial migration to create the database:

dotnet ef migrations add Initial -p WebToTest -o Data\Migrations\

This puts the migration into the Data folder in our web project. Now we are able to create and seed the database:

dotnet ef database update -p WebToTest

In the console you will now see how the database gets created and seeded.

Now the web project is complete and should run. You can try it by running the following command and opening the URL https://localhost:5001/api/person in the browser:

dotnet run -p WebToTest

You should now see the 26 persons as JSON in the browser:
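The actual values are random GenFu data, so yours will differ; the shape of each item simply follows the Person entity, roughly like this (shown with camelCase property names, which may differ depending on the configured JSON serializer):

[
  {
    "id": 1,
    "firstName": "Maria",
    "lastName": "Smith",
    "dateOfBirth": "1985-04-12T00:00:00",
    "city": "Graz",
    "state": "Styria",
    "address": "Main Street 1",
    "telephone": "555-0100",
    "email": "maria.smith@example.com"
  },
  ...
]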

Testing the controller

In the test project I renamed the initially scaffolded test class to PersonControllerTests. After that, I created a small method that creates the test data the mocked service will return to the controller. This is exactly the same code we used to seed the database:

private IEnumerable<Person> GetFakeData()
{
    var i = 1;
    var persons = A.ListOf<Person>(26);
    persons.ForEach(x => x.Id = i++);
    return persons.Select(_ => _);
}

We can now create our first test to test the controller's GetPersons() method:

[Fact]
public void GetPersonsTest()
{
    // arrange
    var service = new Mock<IPersonService>();

    var persons = GetFakeData();
    service.Setup(x => x.AllPersons()).Returns(persons);

    var controller = new PersonController(service.Object);

    // Act
    var results = controller.GetPersons();

    var count = results.Count();

    // Assert
    Assert.Equal(26, count);
}

In the first line we use Moq to create a mock/fake object of our PersonService. This is why we need the interface of the service class: Moq creates proxy objects out of interfaces or abstract classes. Using Moq we are now able to set up the mock object by telling Moq to return this specific list of persons every time the AllPersons() method is called.

Once the setup is done, we are able to inject the proxy object of the IPersonService into the controller. Our controller now works with the fake service instead of the original one. Inside the unit test we don't need a connection to the database anymore. That makes the test faster and more independent of any infrastructure outside the code under test.

In the act section we call the GetPersons() method and will check the results afterwards in the assert section.
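If you additionally want to be sure the controller really called the service, Moq can verify that the setup was hit. This is optional and not part of the original sample; an extra assertion could look like this:

// optional: verify that the controller called AllPersons() exactly once
service.Verify(x => x.AllPersons(), Times.Once());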

What does it look like with the GetPerson() method that returns one single item?

The second action to test returns an ActionResult of Person, so we need to read the result a little bit differently:

[Fact]
public void GetPerson()
{
    // arrange
    var service = new Mock<IPersonService>();

    var persons = GetFakeData();
    var firstPerson = persons.First();
    service.Setup(x => x.FindPerson(1)).Returns(firstPerson);

    var controller = new PersonController(service.Object);

    // act
    var result = controller.GetPerson(1);
    var person = result.Value;

    // assert
    Assert.Equal(1, person.Id);
}

The setup also differs, because we set up another method that returns a single Person instead of an IEnumerable of Person.

To execute the tests run the next command in the console:

dotnet test WebToTest.Tests

If everything is done right, this should result in two passing tests.

Testing the service layer

What does it look like to test the service layer? In that case, we need to mock the DbContext to feed the service with fake data.

In the test project I created a new test class called PersonServiceTests and a test method that tests the AllPersons() method of the PersonService:

[Fact]
public void AllPersonsTest()
{
    // arrange
    var context = CreateDbContext();

    var service = new PersonService(context.Object);

    // act
    var results = service.AllPersons();

    var count = results.Count();

    // assert
    Assert.Equal(26, count);
}

This looks pretty simple at first glance, but the magic is inside the method CreateDbContext(), which creates the mock object of the DbContext. I return the Mock<ApplicationDbContext> instead of the actual object, in case I need to extend the mock in the current test method. Let's see how the DbContext mock is created:

private Mock<ApplicationDbContext> CreateDbContext()
{
    var persons = GetFakeData().AsQueryable();

    var dbSet = new Mock<DbSet<Person>>();
    dbSet.As<IQueryable<Person>>().Setup(m => m.Provider).Returns(persons.Provider);
    dbSet.As<IQueryable<Person>>().Setup(m => m.Expression).Returns(persons.Expression);
    dbSet.As<IQueryable<Person>>().Setup(m => m.ElementType).Returns(persons.ElementType);
    dbSet.As<IQueryable<Person>>().Setup(m => m.GetEnumerator()).Returns(persons.GetEnumerator());

    var context = new Mock<ApplicationDbContext>();
    context.Setup(c => c.Persons).Returns(dbSet.Object);
    return context;
}

A DbSet cannot be created easily; it is a bit special. This is why I need to mock the DbSet and set up the Provider, the Expression, the ElementType and the Enumerator using the values from the persons list. Once this is done, I can create the ApplicationDbContext mock and set up the DbSet of Person on that mock. For every DbSet in your DbContext you need to add these four special setups on the mocked DbSet. This seems to be a lot of overhead, but it is worth the trouble to test the service in an isolated way.
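Since these four setups repeat for every entity, it can be worth extracting them into a small generic helper. The following is just a sketch; the helper name is mine and not part of the sample project:

private static Mock<DbSet<T>> CreateDbSetMock<T>(IEnumerable<T> items) where T : class
{
    var queryable = items.AsQueryable();

    var dbSet = new Mock<DbSet<T>>();
    dbSet.As<IQueryable<T>>().Setup(m => m.Provider).Returns(queryable.Provider);
    dbSet.As<IQueryable<T>>().Setup(m => m.Expression).Returns(queryable.Expression);
    dbSet.As<IQueryable<T>>().Setup(m => m.ElementType).Returns(queryable.ElementType);
    // a factory ensures every enumeration gets a fresh enumerator
    dbSet.As<IQueryable<T>>().Setup(m => m.GetEnumerator()).Returns(() => queryable.GetEnumerator());

    return dbSet;
}

// usage inside CreateDbContext():
// var context = new Mock<ApplicationDbContext>();
// context.Setup(c => c.Persons).Returns(CreateDbSetMock(GetFakeData()).Object);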

Sure, you could use an in-memory database with a real DbContext, but in that case the service isn't really isolated anymore and the test would be more of a small integration test than a unit test.

The second test of the PersonService is pretty similar to the first one:

[Fact]
public void FindPersonTest()
{
    // arrange
    var context = CreateDbContext();

    var service = new PersonService(context.Object);

    // act
    var person = service.FindPerson(1);

    // assert
    Assert.Equal(1, person.Id);
}

Let's run the tests and see if it's all working as expected:

dotnet test WebToTest.Tests

These four tests pass as well.

Summary

In this tutorial, the setup took the biggest part, just to get a running API controller that we can test.

  • We created the solution in the console using the .NET CLI.
  • We added a service layer to encapsulate the data access.
  • We added an EF DbContext to use in the service layer.
  • We registered the services and the DbContext in the DI container.
  • We used the service in the controller to create two actions which return the data.
  • We started the application to be sure everything is running fine.
  • We created one test for an action that doesn't return an ActionResult.
  • We created another test for an action that returns an ActionResult.
  • We ran the tests successfully in the console using the .NET CLI.

Not using the DbContext directly in the controller makes it a lot easier to test the controller by passing in a mocked service. Why? Because it is easier to fake the service than the DbContext. It also keeps the controller small, which makes maintenance a lot easier later on.

Faking the DbContext takes a bit more effort, but it is also possible, as you saw in the last section.

Please find the complete code sample here on GitHub: https://github.com/JuergenGutsch/unit-testing-aspnetcore3

Conclusion

@Mohammad: I hope this post helps you and answers your questions :-)

With ASP.NET Core there is no reason not to unit test the most important and critical parts of your application. If needed, you are able to unit test almost everything in an ASP.NET Core application.

Unit testing is no magic, but it is also not the one solution to ensure the quality of your app. To ensure that all tested units also work together, you definitely need some integration tests as well.

I'll do another post about integration tests using ASP.NET Core 3.0 soon.

Albert Weinert: Mein Twitch.TV Kanal ist nun Live

Live and in color

Last year, after years of hesitation, I finally started live coding on Twitch.TV. I had been planning to do this for ages. Tom Wendel spotted this trend early on and streamed many hours live with Octo Awesome and other projects. I wanted to do that too; it can't be that hard, right?

But questions like "does anyone even want to see, let alone hear, me?" kept me sitting in my comfort zone for a long time. However, partly because of Fritz & Friends and CodeRushed, I thought more and more: just do it.

Then I took the step, and I think it is developing quite nicely. Of course I am still practicing everything around streaming, including how I come across in front of the camera and what I say.

If you are interested, just drop by and have a look.

Where can you find it?

What has there been so far?

For example, a conference site is being developed live as a PWA. It is meant to let conference attendees put together their own session plan for the day. The open-source license it will be published under is still to be decided, but it will be freely usable.

There have also been two explainer videos.

One where I talk ad hoc about the ASP.NET Core options system.

And one where Mike Bild explains GraphQL to me.

A small tool, the Browser Deflector, was also developed and deployed to GitHub via Azure DevOps. It lets you launch specific browsers for specific URLs.

What is planned?

Quite a few things are planned. I want to do more live streams like the one with Mike Bild, where experts talk about a topic in a dialogue and show something alongside it.

Live coding on various projects will continue:

  • Finishing the conference app
  • Developing a family calendar à la Dakboard
  • A perseverance counter based on an Arduino and LED strips (at the request of a single lady)
  • Other things

All of this without guarantee ;)

An ASP.NET Core Authentication & Authorization deep dive is also planned, but only once I have 100 followers on my Twitch channel.

If you are interested in it, you can already submit your questions & challenges in advance, so that I can take them into account.

Anything else?

Twitch is an Amazon service; if you have Amazon Prime, you can link your Twitch account to it and watch ad-free.

During the stream there is a live chat where you can interact with me and others. A Twitch account is required to write in it.

I would be happy if even more people joined the live streams and, of course, followed me there. If you want, Twitch will then notify you when a stream starts. There are no regular times yet, but special events like the one with Mike or the deep dive will of course be scheduled accordingly.

Holger Schwichtenberg: Alle Grafikdateien aus einem Word-Dokument exportieren

With a PowerShell script that can be invoked from the context menu, you quickly get a folder containing all the graphics inside a Word DOCX file.

Norbert Eder: Ziele 2019

As every year, I am setting myself some goals for the coming year. Some of them I want to make publicly available here; others will exist only for me (or for a non-public context).

Software development

The coming year will most likely revolve around .NET Core, Golang, containers, serverless and IoT. Several large projects are coming up, which will require a lot of architectural as well as security-related work. The goals here lie less in the individual technologies and more in the scale of these projects and the architectural and performance challenges that come with it.

Measurable goal: more technical articles on this blog than in 2018.

Photography

I traveled a lot in 2018 and some great photos came out of it. However, I didn't find the time to present them online. In 2019 I want to invest more in my portfolios and polish them. My travel reports will also return.

Since I am already in "portrait mode", I will keep at it and invest more in this topic.

Besides photography, I also want to get more into video. This area involves many other topics such as sound, editing software and so on. I want to document my journey and thereby provide an insight into this topic.

There will also be some tutorials on image editing, in which I will work with the Luminar software.

Blog

Since some tasks were left undone last year, I want to get them done in 2019. That means less photography on my website and more software development, IT and technology again. Photography is being "pushed out" to my website https://norberteder.photography. The restructuring has already begun.

In my 2018 review I announced the end of my #fotomontag project, at least on this website. It continues, somewhat changed, but it continues, likewise on https://norberteder.photography.

Reading

After raising my original goal of 15 books to 50 fairly quickly last year, I want to start this year with a goal of 25 books. You can follow my progress again this year on https://goodreads.com.

Finally, I wish you a wonderful 2019, with lots of health, happiness and success.

The post Ziele 2019 appeared first on Norbert Eder.

Norbert Eder: Rückblick 2018

Looking back at 2018

The years fly by; another one is already over. As every year, I want to look back and reflect on the past 365 days. Of course, I also want to take my goals for 2018 into account and include them in the assessment.

Software development

As planned, I spent a lot of time in 2018 on .NET Core, Angular and the Internet of Things (IoT). In addition, I looked beyond my own horizon again and got a taste of Golang. Golang in particular is turning into one of my personal favorite programming languages.

A lot of energy in 2018 went into Docker, microservices and orchestration.

Photography

In 2018 I again managed to publish a photo every Monday for my #fotomontag project. After 209 published photos in 4 years, however, this era is coming to an end.

In addition, a lot has happened for me in the area of photography:

  • Numerous informative posts on https://norberteder.photography; the portfolio was also reworked and extended
  • I keep a list of my photo equipment on my blog; it had to be updated once again :)
  • This year I finally managed to do numerous portrait/model shoots. I gained really great experience and developed further. Many thanks at this point to everyone I was able to work with.
  • Finally, there were a lot of photo trips: Piran/Portoroz, Dubrovnik, Kotor, Nuremberg, the Baltic Sea, Dresden, Budapest and Prague.

As you can see, a lot has happened, more than I had hoped for or planned.

Blog

I had planned more for the blog. I did clean out and update some articles, but I wanted to blog much more about software development again. Between work on the one hand and the rather time-consuming hobby of photography on the other, there was simply too little time left, and I ended up using it for other things.

Since this year, the comment function is no longer available. This is not due to the GDPR but rather to quality. I now receive feedback through other channels (mainly e-mail), and it is of higher quality, because the effort required to give feedback is simply higher.

A heartfelt thank you to my loyal readers who stick with me and keep giving feedback.

Books

Since this year, I manage my books on Goodreads. I had the account for a long time, but its potential stayed hidden from me for quite a while. Well, as of this year I actually use the platform.

For 2018 I had planned to read 15 books. In the end it was 50 (not exclusively technical or non-fiction books).

Top 5 posts

Conclusion

Overall, 2018 held plenty of challenges. It wasn't always easy, but there was a lot to learn again, and that is important and good.

The post Rückblick 2018 appeared first on Norbert Eder.

Alexander Schmidt: Azure Active Directory B2C – Teil 2

This part is mainly dedicated to the topics of identity providers, policies and user attributes.
