Stefan Henneken: IEC 61131-3: Abstract FB vs. Interface

Since TwinCAT V3.1 Build 4024, function blocks, methods, and properties can be marked as abstract. Abstract FBs can only be used as base FBs for inheritance; instantiating an abstract FB directly is not possible. Abstract FBs therefore bear a certain resemblance to interfaces, which raises the question of when to use an interface and when to use an abstract FB.

A very good description of abstract can be found in the post The ABSTRACT keyword on the PLCCoder.com blog and in the Beckhoff Information System, so only the essentials are briefly repeated here.

Abstract methods

METHOD PUBLIC ABSTRACT DoSomething : LREAL
  • consist solely of the declaration and contain no implementation; the method body is empty.
  • can be public, protected, or internal. The access modifier private is not allowed.
  • cannot additionally be declared as final.

Abstract properties

PROPERTY PUBLIC ABSTRACT nAnyValue : UINT
  • can contain a getter, a setter, or both.
  • getters and setters consist solely of the declaration and contain no implementation.
  • can be public, protected, or internal. The access modifier private is not allowed.
  • cannot additionally be declared as final.

Abstract function blocks

FUNCTION_BLOCK PUBLIC ABSTRACT FB_Foo
  • As soon as one method or one property is declared abstract, the function block itself must also be declared abstract.
  • No instances can be created of an abstract FB. Abstract FBs can only be used as base FBs in inheritance.
  • All abstract methods and all abstract properties must be overridden to obtain a concrete FB. Overriding turns an abstract method or abstract property into a concrete method or concrete property.
  • Abstract function blocks may additionally contain concrete methods and/or concrete properties.
  • If not all abstract methods or not all abstract properties are overridden during inheritance, the derived FB can again only be an abstract FB (stepwise concretization).
  • Pointers and references of the type of an abstract FB are allowed. They can reference concrete FBs and thus call their methods and properties (polymorphism).

Differences between an abstract FB and an interface

If a function block consists exclusively of abstract methods and abstract properties, it contains no implementation at all and therefore has a certain resemblance to an interface. In detail, however, there are a few particularities to keep in mind.

                                                   Interface    Abstract FB
Supports multiple inheritance                      +            -
Can contain local variables                        -            +
Can contain concrete methods                       -            +
Can contain concrete properties                    -            +
Supports access modifiers other than public        -            +
Can be used with arrays                            +            only via POINTER

The table may give the impression that interfaces can almost always be replaced by abstract FBs. However, interfaces offer greater flexibility because they can be used across different inheritance hierarchies. The post IEC 61131-3: Objektkomposition mit Hilfe von Interfaces shows an example of this.

As a developer, the question is therefore when to use an interface and when to use an abstract FB. The simple answer: ideally both at the same time. This provides a default implementation in the abstract base FB, which makes deriving easier, while every developer remains free to implement the interface directly.

Example

Function blocks are to be designed for managing employee data. A distinction is made between permanent employees (FB_FullTimeEmployee) and contract employees (FB_ContractEmployee). Each employee is identified by first name (sFirstName), last name (sLastName), and personnel number (nPersonnelNumber); corresponding properties are provided for these. In addition, a method is required that returns the full name including the personnel number as a formatted string (GetFullName()). The monthly salary is calculated by the method GetMonthlySalary().

The two function blocks differ in how the monthly salary is calculated. While a permanent employee receives an annual salary (nAnnualSalary), a contract employee's monthly salary is derived from the hourly rate (nHourlyPay) and the monthly working hours (nMonthlyHours). The two function blocks therefore have different properties for calculating the monthly salary. The method GetMonthlySalary() exists in both function blocks but differs in its implementation.

Approach 1: abstract FB

Since both FBs have quite a few things in common, it makes sense to create a base FB (FB_Employee) that contains all methods and properties shared by both FBs. Because the implementations of GetMonthlySalary() differ, this method is marked as abstract in FB_Employee. As a result, every FB that inherits from this base FB must override GetMonthlySalary().

(abstract elements are shown in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub
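A compact Structured Text sketch of this approach (declarations and method bodies condensed into one listing; the string formatting is simplified, and the actual GitHub sample may differ in detail):

FUNCTION_BLOCK PUBLIC ABSTRACT FB_Employee
VAR
  _sFirstName       : STRING;
  _sLastName        : STRING;
  _nPersonnelNumber : UINT;
END_VAR

// concrete method, shared by all derived FBs
METHOD PUBLIC GetFullName : STRING
GetFullName := CONCAT(CONCAT(_sFirstName, ' '), CONCAT(_sLastName, CONCAT(': ', UINT_TO_STRING(_nPersonnelNumber))));

// abstract method without implementation; must be overridden
METHOD PUBLIC ABSTRACT GetMonthlySalary : LREAL

FUNCTION_BLOCK PUBLIC FB_FullTimeEmployee EXTENDS FB_Employee
VAR
  _nAnnualSalary : LREAL;
END_VAR

// overrides the abstract method with a concrete calculation
METHOD PUBLIC GetMonthlySalary : LREAL
GetMonthlySalary := _nAnnualSalary / 12;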

Disadvantages

At first glance this approach looks very solid. As mentioned above, however, inheritance can also bring disadvantages, especially if FB_Employee is part of an inheritance chain. Everything FB_Employee inherits via this chain is also passed on to FB_FullTimeEmployee and FB_ContractEmployee. If FB_Employee is used in a different context, an extensive inheritance hierarchy can lead to further problems.

There are also restrictions when trying to store all instances as references in an array. The compiler rejects the following declaration:

aEmployees : ARRAY [1..2] OF REFERENCE TO FB_Employee; // error

Pointers must be used instead of references:

aEmployees : ARRAY [1..2] OF POINTER TO FB_Employee;

However, there are several things to watch out for when using pointers (e.g., during an online change). For this reason, I try to avoid pointers as far as possible.
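A short sketch of how such a pointer array could be filled and used (fbFullTimeEmployee and fbContractEmployee are assumed instances of the two concrete FBs; dereferencing is done with ^, and checks for null pointers are omitted):

aEmployees[1] := ADR(fbFullTimeEmployee);
aEmployees[2] := ADR(fbContractEmployee);
sFullName := aEmployees[1]^.GetFullName();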

Advantages

Although it is not possible to create an instance of an abstract FB directly, the methods and properties of an abstract FB can be accessed via a reference.

VAR
  fbFullTimeEmployee :  FB_FullTimeEmployee;
  refEmployee        :  REFERENCE TO FB_Employee;
  sFullName          :  STRING;
END_VAR
refEmployee REF= fbFullTimeEmployee;
sFullName := refEmployee.GetFullName();

It can also be an advantage that the method GetFullName() and the properties sFirstName, sLastName, and nPersonnelNumber are already fully implemented in the abstract base FB and are not declared abstract there. Overriding these elements in the derived FBs is not necessary. If, for example, the formatting of the name has to be adjusted, this only needs to be done in one place.

Approach 2: interface

An approach based on interfaces is very similar to the previous variant. The interface contains all methods and properties that are identical in both FBs (FB_FullTimeEmployee and FB_ContractEmployee).

Sample 2 (TwinCAT 3.1.4024) on GitHub

Disadvantages

Because FB_FullTimeEmployee and FB_ContractEmployee implement the interface I_Employee, each FB must also contain all methods and all properties of that interface. This includes the method GetFullName(), which performs the same calculation in both cases.

Once an interface has been published (e.g., in a library) and is used in various projects, it can no longer be changed. If a method or a property is added, every function block that implements this interface has to be adapted as well. With FB inheritance this is not necessary: if a base FB is extended, the FBs that inherit from it do not have to be changed, unless the new methods or properties are abstract.

Tip: if an interface does have to be adapted later, you can create a new interface that inherits from the original one and is extended by the required elements.
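A sketch of this tip; I_Employee2 and the added method are hypothetical names, not part of the original example:

INTERFACE I_Employee2 EXTENDS I_Employee
// additional element that was not part of the originally published interface
METHOD GetVacationDays : UINT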

Advantages

Function blocks can implement several interfaces, which makes interfaces more flexible to use in many cases.

At runtime, a function block can be queried for a specific interface using __QUERYINTERFACE(). If that interface is implemented, the FB can be accessed through it. This makes interfaces very flexible to use.

If it is known that a specific interface is implemented, the FB can also be accessed directly via that interface.

VAR
  fbFullTimeEmployee :  FB_FullTimeEmployee;
  ipEmployee         :  I_Employee;
  sFullName          :  STRING;
END_VAR
ipEmployee := fbFullTimeEmployee;
sFullName := ipEmployee.GetFullName();

Interfaces can also be used as the data type of an array. All FBs that implement the interface I_Employee can be added to the following array.

aEmployees : ARRAY [1..2] OF I_Employee;

Approach 3: combining an abstract FB and an interface

Why not combine the two approaches and benefit from the advantages of both variants?

(abstract elements are shown in italics)

Sample 3 (TwinCAT 3.1.4024) on GitHub

When the two approaches are combined, the interface is provided first. The abstract function block FB_Employee then simplifies the use of the interface: identical implementations of shared methods can be provided in the abstract FB, so they do not have to be implemented multiple times. New FBs can also use the interface I_Employee directly.
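Sketched in Structured Text, the combination could look roughly like this (declarations only, condensed into one listing; method bodies as in approach 1):

INTERFACE I_Employee
METHOD GetFullName : STRING
METHOD GetMonthlySalary : LREAL

FUNCTION_BLOCK PUBLIC ABSTRACT FB_Employee IMPLEMENTS I_Employee
METHOD PUBLIC GetFullName : STRING              // concrete, shared implementation
METHOD PUBLIC ABSTRACT GetMonthlySalary : LREAL // abstract, FB-specific

FUNCTION_BLOCK PUBLIC FB_FullTimeEmployee EXTENDS FB_Employee
METHOD PUBLIC GetMonthlySalary : LREAL          // overrides the abstract method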

The implementation effort is initially somewhat higher than with the two previous variants. But especially for libraries that are used by several programmers and are developed further over years, this extra effort can pay off.

  • If users should not create their own instances of the FB (because doing so does not seem sensible), abstract FBs or interfaces are helpful.
  • If you want the option of generalizing to more than one base type, an interface should be used.
  • If an FB can be specified without implementing its methods or properties, an interface should be preferred over an abstract FB.

Daniel Schädler: Desired State Configuration and Group Managed Service Accounts

Initial situation

In my company we use IBM UrbanCode Deploy to provision servers. Since this currently means additional manual effort, the process needs to become more efficient. For this reason, I decided to handle the configuration with Desired State Configuration (DSC).

Goal

Procedure

The steps below follow the goals set out above.

Is Java installed?

If Java is installed, it usually ends up in a directory such as JDK or JAVA, and a corresponding entry is added to the path. The configuration is not started if Java is not installed, which can be checked quite easily:

if(($env:Path.Split(';') | Where-Object { $_ -like "*JDK*" -or $_ -like "*JAVA*" } ))

If everything checks out, the configuration can begin.

Installing the agent

The agent is installed on the system as an XCOPY deployment. This has the advantage that the agent is already available as a zipped template and only needs to be copied to the right place. This step uses the Archive resource, which Windows provides out of the box. The resource looks like this:

            Archive UrbanCodeBinaries
            {
                Destination = "C:\Program Files"
                Ensure =  "Present"
                Path = $agentSource
                Force = $true                
            }

Destination is the target directory into which the binaries are extracted. Path is passed in when the configuration script is invoked.

Configuring the service for the agent

The Service resource is used to configure the agent's service, as follows.


            # Preconfigure the properties for the service
            $agentName = ("ucd-agent-{0}" -f $env:COMPUTERNAME)
            $servicePathArguments = "//RS//{0}" -f $agentName
            $servicePath = Join-Path -Path $env:ProgramFiles -ChildPath "ibm-ucd-agent\native\x64\winservice.exe"

            Service UrbanCodeAgentService
            {
                Ensure = "Present"
                Name = $agentName
                StartupType = "Manual"
                Path = ("`"{0}`" {1}" -f $servicePath, $servicePathArguments)
                Description =  "IBM Urban Code Agent"                
                DisplayName = $agentName  
                State = "Stopped"                             
            }

The required parameters are pre-configured before the resource, because this Java agent is started through a Windows service wrapper with arguments. The service is now configured, just not yet the way we want it.

Configuring the service account for the agent

The Service resource only allows a user account with a password to be linked to the service. Since a PSCredential object cannot be created without a password and the password of a Group Managed Service Account is unknown, the service has to be configured as desired via WMI. This is done with the Script resource:

            # Third Set the Service with a script Resource
            Script ServiceScript{
                DependsOn = "[Service]UrbanCodeAgentService"
                GetScript = {                    
                    return Get-WmiObject -Class win32_Service | Where-Object -Property Name -like ("*{0}*" -f $using:agentName)
                }
                TestScript = {                    
                    $service = [scriptblock]::Create($GetScript).Invoke()                    
                    if($service){
                        return ($service.StartName -eq $using:groupManagedServiceAccount)
                    }
                    return $false
                }
                SetScript = {                    
                    $service = [scriptblock]::Create($GetScript).Invoke()
                    if($service){
                        $service.Change($null, $null, $null, $null, $null, $null, $using:groupManagedServiceAccount, $null, $null, $null, $null)
                    }                    
                }
            }

It is important to mention that this Script resource carries a DependsOn entry. This means it is only applied once the referenced resource has been applied successfully.

Here is the complete script:

param([Parameter(Mandatory=$true,HelpMessage="The full path to the template ucd agent zip")]
      [ValidateNotNullOrEmpty()]
      [string]$agentSource="C:\Temp\ibm-ucd-agent.zip",
      [Parameter(Mandatory=$true,HelpMessage="The Group Managed Account that is used as service account.")]
      [ValidateNotNullOrEmpty()]
      [string]$groupManagedServiceAccount="IFC1\srvgp-ucd-r$"
      )

Configuration UrbanCodeAgentConfiguration{

    Import-DscResource -ModuleName PSDesiredStateConfiguration 

    if(($env:Path.Split(';') | Where-Object { $_ -like "*JDK*" -or $_ -like "*JAVA*" } )){
        Node $env:COMPUTERNAME
        {            
            # First Extract the service binaries to the Destination
            Archive UrbanCodeBinaries
            {
                Destination = "C:\Program Files"
                Ensure =  "Present"
                Path = $agentSource
                Force = $true                
            }

            # Preconfigure the properties for the service
            $agentName = ("ucd-agent-{0}" -f $env:COMPUTERNAME)
            $servicePathArguments = "//RS//{0}" -f $agentName
            $servicePath = Join-Path -Path $env:ProgramFiles -ChildPath "ibm-ucd-agent\native\x64\winservice.exe"

            # Second configure the service
            Service UrbanCodeAgentService
            {
                Ensure = "Present"
                Name = $agentName
                StartupType = "Manual"
                Path = ("`"{0}`" {1}" -f $servicePath, $servicePathArguments)
                Description =  "IBM Urban Code Agent"                
                DisplayName = $agentName  
                State = "Stopped"                             
            }  
            
            # Third Set the Service with a script Resource
            Script ServiceScript{
                DependsOn = "[Service]UrbanCodeAgentService"
                GetScript = {                    
                    return Get-WmiObject -Class win32_Service | Where-Object -Property Name -like ("*{0}*" -f $using:agentName)
                }
                TestScript = {                    
                    $service = [scriptblock]::Create($GetScript).Invoke()                    
                    if($service){
                        return ($service.StartName -eq $using:groupManagedServiceAccount)
                    }
                    return $false
                }
                SetScript = {                    
                    $service = [scriptblock]::Create($GetScript).Invoke()
                    if($service){
                        $service.Change($null, $null, $null, $null, $null, $null, $using:groupManagedServiceAccount, $null, $null, $null, $null)
                    }                    
                }
            }
        }        
    }else {
        Write-Host "Java is not installed"
    }
}

UrbanCodeAgentConfiguration

A remark on Node: running the Desired State Configuration produces a MOF file. If you want to archive it, it would be named "localhost" by default, which is not helpful. For this reason I always use the variable $env:COMPUTERNAME to get a meaningful MOF file name.
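For completeness, compiling and applying the configuration could look roughly like this (assuming the script above is saved as UrbanCodeAgentConfiguration.ps1; the parameter values are the defaults from the param block):

# Compile the configuration; the MOF is written to .\UrbanCodeAgentConfiguration
.\UrbanCodeAgentConfiguration.ps1 -agentSource "C:\Temp\ibm-ucd-agent.zip" -groupManagedServiceAccount "IFC1\srvgp-ucd-r$"

# Apply the compiled MOF
Start-DscConfiguration -Path .\UrbanCodeAgentConfiguration -Wait -Verbose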

Conclusion

Not everything that is available out of the box can be used for every scenario as-is. The Script resource provides a certain flexibility and lets you perform actions that no built-in resource covers, without needing dedicated scripts. I am open to questions and suggestions, and if you liked this article, I would be happy about a like.

Golo Roden: Introduction to React, Part 8: Advanced JSX

One of React's core concepts is the JavaScript language extension JSX, which is why it is important to also look at its more advanced concepts.

Golo Roden: Introduction to Docker, Part 2: Installing Docker

Before you can use Docker, you first have to install and configure it. The second part of the Docker introduction shows how that works.

Daniel Schädler: Creating a Windows service with Desired State Configuration

This blog post shows how to install and uninstall a service easily with DSC (Desired State Configuration). Most examples only show small snippets of code, so there is no practical end-to-end example to draw on.

Goal

  • Install a Windows service on a Windows Server 2019 Core using DSC

Procedure

The prerequisites for using DSC can be read up on here.

Steps

  1. The service's source files must be copied from a source directory to the service's target directory.
  2. The service must then be installed using DSC.

This operation requires two Windows resources that ship with the operating system out of the box.

The complete script is listed below:

param([Parameter(Mandatory=$true)][string]$source,
      [Parameter(Mandatory=$true)][string]$destination,
      [Parameter(Mandatory=$true)][string]$servicename,
      [Parameter(Mandatory=$true)][string]$pathtoServiceExecutable)

Configuration ServiceDemo
{

    Import-DscResource -ModuleName PSDesiredStateConfiguration    

    Node $env:COMPUTERNAME
    {    
        File ServiceDirectoryCopy
        {
            Ensure = "Present"
            Type = "Directory"
            Recurse = $true
            SourcePath = $source
            DestinationPath = $destination
        }

        Service DemoService
        {
            Ensure = "Present"
             Name = $servicename
             StartupType = "Manual"
             State = "Stopped"
             Path = $pathtoServiceExecutable
        }
    }
}

The variable $env:COMPUTERNAME is used so that the generated MOF file carries the server's name and the configuration can be distinguished from others.

The ServiceDirectoryCopy resource recursively copies the service files.

The DemoService resource configures and installs the service.

Now the MOF file has to be generated by calling the script with its parameters. In this example the call looks like this:

.\CreateService.ps1 -source "C:\_install\DemoService" -destination "C:\Program Files\DemoService" -servicename "DemoService" -pathtoServiceExecutable "C:\Program Files\DemoService\DemoService.exe"

If everything ran correctly, you will see the following result:

Generated MOF file

The content of this file looks like this:

/*
@TargetNode='WIN-MFVO0VQ8PB7'
@GeneratedBy=Administrator
@GenerationDate=09/08/2020 12:07:47
@GenerationHost=WIN-MFVO0VQ8PB7
*/

instance of MSFT_FileDirectoryConfiguration as $MSFT_FileDirectoryConfiguration1ref
{
ResourceID = "[File]ServiceDirectoryCopy";
 Type = "Directory";
 Ensure = "Present";
 DestinationPath = "C:\\Program Files\\DemoService";
 ModuleName = "PSDesiredStateConfiguration";
 SourceInfo = "C:\\Users\\Administrator\\Documents\\DemoServiceConfiguration.ps1::14::9::File";
 Recurse = True;
 SourcePath = "C:\\_install\\DemoService\\";

ModuleVersion = "1.0";

 ConfigurationName = "ServiceDemo";

};
instance of MSFT_ServiceResource as $MSFT_ServiceResource1ref
{
ResourceID = "[Service]DemoService";
 State = "Stopped";
 SourceInfo = "C:\\Users\\Administrator\\Documents\\DemoServiceConfiguration.ps1::23::9::Service";
 Name = "DemoService";
 StartupType = "Manual";
 ModuleName = "PSDesiredStateConfiguration";
 Path = "C:\\Program Files\\DemoService\\notepad.exe";

ModuleVersion = "1.0";

 ConfigurationName = "ServiceDemo";

};
instance of OMI_ConfigurationDocument


                    {
 Version="2.0.0";
 

                        MinimumCompatibleVersion = "1.0.0"; 

                        CompatibleVersionAdditionalProperties= {"Omi_BaseResource:ConfigurationName"}; 

                        Author="Administrator"; 

                        GenerationDate="09/08/2020 12:07:47"; 

                        GenerationHost="WIN-MFVO0VQ8PB7";

                        Name="ServiceDemo";

                    };

Now the configuration has to be applied. This is done with the following command:

Start-DscConfiguration .\ServiceDemo

Executing it creates a PowerShell job that runs the configuration.

Applying the configuration

Now we can check whether the configuration matches what we want.

Get-Service *Demo*

then returns the expected result.

Installed sample service

Note: for demonstration purposes I use notepad.exe as the demo service. If we query the service details with

Get-CimInstance -Class Win32_Service -Property * | Where-Object -Property Name -like "*Demo*"

we see all configured values of this service and can verify that the configuration has been applied.

Service details

Conclusion

With simple means, desired states of a system can be defined without complex scripts, and the configuration is easier to read and interpret. Another advantage is idempotency. One drawback of this method is that credentials cannot be passed along and have to be configured afterwards. An alternative could be to write a custom resource that implements this or, as already mentioned, to set the corresponding property of the service object via Set-CimInstance.

If you liked this article, I would appreciate a like and am open to feedback.

Further links

Golo Roden: Introduction to Docker, Part 1: Basic Concepts

Docker has become an indispensable tool in modern web and cloud development. That is why Docker and the native web are publishing a free video course, produced in close collaboration, that makes it easy to learn how to use Docker.

Golo Roden: Götz & Golo: One Year Later

On August 19, 2019, Götz and I announced our blog series "Götz & Golo". Twelve installments have been published since then, so it is time for a critical look back and a constructive look ahead.

Code-Inside Blog: How to run a legacy WCF .svc Service on Azure AppService

Last month we wanted to run good old WCF powered service on Azures “App Service”.

WCF… what’s that?

If you are not familiar with WCF: good for you! For the interested ones: WCF is, or was, a framework to build mostly SOAP-based services in the .NET Framework 3.0 timeframe. Some parts were “good”, but most developers would call it a complex monster.

Even in the glory days of WCF I tried to avoid it at all costs, but unfortunately I need to maintain a WCF-based service.

For the curious: The project template and the tech is still there. Search for “WCF”.

VS WCF Template

The template will produce something like that:

The actual “service endpoint” is the Service1.svc file.

WCF structure

Running on Azure: The problem

Let’s assume we have an application with a .svc endpoint. In theory we can deploy this application to a standard Windows Server/IIS without major problems.

Now we try to deploy this very same application to Azure AppService and this is the result after we invoke the service from the browser:

"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable." (HTTP Response was 404)

Strange… very strange. In theory a blank HTTP 400 should appear, but not an HTTP 404. The service itself was not “triggered”: we had some logging in place, but the request never reached the actual service.

After hours of debugging, testing and googling around I created a new “blank” WCF service from the Visual Studio template and got the same error.

The good news: it was not just my code; something was blocking the request.

After some hours I found a helpful switch in the Azure Portal and activated the “Failed Request Tracing” feature (yeah… I could have found it sooner) and discovered this:

Failed Request tracing image

Running on Azure: The solution

My initial thoughts were correct: the request was blocked. It was treated as “static content”, and the actual WCF module was not mapped to the .svc extension.

To “re-map” the .svc extension to the correct handler I needed to add this to the web.config:

...
<system.webServer>
    ...
	<handlers>
		<remove name="svc-integrated" />
		<add name="svc-integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler" resourceType="File" preCondition="integratedMode" />
	</handlers>
</system.webServer>
...

With this configuration everything worked as expected on Azure AppService.

Be aware:

I’m really not 100% sure why this is needed in the first place. I’m also not 100% sure if the name svc-integrated is correct or important.

This blogpost is a result of these tweets.

That was a tough ride… Hope this helps!

Daniel Schädler: Exporting Windows features into a Desired State Configuration script

Goal

My goal: export the Windows features of a server and save them into a Desired State Configuration .ps1 file.

Prerequisites

  • Windows Server 2019 Core is installed.
  • Only the default Windows features are enabled (by default 18 are installed; this may vary).

Procedure

The following steps show how to do this for a remote server:

1.) Store the credentials in the PowerShell session using Get-Credential

2.) To work efficiently with the remote server(s), it is advisable to open a new PowerShell session to the server using New-PSSession

3.) The desired server can now be accessed with Invoke-Command.

4.) Within this session, the desired Windows features are filtered with the Get-WindowsFeature cmdlet.

5.) A PowerShell script is then built up in a string variable.

6.) Finally, the generated script can be executed. If everything succeeds, a MOF file is created that can be used with Start-DscConfiguration.

PS C:\Users\U80794990> Invoke-Command -Session $session -ScriptBlock {
    $features = Get-WindowsFeature | Where-Object -Property Name -like "*Web*"
    $script = "Configuration AspNetCoreOnIIS { `n"
    $script += "`t Import-DscResource -ModuleName 'PSDesiredStateConfiguration' `n"
    $script += "`t `t Node 'localhost' { `n"
    foreach($feature in $features){
        $name = $feature.Name
        $script += "`t `t `t WindowsFeature $name {`n"
        $script += "`t `t `t `t Name = '$name' `n"
        $script += "`t `t `t `t Ensure = 'Present' `n"
        $script += "`t `t `t} `n"
    }
    $script += "`t `t } `n"
    $script += "`t } `n"
    $script += "AspNetCoreOnIIS -OutPutPath:`"C:\ConfigurationPath`""
    $script | Out-File C:\ServerConfiguration.ps1 -Force -Encoding utf8
}

If you now execute the generated script on the server via Invoke-Command, you get the following message on success.

Generated MOF file
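A minimal sketch of that step, assuming the remote session from above is still open (the generated configuration writes its MOF to C:\ConfigurationPath):

Invoke-Command -Session $session -ScriptBlock {
    # Run the generated script; this compiles the configuration into a MOF file
    & C:\ServerConfiguration.ps1
}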

The generated script for the Windows features then looks like this (just a small excerpt):

Configuration AspNetCoreOnIIS { 
	 Import-DscResource -ModuleName 'PSDesiredStateConfiguration' 
	 	 Node 'localhost' { 
	 	 	 WindowsFeature ADCS-Enroll-Web-Pol {
	 	 	 	 Name = 'ADCS-Enroll-Web-Pol' 
	 	 	 	 Ensure = 'Present' 
	 	 	} 
	 	 	 WindowsFeature ADCS-Enroll-Web-Svc {
	 	 	 	 Name = 'ADCS-Enroll-Web-Svc' 
	 	 	 	 Ensure = 'Present' 
	 	 	} 
.......

It is now easy to prepare a server for a specific purpose: for the features that are not wanted, simply set the "Ensure" parameter to Absent.

Conclusion

I find Desired State Configuration lightweight, and I think a lot can be accomplished with this kind of configuration; it does not always have to be Ansible, Puppet, or some other tool. In this way, a boilerplate configuration can easily be created and then saved in another configuration for different purposes.

Further links

Golo Roden: Introduction to React, Part 7: React Forms

The previous two parts showed how handling input and managing state work in React. How can forms be built with this knowledge?

Holger Schwichtenberg: Merging GPX files with PowerShell

This PowerShell script merges any number of GPX files into a single file based on date and time.

Daniel Schädler: Installing the OpenSSH agent/server on multiple Windows Server 2019 Core machines

Goal

In this post I want to explain how to install the OpenSSH server and agent on multiple Windows servers, without an internet connection.

Prerequisites

To achieve this, the Features on Demand for Windows Server 2019 and Windows 10 must be available (only these contain the two components). I used this article as a guide. Furthermore, note that:

  • All servers are reachable via WinRM/WSMan
  • The administrator is authorized to access the servers
  • Windows Server 2019 Core
  • The Features on Demand are available on the local machine so they can be copied (in my case as a ZIP archive).

Procedure

To connect to the servers, I proceeded as follows:

  1. Create an array in PowerShell
$servers = "ISRV-01","ISRV-02","ISRV-03","ISRV-04"
  2. Cache the credentials for accessing the servers. This can be done as follows:
$cred = Get-Credential
  3. Then iterate over the array:
$servers | Foreach-Object -Process {
	
}
  4. Inside this iteration block, the Features on Demand ZIP is copied first, then a session to the target server is created, and finally the desired capabilities are installed.
Copy progress indicator
$session = New-PSSession -ComputerName $_ -Credential $cred
	Copy-Item C:\Temp\_FeaturesOnDemand.zip -Destination C:\ -ToSession $session
	Invoke-Command -Session $session -ScriptBlock{
		Expand-Archive C:\_FeaturesOnDemand.zip -DestinationPath C:\ -Force
		Add-WindowsCapability -Online -Name OpenSSH.Agent~~~~0.0.1.0 -Source C:\_FeaturesOnDemand
		Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 -Source C:\_FeaturesOnDemand
		Remove-Item C:\_FeaturesOnDemand.zip -Force
		Remove-Item C:\_FeaturesOnDemand -Recurse -Force
}
Remove-PSSession -Id $session.Id

The installation status is then displayed as follows:

Installation progress and status

Conclusion

Whole server farms can be administered with simple means. Of course, you could also query Active Directory for the desired servers and then administer those. Finally, here is the complete script.

$servers | Foreach-Object -Process {
$session = New-PSSession -ComputerName $_ -Credential $cred
Copy-Item C:\Temp\_FeaturesOnDemand.zip -Destination C:\ -ToSession $session
	Invoke-Command -Session $session -ScriptBlock{
		Expand-Archive C:\_FeaturesOnDemand.zip -DestinationPath C:\ -Force
		Add-WindowsCapability -Online -Name OpenSSH.Agent~~~~0.0.1.0 -Source C:\_FeaturesOnDemand
		Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 -Source C:\_FeaturesOnDemand
		Remove-Item C:\_FeaturesOnDemand.zip -Force
		Remove-Item C:\_FeaturesOnDemand -Recurse -Force
	}
Remove-PSSession -Id $session.Id
}

Jürgen Gutsch: ASP.NET Core Health Checks

For a while I have been planning to write about the ASP.NET Health Checks, which are actually pretty cool. The development of the ASP.NET Core Health Checks started in fall 2016. At that time it was an architectural draft. In November 2016, during the Global MVP Summit in Redmond, we were asked to hack some health checks based on that draft. Damien Bowden and I met Glen Condron and Andrew Nurse during the hackathon on the last summit day to dig into the ASP.NET Health Checks, write the very first checks, and try out the framework.

Actually, I prepared a talk about the ASP.NET Health Checks, and I would be happy to give that presentation at your user group or conference.

What are the health checks for?

Imagine that you are creating an ASP.NET application that depends heavily on some subsystems, like a database, a file system, an API, or something like that. This is a pretty common scenario; almost every application depends on a database. If the connection to the database is lost for whatever reason, the application will definitely break. This is how applications have been developed for years. The database is the simplest scenario to illustrate what the ASP.NET health checks are good for, even though it is not the real reason they were developed. So let's continue with the database scenario.

  • What if you were able to check whether the database is available before you actually connect to it?
  • What if you were able to tell your application to show a user-friendly message if the database is not available?
  • What if you could simply switch to a fallback database in case the actual one is not available?
  • What if you could tell a load balancer to switch to a fallback environment in case your application is unhealthy because of the missing database?

You can exactly do this with the ASP.NET Health Checks:

Check the health and availability of your sub-systems, provide an endpoint that tells other systems about the health of the current application, and consume health check endpoints of other systems.

Health checks are mainly made for microservice environments, where loosely coupled applications need to know the health state of the systems they depend on. But they are also useful in more monolithic applications that depend on some kind of subsystem or infrastructure.

How to enable health checks?

I'd like to show the health check configuration in a new, plain and simple ASP.NET MVC project that I will create using the .NET CLI in my favorite console:

dotnet new mvc -n HealthCheck.MainApp -o HealthCheck.MainApp

The health checks are already part of the framework, and you don't need to add a separate NuGet package to use them. They live in the Microsoft.Extensions.Diagnostics.HealthChecks package, which should already be available after installing the latest version of .NET Core.

To enable the health checks you need to add the relating services to the DI container:

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();
    services.AddControllersWithViews();
}

This is also the place where we add the checks later on. But this should be good for now.

To also provide an endpoint to tell other applications about the state of the current system you need to map a route to the health checks inside the Configure method of the Startup class:

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health");
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");
});

This will give you a URL where you can check the health state of your application. Let's quickly run the application and call this endpoint with a browser:

Calling the endpoint:

Our application is absolutely healthy. For sure, because there is no health check yet that actually checks anything.

Writing health checks

Like in many other APIs (e.g. the middlewares), there are many ways to add health checks. The simplest way, and the best way to understand how they work, is to use lambda methods:

services.AddHealthChecks()
    .AddCheck("Foo", () =>
        HealthCheckResult.Healthy("Foo is OK!"), tags: new[] { "foo_tag" })
    .AddCheck("Bar", () =>
        HealthCheckResult.Degraded("Bar is somewhat OK!"), tags: new[] { "bar_tag" })
    .AddCheck("FooBar", () =>
        HealthCheckResult.Unhealthy("FooBar is not OK!"), tags: new[] { "foobar_tag" });

Those lines add three different health checks. They are named and the actual check is a Lambda expression that returns a specific HealthCheckResult. The result can be Healthy, Degraded or Unhealthy.

  • Healthy: All is fine obviously.
  • Degraded: The system is not really healthy, but it's not critical. Maybe a performance problem or something like that.
  • Unhealthy: Something critical isn't working.

Usually a health check result has at least one tag to group checks by topic or whatever. The message should be meaningful so the actual problem can be identified easily.

Those lines are not really useful, but they show how the health checks work. If we run the app again and call the endpoint, we will see an Unhealthy state, because the endpoint always reports the lowest state, which here is Unhealthy. Feel free to play around with the different HealthCheckResults.

Now let's demonstrate a more useful health check. This one pings a needed resource on the internet and checks its availability:

services.AddHealthChecks()
    .AddCheck("ping", () =>
    {
        try
        {
            using (var ping = new Ping())
            {
                var reply = ping.Send("asp.net-hacker.rocks");
                if (reply.Status != IPStatus.Success)
                {
                    return HealthCheckResult.Unhealthy("Ping is unhealthy");
                }

                if (reply.RoundtripTime > 100)
                {
                    return HealthCheckResult.Degraded("Ping is degraded");
                }

                return HealthCheckResult.Healthy("Ping is healthy");
            }
        }
        catch
        {
            return HealthCheckResult.Unhealthy("Ping is unhealthy");
        }
    });

This actually won't work, because my blog runs on Azure and Microsoft doesn't allow pinging the App Services. Anyway, this demo shows how to handle the specific results and how to return the right HealthCheckResult depending on the state of the actual check.

But it doesn't really make sense to write those checks as lambda expressions and to clutter up the Startup class. Fortunately, there is also a way to add class-based health checks.

The following is also a simple and useless one, but it demonstrates the basic concepts:

public class ExampleHealthCheck : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        var healthCheckResultHealthy = true;

        if (healthCheckResultHealthy)
        {
            return Task.FromResult(
                HealthCheckResult.Healthy("A healthy result."));
        }

        return Task.FromResult(
            HealthCheckResult.Unhealthy("An unhealthy result."));
    }
}

This class implements the CheckHealthAsync method from the IHealthCheck interface. The HealthCheckContext contains the registration of the current check in the property Registration, which provides, for example, the check's name, tags, and configured failure status.

To add this class as a health check in the application we need to use the generic AddCheck method:

services.AddHealthChecks()
    .AddCheck<ExampleHealthCheck>("class based", null, new[] { "class" });

We also need to specify a name and at least one tag. The second argument lets me set a default failure state; null is fine here, as long as all exceptions are handled inside the health check.

Expose the health state

As mentioned, I'm able to provide an endpoint to expose the health state of my application to systems that depend on the current app. By default, however, it responds with just a simple string that only shows the overall state. It would be nice to see some more details to tell the consumer what is actually happening.

Fortunately this is also possible by passing HealthCheckOptions into the MapHealthChecks method:

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health", new HealthCheckOptions()
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");
});

With the Predicate you are able to filter which health checks to execute and report on; in this case I want to execute them all. The ResponseWriter is needed to write the health information of the specific checks to the response. Here I used a ResponseWriter from a community project that provides some cool UI features and a ton of ready-to-use health checks.

dotnet add package AspNetCore.HealthChecks.UI

The UIResponseWriter of that project writes a JSON output to the HTTP response that includes many details about the used health checks:

{
  "status": "Unhealthy",
  "totalDuration": "00:00:00.7348450",
  "entries": {
    "Foo": {
      "data": {},
      "description": "Foo is OK!",
      "duration": "00:00:00.0010118",
      "status": "Healthy"
    },
    "Bar": {
      "data": {},
      "description": "Bar is somewhat OK!",
      "duration": "00:00:00.0009935",
      "status": "Degraded"
    },
    "FooBar": {
      "data": {},
      "description": "FooBar is not OK!",
      "duration": "00:00:00.0010034",
      "status": "Unhealthy"
    },
    "ping": {
      "data": {},
      "description": "Ping is degraded",
      "duration": "00:00:00.7165044",
      "status": "Degraded"
    },
    "class based": {
      "data": {},
      "description": "A healthy result.",
      "duration": "00:00:00.0008822",
      "status": "Healthy"
    }
  }
}

If the overall state is Unhealthy, the endpoint sends the result with an HTTP 503 response status; otherwise it is a 200. This is really useful if you just want to evaluate the HTTP response status.
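If different status codes are needed (for example, to treat Degraded as a failure as well), the mapping can be changed via the ResultStatusCodes property of HealthCheckOptions. A small sketch that would go into the endpoint mapping shown above; treating Degraded as 503 is just an assumption for illustration:

endpoints.MapHealthChecks("/health", new HealthCheckOptions()
{
    // map each HealthStatus to the HTTP status code the endpoint should return
    ResultStatusCodes =
    {
        [HealthStatus.Healthy] = StatusCodes.Status200OK,
        [HealthStatus.Degraded] = StatusCodes.Status503ServiceUnavailable,
        [HealthStatus.Unhealthy] = StatusCodes.Status503ServiceUnavailable
    }
});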

The community project provides a lot more features, including a nice UI to visualize the health state for humans. I'm going to show you this in a later section.

Handle the states inside the application

In most cases you don't just want to expose the state to depending consumers of your app. You may also need to handle the different states inside your application: by showing a message in case the application is not working properly, by disabling the parts that are not working, by switching to a fallback source, or whatever is needed to run the application in a degraded state.

To do things like this, you can use the HealthCheckService that is registered in the IoC container by the AddHealthChecks() method. You can inject the HealthCheckService wherever you need it.

Let's see how this is working!

In the HomeController I created a constructor that injects the HealthCheckService the same way as other services are injected. I also created a new action called Health that uses the HealthCheckService and calls CheckHealthAsync() to execute the checks and retrieve a HealthReport. The HealthReport is then passed to the view:

public class HomeController : Controller
{
    private readonly HealthCheckService _healthCheckService;

    public HomeController(
        HealthCheckService healthCheckService)
    {
        _healthCheckService = healthCheckService;
    }

    public async Task<IActionResult> Health()
    {
        var healthReport = await _healthCheckService.CheckHealthAsync();

        return View(healthReport);
    }
}

Optionally you can pass a predicate to CheckHealthAsync(). With the predicate you can filter which health checks to execute and report on; in this case I want to execute them all.

I also created a view called Health.cshtml. This view receives the HealthReport and displays the results:

@using Microsoft.Extensions.Diagnostics.HealthChecks;
@model HealthReport

@{
    ViewData["Title"] = "Health";
}
<h1>@ViewData["Title"]</h1>

<p>Use this page to detail your site's health.</p>

<p>
    <span>@Model.Status</span> - <span>Duration: @Model.TotalDuration.TotalMilliseconds</span>
</p>
<ul>
    @foreach (var entry in Model.Entries)
    {
    <li>
        @entry.Value.Status - @entry.Value.Description<br>
        Tags: @String.Join(", ", entry.Value.Tags)<br>
        Duration: @entry.Value.Duration.TotalMilliseconds
    </li>
    }
</ul>

To try it out, I just need to run the application using dotnet run in the console and call https://localhost:5001/home/health in the browser:

You could also analyze the HealthReport in the controller or in your services to do something specific in case the application isn't healthy anymore.

A pretty health state UI

The already mentioned GitHub project AspNetCore.Diagnostics.HealthChecks also provides a pretty UI to display the results in a nice and human readable way.

This just needs a little more configuration in the Startup.cs.

Inside the method ConfigureServices() I needed to add the health checks UI services:

services.AddHealthChecksUI();

And inside the method Configure() I need to map the health checks UI middleware right after the call to MapHealthChecks:

endpoints.MapHealthChecksUI();

This adds a new route to our application to call the UI: /healthchecks-ui

We also need to register our health API with the UI. This is done with a small setting in the appsettings.json:

{
  ... ,
  "HealthChecksUI": {
    "HealthChecks": [
      {
        "Name": "HTTP-Api",
        "Uri": "https://localhost:5001/health"
      }
    ],
    "EvaluationTimeOnSeconds": 10,
    "MinimumSecondsBetweenFailureNotifications": 60
  }
}

This way you are able to register as many health endpoints with the UI as you like. Think about a separate application that only shows the health states of all your microservices; this would be the way to go.

Let's call the UI using the route /healthchecks-ui

(Wow… Actually, the ping seemed to work when I took this screenshot.)

This is awesome: a really great user interface for displaying the health of all your services.

For the webhooks and the customization of the UI, you should read the great docs in the repository.

Conclusion

The health checks are definitely a feature you should look into. No matter what kind of web application you are writing, they can help you create more stable and more responsive applications. Applications that know about their health can handle degraded or unhealthy states in a way that won't break the whole application. This is very useful, at least from my perspective ;-)

To play around with the demo application used for this post visit the repository on GitHub: https://github.com/JuergenGutsch/healthchecks-demo

Marco Scheel: Enable Unified Labeling for Microsoft 365 Groups (and Teams) in your tenant via PowerShell script

At the end of June 2020, Microsoft announced the “General Availability” of the Microsoft Information Protection integration for group labeling. Unified labeling is now available for all Microsoft 365 Groups (Teams, SharePoint, …).

Microsoft Information Protection is a built-in, intelligent, unified, and extensible solution to protect sensitive data across your enterprise – in Microsoft 365 cloud services, on-premises, third-party SaaS applications, and more. Microsoft Information Protection provides a unified set of capabilities to know your data, protect your data, and prevent data loss across Microsoft 365 apps (e.g. Word, PowerPoint, Excel, Outlook) and services (e.g. Teams, SharePoint, and Exchange).

Source: https://techcommunity.microsoft.com/t5/microsoft-security-and/general-availability-microsoft-information-protection/ba-p/1497769

The feature is currently an opt-in solution. The previous Azure AD based group classification is still available and supported. If you want to switch to the new solution to apply sensitivity labels to your groups you need to run some lines of PowerShell. This is the Microsoft documentation:

https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/groups-assign-sensitivity-labels#enable-sensitivity-label-support-in-powershell

The feature is configured with the same commands as the AAD-based classification: you have to set the value of "EnableMIPLabels" to true. The documentation assumes that you already have Azure AD directory settings for the template "Group.Unified". If this is not the case, you can also follow the instructions for the Azure AD directory settings for Groups:

https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/groups-settings-cmdlets#create-settings-at-the-directory-level

To make it easier for my customers and for you, I've created a PowerShell script that will help and works in any configuration. Check out the latest version of my script in this GitHub repository:

https://github.com/marcoscheel/snippets/blob/master/m365-enable-ul-groups/enable-ulclassification.ps1

$tenantdetail = $null;
$tenantdetail = Get-AzureADTenantDetail -ErrorAction SilentlyContinue;
if ($null -eq $tenantdetail)
{
   #connect as global admin
   Connect-AzureAD
   $tenantdetail = Get-AzureADTenantDetail -ErrorAction SilentlyContinue;
}
if ($null -eq $tenantdetail)
{
   Write-Host "Error connecting to tenant" -ForegroundColor Red;
   Exit
}

$settingIsNew = $false;
$setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified"};
if ($null -eq $setting){
   Write-Host "No directory settings for Group.Unified found. Creating new!" -ForegroundColor Green;
   $settingIsNew = $true;
   $aaddirtempid = (Get-AzureADDirectorySettingTemplate | Where-Object { $_.DisplayName -eq "Group.Unified" }).Id;
   $template = Get-AzureADDirectorySettingTemplate -Id $aaddirtempid;
   $setting = $template.CreateDirectorySetting();
}
else{
   Write-Host "Directory settings for Group.Unified found. Current value for EnableMIPLabels:" -ForegroundColor Green;
   Write-Host $setting["EnableMIPLabels"];
}

$setting["EnableMIPLabels"] = "true";
if (-not $settingIsNew){
   #Reset AAD based classification?
   #$setting["ClassificationList"] = "";
   #$setting["DefaultClassification"] = "";
   #$setting["ClassificationDescriptions"] = "";
}

if ($settingIsNew){

   New-AzureADDirectorySetting -DirectorySetting $setting;
   Write-Host "New directory settings for Group.Unified applied." -ForegroundColor Green;
   $setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified"};
}
else{
   Set-AzureADDirectorySetting -Id $setting.Id -DirectorySetting $setting;
   Write-Host "Updated directory settings for Group.Unified." -ForegroundColor Green;
   $setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified"};
}
$setting.Values;
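To run the script, an AzureAD PowerShell module that contains the *-AzureADDirectorySetting cmdlets needs to be available; a possible invocation (module name and scope are assumptions, not part of the original post):

# Assumption: AzureADPreview provides the directory-setting cmdlets used above
Install-Module AzureADPreview -Scope CurrentUser
Import-Module AzureADPreview
.\enable-ulclassification.ps1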

Holger Schwichtenberg: Listing the contents of a ZIP archive with PowerShell without extracting it

There is no cmdlet for listing the contents of a ZIP archive, but the .NET class System.IO.Compression.ZipFile can be used from PowerShell.
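A minimal sketch of the idea (the archive path is just an example):

# Load the assembly that contains System.IO.Compression.ZipFile
Add-Type -AssemblyName System.IO.Compression.FileSystem
# Open the archive read-only and list its entries without extracting them
$zip = [System.IO.Compression.ZipFile]::OpenRead("C:\Temp\archive.zip")
$zip.Entries | Select-Object FullName, Length, LastWriteTime
$zip.Dispose()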

Christian Dennig [MS]: Azure DevOps Terraform Provider

Not too long ago, the first version of the Azure DevOps Terraform Provider was released. In this article I will show you with several examples which features are currently supported in terms of build pipelines and how to use the provider – also in conjunction with Azure. The provider is the last “building block” for many people working in the “Infrastructure As Code” space to create environments (including Git Repos, service connections, build + release pipelines etc.) completely automatically.

The provider was released in June 2020 in version 0.0.1, but to be honest: the feature set is quite rich already at this early stage.

The features I would like to discuss with the help of examples are as follows:

  • Create a DevOps project including a hosted Git repo.
  • Creation of a build pipeline
  • Usage of variables and variable groups
  • Creating an Azure service connection and using variables/secrets from an Azure KeyVault

Example 1: Basic Usage

The Azure DevOps provider can be integrated in a script like any other Terraform provider. All that’s required is the URL to the DevOps organisation and a Personal Access Token (PAT) with which the provider can authenticate itself against Azure DevOps.

The PAT can be easily created via the UI of Azure DevOps by creating a new token via User Settings --> Personal Access Token --> New Token. For the sake of simplicity, in this example I give “Full Access” to it…of course this should be adapted for your own purposes.

Create a personal access token

The documentation of the Terraform Provider contains information about the permissions needed for the respective resource.

Defining relevant scopes

Once the access token has been created, the Azure DevOps provider can be referenced in the terraform script as follows:

provider "azuredevops" {
  version               = ">= 0.0.1"
  org_service_url       = var.orgurl
  personal_access_token = var.pat
}

The two variables orgurl and pat should be exposed as environment variables:

$ export TF_VAR_orgurl="https://dev.azure.com/<ORG_NAME>"
$ export TF_VAR_pat="<PAT_FROM_AZDEVOPS>"

So, this is basically all that is needed to work with Terraform and Azure DevOps. Let’s start by creating a new project and a git repository. Two resources are needed for this, azuredevops_project and azuredevops_git_repository:

resource "azuredevops_project" "project" {
  project_name       = "Terraform DevOps Project"
  description        = "Sample project to demonstrate AzDevOps <-> Terraform integration"
  visibility         = "private"
  version_control    = "Git"
  work_item_template = "Agile"
}

resource "azuredevops_git_repository" "repo" {
  project_id = azuredevops_project.project.id
  name       = "Sample Empty Git Repository"

  initialization {
    init_type = "Clean"
  }
}

We also need an initial pipeline that will be triggered on a git push to master.
In a pipeline, you usually work with variables that come from different sources. These can be pipeline variables, values from a variable group or from external sources such as an Azure KeyVault. The first, simple build definition uses pipeline variables (mypipelinevar):

resource "azuredevops_build_definition" "build" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline"

  ci_trigger {
    use_yaml = true
  }

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline.yaml"
  }

  variable {
    name      = "mypipelinevar"
    value     = "Hello From Az DevOps Pipeline!"
    is_secret = false
  }
}

The corresponding pipeline definition looks as follows:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Pipeline is running!
    echo And here is the value of our pipeline variable
    echo $(mypipelinevar)
  displayName: 'Run a multi-line script'

The pipeline just executes some scripts – for demo purposes – and outputs the variable stored in the definition to the console.

Running the Terraform script, it creates an Azure DevOps project, a git repository and a build definition.

Azure DevOps Project
Git Repository
Pipeline

As soon as the file azure-pipeline.yaml discussed above is pushed into the repo, the corresponding pipeline is triggered and the results can be found in the respective build step:

Running pipeline
Output of build pipeline

Example 2: Using variable groups

Normally, variables are not directly stored in a pipeline definition, but rather put into Azure DevOps variable groups. This allows you to store individual variables centrally in Azure DevOps and then reference and use them in different pipelines.

Fortunately, variable groups can also be created using Terraform. For this purpose, the resource azuredevops_variable_group is used. In our script this looks like this:

resource "azuredevops_variable_group" "vars" {
  project_id   = azuredevops_project.project.id
  name         = "my-variable-group"
  allow_access = true

  variable {
    name  = "var1"
    value = "value1"
  }

  variable {
    name  = "var2"
    value = "value2"
  }
}

resource "azuredevops_build_definition" "buildwithgroup" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline with VarGroup"

  ci_trigger {
    use_yaml = true
  }

  variable_groups = [
    azuredevops_variable_group.vars.id
  ]

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline-with-vargroup.yaml"
  }
}

The first part of the Terraform script creates the variable group in Azure DevOps (name: my-variable-group) including two variables (var1 and var2); the second part – a build definition – uses the variable group, so that the variables can be accessed in the corresponding pipeline file (azure-pipeline-with-vargroup.yaml).

It has the following content:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: my-variable-group

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Var1: $(var1)
    echo Var2: $(var2)
  displayName: 'Run a multi-line script'

If you run the Terraform script, the corresponding Azure DevOps resources will be created: a variable group and a pipeline.

Variable Group

If you push the build YAML file to the repo, the pipeline will be executed and you should see the values of the two variables as output on the build console.

Output of the variables from the variable group

Example 3: Using an Azure KeyVault and Azure DevOps Service Connections

For security reasons, critical values are neither stored directly in a pipeline definition nor in Azure DevOps variable groups. You would normally use an external vault like Azure KeyVault. Fortunately, with Azure DevOps you have the possibility to access an existing Azure KeyVault directly and access secrets which are then made available as variables within your build pipeline.

Of course, Azure DevOps must be authenticated/authorized against Azure for this. Azure DevOps uses the concept of service connections for this purpose. Service connections are used to access e.g. Bitbucket, GitHub, Jira, Jenkins… or – as in our case – Azure. You define a user – for Azure this is a service principal – which is used by DevOps pipelines to perform various tasks – in our example fetching a secret from a KeyVault.

To demonstrate this scenario, various things must first be set up on Azure:

  • Creating an application / service principal in the Azure Active Directory, which is used by Azure DevOps for authentication
  • Creation of an Azure KeyVault (including a resource group)
  • Authorizing the service principal to the Azure KeyVault to be able to read secrets (no write access!)
  • Creating a secret that will be used in a variable group / pipeline

With the Azure Provider, Terraform offers the possibility to manage Azure services. We will be using it to create the resources mentioned above.

AAD Application + Service Principal

First of all, we need a service principal that can be used by Azure DevOps to authenticate against Azure. The corresponding Terraform script looks like this:

data "azurerm_client_config" "current" {
}

provider "azurerm" {
  version = "~> 2.6.0"
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

## Service Principal for DevOps

resource "azuread_application" "azdevopssp" {
  name = "azdevopsterraform"
}

resource "random_string" "password" {
  length  = 24
}

resource "azuread_service_principal" "azdevopssp" {
  application_id = azuread_application.azdevopssp.application_id
}

resource "azuread_service_principal_password" "azdevopssp" {
  service_principal_id = azuread_service_principal.azdevopssp.id
  value                = random_string.password.result
  end_date             = "2024-12-31T00:00:00Z"
}

resource "azurerm_role_assignment" "contributor" {
  principal_id         = azuread_service_principal.azdevopssp.id
  scope                = "/subscriptions/${data.azurerm_client_config.current.subscription_id}"
  role_definition_name = "Contributor"
}

With the script shown above, both an AAD Application and a service principal are generated. Please note that the service principal is assigned the role Contributor – on subscription level, see the scope assignment. This should be restricted accordingly in your own projects (e.g. to the respective resource group)!

Azure KeyVault

The KeyVault is created the same way as the previous resources. It is important to note that the user working against Azure is given full access to the secrets in the KeyVault. Further down in the script, permissions for the Azure DevOps service principal are also granted within the KeyVault – but in that case only read permissions! Last but not least, a corresponding secret called kvmysupersecretsecret is created, which we can use to test the integration.

resource "azurerm_resource_group" "rg" {
  name     = "myazdevops-rg"
  location = "westeurope"
}

resource "azurerm_key_vault" "keyvault" {
  name                        = "myazdevopskv"
  location                    = "westeurope"
  resource_group_name         = azurerm_resource_group.rg.name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_enabled         = true
  purge_protection_enabled    = false

  sku_name = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    secret_permissions = [
      "backup",
      "get",
      "list",
      "purge",
      "recover",
      "restore",
      "set",
      "delete",
    ]
    certificate_permissions = [
    ]
    key_permissions = [
    ]
  }

}

## Grant DevOps SP permissions

resource "azurerm_key_vault_access_policy" "azdevopssp" {
  key_vault_id = azurerm_key_vault.keyvault.id

  tenant_id = data.azurerm_client_config.current.tenant_id
  object_id = azuread_service_principal.azdevopssp.object_id

  secret_permissions = [
    "get",
    "list",
  ]
}

## Create a secret

resource "azurerm_key_vault_secret" "mysecret" {
  key_vault_id = azurerm_key_vault.keyvault.id
  name         = "kvmysupersecretsecret"
  value        = "KeyVault for the Win!"
}

If you have followed the steps described above, the result in Azure is a newly created KeyVault containing one secret:

Azure KeyVault

Service Connection

Now, we need the integration into Azure DevOps, because we finally want to access the newly created secret in a pipeline. Azure DevOps is “by nature” able to access a KeyVault and the secrets it contains. To do this, however, you have to perform some manual steps – when not using Terraform – to enable access to Azure. Fortunately, these can now be automated with Terraform. The following resources are used to create a service connection to Azure in Azure DevOps and to grant access to our project:

## Service Connection

resource "azuredevops_serviceendpoint_azurerm" "endpointazure" {
  project_id            = azuredevops_project.project.id
  service_endpoint_name = "AzureRMConnection"
  credentials {
    serviceprincipalid  = azuread_service_principal.azdevopssp.application_id
    serviceprincipalkey = random_string.password.result
  }
  azurerm_spn_tenantid      = data.azurerm_client_config.current.tenant_id
  azurerm_subscription_id   = data.azurerm_client_config.current.subscription_id
  azurerm_subscription_name = "<SUBSCRIPTION_NAME>"
}

## Grant permission to use service connection

resource "azuredevops_resource_authorization" "auth" {
  project_id  = azuredevops_project.project.id
  resource_id = azuredevops_serviceendpoint_azurerm.endpointazure.id
  authorized  = true 
}
Service Connection

Creation of an Azure DevOps variable group and pipeline definition

The last step necessary to use the KeyVault in a pipeline is to create a corresponding variable group and “link” the existing secret.

## Pipeline with access to kv secret

resource "azuredevops_variable_group" "kvintegratedvargroup" {
  project_id   = azuredevops_project.project.id
  name         = "kvintegratedvargroup"
  description  = "KeyVault integrated Variable Group"
  allow_access = true

  key_vault {
    name                = azurerm_key_vault.keyvault.name
    service_endpoint_id = azuredevops_serviceendpoint_azurerm.endpointazure.id
  }

  variable {
    name    = "kvmysupersecretsecret"
  }
}
Variable Group with KeyVault integration

Test Pipeline

All prerequisites are now in place, but we still need a pipeline with which we can test the scenario.

Script for the creation of the pipeline:

resource "azuredevops_build_definition" "buildwithkeyvault" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline with KeyVault Integration"

  ci_trigger {
    use_yaml = true
  }

  variable_groups = [
    azuredevops_variable_group.kvintegratedvargroup.id
  ]

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline-with-keyvault.yaml"
  }
}

Pipeline definition (azure-pipeline-with-keyvault.yaml):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: kvintegratedvargroup

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo KeyVault secret value: $(kvmysupersecretsecret)
  displayName: 'Run a multi-line script'

If you have run the Terraform script and pushed the pipeline file into the repo, you will get the following result in the next build (the secret is not shown in the console for security reasons, of course!):

Output: KeyVault integrated variable group

Wrap-Up

Setting up new Azure DevOps projects was not always the easiest task, as sometimes manual steps were required. With the release of the first Terraform provider version for Azure DevOps, this has changed almost dramatically 🙂 You can now – as one of the last building blocks for automation in a dev project – create many things in Azure DevOps via Terraform. In the example shown here, we automated access to an Azure KeyVault, including the creation of the corresponding service connection. However, only one scenario was shown here – frankly, one for a task that “annoyed” me every now and then, as most of it had to be set up manually before there was a Terraform provider. The provider can also manage branch policies, set up groups and group memberships etc. With this first release you are still “at the beginning of the journey”, but in my opinion it is a “very good start” with which you can achieve a lot.

I am curious what will be supported next!

Sample files can be found here: https://gist.github.com/cdennig/4866a74b341a0079b5a59052fa735dbc

Golo Roden: Off-the-shelf solution or custom development?

In software development you often face the choice of using a ready-made off-the-shelf solution or building your own. What is advisable?

Golo Roden: Introduction to React, episode 6: React state, part 2

One of the most important concepts for applications is accepting and processing input. How does that work in React?

Code-Inside Blog: EWS, Exchange Online and OAuth with a Service Account

This week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.

But here is the full story:

Our goal

We wanted to access contact information via a web service from the organization, just like the traditional “Global Address List” in Exchange/Outlook. We knew that EWS was an option for the OnPrem Exchange, but what about Exchange Online?

The big problem: Authentication is tricky. We wanted to use a “traditional” Service Account approach (think of username/password). Unfortunately the “basic auth” way will be blocked in the near future because of security concerns (makes sense TBH). There is an alternative approach available, but at first it seems not to work as we would like.

So… what now?

EWS is… old. Why?

The Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and OnPrem Exchanges. On the other hand we could use the Microsoft Graph, but - at least currently - there is not a single “contact” API available.

To mimic the GAL we would need to query List Users and List orgContacts, which would be ok, but the “orgContacts” has a “flaw”. “Hidden” contacts (“msexchhidefromaddresslists”) are returned from this API and we thought that this might be a NoGo for our customers.

Another argument for using EWS was, that we could support OnPrem and Online with one code base.

Docs from Microsoft

The good news is that EWS and the auth problem are more or less well documented here.

There are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.

Delegation:

Delegation means that we can write a desktop app and all actions are executed in the name of the signed-in user.

Application:

Application means that the app itself can perform some actions without any user involved.

EWS and the application way

At first we thought that we might need to use the “application” way.

The good news is that this was easy and worked. The bad news is that the application needs the EWS permission “full_access_as_app”, which means that our application can access all mailboxes of this tenant. This might be ok for certain apps, but it scared us.

Back to the delegation way:

EWS and the delegation way

The documentation from Microsoft is good, but our “Service Account” use case was not mentioned. In the example from Microsoft a user needs to log in manually.

Solution / TL;DR

After some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:

  1. Follow the normal “delegate” steps from the Microsoft Docs

  2. Instead of this code, which will trigger the login UI:

...
// The permission scope required for EWS access
var ewsScopes = new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" };

// Make the interactive token request
var authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();
...

Use the “AcquireTokenByUsernamePassword” method:

...
var cred = new NetworkCredential("UserName", "Password");
var authResult = await pca.AcquireTokenByUsernamePassword(new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" }, cred.UserName, cred.SecurePassword).ExecuteAsync();
...

To make this work you need to enable “Treat application as public client” under “Authentication” > “Advanced settings” in your AAD application, because this uses the “Resource owner password credentials flow”.

Now you should be able to get the AccessToken and do some EWS magic.
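
For completeness, a minimal sketch of what the follow-up code could look like with the EWS Managed API (the ResolveName call and the search string are just examples, not part of the original post):

...
// using Microsoft.Exchange.WebServices.Data;
// Hand the acquired token to the EWS client
var ewsClient = new ExchangeService();
ewsClient.Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx");
ewsClient.Credentials = new OAuthCredentials(authResult.AccessToken);

// e.g. resolve a name against the directory / Global Address List
var matches = ewsClient.ResolveName("Smith", ResolveNameSearchLocation.DirectoryOnly, true);
...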

I posted a shorter version on Stackoverflow.com

Hope this helps!

Daniel Schädler: App deployment with Minikube / kubectl

Goal

In this post I want to learn about application deployments in my environment. Furthermore, I want to deploy a .NET application to the cluster with kubectl.

I follow the steps described here and use a simple .NET Core application for the deployment.

Prerequisites

To accomplish this task, the following prerequisites are in place:

  • Minikube is installed
  • A .NET Core app is available that can be deployed into a container.
  • Docker for Windows must be installed
  • The Linux Subsystem for Windows may need to be installed if you selected the corresponding option in the Docker installer.

Procedure

To achieve the goal described above, a Kubernetes deployment has to be created. The kubectl CLI, which in my case is installed in C:\Program Files\kubectl, is used to create the deployment.

  1. Start PowerShell as administrator
  2. Then start Minikube if it has not been started yet.

A successful start looks like this:

Successful Minikube start in PowerShell

If errors occur during startup, you should first try to stop everything with minikube stop and start it again.

You can then check the status with:

minikube status

You then get the following output:

Minikube status – PowerShell
  3. Now the following commands are executed:
kubectl version
kubectl get nodes

It then looks like this:

kubectl version and nodes in PowerShell

You can see that both the client and the server are available. You can also see that there is one available node. Kubernetes uses the available resources on the basis of the nodes to deploy applications.

  4. Now comes the exciting part, locally in PowerShell. Here the .NET Core application that should be deployed has to be specified. For this I created a sample application following the procedure described here.

The Docker image was built with the command

docker build -t app-demo -f Dockerfile .

You can then check whether the image is available in the local registry with:

docker images
app-demo in the local registry – PowerShell
  5. Then I created a deployment with kubectl that points to the local Docker registry.
kubectl create deployment app-demo --image=app-demo:latest

kubectl then tells me that the deployment has been created successfully.

Deployment has been created – PowerShell
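
To double-check the result, the usual kubectl status commands can be used (a small sketch; app-demo is the deployment name created above):

# show the deployment and its pods
kubectl get deployments
kubectl get pods

# more details, e.g. events and the image being used
kubectl describe deployment app-demo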

Conclusion

The whole thing looks simple, but there are a few pitfalls, for example when the .NET Core image cannot be downloaded from a central registry but has to be hosted yourself for security reasons or because of company policies. If you liked the article, I would be happy about a like and I am open to feedback.

Holger Schwichtenberg: A book on Blazor WebAssembly and Blazor Server

The Dotnet-Doktor's current book offers an introduction to Microsoft's new web framework.

Jürgen Gutsch: Getting the .editorconfig working with the .NET Framework and MSBuild

I demonstrated the results of my last post about the .editorconfig to the team last week. They were quite happy about the fact that the build fails on a code style error, but there was one question I couldn't really answer:

Does this also work for .NET Framework?

It should, because it is Roslyn that analyzes the code, not the framework.

To try it out, I created three different class libraries that have the same class file linked into them, with the same code style errors:

    using System;

namespace ClassLibraryNetFramework
{
    public class EditorConfigTests
    {
    public int MyProperty { get; } = 1;
    public EditorConfigTests() { 
    if(this.MyProperty == 2){
        Console.WriteLine("Hallo Welt");
        }        }
    }
}

This code file has at least eleven code style errors in it:

I created a .NET Standard library, a .NET Core library, and a .NET Framework library in VS2019 this time. The solution in VS2019 now looks like this:

I also added the MyGet Roslyn NuGet Feed to the NuGet sources and referenced the code style analyzers:

This is the URL and the package name for you to copy:

  • https://dotnet.myget.org/F/roslyn/api/v3/index.json
  • Microsoft.CodeAnalysis.CSharp.CodeStyle Version: 3.8.0-1.20330.5
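
For reference, this is roughly what the two pieces could look like in a nuget.config and in each of the three project files (a sketch; the feed key name is arbitrary):

<!-- nuget.config: add the MyGet Roslyn feed as an additional package source -->
<configuration>
  <packageSources>
    <add key="roslyn-myget" value="https://dotnet.myget.org/F/roslyn/api/v3/index.json" />
  </packageSources>
</configuration>

<!-- *.csproj: reference the code style analyzers -->
<ItemGroup>
  <PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeStyle" Version="3.8.0-1.20330.5" />
</ItemGroup>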

I also set the global.json to the latest preview of the .NET 5 SDK to be sure to use the latest tools:

{
  "sdk": {
    "version": "5.0.100-preview.6.20318.15"
  }
}

It didn't really work - My fault in the last blog post!

I saw some code style errors in VS2019, but not all the eleven errors I expected. I tried a build and the build didn't fail. Because I knew it had worked the last time using the dotnet CLI, I did the same here: I ran dotnet build and dotnet msbuild, but the build didn't fail.

This is exactly what you don't need as a software developer: doing exactly the same thing twice, where one time it works and the other time it fails, and you have no idea why.

I tried a lot of things and compared project files, solution files, and .editorconfig files. Actually, I compared it with the Weather Stats application I used in the last post. In the end I found one line in the PropertyGroup of the project files of the weather application that shouldn't be there but actually was the reason why it worked.

<CodeAnalysisRuleSet>..\editorconfig.ruleset</CodeAnalysisRuleSet>

While trying to get it running for the last post, I also experimented with a ruleset file. The ruleset file is an XML file that can be used to enable or disable analysis rules in VS2019. I added a ruleset file to the solution and linked it into the projects, but forgot about that.

So it seemed the failing builds of the last post weren't because of the .editorconfig but because of this ruleset file.

It also seemed the ruleset file is needed to get it working. That shouldn't be the case and I asked the folks via the GitHub Issue about that. The answer was fast:

  • Fact #1: The ruleset file isn't needed

  • Fact #2: The regular .editorconfig entries don't work yet

The solution

Currently the ruleset entries have been moved to the .editorconfig. This means you need to add IDE-specific entries to the .editorconfig to get it running, which also means you will have redundant entries until all the code style analyzers are moved to Roslyn and mapped to the .editorconfig:

# IDE0007: Use 'var' instead of explicit type
dotnet_diagnostic.IDE0007.severity = error

# IDE0055 Fix formatting
dotnet_diagnostic.IDE0055.severity = error

# IDE005_gen: Remove unnecessary usings in generated code
dotnet_diagnostic.IDE0005_gen.severity = error

# IDE0065: Using directives must be placed outside of a namespace declaration
dotnet_diagnostic.IDE0065.severity = error

# IDE0059: Unnecessary assignment
dotnet_diagnostic.IDE0059.severity = error

# IDE0003: Name can be simplified
dotnet_diagnostic.IDE0003.severity = error  

As mentioned, these entries are already in the .editorconfig but written differently.

In the GitHub Issue they also wrote to add a specific line, in case you don't know all the IDE numbers. This line writes out warnings for all the possible code style failures. You'll see the numbers in the warning output and you can now configure how the code style failure should be handled:

# C# files
[*.cs]
dotnet_analyzer_diagnostic.category-Style.severity = warning

This solves the problem and it actually works really well.

Conclusion

Even if it solves the problem, I really hope this is an intermediate solution only, because of the redundant entries in the .editorconfig. I would prefer not to have the IDE-specific entries, but I guess this needs some more time and a lot of work done by Microsoft.

Holger Schwichtenberg: AsNoTrackingWithIdentityResolution() in Entity Framework Core 5.0 as of Preview 7

Microsoft has changed the name for the forced identity resolution when loading in "no-tracking" mode.

Jürgen Gutsch: .NET Interactive in Jupyter Notebooks

For almost a year now I have been doing a lot of Python projects. Actually, Python isn't that bad. Python and Flask for building web applications work quite similarly to NodeJS and ExpressJS. Similarly to NodeJS, Python development is really great using Visual Studio Code.

People who are used to Python know Jupyter Notebooks for creating interactive documentation. Interactive documentation means that the code snippets are executable and that you can use Python code to draw charts or to calculate and display data.

If I got it right, Jupyter Notebook was IPython in the past. Now Jupyter Notebook is a standalone project and the IPython project focuses on Python Interactive and Python kernels for Jupyter Notebook.

The so-called kernels extend Jupyter Notebook to execute a specific language. The Python kernel is the default. You are able to install a lot more kernels; there are kernels for NodeJS and more.

Microsoft is working on .NET Interactive and kernels for Jupyter Notebook. You are now able to write interactive documentation in Jupyter Notebook using C#, F#, and PowerShell as well.

In this blog post I'll try to show you how to install and to use it.

Install Jupyter Notebook

You need to have Python3 installed on your machine. The best way to install Python on Windows is to use Chocolatey:

choco install python

Actually, I have been using Chocolatey as a Windows package manager for many years and never had any problems.

Alternatively you could download and install Python 3 directly or use the Anaconda installer.

If Python is installed you can install Jupyter Notebook using the Python package manager PIP:

pip install notebook

You can now start Jupyter by just typing jupyter notebook in the console. This starts the Notebook with the default Python3 kernel. The following command shows the installed kernels:

jupyter kernelspec list

We'll see the python3 kernel in the console output:

Install .NET Interactive

The goal is to have the .NET Interactive kernels running in Jupyter. To get this done you first need to install the latest build of .NET Interactive from MyGet:

dotnet tool install -g --add-source "https://dotnet.myget.org/F/dotnet-try/api/v3/index.json" Microsoft.dotnet-interactive

Since NuGet is not the place to publish continuous integration build artifacts, Microsoft uses MyGet as well to publish previews, nightly builds, and continuous integration build artifacts.

Or install the latest stable version from NuGet:

dotnet tool install -g Microsoft.dotnet-interactive

Once this is installed, you can use dotnet interactive to install the kernels into Jupyter Notebook:

dotnet interactive jupyter install

Let's see, whether the kernels are installed or not:

jupyter kernelspec list

listkernels02

That's it. We now have four different kernels installed.

Run Jupyter Notebook

Let's run Jupyter by calling the next command. Be sure to navigate into a folder where your notebooks are or where you want to save your notebooks:

cd \git\hub\dotnet-notebook
jupyter notebook

startnotebook

It now starts a web server that serves the notebooks from the current location and opens a browser. The current folder will be the working folder for the currently running Jupyter instance. I don't have any files in that folder yet.

Here we have the Python3 and the three new .NET notebook types available:

notebook01

I now want to start playing around with a C# based notebook. So I create a new .NET (C#) notebook:

Try .NET Interactive

Let's add some content and a code snippet. At first I added a Markdown cell.

The so-called "cells" are content elements that support specific content types. A Markdown cell is one type, a Code cell is another. The latter executes a code snippet and shows the output underneath:

notebook02

That was an easy one. Now let's play with variable usage. I placed two more code cells and some small markdown cells below:

notebook03

And re-run the entire notebook:

notebook04

As in Python notebooks, the variables are valid in the entire notebook and not only in the single code cell.
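
As a small illustration (not the exact cells from the screenshots above), a variable defined in one code cell can be used in a later one:

// first code cell
var name = ".NET Interactive";
var counter = 0;

// a later code cell somewhere further down in the notebook
counter++;
Console.WriteLine($"Hello from {name}, run number {counter}");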

What else?

Inside a .NET Interactive notebook you can do the same stuff as in a regular code file. You are able to connect to a database, to Azure or just open a file locally on your machine. You can import namespaces as well as reference NuGet packages:

#r "nuget:NodaTime,2.4.8"
#r "nuget:Octokit,0.47.0"

using Octokit;
using NodaTime;
using NodaTime.Extensions;
using XPlot.Plotly;

VS Code

VS Code also supports Jupyter Notebooks using Microsoft's Python add-in:

vscode01

Actually, it needs a couple of seconds until the Jupyter server is started. Once it is up and running, it works like a charm in VS Code. I really prefer VS Code over the browser interface to write notebooks.

GitHub

If you use a notepad to open a notebook file, you will see that it is a JSON file that also contains the outputs of the code cells:

vscode01

Because of that, I was really surprised that GitHub supports Jupyter Notebooks as well and displays them in a human-readable format including the outputs. I expected to see the source code of the notebook instead of the output:

vscode01

The rendering is limited but good enough to read the document. This means, it could make sense to write a notebook instead of a simple markdown file on GitHub.

Conclusion

I really like the concept of interactive documentation. This is pretty common in the data science, analytics, and statistics universe. Python developers, as well as MATLAB developers, know that concept.

Personally, I see a great benefit in other areas, too, like learning materials, library and API documentation, and any documentation that focuses on code.

I also see a benefit for documentation about production lines, where several machines work together in a chain. Since you are able to use and execute .NET code, you could connect to machine sensors and read the state of the machines to display it in the documentation. The maintenance people would then be able to see the state directly in the documentation of the production line.

Marco Scheel: Beware of the Teams Admin center to create new teams (and assign owners)

The Microsoft Teams Admin Center can be used to create a new Team. The initial dialog allows you to set multiple owners for the Team. This feature was added over time and is a welcome addition to make the life of an administrator easier. But the implementation has a big shortcoming: the owners specified in this dialog will not become members of the underlying Microsoft 365 Group in Azure Active Directory. As a result, all Microsoft 365 Group services checking for members will not behave as expected. For example: these owners will not be able to access Planner. Other services like Teams and SharePoint work by accident.

image

Let's start with some basic information. Microsoft 365 Groups use a special AAD group type, but it is still a group in Azure Active Directory. A group in AAD is very similar to the old-school AD group in our on-premises directories. The group is made up of members. In most on-premises cases these groups are managed by your directory admins. But also in AD you can specify owners of a group who will then be able to manage these groups… if they have the right tool (dsa.msc, …). In the cloud, Microsoft (and myself) is pushing towards self-service for group management. This “self-service-first approach” has been obvious since the introduction of Office 365 Groups (now Microsoft 365 Groups). Teams, being one of the most famous M365 Group services, is also pushing towards the owner/member model where end users are owners of a Team (and therefore of an AAD group). All end-user-facing UX from Microsoft abstracts away how the underlying AAD group is managed. Teams, for example, will group owners and members in two sections:

image

But if you look at the underlying AAD group you will find, that every owner is also a member:

image

And the Azure AD portal shows a dedicated section to manage owners of the group:

image

The Microsoft Admin portal also has dedicated sections for members and owners:

image

In general it is important that your administrative staff is aware of how group membership (including ownership) works. The problem with the Teams Admin portal, as mentioned in the beginning, is that this initial dialog leaves the group in an inconsistent state without making the admin aware of this misalignment. Let's check the group created in the initial screenshot using the Teams Admin center. We specified 5 admin users and one member after the initial dialog. None of the admin users was added as a member in AAD (Microsoft 365 Admin portal screenshot):

image

But looking at the Teams Admin center won't show this “misconfiguration”:

image

If Leia wants to access the associated Planner service for this Team/Group the following error will show:

image

Planner checks against group membership. Only if the user is a member will Planner (and other services) check whether the user is also an owner and then show administrative controls. Teams itself looks ok. SharePoint implemented a hack: the owners of the AAD group are granted Site Collection Admin permissions, so every item is accessible, because that is what Site Collection Admins are for. If your users are reporting problems like this and everything looks ok based on the Teams Admin center, go check “a better” portal like AAD.

To fix the problem in the Teams Admin center, the owner has to be demoted to member state and then promoted to being an owner again.
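
If you prefer to repair it outside of the Teams Admin center, the missing membership can also be added with the AzureAD PowerShell module (a sketch; group name and user are placeholders, the ownership itself stays untouched):

# the affected Microsoft 365 group and the owner that is missing as a member
$group = Get-AzureADGroup -SearchString "My Team";
$user  = Get-AzureADUser -ObjectId "leia@contoso.com";

# add the owner as a regular member so member-based services like Planner work again
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId;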

A quick test with the SharePoint admin center shows that the system is managing membership as intended and every owner will also be a member. SharePoint requires one owner (Leia) in the first dialog and the second dialog allows additional owners and members:

image

The result for members in AAD is correct as all owners are also added as members:

image

Until this design flaw is fixed, we recommend not adding more owners in the initial dialog. The currently logged-in admin user will be added as the only owner of the created group. In the next step, remove your admin, add the requested users as members and promote them to owners. If you rely on self-service, this should not be a problem. If you have automated a custom creation process, hopefully you add every owner as a member as well; then you should be good most of the time. We at glueckkanja-gab support many customers with our custom lifecycle solution. We are adding an option to report this problem and an optional config switch to fix any detected misalignments.

I’ve created a UserVoice “idea” for this “bug”. So please, let’s go: vote!

https://microsoftteams.uservoice.com/forums/555103-public/suggestions/40951714-add-owners-also-as-members-aad-group-in-the-init 

Daniel Schädler: Creating a Kubernetes cluster with Minikube

Goal

My goal in this post is to create a Kubernetes cluster with my setup and to become more and more familiar with containers and Kubernetes. This article describes the procedure with my setup and follows the tutorial here.

Prerequisites

Before I can install a cluster, Minikube must be installed, which is a lightweight implementation of Kubernetes. It creates a virtual machine (referred to as VM below) on the local computer.

Procedure

In this section I go through the interactive tutorial locally, step by step, following the same steps.

Minikube version and start

Of course, you can also work with the interactive tutorial on kubernetes.io. Since I now have my own setup, I first start PowerShell as administrator and proceed as follows:

  1. Start PowerShell as administrator
  2. Show the Minikube version and start Minikube
Minikube version and start with PowerShell

For comparison, in the interactive terminal this looks as follows:

Minikube version and start in the interactive terminal on kubernetes.io

Cluster version

To interact with Kubernetes, the command-line tool kubectl is used. First, let's look at some cluster information. For this purpose we enter the command:

kubectl version
kubectl version with PowerShell

The interactive console displays this information a little more clearly.

kubectl version in the interactive terminal on kubernetes.io

Cluster details

To display the cluster details, we enter the following commands:

kubectl cluster-info
kubectl get nodes

The nodes command shows us all nodes that we can use for our applications.

kubectl cluster and node information in PowerShell

The interactive terminal shows us the information as follows:

kubectl cluster and node information in the interactive terminal on kubernetes.io

Conclusion

I never thought that this would be so easy, and I am aware that the challenges are yet to come. Now I am looking forward to the further modules and will work through them with the focus on a .NET application. It continues with the next module, using my installation.

Daniel Schädler: My first steps with Kubernetes / Minikube

Goal

The goal is to install Minikube on my Windows machine in order to get into the world of containers and Kubernetes. This is going to be a series of articles based on the following tutorials on Kubernetes.io.

Prerequisites

First, the following prerequisites must be met so that Minikube can be installed:

  • kubectl must be installed

I did this with PowerShell, which has to be run as administrator. The installation can then be initiated with the following commands.

Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force
install-kubectl.ps1 -DownloadLocation "C:\Program Files\kubectl"

It is possible that the NuGet provider has to be updated, as shown in the following image. Confirm this with "Y" and run the command again as described above.

Installation of Minikube with PowerShell

I myself had problems with the BITS file transfer that prevented the installation. To get on anyway, I downloaded the binary manually, copied it into the "C:\Program Files\kubectl" folder and specified the path.

The version was shown to me with the following command:

kubectl version --client
kubectl client version

Now all prerequisites for the Minikube installation have been completed, as described here: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Procedure

For the installation of Minikube I decided to use the Windows installer, which then has to be run as administrator.

Minikube installation as administrator

Minikube setup

Once the executable has been started as administrator, the installation steps are as follows:

  1. First, select the language "English"
Choose setup language
  2. Then the license agreement has to be confirmed.

  3. After that, choose the installation location. I left it at the default value as shown below.

Installation target for Minikube

Minikube installation confirmation

My setup is Windows 10 with Hyper-V, so Minikube has to be started accordingly. A list of drivers can be found here.

For this purpose, PowerShell must again be run with administrator rights and the following command executed:

minikube start --driver=hyperv

If everything has been entered correctly, Minikube starts and sets up its environment for use with Hyper-V, as shown in the following image.

Initializing Minikube with Hyper-V

Conclusion

This way I can take a look at Kubernetes locally in combination with Hyper-V and familiarize myself with it, without running up big costs in my private Azure portal. In further articles I will work my way through the tutorials here and later build up an environment that roughly matches what most deployments look like.

Golo Roden: Introduction to React, episode 5: React state

Applications do not just display data statically; they also change it from time to time, not least because of user input. How do you handle dynamic data in React?

Christian Dennig [MS]: Release to Kubernetes like a Pro with Flagger

Introduction

When it comes to running applications on Kubernetes in production, you will sooner or later face the challenge of updating your services with a minimum amount of downtime for your users… and – at least as important – of being able to release new versions of your application with confidence… that means you discover unhealthy and “faulty” services very quickly and are able to roll back to previous versions without much effort.

When you search the internet for best practices or Kubernetes addons that help you with these challenges, you will stumble upon Flagger, as I did, from WeaveWorks.

Flagger is basically a controller that will be installed in your Kubernetes cluster. It helps you with canary and A/B releases of your services by handling all the hard stuff like automatically adding services and deployments for your “canaries”, shifting load over time to these and rolling back deployments in case of errors.

As if that wasn’t good enough, Flagger also works in combination with popular Service Meshes like Istio and Linkerd. If you don’t want to use Flagger with such a product, you can also use it on “plain” Kubernetes, e.g. in combination with an NGINX ingress controller. Many choices here…

I like linkerd very much, so I’ll choose that one in combination with Flagger to demonstrate a few of the possibilities you have when releasing new versions of your application/services.

Prerequisites

linkerd

I already set up a plain Kubernetes cluster on Azure for this sample, so I’ll start by adding linkerd to it (you can find a complete guide how to install linkerd and the CLI on https://linkerd.io/2/getting-started/):

$ linkerd install | kubectl apply -f -

After the command has finished, let’s check if everything works as expected:

$ linkerd check && kubectl -n linkerd get deployments
...
...
control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
flagger                  1/1     1            1           3h12m
linkerd-controller       1/1     1            1           3h14m
linkerd-destination      1/1     1            1           3h14m
linkerd-grafana          1/1     1            1           3h14m
linkerd-identity         1/1     1            1           3h14m
linkerd-prometheus       1/1     1            1           3h14m
linkerd-proxy-injector   1/1     1            1           3h14m
linkerd-sp-validator     1/1     1            1           3h14m
linkerd-tap              1/1     1            1           3h14m
linkerd-web              1/1     1            1           3h14m

If you want to open the linkerd dashboard and see the current state of your service mesh, execute:

$ linkerd dashboard

After a few seconds, the dashboard will be shown in your browser.

Microsoft Teams Integration

For alerting and notification, we want to leverage the MS Teams integration of Flagger to get notified each time a new deployment is triggered or a canary release will be “promoted” to be the primary release.

Therefore, we need to set up a webhook in MS Teams (in an MS Teams channel!):

  1. In Teams, choose More options () next to the channel name you want to use and then choose Connectors.
  2. Scroll through the list of Connectors to Incoming Webhook, and choose Add.
  3. Enter a name for the webhook, upload an image and choose Create.
  4. Copy the webhook URL. You’ll need it when adding Flagger in the next section.
  5. Choose Done.

Install Flagger

Time to add Flagger to your cluster. For that, we will be using Helm (version 3, so no need for a Tiller deployment upfront).

$ helm repo add flagger https://flagger.app

$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

[...]

$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090 \
--set msteams.url=<YOUR_TEAMS_WEBHOOK_URL>

Check, if everything has been installed correctly:

$ kubectl get pods -n linkerd -l app.kubernetes.io/instance=flagger

NAME                       READY   STATUS    RESTARTS   AGE
flagger-7df95884bc-tpc5b   1/1     Running   0          0h3m

Great, looks good. So, now that Flagger has been installed, let’s have a look where it will help us and what kind of objects will be created for canary analysis and promotion. Remember that we use linkerd in that sample, so all objects and features discussed in the following section will just be relevant for linkerd.

How Flagger works

The sample application we will be deploying shortly consists of a VueJS Single Page Application that is able to display quotes from the Star Wars movies – and it’s able to request the quotes in a loop (to be able to put some load on the service). When requesting a quote, the web application is talking to a service (proxy) within the Kubernetes cluster which in turn talks to another service (quotesbackend) that is responsible to create the quote (simulating service-to-service calls in the cluster). The SPA as well as the proxy are accessible through a NGINX ingress controller.

After the application has been successfully deployed, we also add a canary object which takes care of the promotion of a new revision of our backend deployment. The Canary object will look like this:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: quotesbackend
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quotesbackend
  progressDeadlineSeconds: 60
  service:
    port: 3000
    targetPort: 3000
  analysis:
    interval: 20s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 70
    stepWeight: 10
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s

What this configuration basically does is watch for new revisions of the quotesbackend deployment. When that happens, it starts a canary deployment for it. Every 20s, it will increase the weight of the traffic split by 10% until it reaches 70%. If no errors occur during the promotion, the new revision will be scaled up to 100% and the old version will be scaled down to zero, making the canary the new primary. Flagger will monitor the request success rate and the request duration (linkerd Prometheus metrics). If one of them drops below the threshold set in the Canary object, a rollback to the old version will be started and the new deployment will be scaled back to zero pods.

To achieve all of the above-mentioned analysis, Flagger will create several new objects for us:

  • backend-primary deployment
  • backend-primary service
  • backend-canary service
  • SMI / linkerd traffic split configuration

The resulting architecture will look like that:

So, enough of theory, let’s see how Flagger works with the sample app mentioned above.

Sample App Deployment

If you want to follow the sample on your machine, you can find all the code snippets, deployment manifests etc. on https://github.com/cdennig/flagger-linkerd-canary

Git Repo

First, we will deploy the application in a basic version. This includes the backend and frontend components as well as an Ingress Controller which we can use to route traffic into the cluster (to the SPA app + backend services). We will be using the NGINX ingress controller for that.

To get started, let’s create a namespace for the application and deploy the ingress controller:

$ kubectl create ns quotes

# Enable linkerd integration with the namespace
$ kubectl annotate ns quotes linkerd.io/inject=enabled

# Deploy ingress controller
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ kubectl create ns ingress
$ helm install my-ingress ingress-nginx/ingress-nginx -n ingress

Please note that we annotate the quotes namespace to automatically get the Linkerd sidecar injected at deployment time. Any pod that is created within this namespace will be part of the service mesh and controlled via Linkerd.

As soon as the first part is finished, let’s get the public IP of the ingress controller. We need this IP address to configure the endpoint to call for the VueJS app, which in turn is configured in a file called settings.js of the frontend/Single Page Application pod. This file will be referenced when the index.html page gets loaded. The file itself is not present in the Docker image. We mount it during deployment time from a Kubernetes secret to the appropriate location within the running container.

One more thing: To have a proper DNS name to call our service (instead of using the plain IP), I chose to use NIP.io. The service is dead simple! E.g. you can simply use the DNS name 123-456-789-123.nip.io and the service will resolve to host with IP 123.456.789.123. Nothing to configure, no more editing of /etc/hosts…

So first, let’s determine the IP address of the ingress controller…

# get the IP address of the ingress controller...

$ kubectl get svc -n ingress
NAME                                            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
my-ingress-ingress-nginx-controller             LoadBalancer   10.0.93.165   52.143.30.72   80:31347/TCP,443:31399/TCP   4d5h
my-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.157.46   <none>         443/TCP                      4d5h

Please open the file settings_template.js and adjust the endpoint property to point to the cluster (in this case, the IP address is 52.143.30.72, so the DNS name will be 52-143-30-72.nip.io).

Next, we need to add the corresponding Kubernetes secret for the settings file:

$ kubectl create secret generic uisettings --from-file=settings.js=./settings_template.js -n quotes

As mentioned above, this secret will be mounted to a special location in the running container. Here’s the deployment file for the frontend – please see the sections for volumes and volumeMounts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: quotesfrontend
spec:
  selector:
      matchLabels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
  replicas: 1
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
    spec:
      containers:
      - name: quotesfrontend
        image: csaocpger/quotesfrontend:4
        volumeMounts:
          - mountPath: "/usr/share/nginx/html/settings"
            name: uisettings
            readOnly: true
      volumes:
      - name: uisettings
        secret:
          secretName: uisettings

Last but not least, we also need to adjust the ingress definition to be able to work with the DNS name / hostname. Open the file ingress.yaml and adjust the hostnames for the two ingress definitions. In this case, the resulting manifest looks like this:
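
The complete ingress.yaml is part of the repository linked above; as a rough sketch (everything except the host name, paths and service names is an assumption about that file), the two definitions could look like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quotesfrontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: 52-143-30-72.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: quotesfrontend
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quotesproxy-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: 52-143-30-72.nip.io
    http:
      paths:
      - path: /quotes
        backend:
          serviceName: quotesproxy
          servicePort: 80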

Now we are set to deploy the whole application:

$ kubectl apply -f base-backend-infra.yaml -n quotes
$ kubectl apply -f base-backend-app.yaml -n quotes
$ kubectl apply -f base-frontend-app.yaml -n quotes
$ kubectl apply -f ingress.yaml -n quotes

After a few seconds, you should be able to point your browser to the hostname and see the “Quotes App”:

Basic Quotes app

If you click on the “Load new Quote” button, the SPA will call the backend (here: http://52-143-30-72.nip.io/quotes), request a new “Star Wars” quote and show the result of the API Call in the box at the bottom. You can also request quotes in a loop – we will need that later to simulate load.

Flagger Canary Settings

We need to configure Flagger and make it aware of our deployment – remember, we only target the backend API that serves the quotes.

Therefore, we deploy the canary configuration (the canary.yaml file) discussed before:

$ kubectl apply -f canary.yaml -n quotes

You have to wait a few seconds and check the services, deployments and pods to see if it has been correctly installed:

$ kubectl get svc -n quotes

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quotesbackend           ClusterIP   10.0.64.206    <none>        3000/TCP   51m
quotesbackend-canary    ClusterIP   10.0.94.94     <none>        3000/TCP   70s
quotesbackend-primary   ClusterIP   10.0.219.233   <none>        3000/TCP   70s
quotesfrontend          ClusterIP   10.0.111.86    <none>        80/TCP     12m
quotesproxy             ClusterIP   10.0.57.46     <none>        80/TCP     51m

$ kubectl get po -n quotes
NAME                                     READY   STATUS    RESTARTS   AGE
quotesbackend-primary-7c6b58d7c9-l8sgc   2/2     Running   0          64s
quotesfrontend-858cd446f5-m6t97          2/2     Running   0          12m
quotesproxy-75fcc6b6c-6wmfr              2/2     Running   0          43m

$ kubectl get deploy -n quotes
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
quotesbackend           0/0     0            0           50m
quotesbackend-primary   1/1     1            1           64s
quotesfrontend          1/1     1            1           12m
quotesproxy             1/1     1            1           43m

That looks good! Flagger has created new services, deployments and pods for us to be able to control how traffic will be directed to existing/new versions of our “quotes” backend. You can also check the canary definition in Kubernetes, if you want:

$ kubectl describe canaries -n quotes

Name:         quotesbackend
Namespace:    quotes
Labels:       <none>
Annotations:  API Version:  flagger.app/v1beta1
Kind:         Canary
Metadata:
  Creation Timestamp:  2020-06-06T13:17:59Z
  Generation:          1
  Managed Fields:
    API Version:  flagger.app/v1beta1
[...]

You will also receive a notification in Teams that a new deployment for Flagger has been detected and initialized:

Kick-Off a new deployment

Now comes the part where Flagger really shines. We want to deploy a new version of the backend quote API – switching from “Star Wars” quotes to “Star Trek” quotes! What will happen, is the following:

  • as soon as we deploy a new “quotesbackend”, Flagger will recognize it
  • new versions will be deployed, but no traffic will be directed to them at the beginning
  • after some time, Flagger will start to redirect traffic to the new version via the canary service, using Linkerd / TrafficSplit configurations – starting, according to our canary definition, at a rate of 10%. So 90% of the traffic will still hit our “Star Wars” quotes
  • it will monitor the request success rate and advance the canary weight by 10% every 20 seconds
  • once a traffic split of 70% is reached without a significant number of errors, the deployment will be scaled up to 100% and promoted as the “new primary” (the corresponding analysis settings are sketched right after this list)
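
For reference, the analysis part of such a canary definition could look roughly like this – a sketch that matches the behaviour described above; the actual canary.yaml in the repository may differ in details:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: quotesbackend
  namespace: quotes
spec:
  provider: linkerd
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quotesbackend
  service:
    port: 3000
  analysis:
    interval: 20s          # check and advance every 20 seconds
    threshold: 5           # roll back after 5 failed checks
    maxWeight: 70          # promote once 70% of the traffic hits the canary
    stepWeight: 10         # increase the canary weight in steps of 10%
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99            # the success rate must stay at or above 99%
      interval: 1m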

Before we deploy it, let’s request new quotes in a loop (set the frequency e.g. to 300ms via the slider and press “Load in Loop”).

Base deployment: Load quotes in a loop.

Then, deploy the new version:

$ kubectl apply -f st-backend-app.yaml -n quotes

$ kubectl describe canaries quotesbackend -n quotes
[...]
[...]
Events:
  Type     Reason  Age                   From     Message
  ----     ------  ----                  ----     -------
  Warning  Synced  14m                   flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  14m                   flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  4m7s                  flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Starting canary analysis for quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Advance quotesbackend.quotes canary weight 10
  Warning  Synced  3m7s (x2 over 3m27s)  flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Normal   Synced  2m47s                 flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  2m27s                 flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  2m7s                  flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  107s                  flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  87s                   flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  67s                   flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  7s (x3 over 47s)      flagger  (combined from similar events): Promotion completed! Scaling down quotesbackend.quotes

You will notice in the UI that every now and then a quote from “Star Trek” appears…and that the frequency increases every 20 seconds as the canary deployment receives more traffic over time. As stated above, when the traffic split reaches 70% and no errors have occurred in the meantime, the canary/new version will be promoted to the “new primary version” of the quotes backend. From that point on, you will only receive quotes from “Star Trek”.

Canary deployment: new quotes backend servicing “Star Trek” quotes.

Because of the Teams integration, we also get a notification that a new version is being rolled out and – after the promotion to “primary” – that the rollout has finished successfully.

Starting a new version rollout with Flagger
Finished rollout with Flagger

What happens when errors occur?

So far, we have been following the “happy path”…but what happens if there are errors during the rollout of a new canary version? Let’s say we have introduced a bug in our new service that throws an error whenever a new quote is requested from the backend. Let’s see how Flagger behaves then…

The version that will be deployed starts throwing errors after a certain amount of time. Because Flagger monitors the “request success rate” via Linkerd metrics, it will notice that something is “not the way it is supposed to be”, stop the promotion of the new, error-prone version, scale it back to zero pods and keep the current primary backend (meaning: “Star Trek” quotes) in place.

$ kubectl apply -f error-backend-app.yaml -n quotes

$ kubectl describe canaries quotesbackend -n quotes
[...]
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  23m                    flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  23m                    flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  13m                    flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  3m43s (x4 over 9m43s)  flagger  (combined from similar events): New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m23s (x2 over 12m)    flagger  Advance quotesbackend.quotes canary weight 10
  Normal   Synced  3m23s (x2 over 12m)    flagger  Starting canary analysis for quotesbackend.quotes
  Warning  Synced  2m43s (x4 over 12m)    flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Warning  Synced  2m3s (x2 over 2m23s)   flagger  Halt quotesbackend.quotes advancement success rate 0.00% < 99%
  Warning  Synced  103s                   flagger  Halt quotesbackend.quotes advancement success rate 50.00% < 99%
  Warning  Synced  83s                    flagger  Rolling back quotesbackend.quotes failed checks threshold reached 5
  Warning  Synced  81s                    flagger  Canary failed! Scaling down quotesbackend.quotes

As you can see in the event log, the success rate drops significantly, so Flagger halts the promotion of the new version, scales it down to zero pods and keeps the current version as the “primary” backend.

New backend version throwing errors
Teams notification: service rollout stopped!

Conclusion

With this article, I have certainly only covered the features of Flagger very briefly. But this small example shows what a great relief Flagger can be when it comes to the rollout of new Kubernetes deployments. Flagger can do a lot more than shown here, and it is definitely worth taking a look at this product from WeaveWorks.

I hope I could give you some insight and make you want to dig deeper…and have fun with Flagger 🙂

As mentioned above, all the sample files, manifests etc. can be found here: https://github.com/cdennig/flagger-linkerd-canary.

Jürgen Gutsch: Getting the .editorconfig working with MSBuild

UPDATE: While trying out the .editorconfig and writing this post, I made a fundamental mistake. I added a ruleset file to the projects, and this is the reason why it worked. It wasn't really the .editorconfig in this case. I'm really sorry about that. Please read this post to learn how it really works.

In January I wrote a post about setting up VS2019 and VSCode to use the .editorconfig. In this post I'm going to write about how to get the .editorconfig settings checked during build time.
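
As a reminder, the settings in question are coding style rules in the .editorconfig, each with a severity. The following lines are just an example of such rules (not the complete file from the January post) – with the error severity, violations are supposed to break the build:

[*.cs]
# IDE0003: do not qualify members with 'this.'
dotnet_style_qualification_for_field = false:error
dotnet_style_qualification_for_property = false:error
dotnet_style_qualification_for_method = false:error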

It works like it should in the editors. And it works in VS2019 at build time. But it doesn't work at build time using MSBuild. This means it won't work with the .NET CLI, it won't work with VSCode and it won't work on any build server that uses MSBuild.

Actually, this is a huge downside of the .editorconfig. Why should we use the .editorconfig to enforce the coding style if a build in VSCode doesn't fail but one in VS2019 does? Why should we use the .editorconfig if the build on a build server doesn't fail? Not all developers are using VS2019; sometimes VSCode is the better choice. And we don't want to install VS2019 on a build server, nor do we want to call vs.exe to build the sources.

The reason why it is like this is as simple as it is unfortunate: the Roslyn analyzers that check the code against the .editorconfig are not yet done.

Actually, Microsoft is working on that and is porting the VS2019 coding style analyzers to Roslyn analyzers that can be downloaded and used via NuGet. Currently, about half of the work is done and some of the analyzers can already be used in a project. See here: #33558

With this post I'd like to try it out. We need this for our projects at the YOO, the company I work for, and I'm really curious how this is going to work in a real project.

Code Analyzers

To try it out, I'm going to use the Weather Stats App I created in previous posts. Feel free to clone it from GitHub and follow the steps I do within this post.

At first you need to add a NuGet package:

Microsoft.CodeAnalysis.CSharp.CodeStyle

This is currently a development version and hosted on MyGet. This requires you to follow the installation instructions on MyGet. Currently it is the following .NET CLI command:

dotnet add package Microsoft.CodeAnalysis.CSharp.CodeStyle --version 3.8.0-1.20330.5 --source https://dotnet.myget.org/F/roslyn/api/v3/index.json

The version number might change in the future. Currently I use the version 3.8.0-1.20330.5 which is out since June 30th.

You need to execute this command for every project in your solution.

After executing this command you'll have the following new lines in the project files:

<PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeStyle" Version="3.8.0-1.20330.5">
    <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    <PrivateAssets>all</PrivateAssets>
</PackageReference>

If not, just copy these lines into the project file and run dotnet restore to actually load the package.

This should be enough to get it running.

Adding coding style errors

To try it out I need to add some coding style errors. I simply added some, like these:
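
A sketch of the kind of violations I mean (a made-up snippet, not the actual code from the Weather Stats App) – explicit this. qualifications and, depending on your rules, explicit types where var is preferred:

using System.Net.Http;
using System.Threading.Tasks;

public class WeatherLoader
{
    private readonly HttpClient client = new HttpClient();

    public async Task<string> LoadAsync()
    {
        // IDE0003: unnecessary 'this.' qualification
        string json = await this.client.GetStringAsync("https://example.com/weather");

        // explicit type although 'var' might be configured as preferred
        System.Text.StringBuilder builder = new System.Text.StringBuilder(json);
        return builder.ToString();
    }
}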

Roslyn conflicts

Maybe you will get a lot of warnings saying that an instance of the analyzers cannot be created because of a missing Microsoft.CodeAnalysis 3.6.0 assembly, like this:

Could not load file or assembly 'Microsoft.CodeAnalysis, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

This might seem strange because the code analysis assemblies should already be available when Roslyn is used. Actually, this error happens if you do a dotnet build while VSCode is running the Roslyn analyzers. Strange, but reproducible. Maybe the Roslyn analyzers can only run once at a time.

To get it running without those warnings, you can simply close VSCode or wait for a few seconds.

Get it running

Actually, it didn't work on my machine the first few times. The reason was that I forgot to update the global.json. It still pinned a 3.0 SDK to run the analyzers, and that doesn't work.
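
For reference, a global.json pinning a 5.0 preview SDK looks roughly like this (the concrete version number is only an example and depends on the preview you have installed):

{
  "sdk": {
    "version": "5.0.100-preview.6.20318.15"
  }
}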

After updating the global.json to a 5.0 SDK (preview 6 in my case), the build failed as expected:

Since the migration of the IDE analyzers to Roslyn analyzers is only half done, not all of the errors will fail the build. This is why the IDE0003 rule doesn't appear here: I used the this keyword twice in the code above, which should also fail the build.

Conclusion

Actually, I was wondering why Microsoft didn't start earlier to convert the VS2019 analyzers into Roslyn code analyzers. This is really valuable for teams where developers use VSCode, VS2019, VS for Mac or any other tool to write .NET Core applications. It is not only about showing coding style errors in an editor; it should also fail the build in case coding style errors are checked in.

Anyway, it is working well now. And hopefully Microsoft will complete the set of analyzers as soon as possible.

Golo Roden: A voyage of discovery into your own programming language

No matter which programming language you develop in, there is something new for almost every developer to discover from time to time. But how can a systematic voyage of discovery into your own programming language be reconciled with everyday work?

Holger Schwichtenberg: Handling the VAT rates of 5 and 16 percent in the Elster advance VAT return

The tax authorities are now simply dispensing with the separation by tax rate. Developers of accounting solutions, however, have to be considerably more agile.

Code-Inside Blog: Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?

Within our product we are moving more and more stuff into the .NET Core land. Last week we had a discussion around the needed software requirements, and in the .NET Framework land this question was always easy to answer:

.NET Framework 4.5 or higher.

With .NET Core the answer is slightly different:

In theory, apps roll forward within a major version: e.g. if you compiled your app with .NET Core 3.0 and a .NET Core 3.1 runtime is the only installed 3.x runtime on the machine, this runtime is used.

This system is called “Framework-dependent apps roll forward” and sounds good.
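
The policy is driven by the runtimeconfig.json that is produced next to the compiled app. A .NET Core 3.0 build contains roughly the following (a sketch – details vary per project), and the default rollForward policy (“Minor”) allows a 3.1 runtime to satisfy the 3.0.0 framework reference:

{
  "runtimeOptions": {
    "tfm": "netcoreapp3.0",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "3.0.0"
    }
  }
}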

The bad part

Unfortunately, this didn’t work for us. Not sure why, but our app refused to work because a .dll was not found or missing. The reason is currently not clear. Be aware that Microsoft has written a hint that such things might occur:

It’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.

The good part

With .NET Core we could also ship the framework together with our app (a self-contained deployment), and then it should run fine wherever we deploy it.
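
A sketch of the corresponding publish call (the runtime identifier is just an example):

$ dotnet publish -c Release -r win-x64 --self-contained true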

Summary

Read the docs about the “app roll forward” approach if you have similar concerns, but test your app with that combination.

As a sidenote: 3.0 is not supported anymore, so it would be good to upgrade it to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.

Hope this helps!

Jürgen Gutsch: Exploring Orchard Core - Part 1

For a while now I have planned to try out the Orchard Core Application Framework. Back then I saw an awesome video where Sébastien Ros showed an early version of Orchard Core. If I remember right, it was this ASP.NET Community Standup: ASP.NET Community Standup - November 27, 2018 - Sebastien Ros on Headless CMS with Orchard Core

Why a blog series

Actually, this post wasn't planned to be a series, but as usual the posts are getting longer and longer. The more I write, the more comes to mind to write about. Bloggers know this, I guess. So I needed to decide whether I want to write one monster blog post or a series of smaller posts. Maybe the latter is easier to read and to write.

What is Orchard Core?

Orchard Core is an open-source modular and multi-tenant application framework built with ASP.NET Core, and a content management system (CMS) built on top of that application framework.

Orchard Core is not a new version of the Orchard CMS. It is a completely new thing written in ASP.NET Core. The Orchard CMS was designed as a CMS, but Orchard Core was designed to be an application framework that can be used to build a CMS, a blog or whatever you want to build. I really like the idea to have a framework like this.

I don't want to repeat the stuff, that is already on the website. To learn more about it just visit it: https://www.orchardcore.net/

I had a look into the Orchard CMS back then, when I was evaluating a new blog. It was good, but I didn't really feel confident with it.

Currently the RC2 has been out for a couple of days, and version 1 should be released in September 2020. The roadmap already defines features for future releases.

Let's have a first glimpse

When I try a CMS or something like this, I try to follow the quick start guide. I want to start the application up to get a first look and feel. As a .NET Core fan-boy I decided to use the .NET CLI to run the application. But first I have to clone the repository to have a more detailed look later on and to run the sample application:

git clone https://github.com/OrchardCMS/OrchardCore.git

This clones the current RC2 into a local repository.

Then we need to cd into the repository and into the web sample:

cd OrchardCore\
cd src\OrchardCore.Cms.Web\

Since this should be an ASP.NET Core application, it should be possible to run the dotnet run command:

dotnet run

As usual in ASP.NET Core I get two URLs to call the app. The HTTP version on port 5000 and the HTTPS version on port 5001.

I should now be able to call the CMS in the browser. Et voilà:

Since every CMS has an admin area, I tried /admin for sure.

At the first start it asks you to set initial credentials and stuff like this. I already did this before. At every other start I just see the log-in screen:

After the log-in I feel myself warmly welcomed... kinda :-D

Actually, this screenshot is a little small because it hides the administration menu, which is the last item in the menu. You should definitely have a look at the /admin/features page, which has a ton of features to enable: stuff like a GraphQL API, Lucene search indexing, Markdown editing, templating, authentication providers and a lot more.

But I won't go through all the menu items. You can just have a look by yourself. I actually want to explore the application framework.

I want to see some code

This is why I stopped the application and opened it in VS Code, and this is where the fascinating stuff is.

Ok. This is where I thought the fascinating stuff is. There is almost nothing. There are a ton of language files, an almost empty wwwroot folder, some configuration files and the common files like a *.csproj, the Startup.cs and the Program.cs. Except for the localization part, it completely looks like an empty ASP.NET Core project.

Where is all the Orchard stuff? I expected a lot more to see.

The Program.cs looks pretty common, except for the usage of NLog, which is provided via the OrchardCore.Logging package:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using OrchardCore.Logging;

namespace OrchardCore.Cms.Web
{
    public class Program
    {
        public static Task Main(string[] args)
            => BuildHost(args).RunAsync();

        public static IHost BuildHost(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureLogging(logging => logging.ClearProviders())
                .ConfigureWebHostDefaults(webBuilder => webBuilder
                    .UseStartup<Startup>()
                    .UseNLogWeb())
                .Build();
    }
}

This clears the default logging providers and adds the NLog web logger. It also uses the common Startup class, which is really clean and doesn't need a lot of configuration.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace OrchardCore.Cms.Web
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddOrchardCms();
        }

        public void Configure(IApplicationBuilder app, IHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseStaticFiles();

            app.UseOrchardCore();
        }
    }
}

It only adds the services for the Orchard CMS in the method ConfigureServices and uses Orchard Core stuff in the method Configure.

Actually, this Startup configures Orchard Core as a CMS. It seems I would also be able to add Orchard Core to the ServiceCollection by using AddOrchardCore(). I guess this would just add the core functionality to the application. Let's see if I'm right.

Both the AddOrchardCms() and the AddOrchardCore() methods are overloaded and can be configured using an OrchardCoreBuilder. Using these overloads you can add Orchard Core features to your application. I guess the method AddOrchardCms() has a set of features preconfigured to behave like a CMS:
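
To illustrate the overload (a sketch based on my guessing here, not taken from the docs): the lambda hands you the OrchardCoreBuilder, which is the place to add or configure individual Orchard Core features on top of the preconfigured CMS set.

public void ConfigureServices(IServiceCollection services)
{
    services.AddOrchardCms(builder =>
    {
        // configure or add individual Orchard Core features/modules
        // via the OrchardCoreBuilder here
    });
}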

It is a lot of guessing and trying right now. But I didn't read any documentation until now. I just want to play around.

I also wanted to see what is possible with the UseOrchardCore() method, but this one just has an optional parameter that takes an action which receives the IApplicationBuilder. I'm not sure why this action is really needed. I mean, I would be able to configure ASP.NET Core features inside this action. I could also nest a lot of UseOrchardCore() calls. But why?

I think it is time to have a look into the docs at https://docs.orchardcore.net/en/dev/. Don't confuse it with the docs on https://docs.orchardcore.net/. Those are the Orchard CMS docs, which might be outdated now.

The docs are pretty clear. Orchard Core comes in two different targets: The Orchard Core Framework and the Orchard Core CMS. The sample I opened here is the Orchard Core CMS sample. To learn how the Framework works, I need to clone the Orchard Core Samples repository: https://github.com/OrchardCMS/OrchardCore.Samples

I will write about this in the next part of this series.

Not a conclusion yet

I will continue exploring the Orchard Core Framework within the next days and continue to write about it in parallel. The stuff I have seen so far is really promising, and I like the fact that it simply works without a lot of configuration. Exploring the new CMS would be another topic and really interesting as well. Maybe I will find some time for that in the future.

Mario Noack: NDepend 2020.1

When I was recently asked whether I would like to renew my review of the new NDepend version, I was happy to agree. The software describes itself as a “Swiss Army knife for .NET and .NET Core development teams”. It offers a wide range of options for examining a project for code-related problems and technical debt, for presenting this graphically over time, and for determining the concrete changes.

The installation is simple, and the Visual Studio integration is optional, quickly done and real added value for me, since I have quick access to all relevant functions right during development. Although NDepend then regularly and automatically collects analysis results, you notice nothing of this during normal work in Visual Studio. Unfortunately, that is not always the case with Visual Studio extensions from other vendors.

NDepend Dashboard

With a simple click on a large circle you get a small overview, can start an analysis manually or simply switch to the dashboard. There you get an assessment of the project quality. This is based on a very sophisticated set of rules developed by the vendor. I did not change the rule set myself; after all, I took up the code analysis project again because I see the tool as a neutral and incorruptible auditor of my software projects, one with considerably more experience in this area than I have. The dashboard provides a customisable overview. It includes charts showing how the key figures develop over time. Furthermore, especially after selecting a comparison baseline, there is a view of the project rating now and in direct comparison. This includes an overview of the number of issues grouped by severity. A click on that number takes you straight to a concrete list of the issues or problems. This is particularly helpful if a key figure has not developed as expected and you now want to get to the bottom of the cause.

NDepend issue list

This is also where the demanding part begins, though! You can very easily pick out the individual issues and are even shown the locations if they only existed in the comparison baseline, i.e. have since been removed or fixed. Furthermore, you can see the definition of each issue in a LINQ-like language, with a detailed explanation and further external references. The descriptions are very good but, by their nature, not easy reading, especially at the beginning! You need a firm will to raise your personal level here. Having no comprehension problems with English texts also helps. Then, however, you will find an ideal partner in NDepend.

A showpiece the vendor is particularly proud of is the dependency graph. The name says it all! There do not really seem to be any limits for this module. You can inspect the interaction of your own or external classes, namespaces or functions at a very high speed and keep it clear thanks to many options, often with very sensible defaults. This can be very helpful when reviewing or restructuring areas of code. Its performance class is impressively demonstrated by the presentation of the .NET Core 3 classes. In my own current projects, however, I am extremely familiar with the structure, which is why this module only plays a secondary role for me; for many other developers that will certainly not be the case.

Finally, a word on the price. It is roughly in the range of the JetBrains tools. Since the customer base will be noticeably smaller, this is more than fair in relation to the range of functions. On the positive side, it must be mentioned that NDepend is under constant development, and yet you never get the impression that the product is only finished at the customer's site. The obligatory subscription model is therefore justified. Compared to my older versions, I find the integration within Visual Studio, for example, considerably more comprehensive and polished.

Conclusion: the only thing I missed was combining the analysis with a version control system such as SVN or Git. That is a pity and certainly still holds great potential for the future. My personal highlight of NDepend clearly remains the large number of predefined rules. They are cleanly defined, well grouped, flexibly adjustable and, above all, well linked to the matching help topics.

Daniel Schädler: Documenting without Microsoft Word

Prerequisites

I work as a system engineer for the Swiss Confederation, and our daily challenge is that we are obliged to produce artefacts, for example for the handover to operations, in accordance with HERMES. As an example for this blog post I have taken the operations manual and its template, which is to be written here in Markdown.

The structure of the document is listed below:

  • System overview
  • Start of operations
    • Prerequisites for the start of operations
      • Procedure for the start of operations
      • Quality assurance after the start of operations
      • Requirements for the acceptance of the system
  • Execution and monitoring of operations
    • Operations monitoring
    • Data backup
    • Data protection checks
    • Statistics, key figures, metrics
    • Procedure in the event of errors
    • References to operational processes
  • Interruption or termination of operations
    • Stopping the system
    • Procedure for recommissioning
    • Quality assurance after recommissioning
    • Decommissioning of the system, archiving, handover
  • Support organisation
    • Support processes
    • Organisation with roles
  • Change management
    • Change management process
      • Change management with roles and contact information
  • Security requirements

For this I use the following Visual Studio Code extensions:

A very helpful source on how to use the Authoring Pack can be found here

Execution

For this, I create the necessary structure of the operations document on the file system. For me it looks like this:

  • System overview
  • Start of operations
  • Execution of operations
  • Interruption or termination of operations
  • Support organisation
  • Change management
  • Security requirements

As an example, the folder structure for the system overview.

All artefacts relevant to this chapter are contained in this folder. Unfortunately, it is currently not possible to reference SVG graphics with the Microsoft Authoring Pack.

However, the INCLUDE directive can be used to merge Markdown files, so that a complete operations manual can be produced at the end. This looks as follows:

[!INCLUDE [System overview](link to the chapter)]
[!INCLUDE [Start of operations](link to the chapter)]
[!INCLUDE [Execution of operations](link to the chapter)]
[!INCLUDE [Interruption of operations](link to the chapter)]

Conclusion

The Microsoft Authoring Pack helps a lot with documenting; it would just be desirable if graphics were supported not only as jpg/png but also as SVG, which yUML generates automatically after all. I am grateful for any helpful feedback.

Golo Roden: Introduction to React, part 4: Webpack

Anyone developing applications with React will sooner or later need a bundler such as Webpack. But how do you connect Webpack with React?

Holger Schwichtenberg: PowerShell 7: Null conditional operators ?. and ?[]

PowerShell 7.0 offers, as an experimental feature, the null conditional operator ?. for single objects and ?[] for collections.

Stefan Henneken: 10-year Anniversary

Exactly 10 years ago today, I published the first article here on my blog. The idea was born in 2010 during a customer training in Switzerland. The announced extensions of IEC 61131-3 were lively discussed at the dinner. I had promised the participants that evening to show a small example on this topic at the end of the training. At that time, Edition 3 of IEC 61131-3 had not yet been released, but CODESYS had its first beta versions, so that the participants could familiarize themselves with the language extensions. So later in the hotel room I started to keep my promise and prepared a small example.

Pleased about the interest in the new features of IEC 61131-3, I later sat at the gate in the airport and was able to think a little bit about the last days. I asked myself again and again whether and how I could pass on the example to all others who are interested. Since I was following certain blogs regularly at that time, and I still do, the idea had come up to run a blog as well.

At the same time, Microsoft offered an appropriate platform to run your own blog without having to deal with the technical details yourself. With the Live Writer, Microsoft also provided a free editor with which texts could be created very easily and loaded directly onto the weblog publication system. At the time, I wanted to save myself the effort of administering the blogging software on a web host. I preferred to invest the time in the content of the articles.

After a few considerations and a number of discussions, I published ‘test articles’ on C# and .NET. After these exercises and the experiences from the training, I created and published the first articles on IEC 61131-3. I also noticed that writing the articles deepened and consolidated my knowledge of the respective topic. In addition to IEC 61131-3, I also wanted to deal with topics related to .NET and therefore started a series on MEF and the TPL. But I also realized that I had to set priorities.

In the meantime, Microsoft stopped its blog service but offered a migration to WordPress. There is also the possibility to host the blog for free there. The statistics functions are very helpful: they provide information about the number of hits of each article and also list the countries from which the articles are retrieved. Fortunately, I saw the number of hits increase each year:

In 2014, I also made a decision to publish the articles not only in German but also in English. So in the last 10 years, about 70 posts have been published, 20 of which are in English. Most of the hits still come from the German-speaking countries. Here are the top 5 from 2019:

Germany        44.7 %
Switzerland     6.5 %
United States   6.3 %
Netherlands     4.3 %
Austria         4.1 %

Asian countries and India are hardly represented so far. Either access to WordPress is limited there, or the local search engines rate my site differently.

After all these years, I decided to switch to a paid service at WordPress. One reason is the free choice of an own URL. Instead of https://wordpress.StefanHenneken.com my blog is now accessible via https://StefanHenneken.net. Furthermore, advertising is turned off, which I didn’t always find suitable, and on which I had no influence at all. On this occasion, I also slightly changed the design of the sites.

I will continue to publish posts on IEC 61131-3 in German and English. In the medium term, however, new topics may be included.

At this point, I would like to thank all readers. I am always glad about a comment or if my page is recommended via LinkedIn, Xing, or whatever other means. My thanks also go to the people who have helped with the creation of the texts through comments, suggestions for improvement or proofreading.

Stefan Henneken: 10-jähriges Jubiläum

Auf den Tag genau ist es 10 Jahre her, das ich den ersten Artikel hier auf meinem Blog veröffentlicht habe. Die Idee ist 2010 während einer Kundenschulung in der Schweiz entstanden. Beim Abendessen wurde über die angekündigten Erweiterungen der IEC 61131-3 rege diskutiert. Den Teilnehmern hatte ich an diesem Abend versprochen, zum Ende der Schulung ein kleines Beispiel zu diesem Thema zu zeigen. Damals war die Edition 3 der IEC 61131-3 noch nicht veröffentlicht, aber von CODESYS gab es die ersten Betaversionen, so dass man sich mit den Erweiterungen der Sprache vertraut machen konnte. Somit begann ich später im Hotelzimmer mein Versprechen einzulösen und ich habe ein kleines Beispiel vorbereitet.

Erfreut über das Interesse an den Neuerungen der IEC 61131-3 saß ich später im Flughafen am Gate und konnte ein wenig über die letzten Tage nachdenken. Ich stellte mir immer wieder die Frage, ob und wie ich das Beispiel an andere Interessierte weitergeben könnte. Da ich zu dem Zeitpunkt bestimmte Blogs regelmäßig verfolgt habe, und das tue ich heute noch, war die Idee aufgekommen, ebenfalls einen Blog zu betreiben.

Microsoft bot zur gleichen Zeit eine entsprechende Plattform an, um einen eigenen Blog zu betreiben, ohne selbst sich um die technischen Details kümmern zu müssen. Auch lieferte Microsoft mit dem Live Writer einen kostenlosen Editor an, mit dem die Texte sehr einfach erstellt und auf das Weblog-Publikationssystem direkt geladen werden konnten. Damals wollte ich mir den Aufwand ersparen bei einem Webhost selbst die Blog-Software zu administrieren. Die Zeit wollte ich lieber in den Inhalt der Artikel investieren.

Nach einigen Überlegungen und etlichen Diskussionen habe ich erstmal ‚Testartikel‘ zu C# und .NET veröffentlicht. Nach diesen Übungen und den Erfahrungen aus der Schulung habe ich die ersten Artikel zur IEC 61131-3 erstellt und veröffentlicht. Auch habe ich gemerkt, dass durch das Schreiben der Artikel sich mein Wissen zu dem jeweiligen Thema weiter vertiefte und gefestigt wurde. Neben der IEC 61131-3 wollte ich mich noch zusätzlich mit Themen rund um .NET beschäftigen und habe deshalb auch eine Serie zu MEF und der TPL gestartet. Ich merkte aber auch, dass ich Schwerpunkte setzen musste.

Zwischenzeitig stellte Microsoft seinen Blog-Dienst ein, bot aber eine Migration nach WordPress an. Auch dort besteht die Möglichkeit, den Blog kostenlos zu hosten. Sehr hilfreich sind die Statistikfunktionen. Diese geben Auskunft über die Anzahl der Aufrufe der jeweiligen Artikel. Auch wird aufgeführt aus welchem Land die Aufrufe kommen. Erfreulicher Weise konnte ich beobachten, wie jedes Jahr die Anzahl der Aufrufe zunahm:

Ab 2014 faste ich zusätzlich den Endschluss, die Artikel nicht nur in Deutsch, sondern auch in Englisch zu veröffentlichen. Somit wurden in den letzten 10 Jahren ca. 70 Posts veröffentlicht, wovon 20 in Englisch sind. Nach wie vor, kommen aber die meisten Aufrufe aus dem deutschsprachigen Raum. Hier die Top 5 aus dem Jahr 2019:

Germany        44,7 %
Switzerland     6,5 %
United States   6,3 %
Netherlands     4,3 %
Austria         4,1 %

Kaum vertreten sind bisher die asiatischen Länder und auch Indien. Entweder sind die Zugriffe auf WordPress nur eingeschränkt möglich oder die dortigen Suchmaschinen bewerten meine Seite anders.

Nach all den Jahren habe ich mich dazu entschlossen, bei WordPress auf einen kostenpflichtigen Dienst umzustellen. Ein Grund ist die freie Wahl einer eignen URL; statt  https://wordpress.StefanHenneken.com ist mein Blog jetzt über https://StefanHenneken.net erreichbar. Außerdem wird die Werbung ausgeblendet, die ich nicht immer passend fand und auf der ich keinerlei Einfluss nehmen konnte. Bei der Gelegenheit wurde auch das Design der Seiten ebenfalls etwas angepasst.

Inhaltlich werde ich weiterhin Posts zu IEC 61131-3 in Deutsch und Englisch veröffentlichen. Mittelfristig werden aber evtl. noch neue Themengebiete hinzukommen.

An dieser Stelle möchte ich mich bei allen Lesern bedanken. Ich freue mich auch immer über einen Kommentar oder wenn meine Seite per LinkedIn, Xing, oder wie auch immer weiterempfohlen wird. Mein Dank gilt auch den Personen, die durch Hinweise, Verbesserungsvorschläge oder Korrekturlesungen bei der Erstellung der Texte geholfen haben.

Johnny Graber: Visualising code dependencies with NDepend 2020

NDepend is a handy tool for static code analysis. The big new feature in the 2020 version is the completely reworked dependency graph. I have already written about NDepend several times (here, here and here) and no longer want to do without it. The old dependency graph: so far, the graph could only be influenced to a limited extent at the top level. You could choose … Continue reading Abhängigkeiten im Code aufzeigen mit NDepend 2020

Holger Schwichtenberg: PowerShell 7: Null coalescing assignment operator ??=

Another way of handling the $null case has been added in PowerShell 7.0, namely the “null coalescing assignment” operator ??=.

Golo Roden: Discounted remote workshops on DDD, CQRS, TypeScript and cryptography

Training and further qualification are perhaps more important right now than ever before. Even though some people can be back in the office thanks to the eased restrictions, many are still working remotely from home. For them, the native web offers selected workshops at a discounted price.

Daniel Schädler: Documenting with Markdown and Visual Studio Code

Initial situation

Various tools are available for writing code. One of them is Visual Studio Code, which can be extended enormously through its wide variety of plug-ins. Many code documentations build on Markdown.

Goal

The goal is to create simple documentation, including graphics, with Visual Studio Code and then have it generated. The following case is to be covered:

  • Display a class diagram as an image in a Markdown file.

Of course, further diagrams are possible. The plug-in author offers assistance (https://marketplace.visualstudio.com/_apis/public/gallery/publishers/ms-vscode-remote/vsextensions/remote-ssh/0.51.0/vspackage), where many more examples can be found.

Preparations

For my installation I used the following components:

With that, we can start documenting.

Execution

First, a yuml file is created.

Creating the yuml file – the created yuml file

You can see that a preview is already active on the right-hand side. If you start the file with the intended tags, autocompletion is offered for “class”, which can be confirmed with Tab.

Autofill

Afterwards, the “stub code” is already generated.

Stub code

The generated image is also quite presentable and can then be embedded in the readme using Markdown syntax.

SVG reference in Markdown

Markdown reference to the SVG image

Generation of the SVG image is forced with the following option in the yuml file:

{generate:true}

It can then be embedded as shown below.

SVG reference in Markdown

Markdown reference to the SVG image
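
As a sketch (my own example, not the one from the screenshots): a minimal .yuml class diagram with forced SVG generation, followed by the Markdown reference to the generated image, could look like this:

// {type:class}
// {generate:true}
[Customer|+Name;+Email]->[Order]

![Class diagram](diagram.svg)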

The preview, even if it cannot compete with Word, is also quite presentable.

Doc Preview

Document preview

To start an export, press CTRL+P and then type Export.

Conclusion

With simple means, documentation can be created and exported that is quite presentable. Constructive criticism and suggestions are gladly received.

Golo Roden: Introduction to React, part 3: React Components

One of the most important aspects of React is its component orientation, because it enables the reusability of UI elements. But how do you write React components?

Holger Schwichtenberg: PowerShell 7: Null coalescing operator ??

The new PowerShell operator ?? returns the value of the preceding expression if it is not $null.

Jürgen Gutsch: [Off-Topic] 2020 will be a year full of challenges

This post really is off-topic.

It seemed that 2020 was starting out to be a good year. Life was kind of normal and I was looking forward to the upcoming events, community events as well as family events. I was really looking forward to the MVP Summit 2020, to meeting good friends again and to visiting my favorite city in the US.

But life wouldn't be life if there were no changes and challenges.

February 1st was the end of my old life but the start of a new one. A challenging one and a life full of new chances and opportunities.

What did change?

My wife and I, we broke up. Without any drama and stuff like this. It was a kind of spontaneous decision but a needed one. It was unexpected for our family and friends but we kind of knew about this for the last three years. It was a shock for the kids for sure but, as I said, there was never any drama and we are still friends and are talking and laughing together. The shock for the kids was a small one and a short one. Actually nothing really changed for the kids, except living in two houses which for sure is a huge change but they seem to love it, to enjoy it like a kind of adventure. Every house has other things they like, other rules and it seems they love to live in both houses.

This might also be shocking for friends who are reading this right now.

To leave the wife that was on my side for around 16 years felt strange and odd. But at the end it was a good decision for both of us.

This for sure results in new challenging situations, like moving into a new apartment and stuff like this.

What else did change?

The second challenging situation is the one that happens to all of us all over the world. The COVID-19 lockdowns all over the world are challenging for everyone. Especially the kids and their parents. Fortunately, I am able to work from home and, fortunately, child care is divided 50/50 between me and my wife. (She is still called my wife because we are still married.) But to work and to do home-schooling and child care in parallel is different, challenging and might be really hard and almost impossible for some parents.

Since COVID-19 reached central Europe, I have been working from home. Actually, today is the first day in weeks that I am commuting to work. It feels strange sitting in the train for more than one hour and wearing a face mask. The train is almost empty. Only a few people are talking, because of those annoying masks.

And, the first time since months, I have some time to write a blog post. I used to write while commuting, so I took the chance to write some stuff.

Actually, I started this post two weeks ago. Now it is only the second time since COVID-19 that I am commuting to work :-D

What comes next?

The new situation and the lack of commuting time are the reasons why I haven't written a blog post or anything else since January.

The move to the new apartment is done. The kids have known about the new situation for more than a month now and are getting used to it. Everything is settling and calming down, and the numbers around COVID-19 are getting better and better in central Europe. Let's have a small look into the future:

I'm going to start challenging myself again to do some more stuff for the developer community:

  • Writing a blog post at least every second week
    • Just ask for topics.
  • Rewriting my book to update it to ASP.NET Core 5.0 and really, really, really get it published
  • Writing some more technical articles for my favorite magazine.
  • Trying to do some talks on conferences and user groups
    • The next planned talk is at the DWX in November this year
  • Getting my streaming set up and running in the new apartment and start streaming again
    • The bandwidth could be a problem in the new apartment.
    • But just recording and feed a YouTube channel could be an option, too.

Actually, this is kind of challenging but why not? I really love challenges :-)

Christian Dennig [MS]: WSL2: Making Windows 10 the perfect dev machine!

Disclaimer: I work at Microsoft. And you might think that this makes me a bit biased about the current topic. However, I was an enthusiastic MacOS / MacBook user – both privately and professionally. I work as a so-called “Cloud Solution Architect” in the area of Open Source / Cloud Native Application Development – i.e. everything concerning container technologies, Kubernetes etc. This means that almost every tool you have to deal with is Unix based – or at least, it only works perfectly on that platform. That’s why I early moved to the Apple ecosystem, because it makes your (dev) life so much easier – although you get some not so serious comments on it at work every now and then 🙂

Well, things have changed…

Introduction

In this article I would like to describe how my current setup of tools / the environment looks like on my Windows 10 machine and how to setup the latest version of the Windows Subsystem for Linux 2 (WSL2) optimally – at least for me – when working in the “Cloud Native” domain.

Long story short…let’s start!

Basics

Windows Subsystem for Linux 2 (WSL2)

The whole story begins with the installation of WSL2, which is now available with the current version of Windows (Windows 10, version 2004, build 19041 or higher). The Linux subsystem has been around for quite a while now, but it has never been really usable – at least this is the case for version 1 (in terms of performance, compatibility etc.).

The bottom line is that WSL2 gives you the ability to run ELF64 Linux binaries on Windows – with 100% system call compatibility and “near-native” performance! The Linux kernel (optimized in size and performance for WSL2) is built by Microsoft from the latest stable branch based on the sources available on “kernel.org”. Updates of the kernel are provided via Windows Update.

I won’t go into the details of the installation process as you can simply get WSL2 by following this tutorial: https://docs.microsoft.com/en-us/windows/wsl/install-win10

It comes down to:

  • installing the Subsystem for Linux
  • enabling “Virtual Machine Platform”
  • setting WSL2 as the default version (the corresponding commands are sketched below)
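
For reference, these are roughly the commands the linked tutorial uses (run in an elevated PowerShell; please check the tutorial for the current steps):

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
# after a restart:
wsl --set-default-version 2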

Next step is to install the distribution of your choice…

Install Ubuntu 20.04

I decided to use Ubuntu 20.04 LTS because I already know the distribution well and have used it for private purposes for some time – there are, of course, others: Debian, openSUSE, Kali Linux etc. No matter which one you choose, the installation itself couldn’t be easier: all you have to do is open the Windows Store app, find the desired distribution and click “Install” (or simply click on this link for Ubuntu: https://www.microsoft.com/store/apps/9n6svws3rx71).

Windows Store Ubuntu 20.04 LTS
Ubuntu Installation

Once it is installed, you have to check whether “version 2” of the Subsystem for Linux is used (we have set “version 2” as the default, but just in case…). To do so, open a PowerShell prompt and execute the following commands:

C:\> wsl --list --verbose
  NAME                   STATE           VERSION
* Ubuntu-20.04           Running         2
  docker-desktop-data    Running         2
  docker-desktop         Running         2
  Ubuntu-18.04           Stopped         2

If you see “Version 1” for Ubuntu-20.04, please run…

C:\> wsl --set-version Ubuntu-20.04 2

This will convert the distribution to be able to run in WSL2 mode (grab yourself a coffee, the conversion takes some time 😉 ).

Windows Terminal

Next, you need a modern, feature-rich and lightweight terminal. Fortunately, Microsoft also delivered on this: the Open Source Windows Terminal.  It includes many of the features most frequently requested by the Windows command-line community including support for tabs, rich text, globalization, configurability, theming & styling etc.

The installation is also done via the Windows Store: https://www.microsoft.com/store/productId/9N0DX20HK701

Once it’s on your machine, we can tweak the settings of the terminal to use Ubuntu 20.04 as the default profile. To do so, open Windows Terminal and hit “Ctrl+,” (this opens the settings.json file in your default text editor).

Add the guid of the Ubuntu 20.04 profile to the “defaultProfile” property:

Default Profile
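
In settings.json this boils down to something like the following (the GUID is only a placeholder here – use the one of your Ubuntu-20.04 entry from the “profiles” list; Windows Terminal’s settings file tolerates comments):

{
    // must match the "guid" of the Ubuntu-20.04 profile under "profiles"
    "defaultProfile": "{00000000-0000-0000-0000-000000000000}"
}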

Last but not least we upgrade all existing packages to be up to date.

$ sudo apt upgrade

So, the “basics” are in place now…we have a terminal that’s running Ubuntu Linux in Windows. Next, let’s give it super-powers!

Setup / tweak the shell

The software that is now being installed is an extract of what I need for my daily work. Of course, the selection differs from what you might want (although I think this covers a lot of what someone in the “Cloud Native” space would install). Nevertheless, it was important for me to list almost everything here, because it basically also helps me if I have to set up the environment again in the future 🙂

SSH Keys

Since it’s in the nature of a developer to work with GitHub (and other services, of course :)), I first need an SSH key to authenticate against the service. To do this, I create a new key (or copy an existing one to ~/.ssh/), which I then publish to GitHub (via their website).

At the same time the key is added to ssh-agent, so you don’t have to enter the corresponding keyphrase all the time when using it.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
# start the ssh-agent in the background
$ eval $(ssh-agent -s)
> Agent pid 59566
$ ssh-add ~/.ssh/id_rsa

Oh My Zsh

Now comes the best part 🙂 To give the Ubuntu shell (which is bash by default) real superpowers, I replace it with zsh in combination with the awesome project Oh My Zsh (which provides hundreds of plugins, customizing options, tweaks etc. for it). zsh is an extended shell that brings many improvements and extensions compared to bash. Among other things, the shell can be themed, the command prompt adjusted, auto-completion used etc.

So, let’s install both:

$ sudo apt install git zsh -y
# After the installation has finished, add OhMyZsh...
$ sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

When ready, Oh My Zsh can be customized via the .zshrc file in your home directory (e.g. enable plugins, set the theme). Here are the settings I usually make:

  • Adjust Theme
  • Activate plugins

Let’s do this step by step…

Theme

As theme, I use powerlevel10k (great stuff!), which you can find here.

Sample: powerlevel10k (Source: https://github.com/romkatv/powerlevel10k)

The installation is very easy by first cloning the repo to your local machine and then activating the theme in ~/.zshrc (variable ZSH_THEME, see screenshot below):

$ git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/powerlevel10k

Adjust theme to use
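
The relevant line in ~/.zshrc then reads as follows (the value is the one documented in the powerlevel10k readme):

ZSH_THEME="powerlevel10k/powerlevel10k"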

The next time you open a new shell, a wizard guides you through all options of the theme and allows you to customize the look&feel of your terminal (if the wizard does not start automatically or you want to restart it, simply run p10k configure from the command prompt).

The wizard offers a lot of options. Just find the right setup for you, play around with it a bit and try out one or the other. My setup finally looks like this:

My powerlevel10k setup

Optional, but recommended…install the corresponding fonts (and adjust the settings.json of Windows Terminal to use these, see image below): https://github.com/romkatv/powerlevel10k#meslo-nerd-font-patched-for-powerlevel10k

Windows Terminal settings

Plugins

In terms of OhMyZsh plugins, I use the following ones:

  • git (git shortcuts, e.g. “gp” for “git pull“, “gc” for “git commit -v“)
  • zsh-autosuggestions / zsh-completions (command completion / suggestions)
  • kubectl (kubectl shortcuts / completion, e.g. “kaf” for “kubectl apply -f“, “kgp” for “kubectl get pods“, “kgd” for “kubectl get deployment” etc.)
  • ssh-agent (starts the ssh agent automatically on startup)

You can simply add them by modifying .zshrc in your home directory:

Activate oh-my-zsh plugins in .zshrc
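
For the plugins listed above, the plugins line could look like this – note that zsh-autosuggestions and zsh-completions are external plugins that first have to be cloned into ~/.oh-my-zsh/custom/plugins (see their readmes):

plugins=(git zsh-autosuggestions zsh-completions kubectl ssh-agent)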

Additional Tools

Now comes the setup of the tools that I need and use every day. I will not go into detail about all of them here, because most of them are well known or the installation is incredibly easy. The ones, that don’t need much explanation are:

Give me more…!

There are a few tools that I would like to discuss in more detail, as they are not necessarily widely used and known. These are mainly tools that are used when working with Kubernetes/Docker – which is exactly the area where kubectx/kubens and stern come in. Docker for Windows and Visual Studio Code are certainly well known to everyone and familiar from daily work. The reason why I want to talk about the latter two anyway is that they now integrate tightly with WSL2!

kubectx / kubens

Who doesn’t know it? You work with Kubernetes and have to switch between clusters and/or namespaces all the time…forgetting the appropriate commands to set the context correctly and typing yourself “to death”. This is where the tools kubectx and kubens come in and help you to switch between different clusters and namespaces quickly and easily. I never want to work with a system again where these tools are not installed – honestly. To see kubectx/kubens in action, here are the samples from their GitHub repo:

kubectx in action
kubens in action

To install both tools, follow these steps:

$ sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
$ sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
$ sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
$ mkdir -p ~/.oh-my-zsh/completions
$ chmod -R 755 ~/.oh-my-zsh/completions
$ ln -s /opt/kubectx/completion/kubectx.zsh ~/.oh-my-zsh/completions/_kubectx.zsh
$ ln -s /opt/kubectx/completion/kubens.zsh ~/.oh-my-zsh/completions/_kubens.zsh

To be able to work with the autocomletion features of these tools, you need to add the following line at the end of your .zshrc:

autoload -U compinit && compinit

Congrats, productivity gain: 100% 🙂

stern

stern allows you to stream the logs of multiple pods simultaneously to your local command line. In Kubernetes it is normal to have many services running at the same time that communicate with each other, and it is sometimes difficult to follow a call through the cluster. With stern, this becomes relatively easy, because you can select the pods whose logs you want to follow, e.g. via label selectors.

With the command stern -l application=scmcontacts e.g. you can stream the logs of all pods with the label application=scmcontacts to your local shell…which then looks like that (each color represents another pod!):

stern log streams

To install stern, use this script:

$ sudo curl -fsSL -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
$ sudo chmod 755 /usr/local/bin/stern

One more thing

Docker for Windows has been around for a long time and is probably running on your machine right now. What some people may not know is that Docker for Windows integrates seamlessly with WSL2. If you are already running Docker on Windows, a simple invocation of the settings is enough to enable Docker / WSL2 integration:

Activate WSL2 based engine
Choose WSL / distro integration

If you want more details about the integration, please visit this page: https://www.docker.com/blog/new-docker-desktop-wsl2-backend/ For this article, the fact that Docker now runs within WSL2 is sufficient 🙂

Last but not least, one short note. Of course, Visual Studio Code can also be integrated into WSL2. If you install a current version of the editor in Windows, all components to run VS Code with WSL2 are included.

A simple call of code . in the respective directory with your source code is sufficient to install the Visual Studio Code Server (https://github.com/cdr/code-server) in Ubuntu. This allows VSCode to connect remotely to your distro and work with source code / Frameworks that are located in WSL2.

That’s all 🙂

Wrap-Up

Pretty long blog post now, I know…but it contains all the tools that are necessary (take that with a “grain of salt” 😉 ) to make your Windows 10 machine a “wonderful experience” for you as a developer or architect in the “Cloud Native” space. You have a fully compatible Linux “system” which tightly integrates with Windows. You have .NET Core, Go, NodeJS, tools to work in the “Kubernetes universe”, the best code editor currently out there, git, ssh-agent etc. etc.…and a beautiful terminal which makes working with it simply fun!

For sure, there are things that I missed or simply don’t know about at the moment. I would love to hear from you if I forgot “the one” tool to mention. Looking forward to reading your comments/suggestions!

Hope this helps someone out there! Take care…

Photo by Maxim Selyuk on Unsplash

Holger Schwichtenberg: PowerShell 7: Accessing the most recent errors

Since PowerShell 7.0, the errors that occurred most recently can be retrieved with the Get-Error cmdlet.

Golo Roden: Writing domain code: Excusez-moi, do you sprechen Español?

Programming means writing not only technical code, but above all domain code. But in which natural language?
