Holger Schwichtenberg: New in .NET 6 [3]: dotnet new --update

The third part of this series on what's new in .NET 6 covers the new command-line command for updating project templates.

Thorsten Hans: VPN disconnects? Here’s why

If you’ve ever had issues with your VPN disconnecting frequently, don’t worry – you’re not the only one. In fact, everyone who’s ever used a VPN will tell you that they’ve had this happen to them at least once or twice.

This is precisely why we’ve decided to dedicate this article to VPN disconnects. In it, we will elaborate on the following 3 main points:

  1. Common disconnect issues and fixes
  2. What a VPN Kill Switch is and why you need it
  3. Other frequent VPN issues and fixes

Let’s start with…

Common reasons and common fixes for disconnects

Reason #1 The server you’re connected to is too far away.

The way that VPNs work already slows down the connection speed a bit, but adding the extra pressure of your data travelling across the globe and passing through several networks can be fatal to your Internet’s performance. This could easily lead to your VPN simply disconnecting.

Fix #1 Select a Different Server Location

Simply select a different server location from your VPN client. Try to stick to a server in your area, unless you absolutely need the VPN for services like BBC iPlayer or Netflix.

Reason #2 Your VPN protocol is not suitable

VPN protocols, just like network protocols, are sets of rules for formatting and processing data. Protocols exist as a kind of a translator and intermediary between computers. So, if you don’t have the right VPN protocol, that would make your connection quite hard to maintain.

Fix #2 Simply change the protocol

For most people, the OpenVPN protocol works quite well, but that doesn’t mean you shouldn’t test out other protocols supported by your VPN provider. We’ve read stories of people getting a more stable connection with the IKEv2 protocol, which provides an almost equal level of security.

Other protocols, such as PPTP and WireGuard, are also worth a try. Just make sure beforehand that your VPN provider supports them.

Reason #3 Data-heavy programs running in the background

Sometimes a program may be getting an update, and sometimes Windows can be causing this issue. There could be software that constantly tries to connect to the Internet, and this may be a problem for your VPN.

Fix #3 Check Task Manager and Settings

Check your Task Manager and your VPN app settings for any unnecessary apps running in the background. If you find any, simply disable them.

Extra fix: Simply restart everything. You know how your Internet is slow sometimes, and when you simply restart the router, it becomes faster? It’s the same principle, and sometimes all you need is a quick restart of your system.

Although not technically a solution to the problem with disconnecting VPNs, we think it is also important to talk about …

VPN Kill Switches

If the network connection drops, your computer’s IP address is no longer masked and goes back to its original form. This makes your activity easy to track, and what’s worse – this can happen without you even noticing.

In order to protect your identity even in the case of a disconnect, several VPN providers offer the “kill switch” feature. A kill switch is basically a mechanism that cuts your connection to the internet automatically in case of a VPN disconnect.

Keep in mind that even if a given VPN service comes with a kill switch, this does not make it automatically active. To activate many VPN kill switches, you need to go into the VPN app settings and activate the feature.

Other frequent issues and fixes

The issues we mentioned above are the most basic, most common ones that can happen to any user, at any time. Below you will find an additional issue that is limited to Disney Plus/BBC iPlayer/Netflix users.

VPN not working with Disney Plus/BBC iPlayer

The root cause of this issue does not lie in the VPN provider itself – it lies in the way these streaming services work. A common feature that Disney Plus, BBC iPlayer and Netflix share is “geo-blocking”, meaning that these services offer you specific shows based on your location. If, say, a specific show is available only in the UK, you would not be able to access it through the German version of the same service.

Typically, you can easily bypass geo-blocking with a VPN, BUT certain sites recognize when you're using one, and unfortunately these streaming services are among them. They typically detect a VPN by checking a database of IP addresses known to belong to VPN services. Constantly changing IP addresses to bypass these harsh restrictions takes resources, so only the biggest VPN providers can afford it; smaller, cheaper providers usually cannot.

So, if you’re having this precise issue, the best thing you could do is to opt for a high-quality paid VPN.

The post VPN disconnects? Here’s why appeared first on Xplatform.

Stefan Henneken: IEC 61131-3: SOLID – The Dependency Inversion Principle

Fixed dependencies are one of the main causes of poorly maintainable software. Of course, not all function blocks can exist completely independently of others; after all, they interact with each other and are therefore interrelated. By applying the Dependency Inversion Principle, however, these dependencies can be minimized, and changes can then be implemented more quickly.

Using a simple example, I will show how unwanted couplings between function blocks can arise. I will then resolve these dependencies with the help of the Dependency Inversion Principle.


The example contains three function blocks, each of which controls a different lamp. While FB_LampOnOff can only switch a lamp on and off, FB_LampSetDirect can set the output value directly to anything from 0 % to 100 %. The third block (FB_LampUpDown) can only dim the lamp relatively, in steps of 1 %, using the methods OneStepDown() and OneStepUp(). Its method OnOff() sets the output value immediately to 100 % or 0 %.


FB_Controller takes care of driving these three function blocks. One instance of each lamp type is instantiated in FB_Controller, and the desired lamp is selected via the property eActiveLamp of type E_LampType.

TYPE E_LampType :
(
  Unknown   := -1,
  SetDirect := 0,
  OnOff     := 1,
  UpDown    := 2
) := Unknown;
END_TYPE

For driving the different lamp types, FB_Controller in turn provides corresponding methods. The methods DimDown() and DimUp() dim the selected lamp down or up by 5 % respectively, while the methods On() and Off() switch the lamp on or off directly.

The Observer pattern is used to transfer the output value between the controller and the selected lamp. For this purpose, the controller contains an instance of FB_AnalogValue. FB_AnalogValue implements the interface I_Observer with the method Update(), while the three lamp function blocks implement the interface I_Subject. Via the method Attach(), each lamp block receives an interface pointer to the I_Observer interface of FB_AnalogValue. Whenever the output value changes in one of the three lamp blocks, the new value is passed to FB_AnalogValue via the method Update() of the I_Observer interface.
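The wiring just described is the classic Observer pattern. As an illustration only (the article's code is IEC 61131-3 Structured Text; the Python names below merely mirror it and are not the original TwinCAT code), the same idea can be sketched like this:

```python
class AnalogValue:
    """Observer: stores the output value it is notified about (cf. FB_AnalogValue)."""
    def __init__(self):
        self.value = 0

    def update(self, new_value: int) -> None:
        self.value = new_value


class LampSetDirect:
    """Subject: notifies its attached observer on every change (cf. FB_LampSetDirect)."""
    def __init__(self):
        self._light_level = 0
        self._observer = None

    def attach(self, observer: AnalogValue) -> None:
        self._observer = observer

    def detach(self) -> None:
        self._observer = None

    def set_light_level(self, level: int) -> None:
        self._light_level = level
        if self._observer is not None:          # cf. IF (_ipObserver <> 0)
            self._observer.update(self._light_level)


# Usage: the controller attaches its AnalogValue instance to the active lamp
value = AnalogValue()
lamp = LampSetDirect()
lamp.attach(value)
lamp.set_light_level(75)
print(value.value)  # 75
```

After detach(), further changes in the lamp no longer reach the observer, which is exactly how switching lamps is handled in the controller.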

Our example so far consists of the following actors:

The UML diagram shows how the individual elements relate to each other:

Let's take a closer look at the code of the individual function blocks.

FB_LampOnOff / FB_LampUpDown / FB_LampSetDirect

FB_LampSetDirect will serve as the example for the three lamp types here. FB_LampSetDirect has a local variable for the current output value and a local variable for the interface pointer to FB_AnalogValue.

VAR
  nLightLevel    : BYTE(0..100);
  _ipObserver    : I_Observer;
END_VAR

When FB_Controller switches to the lamp of type FB_LampSetDirect, it calls the method Attach() and passes the interface pointer to FB_AnalogValue on to FB_LampSetDirect. If the value is valid (not equal to 0), it is stored in the local variable (backing variable) _ipObserver.

Note: local variables that store the value of a property are also known as backing variables and are marked with an underscore in the variable name.

VAR_INPUT
  ipObserver     : I_Observer;
END_VAR
IF (ipObserver = 0) THEN RETURN; END_IF
_ipObserver := ipObserver;

The method Detach() sets the interface pointer to 0, after which the method Update() is no longer called (see below).

_ipObserver := 0;

The method SetLightLevel() receives the new output value and stores it in the local variable nLightLevel. It also calls the method Update() on the interface pointer _ipObserver. This is how the instance of FB_AnalogValue inside FB_Controller receives the new output value.

VAR_INPUT
  nNewLightLevel    : BYTE(0..100);
END_VAR
nLightLevel := nNewLightLevel;
IF (_ipObserver <> 0) THEN _ipObserver.Update(nLightLevel); END_IF

The methods Attach() and Detach() are identical in all three lamp blocks; the only differences lie in the methods that change the output value.


FB_AnalogValue contains very little code, since this function block serves solely to store the output value.

  _nActualValue   : BYTE(0..100);    // local backing variable

  nNewValue       : BYTE(0..100);    // VAR_INPUT of the method Update()

In addition, FB_AnalogValue has the property nValue, through which the current value is made available to the outside.


FB_Controller contains the instances of the three lamp blocks, plus an instance of FB_AnalogValue to receive the current output value of the active lamp. _eActiveLamp stores the current state of the property eActiveLamp.

VAR
  fbLampOnOff      : FB_LampOnOff();
  fbLampSetDirect  : FB_LampSetDirect();
  fbLampUpDown     : FB_LampUpDown();
  fbActualValue    : FB_AnalogValue();
  _eActiveLamp     : E_LampType;
END_VAR

Switching between the three lamps is done in the setter of the property eActiveLamp.



Off();
fbLampOnOff.Detach();
fbLampSetDirect.Detach();
fbLampUpDown.Detach();
CASE eActiveLamp OF
  E_LampType.OnOff:     fbLampOnOff.Attach(fbActualValue);
  E_LampType.SetDirect: fbLampSetDirect.Attach(fbActualValue);
  E_LampType.UpDown:    fbLampUpDown.Attach(fbActualValue);
END_CASE
_eActiveLamp := eActiveLamp;

When the property eActiveLamp is used to switch to another lamp, the currently active lamp is first switched off via the local method Off(). In addition, Detach() is called on all three lamps, which terminates any existing connection to FB_AnalogValue. Within the CASE statement, Attach() is then called on the new lamp, passing the interface pointer to fbActualValue. Finally, the state of the property is stored in the local variable _eActiveLamp.

The methods DimDown(), DimUp(), Off() and On() have the task of setting the desired output value. Since the individual lamp types offer different methods for this, each lamp type must be handled separately.

The method DimDown() dims the active lamp down by 5 %; the output value must not fall below 10 %, however.

CASE _eActiveLamp OF
  E_LampType.OnOff:
    // Dimming is not possible: switch the lamp off instead
    fbLampOnOff.Off();
  E_LampType.SetDirect:
    IF (fbActualValue.nValue >= 15) THEN
      fbLampSetDirect.SetLightLevel(fbActualValue.nValue - 5);
    END_IF
  E_LampType.UpDown:
    IF (fbActualValue.nValue >= 15) THEN
      fbLampUpDown.OneStepDown();
      fbLampUpDown.OneStepDown();
      fbLampUpDown.OneStepDown();
      fbLampUpDown.OneStepDown();
      fbLampUpDown.OneStepDown();
    END_IF
END_CASE

FB_LampOnOff only knows the states 0 % and 100 %, so dimming is not possible. As a compromise, dimming down therefore switches the lamp off (line 4).

With FB_LampSetDirect, the new output value can be set directly using the method SetLightLevel(). To do so, 5 is subtracted from the current output value and the result is passed to SetLightLevel() (line 7). The IF statement in line 6 ensures that the output value is not set below 10 %.

Since the method OneStepDown() of FB_LampUpDown reduces the output value by only 1 %, the method is called five times (lines 11-15). Here, too, an IF statement in line 10 ensures that the value does not fall below 10 %.
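For illustration, the branching just described can be condensed into a small Python sketch (the function and string tags are hypothetical; the original is Structured Text dispatching on E_LampType):

```python
# Illustrative sketch of FB_Controller.DimDown(): one branch per lamp type,
# each respecting the 10 % lower limit (values are percentages).
def dim_down(active_lamp: str, current_value: int) -> int:
    if active_lamp == "OnOff":
        return 0                      # dimming impossible: switch off instead
    if active_lamp == "SetDirect":
        if current_value >= 15:
            return current_value - 5  # set the new value directly
        return current_value
    if active_lamp == "UpDown":
        if current_value >= 15:
            for _ in range(5):        # five relative 1 % steps
                current_value -= 1
        return current_value
    return current_value

print(dim_down("SetDirect", 50))  # 45
print(dim_down("UpDown", 12))     # 12 (below the 15 % threshold, unchanged)
```

The same three-way branch has to be repeated in DimUp(), Off() and On(), which is exactly the duplication criticized below.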

DimUp(), Off() and On() have a comparable structure. A CASE statement handles the various lamp types separately, taking their particular features into account.

Sample 1 (TwinCAT 3.1.4024) on GitHub

Analysis of the implementation

At first glance, the implementation looks solid. The program does what it is supposed to do, and at its current size the code presented is maintainable. If it were guaranteed that the program would never grow, everything could stay as it is.

In practice, however, the current state corresponds more to the first development cycle of a larger project. Over time, extensions will make the small, manageable application grow in code size. A careful inspection of the code therefore makes sense right from the start; otherwise there is a risk of missing the right moment for fundamental optimizations, after which deficiencies can only be removed at great expense.

So what are the fundamental problems of the example above?

Point 1: The CASE statement

Every method of the controller contains the same CASE construct.

CASE _eActiveLamp OF

Although there is a recognizable similarity between the value of _eActiveLamp (e.g. E_LampType.SetDirect) and the local variable (e.g. fbLampSetDirect), each individual case still has to be handled and programmed manually.

Point 2: Extensibility

If a new lamp type is to be added, the data type E_LampType must first be extended. After that, the CASE statement has to be amended in every method of the controller.

Point 3: Responsibilities

Because the controller maps the commands onto all lamp types, the logic of one lamp type is spread across several FBs. This is an extremely impractical grouping: to understand how the controller drives a particular lamp type, you have to jump from method to method and pick the correct case out of each CASE statement.

Point 4: Coupling

The controller is tightly bound to the various lamp blocks and is therefore highly dependent on changes to the individual lamp types. Every change to the methods of a lamp type inevitably leads to adjustments to the controller as well.

Optimizing the implementation

At present, the example has fixed dependencies in one direction: the controller calls the methods of the respective lamp types. This direct dependency should be resolved, and for that we need a common abstraction layer.

Resolving the CASE statements

Abstract function blocks and interfaces lend themselves to this. In the following, I use the abstract function block FB_Lamp and the interface I_Lamp. The interface I_Lamp has the same methods as the controller. The abstract FB implements the interface I_Lamp and thus also provides all the methods of FB_Controller.

I showed how abstract function blocks and interfaces can be combined in IEC 61131-3: Abstrakter FB vs. Schnittstelle.

All lamp types inherit from this abstract lamp type, so from the controller's point of view they all look the same. In addition, the abstract FB implements the interface I_Subject.


The methods Detach() and Attach() of FB_Lamp are not declared as abstract and contain the necessary code. This means the code for these two methods does not have to be implemented again in every lamp type.

Since the lamp types inherit from FB_Lamp, they are all identical from the controller's point of view.

The method SetLightLevel() remains unchanged. Mapping the methods of FB_Lamp (DimDown(), DimUp(), Off() and On()) onto the respective lamp types is no longer done in the controller, but in the FB of each lamp type:

IF (nLightLevel >= 15) THEN
  SetLightLevel(nLightLevel - 5);
END_IF

So the controller is no longer responsible for the mapping; each lamp type takes care of it itself. The CASE statements in the methods of FB_Controller are eliminated completely.
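The same restructuring can be sketched in Python terms: an abstract base class takes the role of FB_Lamp, and each lamp type maps DimDown() onto its own capabilities (an illustrative sketch with hypothetical names, not the original ST code):

```python
from abc import ABC, abstractmethod
from typing import Optional

class Lamp(ABC):
    """Abstract base taking the role of FB_Lamp."""
    def __init__(self):
        self.light_level = 0

    @abstractmethod
    def dim_down(self) -> None: ...

class LampSetDirect(Lamp):
    def dim_down(self) -> None:
        if self.light_level >= 15:   # keep the 10 % lower limit
            self.light_level -= 5

class LampOnOff(Lamp):
    def dim_down(self) -> None:
        self.light_level = 0         # dimming impossible: switch off

class Controller:
    """Cf. FB_Controller: depends only on the Lamp abstraction."""
    def __init__(self):
        self.active_lamp: Optional[Lamp] = None

    def dim_down(self) -> None:
        if self.active_lamp is not None:
            self.active_lamp.dim_down()

ctrl = Controller()
ctrl.active_lamp = LampSetDirect()
ctrl.active_lamp.light_level = 50
ctrl.dim_down()
print(ctrl.active_lamp.light_level)  # 45
```

The controller's dim_down() is now a one-line forwarding call; the per-type branching has moved into the types themselves.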

Resolving E_LampType

Using E_LampType still ties the controller to the respective lamp types. But how can the different lamp types be selected if E_LampType is dropped? To achieve this, the desired lamp type is passed to the controller by reference via a property.


This way, any lamp type can be passed in; the only requirement is that the passed lamp type inherits from FB_Lamp. This defines all the methods and properties needed for the interaction between controller and lamp block.

Note: this technique of 'passing in' dependencies is also known as dependency injection.

Switching to the new lamp block takes place in the setter of the property refActiveLamp. There, the method Detach() of the active lamp is called (line 2), while Attach() is called on the new lamp in line 6. In line 4, the reference to the new lamp is stored in the local variable (backing variable) _refActiveLamp.

IF (__ISVALIDREF(_refActiveLamp)) THEN
  _refActiveLamp.Detach();
END_IF
_refActiveLamp REF= refActiveLamp;
IF (__ISVALIDREF(refActiveLamp)) THEN
  _refActiveLamp.Attach(fbActualValue);
END_IF

In the methods DimDown(), DimUp(), Off() and On(), the method call is forwarded to the active lamp via _refActiveLamp. Instead of the CASE statement, only a few lines remain, since it is no longer necessary to distinguish between the different lamp types.

IF (__ISVALIDREF(_refActiveLamp)) THEN
  _refActiveLamp.DimDown();
END_IF

The controller is thus generic: if a new lamp type is defined, the controller remains unchanged.

Admittedly, selecting the desired lamp type has thereby been handed over to the caller of FB_Controller, which now has to create the various lamp types and pass them to the controller. This is a good approach when, for example, all elements reside in a library: with the adjustments shown above, custom lamp types can now be developed without any changes to the library.

Sample 2 (TwinCAT 3.1.4024) on GitHub

Analysis of the optimization

Although a function block and an interface have been added, the amount of code has not grown. The code merely had to be restructured sensibly to eliminate the problems listed above. The result is a program structure that is viable in the long term, split into several consistently small artifacts with clear responsibilities. The UML diagram shows the new layout very well:

 (abstract elements are shown in italics)

FB_Controller no longer has a fixed binding to the individual lamp types. Instead, it works against the abstract function block FB_Lamp, which is passed into the controller via the property refActiveLamp. The individual lamp types are then accessed through this abstraction layer.

The definition of the Dependency Inversion Principle

The Dependency Inversion Principle consists of two guidelines and is described very well in the book (Amazon affiliate link *) Clean Architecture: Das Praxis-Handbuch für professionelles Softwaredesign by Robert C. Martin:

High-level modules should not depend on low-level modules. Both should depend on abstractions.

Applied to the example above, the high-level module is the function block FB_Controller. It should not directly access low-level modules, which contain the details; the low-level modules are the individual lamp types.

Abstractions should not depend on details. Details should depend on abstractions.

The details are the individual methods offered by the respective lamp types. In the first example, FB_Controller depends on the details of all lamp types: whenever a lamp type is changed, the controller has to be adapted as well.

What exactly does the Dependency Inversion Principle invert?

In the first example, FB_Controller accesses the individual lamp types directly, which makes FB_Controller (the higher level) dependent on the lamp types (the lower level).

The Dependency Inversion Principle inverts this dependency by introducing an additional abstraction layer. The higher level defines what this abstraction layer looks like, and the lower levels must fulfil these requirements. This reverses the direction of the dependencies.

In the example above, this additional abstraction layer was realized by combining the abstract function block FB_Lamp with the interface I_Lamp.


With the Dependency Inversion Principle there is a risk of over-engineering: not every coupling should be resolved. Where an exchange of function blocks is to be expected, the Dependency Inversion Principle can be a great help. Above, I mentioned the example of a library in which various function blocks depend on each other; if the user of the library wants to hook into these dependencies, fixed dependencies would prevent it.

The Dependency Inversion Principle also improves the testability of a system. FB_Controller can be tested completely independently of the individual lamp types. For the unit tests, an FB is created that derives from FB_Lamp. This dummy FB, which contains only the functionality required for testing FB_Controller, is also called a mock object. Jakob Sagatowski presents this concept in his post Mocking objects in TwinCAT.
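In the same illustrative Python terms as before (hypothetical names, not TwinCAT code), such a mock might look like this:

```python
from abc import ABC, abstractmethod

class Lamp(ABC):
    """Stands in for the abstract FB_Lamp."""
    @abstractmethod
    def on(self) -> None: ...

class MockLamp(Lamp):
    """Dummy lamp that only records calls, like a mock FB derived from FB_Lamp."""
    def __init__(self):
        self.calls = []

    def on(self) -> None:
        self.calls.append("on")

class Controller:
    """Cf. FB_Controller: can be tested without any real lamp type."""
    def __init__(self, lamp: Lamp):
        self.lamp = lamp

    def on(self) -> None:
        self.lamp.on()

mock = MockLamp()
Controller(mock).on()
print(mock.calls)  # ['on']
```

A test only has to inspect what the mock recorded; no real lamp hardware or lamp FB is involved.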

In the next post, I will analyze and further optimize the sample program with the help of the Single Responsibility Principle.

Holger Schwichtenberg: New in .NET 6 [2]: dotnet new --search

The second part of this series on what's new in .NET 6 presents the new command-line command for searching for project templates on www.nuget.org.

Golo Roden: Learning TypeScript: an introduction

Hardly anyone developing applications for the web and the cloud can get around TypeScript these days. Indeed, TypeScript solves a whole range of JavaScript's problems, particularly with regard to scalability in a team. How does TypeScript work, and what is the best way to learn it?

Thorsten Hans: What Are The 8 Best VPNs to Use in Qatar?

A VPN will protect and secure your online experience in Qatar, allowing you to bypass geo-restricted streaming platforms and use VoIP services like WhatsApp, FaceTime or Skype. WhatsApp calling, for example, does not work in Qatar. But with a VPN you can unblock WhatsApp calling and use it without restrictions by connecting to a server location where the service is available.
See the 8 best VPNs to use in Qatar:

  1. CyberGhost
  2. NordVPN
  3. ExpressVPN
  4. Surfshark
  5. PIA
  6. Windscribe
  7. hide.me
  8. ProtonVPN

The level of online censorship in Qatar is truly high, but it is perfectly legal to use a VPN service (paid or not) in the country – as long as you're not using it to commit fraud, of course. So, with that said, let's check out the 8 best VPNs that you can safely use while in Qatar, beginning with the most reliable paid VPNs:

1. CyberGhost

CyberGhost has 14 servers in Qatar, specializes in torrenting, gaming and streaming, and offers reliably fast speeds. The most important things you get with this VPN service are excellent customer support, AES-256-bit military-grade encryption, IP and DNS leak protection, up to 7 simultaneous connections, an automatic kill switch and a 45-day money-back guarantee. The best feature, however, is that the VPN has servers specialized for the biggest streaming platforms like Netflix and Hulu.

2. NordVPN

This classic option is considered to be the most secure VPN for Qatar. Users can have up to 6 simultaneous connections. NordVPN provides solid protection – tunnel-to-tunnel encryption (the highest level of security), SSL 2048-bit encryption, a strict no-logs policy and other additional protection layers. All of this ensures the safest possible online experience in Qatar. There is also a 30-day money-back guarantee.

3. ExpressVPN

Limitless monthly data transfer, 256-bit encryption, automatic kill switch, a very helpful 24/7 customer support, large bandwidth, no-logs policy, split-tunnelling, up to 5 simultaneous connections – that’s what you get with ExpressVPN! This VPN is an excellent choice for Qatar. There are no servers in the country but there are servers in the Middle East that are perfect for bypassing the government restrictions on the internet. There’s a 30-day money-back guarantee.

4. Surfshark

Surfshark offers strong privacy and security credentials as well as unlimited simultaneous connections. The 24/7 customer service live chat will have all your problems solved. You will enjoy convenient surfing and fast speeds on all servers while using Surfshark in Qatar. The AES-256-bit military-grade encryption, the kill-switch and the CleanWeb feature are all there to make sure your online experience will be 100% safe and your data – protected. The money-back guarantee is 30 days.

5. PIA

With PIA you can connect up to 10 devices simultaneously. The VPN has 11 servers in Qatar. This is considered to be the best VPN for torrenting in the country. There are top-notch security features: AES-256-bit military-grade encryption, Open-source software, ad and malware blocking (MACE), WireGuard protocol, a kill-switch. The unblocking ability of PIA is truly impressive and no geo-restrictions can ruin your streaming experience. You will get a 30-day money-back guarantee.

The Best Free VPNs for Qatar

It’s not advisable to use a free VPN service in Qatar. A free VPN provider can sell your data to third parties and your online experience will be unprotected and unpleasant. The speeds are also usually slow and the number of servers is very limited. It’s always better to invest in a paid VPN. However, if you insist on using a free VPN service, here are the 3 safest free VPN options for Qatar:

1. Windscribe

With the free plan, you will get 10GB of data every month, unlimited simultaneous connections, great privacy and security features (kill switch, 256-bit encryption, DNS/IPv6/WebRTC leak protection) as well as a minimal amount of annoying ads. The speed of the Windscribe free version is way lower than that of the paid version but it’s suitable for torrenting.

2. hide.me

The free plan of this VPN offers 2 GB of data per month, which is enough for browsing and streaming, 24/7 customer service for your convenience, a kill switch, AES-256-bit encryption and IP/DNS leak protection. The speeds are quite decent for a free VPN option. Users can stream in HD even on more distant servers. And there are no ads!

3. ProtonVPN

ProtonVPN is a high-quality service that doesn't restrict speed or traffic. The free version of this VPN is ad-free and offers decent speed and extended functionality. The truth is that no other free option comes close to matching the solid security and privacy provided by this VPN service.

The post What Are The 8 Best VPNs to Use in Qatar? appeared first on Xplatform.

Golo Roden: On our own behalf: the tech:lounge Reactober

The tech:lounge Reactober is a series of six webinars all about UI development with React – from getting started to deep dives.

Holger Schwichtenberg: New in .NET 6 [1]: dotnet sdk check

The first part of this series on what's new in .NET 6 is devoted to a new command that lists the available versions of the .NET runtime and the .NET SDK.

Christian Dennig [MS]: Secure Azure Cosmos DB access by using Azure Managed Identities

Learn how to use Azure RBAC to connect to Cosmos DB and increase the security of your application by using Azure Managed Identities.

A few months ago, the Azure Cosmos DB team released a new feature that enables developers to use Azure principals (AAD users, service principals etc.) and Azure RBAC to connect to the database service. The feature significantly increases the security and can completely replace the use of connection strings and keys in your application – things you normally should rotate every now and then to minimize the risk of exposed credentials. Furthermore, you can apply fine-grained authorization rules to those principals at account-, database- and container-level and control what each “user” is able to do.

To completely get rid of passwords and keys in your service, Microsoft encourages developers to use "managed identities". Managed identities provide an identity to applications and services to be used when connecting to Azure resources that support AAD authentication. There are two kinds of managed identities:

  • System-assigned identity: created automatically by Azure at the service level (e.g. Azure AppService) and tied to the lifecycle of it. Only that specific Azure resource can then use the identity to request a token and it will be automatically removed when the service is deleted
  • User-assigned identity: created independently of an Azure service as a standalone resource in your subscription. The identity can be assigned to more than one resource and is not tied to the lifecycle of a particular Azure resource

When a managed identity is added, Azure also creates a dedicated certificate that is used to request a token from the token provider of Azure Active Directory. When you finally want to access an AAD-enabled service like Cosmos DB or Azure KeyVault, you simply call the local metadata endpoint of the underlying service/Azure VM to request a token, which is then used to authenticate against the service – no need for a password, as the certificate of the assigned managed identity is used for authentication. By the way, the certificate is rotated automatically before it expires, so you don't need to worry about it at all. This is one big advantage of managed identities over service principals, where you would have to deal with expiring passwords yourself.


If you want to follow along with the tutorial, clone the sample repository from my personal GitHub account: https://github.com/cdennig/cosmos-managed-identity

To see managed identities and the Cosmos DB RBAC feature in action, we’ll first create a user-assigned identity, a database and add and assign a custom Cosmos DB role to that identity.

We will use a combination of Azure Bicep and the Azure CLI. So first, let’s create a resource group and the managed identity:

$ az group create -n rg-cosmosrbac -l westeurope
{
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/rg-cosmosrbac",
  "location": "westeurope",
  "managedBy": null,
  "name": "rg-cosmosrbac",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

$ az identity create --name umid-cosmosid --resource-group rg-cosmosrbac --location westeurope
{
  "clientId": "c9e48f4e-24c7-46af-834e-96cd48c2ce27",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/rg-cosmosrbac/providers/Microsoft.ManagedIdentity/userAssignedIdentities/umid-cosmosid",
  "location": "westeurope",
  "name": "umid-cosmosid",
  "principalId": "63cf3af1-7ee2-4d4c-9fe6-deb936065faa",
  "resourceGroup": "rg-cosmosrbac",
  "tags": {},
  "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}

After that, let’s see how we create the Cosmos DB account, database and the container for the data. To create these resources, we’ll use an Azure Bicep template. Let’s have a look at the different parts of it.

var location = resourceGroup().location
var dbName = 'rbacsample'
var containerName = 'data'

// Cosmos DB Account
resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' = {
  name: 'cosmos-${uniqueString(resourceGroup().id)}'
  location: location
  kind: 'GlobalDocumentDB'
  properties: {
    consistencyPolicy: {
      defaultConsistencyLevel: 'Session'
    }
    locations: [
      {
        locationName: location
        failoverPriority: 0
      }
    ]
    capabilities: [
      {
        name: 'EnableServerless'
      }
    ]
    disableLocalAuth: false
    databaseAccountOfferType: 'Standard'
    enableAutomaticFailover: true
    publicNetworkAccess: 'Enabled'
  }
}

// Cosmos DB database
resource cosmosDbDatabase 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases@2021-06-15' = {
  name: '${cosmosDbAccount.name}/${dbName}'
  location: location
  properties: {
    resource: {
      id: dbName
    }
  }
}

// Container
resource containerData 'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers@2021-06-15' = {
  name: '${cosmosDbDatabase.name}/${containerName}'
  location: location
  properties: {
    resource: {
      id: containerName
      partitionKey: {
        paths: [
          '/partitionKey'
        ]
        kind: 'Hash'
      }
    }
  }
}

The template for these resources is straightforward: we create a (serverless) Cosmos DB account with a database called rbacsample and a data container. Nothing special here.

Next, the template adds a Cosmos DB role definition. For that, we need to inject the principal ID of the previously created managed identity into the template.

@description('Principal ID of the managed identity')
param principalId string

var roleDefId = guid('sql-role-definition-', principalId, cosmosDbAccount.id)
var roleDefName = 'Custom Read/Write role'

resource roleDefinition 'Microsoft.DocumentDB/databaseAccounts/sqlRoleDefinitions@2021-06-15' = {
  name: '${cosmosDbAccount.name}/${roleDefId}'
  properties: {
    roleName: roleDefName
    type: 'CustomRole'
    assignableScopes: [
      cosmosDbAccount.id
    ]
    permissions: [
      {
        dataActions: [
          'Microsoft.DocumentDB/databaseAccounts/readMetadata'
          'Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*'
        ]
      }
    ]
  }
}

You can see in the template that you can scope the role definition (property assignableScopes), meaning at which level the role may be assigned. In this sample, it is scoped to the account level, but you can also choose “database” or “container” level (more information on that in the official documentation).
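The assignable scopes are simply paths beneath the account’s resource ID, narrowed with /dbs/&lt;database&gt; and /colls/&lt;container&gt;. The tiny Python helper below is purely illustrative (the function name is made up), but it follows that documented convention for the three levels:

```python
def cosmos_role_scope(account_id, database=None, container=None):
    """Build an assignableScopes entry for a Cosmos DB SQL role definition.

    account_id is the full ARM resource ID of the Cosmos DB account;
    database/container narrow the scope to the respective level.
    """
    if container and not database:
        raise ValueError("a container scope requires a database")
    scope = account_id
    if database:
        scope += "/dbs/" + database
    if container:
        scope += "/colls/" + container
    return scope

account = "/subscriptions/.../databaseAccounts/cosmos-demo"
account_scope = cosmos_role_scope(account)                          # account level
container_scope = cosmos_role_scope(account, "rbacsample", "data")  # container level
```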

In terms of permissions/actions, Cosmos DB offers fine-grained options that you can use when defining your custom role. You can even make use of wildcards, which we leverage in the current sample. Let’s have a look at a few of the predefined actions:

  • Microsoft.DocumentDB/databaseAccounts/readMetadata: read account metadata
  • Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/create: create items
  • Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read: read items
  • Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/delete: delete items
  • Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed: read the change feed

If you want to get more information on the permission model and available actions, the documentation goes into more detail. For now, let’s move on in our sample by assigning the custom role:

var roleAssignId = guid(roleDefId, principalId, cosmosDbAccount.id)

resource roleAssignment 'Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments@2021-06-15' = {
  name: '${cosmosDbAccount.name}/${roleAssignId}'
  properties: {
    roleDefinitionId: roleDefinition.id
    principalId: principalId
    scope: cosmosDbAccount.id
  }
}

Finally, we’ll deploy the whole template (you can find the template in the root folder of the git repository):

$ MI_PRINID=$(az identity show -n umid-cosmosid -g rg-cosmosrbac --query "principalId" -o tsv)

$ az deployment group create -f deploy.bicep -g rg-cosmosrbac --parameters principalId=$MI_PRINID -o none

After the template has been successfully deployed, let’s create a sample application that uses the managed identity to access the Cosmos DB database.


Let’s create a basic application that uses the managed identity to access Cosmos DB and create one document in the data container for demo purposes. Fortunately, when using the Cosmos DB SDK, things couldn’t be easier for you as a developer. You simply use an instance of the DefaultAzureCredential class from the Azure.Identity package (API reference), and all the “heavy lifting” – like preparing and issuing the request to the local metadata endpoint and using the resulting token – is done automatically for you. Let’s have a look at the relevant parts of the application:

var credential = new DefaultAzureCredential();

var cosmosClient = new CosmosClient(_configuration["Cosmos:Uri"], credential);
var container = cosmosClient.GetContainer(_configuration["Cosmos:Db"], _configuration["Cosmos:Container"]);

var newId = Guid.NewGuid().ToString();
await container.CreateItemAsync(new {id = newId, partitionKey = newId, name = "Ted Lasso"},
    new PartitionKey(newId), cancellationToken: stoppingToken);

Use the application in Azure

So far, we have everything prepared to use a managed identity to connect to Azure Cosmos DB: we created a user-assigned identity, a Cosmos DB database and a custom role tied to that identity. We also have a basic application that connects to the database via Azure RBAC and creates one document in the container. Let’s now create a service that hosts and runs the application.

Sure, we could use an Azure AppService or an Azure Function, but let’s exaggerate a little bit here and create an Azure Kubernetes Service cluster with the Pod Identity plugin enabled. This plugin lets you assign predefined managed identities to pods in your cluster and transparently use the local token endpoint of the underlying Azure VM (be aware that at the time of writing, this plugin is still in preview).

If you haven’t used the feature yet, you first must register it in your subscription and download the Azure CLI extension for AKS preview features:

$ az feature register --name EnablePodIdentityPreview --namespace Microsoft.ContainerService

$ az extension add --name aks-preview

The feature registration takes some time. You can check the status via the following command. The registration state should be “Registered” before continuing:

$ az feature show --name EnablePodIdentityPreview --namespace Microsoft.ContainerService -o table

Now, create the Kubernetes cluster. To keep things simple and cheap, let’s just add one node to the cluster:

$ az aks create -g rg-cosmosrbac -n cosmosrbac --enable-pod-identity --network-plugin azure -c 1 --generate-ssh-keys

# after the cluster has been created, download the credentials
$ az aks get-credentials -g rg-cosmosrbac -n cosmosrbac

Now, to use the managed identity and bind it to the cluster VM(s), we need to assign a special role called “Virtual Machine Contributor” to it:

$ MI_APPID=$(az identity show -n umid-cosmosid -g rg-cosmosrbac --query "clientId" -o tsv)
$ NODE_GROUP=$(az aks show -g rg-cosmosrbac -n cosmosrbac --query nodeResourceGroup -o tsv)
$ NODES_RESOURCE_ID=$(az group show -n $NODE_GROUP -o tsv --query "id")

# assign the role
$ az role assignment create --role "Virtual Machine Contributor" --assignee "$MI_APPID" --scope $NODES_RESOURCE_ID

Finally, we are able to create a Pod Identity and assign our managed identity to it:

$ MI_ID=$(az identity show -n umid-cosmosid -g rg-cosmosrbac --query "id" -o tsv)

$ az aks pod-identity add --resource-group rg-cosmosrbac --cluster-name cosmosrbac --namespace default  --name "cosmos-pod-identity" --identity-resource-id $MI_ID

Let’s look at our cluster and see what has been created:

$ kubectl get azureidentities.aadpodidentity.k8s.io
NAME                  AGE
cosmos-pod-identity   2m34s
Details of the pod identity

We now have the pod identity available in the cluster and are ready to deploy the sample application. If you want to create the Docker image on your own, you can use the Dockerfile located in the CosmosDemoRbac\CosmosDemoRbac folder of the repository. For your convenience, there’s already a pre-built / pre-published image on Docker Hub. We can now create the pod manifest:

# contents of pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aadpodidbinding: "cosmos-pod-identity"
spec:
  containers:
  - name: demo
    image: chrisdennig/cosmos-mi:1.1
    env:
      - name: Cosmos__Uri
        value: "https://cosmos-2varraanuegcs.documents.azure.com:443/"
      - name: Cosmos__Db
        value: "rbacsample"
      - name: Cosmos__Container
        value: "data"
  nodeSelector:
    kubernetes.io/os: linux

The most important part here is the aadpodidbinding label in the pod metadata, where we bind the previously created pod identity to our pod. This enables the application container within the pod to call the local metadata endpoint on the cluster VM to request an access token, which is finally used to authenticate against Cosmos DB. Let’s deploy it (you’ll find the pod.yaml file in the root folder).

$ kubectl apply -f pod.yaml

When the pod starts, the container will log some information to stdout, indicating whether the connection to Cosmos DB was successful. So, let’s query the logs:

$ kubectl logs demo

You should see output like the following:

Seems like everything works as expected. Open your database in the Azure Portal and have a look at the data container. You should see one entry:

Bonus: No more Primary Keys!

So far, we can connect to Cosmos DB via a user-assigned Azure Managed Identity. To increase the security of such a scenario even more, you can now disable the use of connection strings and Cosmos DB account keys altogether – preventing anyone from using this kind of information to access your account.

To do this, open the deploy.bicep file and set the property disableLocalAuth of the Cosmos DB account to true (we do this only now because it – of course – also disables access to the database via the Portal). Now reapply the template and try to open the Data Explorer in the Azure Portal again. You’ll see that you can’t access your data anymore, because the portal uses the Cosmos DB connection strings / account keys under the hood.


Getting rid of passwords (or connection strings / keys) while accessing Azure services and instead making use of Managed Identities is a fantastic way to increase the security of your workloads running in Azure. This sample demonstrated how to use this approach in combination with Cosmos DB and Azure Kubernetes Service (Pod Identity feature). It is – of course – not limited to these resources. You can use this technique with other Azure services like Azure KeyVault, Azure Storage Accounts, Azure SQL DBs etc. Microsoft even encourages you to apply this approach to eliminate probably the most exploited attack vector in cloud computing: exposed passwords (e.g. via misconfiguration, leaked credentials in code repositories etc.).

I hope this tutorial has shown you how easy it is to apply this approach and inspires you to use it in your next project. Happy hacking, friends 🖖


You can find the source code on my private GitHub account: https://github.com/cdennig/cosmos-managed-identity

Header image source: https://unsplash.com/@markuswinkler

Code-Inside Blog: Microsoft Build 2021 session recommendations

To be fair: Microsoft Build 2021 was some months ago, but the content might still be relevant today. Sooo… it took me a while, but here is a list of sessions that I found interesting. Some sessions are “better” and some “lighter”; the order doesn’t reflect that - it’s just the order in which I watched the videos.

The headline has a link to the video and below are some notes.

Build cloud-native applications that run anywhere

  • Azure Arc (GitHub & Policies)
  • AKS

Build differentiated SaaS apps with the Microsoft Cloud

  • Power Apps
  • “Light” session - only if you are interested in Microsoft’s “Low Code” portfolio

Build the next generation of collaborative apps for hybrid work


  • Overview Dev Platform (PowerApps, Graph…)
  • Fluid
  • Adaptive Cards
  • Project.Reunion / WebView 2

Mark Russinovich on Azure innovation and more!


  • Dapr
  • Story about RdcMan
  • Sysmon on Linux

Learn how to build exciting apps across meetings, chats, and channels within or outside Microsoft Teams


  • Microsoft Teams SDK
  • Azure Communication Services
  • Meeting Events, Media APIs, Share integration
  • Teams Connect
  • Adaptive Cards in Teams
  • Messaging Extensions in Outlook for Web
  • Together Mode scenes

What’s new for Windows desktop application development

  • Project Reunion
  • MAUI

Understand the ML process and embed models into apps


  • Azure ML
  • “Data scientist”: VS Code Demo with Jupyter Notebooks, PyTorch, TensorBoard
  • “MLOps”
  • Azure Machine Learning Studio
  • “Red/Blue”-Deployment via GitHub Actions

The future of modern application development with .NET


  • “.NET Core Momentum”
  • .NET Upgrade Assistant
  • Minimal web apis
  • MAUI
  • Blazor in Web & Desktop
  • Hot Reload

Scott Guthrie ‘Unplugged’ – Home Edition (Extended)

  • ScottGu
  • DevTools
  • GitHub Actions
  • Codespaces
  • Cosmos DB: Serverless, Cache, Encryption, Free tier enhancements
  • Azure AI

Build your first web app with Blazor & Web Assembly

  • Learning video

Develop apps with the Microsoft Graph Toolkit

  • “Low code” Learning video about the toolkit

Application Authentication in the Microsoft Identity platform

  • MSAL 2.0 & Microsoft Identity Platform
  • SPA App with JS
  • WebApps stuff with ASP.NET Core
  • Service apps

Double-click with Microsoft engineering leaders


  • “Whiteboarding-style”
  • GitOps Concepts
  • Velocity - Inner/Outer Loop
  • Data Analytics with Cosmos DB
  • Azure Cloud “overview”

.NET 6 deep dive; what’s new and what’s coming


  • .NET Momentum
  • .NET 5 - why
  • .NET 6 main features
  • EF Core
  • C# 10
  • Minimal WebApis
  • MAUI
  • Blazor
  • ASP.NET Core
  • Edit and Continue

Hope this helps.

Christian Dennig [MS]: Using the Transactional Outbox Pattern with Azure Cosmos DB for Guaranteed Delivery of Domain Events

This article is about how to build a resilient architecture for distributed applications leveraging basic Domain-Driven Design and the Transactional Outbox Pattern with Azure Cosmos DB and Azure Service Bus.


Get your hands dirty on GitHub: https://github.com/cdennig/transactional-ob-pattern

Microservice architectures become increasingly popular these days and promise to solve problems like scalability, maintainability, agility etc. by splitting your once monolithic application into smaller (micro-) services that can operate and scale on their own. While you want these services to be independent from other services of your application (no tight coupling!), this also means managing data necessary to operate independently within a dedicated datastore per service. In such a distributed, microservice-oriented application, you will end up running multiple datastores where data is often replicated between different services of your application.

How does such a service get hold of all the data it needs to run properly? Typically, you use a messaging solution like RabbitMQ, Kafka or Azure Service Bus that distributes events from your microservices via a messaging bus after a business object has been created, modified, deleted etc. By following such an approach, you avoid direct calls between your services and therefore avoid tightly coupling them together.

Let’s have a look at the “Hello, World!” example in that space: in an Ordering service, when a user wants to create a new order, the service receives the payload from a client application via a REST endpoint, maps the data to the internal representation of an Order object (validating the data), and after a successful commit to the database, publishes an OrderCreated event to a message bus. Any other service interested in newly created orders (e.g. an Inventory or Invoicing service) would subscribe to these OrderCreated messages and process them accordingly. The following pseudo code shows how this would typically look in the Ordering service:

CreateNewOrder(CreateOrderDto order){
  // validate the incoming data
  // do some other stuff
  // save the object to the database
  var result = _orderRepository.Create(order);
  // finally, publish the respective event
  _messagingService.Publish(new OrderCreatedEvent(result));
  return Ok();
}

That all sounds wonderful until something unexpected happens between those two steps: saving the object and publishing the event. Imagine you successfully saved the Order object to the database and are in the middle of publishing the event to the message bus when your service crashes, the message bus is unavailable (for whatever reason), or the network between your service and the messaging system has a problem.

In a nutshell: you cannot publish the OrderCreated event to “the outside world”, although the actual object has already been saved. Now what? You end up with a severe problem and start saving the events to a secondary service like an Azure Storage Account to process them later or – even worse – writing code that keeps track of all the unpublished events in memory. Worst case, you end up with data inconsistencies in your other services because events are lost. Debugging these kinds of problems is a pure nightmare, and you want to avoid such situations whatever it takes.


So, what is the solution for the problem described above? There is a well-known pattern called Transactional Outbox Pattern that will help us here. The pattern ensures that events will be saved in a datastore (typically in an Outbox table of your database) before ultimately pushing them to the message broker. If the business object and the corresponding events are saved within the same database transaction, it is also guaranteed that no data will be lost – either everything will be committed or rolled back. To eventually publish the event, a different service or worker process can query the Outbox table for new entries, publish the events and mark them as “processed”. Using this pattern, you ensure that events will never be lost after creating or modifying a business object.

In a traditional database, the implementation in your service is fairly easy. If you e.g. use Entity Framework Core, you will use your EF context to create a database transaction, save the business object and the event and commit the transaction – or do a rollback in case of an error. Even the worker service that is processing the events data is straightforward: periodically query the Outbox table for new entries, publish the event payload to the message bus and mark the entry as “processed”.

Of course, in real life things are not as easy as they might look at first. Most importantly, you need to make sure that the order of the events, as they happened in the application, is preserved so that an OrderUpdated event doesn’t get published before an OrderCreated event.
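To make the worker side of the pattern concrete, here is a minimal Python sketch of such an outbox processor, with plain lists standing in for the Outbox table and the message broker. Processing strictly by sequence number and stopping at the first failure is one simple way to honor the ordering requirement; all names are illustrative:

```python
# stand-ins for the Outbox table and the message broker
outbox = [
    {"seq": 1, "event": "OrderCreated", "processed": False},
    {"seq": 2, "event": "OrderUpdated", "processed": False},
]
published = []

def publish(event):
    """Stand-in for e.g. sending to an Azure Service Bus topic."""
    published.append(event)

def process_outbox():
    # process strictly in sequence order, so OrderUpdated can never
    # overtake OrderCreated
    for entry in sorted(outbox, key=lambda e: e["seq"]):
        if entry["processed"]:
            continue
        try:
            publish(entry["event"])
        except Exception:
            break  # stop at the first failure to preserve ordering
        # mark as processed only after a successful publish
        entry["processed"] = True

process_outbox()
```

Because an entry is marked as processed only after the publish succeeds, a crash between the two steps can lead to an occasional duplicate delivery – which is why consumers of such events should be idempotent.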

How to implement the pattern with Azure Cosmos DB?

So far, we discussed the Transactional Outbox Pattern in theory and how it can help to implement reliable messaging in distributed applications. But how can we use the pattern in combination with Azure Cosmos DB and leverage some of the unique features like Cosmos DB Change Feed to make our developer life easier? Let’s have a detailed look at it by implementing a sample service that is responsible for managing Contact objects (Firstname, Lastname, Email, Company Information etc.). The sample service uses the CQRS pattern and follows basic Domain-Driven Design concepts.

Azure Cosmos DB

Cosmos DB Logo

First, Azure Cosmos DB – in case you haven’t heard of the service so far – is a globally distributed NoSQL database service that lets you use different APIs like Gremlin, MongoDB and the Core SQL API (which we will focus on in this article) to manage your application data – with 99.999% SLAs on availability. Cosmos DB can automatically horizontally scale your database, promising unlimited storage and throughput. The service keeps its promise by partitioning your data based on a user-defined key provided at container (the place where your documents will be stored) creation, the so-called “partition key”. That means your data is divided into logical subsets within a container based on that partition key – you may have also heard of the term “sharding” in case you are familiar with MongoDB. Under the hood, Cosmos DB will then distribute these logical subsets of your data across (multiple) physical partitions.
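The effect of the partition key can be illustrated with a toy sketch. Cosmos DB’s actual hashing is internal to the service; the Python below only demonstrates the principle that documents sharing a partition key always land in the same (logical and therefore physical) partition:

```python
import hashlib
from collections import defaultdict

def physical_partition(partition_key, partition_count=4):
    """Toy stand-in for Cosmos DB's internal partition-key hashing."""
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % partition_count

docs = [
    {"id": "1", "partitionKey": "contact-a"},
    {"id": "2", "partitionKey": "contact-a"},
    {"id": "3", "partitionKey": "contact-b"},
]

placement = defaultdict(list)
for doc in docs:
    placement[physical_partition(doc["partitionKey"])].append(doc["id"])
# documents "1" and "2" share a partition key and therefore
# always end up in the same partition
```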

Cosmos DB Transactional Batches

Due to the partitioning mechanism described above, transactions in Cosmos DB work slightly differently than you might be used to from relational database systems. Cosmos DB transactions – called “transactional batches” – operate on a single logical partition and therefore guarantee ACID properties. That means you can’t save two documents in a transactional batch operation in different containers or even different logical partitions (read: with different partition keys). Because of that, the implementation of the Transactional Outbox Pattern with Cosmos DB differs slightly from the approach in a relational database, where you would simply have two tables: one for the business object and one Outbox table for the events. In Cosmos DB, we need to save the business object and the events data in the same logical partition (in the same container). The following chapters describe how this can be achieved.
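The single-partition constraint can be expressed as a simple guard. The illustrative Python check below mimics what the service enforces when you submit a transactional batch:

```python
class TransactionalBatchError(Exception):
    """Raised when a batch violates the single-partition constraint."""

def validate_batch(docs):
    # a Cosmos DB transactional batch must target exactly one
    # logical partition, i.e. one distinct partition key
    keys = {d["partitionKey"] for d in docs}
    if len(keys) != 1:
        raise TransactionalBatchError(
            "batch spans %d logical partitions, expected exactly 1" % len(keys))
    return True

# a business object and its event share a partition key -> valid batch
validate_batch([
    {"id": "c1", "partitionKey": "p1"},
    {"id": "e1", "partitionKey": "p1"},
])
```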

Custom Container Context, Repositories and UnitOfWork

The most important part of the implementation is a “custom container context” that is responsible for keeping track of objects that need to be saved in the same transactional batch. Think of it as a very lightweight “Entity Framework context” that maintains a list of created/modified objects and also knows in which Cosmos DB container (and logical partition) these objects need to be saved. Here’s the interface for it:

public interface IContainerContext
{
    public Container Container { get; }
    public List<IDataObject<Entity>> DataObjects { get; }
    public void Add(IDataObject<Entity> entity);
    public Task<List<IDataObject<Entity>>>
        SaveChangesAsync(CancellationToken cancellationToken = default);
    public void Reset();
}

The list within the “container context” component will hold Contact as well as DomainEvent objects, and both will be put in the same container – yes, we are mixing multiple types of objects in the same Cosmos DB container and use a Type property to distinguish between an “entity” and an “event”.
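Conceptually, the container then holds documents of both shapes side by side, and the Type discriminator is all a repository needs to select “its” documents. A quick illustrative Python filter:

```python
# two document shapes living in the same container,
# distinguished by the "type" property
container_docs = [
    {"id": "c1", "partitionKey": "c1", "type": "contact",
     "data": {"name": "Bertram Gilfoyle"}},
    {"id": "e1", "partitionKey": "c1", "type": "domainEvent",
     "data": {"action": "ContactNameUpdatedEvent"}},
]

def of_type(docs, doc_type):
    """What a per-type repository conceptually does when reading."""
    return [d for d in docs if d["type"] == doc_type]

contacts = of_type(container_docs, "contact")
events = of_type(container_docs, "domainEvent")
```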

For each type there exists a dedicated repository that defines/implements the data access. The Contact repository interface offers the following methods:

public interface IContactsRepository
{
    public void Create(Contact contact);
    public Task<(Contact, string)> ReadAsync(Guid id, string etag);
    public Task DeleteAsync(Guid id, string etag);
    public Task<(List<(Contact, string)>, bool, string)>
        ReadAllAsync(int pageSize, string continuationToken);
    public void Update(Contact contact, string etag);
}

The Event repository looks similar, except that there is only one method to create new events in the store:

public interface IEventRepository
{
    public void Create(ContactDomainEvent e);
}

The implementations of both repository interfaces get a reference via dependency injection to an IContainerContext instance to make sure that both operate on the same context.

The final component is a UnitOfWork that is responsible for committing the changes held in the IContainerContext instance to the database:

public class UnitOfWork : IUnitOfWork
{
    private readonly IContainerContext _context;
    public IContactRepository ContactsRepo { get; }

    public UnitOfWork(IContainerContext ctx, IContactRepository cRepo)
    {
        _context = ctx;
        ContactsRepo = cRepo;
    }

    public Task<List<IDataObject<Entity>>>
        CommitAsync(CancellationToken cancellationToken = default)
    {
        return _context.SaveChangesAsync(cancellationToken);
    }
}

We now have the components in place for data access. Let’s see how events are created and published.

Event Handling – Creation and Publication

Every time a Contact object is created, modified or (soft-) deleted, we want the system to raise a corresponding event so that we can notify other services interested in these changes immediately. The core of the solution provided in this article is a combination of Domain-Driven Design and the mediator pattern as proposed by Jimmy Bogard [1]. He suggests maintaining a list of domain events that happened due to modifications of the domain object and publishing these events right before saving the actual object to the database. The list of changes is kept in the domain object itself, so that no other component can modify the chain of events. The behavior of maintaining such events (IEvent instances) in a domain object is defined via an interface IEventEmitter<IEvent> and implemented in an abstract DomainEntity class:

public abstract class DomainEntity : Entity, IEventEmitter<IEvent>
{
    private readonly List<IEvent> _events = new();

    [JsonIgnore] public IReadOnlyList<IEvent> DomainEvents =>
        _events.AsReadOnly();

    public virtual void AddEvent(IEvent domainEvent)
    {
        var i = _events.FindIndex(0, e => e.Action == domainEvent.Action);
        if (i < 0)
        {
            _events.Add(domainEvent);
        }
        else
        {
            // replace the previous event for the same action,
            // keeping only the latest state
            _events.RemoveAt(i);
            _events.Insert(i, domainEvent);
        }
    }
}

When it comes to raising/adding domain events, it’s the responsibility of the Contact object. As mentioned before, the Contact entity follows basic DDD concepts. In this case, it means that you can’t modify any domain properties “from outside”: you don’t have any public setters in the class. Instead, it offers dedicated methods to manipulate the state and is therefore able to raise the appropriate events for a certain modification (e.g. “NameUpdated”, “EmailUpdated” etc.).

Here’s an example of updating the name of a contact – the event is raised via AddEvent right after the new name has been set:

public void SetName(string firstName, string lastName)
{
    if (string.IsNullOrWhiteSpace(firstName) ||
        string.IsNullOrWhiteSpace(lastName))
    {
        throw new ArgumentException("FirstName or LastName may not be empty");
    }

    Name = new Name(firstName, lastName);

    if (IsNew) return;

    AddEvent(new ContactNameUpdatedEvent(Id, Name));
    ModifiedAt = DateTimeOffset.Now;
}

The corresponding ContactNameUpdatedEvent that simply keeps track of the individual changes to the domain object looks as follows:

public class ContactNameUpdatedEvent : ContactDomainEvent
{
    public Name Name { get; }

    public ContactNameUpdatedEvent(Guid contactId, Name contactName) :
        base(Guid.NewGuid(), contactId, nameof(ContactNameUpdatedEvent))
    {
        Name = contactName;
    }
}

So far, we only “log” the events to the domain object and nothing gets published or even saved to the database. How can we persist the events in combination with the business object? Well, it’s done right before saving the domain object to Cosmos DB.

Within the SaveChangesAsync method of the IContainerContext implementation, we simply loop over all objects tracked in the container context and publish the events maintained in these objects. It’s all done in a private RaiseDomainEvents method (dObjs is the list of tracked entities of the container context):

private void RaiseDomainEvents(List<IDataObject<Entity>> dObjs)
{
    var eventEmitters = new List<IEventEmitter<IEvent>>();

    // Get all EventEmitters
    foreach (var o in dObjs)
        if (o.Data is IEventEmitter<IEvent> ee)
            eventEmitters.Add(ee);

    // Raise Events
    if (eventEmitters.Count <= 0) return;
    foreach (var evt in eventEmitters
        .SelectMany(eventEmitter => eventEmitter.DomainEvents))
        _mediator.Publish(evt);
}

In the last line of the method, we use the MediatR library to publish an event within our application – this is possible because all events – like ContactNameUpdatedEvent – also implement the INotification interface of the MediatR package.

Of course, we also need to handle these events – and this is where the IEventsRepository implementation comes into play. Let’s have a look at such an event handler:

public class ContactNameUpdatedHandler :
    INotificationHandler<ContactNameUpdatedEvent>
{
    private IEventRepository EventRepository { get; }

    public ContactNameUpdatedHandler(IEventRepository eventRepo)
    {
        EventRepository = eventRepo;
    }

    public Task Handle(ContactNameUpdatedEvent notification,
        CancellationToken cancellationToken)
    {
        EventRepository.Create(notification);
        return Task.CompletedTask;
    }
}

You can see that an IEventRepository instance gets injected into the handler class via the constructor. So, as soon as a ContactNameUpdatedEvent gets published in the application, the Handle method gets invoked and uses the events repository instance to create a notification object – which ultimately lands in the list of tracked objects in the IContainerContext object and therefore becomes part of the objects that will be saved in the same transactional batch to Cosmos DB.

Here are the important parts of the implementation of IContainerContext:

private async Task<List<IDataObject<Entity>>>
    SaveInTransactionalBatchAsync(List<IDataObject<Entity>> dObjs,
        CancellationToken cancellationToken)
{
    if (dObjs.Count > 0)
    {
        var pk = new PartitionKey(dObjs[0].PartitionKey);
        var tb = Container.CreateTransactionalBatch(pk);
        dObjs.ForEach(o =>
        {
            TransactionalBatchItemRequestOptions tro = null;

            if (!string.IsNullOrWhiteSpace(o.Etag))
                tro = new TransactionalBatchItemRequestOptions
                {
                    IfMatchEtag = o.Etag
                };

            switch (o.State)
            {
                case EntityState.Created:
                    tb.CreateItem(o, tro);
                    break;
                case EntityState.Updated or EntityState.Deleted:
                    tb.ReplaceItem(o.Id, o, tro);
                    break;
            }
        });

        var tbResult = await tb.ExecuteAsync(cancellationToken);
        // [check for return codes etc.]
    }

    var result = new List<IDataObject<Entity>>(dObjs);
    // reset internal list
    DataObjects.Clear();
    return result;
}

Let Cosmos DB Shine

We now have everything in place: any time a domain object gets created or modified, it adds a corresponding domain event, which will be put in the same container context (via a separate repository) for object tracking and finally saved – both the domain object and the events – in the same transactional batch to Cosmos DB.

This is how the process works so far (e.g. for updating the name on a contact object):

  1. SetName is invoked on the contact object
  2. Event ContactNameUpdated is added to the list of events in the domain object
  3. ContactRepository Update method is invoked which adds the domain object to the container context. Object is now “tracked”.
  4. CommitAsync is invoked on the UnitOfWork object which in turn calls SaveChangesAsync on the container context
  5. Within SaveChangesAsync, all events in the list of the domain object get published by a MediatR instance and are added via the EventsRepository to the same container context
  6. After that, in SaveChangesAsync, a TransactionalBatch is created which will hold both the contact object and the event
  7. The TransactionalBatch is executed and the data is committed to Cosmos DB
  8. SaveChangesAsync and CommitAsync successfully return
  9. End of the update process
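Condensed into a language-agnostic sketch (Python here; an in-memory list stands in for the Cosmos DB container and all names are illustrative), the flow looks like this:

```python
class Contact:
    """Minimal stand-in for the DDD entity: it records its own events."""
    def __init__(self, contact_id):
        self.id = contact_id
        self.name = None
        self.events = []

    def set_name(self, first, last):
        self.name = (first, last)
        # step 2: the domain object adds the event itself
        self.events.append({"action": "ContactNameUpdated", "name": self.name})

class ContainerContext:
    """Minimal stand-in for the custom container context."""
    def __init__(self, store):
        self.store = store    # stands in for the Cosmos DB container
        self.tracked = []

    def add(self, obj):
        # step 3: object becomes "tracked"
        self.tracked.append(obj)

    def save_changes(self):
        # steps 5-7: collect the events of all tracked objects and
        # commit everything in one go (the "transactional batch")
        batch = list(self.tracked)
        for obj in self.tracked:
            batch.extend(getattr(obj, "events", []))
        self.store.extend(batch)
        self.tracked.clear()
        return batch

store = []
ctx = ContainerContext(store)
contact = Contact("c-1")
contact.set_name("Bertram", "Gilfoyle")   # step 1
ctx.add(contact)
ctx.save_changes()                        # steps 4-8
# store now contains both the contact and its event
```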

Let’s have a closer look now at how each type gets persisted to a container. In the code snippets above, you saw that objects saved to Cosmos DB are wrapped in a DataObject instance. Such an object provides common properties like Id, PartitionKey, Type, State (e.g. Created or Updated – not persisted in Cosmos DB), Etag (for optimistic locking [2]), TTL (time-to-live property for the automatic cleanup of old documents [3]) and – of course – the Data itself. All of this is defined in a generic interface called IDataObject and used by the repositories and the container context:

public interface IDataObject<out T> where T : Entity
{
    string Id { get; }
    string PartitionKey { get; }
    string Type { get; }
    T Data { get; }
    string Etag { get; set; }
    int Ttl { get; }
    EntityState State { get; set; }
}

Objects wrapped in a DataObject instance and saved to the database will then look like this (Contact and ContactNameUpdatedEvent):

// Contact document/object - after creation
{
    "id": "b5e2e7aa-4982-4735-9422-c39a7c4af5c2",
    "partitionKey": "b5e2e7aa-4982-4735-9422-c39a7c4af5c2",
    "type": "contact",
    "data": {
        "name": {
            "firstName": "Bertram",
            "lastName": "Gilfoyle"
        },
        "description": "This is a contact",
        "email": "bg@piedpiper.com",
        "company": {
            "companyName": "Pied Piper",
            "street": "Street",
            "houseNumber": "1a",
            "postalCode": "092821",
            "city": "Palo Alto",
            "country": "US"
        },
        "createdAt": "2021-09-22T11:07:37.3022907+02:00",
        "deleted": false,
        "id": "b5e2e7aa-4982-4735-9422-c39a7c4af5c2"
    },
    "ttl": -1,
    "_etag": "\"180014cc-0000-1500-0000-614455330000\"",
    "_ts": 1631868211
}

// After setting a new name - this is what an event document looks like
{
    "id": "d6a5f4b2-84c3-4ac7-ae22-6f4025ba9ca0",
    "partitionKey": "b5e2e7aa-4982-4735-9422-c39a7c4af5c2",
    "type": "domainEvent",
    "data": {
        "name": {
            "firstName": "Dinesh",
            "lastName": "Chugtai"
        },
        "contactId": "b5e2e7aa-4982-4735-9422-c39a7c4af5c2",
        "action": "ContactNameUpdatedEvent",
        "id": "d6a5f4b2-84c3-4ac7-ae22-6f4025ba9ca0",
        "createdAt": "2021-09-17T10:50:01.1692448+02:00"
    },
    "ttl": 120,
    "_etag": "\"18005bce-0000-1500-0000-614456b80000\"",
    "_ts": 1631868600
}

You see that the Contact and ContactNameUpdatedEvent (type: domainEvent) documents have the same partition key – hence both documents will be persisted in the same logical partition. And, if you want to keep a log of changes of a contact object, you can also keep all the events in the container for good. Frankly, this is not what we want to have here in this sample. The purpose of persisting both object types is to make sure that a) events never get lost and b) they eventually get published to “the outside world” (e.g. to an Azure Service Bus topic).

So, to finally read the stream of events and send them to a message broker, let’s use one of the “unsung hero features” [4] of Cosmos DB: Change Feed.

The Change Feed is a persistent log of changes in your container that is operating in the background keeping track of modifications in the order they occurred – per logical partition. So, when you read the Change Feed, it is guaranteed that – for a certain partition key – you read the changes of those documents always in the correct order. This is mandatory for our scenario. Frankly, this is the reason why we put both the contact and corresponding event documents in the same partition: when we read the Change Feed, we can be sure that we never get an “updated” before a “created” event.

So, how do we read the Change Feed then? You have multiple options. The most convenient way is to use an Azure Function with a Cosmos DB trigger. Here, you have everything in place, and if you want to host that part of the application on the Azure Functions service, you are good to go. Another option is to use the Change Feed Processor library [5]. It lets you integrate Change Feed processing in your Web API, e.g. as a background service (via the IHostedService interface). In this sample, we simply create a console application that uses the abstract class BackgroundService for implementing long-running background tasks in .NET Core.

Now, to receive the changes from the Cosmos DB Change Feed, you need to instantiate a ChangeFeedProcessor object, register a handler method for message processing and start listening for changes:

private async Task<ChangeFeedProcessor> StartChangeFeedProcessorAsync()
{
    var changeFeedProcessor = _container
        // processor/instance names and the lease container field are
        // illustrative - parts of the original builder chain were lost
        // in this snippet
        .GetChangeFeedProcessorBuilder<ExpandoObject>("eventsProcessor", HandleChangesAsync)
        .WithInstanceName("consoleApp")
        .WithLeaseContainer(_leaseContainer)
        .WithMaxItems(25) // read documents in batches of 25
        .WithStartTime(new DateTime(2000, 1, 1, 0, 0, 0, DateTimeKind.Utc))
        .Build();

    _logger.LogInformation("Starting Change Feed Processor...");
    await changeFeedProcessor.StartAsync();
    _logger.LogInformation("Change Feed Processor started. " +
        "Waiting for new messages to arrive.");
    return changeFeedProcessor;
}

The handler method is then responsible for processing the messages. In this sample, we keep things simple and publish the events to an Azure Service Bus topic which is partitioned for scalability and has the “deduplication” feature enabled [6] – more on that in a second. From there on, any service interested in changes to Contact objects can subscribe to that topic and receive and process those changes for its own context.

You’ll also see that the Service Bus messages have a SessionId property. By using sessions in Azure Service Bus, we guarantee that the order of the messages is preserved (FIFO) [7] – which is necessary for our use case. Here is the snippet that handles messages from the Change Feed:

private async Task HandleChangesAsync(IReadOnlyCollection<ExpandoObject> changes, CancellationToken cancellationToken)
{
    _logger.LogInformation($"Received {changes.Count} document(s).");
    var eventsCount = 0;

    Dictionary<string, List<ServiceBusMessage>> partitionedMessages = new();

    foreach (var document in changes as dynamic)
    {
        if (!((IDictionary<string, object>)document).ContainsKey("type") ||
            !((IDictionary<string, object>)document).ContainsKey("data")) continue; // unknown doc type

        if (document.type == EVENT_TYPE) // domainEvent
        {
            string json = JsonConvert.SerializeObject(document.data);
            var sbMessage = new ServiceBusMessage(json)
            {
                ContentType = "application/json",
                Subject = document.data.action,
                MessageId = document.id,
                PartitionKey = document.partitionKey,
                SessionId = document.partitionKey
            };

            // Create message batch per partitionKey
            if (partitionedMessages.ContainsKey(document.partitionKey))
            {
                partitionedMessages[sbMessage.PartitionKey].Add(sbMessage);
            }
            else
            {
                partitionedMessages[sbMessage.PartitionKey] = new List<ServiceBusMessage> { sbMessage };
            }

            eventsCount++;
        }
    }

    if (partitionedMessages.Count > 0)
    {
        _logger.LogInformation($"Processing {eventsCount} event(s) in {partitionedMessages.Count} partition(s).");

        // Loop over each partition
        foreach (var partition in partitionedMessages)
        {
            // Create batch for partition
            using var messageBatch =
                await _topicSender.CreateMessageBatchAsync(cancellationToken);
            foreach (var msg in partition.Value)
            {
                if (!messageBatch.TryAddMessage(msg))
                    throw new Exception($"Could not add message {msg.MessageId} to batch.");
            }

            try
            {
                _logger.LogInformation(
                    $"Sending {messageBatch.Count} event(s) to Service Bus. PartitionId: {partition.Key}");

                await _topicSender.SendMessagesAsync(messageBatch, cancellationToken);
            }
            catch (Exception e)
            {
                // error handling was elided in the original snippet
                _logger.LogError(e, "Error sending messages to Service Bus.");
                throw;
            }
        }
    }
    else
    {
        _logger.LogInformation("No event documents in change feed batch. Waiting for new messages to arrive.");
    }
}

What happens when errors occur?

In case of an error while processing the changes, the Change Feed library will restart reading messages at the position where it last successfully processed a batch. That means that if we have already processed 10,000 messages and are in the middle of working on messages 10,001 to 10,025 (we read documents in batches of 25 – see the initialization of the ChangeFeedProcessor object) when an error happens, we can simply restart our process and it will pick up its work at position 10,001. The library keeps track of what has already been processed via a Leases container.

Of course, it can happen that – out of the 25 messages that are then reprocessed – our application already sent 10 to Azure Service Bus. This is where the deduplication feature will save our life. Azure Service Bus will check if a message has already been added to a topic during a specified time window based on the application-controlled MessageId property of a message. That property is set to the Id of the event document, meaning Azure Service Bus will ignore and drop a message, if a certain event has already been successfully added to our topic.
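The effect of that MessageId-based deduplication can be illustrated with a small local sketch (this is just a simulation of the idea – the real check happens server-side in Azure Service Bus, within the configured time window):

```csharp
using System;
using System.Collections.Generic;

// Local illustration of MessageId-based deduplication: a message whose
// MessageId was already seen is silently dropped, just like Service Bus
// drops a re-sent event after a Change Feed replay.
public static class DedupDemo
{
    public static List<string> Publish(
        IEnumerable<(string MessageId, string Body)> messages)
    {
        var seen = new HashSet<string>();
        var accepted = new List<string>();

        foreach (var (id, body) in messages)
        {
            if (seen.Add(id))      // first occurrence of this MessageId
                accepted.Add(body);
            // else: duplicate - ignored
        }
        return accepted;
    }
}
```

Re-sending the same event after a restart is therefore harmless: the second copy carries the same MessageId (the event document's Id) and is dropped.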

In a typical “Transactional Outbox Pattern” implementation, we would now update the processed events and set a Processed property to true, indicating that we successfully published it – and every now and then, the events would then be deleted to keep only the most recent records/documents. Of course, this could be addressed manually in the handler method.

But we are using Cosmos DB! We are already keeping track of events that were processed by using the Change Feed (in combination with the Leases container). And to periodically clean up the events, we can leverage another useful feature of Cosmos DB: Time-To-Live (TTL) on documents [3]. Cosmos DB provides the ability to automatically delete documents based on a TTL property that can be added to a document – a timespan in seconds. The service will constantly check your container for documents with such a TTL property, and as soon as it has expired, Cosmos DB will remove the document from the container (by the way, using the remaining Request Units in a background task – it will never “eat up” RUs that you need to handle user requests).

So, when all the services work as expected, events will be processed and published fast – meaning within seconds. If we have an error in Cosmos DB, we do not even publish any events, because nothing – neither the business object nor the corresponding events – can be saved at all, and users will receive an error.

The only case we need to consider – and the reason to set an appropriate TTL value on our DomainEvent documents – is when the background worker application (Change Feed Processor) or Azure Service Bus isn’t available.

How much time do we grant both components to eventually publish the events? Let’s have a look at the SLA for Azure Service Bus: Microsoft guarantees 99.9% uptime. That means a downtime of ~9h per year or ~44min per month. Of course, we should give our service more time than the duration of a “possible outage” to process changes. Furthermore, we should also consider that our Change Feed Processor worker service can be down as well. To be on the safe side, I’d pick 10 days (TTL expects seconds, so a value of 864,000) for a production environment. All components involved will then have more than a week to process/publish changes within our application. That should be enough even in the case of a “disaster” like an Azure region outage. And after 10 days, the container holding the events will be cleaned up automatically by Cosmos DB.
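Since TTL expects a plain value in seconds, the 10-day figure is easy to derive:

```csharp
// TTL is a plain number of seconds. For the suggested 10 days:
const int SecondsPerDay = 24 * 60 * 60;   // 86,400
const int EventTtl = 10 * SecondsPerDay;  // 864,000 - the value to set on
                                          // the DomainEvent documents
```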

Sidenote: As mentioned before, that’s not necessary. If you want to keep a log of all the changes for a Contact object, you can also set the TTL to -1 on event documents and Cosmos DB won’t purge them at all.


As a result, we now have a service that can handle Contact business objects and every time such a contact gets added or modified, it will raise a corresponding event which will be published within the application right before saving the actual object. These events will be picked up by an event handler in our service and added to the same “database context” which will save both the business object and events in the same transaction to Cosmos DB. Hence, we can guarantee that no event will ever be lost. Furthermore, we leverage the Cosmos DB Change Feed then to publish the tracked events in the background to an Azure Service Bus Topic via a background worker service that makes use of the ChangeFeedProcessor library.

So, in the end this is not a “traditional” implementation of the “Transactional Outbox Pattern”, because we leverage some features of Cosmos DB that makes things easier for a developer.

What are the advantages of this solution then? We have…

  • guaranteed delivery of events
  • guaranteed ordering of events and deduplication via Azure Service Bus
  • no need to maintain an extra Processed property that indicates a successful processing of an event document – the solution makes use of Time-To-Live and Change Feed features in Cosmos DB
  • error-proof processing of messages via the ChangeFeedProcessor (or an Azure Function)
  • optional: you can add multiple Change Feed processors – each one maintaining its own “pointer” enabling additional scenarios
  • if you use a multi-master/multi-region Cosmos DB account, you’ll even get “five nines” (99.999%) availability – which is outstanding!


This sample demonstrated how to implement the Transactional Outbox Pattern with Azure Cosmos DB. We made use of some of the hero features of Cosmos DB to simplify the implementation of the pattern.

You can find the source code for the sample application including the change feed processor on my GitHub account. There, you’ll also find a bicep deployment script to create all the necessary Azure resources for you automatically.

Keep in mind that this is not a “production ready” implementation, but it hopefully can act as an inspiration for your next adventure.

Happy hacking! 😊

Implement the Transactional Outbox Pattern with @AzureCosmosDB and @Azure #ServiceBus #distributed #pattern #microservices


Additional Links

Source Code

The sample source code for this article can be found on GitHub: https://github.com/cdennig/transactional-ob-pattern.


Jürgen Gutsch: Do you know the GitHub Advisory Database?

For a while now, I've been trying to get into the topic of application security. Application security is a really huge area that covers a lot of topics. It includes user authentication as well as CORS, various kinds of injections, and many other mistakes you can make during development. One of the big topics is possible vulnerabilities in dependencies.

Almost every developer is using third-party libraries and components directly, via NuGet, NPM, pip, or whatever package manager. I did and still do as well.

When adding a dependency to your application, can you ensure that this dependency doesn't contain any vulnerabilities – or at least reduce the risk of adding vulnerabilities to your application? Any code can contain an error that results in a critical issue, and third-party dependencies are no exception. Adding a dependency to your application can also mean adding a security problem to your application.

NPM already has a dependency audit tool integrated. npm audit checks the installed packages (via package-lock.json) against a vulnerabilities database and tells you about flaws and which versions should be safe. In Python, you can globally install a package like safety to run vulnerability checks of the requirements.txt against an open-source database.

In the .NET CLI you can use dotnet list package --vulnerable to check package dependencies of your project. The .NET CLI is using the GitHub Advisory Database to check for vulnerabilities: https://github.com/advisories
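For example (the `--include-transitive` flag additionally scans packages that your direct dependencies pull in):

```shell
# Check the project's direct package references for known vulnerabilities
dotnet list package --vulnerable

# Also include transitive dependencies in the check
dotnet list package --vulnerable --include-transitive
```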

GitHub Advisory Database

The GitHub Advisory Database contains "the latest security vulnerabilities from the world of open-source software" as GitHub writes here https://github.com/advisories.

More about the GitHub Advisory Database: https://docs.github.com/en/code-security/security-advisories/about-github-security-advisories

Actually, I see a problem with the GitHub Advisory Database in the .NET universe. There are more than 270 thousand unique packages registered on NuGet.org, and more than 3.5 million package versions, but only around 140 reviewed advisories in the Advisory Database.

It is great to have the possibility to check packages for vulnerabilities, but it doesn't make much sense to check against a database that only contains around 140 entries for that number of packages.

There might be some reasons for that:

1st) .NET packages authors don't know about the Advisory Database

I'm sure many NuGet package authors don't know about the Advisory Database. Like me. I learned about the Advisory Database just a couple of weeks ago.

2nd) It is not common in the .NET universe to report vulnerabilities

Compared to the other stacks, the .NET universe is pretty new to the open-source world. Sure, there are some pretty cool projects that are almost 20 years old, but those projects are the exceptions.

3rd) There are less vulnerabilities in .NET packages

Since .NET packages are based on a good and almost complete framework, there might be fewer vulnerabilities than on other stacks. From my perspective, this might be possible, but it is pretty much dependent on the kind of package. The more a package is related to a frontend, the more vulnerabilities can occur in such a library.

Why should I use such an advisory database?

There are important reasons why you should use the advisory database or even report to an advisory database:

Don't completely trust your own code

You as a package author are not really safe with your own code. Vulnerabilities can occur in any code. Even good developers focus on business logic and don't always think about side aspects like application security. It can always happen that you create a critical bug that results in a more or less critical vulnerability.

Every time you fix such a case in your code, you should report it to the GitHub Advisory Database to tell your users about possible security issues in older versions of your package. This way you protect your package users and show them that you feel responsible for them. It doesn't tell your users that you are a bad developer – the opposite is the case.

It is not your fault if you accidentally release a package with a vulnerability, but it would be your fault if you didn't do anything about it.

Don't completely trust the NuGet packages you use

Execute a vulnerability check on the packages you use whenever it is possible. Update the packages you use to the latest version whenever it is possible.

Even Microsoft packages contain vulnerabilities as you can see here: https://github.com/advisories/GHSA-q7cg-43mg-qp69

GitHub does such checks on the code level for you using Dependabot. In case you don't host your code on GitHub, you should use different tools, like the already mentioned CLI tools, or commercial tools like SonarSource, Snyk, or similar.

You can execute such checks on the build server immediately before you actually build your code. You could continue building in case the vulnerabilities are of a low or moderate level, and stop building the code in case there are high or critical vulnerabilities.
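Such a gate could be sketched like this (an illustration only – it greps the plain-text report of `dotnet list package --vulnerable` for the severity levels, which is a simple but workable heuristic):

```shell
#!/bin/sh
# CI sketch: fail the build only for high/critical vulnerabilities.
report=$(dotnet list package --vulnerable --include-transitive)
echo "$report"

if echo "$report" | grep -qiE 'high|critical'; then
    echo "High or critical vulnerabilities found - stopping the build."
    exit 1
fi
echo "No high/critical vulnerabilities found - continuing the build."
```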

Keep your users safe

  • Check your code
  • Check your dependencies
  • Accept reported vulnerabilities of your package and fix them
  • Report vulnerabilities that occurred in your package after you fixed it.
  • Don't report vulnerabilities for your packages that actually occur in used packages.
    • This should be done by the other package authors
  • Report vulnerabilities of used packages in their repositories or to their maintainers
    • To give them a chance to fix it

How to report a vulnerability

If you own a repository on GitHub, you can easily draft and propose a new security advisory to the GitHub database. In your repository on GitHub, there is a "Security" tab. If you click on that tab, you'll find the "Security advisories" page in the left-hand menu. Here you see your already drafted advisories as well as a button to create a new one:


If you don't own that repository, you will see the same page but without the button to draft an advisory.

If you click that button, you'll see a nice form to draft the advisory.



Once it is drafted, you can request a CVE identifier or just publish it. The GitHub team will then review it and add it to the advisory database:



That's it.

If you find a critical problem in a repository that you don't own, you should create an issue on that repository and describe what you found. The repository owner should then fix the problem and add an advisory to the database.


You definitely should take care of your dependencies and check them for vulnerabilities. And you definitely should have a look at the GitHub Advisory Database and report your advisories there.

This would help to keep the applications secure.

Thorsten Hans: The 5 Best Free VPNs for Chromebook – The Most Reliable Options

What are the best free VPNs for a Chromebook? Feel protected from online threats by installing a free VPN for your personal, work or school Chromebook. What are the advantages of the VPN service and is it hard to install and use it? We will help you select the very best free VPNs:

  1. ExpressVPN
  2. TunnelBear
  3. Windscribe
  4. Hotspot Shield
  5. hide.me

But why use a VPN? Well, surfing the web hides many dangers. A VPN protects your sensitive data – security information, passwords – and also hides your IP address so you can bypass annoying geo-restrictions. You will enjoy a better, more comfortable browsing experience while being protected.

As a Chromebook user, it is best to pay for a trustworthy VPN service. Some of the VPNs may be risky and not 100% reliable so we recommend choosing a paid one. However, our small list contains VPNs that can be used safely on a Chromebook, so let’s check them out.

1. ExpressVPN

With around 3,000 servers in more than 90 countries, pleasing connection speeds, AES 256-bit encryption, a kill switch, and DNS and IPv6 leak protection, ExpressVPN is one of the very best free options on the market.

This VPN has some of the most solid security protocols so it will keep you completely safe online. It will also make sure you’ll enjoy flawless torrenting. The app is well-designed and really easy to use. Take advantage of the trial period. Users can trial ExpressVPN for free without any risks. There is a 30-day money-back guarantee.

2. TunnelBear

The TunnelBear free service offers its users 500MB data per month, annual audits, 26 server locations, kill-switch and AES 256-bit encryption. And if you tweet the service you can actually get another 1GB of additional data. The interface of this VPN is fun and cartoonish so it can get you in a good mood.

The GhostBear mode makes the browsing activity invisible and bypasses even the most solid censorship and geo-restrictions. Understandably, since we’re talking about a free version TunnelBear has some flaws. Only the local servers are very fast and there is no money-back guarantee so people cannot test TunnelBear’s full version. But still, the free version is reliable and will keep your identity safe with its strong security features.

3. Windscribe

Would you like to have 10GB of data monthly, reliable privacy and security features, access to around 180 servers and dependable customer service (auto chatbot) for free? Windscribe promises all of that. The interface is great, there is a minimal amount of ads and let’s not forget the option for unlimited simultaneous connections.

The security and safety features are truly solid: 256-bit encryption, DNS/IPv6/WebRTC leak protection, a trusty kill switch. The flaws of the Windscribe free version: it doesn't unblock streaming platforms like Hulu or Netflix, and the speeds are slow, though they are quite suitable for torrenting.

4. Hotspot Shield

The Hotspot Shield basic plan allows people to use the VPN free of charge. And here’s what they get: 500MB data every day (does not accumulate, it resets every 24 hours), internet kill switch, data compression during streaming, a strict no-logs policy, 256-bit AES encryption and overall ironclad security to protect them from data theft.

The customer support is top-notch since users can contact the team via live chat or email and receive a quick response. Hotspot Shield is only suitable for casual streaming, because the speeds are too low for more demanding streaming – especially if you live far away from the United States.

5. hide.me

Hide.me is our last free VPN service suggestion. It offers 2GB of data monthly and access to servers in more than 75 locations. The amount of free data is enough for the users to browse and stream.

Hide.me is also ad-free, has reliable customer service round the clock, and offers AES 256-bit encryption, IP/DNS leak protection and a kill switch. This VPN service really emphasizes security, and their users' online privacy is their number one priority. The speeds of this VPN are generally good and you can enjoy HD streaming even on distant servers. There's a 30-day money-back guarantee.

Installing a VPN for Chromebook

You can connect your Chromebook to a private network using a VPN. In order to do that, you can manually configure the client in Chrome, use a Chrome browser extension, or use an Android app (simply download the VPN app, log in and switch it on).

Chromebook laptop

We suggest the latter. An Android VPN app is considered the best option for Chromebook protection. If you need help setting up a VPN while using a Chromebook at work or school it’s best to contact your administrator.

The post The 5 Best Free VPNs for Chromebook – The Most Reliable Options appeared first on Xplatform.

Thorsten Hans: The 5 Best VPNs for PS4 – Enjoy High-Speed Gaming and a High-Quality Experience

A VPN won’t allow you to use PS4 for free but it will improve your gaming experience, protect your personal data and more, much more! Learn more about the benefits of installing a VPN for your PlayStation 4. What is the pricing and which VPNs can be used for free? These top 5 VPNs for PS4 won’t disappoint:

  1. ExpressVPN
  2. CyberGhost
  3. TunnelBear
  4. Speedify
  5. Surfshark

How to install the chosen VPN on a compatible router, your PC or your laptop? It’s really not that hard, just keep reading and we’ll help!

Why Use a VPN for PlayStation 4?

Using a VPN for your PlayStation 4 has a couple of important advantages:

  • Reducing lags during online playing
  • Protection from potential DDOS attacks
  • The opportunity to enjoy new games before their release in your region
  • Access to geo-blocked content (like Netflix US content) using the console’s streaming apps
  • Pairing up with gamers from all around the world
  • Protection of your personal data
  • Potential boosting of gaming speeds

How To Use a VPN for PlayStation 4?

Figuring out which VPN service you’d like to use is the first and biggest step (see our top 5 VPNs for PS4 below). And then there’s a little more to it. The thing is that a VPN can’t be directly installed on a PS4. No worries, the installation is quite simple and the process is not that lengthy.

There are 2 main ways to set up a VPN for your PS4:

1. Using a Compatible Router

PS4 console in the air

The VPN will protect all of the devices on your Wi-Fi network. Just make sure that your router supports VPN connections (most modern ones do). You will have to manually fill in your account details in the router settings and then follow the steps further. They may vary slightly, depending on the VPN you’ve chosen and the router brand.

2. Using a PC or a Laptop

The installation of a VPN can be done on any Windows computer that has a Wi-Fi hotspot and an Ethernet port. You will need to plug one end of the Ethernet cable into the PC/laptop and the other end into your PlayStation 4.

Install the VPN by configuring the Ethernet connection:

Control Panel > Network and Internet > Network and Sharing Center > Change Adapter Settings > right-click on your VPN connection > Properties > open Sharing tab > select “Allow other network users to connect through this computer’s internet connection” > select Home Networking Connection > select your internet connection from the dropdown menu

Steps on the PS4:

Settings > Network Settings > set up Internet Connection > use a LAN Cable > select Easy connection > select “Do Not Use a Proxy Server” if prompted.

Now let’s check out the top 5 VPNs for PS4!

1. ExpressVPN

ExpressVPN provides optimized server speeds and a fast security protocol (Lightway), allows its users to connect up to 5 devices at the same time and supports a wide range of devices. Setting up this VPN is child's play – sign up, download and connect!

You will get unlimited data for gaming and you will be completely safe while playing on shared servers (no DDoS attacks that can ruin your game). The server speeds won’t disappoint, they are perfect for gaming since they are extremely fast.

You usually need at least 15 Mbps for online gaming, and the average download speed of this VPN is 45 Mbps. There's also a 30-day money-back guarantee.

Subscription plan   Price
One month           $12.95
6 months            $9.99 per month
12 months           $8.32 per month

2. CyberGhost

CyberGhost is a great choice for beginner VPN users. It guarantees them security, zero logs, fast speeds, fast and secure connection, many server locations, reliable customer support and the opportunity to give instant feedback.

Sign up and then sign in to your CyberGhost account, go to My DNS Settings, activate the setting on your IP, enter the DNS in your router settings and finally connect the PlayStation to the router.

It’s that easy! You can also set up the VPN with a Wi-Fi router or your computer, like the rest of the VPNs on this list. There’s a free trial and a 45-day money-back guarantee.

Subscription plan   Price
1 month             $14.18
1 year              $4.43 per month
2 years             $3.77 per month
3 years             $2.36 per month

3. TunnelBear

TunnelBear’s free plan offers really good speeds, up to 1.5 GB data per month (you get 500MB at first, but if you tweet the company they will add 1GB extra), a no-logs policy, military-grade encryption, stable leak protection and the opportunity to simultaneously connect up to 5 devices.

Not bad for a free VPN, right? You will also get to choose from 1,600 servers in 25 different locations which means that you’ll be able to connect to more international game servers. However, playing on servers that are very far away from your actual location will cause the speeds to drop. Nonetheless, TunnelBear remains one of the best free VPN options for PS4.

4. Speedify

We present to you another cool free option with great speeds, military-grade encryption, leak protection and a strict no-logs policy. Speedify uses channel bonding to increase connection speeds and security. Users can combine all of the available internet connections and enjoy stable gameplay.

They can play even on long-distance servers without noticing any serious lags, which is very impressive for a free VPN and really convenient for true gamers. The Speedify free plan will provide you with 2GB of data per month. That equals 9-10 hours of gameplay. Trial it completely risk-free for 30 days!

5. Surfshark

This VPN has strong privacy and security credentials, it’s truly reliable. Surfshark offers more than 3000 servers in 65 countries, fast gaming speeds, unlimited simultaneous connections, DDoS protection, a no-logs policy and the usual 30-day money-back guarantee. Download Surfshark, install it directly onto a router, connect your PS4 to the router and let the gaming begin! This VPN may be relatively new on the market but it sure is worth it with its valuable features.

Subscription plan   Price
1 month             $12.95
6 months            $6.49 per month
24 months           $2.49 per month

The post The 5 Best VPNs for PS4 – Enjoy High-Speed Gaming and a High-Quality Experience appeared first on Xplatform.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Async streaming

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into async streaming.

Async streaming basically means the usage of IAsyncEnumerable<T>.


Async streaming is now supported from the controller action down to the response formatter, as well as on the hosting level. This topic is basically about IAsyncEnumerable<T>: async enumerables are handled asynchronously all the way down to the response stream. They no longer get buffered, which improves performance and reduces memory usage a lot. Huge lists of data now get smoothly streamed to the client.

In the past, we handled large data by sending it to the output stream in small chunks, because of the buffering. This way, we needed to find the right balance for the chunk size: smaller chunks increase the CPU load, and bigger chunks increase the memory consumption.

This is no longer needed. IAsyncEnumerable<T> does this for you, with much better performance.
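
To see what that means in isolation, here is a minimal, self-contained sketch in plain C# (independent of ASP.NET Core, invented for illustration): the consumer receives each item as soon as it is yielded, instead of waiting for a complete list to be built in memory:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class AsyncStreamDemo
{
    // Yields items one at a time instead of building the whole list first.
    static async IAsyncEnumerable<int> GenerateAsync(int count)
    {
        for (var i = 1; i <= count; i++)
        {
            await Task.Yield(); // simulate async work per item
            yield return i;     // handed to the consumer immediately
        }
    }

    public static async Task Main()
    {
        var sum = 0;
        await foreach (var item in GenerateAsync(5))
        {
            sum += item; // consumed as produced, nothing is buffered
        }
        Console.WriteLine(sum);
    }
}
```

The same producer/consumer handshake is what ASP.NET Core now does between the action result and the response stream.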

Even EF Core supports the IAsyncEnumerable<T> to query the data. Because of that, working with EF Core is improved as well. Data you fetch from the database using EF Core can now be directly streamed to the output.

This is more or less what Microsoft wrote about async streaming, but I really like to try it out myself. 😃

Trying to stream large data

I'd like to try streaming a lot of data from the database to the client. So I create a new web API project using the .NET CLI:

dotnet new webapi -n AsyncStreams -o AsyncStreams
cd AsyncStreams\

code .

Microsoft changed most of the .NET CLI project templates to use the minimal API approach.

This creates a web API project and opens it in Visual Studio Code. We need to add some EF Core packages to work with SQLite and to create EF migrations:

dotnet add package Microsoft.EntityFrameworkCore.Sqlite --version 6.0.0-preview.7.21378.4
dotnet add package Microsoft.EntityFrameworkCore.Design --version 6.0.0-preview.7.21378.4

To generate that load of data, I also need to add my favorite package GenFu:

dotnet add package GenFu

This package is pretty useful to create test and mock data.

If you have never installed the EF global tool, you should do so using the following command. The version should be the same as for the Microsoft.EntityFrameworkCore.Design package. I'm currently using preview 7:

dotnet tool install --global dotnet-ef --version 6.0.0-preview.7.21378.4

Now let's write some code.

At first, I add an AppDbContext and an AppDbContextFactory to the project:

using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

namespace AsyncStreams
{
    public class AppDbContext : DbContext
    {
        public DbSet<WeatherForecast> WeatherForecasts => Set<WeatherForecast>();

        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options)
        { }
    }

    public class AppDbContextFactory : IDesignTimeDbContextFactory<AppDbContext>
    {
        public AppDbContext CreateDbContext(string[] args)
        {
            var options = new DbContextOptionsBuilder<AppDbContext>();
            options.UseSqlite("Data Source=app.db");

            return new AppDbContext(options.Options);
        }
    }
}

The factory will be used by the EF tool to work with the migrations.

Next, I need to register the DbContext with the dependency injection container. In the Program.cs, I add the following snippet right after the registration of Swagger:

builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlite("Data Source=app.db"));

Next, I'd like to seed a bigger amount of data. To do that I'm using GenFu in a method called SeedDatabase that I placed in the Program.cs to generate 100000 records of WeatherForecast:

// ...more usings
using GenFu;

// ...


SeedDatabase(); // Call the seeding


void SeedDatabase()
{
    using var context = app.Services.CreateScope().ServiceProvider.GetService<AppDbContext>();
    if (context != null && !context.WeatherForecasts.Any())
    {
        var i = 1;
        A.Configure<WeatherForecast>()
            .Fill(c => c.Id, () => { return i++; });

        var weatherForecasts = A.ListOf<WeatherForecast>(100000);

        context.WeatherForecasts.AddRange(weatherForecasts);
        context.SaveChanges();
    }
}

I need to create a scope to get the AppDbContext out of the ServiceProvider. Then we check whether the database already contains any data. We also need to configure GenFu not to create random IDs; otherwise, we would run into problems when we save the data to the database. Then, in case there is no data yet, the list of 100000 WeatherForecast records gets created and stored in the database.

I would have preferred to use the HasData method in the OnModelCreating method of the AppDbContext to seed the data. But seeding large amounts of data this way doesn't really work, since EF Migrations creates an insert statement per record in the migration file. This means the migration file grows enormously, and applying the migration ran for hours on my machine before I stopped it. The .NET host consumed almost all the RAM and the CPU load was at 50%. I tried to seed one million records and then 100000 records, with no success, and lost three hours this way.

This is why I do the seeding manually before the application starts, as proposed in this documentation: https://docs.microsoft.com/en-us/ef/core/modeling/data-seeding

I also tried to load one million records with the client, but I got an "Error: Maximum response size reached" message in Postman, so I stayed with 100000. Actually, that raises the question of where the streaming aspect is... Maybe this is a Postman limitation 🤔

One more thing to do is to change the WeatherForecastController to use the AppDbContext and to return the weather forecasts:

using Microsoft.AspNetCore.Mvc;

namespace AsyncStreams.Controllers;

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private readonly ILogger<WeatherForecastController> _logger;
    private readonly AppDbContext _context;

    public WeatherForecastController(
        ILogger<WeatherForecastController> logger,
        AppDbContext context)
    {
        _logger = logger;
        _context = context;
    }

    [HttpGet]
    public IActionResult Get()
    {
        return Ok(_context.WeatherForecasts);
    }
}

At last, I need to create the EF migration and to update the database using the global tool:

dotnet ef migrations add InitialCreate
dotnet ef database update

Since I don't seed the data with the migrations, it will be fast.

That's it. I start the application using dotnet run and call the endpoint in Postman:

GET https://localhost:5001/WeatherForecast/

It is fascinating. The CPU load of the AsyncStreams application is quite low, but the memory consumption is pretty much the same compared to an action method that buffers the data:

public async Task<IActionResult> Get()
{
    return Ok(await _context.WeatherForecasts.ToListAsync());
}

I guess I need to do some more tests to get a better comparison of the memory consumption.
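
For comparison, an explicitly streaming variant of the action could look like the following sketch. This is not code from the original project; it assumes the controller above, and AsAsyncEnumerable is the EF Core method that exposes the query as an IAsyncEnumerable<T>:

```csharp
[HttpGet("streamed")]
public IAsyncEnumerable<WeatherForecast> GetStreamed()
{
    // ASP.NET Core writes each record to the response as EF Core
    // materializes it, instead of buffering the whole result set first.
    return _context.WeatherForecasts.AsAsyncEnumerable();
}
```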

What's next?

In the next part, I'm going to have a look at the HTTP logging middleware in ASP.NET Core 6.0.

Holger Schwichtenberg: On Our Own Behalf: Talks by the Dotnet-Doktor

The Dotnet-Doktor will be giving numerous talks and workshops on .NET 6 and Blazor through the end of the year.

Holger Schwichtenberg: Minimal Console Application in .NET 6

As of .NET 6 Preview 7, Microsoft has radically slimmed down the project templates. The console project now consists of just a single line of code.
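
For reference, the entire Program.cs of the new console template is a single top-level statement (the template also enables implicit usings, so no using directives are needed):

```csharp
// See https://aka.ms/new-console-template for more information
Console.WriteLine("Hello, World!");
```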

Jürgen Gutsch: ASP.NET Core in .NET 6 - Introducing minimal APIs

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into minimal APIs.

With the preview 4, Microsoft simplified the simplest project template to an absolute minimum. Microsoft created this template to make it easier for new developers to start creating small microservices and HTTP APIs.

When I saw the minimal APIs for the first time some months ago, it reminded me of this:

var express = require("express");
var app = express();

app.listen(3000, () => {
  console.log("Server running on port 3000");
});

app.get("/url", (req, res, next) => {
  res.send("Hello World!");
});

Yes, that is NodeJS using ExpressJS to boot up an HTTP server that provides a minimal API. Actually, the ASP.NET Core minimal APIs look as easy as NodeJS and ExpressJS. You don't believe me? Just have a look.

Minimal APIs

To create a minimal API project, you can simply write it on your own or just use the dotnet CLI as usual:

dotnet new web -n MiniApi -o MiniApi

This command creates a project file, app settings files, and a Program.cs that looks like this:

using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Hosting;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

app.MapGet("/", () => "Hello World!");

app.Run();

Microsoft changed the empty web project in the dotnet CLI to use minimal APIs. You will no longer find a Startup.cs. It is all in the Program.cs. This should help new developers get into ASP.NET Core more easily.

If you already know ASP.NET Core, you will recognize some of the things used here. The WebApplicationBuilder is created with the default settings to set up the hosting environment, the same way as the WebHostBuilder. After Build() is called, you can use the WebApplication object to map endpoints and to add middlewares like the DeveloperExceptionPage.

app.Run() starts the application to serve the endpoints.

You can start the project like any other ASP.NET Core project by running dotnet run or by clicking F5 in your IDE.

Actually, it all works like any other ASP.NET Core project, but most of the plumbing is encapsulated and preconfigured in the WebApplicationBuilder and can be accessed via properties. If you'd like to register some additional services, you need to access the Services property of the WebApplicationBuilder:

builder.Services.AddScoped<IMyService, MyService>();
builder.Services.AddTransient<IMyService, MyService>();
builder.Services.AddSingleton<IMyService, MyService>();


Here you can also add the known services like authentication, authorization, and even MVC Controllers with views.
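
As a hedged sketch (IMyService and MyService are the placeholder names from the snippet above, not types from the template), such a registered service can then be consumed directly in a route handler, because ASP.NET Core resolves handler parameters from the container:

```csharp
builder.Services.AddScoped<IMyService, MyService>();

var app = builder.Build();

// The IMyService parameter is resolved from the DI container per request.
app.MapGet("/greet", (IMyService service) => service.Greet());

app.Run();
```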

To configure the Configuration, Logging, Host, etc. you also need to access the relevant properties.

On the WebApplication instance, it works the same way as configuring your application inside the Configure method of a Startup class. On the app variable, you can register all the middlewares and routes you like. In the sample above, it is a simple GET response on the default route. You could also register MVC, authentication, authorization, HSTS, etc. as you can do in a common ASP.NET Core project.

The only difference is that it is all in one file.

Even though it doesn't make sense to configure an MVC application using minimal APIs, the following example demonstrates that minimal APIs are just regular ASP.NET Core under the hood:

using System;
using System.Net;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Identity;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllersWithViews();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
}

app.UseStaticFiles();
app.UseRouting();

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");
});

app.Run();

I really like this approach for simple APIs in a microservice context.

What do you think? Just drop me a comment.

What's next?

In the next part, I'm going to look into the support for async streaming in ASP.NET Core.

Golo Roden: Tabs or Spaces?

Tabs or spaces? This question has divided developers for years, if not decades. Yet the question of whether to indent with tabs or with spaces is actually quite easy to answer.

Thorsten Hans: Electron Apps: What Are They and How to Create Them?

You may have wondered how to program a cross-platform application yourself. There are various options that are more or less complicated. One way to create a cross-platform app is to use the Electron framework. But now the question arises:

What Is Electron?

In general, it can be said that it is a framework that is developed under an open-source license. You can use it to create cross-platform desktop applications or apps. That sounds complicated at first glance, but it isn’t. You use conventional web technologies such as HTML, CSS, and JavaScript for programming.

So you need to know a few programming basics. Electron relies on NodeJS as the backend and this must be installed on your development computer.

In the end, you will create a Chromium-based web application that you can then use on many different desktops. That’s right. The apps developed with Electron can be used equally on Windows, macOS, and Linux.

This has advantages for developers. They don’t have to learn any new programming languages. Another positive thing is that Electron supports automatic updates. If there is a new version of your app, you can roll it out automatically.

Furthermore, your cross-platform apps can also use the operating systems’ system bars, notification components, and so on. So you can see that Electron has many advantages if you choose the path of least resistance.

Electron Also Has Disadvantages

However, at this point, one must clearly state the disadvantages of the framework. Such convenient solutions offer compromises that you have to live with. Or you have to know the disadvantages and you can then decide whether Electron is an option for your app or not. The use of Electron is certainly less suitable for some applications and apps.

One disadvantage of Electron apps is that the files are relatively large, so we’re talking about large downloads at this point. This is not a problem with modern internet speeds, but you should keep that in mind. Hard drive space is also available on modern computers. So, you simply have to decide for yourself where the pain threshold is reached.

Furthermore, the performance is usually worse than with native desktop apps. Here, too, it depends on the pain threshold. What do you want the users to do and is your app fast enough? Nowadays, users are quickly frustrated when they have to wait several seconds for a reaction from the app. You know your app best yourself and if it’s fast enough – why not Electron?

Current computers are also equipped with a relatively large amount of RAM. You can never have enough RAM, they say in nerd circles, and that’s true. What the computer can do in RAM is much faster, even if SSDs are much faster than conventional hard drives in terms of swap files. Electron apps, however, need much more RAM compared to native desktop apps.  Again, it depends on how much you want a computer to handle. If it’s within reason, it’s fine. But if the app needs way too much RAM, there might be a better solution.

Make A Decision

Before you decide whether to use Electron or not, take a look at some popular apps (<https://www.electronjs.org/apps>) that have been developed with the framework. Download a few programs and see for yourself how many resources they consume.

You may also want to check out a few apps that are similar to your project. Then you can decide whether a pain threshold is exceeded or not.

I think it’s important to gain an understanding of the apps and their resource consumption. Then you can make a good decision. Knowledge is power – that is true in this case as well.

Popular Electron Apps

Let’s just take a look at some popular Electron apps. You may be surprised to see which big names are using the framework.


WordPress offers a desktop app for Windows, Linux, and macOS. This speeds up access to the website enormously because a copy is saved locally. The app is based on Electron.

There is one catch, however. The app works fine with websites hosted directly on WordPress.com. It is possible to use a self-hosted website, but you need the JetPack plug-in for this. It’s not a nice solution, since it’s unnecessarily complicated.


Tip: WordPress offers a desktop app implemented with Electron.

The point is that the WordPress Electron app is fast and resource consumption is kept within reasonable limits.

Slack

The popular messaging app for teams, Slack, is also available as a cross-platform desktop program. The development team has made use of the open-source framework presented here.

You can even connect to Google Drive with the app. This makes teamwork even more convenient and productive. The home office is no longer a niche and that’s why it’s important for teams to be able to use platforms together and easily.

Rambox

The program is a free open-source messaging and email app. It combines common web applications into one interface. Rambox supports over 600 apps including Gmail, Office 365, Skype, Slack, and Discord.

Of course, there is Rambox for Windows, macOS, and Linux. The software is available as a free Community Edition (CE) that supports over 99 apps. The paid pro versions support many more apps. There is also an ad blocker in the Pro versions and you can set working hours. Outside of these times, the app doesn’t bother you with notifications.

Wayward – A Game

Even games are implemented with this solution. And Wayward is a beautiful example. It’s a ‘Roguelike’ game where you have to survive in the wilderness. The game is turn-based. There are over 500 items that you can craft and interact with. Furthermore, there are over 45 animals and creatures in Wayward that you have to fight, among other things. The game just looks so adorable with its pixel graphics.

As you can see in the video, the game runs smoothly. We don’t know which machine the developer has, but I can run it on mine as well. The requirements are manageable. You need at least 2 GB of RAM and a graphics card with WebGL support and 512 MB of video memory. 1 GB of mass storage is required. This shouldn’t be a problem for current computers.

Electron Is Worth A Thought

Yes, desktop apps created with this framework require more resources than native programs. Thanks to modern computers, however, the bar is set very high. By looking at the popular programs, you can already see what you can do with Electron.

Like everywhere else, you need to know your target audience and what you can expect from them. Then it will be easy for you to decide whether to use the framework or not.

The post Electron Apps: What Are They and How to Create Them? appeared first on Xplatform.

Stefan Henneken: IEC 61131-3: SOLID – Five principles for better software

In addition to the syntax of a programming language and the understanding of the most important libraries and frameworks, other methodologies – such as design patterns – belong to the fundamentals of software development. Aside from a design pattern, design principles are also a helpful tool in the development of software. SOLID is an acronym for five such design principles, which help developers to design software more understandable, more flexible and more maintainable.

In larger software projects, a great number of function blocks exist that are connected to each other via inheritance and references. These units interact through the calls of the function blocks and their methods. If designed wrongly, this interaction of code units can make extending the software or finding errors unnecessarily complicated. In order to develop sustainable software, the function blocks should be modeled in such a way that they are easy to extend.

Many design patterns apply the SOLID principles to suggest an architectural approach for the respective task. The SOLID principles are not to be understood as rules, but rather as advice. They are a subset of many principles that the American software engineer and lecturer Robert C. Martin (also known as Uncle Bob) presented in his book (Amazon advertising link *) Clean Architecture: A Craftsman’s Guide to Software Structure and Design. The SOLID principles include:

  • Single Responsibility Principle
  • Open Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

The principles shown here are hints that make it easier for a developer to improve code quality. The effort pays for itself after a short time, because changes will be easier, tests and debugging will be faster. Thus, the knowledge of these five design principles should be part of every software developer’s basic knowledge.

Single Responsibility Principle

A function block should have only one responsibility. If the functionality of a program is changed, this should affect only a few function blocks. Many small function blocks are better than a few large ones. The code appears more extensive at first sight, but it is easier to organize. A program with many smaller function blocks, each for a specific task, is easier to maintain than a few large function blocks claiming to cover everything.

Open Closed Principle

According to the Open closed principle, function blocks should be open for extensions but closed for changes. The implementation of extensions should only be achieved by adding code, not by changing existing code. A good example of this principle is inheritance. A new function block inherits from an existing function block. New functions can thus be added without having to change the existing function block. It is not even necessary to have the program code.
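
As a quick illustration in C# (the upcoming posts will use IEC 61131-3; Motor and MotorWithRamp are invented names for this sketch), new behavior is added by deriving a new block instead of editing the existing one:

```csharp
using System;

// Existing block: closed for modification.
public class Motor
{
    public virtual string Start() => "motor started";
}

// Extension by inheritance: behavior is added without touching Motor.
public class MotorWithRamp : Motor
{
    public override string Start() => "ramp-up, then " + base.Start();
}

public static class Program
{
    public static void Main()
    {
        Motor motor = new MotorWithRamp();
        Console.WriteLine(motor.Start());
    }
}
```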

Liskov Substitution Principle

Liskov substitution principle requires that derived function blocks must always be usable in place of their basic function blocks. Derived function blocks must behave like their basic function blocks. A derived function block may extend the base function block, but not restrict it.

Interface Segregation Principle

Many customized interfaces are better than one universal interface. Accordingly, an interface may only contain those functions that really belong together closely. Comprehensive interfaces create links between otherwise independent program parts. Thus, the Interface segregation principle has a similar goal as the Single responsibility principle. However, there are different approaches to the implementation of these two principles.

Dependency Inversion Principle

Function blocks are often linearly interdependent in one direction. A function block for logging messages calls methods of another function block to write data to a database. There is a fixed dependency between the function block for logging and the function block for accessing the database. The Dependency inversion principle resolves this fixed dependency by defining a common interface. This is implemented by the block for the database access.
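
Sketched in C# (the post's own examples will be in IEC 61131-3; ILogSink, MemorySink, and Logger are invented names), the logging block depends only on an interface, and the database access block would implement that same interface:

```csharp
using System;
using System.Collections.Generic;

// The abstraction both sides depend on.
public interface ILogSink
{
    void Write(string message);
}

// Stand-in implementation; a database writer would implement the same interface.
public class MemorySink : ILogSink
{
    public List<string> Messages { get; } = new List<string>();
    public void Write(string message) => Messages.Add(message);
}

// The logging block no longer knows anything about the database block.
public class Logger
{
    private readonly ILogSink _sink;
    public Logger(ILogSink sink) => _sink = sink;
    public void Log(string message) => _sink.Write(message);
}

public static class Program
{
    public static void Main()
    {
        var sink = new MemorySink();
        new Logger(sink).Log("value out of range");
        Console.WriteLine(sink.Messages.Count);
    }
}
```

Swapping the sink for a database implementation requires no change to Logger, which is exactly the inversion the principle describes.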


In the following posts, I will introduce the individual SOLID principles in more detail and try to explain them using an example. The sample program from my post IEC 61131-3: The Command Pattern serves as a basis. With each SOLID principle, I will try to optimize the program further.

I will start shortly with the Dependency inversion principle.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Shadow-copying in IIS

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to explore the Shadow-copying in IIS.

Since .NET locks the assemblies that are used by a running process, it is impossible to replace them during an update. This matters especially in scenarios where you self-host an IIS server or where you need to update a running application via FTP.

To solve this, Microsoft added a new feature to the ASP.NET Core module for IIS to shadow copy the application assemblies to a specific folder.

Exploring Shadow-copying in IIS

To enable shadow-copying, you need to install the latest preview version of the ASP.NET Core module.

On a self-hosted IIS server, this requires a new version of the hosting bundle. On Azure App Services, you will need to install a new ASP.NET Core runtime site extension (https://devblogs.microsoft.com/aspnet/asp-net-core-updates-in-net-6-preview-3/#shadow-copying-in-iis).

If you have the requirements in place, you should add a web.config to your project or edit the web.config that is created during the publish process (dotnet publish). Since most of us use continuous integration and can't touch the web.config after it gets created automatically, you should add it to the project. Just copy the one that got created by dotnet publish. Continuous integration will not override an existing web.config.

To enable it, you need to add some new handlerSettings to the web.config:

<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
    <handlerSettings>
        <handlerSetting name="experimentalEnableShadowCopy" value="true" />
        <handlerSetting name="shadowCopyDirectory" value="../ShadowCopyDirectory/" />
    </handlerSettings>
</aspNetCore>

This enables shadow-copying and specifies the shadow copy directory.

After the changes are deployed, you should be able to update the assemblies of a running application.

What's next?

In the next part, I'm going to look into minimal APIs in ASP.NET Core.

Thorsten Hans: Raspberry Pi – The Perfect Mini-Computer For Programming

If you are looking for a computer to learn programming, the Raspberry Pi is a perfect and affordable option. You can get a fully equipped Raspberry Pi for less than 70 euros.

Why is the Raspberry Pi so good for learning to program? This is actually due to the official operating system, which was previously called Raspbian and is now called Raspberry Pi OS. The operating system is based on Debian GNU/Linux. Many tools are pre-installed to pave your way into the universe of programming languages.

The company behind the SBC (Single Board Computer) is the Raspberry Pi Foundation. It encourages users to use Python. There are good reasons for this. Python is a very beginner-friendly programming language.

However, Raspberry Pi OS also offers Scratch as an option for children, so to speak, who want to get a taste of the world of programming. Since the operating system is based on Debian, you can of course also use Bash and all other tools from the Debian universe.

Raspberry Pi Becomes Really Fast

Even with the first Raspberry Pi, you could learn programming, but the graphical user interface was no fun. It was quite slow.

In the meantime, there is the much faster Raspberry Pi 4 and also the Pi 400, which even has its own keyboard. Incidentally, the keyboard doesn't type badly, and for beginners or children it is the perfect machine to learn programming with. All you really need is a mouse and a screen, plus a power supply and a microSD card, but there are starter kits for that.

With the Raspberry Pi 400, the speed has once again been increased and the system is actually usable as a desktop. Of course, you can’t edit videos or perform computationally intensive tasks with it. But for everyday tasks like editing documents with LibreOffice, surfing the web, and so on, it’s enough.

As mentioned earlier, Raspberry Pi OS comes with many pre-installed tools to help you get started with programming. Let’s take a closer look at that.

Pre-Installed Tools For Programming

If you have installed the official operating system, then use the menu at the bottom left to open the *Development* area. There you will find many useful tools that will help you get started with programming. Among other things, there are three different Scratch versions. This is a great way to teach yourself the basics of programming.

If you prefer to start with Python, then the *Thonny Python IDE* is a good development environment. For Java fans there are the *BlueJ Java IDE* and the *Greenfoot Java IDE*.


Tip: Many useful tools are already preinstalled on the Raspberry Pi OS.

Do you want to use the official Pi OS, but don’t want to get such an SBC? That’s no problem because the OS is also available for conventional computers. You can find the image on the download page (<https://www.raspberrypi.org/downloads…/>). Just search for the *Raspberry Pi Desktop (for PC and Mac)* section and you will find it there. Of course, you can also install the operating system on a virtual machine.

And with access to the complete Debian software repositories, you can install thousands of programs. You can find almost everything your heart desires for free.

Sense HAT Emulator

Does the *Sense HAT* sound familiar? This is a relatively expensive attachment for the Raspberry Pi, with which you can read out temperature, humidity, and air pressure, among other things. It was specially developed for the *Astro-Pi-Mission* and such devices were actually already in space on the ISS (International Space Station).

You can control the *Sense HAT* easily via Python. The Raspberry Pi Foundation makes it easy for you at this point and has even created a Python library for it. Experimenting with the Sense HAT is really fun. If you first want to test whether the HAT is worth the money, you can use the emulator and program with it. Your program will later also run with the physical *Sense HAT*.


Tip: You can also experiment with the *Sense HAT* without buying one.

I actually experimented with the emulated *Sense HAT* for a while, and then I had to buy one. This thing is just too much fun.

Another great thing at this point is that you have a goal for your programming. If you have a *Sense HAT*, you would like to read out the temperature and other data immediately and display them on the LEDs. You don’t have to think about what kind of project you want to implement first.

Raspberry Pi As A Control Center

Because a Raspberry Pi is not expensive and needs little power, you can also use it as a control center. Even a Raspberry Pi Zero is sufficient for such purposes.

With a few lines of code, you can monitor whether your servers are online or initiate other tasks. For example, if you are responsible for the backups of several servers and the backups run together on an online hard drive, you can use a few lines of code to check whether the backups have arrived. All you have to do is check the date of the relevant files.

If a backup is missing, Pi Zero could send you a message. You can equip it with a few LEDs and as soon as one of them lights up, something is wrong with the data backup. That’s just one example, but the SBC is incredibly versatile. Let your creativity flourish.

The post Raspberry Pi – The Perfect Mini-Computer For Programming appeared first on Xplatform.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Hot Reload

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look at the .NET 6 support for Hot Reload.

In preview 3, Microsoft started to add support for Hot Reload, which automatically gets started when you run dotnet watch. Preview 4 also includes support for Hot Reload in Visual Studio. Currently, I'm using preview 5 to try Hot Reload.

Playing around with Hot Reload

To play around and to see how it works, I also create a new MVC project using the following commands:

dotnet new mvc -o HotReload -n HotReload
cd HotReload
code .

These commands create an MVC app, change into the project folder, and open VSCode.

dotnet run will not start the application with Hot Reload enabled, but dotnet watch does.

Run the command dotnet watch and see what happens when you change some C#, HTML, or CSS files. The browser is updated immediately and shows you the results. You can see what's happening in the console as well.

Hot Reload in action

As mentioned initially, Hot Reload is enabled by default if you use dotnet watch. If you don't want to use Hot Reload, you need to add the --no-hot-reload option to the command:

dotnet watch --no-hot-reload

Hot Reload should also work with WPF and Windows Forms projects, as well as with .NET MAUI projects. I had a quick try with WPF, and it didn't really work with XAML files. Sometimes it also ended up in an infinite build loop.

More about Hot Reload in this blog post: https://devblogs.microsoft.com/dotnet/introducing-net-hot-reload/

What's next?

In the next part, I'm going to look into the support for shadow-copying in IIS.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - HTTP/3 endpoint TLS configuration

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into HTTP/3 endpoint TLS configuration.

In preview 3, Microsoft started to add support for HTTP/3, which brings a lot of improvements to the web. HTTP/3 brings faster connection setup as well as improved performance on low-quality networks.

Microsoft now adds support for HTTP/3 as well as the support to configure TLS (https) for HTTP/3.

More about HTTP/3

HTTP/3 endpoint TLS configuration

Let's see how you can configure HTTP/3 in a small MVC app, created using the following commands:

dotnet new mvc -o Http3Tls -n Http3Tls
cd Http3Tls
code .

These commands create an MVC app, change into the project folder, and open VSCode.

In the Program.cs we need to configure HTTP/3 as shown in Microsoft's blog post:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureKestrel((context, options) =>
                {
                    options.EnableAltSvc = true;
                    options.Listen(IPAddress.Any, 5001, listenOptions =>
                    {
                        // Enables HTTP/3
                        listenOptions.Protocols = HttpProtocols.Http3;
                        // Adds a TLS certificate to the endpoint
                        listenOptions.UseHttps(httpsOptions =>
                        {
                            httpsOptions.ServerCertificate = LoadCertificate();
                        });
                    });
                });
            });
}
The flag EnableAltSvc adds an Alt-Svc header to responses to tell browsers that there are alternative services to the existing HTTP/1 or HTTP/2 endpoints. This is needed so the browsers treat the alternative service, HTTP/3 in this case, like the existing ones. To be secure and trusted, this requires an HTTPS connection.
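For illustration, a response advertising HTTP/3 this way could look roughly like the following sketch (the port and the max-age value are assumptions that depend on your configuration):

```http
HTTP/1.1 200 OK
Alt-Svc: h3=":5001"; ma=86400
```

A browser that sees this header may retry subsequent requests to the same origin over HTTP/3 on the advertised port.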

What's next?

In the next part, I'm going to look into the support for .NET Hot Reload in ASP.NET Core.

Golo Roden: In eigener Sache: Die tech:lounge Summer Edition

The tech:lounge Summer Edition is a series of 12 webinars on architecture, code quality, containerization, and modern development, aimed at beginners and advanced developers alike.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Preserve prerendered state in Blazor apps

This is the next part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into preserve prerendered state in Blazor apps.

Blazor apps can be prerendered on the server to optimize the load time. The app gets rendered immediately in the browser and is available to the user right away. Unfortunately, the state that is used while prerendering on the server is lost on the client and needs to be recreated once the page is fully loaded. The UI may flicker when the state is recreated and the prerendered HTML is replaced by the HTML that is rendered again on the client.

To solve that, Microsoft added support for persisting the state into the prerendered page using the <persist-component-state /> tag helper. This helps to set up a state that is identical on the server and on the client.

Actually, I have no idea why this isn't the default behavior in case the app gets prerendered. It should be easy to do and wouldn't break anything, I guess.

Try to preserve prerendered states

I tried it with a new Blazor app and it worked quite well on the FetchData page. The important part is to add the persist-component-state tag helper after all used components in the _Host.cshtml. I placed it right before the script reference to the blazor.server.js:

    <component type="typeof(App)" render-mode="ServerPrerendered" />

    <div id="blazor-error-ui">
        <environment include="Staging,Production">
            An error has occurred. This application may no longer respond until reloaded.
        </environment>
        <environment include="Development">
            An unhandled exception has occurred. See browser dev tools for details.
        </environment>
        <a href="" class="reload">Reload</a>
        <a class="dismiss">🗙</a>
    </div>

    <persist-component-state /> <!-- <== relevant tag helper -->
    <script src="_framework/blazor.server.js"></script>

The next snippet is more or less the same as in Microsoft's blog post, except that the forecasts variable is missing there and System.Text.Json needs to be added to the usings as well:

@page "/fetchdata"
@implements IDisposable

@using PrerenderedState.Data
@using System.Text.Json
@inject WeatherForecastService ForecastService
@inject ComponentApplicationState ApplicationState


@code {
    private WeatherForecast[] forecasts;

    protected override async Task OnInitializedAsync()
    {
        ApplicationState.OnPersisting += PersistForecasts;

        if (!ApplicationState.TryTakePersistedState("fetchdata", out var data))
        {
            forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
        }
        else
        {
            var options = new JsonSerializerOptions
            {
                PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
                PropertyNameCaseInsensitive = true,
            };
            forecasts = JsonSerializer.Deserialize<WeatherForecast[]>(data, options);
        }
    }

    private Task PersistForecasts()
    {
        ApplicationState.PersistAsJson("fetchdata", forecasts);
        return Task.CompletedTask;
    }

    void IDisposable.Dispose()
    {
        ApplicationState.OnPersisting -= PersistForecasts;
    }
}

What is the tag helper doing?

It renders an HTML comment to the page that contains the state in an encoded format:
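As a sketch, such a comment looks something like this (the marker name is from memory and should be treated as an assumption; the payload is an opaque base64 string that differs per app and state):

```html
<!--Blazor-Component-State:...base64-encoded state...-->
```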



This reminds me of the ViewState we had in ASP.NET WebForms. Does this make Blazor Server the successor of ASP.NET WebForms? Just kidding.

Actually, it is not really the ViewState, because it doesn't get sent back to the server. It just helps the client restore the state that was initially created on the server while the page was prerendered.

What's next?

In the next part, I'm going to look into the support for HTTP/3 endpoint TLS configuration in ASP.NET Core.

Golo Roden: 12 Aspekte für bessere Software

Teams and companies that want to improve the quality of their software development will find valuable help in the 12 aspects of the "Joel Test".

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Infer component generic types from ancestor components

This is the ninth part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a look into the inferring of generic types from ancestor components.

In Blazor, generic child components needed to have the generic type defined in the markup code until now. With preview 2 of .NET 6, components can infer the generic type from an ancestor component.

In the announcement post, Microsoft shows a quick demo with the Grid component. Let's have a quick look at the snippets:

<Grid Items="@people">
    <Column TItem="Person" Name="Full name">@context.FirstName @context.LastName</Column>
    <Column TItem="Person" Name="E-mail address">@context.Email</Column>
</Grid>

In this snippet, the Column component specifies the generic type via the TItem attribute. This is no longer needed, as they showed with this sample:

<Grid Items="@people">
    <Column Name="Full name">@context.FirstName @context.LastName</Column>
    <Column Name="E-mail address">@context.Email</Column>
</Grid>

Since I don't like grids at all, I would like to try to build a SimpleList component that uses a generic ListItem ancestor component to render the items in the list:

Try to infer generic types

As usual, I have to create a project first. This time I'm going to use a Blazor Server project:

dotnet new blazorserver -n ComponentGenericTypes -o ComponentGenericTypes
cd ComponentGenericTypes
code .

This creates a new Blazor Server project called ComponentGenericTypes, changes into the project directory, and opens VSCode to start working on the project.

To generate some meaningful dummy data, I'm going to add my favorite NuGet package GenFu:

dotnet add package GenFu

In the Index.razor, I replaced the existing code with the following:

@page "/"
@using ComponentGenericTypes.Components
@using ComponentGenericTypes.Data
@using GenFu

<h1>Hello, world!</h1>

<SimpleList Items="@people">
    <ListItem>
        Hallo <b>@context.FirstName @context.LastName</b><br />
    </ListItem>
</SimpleList>

@code {
    public IEnumerable<Person> people = A.ListOf<Person>(15);
}

This will not work yet, but let's quickly go through it to get the idea. Since this code uses two components that are located in the Components folder, we need a using for ComponentGenericTypes.Components, as well as a using for ComponentGenericTypes.Data, because we want to use the Person class. Neither the components nor the class exists yet.

At the bottom of the file, we create a list of 15 persons using GenFu and assign it to a variable that is bound to the SimpleList component. The ListItem component is the direct child component of the SimpleList and behaves like a template for the items. It also contains markup code to display the values.

For the Person class I created a new C# file in the Data folder and added the following code:

namespace ComponentGenericTypes.Data
{
    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
}

This is a pretty simple class, but the property names are important. If such a class is instantiated by GenFu, it automatically writes first names into the FirstName property and last names into the LastName property, and it also writes valid email addresses into the Email property. It also works with streets, addresses, ZIP codes, phone numbers, and so on. This is why GenFu is my favorite NuGet package.

Now let's create a Components folder and place the SimpleList component inside. The code looks like this:

@typeparam TItem
@attribute [CascadingTypeParameter(nameof(TItem))]

<CascadingValue IsFixed="true" Value="Items">@ChildContent</CascadingValue>

@code {
    [Parameter] public IEnumerable<TItem> Items { get; set; }
    [Parameter] public RenderFragment ChildContent { get; set; }
}

It defines the generic type parameter TItem and a property called Items that is of type IEnumerable of TItem. That makes the component generic enough to work with almost any kind of IEnumerable. To host child components, the SimpleList also contains a RenderFragment property called ChildContent.

The attribute in the second line does the magic: it cascades the generic type parameter to the child components. This is why we don't need to specify the generic type on the child components anymore. With the CascadingValue component, we also cascade the property Items to the child components.

Now it's time to create the ListItem component:

@typeparam TItem

@foreach (var item in Items)
{
    @ChildContent(item)
}

@code {
    [CascadingParameter] public IEnumerable<TItem> Items { get; set; }
    [Parameter] public RenderFragment<TItem> ChildContent { get; set; }
}

This component iterates through the list of items and renders the ChildContent, which in this case is a generic RenderFragment. The generic variant creates a context variable of type TItem that can be used to bind the passed value to child components or HTML markup. As seen in the Index.razor, the context variable will be of type Person:

        Hallo <b>@context.FirstName @context.LastName</b><br />

That's it! The index page now will show a list of 15 persons:

Generic List Component

Since I'm not really a Blazor expert, the way I implemented the components might not be completely right, but it's working and demonstrates the idea behind the topic of this blog post.

What's next?

In the next part, I'm going to look into the support for preserving prerendered state in Blazor apps in ASP.NET Core.

Code-Inside Blog: Today I learned (sort of) 'fltmc' to inspect the IO request pipeline of Windows

The headline is obviously a big lie, because I followed this Twitter conversation last year, but it's still interesting to me and I wanted to write it down somewhere.

The starting point was that Bruce Dawson (Google programmer) noticed that building Chrome on Windows is slow for various reasons:

Trentent Tye told him to disable the “filter driver”:

If you have never heard of a “filter driver” (like me :)), you might want to take a look here.

To see the loaded filter drivers on your machine, try this: run fltmc (fltmc.exe) as admin.



This makes more or less sense to me. I’m not really sure what to do with that information, but it’s cool (nerd cool, but anyway :)).

Stefan Henneken: IEC 61131-3: SOLID – Fünf Grundsätze für bessere Software

In addition to the syntax of a programming language and an understanding of the most important libraries and frameworks, further methodologies, such as design patterns, belong to the fundamentals of software development. Besides design patterns, design principles are also a helpful tool when developing software. SOLID is an acronym for five such design principles that help developers design software that is more understandable, flexible, and maintainable.

Larger software projects contain a multitude of function blocks that are connected to each other through inheritance and references. These units interact with each other through calls to the function blocks and their methods. With a poor design, this interplay of code units can make extending the code or locating errors unnecessarily difficult. To develop sustainable software, function blocks should be modeled so that they are easy to extend.

Many design patterns apply the SOLID principles to propose an architectural approach for the task at hand. The SOLID principles are not to be understood as rules, but rather as advice. They are a subset of many principles that the American software engineer and lecturer Robert C. Martin (also known as Uncle Bob) presented in his book (Amazon affiliate link *) Clean Architecture: Das Praxis-Handbuch für professionelles Softwaredesign. The individual SOLID principles are:

  • Single Responsibility Principle
  • Open Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

The principles shown here are guidelines that make it easier for a developer to improve code quality. The effort pays off after a short time, as changes become simpler and testing and debugging are accelerated. Knowledge of these five design principles should therefore be part of every software developer's foundation.

Single Responsibility Principle

A function block should have only a single responsibility. If the functionality of a program is changed, this should affect only a few function blocks. Many small function blocks are better than a few large ones. At first glance the code may look more extensive, but it is easier to organize as a result. A program with many small function blocks, each for a specific task, is easier to maintain than a few large function blocks that claim to be able to do everything.

Open Closed Principle

According to the Open Closed Principle, function blocks should be open for extension but closed for modification. Extensions should be implemented only by adding code, not by changing existing code. A good example of this principle is inheritance: a new function block inherits from an existing one. New functions can be added this way without having to modify the existing function block; its source code does not even have to be available.

Liskov Substitution Principle

The Liskov Substitution Principle demands that derived function blocks must always be usable in place of their base FBs. Derived FBs must behave like their base FB. A derived FB may extend its base FB, but must not restrict it.

Interface Segregation Principle

Many client-specific interfaces are better than one universal interface. Accordingly, an interface may only contain functions that really belong closely together. Extensive interfaces create couplings between otherwise independent parts of a program. The Interface Segregation Principle therefore has a goal similar to that of the Single Responsibility Principle, although the two principles take different approaches in their implementation.

Dependency Inversion Principle

Function blocks are often linearly dependent on each other in one direction. A function block for logging messages calls methods of another function block to write data to a database, creating a fixed dependency between the logging function block and the database access function block. The Dependency Inversion Principle resolves this fixed dependency by defining a common interface, which is implemented by the block for database access.
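Translated to a minimal C# sketch (the type names are my own invention, not from the post), the inversion of the logger/database dependency described above looks like this:

```csharp
using System;
using System.Collections.Generic;

// The logger depends on this abstraction, not on a concrete database block.
public interface ILogSink
{
    void Write(string message);
}

// Concrete sink writing to a database (body elided, illustration only).
public class DatabaseSink : ILogSink
{
    public void Write(string message)
    {
        // INSERT INTO log ... (omitted)
    }
}

// A test double showing why the inversion helps: the logger can be
// exercised without any database at all.
public class MemorySink : ILogSink
{
    public List<string> Messages { get; } = new List<string>();
    public void Write(string message) => Messages.Add(message);
}

public class Logger
{
    private readonly ILogSink _sink;
    public Logger(ILogSink sink) => _sink = sink;
    public void Log(string message) => _sink.Write(message);
}
```

The direction of the dependency is now inverted: the database block implements an interface owned by the logging side, so the logger no longer knows anything about databases.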

In Closing

In the following posts, I will present the individual SOLID principles in more detail and try to explain them using an example. The sample program from my post IEC 61131-3: Das Command Pattern will serve as the basis. With each SOLID principle, I will try to optimize the program further.

I will start shortly with the Dependency Inversion Principle.

Holger Schwichtenberg: .NET 6 erscheint am 9. November 2021

Microsoft has now announced the exact release date of .NET 6.

Holger Schwichtenberg: Build-Konferenz 2021 beginnt heute: online und kostenfrei

The Microsoft Build conference takes place for the eleventh time, from May 25 to 27, 2021.

Holger Schwichtenberg: Softwareentwickler-Update für .NET- und Web-Entwickler am 8.6.2021 (Online)

The info day on June 8, 2021 covers .NET 6, C# 10, WinUI 3, cross-platform development with MAUI and Blazor Desktop, as well as Visual Studio 2022.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - CSS isolation for MVC Views and Razor Pages

This is the eighth part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a quick look into the support for CSS isolation for MVC Views and Razor Pages.

Blazor components already support CSS isolation. MVC Views and Razor Pages now do the same. Since the official blog post shows it on Razor Pages, I'd like to try it in an MVC application.

Trying CSS isolation for MVC Views

At first, I'm going to create a new MVC application project using the .NET CLI:

dotnet new mvc -n CssIsolation -o CssIsolation
cd CssIsolation
code .

These commands create the project, change the directory into the project folder, and open VSCode.

After VSCode opens, create an Index.cshtml.css file in the Views/Home folder. In Visual Studio this file will be nested under the Index.cshtml. VSCode doesn't support this kind of nesting yet.

Like in Microsoft's blog post, I just add a small CSS snippet to the new CSS file to change the color of the H1 element:

h1 {
    color: red;
}

This actually doesn't have any effect yet. Unlike Blazor, we need to add a reference to a CSS resource that bundles all the isolated CSS. Open the _Layout.cshtml that is located in the Views/Shared folder and add the following line right after the reference to the site.css:

<link rel="stylesheet" href="CssIsolation.styles.css" />

Ensure the first part of the URL is the name of your application; it is CssIsolation in my case. If you named your application FooBar, the CSS reference is FooBar.styles.css.

We'll now have a red H1 header:

Isolated CSS: red header

How is this solved?

I had a quick look at the sources to see how the CSS isolation is solved. Every element of the rendered View gets an autogenerated empty attribute that identifies the view:

<div b-zi0vwlqhpg class="text-center">
    <h1 b-zi0vwlqhpg class="display-4">Welcome</h1>
    <p b-zi0vwlqhpg>Learn about <a b-zi0vwlqhpg href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>

Calling the CSS bundle resource in the browser (https://localhost:5001/cssisolation.styles.css) we can see how the CSS is structured:

/* _content/CssIsolation/Views/Home/Index.cshtml.rz.scp.css */
h1[b-zi0vwlqhpg] {
  color: red;
}

/* _content/CssIsolation/Views/Home/Privacy.cshtml.rz.scp.css */
h1[b-tqxfxf7tqz] {
  color: blue;
}

I did the same for the Privacy.cshtml to see how the isolation is done in the CSS resource; this is why you see two different files listed here. The autogenerated attribute is appended to every CSS selector used here, which creates unique CSS selectors per view.

I assume this works the same with Razor Pages since both MVC and Razor Pages use the same technique.

This is pretty cool and helpful.

What's next?

In the next part, I'm going to look into inferring component generic types from ancestor components in ASP.NET Core.

Holger Schwichtenberg: Support-Ende für .NET Framework 4.5.2, 4.6 und 4.6.1 schon im April 2022

Microsoft has announced that support for versions 4.5.2, 4.6, and 4.6.1 of the classic .NET Framework will end early, in just one year.

Jürgen Gutsch: ASP.NET Core in .NET 6 - Support for custom event arguments in Blazor

This is the seventh part of the ASP.NET Core on .NET 6 series. In this post, I want to have a quick look into the support for custom event arguments in Blazor.

In Blazor, you can create custom events, and Microsoft now added support for custom event arguments for those custom events as well. Microsoft added a sample in the blog post about preview 2 that I'd like to try in a small Blazor project.

Exploring custom event arguments in Blazor

At first, I'm going to create a new Blazor WebAssembly project using the .NET CLI:

dotnet new blazorwasm -n BlazorCustomEventArgs -o BlazorCustomEventArgs
cd BlazorCustomEventArgs
code .

These commands create the project, change the directory into the project folder, and open VSCode.

After VSCode opens, I create a new folder called CustomEvents and place a new C# file called CustomPasteEventArgs.cs in it. This file contains the first snippet:

using System;
using Microsoft.AspNetCore.Components;

namespace BlazorCustomEventArgs.CustomEvents
{
    [EventHandler("oncustompaste", typeof(CustomPasteEventArgs), enableStopPropagation: true, enablePreventDefault: true)]
    public static class EventHandlers
    {
        // This static class doesn't need to contain any members. It's just a place where we can put
        // [EventHandler] attributes to configure event types on the Razor compiler. This affects the
        // compiler output as well as code completions in the editor.
    }

    public class CustomPasteEventArgs : EventArgs
    {
        // Data for these properties will be supplied by custom JavaScript logic
        public DateTime EventTimestamp { get; set; }
        public string PastedData { get; set; }
    }
}

Additionally, I added a namespace to be complete.

In the Index.razor in the Pages folder we add the next snippet of the blog post:

@page "/"
@using BlazorCustomEventArgs.CustomEvents

<p>Try pasting into the following text box:</p>
<input @oncustompaste="HandleCustomPaste" />
<p>@message</p>

@code {
    string message;

    void HandleCustomPaste(CustomPasteEventArgs eventArgs)
    {
        message = $"At {eventArgs.EventTimestamp.ToShortTimeString()}, you pasted: {eventArgs.PastedData}";
    }
}

I need to add the using to match the namespace of the CustomPasteEventArgs. This page creates an input element and outputs a message that is generated in the custom paste event handler.

At the end, we need to add some JavaScript in the index.html that is located in the wwwroot folder. This file hosts the actual WebAssembly application. Place this script directly after the script tag for the blazor.webassembly.js:

<script>
    Blazor.registerCustomEventType('custompaste', {
        browserEventName: 'paste',
        createEventArgs: event => {
            // This example only deals with pasting text, but you could use arbitrary JavaScript APIs
            // to deal with users pasting other types of data, such as images
            return {
                eventTimestamp: new Date(),
                pastedData: event.clipboardData.getData('text')
            };
        }
    });
</script>

This binds the default paste event to the custompaste event and adds the pasted text data, as well as the current date, to the CustomPasteEventArgs. The JavaScript object literal should match the CustomPasteEventArgs to get it working properly, except for the casing of the properties.

Blazor doesn't save you from writing some JavaScript ;-)

Let's try it out. I run the application by calling the dotnet run command or the dotnet watch command in the console:

dotnet run

If the browser doesn't start automatically, copy the displayed HTTPS URL into the browser. It should look like this:

custom event args 1

Now I paste some text into the input element. Et voilà:

custom event args 2

Don't be confused about the date. Since it is created via JavaScript using new Date(), it is a UTC date, which means minus two hours within the CET time zone during daylight saving time.

What's next?

In the next part, I'm going to look into the support for CSS isolation for MVC Views and Razor Pages in ASP.NET Core.

Code-Inside Blog: How to self host Google Fonts

Google Fonts are really nice and widely used. Typically, a Google Font consists of the actual font files (e.g. woff, ttf, eot, etc.) and some CSS, which points to those font files.

In one of our applications, we used a Bootstrap-like HTML/CSS/JS theme, and the theme linked some Google Fonts. The problem was that we wanted to self-host everything.

After some research we discovered this tool: Google-Web-Fonts-Helper


Pick your font, select your preferred CSS option (e.g. if you need to support older browsers etc.) and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)
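The downloaded package essentially provides @font-face rules along these lines (a sketch; the font name and file paths are placeholders, not from the article):

```css
@font-face {
  font-family: 'Open Sans';
  font-style: normal;
  font-weight: 400;
  /* Point src at the files you extracted into your own project. */
  src: url('../fonts/open-sans-regular.woff2') format('woff2'),
       url('../fonts/open-sans-regular.woff') format('woff');
}
```

Reference this CSS instead of the fonts.googleapis.com stylesheet and no request ever leaves your server.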

The project site is on GitHub.

Hope this helps!

Christina Hirth : What Does Continuous Delivery Mean to a Team

Tl;dr: Continuous integration and delivery are not about a pipeline; they are about trust, psychological safety, a common goal, and real teamwork.

What is needed for CI/CD – and how to achieve those?

  • No feature branches but trunk-based development and feature toggles: feature branches mean discontinuous development. CI/CD works with only one temporary branch: the local copy on your machine getting integrated at the moment you want to push. “No feature branches” also means pushing your changes at least once a day.
  • A feeling of safety to commit and push your code: trust in yourself and trust in your environment to help you if you fall – or steady you to not fall at all.
  • Quality gates to keep the customer safe
  • Observing and reducing the outcome of your work (as a team, of course)
  • Resilience: accept that errors will happen and make sure that they are not fatal, that you can live with them. This means also being aware of the risk involved in your changes

What happens in the team, in the team-work:

  • It enables a growing maturity, autonomy due to fast feedback, failing fast and early
  • It makes us real team-workers, “we fail together, we succeed together”
  • It leads to better programmers due to the need for XP practices and the need to know how to deliver backwards compatible software
  • It has an impact on the architecture and the design (see Accelerate)
  • Psychological safety: eliminates the fear of coding, of making decisions, of having code reviews
  • It gives a common goal, valuable for everybody: customers, devs, testers, PO, company
  • It makes everybody involved happy because of much faster feedback from customers instead of only the feedback of the PO => it allows validating the assumption that the new feature is valuable
  • It drives new ideas, new capabilities bc it allows experiments
  • Sets the right priorities: not to jump to code but to think about how to deliver new capabilities, to solve problems (sometimes even by deleting code)

How to start:

  • Agree upon setting CI/CD as a goal for the whole team: focus on how to get there not on the reasons why it cannot work out
  • Consider all requirements (safety net, coding and review practices, creating the pipeline and the quality gates) as necessary steps and work on them, one after another
  • Agree upon team rules making CI/CD as a team responsibility (monitoring errors, fixing them, flickering tests, processes to improve leaks in the safety net, blameless post-mortems)
  • Learn to give and get feedback in a professional manner (“I am not my work”), for example by reading the book Agile Conversations and/or practicing it in the meetup

– – – – –

This bullet-point list was born during this year’s CITCON, a great un-conference on continuous improvement. I am aware that it can trigger questions and a need for explanations – and I would be happy to answer them 🙂

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Nullable Reference Type Annotations

This is the sixth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a quick look into the new nullable reference type annotations in some ASP.NET Core APIs.

Microsoft added nullable reference types in C# 8, and this is why they applied nullability annotations to parts of ASP.NET Core. This provides additional compile-time safety when using reference types and protects against possible null reference exceptions.
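To illustrate what these annotations do in general (my own minimal example, not from the post):

```csharp
#nullable enable
using System;

public static class Greeter
{
    // The string? annotation tells the compiler that name may be null.
    // Dereferencing it without a check would produce warning CS8602.
    public static string Greet(string? name)
    {
        if (string.IsNullOrEmpty(name))
        {
            return "Hello, stranger!";
        }
        return $"Hello, {name}!";
    }
}
```

Calling Greeter.Greet(null) is explicitly allowed by the signature, while passing a possibly-null value to a parameter declared as non-nullable string gets flagged at compile time.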

This is not only a new thing with preview 1 but an ongoing change for the next releases. Microsoft will add more and more nullability annotations to the ASP.NET Core APIs in the next versions. You can see the progress in this GitHub Issue: https://github.com/aspnet/Announcements/issues/444

Exploring Nullable Reference Type Annotations

I'd quickly like to see whether this change is already visible in a newly created MVC project.

dotnet new mvc -n NullabilityDemo -o NullabilityDemo
cd NullabilityDemo

This creates a new MVC project and changes the directory into it.

Projects that enable nullable annotations may see new build-time warnings from ASP.NET Core APIs. To enable nullable reference types, add the Nullable property to a PropertyGroup in your project file:

<PropertyGroup>
  <Nullable>enable</Nullable>
</PropertyGroup>

In the following screenshot you'll see, the build result before and after enabling nullable annotations:

null warnings on build

Actually, there is no new warning. It just shows a warning for the RequestId property in the ErrorViewModel because it might be null. After changing it to a nullable string, the warning disappears:

public class ErrorViewModel
{
    public string? RequestId { get; set; }

    public bool ShowRequestId => !string.IsNullOrEmpty(RequestId);
}

However, how can I try the changed APIs?

I need to have a look into the already mentioned GitHub Issue to choose an API to try.

I'm going with the Microsoft.AspNetCore.WebUtilities.QueryHelpers.ParseQuery method:

using Microsoft.AspNetCore.WebUtilities;

// ...

private void ParseQuery(string queryString)
{
    var queryDictionary = QueryHelpers.ParseQuery(queryString);
}

If you now set the queryString variable to null, you'll get yellow squiggles telling you that queryString may be null:

null hints

You get the same message if you mark the input variable with a nullable annotation:

private void ParseQuery(string? queryString)

nullable hints

It's working, and it is quite cool to prevent null reference exceptions when working against ASP.NET Core APIs.

What's next?

In the next part, I'm going to look into the support for custom event arguments in Blazor in ASP.NET Core.

Golo Roden: Luca-App versus Open Source

The Luca app contains code from an open-source project but violated its license. The corresponding report dominated the headlines in recent weeks. But what exactly is open source, what distinguishes it from free software, and what can be learned from open source?

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Input ElementReference in Blazor

This is the fifth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the input ElementReference in Blazor that is exposed to relevant components.

Microsoft exposes the ElementReference of the Blazor input components on the underlying input element. This affects the following components: InputCheckbox, InputDate, InputFile, InputNumber, InputSelect, InputText, and InputTextArea.

Exploring the ElementReference

To test it, I created a Blazor Server project using the dotnet CLI:

dotnet new blazorserver -n ElementReferenceDemo -o ElementReferenceDemo

CD into the project and call dotnet watch

I will reuse the index.razor to try the form ElementReference:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

At first, add the following code block at the end of the file:

@code {
    Person person = new Person
    {
        FirstName = "John",
        LastName = "Doe"
    };

    InputText firstNameReference;
    InputText lastNameReference;

    public class Person
    {
        public string FirstName { get; set; }

        public string LastName { get; set; }
    }
}

This creates a Person type and initializes it. We will use it later as a model in the EditForm. There are also two variables added that will reference the actual InputText elements in the form. We will add some more code later on, but let's add the form first:

<EditForm Model=@person>
    <InputText @bind-Value="person.FirstName" @ref="firstNameReference" /><br>
    <InputText @bind-Value="person.LastName" @ref="lastNameReference" /><br>

    <input type="submit" value="Submit" class="btn btn-primary" /><br>
    <input type="button" value="Focus FirstName" class="btn btn-secondary" 
        @onclick="HandleFocusFirstName" />
    <input type="button" value="Focus LastName" class="btn btn-secondary" 
        @onclick="HandleFocusLastName" />
</EditForm>
This form has the person object assigned as its model. It contains two InputText elements, the default submit button, as well as two input buttons that will be used to test the ElementReference.

The reference variables are assigned to the @ref attribute of the InputText elements. We will use these variables later on.

The buttons have @onclick methods assigned that we need to add to the code section:

private async Task HandleFocusFirstName()
{
}

private async Task HandleFocusLastName()
{
}

As described by Microsoft, the input elements now expose the ElementReference. This can be used to set the focus to an element. Add the following lines to focus the InputText elements:

private async Task HandleFocusFirstName()
{
    await firstNameReference.Element.Value.FocusAsync();
}

private async Task HandleFocusLastName()
{
    await lastNameReference.Element.Value.FocusAsync();
}

This might be pretty useful. Instead of playing around with JavaScript interop, you can do it completely in C#.

On the other hand, it would be great if Microsoft exposed more features via the ElementReference, instead of just the ability to focus an element.

What's next?

In the next part, I'm going to look into the support for Nullable Reference Type Annotations in ASP.NET Core.

Norbert Eder: Python lernen #2: Installation / Tools

The second part of the series is about the installation. You can find the required setup at https://www.python.org/downloads/. All common operating systems are supported. I am installing Python 3.9.2 for Windows.

During the installation, I choose the standard installation. Before that, I recommend having the installation path added to the PATH environment variable:

There are no further intermediate steps here.

The setup completed successfully. Now let's start the console and run a first test to check whether everything actually worked:

Python 3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 23:44:55) [MSC v.1928 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

That looks good, and we are already in the Python shell.

Now a quick, simple test with the print command, which lets us write information to standard output:

>>> print('visit norberteder.com')
visit norberteder.com
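As a small aside that goes slightly beyond the post (this is standard Python behavior, not from the original article): print also accepts several values at once, and the sep argument controls how they are joined:

```python
# print accepts multiple values; sep controls the separator (default: a space)
name = "norberteder.com"
print("visit", name)           # visit norberteder.com
print("visit", name, sep="-")  # visit-norberteder.com
```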

Of course, you can also work with files. In the following, we write the line of code into a Python file:

D:\>echo print('visit norberteder.com') > hello.py

And run it:

D:\>python hello.py
visit norberteder.com


IDLE is installed together with Python. It is a simple tool for running the Python shell as well as for writing, editing, and debugging Python files.


IDLE is certainly sufficient for the first steps; for larger projects I will use a different editor.

For private purposes and for the development of open-source software, JetBrains offers a free PyCharm Community Edition. Since I am already familiar with other language-specific editors from JetBrains, I will switch to PyCharm in the next parts of this series.

No further tools will be used for now. That may change over the course of the series – we will see :)

With that, we have the basics in place and everything we need for now. The next part is about the fundamentals of the language: which naming conventions exist, and how to define variables and functions.

In Python lernen #1: Der Einstieg you will find a list of all available articles in my Python series. I am happy about your feedback so that I can keep improving the series.

The post Python lernen #2: Installation / Tools appeared first on Norbert Eder.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - DynamicComponent in Blazor

This is the fourth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the DynamicComponent in Blazor.

What does Microsoft say about it?

DynamicComponent is a new built-in Blazor component that can be used to dynamically render a component specified by type.

That sounds nice. It is a component that dynamically renders any other component. Unfortunately, there is no documentation available yet, except for a note in the blog post. So let's write a small demo ourselves:

Trying the DynamicComponent

To test it, I created a Blazor Server project using the dotnet CLI

dotnet new blazorserver -n BlazorServerDemo -o BlazorServerDemo

Then cd into the project folder and call dotnet watch.

Now let's try the DynamicComponent on the index.razor:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

My idea is to render the SurveyPrompt component dynamically with a different title:

@code {
    Type someType = typeof(SurveyPrompt);
    Dictionary<string, object> myDictionaryOfParameters = new()
    {
        { "Title", "Foo Bar" }
    };
}

<DynamicComponent Type="@someType" Parameters="@myDictionaryOfParameters" />

First, I needed to define the type of the component I want to render. Second, I needed to define the parameters to pass to that component. In this case, it is just the Title property.


Why could this be useful?

This is great if you want to render components dynamically based on data inputs.

Think about a timeline of news, a newsfeed, or something similar on a web page that renders different kinds of content like text, videos, or pictures. You can now just loop through the news list, render a DynamicComponent for each entry, and pass the type of the actual component to it, as well as the attribute values the component needs.
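The newsfeed idea above could be sketched roughly like this (a hypothetical example: the NewsItem model and the TextNews and VideoNews components are made-up names, not from the post):

```razor
@foreach (var item in newsItems)
{
    @* Each news item carries the component type to render and its parameters *@
    <DynamicComponent Type="@item.ComponentType" Parameters="@item.Parameters" />
}

@code {
    // Hypothetical news item model: each entry knows which component renders it
    record NewsItem(Type ComponentType, Dictionary<string, object> Parameters);

    List<NewsItem> newsItems = new()
    {
        new(typeof(TextNews), new() { { "Text", "Hello" } }),
        new(typeof(VideoNews), new() { { "Url", "https://example.com/v.mp4" } })
    };
}
```

The point of the design is that the loop itself stays generic; only the data decides which component type gets rendered.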

What's next?

In the next part, I'm going to look into the support for ElementReference in Blazor.

Holger Schwichtenberg: Kommende Entwickler-Events: Vor Ort und/oder Online

A list of upcoming developer events in the German-speaking region through May 2022.

Stefan Henneken: IEC 61131-3: Different versions of the same library in a TwinCAT project

Library placeholders allow you to reference multiple versions of the same library in a PLC project. This can be helpful if a library in an existing project has to be updated because of new functions, but the update turns out to change the behavior of an older FB.

The mentioned problem can be solved by including different versions of the same library in the project using placeholders. Placeholders for libraries are comparable to references. Instead of adding libraries directly to a project, they are referenced indirectly via placeholders. Each placeholder is linked to a library, either to a specific version or so that the latest library version is always used. If libraries are added via the standard dialog, placeholders are always used automatically.

In the following short post, I want to show how to add several versions of the same library to a project. In our example, I will add two different versions of the Tc3_JsonXml library to a project. There are currently three different versions of the library on my computer.

V3.3.7.0 and V3.3.14.0 will be used in parallel in the example.

Open the dialog for adding a library. Then switch to the view Advanced.

Switch to the tab Placeholder and enter a unique name for the new placeholder.

Select the library that will be referenced by the placeholder. Either a specific version can be selected, or the '*' can be used so that the latest version is always taken.

If you then select the placeholder in the project tree under References and switch to the properties window, the properties of the placeholder will be displayed there.

The namespace still has to be adjusted here. The namespace is used later in the PLC program to address elements of the two libraries by different names. I presented the basic concept of namespaces in IEC 61131-3: Namespaces. I chose the same identifiers for the namespaces as for the placeholders.

After performing the same steps for the V3.3.14.0 version of the library, both placeholders should be available with a unique name and customized namespace.

The Library Manager, which is opened by double-clicking on References, provides a good overview.

Here you can clearly see how the placeholders are resolved. Usually, the placeholders have the same name as the libraries they reference. The '*' means that the newest version of the library available on the development computer is always used. The right column shows the version referenced by the placeholder. The names of the two placeholders for the Tc3_JsonXml library have been adapted.

FB_JsonSaxWriter will be used as an example in the PLC program. If the FB is specified without a namespace when the instance is declared,

  fbJsonSaxWriter    : FB_JsonSaxWriter;

the compiler will output an error message:

The name FB_JsonSaxWriter cannot be uniquely resolved because two different versions of the Tc3_JsonXml library (V3.3.7.0 and V3.3.14.0) are available in the project. Thus, FB_JsonSaxWriter is also contained twice in the project.

By using the namespaces, targeted access to the individual elements of the desired library is possible:

VAR
    fbJsonSaxWriter_Build7           : Tc3_JsonXml_Build7.FB_JsonSaxWriter;
    fbJsonSaxWriter_Build14          : Tc3_JsonXml_Build14.FB_JsonSaxWriter;
    sVersionBuild7, sVersionBuild14  : STRING;
END_VAR

sVersionBuild7 := Tc3_JsonXml_Build7.stLibVersion_Tc3_JsonXml.sVersion;
sVersionBuild14 := Tc3_JsonXml_Build14.stLibVersion_Tc3_JsonXml.sVersion;

In this short example, the current version number is also read out via the global structure that is contained in every library:

Both libraries can now be used in parallel in the same PLC project. However, it must be ensured that both libraries are available in exactly the required versions (V3.3.7.0 and V3.3.14.0) on the development computer.

Norbert Eder: Python lernen #1: Der Einstieg

You want to learn Python, just like me? Then walk this path together with me. In this article series, I will share my questions, answers, and insights with you.

We start with a small list of information sources that I picked out in advance and that I will use going forward. I am also happy about pointers to further interesting websites, books, and videos.

At the end of this article you will find a list of all articles, which will be extended continuously and for now contains only this one – but hopefully keeps growing.


Lately I have increasingly come into contact with topics that practically scream for Python, primarily machine learning.

In addition, my kids want to get into software development. Python is excellently suited for tinkering with a Raspberry Pi and the like. It is also said to be easy to learn, which is an advantage especially for beginners.


The book Einstieg in Python serves as my basis and guide.

You can find more books with good reviews here:


Even if this may seem a bit old-fashioned, I still like to rely on books for the structured learning of a new language. But of course everyone can handle that as they like.


Over time, many helpful links will surely accumulate. For now, however, I stick to the official websites for the necessary downloads and information:

On https://www.python.org/ you will also find many learning materials, videos, and of course a large community.

At https://github.com/python you will find all official repositories. Of course, you can also take a look at what else is available around Python on GitHub.


If you like watching videos, you will find a lot on the topic, especially on YouTube. This 29-part series on Python 3 looks quite good:


  1. Python lernen #1: Der Einstieg [this article]
  2. Python lernen #2: Installation und Setup
  3. Python lernen #3: Grundlagen der Programmiersprache (naming conventions, declaring variables, data types, functions, etc.)
  4. Python lernen #4: Module und Gliederung von Projekten
  5. Python lernen #5: Ein erstes Beispiel

More articles will follow shortly; I will extend this list continuously.

I am happy about any feedback, support, wishes, questions, and the like. Please leave a comment here or contact me via the contact form.

The post Python lernen #1: Der Einstieg appeared first on Norbert Eder.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Support for IAsyncDisposable in MVC

This is the third part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the Support for IAsyncDisposable in MVC.

IAsyncDisposable has been around since .NET Core 3.0. If I remember correctly, we got it together with async streams, to release those kinds of streams asynchronously. Now MVC supports this interface as well, and you can use it anywhere in your code, on controllers, classes, etc., to release async resources.

When should I use IAsyncDisposable?

Use it when you work with asynchronous enumerators, as in async streams, or with instances of unmanaged resources that need resource-intensive I/O operations to release.

When you implement this interface, you implement the DisposeAsync method to release those kinds of resources.

Let's try it

Let's assume we have a controller that creates and uses a Utf8JsonWriter, which is an IAsyncDisposable resource as well:

public class HomeController : Controller, IAsyncDisposable
{
    private Utf8JsonWriter _jsonWriter;

    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
        _jsonWriter = new Utf8JsonWriter(new MemoryStream());
    }

    // ...
}
The interface needs us to implement the DisposeAsync method. This should be done like this:

public async ValueTask DisposeAsync()
{
    // Perform async cleanup.
    await DisposeAsyncCore();

    // Dispose of unmanaged resources.
    Dispose(disposing: false);

    // Suppress finalization; the cleanup has already been done.
    GC.SuppressFinalize(this);
}
This is a higher-level method that calls a DisposeAsyncCore method that actually does the async cleanup. It also calls the regular Dispose method to release other unmanaged resources, and it tells the garbage collector not to call the finalizer, since the cleanup has already happened.

This needs us to add another method called DisposeAsyncCore():

protected virtual async ValueTask DisposeAsyncCore()
{
    if (_jsonWriter is not null)
    {
        await _jsonWriter.DisposeAsync();
    }

    _jsonWriter = null;
}
This is what actually disposes the async resource.

Further reading

Microsoft has some really detailed docs about it:

What's next?

In the next part, I'm going to look into the support for DynamicComponent in Blazor.

Holger Schwichtenberg: Konsolenfenster und Windows-Fenster in einer .NET-5.0-App

A special setting is required to be able to use Windows Forms or Windows Presentation Foundation (WPF) in a console application.

Jürgen Gutsch: ASP.​NET Core in .NET 6 - Update on dotnet watch

This is the second part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look into the updates on dotnet watch. The announcement post from February 17th mentioned that dotnet watch now does dotnet watch run by default.

Actually, this doesn't work in preview 1 because the feature accidentally didn't make it into this release: https://github.com/dotnet/aspnetcore/issues/30470

By the way: this feature isn't mentioned in the announcement anymore. The team changed the post, and the feature didn't make it into preview 2 either.

The idea is to just use dotnet watch without specifying the run command that should be executed after a file is changed. run is now the default command:


This is just a small thing but might save some time.

What's next?

In the next part, I'm going to look into the support for IAsyncDisposable in MVC.
