Norbert Eder: Visual Studio 2017: Service Fabric templates are not displayed

You have installed the Azure Service Fabric SDK, but you cannot find the project template in Visual Studio 2017 and therefore cannot create a new project? In that case, the Service Fabric tools of the Azure development workload may not be installed:

Installing the Service Fabric tools for Visual Studio 2017

In the Visual Studio Installer, enable the Azure development workload as well as the Service Fabric tools.

After installing them and restarting Visual Studio 2017, the templates are available.

Service Fabric Application Template | Visual Studio 2017

Have fun developing.


Holger Schwichtenberg: Creating a Git repository in Azure DevOps from the command line

For the automated administration of Azure DevOps, Microsoft provides a command-line tool called "VSTS CLI".

Christina Hirth : Base your decisions on heuristics and not on gut feeling

As developers we often tackle problems which can be solved in various ways. It is OK not to know how to solve a problem. The real question is: how do you decide which way to go 😯

In these situations I often have a feeling rather than a concrete logical reason for my decisions. These gut feelings are in most cases correct – but that doesn’t help me if I want to discuss them with others. It is not enough to KNOW something. Unless you are a nerd from the 80s (working alone in a den), it is crucial to be able to formulate, explain and share the thoughts leading to those decisions.

I finally found a solution for this problem when I saw Mathias Verraes’ session about Design Heuristics at KanDDDinsky.

The biggest takeaway seems to be a no-brainer but it makes a huge difference: formulate and visualize your heuristics so that you can talk about concrete ideas instead of having to memorize everything that was said – or what you think was said.

Using this methodology …

  • … unfounded opinions like “I think this is good and this is bad” are off the table. The question is why something is good or bad.
  • … looping back to subjects that were already discussed is avoided
  • … the participants can see all criteria at once
  • … the participants can weigh the heuristics and thus find what is probably the best solution

What is necessary for this method? Actually nothing but a whiteboard and/or some stickies. And maybe take some time beforehand to list your design heuristics. These are mine (for now):

  • Is this a solution for my problem?
  • Do I have to build it or can I buy it?
  • Can it be rolled out without breaking either my features or anything else outside my control?
  • Does it break any architecture rules or clean code rules? Do I have a valid reason to break these rules?
  • Can it lead to security leaks?
  • Is it over-engineered?
  • Is it much too simple, does it feel like a shortcut?
  • If it is a shortcut, can it be corrected in the near future without having to throw everything away? = Does my shortcut drive the code in the right direction, just in a more shallow way?
  • Does this solution introduce a new stack = a new unknown complexity?
  • Is it fast enough (for now and the near future)?
  • … to be continued 🙂

The video for the talk can be found here. It was a workshop disguised as a talk (thanks again Mathias!!), we could have continued for another hour if it weren’t for the cold beer waiting 🙂

David Tielke: DDC 2018 - Materials from my workshop and the DevSessions

Every year not only Christmas comes around, but the .NET Developer Conference also takes place at the Pullman Hotel in Cologne. This year, Developer Media once again hosted the largest .NET conference in the German-speaking region from November 26 to 28. For the ninth time I was able to take part as a speaker, contributing a workshop and two DevSessions.

At this point I would like to thank all attendees of my sessions once again for three fantastic days; as always, it was tremendous fun. A big thank you also goes to the organizer, who once again put on a great conference.

As announced in my talks, all contents of my sessions are now available for download here. The password was announced in the sessions.

Links
Download area on OneDrive

Code-Inside Blog: How to use TensorFlow with AMD GPUs

How to use TensorFlow with AMD GPUs

Most machine learning frameworks that run with a GPU support Nvidia GPUs, but if you own an AMD GPU you are out of luck.

Recently AMD has made some progress with their ROCm platform for GPU computing and now provides a TensorFlow build for their GPUs.

Since I work with TensorFlow and own an AMD GPU, it was time to give it a try. I stumbled upon these instructions for TensorFlow 1.8, but since they are outdated, I decided to write down what I did.

1. Set up Linux

It looks like there is currently no ROCm support for Windows. And no, WSL aka Bash for Windows does not work. But there are packages for CentOS/RHEL 7 and Ubuntu. I used Ubuntu 18.04.

2. Install ROCm

Just follow the ROCm install instructions.

3. Install TensorFlow

AMD provides a special build of TensorFlow. Currently they support TensorFlow 1.12.0. You can build it yourself, but the most convenient way is to install the package from PyPI:

sudo apt install python3-pip 
pip3 install --user tensorflow-rocm

4. Train a Model

To test your setup you can run the image recognition task from the TensorFlow tutorials:

git clone https://github.com/tensorflow/models.git
cd models/tutorials/image/imagenet
python3 classify_image.py

and the result should look like this:

giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (score = 0.89103)
indri, indris, Indri indri, Indri brevicaudatus (score = 0.00810)
lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens (score = 0.00258)
custard apple (score = 0.00149)
earthstar (score = 0.00141)

Extra: Monitor your GPU

If you want to check that your model fully utilizes your GPU, you can use the [radeontop](https://github.com/clbr/radeontop) tool:

Install it with

sudo apt-get install radeontop

and run it

sudo radeontop

If you want to dump the statistics to the command line instead, use:

sudo radeontop -d -

Holger Schwichtenberg: Microsoft Connect(); online conference on December 4, 2018

The talks can be watched for free tonight starting at 5:30 p.m. German time. What can we expect?

Code-Inside Blog: Make your WCF Service async

Oh my: WCF???

This might be the elephant in the room: I wouldn’t use WCF for new stuff anymore, but to keep some “legacy” stuff working it might be a good idea to modernize those services as well.

WCF Service/Client compatibility

WCF services have always had a close relationship with their clients, so it is no surprise that most guides show how to implement async operations on both the server and the client side.

In our product we needed to ensure backwards compatibility with older clients, and to my surprise: making the operations async doesn’t break the WCF contract!

So - a short example:

Sync Sample

The sample code is more or less the default implementation for WCF services when you use Visual Studio:

[ServiceContract]
public interface IService1
{
    [OperationContract]
    string GetData(int value);

    [OperationContract]
    CompositeType GetDataUsingDataContract(CompositeType composite);

    // TODO: Add your service operations here
}

[DataContract]
public class CompositeType
{
    bool boolValue = true;
    string stringValue = "Hello ";

    [DataMember]
    public bool BoolValue
    {
        get { return boolValue; }
        set { boolValue = value; }
    }

    [DataMember]
    public string StringValue
    {
        get { return stringValue; }
        set { stringValue = value; }
    }
}

public class Service1 : IService1
{
    public string GetData(int value)
    {
        return string.Format("You entered: {0}", value);
    }

    public CompositeType GetDataUsingDataContract(CompositeType composite)
    {
        if (composite == null)
        {
            throw new ArgumentNullException("composite");
        }
        if (composite.BoolValue)
        {
            composite.StringValue += "Suffix";
        }
        return composite;
    }
}

The code is pretty straightforward: the typical interface with two methods decorated with OperationContract, and a default implementation.

When we now run this example and check the generated WSDL, we get something like this:

<wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:wsx="http://schemas.xmlsoap.org/ws/2004/09/mex" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsa10="http://www.w3.org/2005/08/addressing" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wsap="http://schemas.xmlsoap.org/ws/2004/08/addressing/policy" xmlns:msc="http://schemas.microsoft.com/ws/2005/12/wsdl/contract" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://tempuri.org/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" name="Service1" targetNamespace="http://tempuri.org/">
	<wsdl:types>
		<xsd:schema targetNamespace="http://tempuri.org/Imports">
			<xsd:import schemaLocation="http://localhost:8733/Design_Time_Addresses/SyncWcf/Service1/?xsd=xsd0" namespace="http://tempuri.org/"/>
			<xsd:import schemaLocation="http://localhost:8733/Design_Time_Addresses/SyncWcf/Service1/?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/"/>
			<xsd:import schemaLocation="http://localhost:8733/Design_Time_Addresses/SyncWcf/Service1/?xsd=xsd2" namespace="http://schemas.datacontract.org/2004/07/SyncWcf"/>
		</xsd:schema>
	</wsdl:types>
	<wsdl:message name="IService1_GetData_InputMessage">
		<wsdl:part name="parameters" element="tns:GetData"/>
	</wsdl:message>
	<wsdl:message name="IService1_GetData_OutputMessage">
		<wsdl:part name="parameters" element="tns:GetDataResponse"/>
	</wsdl:message>
	<wsdl:message name="IService1_GetDataUsingDataContract_InputMessage">
		<wsdl:part name="parameters" element="tns:GetDataUsingDataContract"/>
	</wsdl:message>
	<wsdl:message name="IService1_GetDataUsingDataContract_OutputMessage">
		<wsdl:part name="parameters" element="tns:GetDataUsingDataContractResponse"/>
	</wsdl:message>
	<wsdl:portType name="IService1">
		<wsdl:operation name="GetData">
			<wsdl:input wsaw:Action="http://tempuri.org/IService1/GetData" message="tns:IService1_GetData_InputMessage"/>
			<wsdl:output wsaw:Action="http://tempuri.org/IService1/GetDataResponse" message="tns:IService1_GetData_OutputMessage"/>
		</wsdl:operation>
		<wsdl:operation name="GetDataUsingDataContract">
			<wsdl:input wsaw:Action="http://tempuri.org/IService1/GetDataUsingDataContract" message="tns:IService1_GetDataUsingDataContract_InputMessage"/>
			<wsdl:output wsaw:Action="http://tempuri.org/IService1/GetDataUsingDataContractResponse" message="tns:IService1_GetDataUsingDataContract_OutputMessage"/>
		</wsdl:operation>
	</wsdl:portType>
	<wsdl:binding name="BasicHttpBinding_IService1" type="tns:IService1">
		<soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
		<wsdl:operation name="GetData">
			<soap:operation soapAction="http://tempuri.org/IService1/GetData" style="document"/>
			<wsdl:input>
				<soap:body use="literal"/>
			</wsdl:input>
			<wsdl:output>
				<soap:body use="literal"/>
			</wsdl:output>
		</wsdl:operation>
		<wsdl:operation name="GetDataUsingDataContract">
			<soap:operation soapAction="http://tempuri.org/IService1/GetDataUsingDataContract" style="document"/>
			<wsdl:input>
				<soap:body use="literal"/>
			</wsdl:input>
			<wsdl:output>
				<soap:body use="literal"/>
			</wsdl:output>
		</wsdl:operation>
	</wsdl:binding>
	<wsdl:service name="Service1">
		<wsdl:port name="BasicHttpBinding_IService1" binding="tns:BasicHttpBinding_IService1">
			<soap:address location="http://localhost:8733/Design_Time_Addresses/SyncWcf/Service1/"/>
		</wsdl:port>
	</wsdl:service>
</wsdl:definitions>

Convert to async

To make the service async we only need to change the method signatures and return Tasks:

[ServiceContract]
public interface IService1
{
    [OperationContract]
    Task<string> GetData(int value);

    [OperationContract]
    Task<CompositeType> GetDataUsingDataContract(CompositeType composite);

    // TODO: Add your service operations here
}

...

public class Service1 : IService1
{
    public async Task<string> GetData(int value)
    {
        return await Task.FromResult(string.Format("You entered: {0}", value));
    }

    public async Task<CompositeType> GetDataUsingDataContract(CompositeType composite)
    {
        if (composite == null)
        {
            throw new ArgumentNullException("composite");
        }
        if (composite.BoolValue)
        {
            composite.StringValue += "Suffix";
        }

        return await Task.FromResult(composite);
    }
}

When we run this example and check the WSDL we will see that it is (besides some naming that I changed based on my samples) identical:

<wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:wsx="http://schemas.xmlsoap.org/ws/2004/09/mex" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsa10="http://www.w3.org/2005/08/addressing" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wsap="http://schemas.xmlsoap.org/ws/2004/08/addressing/policy" xmlns:msc="http://schemas.microsoft.com/ws/2005/12/wsdl/contract" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://tempuri.org/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" name="Service1" targetNamespace="http://tempuri.org/">
	<wsdl:types>
		<xsd:schema targetNamespace="http://tempuri.org/Imports">
			<xsd:import schemaLocation="http://localhost:8733/Design_Time_Addresses/AsyncWcf/Service1/?xsd=xsd0" namespace="http://tempuri.org/"/>
			<xsd:import schemaLocation="http://localhost:8733/Design_Time_Addresses/AsyncWcf/Service1/?xsd=xsd1" namespace="http://schemas.microsoft.com/2003/10/Serialization/"/>
			<xsd:import schemaLocation="http://localhost:8733/Design_Time_Addresses/AsyncWcf/Service1/?xsd=xsd2" namespace="http://schemas.datacontract.org/2004/07/AsyncWcf"/>
		</xsd:schema>
	</wsdl:types>
	<wsdl:message name="IService1_GetData_InputMessage">
		<wsdl:part name="parameters" element="tns:GetData"/>
	</wsdl:message>
	<wsdl:message name="IService1_GetData_OutputMessage">
		<wsdl:part name="parameters" element="tns:GetDataResponse"/>
	</wsdl:message>
	<wsdl:message name="IService1_GetDataUsingDataContract_InputMessage">
		<wsdl:part name="parameters" element="tns:GetDataUsingDataContract"/>
	</wsdl:message>
	<wsdl:message name="IService1_GetDataUsingDataContract_OutputMessage">
		<wsdl:part name="parameters" element="tns:GetDataUsingDataContractResponse"/>
	</wsdl:message>
	<wsdl:portType name="IService1">
		<wsdl:operation name="GetData">
			<wsdl:input wsaw:Action="http://tempuri.org/IService1/GetData" message="tns:IService1_GetData_InputMessage"/>
			<wsdl:output wsaw:Action="http://tempuri.org/IService1/GetDataResponse" message="tns:IService1_GetData_OutputMessage"/>
		</wsdl:operation>
		<wsdl:operation name="GetDataUsingDataContract">
			<wsdl:input wsaw:Action="http://tempuri.org/IService1/GetDataUsingDataContract" message="tns:IService1_GetDataUsingDataContract_InputMessage"/>
			<wsdl:output wsaw:Action="http://tempuri.org/IService1/GetDataUsingDataContractResponse" message="tns:IService1_GetDataUsingDataContract_OutputMessage"/>
		</wsdl:operation>
	</wsdl:portType>
	<wsdl:binding name="BasicHttpBinding_IService1" type="tns:IService1">
		<soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
		<wsdl:operation name="GetData">
			<soap:operation soapAction="http://tempuri.org/IService1/GetData" style="document"/>
			<wsdl:input>
				<soap:body use="literal"/>
			</wsdl:input>
			<wsdl:output>
				<soap:body use="literal"/>
			</wsdl:output>
		</wsdl:operation>
		<wsdl:operation name="GetDataUsingDataContract">
			<soap:operation soapAction="http://tempuri.org/IService1/GetDataUsingDataContract" style="document"/>
			<wsdl:input>
				<soap:body use="literal"/>
			</wsdl:input>
			<wsdl:output>
				<soap:body use="literal"/>
			</wsdl:output>
		</wsdl:operation>
	</wsdl:binding>
	<wsdl:service name="Service1">
		<wsdl:port name="BasicHttpBinding_IService1" binding="tns:BasicHttpBinding_IService1">
			<soap:address location="http://localhost:8733/Design_Time_Addresses/AsyncWcf/Service1/"/>
		</wsdl:port>
	</wsdl:service>
</wsdl:definitions>

Clients

The contract itself is still the same. You can still use the sync methods on the client side, because WCF doesn’t care (at least with the SOAP binding stuff). It would be clever to also update your client code, but you don’t have to - that was the most important point for us.
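
As a minimal sketch of the client side (the proxy class Service1Client and its GetDataAsync method are assumptions about what "Add Service Reference" would typically generate, not something shown in the original sample), old and new client code could look like this:

public static async Task CallService()
{
    // Hypothetical generated proxy for the IService1 contract.
    var client = new Service1Client();

    // Old, unchanged client code can keep using the synchronous method ...
    string syncResult = client.GetData(42);

    // ... while updated client code can switch to the task-based method.
    string asyncResult = await client.GetDataAsync(42);

    Console.WriteLine(syncResult);
    Console.WriteLine(asyncResult);
}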

Async & OperationContext access

If you are accessing the OperationContext on the server side and using async methods you might stumble on an odd behaviour:

After the first await, the value of OperationContext.Current disappears and OperationContext.Current will be null. This Stackoverflow.com question shows this “bug”.

The reason for this lies in how WCF flows the context across awaits. There are some edge cases, but if you are not using “reentrant” services, the behaviour can be changed with this setting:

<appSettings>
  <add key="wcf:disableOperationContextAsyncFlow" value="false" />
</appSettings>

With this setting it should work like before in the “sync” world.
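
Another common workaround, shown here only as a sketch and not taken from the original post, is to capture the context in a local variable before the first await and use that reference afterwards:

public async Task<string> GetData(int value)
{
    // Capture the context before the first await; afterwards
    // OperationContext.Current may no longer be set (see above).
    OperationContext context = OperationContext.Current;

    await Task.Delay(100); // placeholder for real async work

    // Use the captured reference instead of OperationContext.Current.
    string sessionId = context?.SessionId;
    return string.Format("You entered: {0} (session: {1})", value, sessionId);
}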

Summary

“Async all the things” - even legacy WCF services can be turned into async, task-based APIs without breaking any clients. Check out the sample code on GitHub.

Hope this helps!

Links:

Albert Weinert: Microsoft Connect(); 2018 Public Live Streaming

On December 4, the virtual conference about Visual Studio and Azure takes place. It starts at 5:30 p.m., and Microsoft Connect(); will be streamed live on the web.

In parallel to the Microsoft Connect(); keynotes, we will watch, chat and comment live on my Twitch channel.

So drop by on December 4, 2018 from 5:30 p.m. and we will watch together.

Stefan Henneken: IEC 61131-3: The ‘Decorator’ Pattern

With the help of the decorator pattern, new function blocks can be developed on the basis of existing function blocks without overstraining the principle of inheritance. In the following post I will introduce the use of this pattern with a simple example.

The example calculates the price (GetPrice()) of different pizzas. Even though this example has no direct connection to automation technology, it describes the basic principle of the decorator pattern quite well. The pizzas could just as well be replaced by pumps, cylinders or axes.

First variant: the ‘super function block’

In the example there are two base types: American style and Italian style. Each of these base types can be topped with the side orders salami (Salami), cheese (Cheese) and broccoli (Broccoli).

The most obvious approach might be to put the entire functionality into a single function block.

Properties define the composition of the pizza, while a method performs the desired calculation.

Picture01

Furthermore, FB_init() is extended so that the ingredients are already defined when the instances are declared. This makes it quite easy to create different pizza variants.

fbAmericanSalamiPizza : FB_Pizza(ePizzaStyle := E_PizzaStyle.eAmerican,
                                 bHasBroccoli := FALSE,
                                 bHasCheese := TRUE,
                                 bHasSalami := TRUE);
fbItalianVegetarianPizza : FB_Pizza(ePizzaStyle := E_PizzaStyle.eItalian,
                                    bHasBroccoli := TRUE,
                                    bHasCheese := FALSE,
                                    bHasSalami := FALSE);

The method GetPrice() evaluates this information and returns the requested value:

METHOD PUBLIC GetPrice : LREAL

IF (THIS^.eStyle = E_PizzaStyle.eItalian) THEN
  GetPrice := 4.5;
ELSIF (THIS^.eStyle = E_PizzaStyle.eAmerican) THEN
  GetPrice := 4.2;
ELSE
  GetPrice := 0;
  RETURN;
END_IF
IF (THIS^.bBroccoli) THEN
  GetPrice := GetPrice + 0.8;
END_IF
IF (THIS^.bCheese) THEN
  GetPrice := GetPrice + 1.1;
END_IF
IF (THIS^.bSalami) THEN
  GetPrice := GetPrice + 1.4;
END_IF

Actually quite a solid solution. But, as so often in software development, the requirements change. The introduction of new pizzas may require additional ingredients. The function block FB_Pizza grows continuously, and so does its complexity. The fact that everything resides in a single function block also makes it difficult to distribute the development across several people.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Second variant: ‘inheritance hell’

In the second approach, a separate function block is created for each pizza variant. In addition, an interface (I_Pizza) defines all common properties and methods. Since the price should be determined for every pizza, the interface contains the method GetPrice().

This interface is implemented by the two function blocks FB_PizzaAmericanStyle and FB_PizzaItalianStyle. The function blocks thus replace the enumeration E_PizzaStyle and are the basis for all further pizzas. For these two FBs, the method GetPrice() returns the respective base price.

Building on this, the individual pizzas with their different ingredients are defined. The Pizza Margherita, for example, additionally contains cheese (Cheese) and tomatoes (Tomato). The Pizza Salami also needs salami, so the FB for the Pizza Salami inherits from the FB of the Pizza Margherita.

The method GetPrice() always uses the SUPER pointer to access the underlying method and adds the amount for its own ingredients, provided they are present.

METHOD PUBLIC GetPrice : LREAL

GetPrice := SUPER^.GetPrice();
IF (THIS^.bSalami) THEN
  GetPrice := GetPrice + 1.4;
END_IF

This results in an inheritance hierarchy that reflects the dependencies of the individual pizza variants.

Picture02

At first glance, this solution also looks very elegant. One advantage is the common interface. Each instance of one of the function blocks can be assigned to an interface pointer of type I_Pizza. This is helpful with methods, for example, because any pizza variant can be passed via a parameter of type I_Pizza.

Different pizzas can also be stored in an array and the total price can be calculated:

PROGRAM MAIN
VAR
  fbItalianPizzaPiccante     : FB_ItalianPizzaPiccante;
  fbItalianPizzaMozzarella   : FB_ItalianPizzaMozzarella;
  fbItalianPizzaSalami       : FB_ItalianPizzaSalami;
  fbAmericanPizzaCalifornia  : FB_AmericanPizzaCalifornia;
  fbAmericanPizzaNewYork     : FB_AmericanPizzaNewYork;
  aPizza                     : ARRAY [1..5] OF I_Pizza;
  nIndex                     : INT;
  lrPrice                    : LREAL;
END_VAR

aPizza[1] := fbItalianPizzaPiccante;
aPizza[2] := fbItalianPizzaMozzarella;
aPizza[3] := fbItalianPizzaSalami;
aPizza[4] := fbAmericanPizzaCalifornia;
aPizza[5] := fbAmericanPizzaNewYork;

lrPrice := 0;
FOR nIndex := 1 TO 5 DO
  lrPrice := lrPrice + aPizza[nIndex].GetPrice();
END_FOR

Nevertheless, this approach has several disadvantages.

What happens if the menu is adjusted and the composition of a pizza changes as a result? Suppose the Pizza Salami should also get mushrooms (Mushroom); then the Pizza Piccante inherits the mushrooms as well, although this is not desired. The entire inheritance hierarchy has to be adapted. The fixed relationships created by inheritance make the solution inflexible.

How does the system cope with individual customer requests, such as double cheese or ingredients that are actually not intended for a particular pizza?

If the function blocks are located in a library, these adaptations would only be possible to a limited extent.

Above all, there is the risk that existing applications which were compiled with an older version of the library no longer behave correctly.

Sample 2 (TwinCAT 3.1.4022) on GitHub

Third variant: the decorator pattern

Some design principles of object-oriented software development are helpful for optimizing the solution. Adhering to these principles should help to keep the software structure clean.

Open Closed Principle

Open for extensions: This means that the original functionality of a module can be changed by using extension modules. The extension modules only contain the adaptations of the original functionality.

Closed for changes: This means that no changes to the module itself are necessary in order to extend it. The module provides defined extension points through which the extension modules can be attached.

Identify those aspects that change and separate them from those that remain constant

How should the function blocks be divided so that extensions are necessary in as few places as possible?

So far, the two pizza base types American style and Italian style have been mapped by function blocks. So why not define the ingredients as function blocks as well? This would fulfill the open closed principle. Our base types and ingredients are constant and therefore closed for changes. However, we have to make sure that each base type can be extended with arbitrary ingredients. The solution would thus be open for extensions.

The decorator pattern does not rely on inheritance when extending behavior. Rather, each side order can also be understood as a wrapper. This wrapper wraps around an already existing dish. To make this possible, the side orders also implement the interface I_Pizza. Furthermore, each side order contains an interface pointer to the underlying wrapper.

The pizza base type and the side orders are thereby nested inside each other. If the method GetPrice() is called on the outermost wrapper, it delegates the call to the underlying wrapper and then adds its own price. This continues until the call chain reaches the pizza base type, which returns the base price.

Picture03

The innermost wrapper returns its base price:

METHOD GetPrice : LREAL

GetPrice := 4.5;

Each further wrapper (decorator) adds the desired surcharge to the price of the underlying wrapper:

METHOD GetPrice : LREAL

IF (THIS^.ipSideOrder <> 0) THEN
  GetPrice := THIS^.ipSideOrder.GetPrice() + 0.9;
END_IF

So that the respective underlying wrapper can be passed to the block, the method FB_init() is extended by an additional parameter of type I_Pizza. The desired ingredients are thus already defined when the FB instances are declared.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains	: BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode	: BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  ipSideOrder	: I_Pizza;
END_VAR

THIS^.ipSideOrder := ipSideOrder;

To make it easier to see how the individual wrappers are traversed, I have added the method GetDescription(). Each wrapper extends the existing string with a short description.

Picture04

In the following example, the composition of the pizza is specified directly in the declaration:

PROGRAM MAIN
VAR
  // Italian Pizza Margherita (via declaration)
  fbItalianStyle : FB_PizzaItalianStyle;
  fbTomato       : FB_DecoratorTomato(fbItalianStyle);
  fbCheese       : FB_DecoratorCheese(fbTomato);
  ipPizza        : I_Pizza := fbCheese;

  fPrice         : LREAL;
  sDescription   : STRING;	
END_VAR

fPrice := ipPizza.GetPrice(); // output: 6.5
sDescription := ipPizza.GetDescription(); // output: 'Pizza Italian Style: - Tomato - Cheese'

There is no fixed coupling between the function blocks. New pizza types can be defined without any changes to existing function blocks. The inheritance hierarchy does not define the dependencies between the individual pizza variants.

Picture05

In addition, the interface pointer can also be passed via a property. This makes it possible to compose or change the pizza at runtime as well.

PROGRAM MAIN
VAR
  // Italian Pizza Margherita (via runtime)
  fbItalianStyle  : FB_PizzaItalianStyle;
  fbTomato        : FB_DecoratorTomato(0);
  fbCheese        : FB_DecoratorCheese(0);
  ipPizza         : I_Pizza;

  bCreate         : BOOL;
  fPrice          : LREAL;
  sDescription    : STRING;
END_VAR

IF (bCreate) THEN
  bCreate := FALSE;
  fbTomato.ipDecorator := fbItalianStyle;
  fbCheese.ipDecorator := fbTomato;
  ipPizza := fbCheese;
END_IF
IF (ipPizza <> 0) THEN
  fPrice := ipPizza.GetPrice(); // output: 6.5
  sDescription := ipPizza.GetDescription(); // output: 'Pizza Italian Style: - Tomato - Cheese'
END_IF

Special features can also be built into each function block. These can be additional properties, but also additional methods.

The function block for the tomatoes should optionally also be offered with organic tomatoes. One possibility is of course to create a new function block. This is necessary if the existing function block cannot be extended (e.g. because it is located in a library). If this requirement is already known before the first release, however, it can be taken into account right away.

The function block gets an additional parameter in the method FB_init().

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains		: BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode		: BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  ipSideOrder		: I_Pizza;
  bWholefoodProduct	: BOOL;
END_VAR

THIS^.ipSideOrder := ipSideOrder;
THIS^.bWholefood := bWholefoodProduct;

This parameter could also be made changeable at runtime via a property. When calculating the price, the option is taken into account as desired.

METHOD GetPrice : LREAL

IF (THIS^.ipSideOrder <> 0) THEN
  GetPrice := THIS^.ipSideOrder.GetPrice() + 0.9;
  IF (THIS^.bWholefood) THEN
    GetPrice := GetPrice + 0.3;
  END_IF
END_IF

A further optimization can be the introduction of a base FB (FB_Decorator) for all decorator FBs.

Picture06

Sample 3 (TwinCAT 3.1.4022) on GitHub

Definition

In the book “Design Patterns: Elements of Reusable Object-Oriented Software” by Gamma, Helm, Johnson and Vlissides, this is expressed as follows:

“The decorator pattern is a flexible alternative to subclassing […] for extending a class with additional functionality.”

Implementation

The crucial point of the decorator pattern is that inheritance is not used to extend a function block. If behavior is to be added, function blocks are nested inside each other; they are decorated.

The central component is the interface IComponent. This interface is implemented by the function blocks that are to be decorated (Component).

The function blocks that serve as decorators (Decorator) also implement the interface IComponent. In addition, however, they contain a reference (interface pointer component) to another decorator (Decorator) or to the base function block (Component).

The outermost decorator thus represents the base function block, extended by the functions of the decorators. The method Operation() is passed through all function blocks, and each function block may add arbitrary functionality.

This approach has several advantages:

  • The original function block (Component) knows nothing about the additions (Decorator). It does not need to be extended or adapted.
  • The decorators are independent of each other and can also be used in other applications.
  • The decorators can be combined with each other as desired.
  • A function block can thus change its behavior declaratively or at runtime.
  • A client that accesses the function block via the interface IComponent can handle a decorated function block in the same way. The client does not have to be adapted; it becomes reusable.

But some disadvantages also have to be considered:

  • The number of function blocks can increase significantly, which makes getting familiar with an existing library more time-consuming.
  • The client cannot tell whether it is dealing with the original base component (when accessing it via the interface IComponent) or whether it has been extended by decorators. This can be an advantage (see above), but it can also lead to problems.
  • The long call chains make troubleshooting more difficult. They can also have a negative effect on the performance of the application.

UML diagram

Picture07

Applied to the example above, this results in the following mapping:

Client: MAIN
IComponent: I_Pizza
Operation(): GetPrice(), GetDescription()
Decorator: FB_DecoratorCheese, FB_DecoratorSalami, FB_DecoratorTomato
AddedBehavior(): bWholefoodProduct
component: ipSideOrder
Component: FB_PizzaItalianStyle, FB_PizzaAmericanStyle

Application examples

The decorator pattern is very often found in classes that are responsible for processing data streams. This applies to the Java standard library as well as to the Microsoft .NET Framework.

The .NET Framework, for example, contains the class System.IO.Stream. Among others, System.IO.FileStream and System.IO.MemoryStream inherit from this class. Both subclasses, however, also contain an instance of Stream. Many methods and properties of FileStream and MemoryStream access this instance. One could also say: the subclasses FileStream and MemoryStream decorate Stream.

Further use cases are libraries for building graphical user interfaces, such as WPF from Microsoft and Swing for Java.

A text box (TextBox) and a border (Border) are nested inside each other; the text box is decorated with the border. The border (with the text box) is then passed to the page.

Golo Roden: Babel: Runtime, Polyfill … which one when?

If you want to simulate an ES2015 environment with Babel, you need either @babel/polyfill or @babel/runtime. The polyfill is suited for applications, the runtime for modules. To prevent code from the runtime being embedded into every file, this can be optimized with @babel/plugin-transform-runtime.

Jürgen Gutsch: Removing Disqus and adding GitHub Issue Comments

I recently realized that I have been running this new blog for almost exactly three years now and have written almost 100 posts so far. Running this blog is completely different compared to the previous one, which was based on the Community Server on ASP.NET Zone. I now write markdown files which I commit and push to GitHub. I also switched the language. From January 2007 to November 2015 I wrote in German, and since I started this GitHub based blog I have written exclusively in English, which is a great experience and improves my English writing and speaking skills a lot.

This blog is based on Pretzel, a .NET based Jekyll clone that creates a static website. Pretzel, like Jekyll, is optimized for blogs or similarly structured web sites. Both systems take markdown files and turn them into static HTML pages using the Liquid template engine. This works pretty well and I really like pushing markdown files to the GitHub repo and getting an updated blog on Azure a few seconds later. This is continuous delivery using GitHub and Azure for blog posts. It is amazing. And I really love blogging this way.

Actually the blog is successful from my perspective. Around 6k visits per week is a good number, I guess.

Because the blog is static HTML, in the end I need to extend it with software-as-a-service solutions to create dynamic content or to track the success of the blog.

So I added Disqus to enable comments on this blog. Disqus was quite popular for this kind of blog at that time, and I also got some traffic from Disqus. Anyway, this service has now started to show advertisements on my page, and advertisements that are not really related to the contents of my page.

I also added a small Google AdSense banner to the blog, but this is placed at the end of the page and doesn't really annoy you as a reader, I hope. I put some text above this banner to ask you as a reader to support my blog if you like it. A click on that banner doesn't really cost you time or money.

I don't get anything out of the annoying off-topic ads that Disqus shows here, except a free tool to collect blog post comments and store them somewhere out in the cloud. I don't really "own" the comments, which is the other downside.

Sure, Disqus is a free service and someone needs to pay for it, but the ownership of the contents is a problem, as is the fact that I cannot influence the contents of the ads displayed on my blog:

Owning the comments

The comments are important contents that you provide to me, to the other readers and to the entire developer community. But they are completely separated from the blog posts they relate to. They are stored in a different cloud. Actually, I have no idea where Disqus stores the comments.

How do I own the comments?

My idea was to use GitHub issues of the blog repository to collect the comments. The first comment on a blog post should create a GitHub issue, and every further comment is a comment on this issue. With this solution the actual posts and the comments are in the same repository, they can be linked together, and I own these comments a little more than before.

I already asked on Twitter about that and got some positive feedback.

Evaluating a solution

There are already some JavaScript snippets available which can be used to add GitHub issues as comments. The GitHub API is well documented and it should be easy to do this.

I already evaluated a solution to use and decided to go with Utterances:

"A lightweight comments widget built on GitHub issues"

Utterances was built by Jeremy Danyow. I stumbled upon it in Jeremy's blog post about Using GitHub Issues for Blog Comments. Jeremy works as a Senior Software Engineer at Microsoft, he is a member of the Aurelia core team and also created gist.run.

As far as I understood, Utterances is a lightweight version of Microsoft's comment system used with the new docs on https://docs.microsoft.com. Microsoft also stores the comments as issues on GitHub, which is nice because they can create real issues out of them in case there are real problems with the docs, etc.

More Links about it: https://utteranc.es/ and https://github.com/utterance.

At the end I just need to add a small HTML snippet to my blog:

<script src="https://utteranc.es/client.js"
        repo="juergengutsch/blog"
        issue-term="title"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>

This script will search for issues with the same title as the current page. If there's no such issue, it will create a new one. If there is such an issue, it will create a comment on that issue. This script also supports markdown.

Open questions so far

Some important open questions came up while evaluating the solution:

  1. Is it possible to import all the Disqus comments to GitHub Issues?
    • This is what I need to figure out now.
    • It would be bad not to have the existing comments available in the new system.
  2. What if Jeremy's services are not available anymore?

The second question is easy to solve. As I wrote, I will just host the stuff on my own in case Jeremy shuts down his services. The first question is much more essential. It would be cool to get the comments in some readable format. I would then write a small script or a small console app to import the comments as GitHub issues.

Exporting the Disqus comments to GitHub Issues

Fortunately there is an export feature on Disqus, in the administration settings of the site:

After clicking "Export Comment" the export gets scheduled and you'll get an email with the download link to the export.

The exported file is a GZ-compressed XML file including all threads and posts. A thread in this case is an entry per blog post where the comment form was visible. A thread doesn't necessarily contain comments. Posts are comments related to a thread. Posts contain the actual comment as a message, author information, and relations to the thread and to the parent post if it is a reply to a comment.

This is pretty clean XML and it should be easy to import it automatically into GitHub issues. Now I needed to figure out how the GitHub API works and to write a small C# script to import all the comments.

This XML also includes the authors' names and usernames. This is cool to know, but it doesn't have any value for me anymore, because Disqus users are not GitHub users. I can't post the comments on behalf of real GitHub users. So every migrated comment will be posted by myself, and I need to mark the comment as originally coming from another reader.

So it will be something like this:

var message = $@"Comment written by **{post.Author}** on **{post.CreatedAt}**

{post.Message}
";

Importing the comments

I decided to write a small console app and to do some initial tests on a test repo. I extracted the exported data and moved it into the .NET Core console app folder and tried to play around with it.

First I read all threads out of the file and then the posts afterwards. I only selected the threads which are not marked as closed and not marked as deleted. I also checked the blog post URL of the thread, because sometimes the thread was created by a local test run, sometimes I changed the publication date of a post afterwards (which also changed the URL), and sometimes the thread was created by a post that was displayed via a proxying page. I tried to filter all that stuff out. The URL needs to start with http://asp.net-hacker.rocks or https://asp.net-hacker.rocks to be valid. Also, the posts shouldn't be marked as deleted or marked as spam.
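
The URL check itself isn't shown in the post; a minimal sketch of such a helper (assuming it only validates the URL prefix) could look like this:

// Hypothetical sketch of the URL check used by the thread filtering:
// only threads that point to the real blog URL are migrated.
private static Task<bool> CheckThreadUrl(string url)
{
    var isValid = url.StartsWith("http://asp.net-hacker.rocks")
                  || url.StartsWith("https://asp.net-hacker.rocks");

    // The real implementation could additionally send an HTTP request
    // to verify that the blog post still exists.
    return Task.FromResult(isValid);
}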

Then I assigned the posts to the specific threads using the provided thread id and ordered the posts by date. This breaks up the dialogues of the Disqus threads, but should be OK for the first step.

Then I created the actual issue, posted it, and posted the assigned comments to the new issue.

That's it.

Reading the XML file is easy using the XmlDocument class, which is also available in .NET Core:

var doc = new XmlDocument();
doc.Load(path);
var nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace(String.Empty, "http://disqus.com");
nsmgr.AddNamespace("def", "http://disqus.com");
nsmgr.AddNamespace("dsq", "http://disqus.com/disqus-internals");

IEnumerable<Thread> threads = await FindThreads(doc, nsmgr);
IEnumerable<Post> posts = FindPosts(doc, nsmgr);

Console.WriteLine($"{threads.Count()} valid threads found");
Console.WriteLine($"{posts.Count()} valid posts found");

I need to use the XmlNamespaceManager here to query tags and attributes in the Disqus namespaces. The XmlDocument as well as the XmlNamespaceManager then need to be passed into the read methods. The two find methods read the threads and posts out of the XmlDocument.
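
The Thread and Post types used in these snippets are small model classes that aren't shown in the post; the following is only a sketch of what they might look like, derived from how they are used:

// Hypothetical model classes, derived from their usage in the snippets.
public class Thread
{
    public Thread(long id) { Id = id; }

    public long Id { get; }
    public string Title { get; set; }
    public string Url { get; set; }
    public DateTime CreatedAt { get; set; }
    public List<Post> Posts { get; set; } = new List<Post>();
}

public class Post
{
    public long ThreadId { get; set; }
    public string Author { get; set; }
    public DateTime CreatedAt { get; set; }
    public string Message { get; set; }
}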

In the next snippet I show the code to read the threads:

private static async Task<IEnumerable<Thread>> FindThreads(XmlDocument doc, XmlNamespaceManager nsmgr)
{
    var xthreads = doc.DocumentElement.SelectNodes("def:thread", nsmgr);

    var threads = new List<Thread>();
    var i = 0;
    foreach (XmlNode xthread in xthreads)
    {
        i++;

        long threadId = xthread.AttributeValue<long>(0);
        var isDeleted = xthread["isDeleted"].NodeValue<bool>();
        var isClosed = xthread["isClosed"].NodeValue<bool>();
        var url = xthread["link"].NodeValue();
        var isValid = await CheckThreadUrl(url);

        Console.WriteLine($"{i:###} Found thread ({threadId}) '{xthread["title"].NodeValue()}'");

        if (isDeleted)
        {
            Console.WriteLine($"{i:###} Thread ({threadId}) was deleted.");
            continue;
        }
        if (isClosed)
        {
            Console.WriteLine($"{i:###} Thread ({threadId}) was closed.");
            continue;
        }
        if (!isValid)
        {
            Console.WriteLine($"{i:###} the url Thread ({threadId}) is not valid: {url}");
            continue;
        }

        Console.WriteLine($"{i:###} Thread ({threadId}) is valid");
        threads.Add(new Thread(threadId)
        {
            Title = xthread["title"].NodeValue(),
            Url = url,
            CreatedAt = xthread["createdAt"].NodeValue<DateTime>()

        });
    }

    return threads;
}

I think there's nothing magic in it. Even assigning the posts to the threads is just some LINQ code.
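
That LINQ code isn't shown in the post; assuming the model classes sketched above, it could be as simple as this:

// Hypothetical sketch: attach the posts to their thread via the thread id
// and order them by creation date (this flattens Disqus reply hierarchies).
foreach (var thread in threads)
{
    thread.Posts = posts
        .Where(p => p.ThreadId == thread.Id)
        .OrderBy(p => p.CreatedAt)
        .ToList();
}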

To create the actual issues and comments, I use the Octokit.NET library which is available on NuGet and GitHub.

dotnet add package Octokit

This library is quite simple to use and well documented. You have the choice between basic authentication and token authentication to connect to GitHub. I chose token authentication, which is the proposed way to connect. To get the token you need to go to the settings of your GitHub account, choose a personal access token and specify the rights for the token. The basic rights to contribute to the specific repository are enough in this case:

private static async Task PostIssuesToGitHub(IEnumerable<Thread> threads)
{
    var client = new GitHubClient(new ProductHeaderValue("DisqusToGithubIssues"));
    var tokenAuth = new Credentials("secret personal token from github");
    client.Credentials = tokenAuth;

    var issues = await client.Issue.GetAllForRepository(repoOwner, repoName);
    foreach (var thread in threads)
    {
        if (issues.Any(x => !x.ClosedAt.HasValue && x.Title.Equals(thread.Title)))
        {
            continue;
        }

        var newIssue = new NewIssue(thread.Title);
        newIssue.Body = $@"Written on {thread.CreatedAt} 

URL: {thread.Url}
";

        var issue = await client.Issue.Create(repoOwner, repoName, newIssue);
        Console.WriteLine($"New issue (#{issue.Number}) created: {issue.Url}");
        await Task.Delay(1000 * 5);

        foreach (var post in thread.Posts)
        {
            var message = $@"Comment written by **{post.Author}** on **{post.CreatedAt}**

{post.Message}
";

            var comment = await client.Issue.Comment.Create(repoOwner, repoName, issue.Number, message);
            Console.WriteLine($"New comment by {post.Author} at {post.CreatedAt}");
            await Task.Delay(1000 * 5);
        }
    }
}

This method gets the list of Disqus threads, creates the GitHub client and inserts one thread after another. I also read the existing issues from GitHub in case I need to run the migration twice because of an error. After an issue is created, I only need to create the comments on that issue.

After I started that code, the console app starts to add issues and comments to GitHub:

The comments are set as expected:

Unfortunately the import breaks after a while with a weird exception.

Octokit.AbuseException

Unfortunately that run didn't finish. After the first few issues were entered I got an exception like this.

Octokit.AbuseException: 'You have triggered an abuse detection mechanism and have been temporarily blocked from content creation. Please retry your request again later.'

This Exception happens because I reached the creation rate limit (user.creation_rate_limit_exceeded). This limit is set by GitHub on the public API. It is not allowed to do more than 5000 requests per hour: https://developer.github.com/v3/#rate-limiting

You can see such security related events in the security tab of your GitHub account settings.

There is no real solution to this problem, except to add more checks and fallbacks to the migration code. I check which issues already exist and migrate only the ones that don't exist yet. I also added a five-second delay between each request to GitHub. This increases the migration time, but it meant I only had to start the migration twice. Without the delay I got the exception more often during the tests.

Using Utterances

Once the issues are migrated to GitHub, I need to add Utterances to the blog. First you need to install the Utterances app on your repository. The repository needs to be public, and issues obviously need to be enabled.

On https://utteranc.es/ there is a kind of a configuration wizard that creates the HTML snippet for you, which you need to add to your blog. In my case it is the small snippet I already showed previously:

<script src="https://utteranc.es/client.js"
        repo="juergengutsch/blog"
        issue-term="title"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>

This loads the Utterances client script and configures my blog repository and the way the issues will be found in my repository. You have different options for the issue-term. Since I set the blog post title as the GitHub issue title, I need to tell Utterances to look at the title. The theme I want to use here is the GitHub light theme; the dark theme doesn't fit the blog style. I was also able to adjust the styling by overriding the following two CSS classes:

.utterances {}
.utterances-frame {}

The result

In the end it works pretty well. After the migration and after I changed the relevant blog template, I tried it locally using the pretzel taste command.

If you want to add a comment as a reader, you need to log on with your GitHub account and you need to grant the Utterances app permission to post to my repo with your name.

Now every new comment will be stored in the repository of my blog. All the contents are in the same repository. There will be an issue per post, so it is almost directly linked.

What do you think? Do you like it? Tell me your opinion :-)

BTW: You will find the migration tool on GitHub.

Stefan Henneken: IEC 61131-3: The ‘State’ Pattern

State machines are used regularly, especially in automation technology. The state pattern provides an object-oriented approach that offers important advantages especially for larger state machines.

Most developers have already implemented state machines in IEC 61131-3: some consciously, others perhaps unconsciously. The following is a simple example implemented with three different approaches:

  1. CASE statement
  2. State transitions in methods
  3. The ‘state’ pattern

Our example describes a vending machine that dispenses a product after inserting a coin and pressing a button. The number of products is limited. If a coin is inserted and the button is pressed although the machine is empty, the coin is returned.

The vending machine is mapped by the function block FB_Machine. Inputs accept the events, while the current state and the number of products still available are read out via outputs. The declaration of the FB defines the maximum number of products.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton : BOOL;
  bInsertCoin : BOOL;
  bTakeProduct : BOOL;
  bTakeCoin : BOOL;
END_VAR
VAR_OUTPUT
  eState : E_States;
  nProducts : UINT;
END_VAR

UML state diagram

State machines can be very well represented as a UML state diagram.

Picture01

A UML state diagram describes an automaton that is in exactly one state of a finite set of states at any given time.

The states in a UML state diagram are represented by rectangles with rounded corners (vertices) (in other diagram forms also often as a circle). States can execute activities, e.g. when entering the state (entry) or when leaving the state (exit). With entry / n = n – 1, the variable n is decremented when entering the state.

The arrows between the states symbolize possible state transitions. They are labeled with the events that lead to the respective state transition. A state transition occurs when the event occurs and an optional condition (guard) is fulfilled. Conditions are specified in square brackets. This allows decision trees to be implemented.

First variant: CASE statement

You will often find CASE statements used to implement state machines. The CASE statement queries every possible state. Within each branch, the conditions for the individual state are checked. If a condition is fulfilled, the action is executed and the state variable is updated. To increase readability, the state variable is often declared as an ENUM.

TYPE E_States :
(
  eWaiting := 0,
  eHasCoin,
  eProductEjected,
  eCoinEjected
);
END_TYPE

Thus, the first variant of the state machine looks like this:

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR

rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);
 
CASE eState OF
  E_States.eWaiting:
    IF (rtrigButton.Q) THEN
      ; // keep in the state
    END_IF
    IF (rtrigInsertCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
      eState := E_States.eHasCoin;
    END_IF
 
  E_States.eHasCoin:
    IF (rtrigButton.Q) THEN
      IF (nProducts > 0) THEN
        nProducts := nProducts - 1;
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
        eState := E_States.eProductEjected;
      ELSE
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
        eState := E_States.eCoinEjected;
      END_IF
    END_IF
 
  E_States.eProductEjected:
    IF (rtrigTakeProduct.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
      eState := E_States.eWaiting;
    END_IF
 
  E_States.eCoinEjected:
    IF (rtrigTakeCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
      eState := E_States.eWaiting;
    END_IF
 
  ELSE
    ADSLOGSTR(ADSLOG_MSGTYPE_ERROR, 'Invalid state', '');
    eState := E_States.eWaiting;
END_CASE

A quick test shows that the FB does what it is supposed to do:

Picture02

However, it quickly becomes clear that larger applications cannot be implemented in this way. The clarity is completely lost after a few states.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Second variant: State transitions in methods

The problem can be reduced if all state transitions are implemented as methods.

Picture03

If a particular event occurs, the respective method is called.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR

rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);
 
IF (rtrigButton.Q) THEN
  THIS^.PressButton();
END_IF
IF (rtrigInsertCoin.Q) THEN
  THIS^.InsertCoin();
END_IF
IF (rtrigTakeProduct.Q) THEN
  THIS^.CustomerTakesProduct();
END_IF
IF (rtrigTakeCoin.Q) THEN
  THIS^.CustomerTakesCoin();
END_IF

Depending on the current state, the desired state transition is executed in the methods and the state variable is adapted:

METHOD INTERNAL CustomerTakesCoin : BOOL
IF (THIS^.eState = E_States.eCoinEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
  eState := E_States.eWaiting;
END_IF
 
METHOD INTERNAL CustomerTakesProduct : BOOL
IF (THIS^.eState = E_States.eProductEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
  eState := E_States.eWaiting;
END_IF
 
METHOD INTERNAL InsertCoin : BOOL
IF (THIS^.eState = E_States.eWaiting) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has inserted a coin.', '');
  THIS^.eState := E_States.eHasCoin;
END_IF
 
METHOD INTERNAL PressButton : BOOL
IF (THIS^.eState = E_States.eHasCoin) THEN
  IF (THIS^.nProducts > 0) THEN
    THIS^.nProducts := THIS^.nProducts - 1;
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
    THIS^.eState := E_States.eProductEjected;
  ELSE                
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
    THIS^.eState := E_States.eCoinEjected;
  END_IF
END_IF

This approach also works perfectly. However, the state machine still lives in a single function block. Although the state transitions are moved to methods, this is still a structured programming approach and ignores the possibilities of object orientation. As a result, the source code remains difficult to extend and hard to read.

Sample 2 (TwinCAT 3.1.4022) on GitHub

Third variant: The state pattern

Some OO design principles are helpful for the implementation of the State Pattern:

Cohesion (= degree to which a class has a single concentrated purpose) and delegation

Encapsulate each responsibility into a separate object and delegate calls to these objects. One class, one responsibility!

Identify those aspects that change and separate them from those that remain constant

How are the objects split so that extensions to the state machine are necessary in as few places as possible? Previously, FB_Machine had to be adapted for each extension. This is a major disadvantage, especially for large state machines on which several developers are working.

Let’s look again at the methods CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton(). They all have a similar structure. In IF statements, the current state is queried and the desired actions are executed. If necessary, the current state is also adjusted. However, this approach does not scale. Each time a new state is added, several methods have to be adjusted.

The state pattern distributes the state across several objects. Each possible state is represented by an FB. These state FBs contain the entire behavior for the respective state. Thus, a new state can be introduced without having to change the source code of the existing blocks.

Every action (CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton()) can be executed in any state. Thus, all state FBs have the same interface. For this reason, a common interface is introduced for all state FBs:

Picture04

FB_Machine aggregates this interface (line 9), which delegates the method calls to the respective status FBs (lines 30, 34, 38 and 42).

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton            : BOOL;
  bInsertCoin        : BOOL;
  bTakeProduct       : BOOL;
  bTakeCoin          : BOOL;
END_VAR
VAR_OUTPUT
  ipState            : I_State := fbWaitingState;
  nProducts          : UINT;
END_VAR
VAR
  fbCoinEjectedState    : FB_CoinEjectedState(THIS);
  fbHasCoinState        : FB_HasCoinState(THIS);
  fbProductEjectedState : FB_ProductEjectedState(THIS);
  fbWaitingState        : FB_WaitingState(THIS);
 
  rtrigButton           : R_TRIG;
  rtrigInsertCoin       : R_TRIG;
  rtrigTakeProduct      : R_TRIG;
  rtrigTakeCoin         : R_TRIG;
END_VAR
 
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);
 
IF (rtrigButton.Q) THEN
  ipState.PressButton();
END_IF
 
IF (rtrigInsertCoin.Q) THEN
  ipState.InsertCoin();
END_IF
 
IF (rtrigTakeProduct.Q) THEN
  ipState.CustomerTakesProduct();
END_IF
 
IF (rtrigTakeCoin.Q) THEN
  ipState.CustomerTakesCoin();
END_IF

But how can the state be changed in the respective methods of the individual state FBs?

First of all, FB_Machine declares an instance of each state FB. Via FB_init(), a pointer to FB_Machine is passed to each state FB (lines 13 – 16).

Each instance can be read via a property of FB_Machine; each property returns an interface pointer to I_State.

Picture05

Furthermore, FB_Machine receives a method for setting the state,

METHOD INTERNAL SetState : BOOL
VAR_INPUT
  newState : I_State;
END_VAR
THIS^.ipState := newState;

and a method for changing the current number of products:

METHOD INTERNAL SetProducts : BOOL
VAR_INPUT
  newProducts : UINT;
END_VAR
THIS^.nProducts := newProducts;

FB_init() receives another input variable, so that the maximum number of products can be specified in the declaration.

Since the user of the state machine only needs FB_Machine and I_State, the four properties (CoinEjectedState, HasCoinState, ProductEjectedState and WaitingState), the two methods (SetState() and SetProducts()) and the four state FBs (FB_CoinEjectedState(), FB_HasCoinState(), FB_ProductEjectedState() and FB_WaitingState()) were declared as INTERNAL. If the FBs of the state machine are in a compiled library, they are therefore not visible from the outside and not present in the library repository. The same applies to elements that are declared as PRIVATE. FBs, interfaces, methods and properties that are only used within a library can thus be hidden from the user of the library.

The test of the state machine is the same in all three variants:

PROGRAM MAIN
VAR
  fbMachine      : FB_Machine(3);
  sState         : STRING;
  bButton        : BOOL;
  bInsertCoin    : BOOL;
  bTakeProduct   : BOOL;
  bTakeCoin      : BOOL;
END_VAR
 
fbMachine(bButton := bButton,
          bInsertCoin := bInsertCoin,
          bTakeProduct := bTakeProduct,
          bTakeCoin := bTakeCoin);
sState := fbMachine.ipState.Description;
 
bButton := FALSE;
bInsertCoin := FALSE;
bTakeProduct := FALSE;
bTakeCoin := FALSE;

The statement in line 15 is intended to simplify testing, since a readable text is displayed for each state.

Sample 3 (TwinCAT 3.1.4022) on GitHub

This variant seems quite complex at first sight, since considerably more FBs are needed. But the distribution of responsibilities to single FBs makes this approach very flexible and much more robust for extensions.

This becomes clear when the individual status FBs become very extensive. For example, a state machine could control a complex process in which each status FB contains further subprocesses. A division into several FBs makes such a program maintainable in the first place, especially if several developers are involved.

For very small state machines, the use of the state pattern is not necessarily the best choice. In those cases, I also like to fall back on the solution with the CASE statement.

Alternatively, IEC 61131-3 offers a further option for implementing state machines with the Sequential Function Chart (SFC). But that is another story.

Definition

In the book “Design patterns: elements of reusable object-oriented software” by Gamma, Helm, Johnson and Vlissides, this is expressed as follows:

Allow an object to alter its behavior when its internal state changes. The object will appear to change its class.

Implementation

A common interface (State) is defined, which contains a method for each state transition. For each state, a class is created that implements this interface (State1, State2, …). As all states have the same interface, they are interchangeable.

Such a state object is aggregated (encapsulated) by the object whose behavior has to change depending on the state (Context). The context holds the current internal state (currentState) and encapsulates the state-dependent behavior. It delegates calls to the currently set state object.

The state changes can be performed by the specific state objects themselves. To do this, each state object requires a reference to the context (Context). The context must also provide a method for changing the state (setState()); the subsequent state is passed to setState() as a parameter. For this purpose, the context offers all possible states as properties.
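
Since the pattern itself is language-agnostic, here is a minimal sketch of this structure in C# (the names follow the generic description above, not the TwinCAT sample; in the variant described here the context would expose its state objects as properties instead of creating new instances):

public interface IState
{
    void Handle(Context context);
}

public class Context
{
    // currentState: the currently set state object
    private IState currentState;

    public Context(IState initialState)
    {
        currentState = initialState;
    }

    // setState(): called by the state objects to switch to the next state
    public void SetState(IState newState)
    {
        currentState = newState;
    }

    // the context delegates the call to the current state object
    public void Handle()
    {
        currentState.Handle(this);
    }
}

public class State1 : IState
{
    public void Handle(Context context)
    {
        // state-specific behavior, then switch the context to the next state
        context.SetState(new State2());
    }
}

public class State2 : IState
{
    public void Handle(Context context)
    {
        context.SetState(new State1());
    }
}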

UML Diagram

Picture06

Based on the example above, the following assignment results:

Context → FB_Machine
State → I_State
State1, State2, … → FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState, FB_WaitingState
Handle() → CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin(), PressButton()
GetState1, GetState2, … → CoinEjectedState, HasCoinState, ProductEjectedState, WaitingState
currentState → ipState
setState() → SetState()
context → pMachine

Application examples

A TCP communication stack is a good example of using the state pattern. Each state of a connection socket can be represented by corresponding state classes (TCPOpen, TCPClosed, TCPListen, …). Each of these classes implements the same interface (TCPState). The context (TCPConnection) contains the current state object. All actions are transferred to the respective state class via this state object. This class processes the actions and changes to a new state if necessary.

Text parsers are also state-based. For example, the meaning of a character usually depends on the previously read characters.

Jürgen Gutsch: Disabling comments on this blog until they are moved to GitHub

I'm going to remove the Disqus comments on this blog and move to GitHub issue based comments. The reason is that I don't want to have advertisements that are not related to the contents of this page. Another reason is that I want to have full control over the comments. The third reason is related to GDPR: I have no idea yet what Disqus is doing to protect the users' privacy and how the users are able to control their personal data. With the advertisements they are displaying it gets less transparent, because I don't know what the original source of the ads is and who is responsible for the users' personal data.

I removed Disqus from my blog

I'm currently migrating all the Disqus comments to GitHub issues. There will be a GitHub issue per blog post and the issue comments will then be the blog post comments. I will lose the dialogue hierarchy of the comments, but this isn't really needed. Another downside for you readers is that you will need a GitHub account to create comments. On the other hand, most of you already have one, and you don't need a Disqus account anymore to drop a comment.

To do the migration I removed Disqus first and exported all the comments. After a few days of migrating and testing I'll enable the GitHub issue comments on my blog. There will be a comment form on each blog post as usual and you won't need to go to GitHub to drop a comment.

I will write a detailed blog post about the new comment system and how I migrated it once it's done.

The new GitHub issue based comments should be available after the weekend

Norbert Eder: Canary Deployment

In Blue Green Deployment I described an approach for testing new releases in production environments before activating them. This makes it possible to assess the functioning of a release with higher confidence. However, it is only tested; how stable and performant the software runs cannot be judged. Canary deployments can help here.

Canary deployment (canary bird) takes its name from the old coal mines. As an early warning system against toxic gases, the miners placed canaries in cages. If toxic gases escaped, the canaries died and the workers could still get to safety quickly.

So how does a canary deployment work?

As with blue-green deployment, there are at least two production systems. One of the two systems (or parts of it) receives the updates. The updated part can then be tested (both automatically and manually). In addition, a previously defined share of the traffic is routed through the updated system.

Canary Deployment | Norbert Eder

Canary Deployment

By successively redirecting more traffic to the new system and putting it under load, meaningful indications of its functioning (also under load) are obtained.

An example: it is decided that after the update, 2% of the traffic is routed through the new system. If no problems occur, the share can be increased. If problems do occur, at most 2% of the users are affected, and an immediate rollback is possible.

This setup therefore provides an early warning system. We gain more confidence, and in case of problems only a fraction of the users is affected.

However, this also comes with additional infrastructure effort and increased complexity.

The post Canary Deployment first appeared on Norbert Eder.

Norbert Eder: Blue Green Deployment

Many developers now rely on the support of automated tests and thereby ensure early error detection, lower costs for fixing bugs and, ultimately, high quality. Nevertheless, errors cannot be ruled out completely.

One of the reasons for this is that the tests are usually only executed in test systems. Therefore, no statement can be made about how the software behaves in the production system. Users like to surprise us developers with unconventional inputs or an idiosyncratic way of using the software. Under certain circumstances, this can lead to inconsistent data. So what works in the development or test environment does not necessarily work in the production environment. What can be done to get a more reliable picture?

One option is blue-green deployment. Here, the production system exists twice: once as the blue line and once as the green line. Only one of the two systems is active at any time. The inactive system can be used for tests. The systems can run on different (but similar) hardware or VMs.

A new release is always deployed to the inactive system and tested there. If all tests are successful and all functions are available, the inactive system becomes the active one and vice versa. In other words: if the blue system was active and the green system inactive, then the green system received the update and became active after successful tests. Now the blue system is inactive and will receive the next upcoming update.

This, of course, offers further advantages. It is very easy to quickly roll back to the old version. In addition, a second system is available in case of failures (hardware etc.).

The additional safety, however, brings some challenges regarding infrastructure, the deployment process, and also development (e.g. dealing with schema changes to the database). The reward is higher availability and a better assessment of whether a new release works in the production system.

Building on this, a canary deployment can provide even more meaningful insights in production use.

Credit: server icon by FontAwesome / CC Attribution 4.0 International, all other icons from Microsoft PowerPoint.

The post Blue Green Deployment first appeared on Norbert Eder.

Jürgen Gutsch: Customizing ASP.​NET Core Part 10: TagHelpers

This was initially planned as the last topic of this series, because this also was the last part of the talk about customizing ASP.NET Core I did in the past. See the initial post about this series. Now I have three additional customizing topics to talk about. If you'd like to propose another topic, feel free to drop a comment in the initial post.

In this tenth part of the series I'm going to write about TagHelpers. The built-in TagHelpers are pretty useful and make the Razor code prettier and more readable. Creating custom TagHelpers will make your life much easier.

This series topics

About TagHelpers

With TagHelpers you are able to extend existing HTML tags or to create new tags that get rendered on the server side. The extensions or the new tags are not visible in the browser. TagHelpers are just a kind of shortcut to write simpler and less HTML or Razor code on the server side. TagHelpers will be interpreted on the server and will produce "real" HTML code for the browsers.

TagHelpers are not a new thing; they have been there since the first version of ASP.NET Core. Most of the built-in TagHelpers are a replacement for the old-fashioned HTML Helpers, which still exist and work in ASP.NET Core to keep the Razor views compatible.

A very basic example of extending HTML tags is the built in AnchorTagHelper:

<!-- old fashioned HtmlHelper -->
<li>@Html.ActionLink("Home", "Index", "Home")</li>
<!-- new TagHelper -->
<li><a asp-controller="Home" asp-action="Index">Home</a></li>

For HTML developers, the HtmlHelpers look kind of strange between the HTML tags. They are hard to read and interrupt the reading flow of the code. Maybe not for ASP.NET Core developers who are used to reading that kind of code, but compared to the TagHelpers they are really ugly. The TagHelpers feel more natural and more like HTML, even if they are not and even if they get rendered on the server.

Many of the HtmlHelper can be replaced with a TagHelper.

There are also some new tags built with TagHelpers: tags that do not exist in HTML but look like HTML. One example is the EnvironmentTagHelper:

<environment include="Development">
    <link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.css" />
    <link rel="stylesheet" href="~/css/site.css" />
</environment>
<environment exclude="Development">
    <link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
            asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
            asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute" />
    <link rel="stylesheet" href="~/css/site.min.css" asp-append-version="true" />
</environment>

This TagHelper renders or doesn't render the contents depending on the current runtime environment. In this case the relevant environment is Development. The first environment tag renders the contents if the current runtime environment is set to Development, and the second one renders the contents if it is not set to Development. This makes it a useful helper to render debuggable scripts or styles in Development mode and minified and optimized code in any other runtime environment.

Creating custom TagHelpers

Just as a quick example, let's assume we need to have any tag configurable as bold and colored in a specific color:

<p strong color="red">Use this area to provide additional information.</p>

This looks like pretty old-fashioned HTML out of the nineties, but it is just to demonstrate a simple TagHelper. It can be done by a TagHelper that extends any tag that has an attribute called strong:

[HtmlTargetElement(Attributes = "strong")]
public class StrongTagHelper : TagHelper
{
    public string Color { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.Attributes.RemoveAll("strong");

        output.Attributes.Add("style", "font-weight:bold;");
        if (!String.IsNullOrWhiteSpace(Color))
        {
            output.Attributes.RemoveAll("style");
            output.Attributes.Add("style", $"font-weight:bold;color:{Color};");
        }
    }
}

The first line tells the TagHelper to work on tags with a target attribute called strong. This TagHelper doesn't define its own tag, but it provides an additional attribute to specify the color. Finally, the Process method defines how to render the HTML to the output stream. In this case it adds some inline CSS styles to the current tag. It also removes the target attribute from the current tag. The color attribute won't show up in the output either, because it is bound to the Color property.

The result will look like this:

<p style="font-weight:bold;color:red;">Use this area to provide additional information.</p>

The next sample show how to define a custom tag using a TagHelper:

public class GreeterTagHelper : TagHelper
{
    [HtmlAttributeName("name")]
    public string Name { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "p";
        output.Content.SetContent($"Hello {Name}");
    }
}

This TagHelper handles a greeter tag that has a property name. In the Process method the current tag will be changed to a p tag and the new content is set to the current output.

<greeter name="Readers"></greeter>

The result is like this:

<p>Hello Readers</p>

A more complex scenario

The TagHelpers in the last section were pretty basic, just to show how TagHelpers work. The next sample is a little more complex and shows an almost real scenario. This TagHelper renders a table with a list of items. It is a generic TagHelper and shows a real reason to create your own custom TagHelpers. With this you are able to reuse an isolated piece of view code. You can, for example, wrap Bootstrap components to make them much easier to use, e.g. with just one tag instead of nesting five levels of div tags. Or you can just simplify your Razor views:

public class DataGridTagHelper : TagHelper
{
    [HtmlAttributeName("Items")]
    public IEnumerable<object> Items { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "table";
        output.Attributes.Add("class", "table");
        var props = GetItemProperties();

        TableHeader(output, props);
        TableBody(output, props);
    }

    private void TableHeader(TagHelperOutput output, PropertyInfo[] props)
    {
        output.Content.AppendHtml("<thead>");
        output.Content.AppendHtml("<tr>");
        foreach (var prop in props)
        {
            var name = GetPropertyName(prop);
            output.Content.AppendHtml($"<th>{name}</th>");
        }
        output.Content.AppendHtml("</tr>");
        output.Content.AppendHtml("</thead>");
    }

    private void TableBody(TagHelperOutput output, PropertyInfo[] props)
    {
        output.Content.AppendHtml("<tbody>");
        foreach (var item in Items)
        {
            output.Content.AppendHtml("<tr>");
            foreach (var prop in props)
            {
                var value = GetPropertyValue(prop, item);
                output.Content.AppendHtml($"<td>{value}</td>");
            }
            output.Content.AppendHtml("</tr>");
        }
        output.Content.AppendHtml("</tbody>");
    }

    private PropertyInfo[] GetItemProperties()
    {
        var listType = Items.GetType();
        Type itemType;
        if (listType.IsGenericType)
        {
            itemType = listType.GetGenericArguments().First();
            return itemType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
        }
        return new PropertyInfo[] { };
    }

    private string GetPropertyName(PropertyInfo property)
    {
        var attribute = property.GetCustomAttribute<DisplayNameAttribute>();
        if (attribute != null)
        {
            return attribute.DisplayName;
        }
        return property.Name;
    }

    private object GetPropertyValue(PropertyInfo property, object instance)
    {
        return property.GetValue(instance);
    }
}

To use this TagHelper you just need to assign a list of items to this tag:

<data-grid items="Model.Persons"></data-grid>

In this case it is a list of persons that we get from the Persons property of our current model. The Person class I use here looks like this:

public class Person
{
    [DisplayName("First name")]
    public string FirstName { get; set; }
    
    [DisplayName("Last name")]
    public string LastName { get; set; }
    
    public int Age { get; set; }
    
    [DisplayName("Email address")]
    public string EmailAddress { get; set; }
}

Not all of the properties have a DisplayNameAttribute, so the fallback in the GetPropertyName method is needed to use the actual property name instead of the DisplayName value.

To use it in production, this TagHelper needs some more checks and validations, but it works.
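
One example of such a check (this guard is my own addition, not part of the original sample) would be to suppress the output when no items are passed in, right at the beginning of the Process method:

public override void Process(TagHelperContext context, TagHelperOutput output)
{
    // guard: render nothing if no items were passed in (requires System.Linq for Any())
    if (Items == null || !Items.Any())
    {
        output.SuppressOutput();
        return;
    }

    output.TagName = "table";
    output.Attributes.Add("class", "table");
    var props = GetItemProperties();

    TableHeader(output, props);
    TableBody(output, props);
}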

Now you are able to extend this TagHelper with a lot more features, like sorting, filtering, paging and so on. Feel free.

Conclusion

TagHelpers are pretty useful to reuse parts of the view and to simplify and clean up your views. You can also provide a library with useful view elements. Here are some examples of already existing TagHelper libraries and samples:

  • https://github.com/DamianEdwards/TagHelperPack
  • https://github.com/dpaquette/TagHelperSamples
  • https://www.red-gate.com/simple-talk/dotnet/asp-net/asp-net-core-tag-helpers-bootstrap/
  • https://www.jqwidgets.com/asp.net-core-mvc-tag-helpers/

This part was initially planned as the last part of this series, but I found some more interesting topics. If you also have some nice ideas to write about feel free to drop a comment in the introduction post of this series.

In the next post, I'm going to write about how to customize the hosting of ASP.NET Core Web Applications: Customizing ASP.NET Core Part 11: Hosting (not yet done)

Code-Inside Blog: HowTo: Run a Docker container using Azure Container Instances


Azure Container Instances

There are (at least) 3 different ways to run a Docker Container on Azure:

In this blogpost we will take a small look at how to run a Docker Container on this service. The “Azure Container Instances” service is pretty easy and might be a good first start. I will do this step-by-step guide via the Azure Portal; you can also use the CLI or PowerShell. My guide is more or less the same as this one, but I will highlight some important points in my blogpost, so feel free to check out the official docs.

Using Azure Container Instances

1. Add new…

First, search for “Container Instances” and this should show up:


2. Set base settings

Now - this is probably the most important step - choose the container name and source of the image. Those settings can’t be changed later on!

The image can be from a public Docker Hub repository or from a private Docker registry.

Important: If you are using a Private Docker Hub repository use ‘index.docker.io’ as the login server. It took me a while to figure that out.


3. Set container settings

Now you need to choose which OS and how powerful the machine should be.

Important: If you want an easy access via HTTP to your container, make sure to set a “DNS label”. With this label you access it like this: customlabel.azureregion.azurecontainer.io


Make sure to set any needed environment variables here.

Also keep in mind: You can’t change this stuff later on.

Ready

In the last step you will see a summary of the given settings:


Go

After you finish the setup your Docker Container should start after a short amount of time (depending on your OS and image of course).


The most important aspect here:

Check the status, which should be “running”. You can also see your applied FQDN.

Summary

This service is pretty easy to use. The setup itself is not hard, even if the UI sometimes seems “buggy”. If you can run your Docker Container locally, you should also be able to run it on this service.

Hope this helps!

Christina Hirth : FFS Fix The Small Things

Kent Beck’s hilarious rant against finding excuses when it comes to refactoring things.

Christina Hirth : My KanDDDinsky distilled

KanDDDinsky

The second edition of “KanDDDinsky – The art of business software” took place on 18-19 October 2018. For me it was the best conference I have visited in a long time: the talks I attended created altogether a coherent picture, and the speakers sometimes made me feel like I was visiting an Open Space, an unconference. It felt like a great community event with the right amount of people, the right amount of knowledge and enough time to have great discussions during the two days.

These are my takeaways and notes:

Michael Feathers “The Design of Names and Spaces” (Keynote)

  1. Do not be dogmatic, sometimes allow the ubiquitous language to drive you to the right data structure – but sometimes it is better to take the decision the other way around.
  2. Build robust systems, follow Postel’s Law

Be liberal in what you accept, and conservative in what you send.

If you ask me, this principle shouldn’t only be applied to software development…

Kenny Baas-Schwegler – Crunching ‘real-life stories’ with DDD Event Storming and combining it with BDD

I learned so much from Kenny that I had to write it up in a separate blog post.

Update: the video of this talk can be seen here

Kevlin Henney – What Do You Mean?

This talk was extremely entertaining and informative; you should watch it once it is published. Kevlin addressed so many thoughts around software development that it is impossible to pick the one message. And yes: the sentence "It's only semantics" still makes me angry!

Codified Knowledge
It is not semantics, it is meaning that we turn into code

Herendi Zsofia – Encouraging DDD Curiosity as a Product Owner

It was interesting to see a product owner talking about her efforts to make developers interested in the domain. It was somewhat curious because we were at a DDD conference – I’m sure everyone present was already interested in building the right features fitting the domain and the problem – but of course we are only a minority among the coding people. She belongs to the clear minority of product owners who are openly interested in DDD. Thank you!

Matthias Verraes – Design Heuristics

This session was so informative that I will share what I have learned in a separate post.

J. B. Rainsberger – Some Underrated Elements of Success for the Modern Programmer

J.B. is my oldest “twitter-pal” and in the past 5+ years we discussed about everything from tests to wine or how to find whipped cream in a Romanian shopping center. But: we never met in person 😥  I am really happy that Marco and Janek fixed this for me!

The talk was just like I expected: clear, accurate, very informative. Here is a small subset of the tips shared by J.B.:

Save energy not time!

There are talks which cannot be distilled, and J. B.’s talk was exactly one of those. I will insert the link here when it is published, and I can only encourage everybody to invest the 60 minutes and watch it.

Statistics #womenInTech

I had the feeling there were a lot of women at the conference, even if they represented “only” 10% (20 out of 200) of the participants. But still: 5-6 years ago I was mostly alone, and that is not the case anymore. This is great; I really think that something has changed in the last few years!

Finally, I can only repeat how I decide whether a conference was successful.

Code-Inside Blog: How to fix ERR_CONNECTION_RESET & ERR_CERT_AUTHORITY_INVALID with IISExpress and SSL

This post is a result of some pretty strange SSL errors that I encountered last weekend.

The scenario:

I tried to set up a development environment for a website that uses a self-signed SSL cert. The problem occurred right after the start - especially Chrome displayed those wonderful error messages:

  • ERR_CONNECTION_RESET
  • ERR_CERT_AUTHORITY_INVALID

The “maybe” solution:

When you google the problem you will see a couple of possible solutions. I guess the first problem on my machine was that a previous cert was stale and thus created this issue. I then began to delete all localhost SSL & IIS Express related certs in the LocalMachine cert store. Maybe this was a dumb idea, because it caused more harm than it helped.

But: maybe this could solve your problem. Check your LocalMachine or CurrentUser cert store for stale certs.

How to fix the IIS Express?

Well - after I deleted the IIS Express certs I couldn’t get anything to work, so I tried to repair the IIS Express installation and boy… this is a long process.

The repair process via the Visual Studio Installer will take some minutes and in the end I had the same problem again, but my IIS Express was working again.

How to fix the real problem?

After some more time (and I did repair the IIS Express at least 2 or 3 times) I tried the second answer from this Stackoverflow.com question:

cd "C:\Program Files (x86)\IIS Express"
IisExpressAdminCmd.exe setupsslUrl -url:https://localhost:44387/ -UseSelfSigned

And yeah - this worked. Puh…

Another option:

Check out the project settings and try to change the bitness settings (I once had a problem with “x64” instead of “Default”) or try to recreate the virtual directory here:


Conclusion:

  • Don’t delete random IIS Express certs in your LocalMachine cert store.
  • If you do: repair IIS Express via the Visual Studio Installer (the option to repair IIS Express via the Programs & Features management tool seems to be gone with VS 2017).
  • Try to set up the SSL cert with “IisExpressAdminCmd.exe” - this helped me a lot.
  • Try to use the VS tooling: check out the project tab and try “Create Virtual Directory” or change the IIS Express bitness settings.

I’m not sure if this really fixed my problem, but maybe it helps:

You can “manage” some part of the SSL stuff via “netsh” from a normal cmd prompt (powershell acts weird with netsh), e.g.:

netsh http delete sslcert ipport=0.0.0.0:44300
netsh http add sslcert ipport=0.0.0.0:44300 certhash=your_cert_hash_with_no_spaces appid={123a1111-2222-3333-4444-bbbbcccdddee}

Be aware: I remember that I deleted an sslcert via the netsh tool, but was unable to add one. After the IisExpressAdminCmd.exe step it worked for me.

Hope this helps!

Jürgen Gutsch: Customizing ASP.​NET Core Part 09: ActionFilter

This post is a little late this time. My initial plan was to throw out two posts of this series per week, but that doesn't always work out, since there are sometimes more family and work tasks to do than expected.

Anyway, we keep on customizing on the controller level in this ninth post of this blog series. I'll have a look into ActionFilters and how to create your own ActionFilter to keep your Actions small and readable.

The series topics

About ActionFilters

Action filters are a little bit like middlewares, but they are executed directly on a specific action or on all actions of a specific controller. If you apply an ActionFilter globally, it executes on all actions in your application. ActionFilters are created to execute code right before or right after the action is executed. They are introduced to handle aspects that are not part of the actual action logic. Authorization is such an aspect. I'm sure you already know the AuthorizeAttribute that allows users or groups to access specific Actions or Controllers. The AuthorizeAttribute actually is an ActionFilter. It checks whether the logged-on user is authorized; if not, it redirects to the log-on page.

The next sample shows the skeletons of a normal ActionFilter and an async ActionFilter:

public class SampleActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // do something before the action executes
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // do something after the action executes
    }
}

public class SampleAsyncActionFilter : IAsyncActionFilter
{
    public async Task OnActionExecutionAsync(
        ActionExecutingContext context,
        ActionExecutionDelegate next)
    {
        // do something before the action executes
        var resultContext = await next();
        // do something after the action executes; resultContext.Result will be set
    }
}

As you can see, there are always two sections to place code that executes before and after the action is executed. These ActionFilters cannot be used as attributes. If you want to use ActionFilters as attributes on your Controllers, you need to derive from Attribute or from ActionFilterAttribute:

public class ValidateModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
    }
}

This code shows a simple ActionFilter which returns a BadRequestObjectResult if the ModelState is not valid. This may be useful in a Web API as a default check on POST, PUT and PATCH requests. It could be extended with a lot more validation logic. We'll see how to use it later on.
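
Just as an illustration of what such an extension might look like (the additional check below is an assumption of mine, not part of the original filter), the filter could also reject requests that don't carry any bound arguments at all:

public class ValidateModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        // additional check: reject requests without any bound arguments
        if (context.ActionArguments.Count == 0)
        {
            context.Result = new BadRequestObjectResult("A request body is required.");
            return;
        }

        // the original check: reject invalid models
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
    }
}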

Another possible use case for an ActionFilter is logging. You don't need to log in the Controllers and Actions directly. You can do this in an ActionFilter to avoid cluttering the actions with code that is not relevant to them:

public class LoggingActionFilter : IActionFilter
{
    ILogger _logger;
    public LoggingActionFilter(ILoggerFactory loggerFactory)
    {

        _logger = loggerFactory.CreateLogger<LoggingActionFilter>();
    }

    public void OnActionExecuting(ActionExecutingContext context)
    {
        // do something before the action executes
        _logger.LogInformation($"Action '{context.ActionDescriptor.DisplayName}' executing");
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // do something after the action executes
        _logger.LogInformation($"Action '{context.ActionDescriptor.DisplayName}' executed");
    }
}

This logs an information message to the console. You are able to get more information about the current Action out of the ActionExecutingContext or the ActionExecutedContext, e.g. the arguments, the argument values and so on. This makes ActionFilters pretty useful.
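
For example, the bound action arguments can be written to the log as well. This is just a sketch of mine (the log format is illustrative), extending the OnActionExecuting method shown above:

public void OnActionExecuting(ActionExecutingContext context)
{
    _logger.LogInformation($"Action '{context.ActionDescriptor.DisplayName}' executing");

    // context.ActionArguments contains the model-bound action parameters
    foreach (var argument in context.ActionArguments)
    {
        _logger.LogInformation($"Argument '{argument.Key}': {argument.Value}");
    }
}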

Using the ActionFilters

ActionFilters that actually are Attributes can be registered as an attribute of an Action or a Controller:

[HttpPost]
[ValidateModel] // ActionFilter as attribute
public ActionResult<Person> Post([FromBody] Person model)
{
    // save the person
    
	return model; //just to test the action
}

Here we use the ValidateModelAttribute that checks the ModelState and returns a BadRequestObjectResult in case the ModelState is invalid, so I don't need to check the ModelState in the actual Action.

To register ActionFilters globally you need to extend the MVC registration in the ConfigureServices method of the Startup.cs:

services.AddMvc()
    .AddMvcOptions(options =>
    {
        options.Filters.Add(new SampleActionFilter());
        options.Filters.Add(new SampleAsyncActionFilter());
    });

ActionFilters registered like this get executed on every action. This way you are also able to use ActionFilters that don't derive from Attribute.

The LoggingActionFilter we created previously is a little more special. It depends on an instance of an ILoggerFactory, which needs to be passed into the constructor. This won't work well as an attribute, because attributes don't support constructor injection via dependency injection. The ILoggerFactory is registered in the ASP.NET Core dependency injection container and needs to be injected into the LoggingActionFilter.

Because of this, there are some more ways to register ActionFilters. Globally we are able to register it as a type that gets instantiated by the dependency injection container, so its dependencies can be resolved by the container:

services.AddMvc()
    .AddMvcOptions(options =>
    {
        options.Filters.Add<LoggingActionFilter>();
    })

This works well. We now have the ILoggerFactory available in the filter.

To support automatic resolution in Attributes, you need to use the ServiceFilterAttribute on the Controller or Action level:

[ServiceFilter(typeof(LoggingActionFilter))]
public class HomeController : Controller
{

In addition to the global filter registration, the ActionFilter needs to be registered in the ServiceCollection before we can use it with the ServiceFilterAttribute:

services.AddSingleton<LoggingActionFilter>();

To be complete, there is another way to use ActionFilters that need arguments passed into the constructor. You can use the TypeFilterAttribute to automatically instantiate the filter. With this attribute the filter isn't instantiated by the dependency injection container, and the arguments need to be specified as arguments of the TypeFilterAttribute. See the next snippet from the docs:

[TypeFilter(typeof(AddHeaderAttribute),
    Arguments = new object[] { "Author", "Juergen Gutsch (@sharpcms)" })]
public IActionResult Hi(string name)
{
    return Content($"Hi {name}");
}

The type of the filter and the arguments are specified with the TypeFilterAttribute.
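
The AddHeaderAttribute used in this snippet is not shown in this post. Assuming it takes the header name and value as constructor arguments, a minimal sketch of such a filter could look like this:

public class AddHeaderAttribute : ResultFilterAttribute
{
    private readonly string _name;
    private readonly string _value;

    public AddHeaderAttribute(string name, string value)
    {
        _name = name;
        _value = value;
    }

    public override void OnResultExecuting(ResultExecutingContext context)
    {
        // add the configured header right before the result gets executed
        context.HttpContext.Response.Headers.Add(_name, new[] { _value });
    }
}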

Conclusion

Personally, I like keeping the actions clean using ActionFilters. If I find repeating tasks inside my Actions that are not really relevant to the actual responsibility of the Action, I try to move them out to an ActionFilter, or maybe a ModelBinder or a middleware, depending on how globally it should work. The more relevant it is to a single Action, the more likely I am to use an ActionFilter.

There are some more kinds of filters, which all work similarly. To learn more about the different kinds of filters, you definitely should read the docs.

In the tenth part of the series we move to the actual view logic and extend the Razor Views with custom TagHelpers: Customizing ASP.NET Core Part 10: TagHelpers

Christina Hirth : Event Storming with Specifications by Example

Event Storming is a technique defined and refined by Alberto Brandolini (@ziobrando). I fully agree with the statement about this method: Event Storming is, for now, “The smartest approach to collaboration beyond silo boundaries”.

I don’t want to explain what Event Storming is; the concept has been present in the IT world for a few years already and there are a lot of articles and videos explaining the basics. What I want to emphasize is WHY we need to learn and apply this technique:

The knowledge of the product experts may differ from the assumption of the developers
KanDDDinsky 2018 – Kenny Baas-Schwegler

On 18-19.10.2018 I had the opportunity not only to hear a great talk about Event Storming but also to be part of a 2-hour hands-on session, all this powered by KanDDDinsky (for me the best conference I visited this year) and by @kenny_baas (and @use case driven and @brunoboucard). In the last few years I have participated in a few Event Storming sessions, mostly at community events, twice at cleverbridge, but this time it was different. Maybe ES is like unit testing: you have to practice and reflect on what went well and what must be improved. Anyway, this time I learned and observed a few rules and principles that were new to me, and their effects on the outcome. This is what I want to share here.

  1. You need a facilitator.
    Both ES sessions I was part of at cleverbridge ended in frustration. All participants were willing to try it out, but we had nobody to keep the chaos under control. Because, as Kenny said, “There will be chaos, this is guaranteed.” But this is OK; we – devs, product owners, sales people, etc. – have to learn fast to understand each other without learning the job of the “other party” or writing a glossary (I already tried that and it didn’t help 😐). We also need somebody who is able to sense and steer the dynamics in the room.


    The tweets were written during a discussion about who could be a good facilitator. You can read the whole thread on Twitter if you like. Another good article summarizing the first impressions of @mathiasverraes as facilitator is this one.

  2. Explain AND visualize the rules beforehand.
    I will skip the basics for now, like the necessity of a very long free wall and that the events should visualize the business process evolving over time.
    These are the additional rules I learned in the hands-on session:

      1. No dev-talk! The developer is per se a species able to transform EVERYTHING into patterns and techniques and tables and columns, and this ability is not helpful if one wants to know whether we can solve a problem together. By using dev-speak the discussion will be driven towards technical “solvability” based on the current technical constraints like the architecture. With ES we want to create or deepen our ubiquitous language, and this surely does not include the word “Message Bus” 😉
      2. Every discussion should happen on the board. There will be a lot of discussions, and we tend to talk a lot about opinions and feelings. This won’t happen if we keep the discussion about the business processes and events which are visualized in front of us – on the board.
      3. No discussions regarding persons not in the room. Discussing what we think other people might think is not productive and cannot lead to real results. Do not waste time on it; time is too short anyway.
      4. Open questions occurring during the storming should not be discussed (see the point above) but marke