Jürgen Gutsch: Removing Disqus and adding GitHub Issue Comments

I recently realized that I have been running this new blog for almost exactly three years now and have written almost 100 posts so far. Running this blog is completely different compared to the previous one, which was based on the Community Server on ASP.NET Zone. I now write in markdown files which I commit and push to GitHub. I also switched the language. From January 2007 to November 2015 I wrote in German, and since I started this GitHub based blog I switched completely to English, which is a great experience and improves my English writing and speaking skills a lot.

This blog is based on Pretzel, a .NET based Jekyll clone that creates a static website. Pretzel, like Jekyll, is optimized for blogs and similarly structured web sites. Both systems take markdown files and turn them into static HTML pages using the Liquid template engine. This works pretty well and I really like the workflow: I push markdown files to the GitHub repo and get an updated blog a few seconds later on Azure. This is continuous delivery for blog posts using GitHub and Azure. It is amazing, and I really love blogging this way.

Actually the blog is successful from my perspective. Around 6k visits per week is a good number, I guess.

Because the blog is static HTML, in the end I need to extend it with software-as-a-service solutions to create dynamic content or to track the success of the blog.

So I added Disqus to enable comments on this blog. Disqus was quite popular for this kind of blog at that time and I also got some traffic from it. However, the service has now started to show advertisements on my page, and those advertisements are not really related to the contents of my page.

I also added a small Google AdSense banner to the blog, but it is placed at the end of the page and hopefully doesn't really annoy you as a reader. I put some text above this banner to ask you as a reader to support my blog if you like it. A click on that banner doesn't really cost you any time or money.

I don't get anything out of the annoying off-topic ads that Disqus shows here, except a free tool to collect blog post comments and store them somewhere out in the cloud. The other downside is that I don't really "own" the comments.

Sure, Disqus is a free service and someone needs to pay for it, but the ownership of the content is a problem, as is the fact that I cannot influence the ads displayed on my blog:

Owning the comments

The comments are important content that you provide to me, to the other readers and to the entire developer community. But they are completely separated from the blog posts they relate to. They are stored in a different cloud, and actually I have no idea where Disqus stores the comments.

How do I own the comments?

My idea was to use GitHub issues of the blog repository to collect the comments. The first comment on a blog post creates a GitHub issue and every further comment becomes a comment on that issue. With this solution the actual posts and the comments are in the same repository, they can be linked together and I own these comments a little more than before.

I already asked on Twitter about that and got some positive feedback.

Evaluating a solution

There are already some JavaScript snippets available which can be used to add GitHub issues as comments. The GitHub API is well documented and it should be easy to do this.

I already evaluated a solution and decided to go with Utterances:

"A lightweight comments widget built on GitHub issues"

Utterances was built by Jeremy Danyow. I stumbled upon it in Jeremy's blog post about Using GitHub Issues for Blog Comments. Jeremy works as a Senior Software Engineer at Microsoft, he is a member of the Aurelia core team and also created gist.run.

As far as I understood, Utterances is a lightweight version of Microsoft's comment system used with the new docs on https://docs.microsoft.com. Microsoft also stores the comments as issues on GitHub, which is nice because they can turn them into real issues in case there are real problems with the docs, etc.

More links about it: https://utteranc.es/ and https://github.com/utterance

In the end I just need to add a small HTML snippet to my blog:

<script src="https://utteranc.es/client.js"
        repo="juergengutsch/blog"
        issue-term="title"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>

This script will search for issues with the same title as the current page. If there's no such issue, it will create a new one. If there is such an issue, it will create a comment on that issue. The script also supports markdown.

Open questions so far

Some important open questions came up while evaluating the solution:

  1. Is it possible to import all the Disqus comments to GitHub Issues?
    • This is what I need to figure out now.
    • It would be bad not to have the existing comments available in the new system.
  2. What if Jeremy's services are not available anymore?

The second question is easy to solve. As I wrote, I will just host the stuff on my own in case Jeremy shuts down his services. The first question is much more essential. It would be cool to get the comments somehow in a readable format. I would then write a small script or a small console app to import the comments as GitHub issues.

Exporting the Disqus comments to GitHub Issues

Fortunately there is an export feature on Disqus, in the administration settings of the site:

After clicking "Export Comment" the export gets scheduled and you'll get an email with the download link to the export.

The exported file is a GZ compressed XML file including all threads and posts. A thread in this case is an entry per blog post where the comment form was visible. A thread doesn't necessarily contain comments. Posts are the comments related to a thread. A post contains the actual comment as a message, author information and relations to the thread and to the parent post, if it is a reply to another comment.

This is pretty clean XML and it should be easy to import it automatically into GitHub issues. Now I needed to figure out how the GitHub API works and write a small C# script to import all the comments.

This XML also includes the authors' names and usernames. This is good to know, but it doesn't have any value for me anymore, because Disqus users are not GitHub users and I can't create the comments on behalf of real GitHub users. So every migrated comment will be created by myself, and I need to mark each comment so that it is clear it originally came from another reader.

So it will be something like this:

var message = $@"Comment written by **{post.Author}** on **{post.CreatedAt}**

{post.Message}
";

Importing the comments

I decided to write a small console app and to do some initial tests on a test repo. I extracted the exported data and moved it into the .NET Core console app folder and tried to play around with it.

First I read all threads out of the file and then the posts. I only selected the threads which are not marked as closed and not marked as deleted. I also checked the blog post URL of the thread, because sometimes a thread was created by a local test run, sometimes I changed the publication date of a post afterwards, which also changed the URL, and sometimes a thread was created by a post that was displayed via a proxying page. I tried to filter all that stuff out. The URL needs to start with http://asp.net-hacker.rocks or https://asp.net-hacker.rocks to be valid. The posts also shouldn't be marked as deleted or as spam.
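The CheckThreadUrl() method called in the FindThreads() method further down isn't shown in this post, so here is a minimal sketch of what such a check could look like. It only covers the URL rule described above; the real implementation may do more, e.g. send an HTTP request to verify that the post still exists:

private static Task<bool> CheckThreadUrl(string url)
{
    // Only accept threads that belong to a real blog post URL.
    var isValid = !String.IsNullOrWhiteSpace(url)
        && (url.StartsWith("http://asp.net-hacker.rocks")
            || url.StartsWith("https://asp.net-hacker.rocks"));
    return Task.FromResult(isValid);
}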

Then I assigned the posts to the specific threads using the provided thread id and ordered the posts by date. This breaks the dialogue hierarchy of the Disqus threads, but should be ok for a first step.

Then I created the actual issue and posted the assigned comments to the new issue.

That's it.

Reading the XML file is easy using the XmlDocument class, which is also available in .NET Core:

var doc = new XmlDocument();
doc.Load(path);
var nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace(String.Empty, "http://disqus.com");
nsmgr.AddNamespace("def", "http://disqus.com");
nsmgr.AddNamespace("dsq", "http://disqus.com/disqus-internals");

IEnumerable<Thread> threads = await FindThreads(doc, nsmgr);
IEnumerable<Post> posts = FindPosts(doc, nsmgr);

Console.WriteLine($"{threads.Count()} valid threads found");
Console.WriteLine($"{posts.Count()} valid posts found");

I need to use the XmlNamespaceManager here to query tags and attributes using the Disqus namespaces. The XmlDocument as well as the XmlNamespaceManager need to get passed into the read methods. The two find methods then read the threads and posts out of the XmlDocument.

In the next snippet I show the code to read the threads:

private static async Task<IEnumerable<Thread>> FindThreads(XmlDocument doc, XmlNamespaceManager nsmgr)
{
    var xthreads = doc.DocumentElement.SelectNodes("def:thread", nsmgr);

    var threads = new List<Thread>();
    var i = 0;
    foreach (XmlNode xthread in xthreads)
    {
        i++;

        long threadId = xthread.AttributeValue<long>(0);
        var isDeleted = xthread["isDeleted"].NodeValue<bool>();
        var isClosed = xthread["isClosed"].NodeValue<bool>();
        var url = xthread["link"].NodeValue();
        var isValid = await CheckThreadUrl(url);

        Console.WriteLine($"{i:###} Found thread ({threadId}) '{xthread["title"].NodeValue()}'");

        if (isDeleted)
        {
            Console.WriteLine($"{i:###} Thread ({threadId}) was deleted.");
            continue;
        }
        if (isClosed)
        {
            Console.WriteLine($"{i:###} Thread ({threadId}) was closed.");
            continue;
        }
        if (!isValid)
        {
            Console.WriteLine($"{i:###} the url Thread ({threadId}) is not valid: {url}");
            continue;
        }

        Console.WriteLine($"{i:###} Thread ({threadId}) is valid");
        threads.Add(new Thread(threadId)
        {
            Title = xthread["title"].NodeValue(),
            Url = url,
            CreatedAt = xthread["createdAt"].NodeValue<DateTime>()

        });
    }

    return threads;
}

I think there's nothing magic in it. Even assigning the posts to the threads is just some LINQ code.
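Just to illustrate, a rough sketch of that LINQ code could look like this. The property names (Id on the thread, ThreadId, IsDeleted and IsSpam on the post) are assumptions based on the XML structure described above; only thread.Posts is actually used later in the import code:

// Sketch only: assigns the exported posts to their threads and orders them by date.
foreach (var thread in threads)
{
    thread.Posts = posts
        .Where(post => post.ThreadId == thread.Id)      // assumed property names
        .Where(post => !post.IsDeleted && !post.IsSpam) // filter rules described above
        .OrderBy(post => post.CreatedAt)
        .ToList();
}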

To create the actual issues and comments, I use the Octokit.NET library which is available on NuGet and GitHub.

dotnet add package Octokit

This library is quite simple to use and well documented. You have the choice between basic authentication and token authentication to connect to GitHub. I chose token authentication, which is the proposed way to connect. To get the token you need to go to the settings of your GitHub account, choose a personal access token and specify the rights for the token. The basic rights to contribute to the specific repository are enough in this case:

private static async Task PostIssuesToGitHub(IEnumerable<Thread> threads)
{
    var client = new GitHubClient(new ProductHeaderValue("DisqusToGithubIssues"));
    var tokenAuth = new Credentials("secret personal token from github");
    client.Credentials = tokenAuth;

    var issues = await client.Issue.GetAllForRepository(repoOwner, repoName);
    foreach (var thread in threads)
    {
        if (issues.Any(x => !x.ClosedAt.HasValue && x.Title.Equals(thread.Title)))
        {
            continue;
        }

        var newIssue = new NewIssue(thread.Title);
        newIssue.Body = $@"Written on {thread.CreatedAt} 

URL: {thread.Url}
";

        var issue = await client.Issue.Create(repoOwner, repoName, newIssue);
        Console.WriteLine($"New issue (#{issue.Number}) created: {issue.Url}");
        await Task.Delay(1000 * 5);

        foreach (var post in thread.Posts)
        {
            var message = $@"Comment written by **{post.Author}** on **{post.CreatedAt}**

{post.Message}
";

            var comment = await client.Issue.Comment.Create(repoOwner, repoName, issue.Number, message);
            Console.WriteLine($"New comment by {post.Author} at {post.CreatedAt}");
            await Task.Delay(1000 * 5);
        }
    }
}

This method gets the list of Disqus threads, creates the GitHub client and inserts one thread after another. I also read the existing issues from GitHub in case I need to run the migration twice because of an error. After an issue is created, I only need to create the comments for that issue.

After I started that code, the console app started to add issues and comments to GitHub:

The comments are set as expected:

Unfortunately the import breaks after a while with a weird exception.

Octokit.AbuseException

Unfortunately that run didn't finish. After the first few issues were created, I got an exception like this:

Octokit.AbuseException: 'You have triggered an abuse detection mechanism and have been temporarily blocked from content creation. Please retry your request again later.'

This exception happens because I hit the creation rate limit (user.creation_rate_limit_exceeded). This limit is set by GitHub on the public API, which doesn't allow more than 5000 requests per hour: https://developer.github.com/v3/#rate-limiting

You can see such security related events in the security tab of your GitHub account settings.

There is no real solution to this problem, except to add more checks and fallbacks to the migration code. I check which issues already exist and migrate only the ones that don't. I also added a five second delay between each request to GitHub. This increases the migration time, but it meant I only had to start the migration twice. Without the delay I got the exception more often during the tests.
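Another option, which is not in the migration tool, would be to catch the AbuseException directly and retry after a longer pause. This is only a sketch of the idea, reusing the calls shown above; the delay value is an assumption:

try
{
    await client.Issue.Comment.Create(repoOwner, repoName, issue.Number, message);
}
catch (AbuseException)
{
    // GitHub's abuse detection kicked in: wait a while and try once more.
    await Task.Delay(1000 * 60);
    await client.Issue.Comment.Create(repoOwner, repoName, issue.Number, message);
}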

Using Utterances

Once the issues are migrated to GitHub, I need to add Utterances to the blog. First you need to install the Utterances app on your repository. The repository needs to be public, and issues obviously need to be enabled.

On https://utteranc.es/ there is a kind of configuration wizard that creates the HTML snippet for you, which you need to add to your blog. In my case it is the small snippet I already showed previously:

<script src="https://utteranc.es/client.js"
        repo="juergengutsch/blog"
        issue-term="title"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>

This loads the Utterances client script, configures my blog repository and the way the issues will be found in my repository. You have different options for the issue-term. Since I set the blog post title as the GitHub issue title, I need to tell Utterances to look at the title. The theme I want to use here is the GitHub light theme; the dark theme doesn't fit the blog style. I was also able to adjust the styling by overriding the following two CSS classes:

.utterances {}
.utterances-frame {}

The result

In the end it worked pretty well. After the migration, and after I changed the relevant blog template, I tried it locally using the pretzel taste command.

If you want to add a comment as a reader, you need to log on with your GitHub account and grant the Utterances app the right to post to my repo with your name.

Now every new comment will be stored in the repository of my blog. All the contents are in the same repository. There is an issue per post, so they are almost directly linked.

What do you think? Do you like it? Tell me about your opinion :-)

BTW: You will find the migration tool on GitHub.

Stefan Henneken: IEC 61131-3: The ‘State’ Pattern

State machines are used regularly, especially in automation technology. The state pattern provides an object-oriented approach that offers important advantages especially for larger state machines.

Most developers have already implemented state machines in IEC 61131-3: some consciously, others perhaps unconsciously. The following is a simple example implemented with three different approaches:

  1. CASE statement
  2. State transitions in methods
  3. The ‘state’ pattern

Our example describes a vending machine that dispenses a product after inserting a coin and pressing a button. The number of products is limited. If a coin is inserted and the button is pressed although the machine is empty, the coin is returned.

The vending machine is mapped by the function block FB_Machine. Inputs accept the events, and the current state and the number of products still available are read out via outputs. The declaration of the FB defines the maximum number of products.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton : BOOL;
  bInsertCoin : BOOL;
  bTakeProduct : BOOL;
  bTakeCoin : BOOL;
END_VAR
VAR_OUTPUT
  eState : E_States;
  nProducts : UINT;
END_VAR

UML state diagram

State machines can be very well represented as a UML state diagram.

Picture01

A UML state diagram describes an automaton that is in exactly one state of a finite set of states at any given time.

The states in a UML state diagram are represented by rectangles with rounded corners (vertices) (in other diagram forms also often as a circle). States can execute activities, e.g. when entering the state (entry) or when leaving the state (exit). With entry / n = n – 1, the variable n is decremented when entering the state.

The arrows between the states symbolize possible state transitions. They are labeled with the events that lead to the respective state transition. A state transition occurs when the event occurs and an optional condition (guard) is fulfilled. Conditions are specified in square brackets. This allows decision trees to be implemented.

First variant: CASE statement

You will often find CASE statements used to implement state machines. The CASE statement queries every possible state. Within the respective branches, the conditions for the individual states are queried. If a condition is fulfilled, the action is executed and the state variable is adapted. To increase readability, the state variable is often mapped as an ENUM.

TYPE E_States :
(
  eWaiting := 0,
  eHasCoin,
  eProductEjected,
  eCoinEjected
);
END_TYPE

Thus, the first variant of the state machine looks like this:

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR

rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);
 
CASE eState OF
  E_States.eWaiting:
    IF (rtrigButton.Q) THEN
      ; // keep in the state
    END_IF
    IF (rtrigInsertCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
      eState := E_States.eHasCoin;
    END_IF
 
  E_States.eHasCoin:
    IF (rtrigButton.Q) THEN
      IF (nProducts > 0) THEN
        nProducts := nProducts - 1;
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
        eState := E_States.eProductEjected;
      ELSE
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
        eState := E_States.eCoinEjected;
      END_IF
    END_IF
 
  E_States.eProductEjected:
    IF (rtrigTakeProduct.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
      eState := E_States.eWaiting;
    END_IF
 
  E_States.eCoinEjected:
    IF (rtrigTakeCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
      eState := E_States.eWaiting;
    END_IF
 
  ELSE
    ADSLOGSTR(ADSLOG_MSGTYPE_ERROR, 'Invalid state', '');
    eState := E_States.eWaiting;
END_CASE

A quick test shows that the FB does what it is supposed to do:

Picture02

However, it quickly becomes clear that larger applications cannot be implemented in this way. The clarity is completely lost after a few states.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Second variant: State transitions in methods

The problem can be reduced if all state transitions are implemented as methods.

Picture03

If a particular event occurs, the respective method is called.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR

rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);
 
IF (rtrigButton.Q) THEN
  THIS^.PressButton();
END_IF
IF (rtrigInsertCoin.Q) THEN
  THIS^.InsertCoin();
END_IF
IF (rtrigTakeProduct.Q) THEN
  THIS^.CustomerTakesProduct();
END_IF
IF (rtrigTakeCoin.Q) THEN
  THIS^.CustomerTakesCoin();
END_IF

Depending on the current state, the desired state transition is executed in the methods and the state variable is adapted:

METHOD INTERNAL CustomerTakesCoin : BOOL
IF (THIS^.eState = E_States.eCoinEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
  eState := E_States.eWaiting;
END_IF
 
METHOD INTERNAL CustomerTakesProduct : BOOL
IF (THIS^.eState = E_States.eProductEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
  eState := E_States.eWaiting;
END_IF
 
METHOD INTERNAL InsertCoin : BOOL
IF (THIS^.eState = E_States.eWaiting) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
  THIS^.eState := E_States.eHasCoin;
END_IF
 
METHOD INTERNAL PressButton : BOOL
IF (THIS^.eState = E_States.eHasCoin) THEN
  IF (THIS^.nProducts > 0) THEN
    THIS^.nProducts := THIS^.nProducts - 1;
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
    THIS^.eState := E_States.eProductEjected;
  ELSE                
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
    THIS^.eState := E_States.eCoinEjected;
  END_IF
END_IF

This approach also works perfectly. However, the state machine still resides in a single function block. Although the state transitions are shifted to methods, this is still an approach of structured programming that ignores the possibilities of object orientation. The result is that the source code remains difficult to extend and hard to read.

Sample 2 (TwinCAT 3.1.4022) on GitHub

Third variant: The state pattern

Some OO design principles are helpful for the implementation of the State Pattern:

Cohesion (= degree to which a class has a single concentrated purpose) and delegation

Encapsulate each responsibility into a separate object and delegate calls to these objects. One class, one responsibility!

Identify those aspects that change and separate them from those that remain constant

How are the objects split so that extensions to the state machine are necessary in as few places as possible? Previously, FB_Machine had to be adapted for each extension. This is a major disadvantage, especially for large state machines on which several developers are working.

Let’s look again at the methods CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton(). They all have a similar structure. In IF statements, the current state is queried and the desired actions are executed. If necessary, the current state is also adjusted. However, this approach does not scale. Each time a new state is added, several methods have to be adjusted.

The state pattern distributes the state across several objects. Each possible state is represented by an FB. These status FBs contain the entire behavior for the respective state. Thus, a new state can be introduced without having to change the source code of the existing blocks.

Every action (CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin(), and PressButton()) can be executed on any state. Thus, all status FBs have the same interface. For this reason, one interface is introduced for all status FBs:

Picture04

FB_Machine aggregates this interface (the ipState output in the declaration below) and delegates the method calls to the respective status FBs (the IF blocks at the end of the body).

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton            : BOOL;
  bInsertCoin        : BOOL;
  bTakeProduct       : BOOL;
  bTakeCoin          : BOOL;
END_VAR
VAR_OUTPUT
  ipState            : I_State := fbWaitingState;
  nProducts          : UINT;
END_VAR
VAR
  fbCoinEjectedState    : FB_CoinEjectedState(THIS);
  fbHasCoinState        : FB_HasCoinState(THIS);
  fbProductEjectedState : FB_ProductEjectedState(THIS);
  fbWaitingState        : FB_WaitingState(THIS);
 
  rtrigButton           : R_TRIG;
  rtrigInsertCoin       : R_TRIG;
  rtrigTakeProduct      : R_TRIG;
  rtrigTakeCoin         : R_TRIG;
END_VAR
 
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);
 
IF (rtrigButton.Q) THEN
  ipState.PressButton();
END_IF
 
IF (rtrigInsertCoin.Q) THEN
  ipState.InsertCoin();
END_IF
 
IF (rtrigTakeProduct.Q) THEN
  ipState.CustomerTakesProduct();
END_IF
 
IF (rtrigTakeCoin.Q) THEN
  ipState.CustomerTakesCoin();
END_IF

But how can the state be changed from within the methods of the individual status FBs?

First of all, an instance of each status FB is declared within FB_Machine. Via FB_init(), a pointer to FB_Machine is passed to each status FB (see the declarations of fbCoinEjectedState, fbHasCoinState, fbProductEjectedState and fbWaitingState above).

Each instance can be read via a property of FB_Machine; each of these properties returns an interface pointer to I_State.

Picture05

Furthermore, FB_Machine receives a method for setting the status,

METHOD INTERNAL SetState : BOOL
VAR_INPUT
  newState : I_State;
END_VAR
THIS^.ipState := newState;

and a method for changing the current number of products:

METHOD INTERNAL SetProducts : BOOL
VAR_INPUT
  newProducts : UINT;
END_VAR
THIS^.nProducts := newProducts;

FB_init() receives another input variable, so that the maximum number of products can be specified in the declaration.

Since the user of the state machine only needs FB_Machine and I_State, the four properties (CoinEjectedState, HasCoinState, ProductEjectedState and WaitingState), the two methods (SetState() and SetProducts()) and the four status FBs (FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState and FB_WaitingState) were declared as INTERNAL. If the FBs of the state machine are in a compiled library, they are not visible from the outside. They are also not present in the library repository. The same applies to elements that are declared as PRIVATE. FBs, interfaces, methods and properties that are only used within a library can thus be hidden from the user of the library.

The test of the state machine is the same in all three variants:

PROGRAM MAIN
VAR
  fbMachine      : FB_Machine(3);
  sState         : STRING;
  bButton        : BOOL;
  bInsertCoin    : BOOL;
  bTakeProduct   : BOOL;
  bTakeCoin      : BOOL;
END_VAR
 
fbMachine(bButton := bButton,
          bInsertCoin := bInsertCoin,
          bTakeProduct := bTakeProduct,
          bTakeCoin := bTakeCoin);
sState := fbMachine.ipState.Description;
 
bButton := FALSE;
bInsertCoin := FALSE;
bTakeProduct := FALSE;
bTakeCoin := FALSE;

The assignment to sState is only intended to simplify testing, since a readable text is displayed for each state.

Sample 3 (TwinCAT 3.1.4022) on GitHub

This variant seems quite complex at first sight, since considerably more FBs are needed. But the distribution of responsibilities to single FBs makes this approach very flexible and much more robust for extensions.

This becomes clear when the individual status FBs become very extensive. For example, a state machine could control a complex process in which each status FB contains further subprocesses. A division into several FBs makes such a program maintainable in the first place, especially if several developers are involved.

For very small state machines, the use of the state pattern is not necessarily the optimal variant. I myself also like to fall back on the solution with the CASE statement.

Alternatively, IEC 61131-3 offers a further option for implementing state machines with the Sequential Function Chart (SFC). But that is another story.

Definition

In the book “Design patterns: elements of reusable object-oriented software” by Gamma, Helm, Johnson and Vlissides, this is expressed as follows:

Allow an object to change its behavior when its internal state changes. It will look as if the object has changed its class.

Implementation

A common interface (State) is defined, which contains a method for each state transition. For each state, a class is created that implements this interface (State1, State2, …). As all states have the same interface, they are interchangeable.

Such a state object is aggregated (encapsulated) by the object whose behavior has to be changed depending on the state (Context). This object holds the current internal state (currentState) and encapsulates the state-dependent behavior. The context delegates calls to the currently set state object.

The state changes can be performed by the specific state objects themselves. To do this, each state object requires a reference to the context (Context). The context must also provide a method for changing the state (setState()). The subsequent state is passed to setState() as a parameter. For this purpose, the context offers all possible states as properties.
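To make these roles a little more tangible, here is a minimal sketch of the pattern, written in C# rather than IEC 61131-3, since this definition is language-independent. The names follow the UML roles (Context, State1, State2), not the vending machine example:

public interface IState
{
    void Handle(Context context);
}

public class Context
{
    public IState CurrentState { get; private set; }

    public Context(IState initialState)
    {
        CurrentState = initialState;
    }

    // The context only delegates; the concrete state objects decide on the transitions.
    public void SetState(IState newState) => CurrentState = newState;

    public void Request() => CurrentState.Handle(this);
}

// Each state implements the same interface and switches the context to the follow-up state.
public class State1 : IState
{
    public void Handle(Context context) => context.SetState(new State2());
}

public class State2 : IState
{
    public void Handle(Context context) => context.SetState(new State1());
}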

UML Diagram

Picture06

Based on the example above, the following assignment results:

Context → FB_Machine
State → I_State
State1, State2, … → FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState, FB_WaitingState
Handle() → CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin(), PressButton()
GetState1, GetState2, … → CoinEjectedState, HasCoinState, ProductEjectedState, WaitingState
currentState → ipState
setState() → SetState()
context → pMachine

Application examples

A TCP communication stack is a good example of using the state pattern. Each state of a connection socket can be represented by corresponding state classes (TCPOpen, TCPClosed, TCPListen, …). Each of these classes implements the same interface (TCPState). The context (TCPConnection) contains the current state object. All actions are transferred to the respective state class via this state object. This class processes the actions and changes to a new state if necessary.

Text parsers are also state-based. For example, the meaning of a character usually depends on the previously read characters.

Jürgen Gutsch: Disabling comments on this blog until they are moved to GitHub

I'm going to remove the Disqus comments on this blog and move to GitHub issue based comments. The reason is that I don't want to show advertisements that are not related to the contents of this page. Another reason is that I want to have full control over the comments. The third reason is related to GDPR: I have no idea yet what Disqus is doing to protect the users' privacy and how the users are able to control their personal data. With the advertisements they are displaying it gets less transparent, because I don't know what the original source of the ads is and who is responsible for the users' personal data.

I removed Disqus from my blog

I'm currently migrating all the Disqus comments to GitHub issues. There will be a GitHub issue per blog post and the issue comments will then be the blog post comments. I will lose the dialogue hierarchy of the comments, but this isn't really needed. Another downside for you as readers is that you will need a GitHub account to create comments. On the other hand, most of you already have one, and you no longer need a Disqus account to drop a comment.

To do the migration I removed Disqus first and exported all the comments. After a few days of migrating and testing I'll enable the GitHub issue comments on my blog. There will be a comment form on each blog post as usual and you won't need to go to GitHub to drop a comment.

I will write a detailed blog post about the new comment system and how I migrated to it, once it's done.

The new GitHub issue based comments should be available after the weekend

Norbert Eder: Canary Deployment

In Blue Green Deployment I described an approach for testing new releases in production environments before activating them. This makes it possible to draw conclusions about whether a release works, with a higher degree of confidence. However, it is only tested; how stable and performant the software runs cannot be judged. Canary deployments can help here.

Canary deployment gets its name from the old coal mines. As an early warning system against toxic gases, the miners placed canaries in cages. If toxic gases escaped, the canaries died and the workers could still get to safety quickly.

But how does a canary deployment actually work?

As with blue green deployment, there are at least two production systems. One of the two systems (or parts of it) receives the updates. The updated part can then be tested (both automatically and manually). In addition, a previously defined share of the traffic is routed through the updated system.

Canary Deployment

By successively redirecting traffic to the new system and putting it under load, meaningful indications about its ability to function (also under load) are obtained.

An example: it is decided that after the update, 2% of the traffic is routed through the new system. If no problems occur, the share can be increased. If problems occur, at most 2% of the users are affected. A rollback is possible immediately.

With this setup, an early warning system is available. We gain more confidence and, in case of problems, only a fraction of the users is affected.

However, this also comes with infrastructural effort and increased complexity.

The post Canary Deployment first appeared on Norbert Eder.

Norbert Eder: Blue Green Deployment

Many developers now rely on the support of automated tests and thereby ensure early error detection, lower costs for fixing bugs and, ultimately, high quality. Nevertheless, errors cannot be completely ruled out.

One of the reasons for this is that the tests are usually only executed in test systems. This means that no statement can be made about how things will work in the production system. Users like to surprise us developers with unconventional inputs or an unconventional way of operating the software. Under certain circumstances this can lead to inconsistent data. So what works in the development or test environment does not necessarily work in the production environment. What can be done to be able to make a better statement?

One possibility is blue green deployment. Here, the production system exists twice: once as the blue and once as the green line. Only one of the two systems is active at any time. The inactive system can be used for testing. The systems can run on different (but similar) hardware or VMs.

A new release is always deployed to the inactive system and tested there. If all tests are successful and all functions are available, the inactive system becomes the active one and vice versa. In other words: if the blue system was active and the green system inactive, the green system received the update and became active after successful tests. Now the blue system is inactive and will receive the next upcoming update.

This of course offers further advantages. It is very quickly possible to go back to the old version (rollback). In addition, a second system is available in case of failures (hardware etc.).

However, the additional safety comes with a number of challenges regarding infrastructure, the deployment process, and also development (e.g. dealing with schema changes to the database). You are rewarded with higher availability and a possible (improved) statement about whether a new release works in the production system.

Building on this, a canary deployment can provide even better insight in production use.

Credit: server icon by FontAwesome / CC Attribution 4.0 International, all other icons from Microsoft PowerPoint.

The post Blue Green Deployment first appeared on Norbert Eder.

Jürgen Gutsch: Customizing ASP.​NET Core Part 10: TagHelpers

This was initially planned as the last topic of this series, because it also was the last part of the talk about customizing ASP.NET Core I did in the past. See the initial post about this series. Now I have three additional customizing topics to talk about. If you'd like to propose another topic, feel free to drop a comment in the initial post.

In this tenth part of this series I'm going to write about TagHelpers. The built-in TagHelpers are pretty useful and make the Razor code prettier and more readable. Creating custom TagHelpers will make your life much easier.

This series topics

About TagHelpers

With TagHelpers you are able to extend existing HTML tags or to create new tags that get rendered on the server side. The extensions and the new tags are not visible in the browser. TagHelpers are only a kind of shortcut to write less and simpler HTML or Razor code on the server side. TagHelpers will be interpreted on the server and will produce "real" HTML code for the browser.

TagHelpers are not a new thing in ASP.NET Core; they have been there since the first version. Most of the existing built-in TagHelpers are a replacement for the old fashioned HTML Helpers, which still exist and work in ASP.NET Core to keep existing Razor views compatible.

A very basic example of extending HTML tags is the built in AnchorTagHelper:

<!-- old fashioned HtmlHelper -->
<li>@Html.ActionLink("Home", "Index", "Home")</li>
<!-- new TagHelper -->
<li><a asp-controller="Home" asp-action="Index">Home</a></li>

The HtmlHelpers look kind of strange between the HTML tags, at least for HTML developers. They are hard to read, disturbing and interrupting while reading the code. This may not bother ASP.NET Core developers who are used to reading that kind of code, but compared to the TagHelpers it is really ugly. The TagHelpers feel more natural and more like HTML, even if they are not and even if they get rendered on the server.

Many of the HtmlHelpers can be replaced with a TagHelper.

There are also some new tags built with TagHelpers: tags that don't exist in HTML, but look like HTML. One example is the EnvironmentTagHelper:

<environment include="Development">
    <link rel="stylesheet" href="~/lib/bootstrap/dist/css/bootstrap.css" />
    <link rel="stylesheet" href="~/css/site.css" />
</environment>
<environment exclude="Development">
    <link rel="stylesheet" href="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.7/css/bootstrap.min.css"
            asp-fallback-href="~/lib/bootstrap/dist/css/bootstrap.min.css"
            asp-fallback-test-class="sr-only" asp-fallback-test-property="position" asp-fallback-test-value="absolute" />
    <link rel="stylesheet" href="~/css/site.min.css" asp-append-version="true" />
</environment>

This TagHelper renders or doesn't render its contents depending on the current runtime environment. In this case the target environment is the development mode. The first environment tag renders its contents if the current runtime environment is set to Development, and the second one renders its contents if it is not set to Development. This makes it a useful helper to render debuggable scripts or styles in Development mode and minified and optimized code in any other runtime environment.

Creating custom TagHelpers

Just as a quick example, let's assume we want to be able to configure any tag to be bold and colored in a specific color:

<p strong color="red">Use this area to provide additional information.</p>

This looks like pretty old fashioned HTML out of the nineties, but it is just meant to demonstrate a simple TagHelper. It can be done by a TagHelper that extends any tag that has an attribute called strong:

[HtmlTargetElement(Attributes = "strong")]
public class StrongTagHelper : TagHelper
{
    public string Color { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.Attributes.RemoveAll("strong");

        output.Attributes.Add("style", "font-weight:bold;");
        if (!String.IsNullOrWhiteSpace(Color))
        {
            output.Attributes.RemoveAll("style");
            output.Attributes.Add("style", $"font-weight:bold;color:{Color};");
        }
    }
}

The first line tells the TagHelper to work on tags that have an attribute called strong. This TagHelper doesn't define its own tag, but it provides an additional attribute to specify the color. Finally, the Process method defines how to render the HTML to the output stream. In this case it adds some inline CSS styles to the current tag and removes the strong attribute from the tag. The color attribute won't show up in the output either, because it is bound to the Color property.

The result will look like this:

<p style="font-weight:bold;color:red;">Use this area to provide additional information.</p>

The next sample shows how to define a custom tag using a TagHelper:

public class GreeterTagHelper : TagHelper
{
    [HtmlAttributeName("name")]
    public string Name { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "p";
        output.Content.SetContent($"Hello {Name}");
    }
}

This TagHelper handles a greeter tag that has a name attribute. In the Process method the current tag is changed to a p tag and the new content is set to the current output.

<greeter name="Readers"></greeter>

The result is like this:

<p>Hello Readers</p>
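One thing to remember for all the custom TagHelpers shown here: Razor only picks them up if they are registered in the _ViewImports.cshtml of the project. The second assembly name below is just a placeholder for your own web project's assembly:

@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@addTagHelper *, MyWebApp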

A more complex scenario

The TagHelpers in the last section were pretty basic, just to show how TagHelpers work. The next sample is a little more complex and shows an almost real scenario: a TagHelper that renders a table out of a list of items. This is a generic TagHelper and shows a real reason to create your own custom TagHelpers. With this you are able to reuse an isolated piece of view code. You can, for example, wrap Bootstrap components to make them much easier to use, e.g. with just one tag instead of nesting five levels of div tags. Or you can just simplify your Razor views:

public class DataGridTagHelper : TagHelper
{
    [HtmlAttributeName("Items")]
    public IEnumerable<object> Items { get; set; }

    public override void Process(TagHelperContext context, TagHelperOutput output)
    {
        output.TagName = "table";
        output.Attributes.Add("class", "table");
        var props = GetItemProperties();

        TableHeader(output, props);
        TableBody(output, props);
    }

    private void TableHeader(TagHelperOutput output, PropertyInfo[] props)
    {
        output.Content.AppendHtml("<thead>");
        output.Content.AppendHtml("<tr>");
        foreach (var prop in props)
        {
            var name = GetPropertyName(prop);
            output.Content.AppendHtml($"<th>{name}</th>");
        }
        output.Content.AppendHtml("</tr>");
        output.Content.AppendHtml("</thead>");
    }

    private void TableBody(TagHelperOutput output, PropertyInfo[] props)
    {
        output.Content.AppendHtml("<tbody>");
        foreach (var item in Items)
        {
            output.Content.AppendHtml("<tr>");
            foreach (var prop in props)
            {
                var value = GetPropertyValue(prop, item);
                output.Content.AppendHtml($"<td>{value}</td>");
            }
            output.Content.AppendHtml("</tr>");
        }
        output.Content.AppendHtml("</tbody>");
    }

    private PropertyInfo[] GetItemProperties()
    {
        var listType = Items.GetType();
        Type itemType;
        if (listType.IsGenericType)
        {
            itemType = listType.GetGenericArguments().First();
            return itemType.GetProperties(BindingFlags.Public | BindingFlags.Instance);
        }
        return new PropertyInfo[] { };
    }

    private string GetPropertyName(PropertyInfo property)
    {
        var attribute = property.GetCustomAttribute<DisplayNameAttribute>();
        if (attribute != null)
        {
            return attribute.DisplayName;
        }
        return property.Name;
    }

    private object GetPropertyValue(PropertyInfo property, object instance)
    {
        return property.GetValue(instance);
    }
}

To use this TagHelper you just need to assign a list of items to this tag:

<data-grid items="Model.Persons"></data-grid>

In this case it is a list of persons that we get from the Persons property of our current model. The Person class I use here looks like this:

public class Person
{
    [DisplayName("First name")]
    public string FirstName { get; set; }
    
    [DisplayName("Last name")]
    public string LastName { get; set; }
    
    public int Age { get; set; }
    
    [DisplayName("Email address")]
    public string EmailAddress { get; set; }
}

Not all of the properties have a DisplayNameAttribute, so the fallback in the GetPropertyName method is needed to get the actual property name instead of the DisplayName value.

To use it in production this TagHelper needs some more checks and validations, but it works:
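Rendered for a small list of persons, the output of the DataGridTagHelper looks roughly like this (the person data is just made-up sample data):

<table class="table">
    <thead>
        <tr><th>First name</th><th>Last name</th><th>Age</th><th>Email address</th></tr>
    </thead>
    <tbody>
        <tr><td>Jane</td><td>Doe</td><td>35</td><td>jane@example.com</td></tr>
    </tbody>
</table>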

Now you are able to extend this TagHelper with a lot more features, like sorting, filtering, paging and so on. Feel free.

Conclusion

TagHelpers are pretty useful to reuse parts of the view and to simplify and clean up your views. You can also provide a library with useful view elements. Here are some more examples of existing TagHelper libraries and samples:

  • https://github.com/DamianEdwards/TagHelperPack
  • https://github.com/dpaquette/TagHelperSamples
  • https://www.red-gate.com/simple-talk/dotnet/asp-net/asp-net-core-tag-helpers-bootstrap/
  • https://www.jqwidgets.com/asp.net-core-mvc-tag-helpers/

This part was initially planned as the last part of this series, but I found some more interesting topics. If you also have some nice ideas to write about feel free to drop a comment in the introduction post of this series.

In the next post, I'm going to write about how to customize the hosting of ASP.NET Core web applications: Customizing ASP.NET Core Part 11: Hosting (not yet done)

Code-Inside Blog: HowTo: Run a Docker container using Azure Container Instances


Azure Container Instances

There are (at least) 3 different ways to run a Docker container on Azure.

In this blog post we will take a small look at how to run a Docker container on this service. The "Azure Container Instances" service is pretty easy to use and might be a good first start. I will do this step by step guide via the Azure Portal; you could also use the CLI or PowerShell. My guide is more or less the same as this one, but I will highlight some important points in my blog post, so feel free to check out the official docs.
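If you prefer the command line, the same container can in principle be created with the Azure CLI. The following is only a rough sketch with placeholder names (resource group, container name, image and DNS label are all made up) - check the current az documentation for the exact parameters:

az container create --resource-group my-resource-group --name my-container --image nginx --dns-name-label customlabel --ports 80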

Using Azure Container Instances

1. Add new…

At first search for “Container Instances” and this should show up:


2. Set base settings

Now - this is probably the most important step - choose the container name and the source of the image. Those settings can't be changed later on!

The image can be from a public Docker Hub repository or from a private Docker registry.

Important: If you are using a private Docker Hub repository, use 'index.docker.io' as the login server. It took me a while to figure that out.


3. Set container settings

Now you need to choose which OS and how powerful the machine should be.

Important: If you want an easy access via HTTP to your container, make sure to set a “DNS label”. With this label you access it like this: customlabel.azureregion.azurecontainer.io


Make sure to set any needed environment variables here.

Also keep in mind: You can’t change this stuff later on.

Ready

In the last step you will see a summary of the given settings:


Go

After you finish the setup your Docker Container should start after a short amount of time (depending on your OS and image of course).


The most important aspect here:

Check the status, which should be “running”. You can also see your applied FQDN.

Summary

This service is pretty easy to use. The setup itself is not hard, even if the UI sometimes seems "buggy", and if you can run your Docker container locally, you should also be able to run it on this service.

Hope this helps!

Christina Hirth : FFS Fix The Small Things

Kent Beck's hilarious rant against finding excuses when it comes to refactoring things.

Christina Hirth : My KanDDDinsky distilled

KanDDDinsky

The second edition of "KanDDDinsky – The art of business software" took place on the 18th-19th of October 2018. For me it was the best conference I have attended in a long time: the talks I attended created a coherent picture all together, and the speakers sometimes made me feel like I was visiting an Open Space, an UnConference. It felt like a great community event with the right amount of people with the right amount of knowledge and enough time for great discussions during the two days.

These are my takeaways and notes:

Michael Feathers “The Design of Names and Spaces” (Keynote)

  1. Do not be dogmatic; sometimes allow the ubiquitous language to drive you to the right data structure – but sometimes it is better to take the decision the other way around.
  2. Build robust systems, follow Postel’s Law

Be liberal in what you accept, and conservative in what you send.

If you ask me, this principle shouldn't only be applied to software development…

Kenny Baas-Schwegler – Crunching ‘real-life stories’ with DDD Event Storming and combining it with BDD

I learned so much from Kenny that I had to write it up in a separate blog post.

Kevlin Henney – What Do You Mean?

This talk was extremely entertaining and informative; you should watch it once it is published. Kevlin addressed so many thoughts around software development that it is impossible to pick a single message. And yes: the sentence "It's only semantics" still makes me angry!

Codified Knowledge
It is not semantics, it is meaning that we turn into code

Herendi Zsofia – Encouraging DDD Curiosity as a Product Owner

It was interesting to see a product owner talking about her efforts to make developers interested in the domain. It was somewhat curious because we were at a DDD conference – I'm sure everyone present was already interested in building the right features fitting the domain and the problem – but of course we are only a minority among the coding people. She belongs to the small minority of product owners who are openly interested in DDD. Thank you!

Matthias Verraes – Design Heuristics

This session was so informative that I will share what I have learned in a separate post.

J. B. Rainsberger – Some Underrated Elements of Success for the Modern Programmer

J.B. is my oldest "twitter-pal" and in the past 5+ years we have discussed everything from tests to wine or how to find whipped cream in a Romanian shopping center. But: we never met in person 😥 I am really happy that Marco and Janek fixed this for me!

The talk was just like I expected: clear, accurate, very informative. Here is a small subset of the tips shared by J.B.:

Save energy not time!

There are talks which cannot be distilled, and J.B.'s talk was exactly one of those. I will insert the link here once it is published, and I can only encourage everybody to invest the 60 minutes and watch it.

Statistics #womenInTech

I had the feeling that there were a lot of women at the conference, even if they represented "only" 10% (20 out of 200) of the participants. But still: 5-6 years ago I was mostly alone, and that is not the case anymore. This is great; I really think that something has changed in the last few years!

Finally: I can just repeat how I decide whether a conference was successful.

Code-Inside Blog: How to fix ERR_CONNECTION_RESET & ERR_CERT_AUTHORITY_INVALID with IISExpress and SSL

This post is a result of some pretty strange SSL errors that I encountered last weekend.

The scenario:

I tried to set up a development environment for a website that uses a self signed SSL cert. The problem occurred right after the start - especially Chrome displayed those wonderful error messages:

  • ERR_CONNECTION_RESET
  • ERR_CERT_AUTHORITY_INVALID

The “maybe” solution:

When you google the problem you will see a couple of possible solutions. I guess the first problem on my machine was that a previous cert was stale and thus created this issue. I then began to delete all localhost SSL & IIS Express related certs in the LocalMachine cert store. Maybe this was a dumb idea, because it caused more harm than it helped.

But: maybe this could solve your problem. Check your LocalMachine or CurrentUser cert store for stale certs.
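If you want to list the suspicious certs before deleting anything, a PowerShell snippet like the following can help. It is only a rough sketch: it lists certs with "localhost" in the subject together with their expiration dates and thumbprints, so you can spot stale ones:

Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*localhost*" } | Format-List Subject, NotAfter, Thumbprint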

How to fix the IIS Express?

Well - after I deleted the IIS Express certs I couldn’t get anything to work, so I tried to repair the IIS Express installation and boy… this is a long process.

The repair process via the Visual Studio Installer takes some minutes, and in the end I had the same problem again, but at least my IIS Express was working again.

How to fix the real problem?

After some more time (and I did repair the IIS Express at least 2 or 3 times) I tried the second answer from this Stackoverflow.com question:

cd "C:\Program Files (x86)\IIS Express"
IisExpressAdminCmd.exe setupsslUrl -url:https://localhost:44387/ -UseSelfSigned

And yeah - this worked. Phew…

Conclusion:

  • Don’t delete random IIS Express certs in your LocalMachine-Cert store.
  • If you do: Repair the IIS Express via the Visual Studio Installer (the option to repair IIS Express via the Programs & Feature management tool seems to be gone with VS 2017).
  • Try to setup the SSL cert with the “IisExpressAdminCmd.exe” - this helped me a lot.

I'm not sure if this really fixed my problem, but maybe it helps you:

You can "manage" some parts of the SSL stuff via "netsh" from a normal cmd prompt (PowerShell acts weird with netsh), e.g.:

netsh http delete sslcert ipport=0.0.0.0:44300
netsh http add sslcert ipport=0.0.0.0:44300 certhash=your_cert_hash_with_no_spaces appid={123a1111-2222-3333-4444-bbbbcccdddee}

Be aware: I remember that I deleted an sslcert via the netsh tool, but was unable to add one. After the IisExpressAdminCmd.exe step it worked for me.

Hope this helps!

Jürgen Gutsch: Customizing ASP.​NET Core Part 09: ActionFilter

This post is a little late this time. My initial plan was to publish two posts of this series per week, but this doesn't always work out, since there are sometimes more family and work tasks to do than expected.

Anyway, we keep on customizing on the controller level in this ninth post of this blog series. I'll have a look into ActionFilters and how to create your own ActionFilter to keep your actions small and readable.

The series topics

About ActionFilters

ActionFilters are a little bit like middlewares, but they are executed immediately on a specific action or on all actions of a specific controller. If you apply an ActionFilter as a global one, it executes on all actions in your application. ActionFilters are created to execute code right before or right after an action is executed. They are introduced to execute aspects that are not part of the actual action logic. Authorization is such an aspect. I'm sure you already know the AuthorizeAttribute that allows users or groups to access specific actions or controllers. The AuthorizeAttribute actually is an ActionFilter. It checks whether the logged-on user is authorized or not; if not, it redirects to the log-on page.

The next sample shows the skeletons of a normal ActionFilters and an async ActionFilter:

public class SampleActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // do something before the action executes
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // do something after the action executes
    }
}

public class SampleAsyncActionFilter : IAsyncActionFilter
{
    public async Task OnActionExecutionAsync(
        ActionExecutingContext context,
        ActionExecutionDelegate next)
    {
        // do something before the action executes
        var resultContext = await next();
        // do something after the action executes; resultContext.Result will be set
    }
}

As you can see, there are always two sections to place code that executes before and after the action is executed. These ActionFilters cannot be used as attributes. If you want to use ActionFilters as attributes on your Controllers, you need to derive from Attribute or from ActionFilterAttribute:

public class ValidateModelAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        if (!context.ModelState.IsValid)
        {
            context.Result = new BadRequestObjectResult(context.ModelState);
        }
    }
}

This code shows a simple ActionFilter which always returns a BadRequestObjectResult if the ModelState is not valid. This may be useful in a Web API as a default check on POST, PUT and PATCH requests. It could be extended with a lot more validation logic. We'll see how to use it later on.

Another possible use case for an ActionFilter is logging. You don't need to log in the Controllers and Actions directly. You can do this in an action filter to not clutter the actions with code that isn't relevant to them:

public class LoggingActionFilter : IActionFilter
{
    ILogger _logger;
    public LoggingActionFilter(ILoggerFactory loggerFactory)
    {

        _logger = loggerFactory.CreateLogger<LoggingActionFilter>();
    }

    public void OnActionExecuting(ActionExecutingContext context)
    {
        // do something before the action executes
        _logger.LogInformation($"Action '{context.ActionDescriptor.DisplayName}' executing");
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // do something after the action executes
        _logger.LogInformation($"Action '{context.ActionDescriptor.DisplayName}' executed");
    }
}

This logs an information message to the console. You are able to get more information about the current Action out of the ActionExecutingContext or the ActionExecutedContext, e.g. the arguments, the argument values and so on. This makes ActionFilters pretty useful.
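For example, an ActionFilter could dump the action arguments mentioned above. The next snippet is just a sketch to show the idea (the class name is made up and not part of the series' sample code):

public class LogArgumentsActionFilter : IActionFilter
{
    ILogger _logger;

    public LogArgumentsActionFilter(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<LogArgumentsActionFilter>();
    }

    public void OnActionExecuting(ActionExecutingContext context)
    {
        // ActionArguments contains the model-bound parameter values by name
        foreach (var argument in context.ActionArguments)
        {
            _logger.LogInformation($"Argument '{argument.Key}': {argument.Value}");
        }
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // the Result property contains the IActionResult the action returned
        _logger.LogInformation($"Result type: {context.Result?.GetType().Name}");
    }
}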

Using the ActionFilters

ActionFilters that actually are Attributes can be registered as an attribute of an Action or a Controller:

[HttpPost]
[ValidateModel] // ActionFilter as attribute
public ActionResult<Person> Post([FromBody] Person model)
{
    // save the person
    
	return model; //just to test the action
}

Here we use the ValidateModelAttribute that checks the ModelState and returns a BadRequestObjectResult in case the ModelState is invalid, so I don't need to check the ModelState in the actual Action.

To register ActionFilters globally you need to extend the MVC registration in the ConfigureServices method of the Startup.cs:

services.AddMvc()
    .AddMvcOptions(options =>
    {
        options.Filters.Add(new SampleActionFilter());
        options.Filters.Add(new SampleAsyncActionFilter());
    });

ActionFilters registered like this are getting executed on every action. This way you are able to use ActionFilters that don't derive from Attribute.

The LoggingActionFilter we created previously is a little more special. It depends on an instance of an ILoggerFactory, which needs to be passed into the constructor. This won't work well as an attribute, because Attributes don't support constructor injection via dependency injection. The ILoggerFactory is registered in the ASP.NET Core dependency injection container and needs to be injected into the LoggingActionFilter.

Because of this there are some more ways to register ActionFilters. Globally we are able to register them as a type that gets instantiated by the dependency injection container, so the dependencies can be resolved by the container:

services.AddMvc()
    .AddMvcOptions(options =>
    {
        options.Filters.Add<LoggingActionFilter>();
    });

This works well. We now have the ILoggerFactory available in the filter.

To support automatic resolution in Attributes, you need to use the ServiceFilterAttribute on the Controller or Action level:

[ServiceFilter(typeof(LoggingActionFilter))]
public class HomeController : Controller
{

In addition to the global filter registration, the ActionFilter needs to be registered in the ServiceCollection before we can use it with the ServiceFilterAttribute:

services.AddSingleton<LoggingActionFilter>();

To be complete, there is another way to use ActionFilters that need arguments passed into the constructor. You can use the TypeFilterAttribute to automatically instantiate the filter. But using this attribute the filter isn't instantiated by the dependency injection container and the arguments need to be specified as arguments of the TypeFilterAttribute. See the next snippet from the docs:

[TypeFilter(typeof(AddHeaderAttribute),
    Arguments = new object[] { "Author", "Juergen Gutsch (@sharpcms)" })]
public IActionResult Hi(string name)
{
    return Content($"Hi {name}");
}

The type of the filter and the arguments are specified with the TypeFilterAttribute.

Conclusion

Personally I like the way ActionFilters keep the actions clean. If I find repeating tasks inside my Actions that are not really relevant to the actual responsibility of the Action, I try to move them out to an ActionFilter, or maybe a ModelBinder or a middleware, depending on how globally they should work. The more it is relevant to an Action, the more likely I use an ActionFilter.

There are some more kinds of filters, which all work similarly. To learn more about the different kinds of filters, you definitely need to read the docs.

In the tenth part of the series we move to the actual view logic and extend the Razor Views with custom TagHelpers: Customizing ASP.NET Core Part 10: TagHelpers

Christina Hirth : Event Storming with Specifications by Example

Event Storming is a technique defined and refined by Alberto Brandolini (@ziobrando). I fully agree with this statement about the method: Event Storming is for now “The smartest approach to collaboration beyond silo boundaries”.

I don’t want to explain what Event Storming is; the concept has been present in the IT world for a few years already and there are a lot of articles and videos explaining the basics. What I want to emphasize is WHY we need to learn and apply this technique:

The knowledge of the product experts may differ from the assumption of the developers
KanDDDinsky 2018 – Kenny Baas-Schwegler

On 18-19.10.2018 I had the opportunity to not only hear a great talk about Event Storming but also to be part of a 2-hour hands-on session, all this powered by KanDDDinsky (for me the best conference I visited this year) and by @kenny_baas (and @use case driven and @brunoboucard). In the last few years I participated in a few Event Storming sessions, mostly at community events, twice at cleverbridge, but this time it was different. Maybe ES is like unit testing: you have to practice and reflect on what went well and what must be improved. Anyway, this time I learned and observed a few rules and principles that were new to me, and their effects on the outcome. This is what I want to share here.

  1. You need a facilitator.
    Both ES sessions I was part of at cleverbridge ended in frustration. All participants were willing to try it out but we had nobody to keep the chaos under control. Because as Kenny said, “There will be chaos, this is guaranteed.” But this is OK; we – devs, product owners, sales people, etc. – have to learn fast to understand each other without learning the job of the “other party” or writing a glossary (I tried that already and it didn’t help 😐 ). Also we need somebody able to feel and steer the dynamics in the room.


    The tweets were written during a discussion about who could be a good facilitator. You can read the whole thread on Twitter if you like. Another good article summarizing the first impressions of @mathiasverraes as facilitator is this one.

  2. Explain AND visualize the rules beforehand.
    I skip for now the basics like the necessity of a very long free wall and that the events should visualize the business process evolving in time.
    These are the additional rules I learned in the hands-on session:

      1. No dev-talk! The developer is per se a species able to transform EVERYTHING into patterns and techniques and tables and columns, and this ability is not helpful if one wants to know whether we can solve a problem together. By using dev-speech the discussion will be driven towards the technical “solvability” based on the current technical constraints like architecture. With ES we want to create or deepen our ubiquitous language, and this surely does not include the word “Message Bus” 😉
      2. Every discussion should happen on the board. There will be a lot of discussions and we tend to talk a lot about opinions and feelings. This won’t happen if we keep the discussion about the business processes and events which are visualized in front of us – on the board.
      3. No discussions regarding persons not in the room. Discussing what we think other people might think is not productive and cannot lead to real results. Do not waste time with it; time is too short anyway.
      4. Open questions occurring during the storming should not be discussed (see the point above) but marked prominently with a red sticky. Do not waste time.
      5. Do not discuss everything; look for the money! The most important goal is to generate benefit and not to create the most beautiful design!

Tips for the Storming:

  • “one person, one sharpie, one set of stickies”: everybody has important things to say, nobody should stay away from the board and the discussions.
  • start with describing the business process, business rules, eventually consistent business decisions aka policies, and other constraints you – or the product owner to whom the business “belongs” – would like to model, and write the most important information somewhere visible for everybody.
  • explain how ES works: every business-relevant event should be placed on a time line and should be formulated in the past tense. Business relevant is everything somebody (Kibana is not a person, sorry 😉 ) would like to know about.
  • explain the rules and the legend (you need a color legend to be able to read the results later).
  • give the participants time (we had 15 minutes) to write every business event they think is important to know about on orange stickies. Also write the business rules (the wide dark red ones) and the product decisions (the wide pink ones) on stickies and put them where they are applied: the rules before the event, the policies after an event happened.
  • start putting the stickies on the wall, throw away the duplicates, discuss and maybe reformulate the rest. After you are done, try to tell the story based on what you can read on the wall. After this, read the stickies from the end to the start. With these methods you should be able to discover whether you have gaps or wrong assumptions in the process you wanted to describe.
  • mark known processes (like “manual process”) with the same stickies as the policies and do not waste time discussing it further.
  • start to discuss the open questions. Almost always there are different ways to answer these questions, and if you cannot decide in a few seconds then postpone the decision. But as a default: decide to create the event and measure how often it happens so that later on you can make the right decision!
    Event Storming – measure now, decide later


    Another good article for this topic is this one from @thinkb4coding

At this point we could have continued with the process to find aggregates and bounded contexts but we didn’t. Instead we switched the methodology to Specifications by Example – in my opinion a really good idea!

Specifications
Event Storming enhanced with Specifications by Example

We prioritized the rules and policies and for the most important ones we defined examples – just like we do when we discuss a feature and try to find the algorithm.

Example: in our ticket reservation business we had a rule saying “no overbooking, one ticket per seat”. In order to find the algorithm we defined different examples:

  • 4 tickets should be reserved and there are 5 tickets left
  • 4 tickets should be reserved and there are 3 tickets left
  • 4 tickets should be reserved and all tickets are already reserved.

With this last step we can verify whether our ideas and assumptions will work out, and we can gain even more insights about the business rules and business policies we defined – and all this not as a developer writing if-else blocks but together with the other stakeholders. At the same time the non-techie people will understand in the future what impact these rules and decisions have on the product we build together. Having the specifications already defined is also a great benefit, as these are the acceptance tests which will be built by the developers and read and used by the product owner.

You can read more about the example and the results on the blog of Kenny Baas-Schwegler.

I hope I covered everything and succeeded in reproducing the most important learnings of the 2 days (I tend to overlook things, thinking “it is obvious”). If not: feel free to ask, I will be happy to answer 🙂

Happy Storming!

Holger Schwichtenberg: Free talk on PowerShell Core 6.1 on November 7, 2018 in Gelsenkirchen

The Dotnet-Doktor shows the differences between the old Windows PowerShell 5.1 and PowerShell Core 6.1 at the ".NET Developers Ruhr" user group in Gelsenkirchen and demonstrates some areas of use.

Jürgen Gutsch: Customizing ASP.​NET Core Part 08: ModelBinders

In the last post about OutputFormatters I wrote about sending data out to the clients in different formats. In this post we are going to do it the other way around. This post is about data you get into your Web API from outside. What if you get data in a special format, or what if you get data you need to validate in a special way? ModelBinders will help you handle this.

The series topics

About ModelBinders

ModelBinders are responsible for binding the incoming data to specific action method parameters. They bind the data sent with the request to the parameters. The default binders are able to bind data that is sent via the query string or within the request body. Within the body the data can be sent in URL-encoded form format or as JSON.

The model binding tries to find the values in the request by the parameter names. The form values, the route data and the query string values are stored as a key-value pair collection and the binding tries to find the parameter name in the keys of the collection.
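To illustrate that, a small sketch: assuming a controller like the PersonsController used later in this post, the values for the parameters below would simply be matched by name against the route data and the query string:

// GET /api/persons/42?format=xml
// "id" gets bound from the route, "format" from the query string
[HttpGet("{id}")]
public ActionResult<string> Get(int id, string format)
{
    return $"Person {id} requested as {format}";
}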

Preparation of the test project

In this post I'd like to send CSV data to a Web API method. I will reuse the CSV data we created in the last post:

Id,FirstName,LastName,Age,EmailAddress,Address,City,Phone
48,Samantha,White,18,Angel.Morgan@shaw.ca,"8202 77th Street ",Mascouche,(682) 381-4092
1,Eric,Wright,2,Briana.Ross@gmx.com,"8104 Scott Avenue ",Canutillo,(253) 366-5637
55,Amber,Watson,46,Sarah.Foster@gmx.com,"9206 Lewis Avenue ",Coleman,(632) 375-4415
99,Alexander,King,59,Ross.Timms@live.com,"3089 Paerdegat 7th Street ",Monte Alto,(366) 319-4154
69,Autumn,Hayes,25,Mark.Diaz@shaw.ca,"3263 Avenue O  ",Montreal West (Montréal-Ouest),(283) 438-7801
94,Destiny,James,47,Kylie.Walker@telus.net,"1057 14th Street ",Montreal,(570) 574-3208
59,Christina,Bennett,87,Madeline.Adams@att.com,"5672 19th Lane ",Corrigan,(467) 304-0309
71,Isaac,Hayes,33,Trevor.Robinson@hotmail.com,"9707 Langham Street ",Huntington,(635) 317-0231
23,Jason,Morgan,77,Jennifer.Powell@rogers.ca,"4413 Debevoise Avenue ",Pinole,(265) 467-1984
43,Jenna,Brandzin,92,Natalie.Reed@gmail.com,"4691 Sea Breeze Avenue ",Cushing-Douglass,(502) 427-9135
79,Madison,Verstraete,69,Abigail.Wright@hotmail.com,"2066 104th Street ",Moose Lake,(448) 423-7550
80,Lorrie,Long,89,Melissa.Bennett@microsoft.com,"3048 Allen Avenue ",Munday,(576) 707-6183
79,Alejandro,Daeninck,51,Matthew.Phillips@att.com,"9997 41st Street ",North Bay,(455) 297-2648
14,Makayla,Clark,44,Joshua.Jackson@rogers.ca,"4518 Folsom Place ",Cortland,(772) 692-0732
12,Isaac,Sanchez,37,Paige.MacKenzie@live.com,"2094 Mc Kenny Street ",Brockville,(563) 735-0233
68,Jesus,Brandzin,34,Molly.Clark@telus.net,"3532 Durland Place ",Comfort,(627) 319-9704
59,Logan,Howard,59,Jorge.Brandzin@rogers.ca,"3458 Wythe Avenue ",Enderby,(226) 520-9653
48,Nathaniel,Richardson,58,Amanda.Pitt@gmail.com,"6926 Sunnyside Court ",Los Altos Hills,(513) 338-4602
34,Tiffany,Miller,18,Claire.Alexander@att.com,"1985 Devon Avenue ",Sansom Park,(357) 274-3606

So let's start by creating a new project using the .NET CLI:

dotnet new webapi -n ModelBinderSample -o ModelBinderSample

This creates a new Web API project.

In this new project I created a new controller with a small action inside:

namespace ModelBinderSample.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class PersonsController : ControllerBase
    {
        public ActionResult<object> Post(IEnumerable<Person> persons)
        {
            return new
            {
                ItemsRead = persons.Count(),
                Persons = persons
            };
        }
    }
}

This looks basically like any other action. It accepts a list of persons and returns an anonymous object that contains the number of persons as well as the list of persons. This action is pretty useless, but helps us to debug the ModelBinder using Postman.

We also need the Person class:

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string EmailAddress { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }
}

This actually would work fine if we sent JSON-based data to that action.

As a last preparation step, we need to add the CsvHelper NuGet package to parse the CSV data more easily. I also love to use the .NET CLI here:

dotnet add package CsvHelper

Creating a CsvModelBinder

To create the ModelBinder, add a new class called CsvModelBinder, which implements IModelBinder. The next snippet shows a generic binder that should work with any list of models:

public class CsvModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        if (bindingContext == null)
        {
            throw new ArgumentNullException(nameof(bindingContext));
        }

        // Specify a default argument name if none is set by ModelBinderAttribute
        var modelName = bindingContext.ModelName;
        if (String.IsNullOrEmpty(modelName))
        {
            modelName = "model";
        }

        // Try to fetch the value of the argument by name
        var valueProviderResult = bindingContext.ValueProvider.GetValue(modelName);
        if (valueProviderResult == ValueProviderResult.None)
        {
            return Task.CompletedTask;
        }

        bindingContext.ModelState.SetModelValue(modelName, valueProviderResult);

        var value = valueProviderResult.FirstValue;
        // Check if the argument value is null or empty
        if (String.IsNullOrEmpty(value))
        {
            return Task.CompletedTask;
        }

        var stringReader = new StringReader(value);
        var reader = new CsvReader(stringReader);

        var modelElementType = bindingContext.ModelMetadata.ElementType;
        var model = reader.GetRecords(modelElementType).ToList();

        bindingContext.Result = ModelBindingResult.Success(model);

        return Task.CompletedTask;
    }
}

In the method BindModelAsync we get the ModelBindingContext with all the information in it we need to get the data and to de-serialize it.

First the context gets checked against null values. After that we set a default argument name to model, if none is specified. Once this is done we are able to fetch the value by the name we previously set.

If there's no value, we shouldn't throw an exception in this case. The reason is that maybe the next configured ModelBinder is responsible. If we throw an exception, the execution of the current request is broken and the next configured ModelBinder doesn't get the chance to be executed.

With a StringReader we read the value into the CsvReader and de-serialize it to the list of models. We get the type for the de-serialization out of the ModelMetadata property. This contains all the relevant information about the current model.

Using the ModelBinder

The binder isn't used automatically, because it isn't registered in the dependency injection container and isn't configured for use within the MVC framework.

The easiest way to use this model binder is to use the ModelBinderAttribute on the argument of the action where the model should be bound:

[HttpPost]
public ActionResult<object> Post(
    [ModelBinder(binderType: typeof(CsvModelBinder))] 
    IEnumerable<Person> persons)
{
    return new
    {
        ItemsRead = persons.Count(),
        Persons = persons
    };
}

Here the type of our CsvModelBinder is set as binderType to that attribute.

Steve Gordon wrote about a second option in his blog post: Custom ModelBinding in ASP.NET MVC Core. He uses a ModelBinderProvider to add the ModelBinder to the list of existing ones.

I personally prefer the explicit declaration, because most custom ModelBinders will be pretty specific to an action or to a specific type and there's no hidden magic in the background.
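Just to give you an idea of how the provider approach could look: the next snippet is a simplified sketch (not the exact code from Steve's post) that returns the CsvModelBinder for enumerable models. A real provider would probably also check the request content type:

public class CsvModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException(nameof(context));
        }

        // only handle enumerable models; all other models fall through
        // to the next registered provider
        if (context.Metadata.IsEnumerableType)
        {
            return new BinderTypeModelBinder(typeof(CsvModelBinder));
        }

        return null;
    }
}

Such a provider would then be registered in the MvcOptions, e.g. with options.ModelBinderProviders.Insert(0, new CsvModelBinderProvider());.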

Testing the ModelBinder

To test it, we need to create a new request in Postman. I set the request type to POST and put the URL https://localhost:5001/api/persons in the address bar. Now I need to add the CSV data to the body of the request. Because it is a URL-encoded form body, I needed to put the data into a persons variable in the body:

persons=Id,FirstName,LastName,Age,EmailAddress,Address,City,Phone
48,Samantha,White,18,Angel.Morgan@shaw.ca,"8202 77th Street ",Mascouche,(682) 381-4092
1,Eric,Wright,2,Briana.Ross@gmx.com,"8104 Scott Avenue ",Canutillo,(253) 366-5637
55,Amber,Watson,46,Sarah.Foster@gmx.com,"9206 Lewis Avenue ",Coleman,(632) 375-4415

After pressing send, I got the result as shown below:

Now the clients are able to send CSV based data to the server.

Conclusion

This is a good way to transform the input into the shape the action really needs. You could also use ModelBinders to do some custom validation against the database or whatever you need to do before the model gets passed to the action.

To learn more about ModelBinders, you need to have a look into the pretty detailed documentation:

While playing around with the ModelBinderProvider Steve describes in his blog, I stumbled upon InputFormatters. Would this actually be the right way to transform CSV input into objects? I definitely need to learn some more details about InputFormatters and will use this as the 12th topic of this series.

Please follow the introduction post of this series to find additional customizing topics I will write about.

In the next part I will show you what you can do with ActionFilters: Customizing ASP.NET Core Part 09: ActionFilter

Jürgen Gutsch: Customizing ASP.​NET Core Part 07: OutputFormatter

In this seventh post I want to write about how to send your data in different formats and types to the client. By default the ASP.NET Core Web API sends the data as JSON, but there are some more ways to send the data.

The series topics

About OutputFormatters

OutputFormatters are classes that turn your data into a different format to send it through HTTP to the clients. Web API uses a default OutputFormatter to turn objects into JSON, which is the default format to send data in a structured way. Other built-in formatters are an XML formatter and a plain text formatter.

With the so-called content negotiation the client is able to decide which format it wants to retrieve. The client needs to specify the content type of the format in the Accept header. The content negotiation is implemented in the ObjectResult.

By default the Web API always returns JSON, even if you accept text/xml in the header. This is why the built-in XML formatter is not registered by default. There are two ways to add an XmlSerializerOutputFormatter to ASP.NET Core:

services.AddMvc()
    .AddXmlSerializerFormatters();

or

services.AddMvc(options =>
{
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
});

There is also an XmlDataContractSerializerOutputFormatter available.

Also any Accept header gets turned into application/json. If you want to allow the clients to accept different headers, you need to switch that translation off:

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
});

To try the formatters let's setup a small test project.

Prepare a test project

Using the console we will create a small ASP.NET Core Web API project. Execute the following commands line by line:

dotnet new webapi -n WebApiTest -o WebApiTest
cd WebApiTest
dotnet add package GenFu
dotnet add package CsvHelper

This creates a new Web API project and adds two NuGet packages to it. GenFu is an awesome library to easily create test data. The second one helps us to easily write CSV data.

Now open the project in Visual Studio or in Visual Studio Code and open the ValuesController.cs and change the Get() method like this:

[HttpGet]
public ActionResult<IEnumerable<Person>> Get()
{
	var persons = A.ListOf<Person>(25);
	return persons;
}

This creates a list of 25 Persons using GenFu. The properties get automatically filled with almost realistic data. You'll see the magic of GenFu and the results later on.

In the Models folder create a new file Person.cs with the Person class inside:

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string EmailAddress { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }
}

Open the Startup.cs as well and add the Xml formatters and allow other accept headers as described earlier:

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
});

That's it for now. Now you are able to retrieve the data from the Web API. Start the project by using the dotnet run command.

The best tools to test a Web API are Fiddler or Postman. I prefer Postman because it is easy to use. In the end it doesn't matter which tool you want to use. In these demos I'm going to use Postman.

Inside Postman I create a new request. I write the API URL into the address field, which is https://localhost:5001/api/values, and I add a header with the key Accept and the value application/json.

After I press send I will see the JSON result in the response body below:

Here you can see the auto-generated values. GenFu puts the data in based on the property type and the property name. So it puts real first names and last names as well as real cities and phone numbers into the Person's properties.

Now let's test the XML output formatter.

In Postman change the Accept header from application/json to text/xml and press send:

We now have an XML formatted output.
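Of course, the client doesn't need to be Postman. A small sketch with an HttpClient that asks the API (assuming the URL used above) for XML instead of JSON could look like this:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ContentNegotiationClient
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // ask the Web API for XML instead of the default JSON
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("text/xml"));

            var response = await client.GetAsync("https://localhost:5001/api/values");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}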

Now let's go a step further and create some custom OutputFormatters.

Custom OutputFormatters

The plan is to create a VCard output to be able to import the person contacts directly into Outlook or any other contact database that supports VCards. Later in this section we also want to create a CSV output formatter.

Both are text based output formatters and will derive from TextOutputFormatter. Create a new class in a new file called VcardOutputFormatter.cs:

public class VcardOutputFormatter : TextOutputFormatter
{
    public string ContentType { get; }

    public VcardOutputFormatter()
    {
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/vcard"));

        SupportedEncodings.Add(Encoding.UTF8);
        SupportedEncodings.Add(Encoding.Unicode);
    }

    // optional, but makes sense to restrict to a specific condition
    protected override bool CanWriteType(Type type)
    {
        if (typeof(Person).IsAssignableFrom(type) 
            || typeof(IEnumerable<Person>).IsAssignableFrom(type))
        {
            return base.CanWriteType(type);
        }
        return false;
    }

    // this needs to be overwritten
    public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
    {
        var serviceProvider = context.HttpContext.RequestServices;
        var logger = serviceProvider.GetService(typeof(ILogger<VcardOutputFormatter>)) as ILogger;

        var response = context.HttpContext.Response;

        var buffer = new StringBuilder();
        if (context.Object is IEnumerable<Person>)
        {
            foreach (var person in context.Object as IEnumerable<Person>)
            {
                FormatVcard(buffer, person, logger);
            }
        }
        else
        {
            var person = context.Object as Person;
            FormatVcard(buffer, person, logger);
        }
        return response.WriteAsync(buffer.ToString());
    }

    private static void FormatVcard(StringBuilder buffer, Person person, ILogger logger)
    {
		buffer.AppendLine("BEGIN:VCARD");
		buffer.AppendLine("VERSION:2.1");
		buffer.AppendLine($"FN:{person.FirstName} {person.LastName}");
		buffer.AppendLine($"N:{person.LastName};{person.FirstName}");
		buffer.AppendLine($"EMAIL:{person.EmailAddress}");
		buffer.AppendLine($"TEL;TYPE=VOICE,HOME:{person.Phone}");
		buffer.AppendLine($"ADR;TYPE=home:;;{person.Address};{person.City}");            
		buffer.AppendLine($"UID:{person.Id}");
		buffer.AppendLine("END:VCARD");
		logger.LogInformation($"Writing {person.FirstName} {person.LastName}");
    }
}

In the constructor we need to specify the supported media types and encodings. In the method CanWriteType() we need to check whether the current type is supported within this output formatter. Here we only want to format a single Person or a list of Persons.

The method WriteResponseBodyAsync() then actually writes the list of persons out to the response stream via a StringBuilder.

Finally we need to register the new VcardOutputFormatter in the Startup.cs:

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    
    // register the VcardOutputFormatter
    options.OutputFormatters.Add(new VcardOutputFormatter()); 
});

Start the app again using dotnet run. Now change the Accept header to text/vcard and let's see what happens:

We now should see our data in the VCard format.

Let's do the same for a CSV output. We already added the CsvHelper library to the project, so you can just copy the next snippet into your project:

public class CsvOutputFormatter : TextOutputFormatter
{
    public string ContentType { get; }

    public CsvOutputFormatter()
    {
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/csv"));

        SupportedEncodings.Add(Encoding.UTF8);
        SupportedEncodings.Add(Encoding.Unicode);
    }

    // optional, but makes sense to restrict to a specific condition
    protected override bool CanWriteType(Type type)
    {
        if (typeof(Person).IsAssignableFrom(type)
            || typeof(IEnumerable<Person>).IsAssignableFrom(type))
        {
            return base.CanWriteType(type);
        }
        return false;
    }

    // this needs to be overwritten
    public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
    {
        var serviceProvider = context.HttpContext.RequestServices;
        var logger = serviceProvider.GetService(typeof(ILogger<CsvOutputFormatter>)) as ILogger;

        var response = context.HttpContext.Response;

        var streamWriter = new StreamWriter(response.Body);
        var csv = new CsvWriter(streamWriter);

        if (context.Object is IEnumerable<Person>)
        {
            var persons = context.Object as IEnumerable<Person>;
            csv.WriteRecords(persons);
        }
        else
        {
            var person = context.Object as Person;
            csv.WriteRecord<Person>(person);
        }

        // flush the StreamWriter, otherwise buffered output may not reach the response
        streamWriter.Flush();

        return Task.CompletedTask;
    }
}

This almost works the same way. We can pass the response stream via a StreamWriter directly into the CsvWriter. After that we are able to feed the writer with the person or the list of persons. At the end we flush the StreamWriter, otherwise the buffered CSV output might not be written to the response completely. That's it.

We also need to register the CsvOutputFormatter before we can test it.

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    
    // register the VcardOutputFormatter
    options.OutputFormatters.Add(new VcardOutputFormatter()); 
	// register the CsvOutputFormatter
    options.OutputFormatters.Add(new CsvOutputFormatter()); 
});

In Postman change the Accept header to text/csv and press send again:

Conclusion

Isn't that cool? I really like the way to change the format based on the Accept header. This way you are able to create a Web API for many different clients that accepts many different formats. There are still a lot of potential clients out there which don't use JSON and prefer XML or CSV.

The other way around would be an option to consume CSV or any other format inside the Web API. Let's assume your client would send you a list of persons in CSV format. How would you solve this? Parsing the string manually in the action method would work, but it's not a nice option. This is what ModelBinders can do for us. Let's see how this works in the next chapter about Customizing ASP.NET Core Part 08: ModelBinders.

Holger Schwichtenberg: Creating Azure services via PowerShell

A PowerShell script quickly creates several web servers and databases in Microsoft's Azure cloud.

Golo Roden: Linked lists in SQL

Common Table Expressions (CTEs) allow recursive queries on SQL tables. Among other things, this is handy for storing and querying singly linked lists efficiently in relational databases.

Jürgen Gutsch: Customizing ASP.​NET Core Part 06: Middlewares

Wow, it is already the sixth part of this series. In this post I'm going to write about middlewares and how you can use them to customize your app a little more. I'll quickly go through the basics about middlewares and then I'll write about some more special things you can do with middlewares.

The series topics

About middlewares

Most of you already know what middlewares are, but some of you maybe don't. Even if you have already used ASP.NET Core for a while, you don't really need to know details about middlewares, because they are mostly hidden behind nicely named extension methods like UseMvc(), UseAuthentication(), UseDeveloperExceptionPage() and so on. Every time you call a Use-method in the Startup.cs in the Configure method, you'll implicitly use at least one or maybe more middlewares.

A middleware is a piece of code that handles the request pipeline. Imagine the request pipeline as a huge tube where you call something in and an echo comes back. The middlewares are responsible for creating this echo, manipulating the sound, enriching the information, or handling the source sound or the echo.

Middlewares are executed in the order they are configured. The first configured middleware is the first that gets executed.

In an ASP.NET Core web, if the client requests an image or any other static file, the StaticFileMiddleware searches for that resource and returns it if it finds one. If not, this middleware does nothing except call the next one. If there is no last middleware that handles the request, the request returns nothing. The MvcMiddleware also checks the requested resource, tries to map it to a configured route, executes the controller, creates a view and returns an HTML or Web API result. If the MvcMiddleware doesn't find a matching controller, it will still return a result, in this case a 404 status result. It returns an echo in any case. This is why the MvcMiddleware is the last configured middleware.

(Image source: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-2.1)

An exception handling middleware usually is one of the first configured middlewares, not because it gets executed first, but because it gets executed last. The first configured middleware is also the last one when the echo comes back through the tube. An exception handling middleware validates the result and displays a possible exception in a browser- and client-friendly way. This is where a runtime error gets a 500 status.

You are able to see how the pipeline is executed if you create an empty ASP.NET Core application. I usually use the console and the .NET CLI tools:

dotnet new web -n MiddleWaresSample -o MiddleWaresSample
cd MiddleWaresSample

Open the Startup.cs with your favorite editor. It should be pretty empty compared to a regular ASP.NET Core application:

public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
    public void ConfigureServices(IServiceCollection services)
    {
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        
        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}

The DeveloperExceptionPageMiddleware is used here, along with a special lambda middleware that only writes "Hello World!" to the response stream. The response stream is the echo I wrote about previously. This special middleware stops the pipeline and returns something as an echo. So it is the last one.

Leave this middleware and add the following lines right before the app.Run():

app.Use(async (context, next) =>
{
    await context.Response.WriteAsync("===");
    await next();
    await context.Response.WriteAsync("===");
});
app.Use(async (context, next) =>
{
    await context.Response.WriteAsync(">>>>>> ");
    await next();
    await context.Response.WriteAsync(" <<<<<<");
});

These two calls of app.Use() also create two lambda middlewares, but this time the middlewares are calling the next ones. Each middleware knows the next one and calls it. Both middlewares write to the response stream before and after the next middleware is called. This should demonstrate how the pipeline works. Before the next middleware is called, the actual request is handled, and after the next middleware is called, the response (echo) is handled.

If you now run the application (using dotnet run) and open the displayed URL in the browser, you should see a plain text result like this:

===>>>>>> Hello World! <<<<<<===

Does this make sense to you? If yes, let's see how to use this concept to add some additional functionality to the request pipeline.

Writing a custom middleware

ASP.NET Core is based on middlewares. All the logic that gets executed during a request is somehow based on a middleware. So we are able to use this to add custom functionality to the web. We want to know the execution time of every request that goes through the request pipeline. I do this by creating and starting a Stopwatch before the next middleware is called and by stopping the measurement after the next middleware is called:

app.Use(async (context, next) =>
{
    var s = new Stopwatch();
    s.Start();
    
    // execute the rest of the pipeline
    await next();
    
    s.Stop(); //stop measuring
    var result = s.ElapsedMilliseconds;
    
    // write out the milliseconds needed
    await context.Response.WriteAsync($"Time needed: {result }");
});

After that I write out the elapsed milliseconds to the response stream.

If you write some more middlewares, the Configure method in the Startup.cs gets pretty messy. This is why most middlewares are written as separate classes. This could look like this:

public class StopwatchMiddleware
{
    private readonly RequestDelegate _next;

    public StopwatchMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var s = new Stopwatch();
        s.Start();

        // execute the rest of the pipeline
        await _next(context);

        s.Stop(); //stop measuring
        var result = s.ElapsedMilliseconds;

        // write out the milliseconds needed
        await context.Response.WriteAsync($"Time needed: {result }");
    }
}

This way we get the next middleware via the constructor and the current context in the Invoke() method.

Note: The Middleware is initialized on the start of the application and exists once during the application lifetime. The constructor gets called once. On the other hand the Invoke() method is called once per request.
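Because of this single instance, you shouldn't inject scoped services (like an Entity Framework Core DbContext) through the middleware constructor. Additional parameters of the Invoke() method, however, get resolved from the dependency injection container per request. A small sketch, assuming a hypothetical IRequestCounter service registered in the ServiceCollection:

public class RequestCounterMiddleware
{
    private readonly RequestDelegate _next;

    public RequestCounterMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // additional Invoke() parameters are resolved from the request's DI scope
    public async Task Invoke(HttpContext context, IRequestCounter requestCounter)
    {
        requestCounter.Count(context.Request.Path);

        await _next(context);
    }
}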

To use this middleware, there is a generic UseMiddleware() method available you can use in the configure method:

app.UseMiddleware<StopwatchMiddleware>();

The more elegant way is to create an extension method that encapsulates this call:

public static class StopwatchMiddlewareExtension
{
    public static IApplicationBuilder UseStopwatch(this IApplicationBuilder app)
    {
        app.UseMiddleware<StopwatchMiddleware>();
        return app;
    }
}

Now you can simply call it like this:

app.UseStopwatch();

This is the way you can provide additional functionality to an ASP.NET Core web through the request pipeline. You are able to manipulate the request or even the response using middlewares.

The AuthenticationMiddleware for example tries to read user information from the request. If it doesn't find any, it asks the client for it by sending a specific response back to the client. If it finds some, it adds the information to the request context and makes it available to the entire application this way.

What else can we do using middlewares?

Did you know that you can divert the request pipeline into two or more branches?

The next snippet shows how to create branches based on specific paths:

app.Map("/map1", app1 =>
{
    // some more middlewares
    
    app1.Run(async context =>
    {
        await context.Response.WriteAsync("Map Test 1");
    });
});

app.Map("/map2", app2 =>
{
    // some more middlewares
    
    app2.Run(async context =>
    {
        await context.Response.WriteAsync("Map Test 2");
    });
});

// some more middlewares

app.Run(async (context) =>
{
    await context.Response.WriteAsync("Hello World!");
});

The path "/map1" is a specific branch that continues the request pipeline inside. The same with "/map2". Both maps have their own middleware configurations inside. All other not specified paths will follow the main branch.

There's also a MapWhen() method to branch the pipeline based on a condition instead of a path:

public void Configure(IApplicationBuilder app)
{
    app.MapWhen(context => context.Request.Query.ContainsKey("branch"),
                app1 =>
    {
        // some more middlewares
    
        app1.Run(async context =>
        {
            await context.Response.WriteAsync("MapBranch Test");
        });
    });

    // some more middlewares
    
    app.Run(async context =>
    {
        await context.Response.WriteAsync("Hello from non-Map delegate. <p>");
    });
}

You can create conditions based on configuration values or as shown here, based on properties of the request context. In this case a query string property is used. You can use HTTP headers, form properties or any other property of the request context.
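For example, a branch based on an HTTP header could look like this (the header name is just made up for this sketch):

app.MapWhen(context => context.Request.Headers.ContainsKey("X-Api-Key"),
            app1 =>
{
    app1.Run(async context =>
    {
        await context.Response.WriteAsync("Hello API client!");
    });
});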

You are also able to nest the maps to create child and grandchild branches if needed.

Map() or MapWhen() is used to provide a special API or resource based on a specific path or a specific condition. The ASP.NET Core HealthCheck API is done like this. It first uses MapWhen() to specify the port to use and then Map() to set the path for the HealthCheck API, or it uses Map() only if no port is specified. At the end the HealthCheckMiddleware is used:

private static void UseHealthChecksCore(IApplicationBuilder app, PathString path, int? port, object[] args)
{
    if (port == null)
    {
        app.Map(path, b => b.UseMiddleware<HealthCheckMiddleware>(args));
    }
    else
    {
        app.MapWhen(
            c => c.Connection.LocalPort == port,
            b0 => b0.Map(path, b1 => b1.UseMiddleware<HealthCheckMiddleware>(args)));
    }
}

(See here on GitHub)

UPDATE 10/10/2018

After I published this post Hisham asked me a question on Twitter:

Another question that's middlewares related, I'm not sure why I never seen anyone using IMiddleware instead of writing InvokeAsync manually?!!

IMiddleware is new in ASP.NET Core 2.0 and actually I never knew that it existed before he tweeted about it. I'll definitely have a deeper look into IMiddleware and will write about it. Until then you should read Hisham's really good post about it: Why you aren't using IMiddleware?

Conclusion

Most of the ASP.NET Core features are based on middlewares and we are able to extend ASP.NET Core by creating our own middlewares.

In the next two chapters I will have a look into different data types and how to handle them. I will create API outputs in any format and data type I want and accept data of any type and format. Read the next part about Customizing ASP.NET Core Part 07: OutputFormatter

Jürgen Gutsch: Customizing ASP.​NET Core Part 05: HostedServices

This fifth part of this series doesn't really show a customization. This part is more about a feature you can use to create background services that run tasks asynchronously inside your application. Actually, I use this feature to regularly fetch data from a remote service in a small ASP.NET Core application.

The series topics

About HostedServices

HostedServices are a new thing in ASP.NET Core 2.0 and can be used to run tasks asynchronously in the background of your application. This can be used to fetch data periodically, do some calculations in the background or do some cleanups. This can also be used to send preconfigured emails or whatever you need to do in the background.

HostedServices are basically simple classes which implement the IHostedService interface.

public class SampleHostedService : IHostedService
{
	public Task StartAsync(CancellationToken cancellationToken)
	{
		// place the startup logic here
		return Task.CompletedTask;
	}
	
	public Task StopAsync(CancellationToken cancellationToken)
	{
		// place the cleanup logic here
		return Task.CompletedTask;
	}
}

A HostedService needs to implement a StartAsync() and a StopAsync() method. The StartAsync() is the place where you implement the logic to execute. This method gets executed once, immediately after the application starts. The method StopAsync() on the other hand gets executed just before the application stops. This also means that to get a kind of scheduled service you need to implement it on your own. You will need to implement a loop which executes the code regularly.

To get a HostedService executed you need to register it in the ASP.NET Core dependency injection container as a singleton instance:

services.AddSingleton<IHostedService, SampleHostedService>();
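If you are on ASP.NET Core 2.1 or later, there is also a dedicated extension method that does essentially the same registration and reads a little nicer:

services.AddHostedService<SampleHostedService>();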

To see how a hosted service works, I created the next snippet. It writes a log message on start, on stop and every two seconds to the console:

public class SampleHostedService : IHostedService
{
	private readonly ILogger<SampleHostedService> logger;
	
	// inject a logger
	public SampleHostedService(ILogger<SampleHostedService> logger)
	{
		this.logger = logger;
	}

	public Task StartAsync(CancellationToken cancellationToken)
	{
		logger.LogInformation("Hosted service starting");

		return Task.Factory.StartNew(async () =>
		{
			// loop until a cancellation is requested
			while (!cancellationToken.IsCancellationRequested)
			{
				logger.LogInformation("Hosted service executing - {0}", DateTime.Now);
				try
				{
					// wait for 2 seconds
					await Task.Delay(TimeSpan.FromSeconds(2), cancellationToken);
				}
				catch (OperationCanceledException) { }
			}
		}, cancellationToken);
	}

	public Task StopAsync(CancellationToken cancellationToken)
	{
		logger.LogInformation("Hosted service stopping");
		return Task.CompletedTask;
	}
}

To test this, I simply created a new ASP.NET Core application, placed this snippet inside, registered the HostedService and started the application by calling the next command in the console:

dotnet run

This results in the following console output:

As you can see the log output is written to the console every two seconds.

Conclusion

You can now start to do some more complex things with HostedServices. Be careful with the hosted service, because it all runs in the same application. Don't use too much CPU or memory, as this could slow down your application.

For bigger applications I would suggest moving such tasks into a separate application that is specialized in executing background tasks: a separate Docker container, a BackgroundWorker on Azure, Azure Functions or something like this. However, it should be separated from the main application in that case.

In the next part I'm going to write about Middlewares and how you can use them to implement special logic to the request pipeline, or how you are able to serve specific logic on different paths. Customizing ASP.NET Core Part 06: Middlewares

Jürgen Gutsch: Customizing ASP.​NET Core Part 04: HTTPS

HTTPS is on by default now and is a first class feature. On Windows the certificate which is needed to enable HTTPS is loaded from the Windows certificate store. If you create a project on Linux or Mac, the certificate is loaded from a certificate file.

Even if you want to create a project to run it behind an IIS or an NGinX webserver, HTTPS is enabled. Usually you would manage the certificate on the IIS or NGinX webserver in that case. But this shouldn't be a problem and you shouldn't disable HTTPS in the ASP.NET Core settings.

Managing the certificate within the ASP.NET Core application directly makes sense if you run services behind the firewall, services which are not accessible from the internet: background services for a microservice-based application, or services in a self-hosted ASP.NET Core application.

There are some scenarios where it makes sense to also load the certificate from a file on Windows. This could be in an application that you will run on docker for Windows, and also on docker for Linux.

Personally I like the flexible way to load the certificate from a file.

The series topics

Setup Kestrel

As in the first two parts of this blog series, we need to override the default WebHostBuilder a little bit. With ASP.NET Core it is possible to replace the default Kestrel-based hosting with a hosting based on an HttpListener. This means the Kestrel webserver gets configured somewhere on the host builder. You are able to add and configure Kestrel manually by calling the UseKestrel() method on the IWebHostBuilder:

public class Program
{
	public static void Main(string[] args)
	{
		CreateWebHostBuilder(args).Build().Run();
	}

	public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
		WebHost.CreateDefaultBuilder(args)
			.UseKestrel(options => 
			{	
			})
			.UseStartup<Startup>();
}

This method accepts an action to configure the Kestrel webserver. What we actually need to do is to configure the addresses and ports the webserver is listening on. For the HTTPS port we also need to configure how the certificate should be loaded.

.UseKestrel(options => 
{
	options.Listen(IPAddress.Loopback, 5000);
	options.Listen(IPAddress.Loopback, 5001, listenOptions =>
	{
		listenOptions.UseHttps("certificate.pfx", "topsecret");
	});
})

In this snippet we add two addresses and ports to listen on. The second one is defined as a secure endpoint configured to use HTTPS. The method UseHttps() is overloaded multiple times to load certificates from the Windows certificate store as well as from files. In this case we use a file called certificate.pfx located in the project folder.

Reminder to myself: Replacing the host actually would be an idea for an eleventh part of this series.

To create such a certificate file to just play around with this configuration, open the certificate store and export the development certificate created by Visual Studio.

For your safety

Use the following line ONLY to play around with this configuration:

listenOptions.UseHttps("certificate.pfx", "topsecret");

The problem is the hard coded password. Never ever store a password in a code file that gets pushed to any source code repository. Ensure you load the password from the configuration API of ASP.NET Core. Use the user secrets on your local development machine and use environment variables on a server. On Azure use the Application Settings to store the passwords. Passwords will be hidden on the Azure Portal UI, if they are marked as passwords.
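A minimal sketch of how this could look, assuming the file name and password are stored under the (made-up) configuration keys Certificate:Path and Certificate:Password, filled via user secrets locally and via environment variables or the Azure Application Settings on a server:

.UseKestrel((context, options) =>
{
    var certificateSettings = context.Configuration.GetSection("Certificate");
    var fileName = certificateSettings["Path"];
    var password = certificateSettings["Password"];

    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        // no password in the source code anymore
        listenOptions.UseHttps(fileName, password);
    });
})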

Conclusion

This is just a small customization. Anyway, this helps if you want to share the code between different platforms, if you want to run your application on Docker and don't want to care about certificate stores, etc.

Usually, if you run your application behind a web server like IIS or NGinX, you don't need to care about certificates in your ASP.NET Core application. But you need to if you host your application inside another application, on Docker, or without an IIS or NGinX.

ASP.NET Core has a new feature to run tasks in the background inside the application. To learn more about that, read the next post about Customizing ASP.NET Core Part 05: HostedServices.

Alexander Schmidt: Azure Active Directory B2C – Part 1

Setting up Azure AD B2C.

Jürgen Gutsch: Customizing ASP.​NET Core Part 03: Dependency Injection

In the third part we'll take a look into the ASP.NET Core dependency injection and how to customize it to use a different dependency injection container if needed.

The series topics

Why use a different dependency injection container?

In most projects you don't really need to use a different dependency injection container. The DI implementation in ASP.NET Core supports the main basic features and works well and pretty fast. Anyway, some other DI containers support some interesting features you maybe want to use in your application.

  • Maybe you like to create an application that support modules as lightweight dependencies.
    • E.g. modules you want to put into a specific directory and they get automatically registered in your application
    • This could be done with NInject.
  • Maybe you want to configure the services in a configuration file outside the application, in an XML or JSON file instead of in C# only
    • This is a common feature in various DI containers, but not yet supported in ASP.NET Core.
  • Maybe you don't want to have an immutable DI container, because you want to add services at runtime.
    • This is also a common feature in some DI containers.

A look at the ConfigureServices Method

If you create a new ASP.NET Core project and open the Startup.cs, you will find the method to configure the services, which looks like this:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
	services.Configure<CookiePolicyOptions>(options =>
	{
		// This lambda determines whether user consent for non-essential cookies is needed for a given request.
		options.CheckConsentNeeded = context => true;
		options.MinimumSameSitePolicy = SameSiteMode.None;
	});
    
    services.AddTransient<IService, MyService>();

	services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}

This method gets the IServiceCollection, which is already filled with a bunch of services which are needed by ASP.NET Core. These services got added by the hosting services and parts of ASP.NET Core that got executed before the method ConfigureServices is called.

Inside the method some more services get added. First a configuration class that contains cookie policy options is added to the ServiceCollection. In this sample I also add a custom service called MyService that implements the IService interface. After that the method AddMvc() adds another bunch of services needed by the MVC framework. By now we have around 140 services registered in the IServiceCollection. But the service collection isn't the actual dependency injection container.

The actual DI container is wrapped in the so called service provider, which will be created out of the service collection. The IServiceCollection has an extension method registered to create an IServiceProvider out of the service collection.

IServiceProvider provider = services.BuildServiceProvider();

The ServiceProvider then contains the immutable container that cannot be changed at runtime. With the default method ConfigureServices the IServiceProvider gets created in the background after this method was called, but it is possible to change the method a little bit:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
    
    services.AddTransient<IService, MyService>(); // custom service
    
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    
    return services.BuildServiceProvider();
}

I changed the return type to IServiceProvider and return the ServiceProvider created with the method BuildServiceProvider(). This change will still work in ASP.NET Core.

Use a different ServiceProvider

To change to a different or custom DI container you need to replace the default implementation of the IServiceProvider with a different one. Additionally you need to find a way to move the already registered services to the new container.

The next code sample uses Autofac as a third party container. I use Autofac in this snippet because you are easily able to see what is happening here:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    //services.AddTransient<IService, MyService>();

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

    // create a Autofac container builder
    var builder = new ContainerBuilder();

    // read service collection to Autofac
    builder.Populate(services);

    // use and configure Autofac
    builder.RegisterType<MyService>().As<IService>();

    // build the Autofac container
    ApplicationContainer = builder.Build();

    // creating the IServiceProvider out of the Autofac container
    return new AutofacServiceProvider(ApplicationContainer);
}

// IContainer instance in the Startup class 
public IContainer ApplicationContainer { get; private set; }

Autofac also works with a kind of service collection inside the ContainerBuilder and creates the actual container out of that ContainerBuilder. To get the registered services out of the IServiceCollection into the ContainerBuilder, Autofac uses the Populate() method. This copies all the existing services to the Autofac container.

Our custom service MyService now gets registered using the Autofac way.

After that, the container gets built and stored in a property of type IContainer. In the last line of the method ConfigureServices we create an AutofacServiceProvider and pass in the IContainer. This is the IServiceProvider we need to return to use Autofac within our application.

UPDATE: Introducing Scrutor

You don't always need to replace the existing .NET Core DI container to get and use nice features. In the beginning I mentioned the auto registration of services. This can also be done with a nice NuGet package called Scrutor by Kristian Hellang (https://kristian.hellang.com/). Scrutor extends the IServiceCollection to automatically register services to the .NET Core DI container.

"Assembly scanning and decoration extensions for Microsoft.Extensions.DependencyInjection" https://github.com/khellang/Scrutor

Andrew Lock published a pretty detailed blog post about Scrutor. It doesn't make sense to repeat that. Read that awesome post and learn more about it: Using Scrutor to automatically register your services with the ASP.NET Core DI container
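
To give a rough idea of how that looks, the registration with Scrutor happens inside ConfigureServices and could look something like the following sketch. The IService marker interface is just an assumption for this example:

services.Scan(scan => scan
    .FromAssemblyOf<IService>()                              // scan the assembly that contains IService
    .AddClasses(classes => classes.AssignableTo<IService>()) // pick all classes implementing IService
    .AsImplementedInterfaces()                               // register them by their interfaces
    .WithTransientLifetime());                               // with a transient lifetime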

Conclusion

Using this approach you are able to use any .NET Standard compatible DI container to replace the existing one. If the container of your choice doesn't provide a service provider, create your own that implements IServiceProvider and uses the DI container inside. If the container of your choice doesn't provide a method to populate the registered services into the container, create your own method: loop over the registered services and add them to the other container.

Actually that last step sounds easy, but it can be a hard task, because you need to translate all the possible IServiceCollection registrations into registrations of the other container. The complexity of that task depends on the implementation details of that container.

Anyway, you have the choice to use any DI container which is compatible with .NET Standard. And you have the choice to change a lot of the default implementations in ASP.NET Core.

The same is true for the default HTTPS behavior on Windows. To learn more about that please read the next post about Customizing ASP.NET Core Part 04: HTTPS.

Code-Inside Blog: Be afraid of varchar(max) with async EF or ADO.NET

Last month we had changed our WCF APIs to async implementations, because we wanted all those glorious scalability improvements in our codebase.

The implementation was quite easy, because our service layer did most of the time just some simple EntityFramework 6 queries.

The field test went horribly wrong

After we moved most of the code to async we did a small test and it worked quite well. Our gut feeling was OK-ish, because we knew that we didn't do a full stress test.

As always: Things didn't work as expected. We deployed the code at our largest customer and it did: nothing.

100% CPU

We knew that after the deployment we would hit a high load, and at first it seemed to "work" based on the CPU workload, but nothing happened. I checked the SQL monitoring and noticed that the throughput was ridiculously low. One query (which every client needed to execute) caught my attention, because the query itself was super simple, but somehow it was the showstopper for everyone.

The “bad query”

I checked the code and it was more or less something like this (with the help of Entity Framework 6):

var result = await dbContext.Configuration.ToListAsync();

The “Configuration” itself is a super simple table with a Key & Value column.

Be aware that the same code worked OK with the non-async implementation!

“Cause”

This call was extremely costly in terms of performance, but why? It turns out that this customer installation had a pretty large configuration. One value was around 10MB, which doesn't sound like much, but if this code is executed in parallel by 5000 clients, it can hurt.

On top of that: The async implementation tries to be smart, but this leads to thousands of Task creations, which slow down everything.

This stackoverflow answer really helped me to understand this problem. Just look at those figures:

First, in the first case we were having just 3500 hit counts along the full call path, here we have 118 371. Moreover, you have to imagine all the synchronization calls I didn’t put on the screenshoot…

Second, in the first case, we were having “just 118 353” calls to the TryReadByteArray() method, here we have 2 050 210 calls ! It’s 17 times more… (on a test with large 1Mb array, it’s 160 times more)

Moreover there are:

  • 120 000 Task instances created
  • 727 519 Interlocked calls
  • 290 569 Monitor calls
  • 98 283 ExecutionContext instances, with 264 481 Captures
  • 208 733 SpinLock calls

My guess is the buffering is made in an async way (and not a good one), with parallel Tasks trying to read data from the TDS. Too many Task are created just to parse the binary data. …

Switch to ADO.NET, damn EF, right?

If you are now thinking: "Yeah… EF sucks, right, just use plain ADO.NET!" you will end up in the same mess, because Entity Framework uses the same default async data reader under the hood.

I use EF Core, am I safe?

The same problem applies to EF Core, just check out this comment by the EF team.

How can we solve this problem then?

Solution 1: Async, but with Sequential read

I changed the code to use plain ADO.NET, but with CommandBehavior.SequentialAccess.

This way it seems that the async implementation is much smarter about how to read large chunks of data. I'm not an ADO.NET expert, but with the default strategy ADO.NET tries to read the whole row and store it in memory. With sequential access it can use the memory more effectively - at least, it seems to work much better.

Your code also needs to be implemented with sequential access in mind, otherwise it will fail.
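
To illustrate the idea, here is a minimal sketch of such a sequential read for the key/value configuration table described above. The table and column names are assumptions, and for really huge values you would stream the content (e.g. via GetTextReader()) instead of calling GetString():

// requires System.Data and System.Data.SqlClient
static async Task<Dictionary<string, string>> ReadConfigurationAsync(string connectionString)
{
    var result = new Dictionary<string, string>();

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT [Key], [Value] FROM [Configuration]", connection))
    {
        await connection.OpenAsync();

        using (var reader = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess))
        {
            while (await reader.ReadAsync())
            {
                // with SequentialAccess the columns have to be read in order
                var key = reader.GetString(0);
                var value = reader.GetString(1);
                result.Add(key, value);
            }
        }
    }

    return result;
}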

Solution 2: Avoid large types like nvarchar(max)

This advice comes from the EF team:

Avoid using NTEXT, TEXT, IMAGE, TVP, UDT, XML, [N]VARCHAR(MAX) and VARBINARY(MAX) – the maximum data size for these types is so large that it is very unusual (or even impossible) that they would happen to be able to fit within a single packet.

When we need to store large content, we typically use a separate blob table and stream those values to the clients. This works quite well, but we forgot our "configuration" table :-)

Looking at this problem now it seems obvious, but it took us some hard days to fix the issue.

Hope this helps.

Helpful links:

Jürgen Gutsch: Customizing ASP.​NET Core Part 02: Configuration

This second part of the blog series about customizing ASP.NET Core is about the application configuration, how to use it and how to customize the configuration to use different ways to configure your app.

The series topics

Configure the configuration

Like the logging, since ASP.NET Core 2.0 the configuration is also hidden in the defaults of the WebHostBuilder and is no longer part of the Startup.cs. This is done for the same reason: to keep the Startup clean and simple:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

Fortunately you are also able to override the default settings to customize the configuration in a way you need it.

When you create a new ASP.NET Core project you already have an appsettings.json and an appsettings.Development.json configured. You can and you should use these configuration files to configure your app. You should, because this is the pre-configured way and most ASP.NET Core developers will look for an appsettings.json to configure the application. This is absolutely fine and works pretty well.

But maybe you already have an existing XML configuration or want to share a YAML configuration file across different kinds of applications. This could also make sense. Sometimes it also makes sense to read configuration values out of a database.

The next snippet shows the hidden default configuration that reads the appsettings.json files:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

This configuration also sets the base path of the application and adds the configuration via environment variables. The method ConfigureAppConfiguration accepts a lambda that gets a WebHostBuilderContext and a ConfigurationBuilder passed in.

Whenever you customize the application configuration you should add the configuration via environment variables as the last step. The order of the configuration sources matters: configuration providers added later override the values of the previously added ones. Make sure the environment variables always override the values from the configuration files. This way you ensure that the settings you configure on an Azure web app via the Application Settings UI, which get passed to the application as environment variables, always take effect.

The IConfigurationBuilder has a lot of extension methods to add more configurations like XML or INI configuration files, in-memory configurations and so on. You can find a lot more configuration providers provided by the community to read in YAML files, database values and a lot more. In this demo I'm going to show you how to read INI files in.
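
Just to give an impression, adding an XML file or an in-memory collection inside ConfigureAppConfiguration looks like this. The file name and the key are made up for this example, and the corresponding configuration packages need to be referenced:

config.AddXmlFile("appsettings.xml", optional: true, reloadOnChange: true);
config.AddInMemoryCollection(new Dictionary<string, string>
{
    ["AppSettings:Foo"] = "42" // overrides AppSettings:Foo from the files added before
});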

Typed configurations

Before trying to read the INI files it makes sense to show how to use typed configuration instead of reading the configuration via the IConfiguration key by key.

To read a typed configuration you need to define the type to configure. I usually create a class called AppSettings like this:

public class AppSettings
{
    public int Foo { get; set; }
    public string Bar { get; set; }
}

This class can then be filled with a specific configuration section inside the method ConfigureServices in the Startup.cs:

services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));

This way the typed configuration also gets registered as a service in the dependency injection container and can be used everywhere in the application. You are able to create different configuration types per configuration section. In most cases one section should be fine, but sometimes it makes sense to divide the settings into different sections.

This configuration can then be used via dependency injection in every part of your application. The next snippet shows how to use the configuration in an MVC controller:

public class HomeController : Controller
{
    private readonly AppSettings _options;

    public HomeController(IOptions<AppSettings> options)
    {
        _options = options.Value;
    }

The IOptions<AppSettings> is a wrapper around our AppSettings type and the property Value contains the actual instance of the AppSettings including the values from the configuration file.

To try that out, the appsettings.json needs to have the AppSettings section configured, otherwise the values are null or not set.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AppSettings": {
      "Foo": 123,
      "Bar": "Bar"
  }
}

Configuration using INI files

To also use INI files to configure the application we need to add the INI configuration inside the method ConfigureAppConfiguration in the Program.cs:

config.AddIniFile("appsettings.ini", optional: false, reloadOnChange: true);
config.AddJsonFile($"appsettings.{env.EnvironmentName}.ini", optional: true, reloadOnChange: true);

This code loads the INI files the same way as the JSON configuration files. The first line is a required configuration and the second one an optional configuration depending on the current runtime environment.

The INI file could look like this:

[AppSettings]
Bar="FooBar"

This file also contains a section called AppSettings and a property called Bar. Earlier I wrote that the order of the configuration matters. If you add these two lines to configure via INI files after the configuration via JSON files, the INI files will override the settings from the JSON files. The property Bar gets overridden with "FooBar" and the property Foo stays the same. The values out of the INI file will also be available via the previously created AppSettings class.

Every other configuration provider will work the same way. Let's see what a configuration provider looks like.

Configuration Providers

A configuration provider is an implementation of an IConfigurationProvider that gets created by a configuration source, which is an implementation of an IConfigurationSource. The configuration provider then reads the data in from somewhere and provides it via a dictionary.

To add a custom or third party configuration provider to ASP.NET Core you need to call the method Add on the configuration builder and put the configuration source in:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        
        // add new configuration source
        config.Add(new MyCustomConfigurationSource
        {
            SourceConfig = ..., // configure whatever source is needed here
            Optional = false,
            ReloadOnChange = true
        });
        
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

Usually you would create an extension method to make adding the configuration source easier:

config.AddMyCustomSource("source", optional: false, reloadOnChange: true);
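
Such an extension method could roughly look like the following sketch. It just wraps the hypothetical MyCustomConfigurationSource from the snippet above, so the names are assumptions and not a real API:

public static class MyCustomConfigurationExtensions
{
    public static IConfigurationBuilder AddMyCustomSource(
        this IConfigurationBuilder builder,
        string source,
        bool optional = false,
        bool reloadOnChange = false)
    {
        // wrap the custom configuration source and hand it to the builder
        return builder.Add(new MyCustomConfigurationSource
        {
            SourceConfig = source,
            Optional = optional,
            ReloadOnChange = reloadOnChange
        });
    }
}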

A really detailed, concrete example of how to create a custom configuration provider was written by fellow MVP Andrew Lock.

Conclusion

In most cases it is not needed to add a different configuration provider or to create your own, but it's good to know how to change it in case you need it. Also, using typed configuration is a nice way to read the settings. In classic ASP.NET we used a manually created façade to read the application settings in a typed way. Now this is automatically done by just providing a class. This class gets automatically filled and provided via dependency injection.

To learn more about ASP.NET Core Dependency Injection have a look into the next part of the series: Customizing ASP.NET Core Part 03: Dependency Injection

Jürgen Gutsch: Customizing ASP.​NET Core Part 01: Logging

In this first part of the new blog series about customizing ASP.NET Core, I will show you how to customize the logging. The default logging only writes to the console or to the debug window. This is quite good for most cases, but maybe you need to log to a sink like a file or a database, or maybe you want to extend the logger with additional information. In those cases you need to know how to change the default logging.

The series topics

Configure logging

In previous versions of ASP.NET Core (pre 2.0) the logging was configured in the Startup.cs. Since 2.0 the Startup.cs was simplified and a lot of configurations were moved to a default WebHostBuilder, which is called in the Program.cs. The logging was also moved to the default WebHostBuilder:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

In ASP.NET Core you are able to override and customize almost everything. The same is true for the logging. The IWebHostBuilder has a lot of extension methods to override the default behavior. To override the default settings for the logging we need to use the ConfigureLogging method. The next snippet shows exactly the same logging as it is configured inside the CreateDefaultBuilder() method:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureLogging((hostingContext, logging) =>
    {
        logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
        logging.AddConsole();
        logging.AddDebug();
    })                
    .UseStartup<Startup>();

This method needs a lambda that gets a WebHostBuilderContext that contains the hosting context and a LoggingBuilder to configure the logging.

Create a custom logger

To demonstrate a custom logger, I created a small, rather useless logger that is able to colorize log entries with a specific log level in the console. This so-called ColoredConsoleLogger will be added and created using a LoggerProvider we also need to write ourselves. To specify the color and the log level to colorize, we need to add a configuration class. The next snippet shows all three parts (logger, logger provider and configuration):

public class ColoredConsoleLoggerConfiguration
{
    public LogLevel LogLevel { get; set; } = LogLevel.Warning;
    public int EventId { get; set; } = 0;
    public ConsoleColor Color { get; set; } = ConsoleColor.Yellow;
}

public class ColoredConsoleLoggerProvider : ILoggerProvider
{
    private readonly ColoredConsoleLoggerConfiguration _config;
    private readonly ConcurrentDictionary<string, ColoredConsoleLogger> _loggers = new ConcurrentDictionary<string, ColoredConsoleLogger>();

    public ColoredConsoleLoggerProvider(ColoredConsoleLoggerConfiguration config)
    {
        _config = config;
    }

    public ILogger CreateLogger(string categoryName)
    {
        return _loggers.GetOrAdd(categoryName, name => new ColoredConsoleLogger(name, _config));
    }

    public void Dispose()
    {
        _loggers.Clear();
    }
}

public class ColoredConsoleLogger : ILogger
{
	private static object _lock = new Object();
    private readonly string _name;
    private readonly ColoredConsoleLoggerConfiguration _config;

    public ColoredConsoleLogger(string name, ColoredConsoleLoggerConfiguration config)
    {
        _name = name;
        _config = config;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel == _config.LogLevel;
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        lock (_lock)
        {
            if (_config.EventId == 0 || _config.EventId == eventId.Id)
            {
                var color = Console.ForegroundColor;
                Console.ForegroundColor = _config.Color;
                Console.WriteLine($"{logLevel.ToString()} - {eventId.Id} - {_name} - {formatter(state, exception)}");
                Console.ForegroundColor = color;
            }
        }
    }
}

We need to lock the actual console output, because otherwise we would get race conditions where log entries get colored with the wrong color, since the console itself is not really thread-safe.

If this is done we can start to plug in the new logger to the configuration:

logging.ClearProviders();

var config = new ColoredConsoleLoggerConfiguration
{
    LogLevel = LogLevel.Information,
    Color = ConsoleColor.Red
};
logging.AddProvider(new ColoredConsoleLoggerProvider(config));

If needed, you are able to clear all the previously added logger providers. Then we call AddProvider to add a new instance of our ColoredConsoleLoggerProvider with the specific settings. We could also add some more instances of the provider with different settings.

This shows how to handle different log levels in different ways. You could use this to send an email on hard errors, to log debug messages to a different log sink than regular informational messages, and so on.
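
For example, two instances could colorize errors and debug messages differently. This is just a sketch using the types defined above:

logging.AddProvider(new ColoredConsoleLoggerProvider(
    new ColoredConsoleLoggerConfiguration
    {
        LogLevel = LogLevel.Error,
        Color = ConsoleColor.DarkRed
    }));

logging.AddProvider(new ColoredConsoleLoggerProvider(
    new ColoredConsoleLoggerConfiguration
    {
        LogLevel = LogLevel.Debug,
        Color = ConsoleColor.Gray
    }));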

In many cases it doesn't make sense to write a custom logger, because there are already many good third-party loggers like elmah, log4net and NLog. In the next section I'm going to show you how to use NLog in ASP.NET Core.

Plug-in an existing Third-Party logger provider

NLog was one of the very first loggers available as a .NET Standard library and usable in ASP.NET Core. NLog also provides a logger provider to easily plug it into ASP.NET Core.

The next snippet shows a typical NLog.Config that defines two different sinks to log all messages in one log file and custom messages only into another file:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\dotnetconf\001-logging\internal-nlog.txt">

  <!-- Load the ASP.NET Core plugin -->
  <extensions>
    <add assembly="NLog.Web.AspNetCore"/>
  </extensions>

  <!-- the targets to write to -->
  <targets>
     <!-- write logs to file -->
     <target xsi:type="File" name="allfile" fileName="C:\git\dotnetconf\001-logging\nlog-all-${shortdate}.log"
                 layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

   <!-- another file log, only own logs. Uses some ASP.NET core renderers -->
     <target xsi:type="File" name="ownFile-web" fileName="C:\git\dotnetconf\001-logging\nlog-own-${shortdate}.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}|url: ${aspnet-request-url}|action: ${aspnet-mvc-action}" />

     <!-- write to the void aka just remove -->
    <target xsi:type="Null" name="blackhole" />
  </targets>

  <!-- rules to map from logger name to target -->
  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

We then need to add the NLog ASP.NET Core package from NuGet:

dotnet add package NLog.Web.AspNetCore

(Be sure you are in the project directory before you execute that command)

Now you only need to add NLog in the ConfigureLogging method in the Program.cs:

hostingContext.HostingEnvironment.ConfigureNLog("NLog.Config");
logging.AddProvider(new NLogLoggerProvider());

The first line configures NLog to use the previously created NLog.Config and the second line adds the NLogLoggerProvider to the list of logging providers. Here you can add as many logger providers as you need.

Conclusion

The good thing about hiding the basic configuration is to keep newly scaffolded projects clean and the actual start as simple as possible. The developer is able to focus on the actual features. But the more the application grows, the more important logging becomes. The default logging configuration is easy and works like a charm, but in production you need a persisted log to see errors from the past. So you need to add a custom logger or a more flexible logger like NLog or log4net.

To learn more about ASP.NET Core configuration have a look into the next part of the series: Customizing ASP.NET Core Part 02: Configuration.

Jürgen Gutsch: New Blog Series: Customizing ASP.​NET Core

With this post I want to introduce a new blog series about things you can or maybe need to customize in ASP.NET Core. Initially this series will contain ten different topics. Maybe later I'll write some more posts about that.

The initial topics are based on my talk about Customizing ASP.NET Core. I did this talk several times in German and English. I did the talk on the .NET Conf 2018 as well.

Unfortunately, on the .NET Conf the talk started with pretty bad audio for some reason. The first five minutes can be moved directly to the trash IMHO. I also could only show 7 out of 10 demos, even though I had tried to get all the demos into 45 minutes the day before. I'm almost sure the audio problem wasn't on my side. Via the router I disconnected almost all devices from the internet during the hour I was presenting, and it all went well before the presentation when we did the latest tech check.

Anyway, after five minutes the audio went a lot better and the audience was able to follow the rest of the presentation.

For this series I'm going to follow the same order as in that presentation, which is the order from bottom to top: from the server configuration parts, over the Web API parts, up to the MVC topics.

Initial series topics

Additional series topics

  • Customizing ASP.NET Core Part 11: Hosting
  • Customizing ASP.NET Core Part 12: InputFormatters
  • Customizing ASP.NET Core Part 13: ViewComponents

Do you want to see that talk?

If you are interested in this talk about Customizing ASP.NET Core, feel free to drop me a comment, a message via Twitter or an email. I'm able to do it remotely via Skype, Skype for Business or on site, if the travel costs are covered somehow. For free at community events, like Meetups or user group meetings, and fairly paid at commercial events.

Discover more possible talks on Sessionize: https://sessionize.com/juergengutsch

Golo Roden: 25 Tage später …

Node.js in the 10.x series, up to version 10.9.0, contains a bug that makes the functions setTimeout and setInterval stop working after 25 days. The cause is a conversion error. The remedy is to update to a newer Node.js version.

Stefan Henneken: IEC 61131-3: Das ‘State’ Pattern

State machines are used regularly, especially in automation technology. The State pattern provides an object-oriented approach that offers important advantages, particularly for larger state machines.

Most developers have already implemented state machines in IEC 61131-3, some consciously, others perhaps unconsciously. In the following, a simple example introduces three different approaches:

  1. The CASE statement
  2. State transitions in methods
  3. The 'State' pattern

Our example describes a vending machine that dispenses a product after a coin has been inserted and a button has been pressed. The number of products is limited. If a coin is inserted and the button is pressed although the machine is empty, the coin is returned.

The machine is represented by the function block FB_Machine. Inputs receive the events, and outputs expose the current state and the number of products still available. The maximum number of products is defined when the FB is declared.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton           : BOOL;
  bInsertCoin       : BOOL;
  bTakeProduct      : BOOL;
  bTakeCoin         : BOOL;    
END_VAR
VAR_OUTPUT
  eState            : E_States;
  nProducts         : UINT;    
END_VAR

UML state diagram

State machines can be represented very well as a UML state diagram.

Picture01

A UML state diagram describes an automaton that is in exactly one state of a finite set of states at any point in time.

The states in a UML state diagram are drawn as rectangles with rounded corners (vertices), in other diagram forms often also as circles. States can perform activities, which are executed, for example, when entering the state (entry) or when leaving it (exit). With entry / n = n - 1, the variable n is decremented when the state is entered.

The arrows between the states symbolize possible state transitions. They are labeled with the events that lead to the respective transition. A state transition takes place when the event occurs and an optional condition (guard) is fulfilled. Conditions are written in square brackets, which allows decision trees to be implemented.

First variant: The CASE statement

CASE statements are frequently used to implement state machines. The CASE statement checks every possible state. Within the branch for each state, the conditions are evaluated. If a condition is fulfilled, the action is executed and the state variable is updated. To improve readability, the state variable is often modeled as an ENUM.

TYPE E_States :
(
    eWaiting := 0,
    eHasCoin,
    eProductEjected,
    eCoinEjected
);
END_TYPE

The first variant of the state machine thus looks like this:

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

CASE eState OF
  E_States.eWaiting:
    IF (rtrigButton.Q) THEN
      ; // keep in the state
    END_IF
    IF (rtrigInsertCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
      eState := E_States.eHasCoin;
    END_IF

  E_States.eHasCoin:
    IF (rtrigButton.Q) THEN
      IF (nProducts > 0) THEN
        nProducts := nProducts - 1;
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
        eState := E_States.eProductEjected;
      ELSE
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
        eState := E_States.eCoinEjected;
      END_IF
    END_IF

  E_States.eProductEjected:
    IF (rtrigTakeProduct.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
      eState := E_States.eWaiting;
    END_IF

  E_States.eCoinEjected:
    IF (rtrigTakeCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
      eState := E_States.eWaiting;
    END_IF

  ELSE
    ADSLOGSTR(ADSLOG_MSGTYPE_ERROR, 'Invalid state', '');
    eState := E_States.eWaiting;
END_CASE

A quick test shows that the FB does what it is supposed to do:

Picture02

But it also quickly becomes clear that larger applications cannot be implemented this way. The clarity is completely lost after just a few states.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Second variant: State transitions in methods

The problem can be reduced if all state transitions are implemented as methods.

Picture03

When a certain event occurs, the respective method is called.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

IF (rtrigButton.Q) THEN
  THIS^.PressButton();
END_IF
IF (rtrigInsertCoin.Q) THEN
  THIS^.InsertCoin();
END_IF
IF (rtrigTakeProduct.Q) THEN
  THIS^.CustomerTakesProduct();
END_IF
IF (rtrigTakeCoin.Q) THEN
  THIS^.CustomerTakesCoin();
END_IF

Depending on the current state, the desired state transition is executed in the methods and the state variable is updated:

METHOD INTERNAL CustomerTakesCoin : BOOL
IF (THIS^.eState = E_States.eCoinEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
  eState := E_States.eWaiting;
END_IF

METHOD INTERNAL CustomerTakesProduct : BOOL
IF (THIS^.eState = E_States.eProductEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
  eState := E_States.eWaiting;
END_IF

METHOD INTERNAL InsertCoin : BOOL
IF (THIS^.eState = E_States.eWaiting) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
  THIS^.eState := E_States.eHasCoin;
END_IF

METHOD INTERNAL PressButton : BOOL
IF (THIS^.eState = E_States.eHasCoin) THEN
  IF (THIS^.nProducts > 0) THEN
    THIS^.nProducts := THIS^.nProducts - 1;
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
    THIS^.eState := E_States.eProductEjected;
  ELSE                
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
    THIS^.eState := E_States.eCoinEjected;
  END_IF
END_IF

This approach also works flawlessly. However, the state machine still lives in a single function block. The state transitions are moved into methods, but it is still a structured programming approach that ignores the possibilities of object orientation. The result is source code that remains hard to extend and hard to read.

Sample 2 (TwinCAT 3.1.4022) on GitHub

Third variant: The 'State' pattern

Some OO design principles are helpful for implementing the State pattern:

Cohesion (= the degree to which a class has a single, focused purpose) and delegation

Encapsulate each responsibility in its own object and delegate calls to these objects. One class, one responsibility!

Identify the aspects that change and separate them from those that stay constant

How should the objects be split up so that extensions to the state machine require changes in as few places as possible? So far, FB_Machine had to be modified for every extension. Especially with large state machines that several developers work on, this is a big disadvantage.

Let's take another look at the methods CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton(). They all have a similar structure. If statements check the current state and execute the desired actions. If necessary, the current state is also updated. However, this approach does not scale: every time a new state is added, several methods have to be modified.

The State pattern distributes the state over several objects. Each possible state is represented by an FB. These state FBs contain the entire behavior for the respective state. This makes it possible to introduce a new state without having to change the source code of the original function blocks.

Every action (CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton()) can be executed on every state. All state FBs therefore have the same interface. For this reason, an interface for all state FBs is introduced:

Picture04

FB_Machine aggregates this interface (line 9) and delegates the method calls to the respective state FBs (lines 30, 34, 38 and 42).

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton            : BOOL;
  bInsertCoin        : BOOL;
  bTakeProduct       : BOOL;
  bTakeCoin          : BOOL;
END_VAR
VAR_OUTPUT
  ipState            : I_State := fbWaitingState;
  nProducts          : UINT;
END_VAR
VAR
  fbCoinEjectedState    : FB_CoinEjectedState(THIS);
  fbHasCoinState        : FB_HasCoinState(THIS);
  fbProductEjectedState : FB_ProductEjectedState(THIS);
  fbWaitingState        : FB_WaitingState(THIS);

  rtrigButton           : R_TRIG;
  rtrigInsertCoin       : R_TRIG;
  rtrigTakeProduct      : R_TRIG;
  rtrigTakeCoin         : R_TRIG;
END_VAR

rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

IF (rtrigButton.Q) THEN
  ipState.PressButton();
END_IF

IF (rtrigInsertCoin.Q) THEN
  ipState.InsertCoin();
END_IF

IF (rtrigTakeProduct.Q) THEN
  ipState.CustomerTakesProduct();
END_IF

IF (rtrigTakeCoin.Q) THEN
  ipState.CustomerTakesCoin();
END_IF

But how can the state be changed within the methods of the individual state FBs?

First, an instance of each state FB is declared inside FB_Machine. Via FB_init(), a pointer to FB_Machine is passed to each state FB (lines 13 - 16).

Each individual instance can be read from FB_Machine via a property. In each case an interface pointer to I_State is returned.

Picture05

Furthermore, FB_Machine gets a method for setting the state,

METHOD INTERNAL SetState : BOOL
VAR_INPUT
  newState : I_State;
END_VAR
THIS^.ipState := newState;

as well as a method for changing the current number of products:

METHOD INTERNAL SetProducts : BOOL
VAR_INPUT
  newProducts : UINT;
END_VAR
THIS^.nProducts := newProducts;

FB_init() gets an additional input variable so that the maximum number of products can be specified at declaration time.

Since the user of the state machine only needs FB_Machine and I_State, the four properties (CoinEjectedState, HasCoinState, ProductEjectedState and WaitingState), the two methods (SetState() and SetProducts()) and the four state FBs (FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState and FB_WaitingState) are declared as INTERNAL. If the FBs of the state machine are located in a compiled library, they are not visible from the outside; they do not show up in the Library Repository either. The same applies to elements declared as PRIVATE. FBs, interfaces, methods and properties that are only used within a library can thus be hidden from the user of the library.

The test of the state machine is the same for all three variants:

PROGRAM MAIN
VAR
  fbMachine      : FB_Machine(3);
  sState         : STRING;
  bButton        : BOOL;
  bInsertCoin    : BOOL;
  bTakeProduct   : BOOL;
  bTakeCoin      : BOOL;
END_VAR

fbMachine(bButton := bButton,
          bInsertCoin := bInsertCoin,
          bTakeProduct := bTakeProduct,
          bTakeCoin := bTakeCoin);
sState := fbMachine.ipState.Description;

bButton := FALSE;
bInsertCoin := FALSE;
bTakeProduct := FALSE;
bTakeCoin := FALSE;

The statement in line 15 is meant to simplify testing, since a readable text is displayed for each state.

Sample 3 (TwinCAT 3.1.4022) on GitHub

At first glance, this variant looks quite elaborate, since significantly more FBs are needed. But distributing the responsibilities over individual FBs makes this approach very flexible and considerably more robust for extensions.

This becomes apparent when the individual state FBs grow large. A state machine could, for example, control a complex process in which each state FB contains further sub-processes. Splitting it up into several FBs is what makes such a program maintainable at all, especially when several developers are involved.

For very small state machines, applying the State pattern is not necessarily the best option. Personally, I am also happy to fall back on the solution with the CASE statement.

Alternatively, IEC 61131-3 offers another way of implementing state machines with the Sequential Function Chart (SFC). But that is another story.

Definition

In the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides, this is expressed as follows:

"Allow an object to alter its behavior when its internal state changes. The object will appear to change its class."

Implementation

A common interface (State) is defined that contains a method for each state transition. For each state, a class is created that implements this interface (State1, State2, …). Since all states thereby have the same interface, they are interchangeable.

The object whose behavior is to change depending on the state (Context) aggregates (encapsulates) such a state object. This object represents the current internal state (currentState) and encapsulates the state-dependent behavior. The context delegates calls to the currently set state object.

The state changes can be performed by the concrete state objects themselves. To do this, each state object needs a reference to the context (context). Furthermore, the context must offer a method to change the state (setState()). The next state is passed to the setState() method as a parameter. For this purpose, the context exposes all possible states as properties.

UML diagram


Picture06

Applied to the example above, this results in the following mapping:

  • Context → FB_Machine
  • State → I_State
  • State1, State2, … → FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState, FB_WaitingState
  • Handle() → CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin(), PressButton()
  • GetState1, GetState2, … → CoinEjectedState, HasCoinState, ProductEjectedState, WaitingState
  • currentState → ipState
  • setState() → SetState()
  • context → pMachine

Application examples

A TCP communication stack is a good example of using the State pattern. Each state of a connection socket can be modeled by a corresponding state class (TCPOpen, TCPClosed, TCPListen, …). Each of these classes implements the same interface (TCPState). The context (TCPConnection) holds the current state object. Through this state object all actions are passed to the respective state class, which handles them and switches to a new state if necessary.

Text parsers are also state-based: the meaning of a character usually depends on the characters read before it.

Christian Binder [MS]: New role at Microsoft

After refreshing and reforming the team of the Microsoft Technology Center in Munich, which caused some silence on this blog 🙁, I decided to move to a more engineering-focused unit at Microsoft - the Commercial Software Engineering group in EMEA - and I will continue working on Azure DevOps 🙂

Golo Roden: Vergleiche in JavaScript: == oder ===?

JavaScript has two comparison operators, == and ===. The first one is not type-safe, the second one is, which is why you should always use the second.

Code-Inside Blog: Migrate a .NET library to .NET Core / .NET Standard 2.0

I have a small spare time project called Sloader and I recently moved the code base to .NET Standard 2.0. This blogpost covers how I moved this library to .NET Standard.

Uhmmm… wait… what is .NET Standard?

If you have been living under a rock in the past year: .NET Standard is a kind of “contract” that allows the library to run under all .NET implementations like the full .NET Framework or .NET Core. But hold on: The library might also run under Unity, Xamarin and Mono (and future .NET implementations that support this contract - that’s why it is called “Standard”). So - in general: This is a great thing!

Sloader - before .NET Standard

Back to my spare time project:

Sloader consists of three projects (Config/Result/Engine) and targeted the full .NET Framework. All projects were typical library projects. All components were tested with xUnit and built via Cake. The configuration uses YAML and the main work is done via the HttpClient.

To summarize it: The library is a not too trivial example, but in general it has pretty low requirements.

Sloader - moving to .NET Standard 2.0

The blogpost from Daniel Crabtree, "Upgrading to .NET Core and .NET Standard Made Easy", was a great resource, and if you want to migrate you should check out his blogpost.

The best advice from the blogpost: Just create new .NET Standard projects and xcopy your files to the new projects.

To migrate the projects to .NET Standard I really just needed to delete the old .csproj files and copy everything into new .NET Standard library projects.

After some fine tuning and NuGet package reference updates everything compiled.

This GitHub PR shows the result of the migration.

Problems & Aftermath

In my library I still used the old way to access configuration via the ConfigurationManager class (referenced via the official NuGet package). This API is not supported on every platform (e.g. Azure Functions), so I needed to tweak those code parts to use environment variables via System.Environment (this is OK in my example, but there are other options as well).
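
The change itself was pretty small. The setting name here is made up, but the pattern looked roughly like this:

// before (full .NET Framework, System.Configuration):
// var baseUrl = ConfigurationManager.AppSettings["BaseUrl"];

// after (works on .NET Standard and e.g. Azure Functions):
var baseUrl = Environment.GetEnvironmentVariable("BaseUrl");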

Everything else "just worked" and it was a great experience. I tried the same thing with .NET Core 1.0 and it failed horribly, but this time the migration was more or less painless.

.NET Portability Analyzer

If you are not sure if your code works under .NET Standard or Core just install the .NET Portability Analyzer.

This handy tool will give you an overview of which parts might run without problems under .NET Standard or .NET Core.

.NET Standard 2.0 and .NET Framework

If you are still targeting the full .NET Framework, make sure you use at least .NET Framework version 4.7.2. In theory .NET Standard 2.0 was supposed to work under .NET 4.6.1, but it seems that this didn't end too well.

Hope this helps and encourages you to try a migration to a more modern stack!

Holger Schwichtenberg: Verbesserungen der Performance bei ASP.NET Core

ASP.NET Core MVC and ASP.NET Core WebAPI are 12 to 24 percent more performant than their ASP.NET predecessors, as measurements by the Dotnet-Doktor show.

Norbert Eder: DELL XPS 13: Funktionstasten aktivieren

In the default setting (probably fine for most people, really awful for software developers), the function keys are only the secondary assignment on the keyboard. The primary assignment is used for the multimedia keys. Anyone who works with function keys can't cope with this at all, especially since it also breaks the way the keyboard has worked so far.

Fortunately, this can be changed in the BIOS. Below you can see the default setting.

Dell XPS 13 Bios

Simply select Lock Mode Enable/Secondary in the POST Behavior/Fn Lock Options section, and the function keys can be used again without pressing Fn.

The post DELL XPS 13: Funktionstasten aktivieren appeared first on Norbert Eder.

Code-Inside Blog: Improving Code

TL;DR;

Things I learned:

  • long one-liners are hard to read and understand
  • split up your code into small, easy to understand functions
  • less "plumbing" (read: infrastructure code) is better
  • get indentation right
  • “Make it correct, make it clear, make it concise, make it fast. In that order.” Wes Dyer

Why should I bother?

Readable code is:

  • easier to debug
  • fast to fix
  • easier to maintain

The problem

Recently I wanted to implement an algorithm for a project we are doing. The goal was to create a so-called “Balanced Latin Square”, we used it to prevent ordering effects in user studies. You can find a little bit of background here and a nice description of the algorithm here.

It’s fairly simple, although it is not obvious how it works, just by looking at the code. The function takes an integer as an argument and returns a Balanced Latin Square. For example, a “4” would return this matrix of numbers:

1 2 4 3 
2 3 1 4 
3 4 2 1 
4 1 3 2 

And there is a little twist: if your number is odd, you need to reverse every row and append the reversed rows to your result.

After I created my implementation, I had an idea how to simplify it. At least I thought it was simpler ;)

First attempt - Loops

Based on the description and a Python version of that algorithm, I created a classical (read “imperative”) implementation.

So this is the C# Code:

public List<List<String>> BalancedLatinSquares(int n)
{
    var result = new List<List<String>>() { };
    for (int i = 0; i < n; i++)
    {
        var row = new List<String>();
        for (int j = 0; j < n; j++)
        {
            var cell = ((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n;
            cell++; // start counting from 1
            row.Add(cell.ToString());
        }
        result.Add(row);
    }
    if (n % 2 == 1)
    {
        var reversedResult = result.Select(x => x.AsQueryable().Reverse().ToList()).ToList();                
        result.AddRange(reversedResult);
    }
    return result;
}

I also wrote some simple unit tests to ensure this works. But in the end, I really didn't like this code. It contains two nested loops and a lot of plumbing code. There are four lines alone just to create the result object (list) and to add the values to it. Recently I looked into functional programming and since C# also has some functional-inspired features, I tried to improve this code with some functional goodness :)

Second attempt - Lambda Expressions

public List<List<String>> BalancedLatinSquares(int n)
{
    var result = Enumerable.Range(0, n)
        .Select(i =>
                Enumerable.Range(0, n).Select(j => ((((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n)+1).ToString()).ToList()
            )
        .ToList();     
    
    if (n % 2 == 1)
    {
        var reversedResult = result.Select(x => x.AsQueryable().Reverse().ToList()).ToList();
        result.AddRange(reversedResult);
    }
    return result;
}

This is the result of my attempt to use some functional features. And hey, it is much shorter, therefore it must be better, right? Well, I posted a screenshot of both versions on Twitter and asked which one people prefer. As it turned out, a lot of folks actually preferred the loop version. But why? Looking back at my code I saw two problems when looking at this line:

Enumerable.Range(0, n).Select(j => ((((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n)+1).ToString()).ToList()

  • I squeezed a lot of code into this one-liner. This makes it harder to read and therefore harder to understand.
  • Another issue is that I omitted descriptive variable names since they are not needed anymore. Oh, and I removed the only comment I wrote, since this comment would not fit in the one line of code :)

So, shorter is not always better.

Third attempt - better Lambda Expressions

The smart folks on Twitter had some great ideas about how to improve my code.

The first step was to get rid of the unholy one-liner. You can - and should - always split up your code into smaller, meaningful code blocks. I pulled out the calculateCell function and out of that I also extracted an isEven function. The nice thing is that the function names also work as a kind of documentation of what's going on.

By returning IEnumerable instead of lists, I was able to remove some .ToList() calls. Also, I was able to shorten the code that creates the reversedResult.

Another simple step to improve readability is to get line indentation right. Personally, I don’t care which indentation style people are using, as long as it’s used consistently.

public static IEnumerable<IEnumerable<int>> GenerateBalancedLatinSquares(int n)
{
    bool isEven (int i) => i % 2 == 0;        
    int calculateCell(int j, int i) =>((isEven(j) ? n - j / 2 : j / 2 + 1) + i) % n + 1;
    
    var result = Enumerable
                    .Range(0, n)
                    .Select(row =>
                        Enumerable
                            .Range(0, n)
                            .Select(col =>calculateCell(col,row))
                    );     
    
    if (isEven(n) == false)
    {
        var reversedResult = result.Select(x => x.Reverse());                
        result = result.Concat(reversedResult);
    }        
    return result;
}

I think there is room for further improvement. For the calculateCell function I am using the ?: conditional operator. It allows you to write very compact code; on the other hand, it's also harder to read. If you replaced it with an if statement you would need more lines of code, but you would also have more space to add comments. Functional languages like Scala, F#, and Haskell provide a neat match expression that could help here.
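
Just as a sketch (not necessarily better, but with room for comments), the calculateCell helper from above could be written with an if statement like this:

int calculateCell(int col, int row)
{
    int cell;
    if (isEven(col))
    {
        // even columns count down from n
        cell = n - col / 2;
    }
    else
    {
        // odd columns count up from 1
        cell = col / 2 + 1;
    }

    return (cell + row) % n + 1;
}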

Extra: What does this algorithm look like in other languages?

Python

def balanced_latin_squares(n):
    l = [[((j/2+1 if j%2 else n-j/2) + i) % n + 1 for j in range(n)] for i in range(n)]
    if n % 2:  # Repeat reversed for odd n
        l += [seq[::-1] for seq in l]
    return l

I took this sample from Paul Grau.

Haskell

Thank you Carsten

Holger Schwichtenberg: Welche Dateisystem- und Druckerfreigaben gibt es und wer kann sie nutzen?

A file system or printer share is quickly created and also quickly forgotten. To make sure there are no orphaned shares or shares with too many access rights, the Dotnet-Doktor has written a PowerShell script.

Code-Inside Blog: Easy way to copy a SQL database with Microsoft SQL Server Management Studio (SSMS)

How to copy a database on the same SQL server

The scenario is pretty simple: We just want a copy of our database, with all the data and the complete scheme and permissions.

1. step: Make a backup of your source database

Click on the desired database and choose “Backup” under tasks.

2. step: Use copy only or use a full backup

In the dialog you may choose a “copy-only” backup. With this option the regular backup chain will not be disturbed.

3. step: Use “Restore” to create a new database

This is the most important point here: to avoid fighting with database file names, use the “Restore” option. Don’t create the new database manually - creating it is part of the restore operation.

4. step: Choose the copy-only backup and choose a new name

In this dialog you can name the “copy” database and choose the copy-only backup from the source database.

Now click ok and you are done!

Behind the scenes

This restore operation works way better for copying a database than overwriting an existing database, because the restore operation will adjust the file names.

Further information

I’m not a DBA, but when I follow these steps I normally have nothing to worry about if I want a 1:1 copy of a database. This can also be scripted, but then you may need to worry about filenames.
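
If you want to script it, the copy can be done with two T-SQL statements, for example from a small C# console program. The following is only a rough sketch: the connection string, file paths and the logical file names (SourceDb and SourceDb_log) are placeholders you would have to adjust, e.g. by checking RESTORE FILELISTONLY first.

using System;
using System.Data.SqlClient;

class CopyDatabase
{
    static void Main()
    {
        // Placeholder connection string - adjust server and credentials.
        var connectionString = "Server=.;Integrated Security=true";

        // COPY_ONLY backup of the source database, then restore it under a new name.
        // The logical file names (SourceDb, SourceDb_log) are assumptions - verify them
        // with: RESTORE FILELISTONLY FROM DISK = N'C:\temp\SourceDb.bak'
        var script = @"
            BACKUP DATABASE [SourceDb]
                TO DISK = N'C:\temp\SourceDb.bak'
                WITH COPY_ONLY, INIT;

            RESTORE DATABASE [SourceDbCopy]
                FROM DISK = N'C:\temp\SourceDb.bak'
                WITH MOVE N'SourceDb'     TO N'C:\temp\SourceDbCopy.mdf',
                     MOVE N'SourceDb_log' TO N'C:\temp\SourceDbCopy_log.ldf';";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(script, connection))
        {
            command.CommandTimeout = 0; // backups and restores can take a while
            connection.Open();
            command.ExecuteNonQuery();
            Console.WriteLine("Database copied.");
        }
    }
}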

This stackoverflow question is full of great answers!

Hope this helps!

Golo Roden: An enum for JavaScript

JavaScript has no enum data type. An object whose properties are assigned numeric values is an easy workaround. In fact, however, this is hardly better than using hard-coded strings.

Golo Roden: No bind for lambda expressions

Lambda expressions simplify the handling of this in JavaScript, but they come with a few peculiarities that can be unexpected. One of them is that it is not possible to re-bind the value of this of a lambda expression.

Jürgen Gutsch: Live streaming ideas

With this post, I'd like to share some ideas about two live streaming shows with you. It would be cool to get some feedback, especially from the German-speaking readers. The first idea is about a German-speaking .NET Developer Community Standup and the second one is about a live coding stream (English or German), both hosted on Google Hangouts.

A German speaking .NET Developer Community Standup

Since the beginning of the ASP.NET Community Standup, I have watched this show more or less regularly. I think I missed only two or three shows. Because of the different time zone it is almost impossible to watch the live stream. Anyway, I really like the format of that show.

Also, for a few years now the number of user group attendees has been decreasing. In my user group sometimes only two or three attendees show up, even if we have a lot more registrations via Meetup. We (Olivier Giss and I) have kinda fun hosting the user group, but it is also hard to put much effort into it for just a handful of loyal attendees. For a while now we have been recording the sessions using Skype for Business or Google Hangouts and pushing them to YouTube. This gives some more folks the chance to see the talks. We thought a lot about the reasons and tried to change some things to get more attendees, but that didn't really work.

This is the reason why I have been thinking out loud about a .NET Developer Community Standup for the German-speaking region (Germany, Austria and Switzerland) for months.

I'd like to find two more people to join the team to host the show. It would be cool to have a person from Austria as well as one from Switzerland. Since I'm a Swiss MVP, I could also take over the Swiss part on their behalf ;-) In that case I would like to have another person from Germany. One host per country would be cool.

Three hosts is a nice number, and it wouldn't be necessary for every host to be available every time we do a live stream. Anyone interested in joining the team?

To keep it simple I'd also use Google Hangouts to stream the show, and it is not necessary to have high-end streaming equipment. A good headset and a good internet connection should be enough.

In the show I would like to go through some interesting community and technology news, talk about some random stuff, and I'd also like to invite special guests who can show us things they did or who would like to talk about specific topics. This should be a relaxed show about interesting technology and community stuff. I'd also like to give community leads the chance to talk about their work and their events.

What do you think about that? Are you interested?

If yes, I would set up a GitHub repo to collect ideas and topics to talk about.

Live Coding via Live Stream on Google Hangouts

Another idea is inspired by Jeff Fritz's live stream on Twitch called "Fritz and Friends". The recorded streams are published to YouTube afterwards. I really like this live stream, even if it's a completely different kind of video to watch. Jeff is permanently in discussion with the users in the chat while working on his projects. This is kinda weird and makes the show a little nervous, but it is also really interesting. The really cool thing is that he accepts pull requests from his audience and discusses their changes with the audience while working on his project.

I would do such a live stream as well; there are a few projects I would like to work on:

  • LightCore 2.0
    • An alternative DI container for .NET and .NET Core projects
    • Almost done, but needs to be finalized.
    • Maybe you folks want to add more features or some optimizations
  • Working on the GraphQL middleware for ASP.NET Core
  • Working on health checks for ASP.NET and ASP.NET Core
    • Including a health check application provided in the same way IdentityServer is provided to ASP.NET projects: mainly as a single but extendable library and an optional UI to visualize the health of the connected services.
  • Working on a developer community platform like the portal Microsoft planned to release last year?
    • Unfortunately Microsoft retired that project. It would make more sense anyway if this project were built and hosted by the community itself.
    • So this would be a great way to create such a developer community platform

Maybe it also makes sense to invite a special guest to talk about specific topics while working on the project, e.g. inviting Dominick Baier to implement authentication in the developer community platform.

What if I did the same thing? Are you interested? What would be the best language for that kind of live stream?

If you are interested, I would also set up a GitHub repo to collect ideas and topics to talk about, and I would set up additional repos per project.

What do you think?

Do you like these ideas? Do you have any other ideas? Please drop me a comment and share your thoughts :-)

Golo Roden: Enumerating objects in JavaScript

JavaScript has no forEach loop for objects. However, using modern language features such as the Object.entries function and the for-of loop makes it easy and elegant to build such a loop yourself.

Uli Armbruster: Microservices don't like thinking in classic entities

I recently had a very special aha moment in a workshop with Udi Dahan, CEO of Particular. His example was about the classic entity of a customer.

Implementing microservices means cutting business concerns cleanly and packing them into independent silos (or pillars). Each silo must have sovereignty over its own data, on which it maps the associated business processes. So far, so good. But how can this be accomplished in the case of a customer, which is classically modeled as shown in the screenshot? Different properties are needed or changed by different microservices.

If the same entity is used in all silos, there has to be a corresponding synchronization between the microservices. This has considerable effects on scalability and performance. In an application with frequent parallel changes to an entity, business processes will fail more often - or, in the worst case, lead to inconsistencies.

The classic customer entity

Udi suggests the following modeling:

New modeling of a customer

The customer is modeled through independent entities

To identify which data belongs together, Udi suggests an interesting approach:

Ask the business department whether changing one property has an effect on another property.

Would changing the last name have an influence on the price calculation? Or on the kind of marketing?
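
As a rough illustration (my own sketch, not from Udi's workshop material), the split could look like this in code: each service owns its own small entity, and the only thing the services share is the customer ID.

using System;

// Sketch only: hypothetical entities, one per service, sharing nothing but the customer ID.

// Owned by the customer master data service.
public class CustomerMasterData
{
    public Guid CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Address { get; set; }
}

// Owned by the price calculation service.
public class PriceCalculationProfile
{
    public Guid CustomerId { get; set; }
    public decimal DiscountRate { get; set; }
    public string PriceList { get; set; }
}

// Owned by the marketing service for existing customers.
public class MarketingProfile
{
    public Guid CustomerId { get; set; }
    public bool NewsletterSubscribed { get; set; }
    public string Segment { get; set; }
}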

Now the problem of aggregation still has to be solved, i.e. when I want to display data from different microservices together in one view. Classically, there would now be a table with the columns

ID_Kunde | ID_Kundenstamm | ID_Bestandskundenmarketing | ID_Preiskalkulation

This, however, leads to two problems:

  1. The table has to be extended every time a new microservice is added.
  2. If a microservice covers the same functionality in the form of different data, several columns would have to be added per microservice and NULL values would have to be allowed.

An example of the second point would be a microservice that covers payment methods. In the beginning, for example, there were only credit card and direct debit. Then PayPal followed, and a short time later Bitcoin. The microservice would have several tables for this, where it would keep the individual data for each payment method. In the aggregation table shown above, however, a column would have to be filled for every payment method the customer uses. If he doesn't use one, NULL would be written. You can already tell: that smells.

A different approach is much better suited here. Which one that is, and how it can be implemented technically, can be found in Particular's GitHub repository.

Golo Roden: An asynchronous 'map' for JavaScript

The 'map' function in JavaScript always works synchronously and has no asynchronous counterpart. However, since 'async' functions are transformed by the compiler into synchronous functions that return promises, 'map' can be combined with 'Promise.all' to achieve the desired effect.

Jürgen Gutsch: Configuring HTTPS in ASP.NET Core 2.1

Finally HTTPS gets into ASP.NET Core. It was there before, back in 1.1, but it was kinda tricky to configure. It was available in 2.0, but not configured by default. Now it is part of the default configuration and pretty visible and present to the developers who create a new ASP.NET Core 2.1 project.

So the title of this blog post is a little misleading, because you don't need to configure HTTPS - it already is configured. So let's have a look at how it is configured and how it can be customized. First, create a new ASP.NET Core 2.1 web application.

Did you already install the latest .NET Core SDK? If not, go to https://dot.net/ to download and install the latest version for your platform.

Open a console and cd to your favorite location to play around with new projects. It is C:\git\aspnet\ in my case.

mkdir HttpSecureWeb && cd HttpSecureWeb
dotnet new mvc -n HttpSecureWeb -o HttpSecureWeb
dotnet run

These commands will create and run a new application called HttpSecureWeb. And you will see HTTPS for the first time in the console output when running a newly created ASP.NET Core 2.1 application:

There are two different URLs Kestrel is listening on: https://localhost:5001 and http://localhost:5000

If you go to the Configure method in the Startup.cs, there are some new middlewares used to prepare this web application to use HTTPS:

In the Production and Staging environment mode there is this middleware:

app.UseHsts();

This enables HSTS (HTTP Strict Transport Security), a security policy sent via a response header that helps to avoid man-in-the-middle and protocol downgrade attacks. It tells the browser to access the specific host only via HTTPS for a specific time range, and certificate errors can no longer be clicked away for that host. (More about HSTS)
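
If you want to tweak the defaults, HSTS can be configured in ConfigureServices. A minimal sketch with example values (not recommendations):

services.AddHsts(options =>
{
    // Example values only - choose a MaxAge that fits your site.
    options.MaxAge = TimeSpan.FromDays(60);
    options.IncludeSubDomains = true;
    options.Preload = false;
    options.ExcludedHosts.Add("staging.example.com"); // hypothetical host
});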

The next new middleware redirects all requests without HTTPS to use the HTTPS version:

app.UseHttpsRedirection();

If you call http://localhost:5000, you get redirected immediately to https://localhost:5001. This makes sense if you want to enforce HTTPS.
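
The redirection can also be configured in ConfigureServices, e.g. the redirect status code and the HTTPS port. Again, just a sketch with example values:

services.AddHttpsRedirection(options =>
{
    // 307 keeps the HTTP method on redirect; 301/308 would be permanent redirects.
    options.RedirectStatusCode = StatusCodes.Status307TemporaryRedirect;
    options.HttpsPort = 5001;
});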

So from the ASP.NET Core perspective everything is done to run the web application using HTTPS. Unfortunately, the certificate is missing. For production you need to buy a valid trusted certificate and install it in the Windows certificate store. For development, you are able to create a development certificate using Visual Studio 2017 or the .NET CLI. VS 2017 creates a certificate for you automatically.

Using the .NET CLI tool "dev-certs" you are able to manage your development certificates, like exporting them, cleaning all development certificates, trusting the current one and so on. Just type the following command to get more detailed information:

dotnet dev-certs https --help

On my machine I trusted the development certificate so that I don't get the ugly error screen in the browser about an untrusted certificate and an insecure connection every time I want to debug an ASP.NET Core application. This works quite well:

dotnet dev-certs https --trust

This command trusts the development certificate, by adding it to the certificate store or to the keychain on Mac.

On Windows you should use the certificate store to register HTTPS certificates. This is the most secure way on Windows machines. But I also like the idea of storing the password-protected certificate directly in the web folder or somewhere on the web server. This makes it pretty easy to deploy the application to different platforms, because Linux and Mac use different ways to store certificates. Fortunately, there is a way in ASP.NET Core to create an HTTPS connection using a certificate file which is stored on the hard drive. ASP.NET Core is completely customizable. If you want to replace the default certificate handling, feel free to do it.

To change the default handling, open the Program.cs and take a quick look at the code, especially at the method CreateWebHostBuilder:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>();

This method creates the default WebHostBuilder. It has a lot of stuff preconfigured, which works great in most scenarios. But it is possible to override all of the default settings here and to replace them with custom configurations. We need to tell the Kestrel web server which host and port it needs to listen on, and we are able to configure the ListenOptions for specific ports. In these ListenOptions we can use HTTPS and pass in the certificate file and a password for that file:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps("certificate.pfx", "topsecret");
            });
        })
        .UseStartup<Startup>();

Usually we would read these values from a configuration file or from environment variables instead of hardcoding them.

Be sure the certificate is password protected using a long password or, even better, a passphrase. Be sure not to store the password or the passphrase in a configuration file. In development you should use the user secrets to store such secret data, and in production the Azure Key Vault could be an option.
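
As a minimal sketch of how this could look: the configuration keys HttpsCertificate:Path and HttpsCertificate:Password are made-up names, and the password would come from the user secrets or an environment variable, which both surface through IConfiguration.

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel((context, options) =>
        {
            // Hypothetical keys: the path may live in appsettings.json,
            // the password in the user secrets or an environment variable.
            var certPath = context.Configuration["HttpsCertificate:Path"];
            var certPassword = context.Configuration["HttpsCertificate:Password"];

            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps(certPath, certPassword);
            });
        })
        .UseStartup<Startup>();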

Conclusion

I hope this helps to give you a rough overview of the usage of HTTPS in ASP.NET Core. This is not really a deep dive, but it tries to explain what the new middlewares are good for and how to configure HTTPS for different platforms.

BTW: I just saw in the blog post about the HTTPS improvements and HSTS in ASP.NET Core that there is a way to store the HTTPS configuration in the launchSettings.json. This is an easy way to pass environment variables to the application on startup. The samples also show how to add the certificate password to this settings file. Please never ever do this! Such a file is easily shared to a source code repository or in other ways, so the password inside is shared as well. Please use different mechanisms to set passwords in an application, like the already mentioned user secrets or the Azure Key Vault.

Uli Armbruster: Sources for Defensive Design and Separation of Concerns are now online

The source code for my talks at the Karlsruher Entwicklertage and the DWX is now online:

  • The Super Mario kata with a focus on defensive design can be found here.
  • The checksum kata with a focus on separation of concerns can be found here.

On both pages you will also find the links to the PowerPoint slides. In July 2018 I will additionally publish the code of the checksum kata in the form of iterations and release both talks as YouTube videos.

Defensive Design talk at the DWX

If you want to use parts of this in your own talks, if you would like a training on the topic, or if you want me to give a talk about it in your community, contact me via the channels listed on GitHub.

Stefan Henneken: IEC 61131-3: The generic data type T_Arg

In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be effectively used. In the example described, a function compares two variables to determine whether the data type, data length and content are exactly the same. Instead of implementing a separate function for each data type, the same requirements can be implemented much more elegantly with only one function using data type ANY.

Some time ago, I had a similar task. A method had to be developed that accepts any number of parameters. Both the data type and the number of parameters were arbitrary.

During my first attempt to find a solution, I tried to use a variable-length array of type ARRAY [*] OF ANY. However, variable-length arrays can only be used as VAR_IN_OUT and the data type ANY only as VAR_INPUT (see also IEC 61131-3: Arrays with variable length). This approach was therefore ruled out.

As an alternative to data type ANY, structure T_Arg is also available. T_Arg is declared in the TwinCAT library Tc2_Utilities and, in contrast to ANY, is also available at TwinCAT 2. The structure of T_Arg is similar to the structure used for the data type ANY (see also The wonders of ANY).

TYPE T_Arg :
STRUCT
  eType   : E_ArgType   := ARGTYPE_UNKNOWN; (* Argument data type *)
  cbLen   : UDINT       := 0;               (* Argument data byte length *)
  pData   : UDINT       := 0;               (* Pointer to argument data *)
END_STRUCT
END_TYPE

T_Arg can be used at any place, including in the VAR_IN_OUT range.

The following function adds any number of values whose data types can also be arbitrary. The result is returned as LREAL.

FUNCTION F_AddMulti : LREAL
VAR_IN_OUT
  aArgs : ARRAY [*] OF T_Arg;
END_VAR
VAR
  nIndex : DINT;
  aUSINT : USINT;
  aUINT  : UINT;
  aINT   : INT;
  aDINT  : DINT;
  aREAL  : REAL;
  aLREAL : LREAL;
END_VAR

F_AddMulti := 0.0;
FOR nIndex := LOWER_BOUND(aArgs, 1) TO UPPER_BOUND(aArgs, 1) DO
  CASE (aArgs[nIndex].eType) OF
    E_ArgType.ARGTYPE_USINT:
      MEMCPY(ADR(aUSINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUSINT;
    E_ArgType.ARGTYPE_UINT:
      MEMCPY(ADR(aUINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUINT;
    E_ArgType.ARGTYPE_INT:
      MEMCPY(ADR(aINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aINT;
    E_ArgType.ARGTYPE_DINT:
      MEMCPY(ADR(aDINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aDINT;
    E_ArgType.ARGTYPE_REAL:
      MEMCPY(ADR(aREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aREAL;
    E_ArgType.ARGTYPE_LREAL:
      MEMCPY(ADR(aLREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aLREAL;
  END_CASE
END_FOR

However, calling the function is somewhat more complicated than with the data type ANY.

PROGRAM MAIN
VAR
  sum    : LREAL;
  args   : ARRAY [1..4] OF T_Arg;
  a      : INT := 4567;
  b      : REAL := 3.1415;
  c      : DINT := 7032345;
  d      : USINT := 13;
END_VAR

args[1] := F_INT(a);
args[2] := F_REAL(b);
args[3] := F_DINT(c);
args[4] := F_USINT(d);
sum := F_AddMulti(args);

The array passed to the function must be initialized first. The library Tc2_Utilities contains helper functions that convert a variable into a structure of type T_Arg (F_INT(), F_REAL(), F_DINT(), …). The function for adding the values has only one input variable of type ARRAY [*] OF T_Arg.

The data type T_Arg is used, for example, in the function block FB_FormatString() or in the function F_FormatArgToStr() of TwinCAT. The function block FB_FormatString() can replace up to 10 placeholders in a string with values of PLC variables of type T_Arg (similar to fprintf in C).

An advantage of ANY is the fact that the data type is defined by the IEC 61131-3 standard.

Even if the generic data types ANY and T_Arg do not correspond to the generics in C# or the templates in C++, they still support the development of generic functions in IEC 61131-3. These can now be designed in such a way that the same function can be used for different data types and data structures.

Norbert Eder: One tech stack for microservices is enough

There are numerous books and even more articles about microservices. In them you can find many experiences, tips, opinions and plenty of advantages. One advantage that comes up frequently is the freedom to choose the language/platform used. I would like to take a closer look at this in this post.

Basic statement

A microservice is self-contained. It runs in a suitable environment, alone or together with other microservices. Each microservice can be developed for a different platform/programming language (C++, Java, .NET, Haskell, Rust, etc.).

The advantages:

  • The implementation can be done with the platform/programming language that maps the requirements best.

  • Software developers no longer have to learn the same platform/programming language, but can implement a microservice with the knowledge they already have. This saves time and offers advantages when searching for new developers.

What do these advantages mean in reality?

Feasibility/realism

Not having to take a specific platform into account when looking for employees sounds good, especially in times when every software developer is fought over like never before.

For the company (and subsequently also for the employees), however, a different picture emerges:

  • New platforms require additional know-how. A development machine is set up quickly, but production operation has different requirements regarding administration, configuration, security and so on. Additional costs and risks arise.

  • Who maintains the software and continues its development when the original developer is no longer available? Illness, an accident or a resignation can cause considerable effort for the company through the loss of know-how. It is not uncommon for software to be redeveloped because that is cheaper overall.

  • To be able to handle topics such as performance optimizations, deep knowledge of the technology stack is required. Building up the same degree of depth across several platforms and staying up to date at the same time is a laborious and therefore cost-intensive undertaking - and in the long run usually not feasible.

A look at the distribution of companies in Austria shows that, according to the WKO, 99.8% of companies in 2017 were SMEs (0-249 employees). The picture is similar in Germany: according to Statista, it is 99.6% there.

Large international corporations have an advantage because of the resources they have available. Specialized software companies at the upper end of that range, which are also on the market with a single product, could manage it as well. Everyone else should keep their hands off it.

Conclusion

Not every advantage mentioned turns out to be one. Microservices have their right to exist. Being able to build individual services on different platforms can actually be useful and helpful in some cases. However, this has to be questioned in terms of economic viability and should by no means be a decision that every developer in the company can make freely.

The post Ein Tech-Stack für Microservices ist genug first appeared on Norbert Eder.

David Tielke: #DWX2018 - Contents of my sessions, workshops and TV and radio interviews

From June 25 to June 28, the Developer Week 2018 took place in Nuremberg again this year. For four days, the NCC Ost of the Nuremberg exhibition center opened its doors to welcome thousands of knowledge-hungry developers, who could learn about software development in sessions and workshops. As every year, I was responsible for the content of the two tracks on software quality and software architectures as track chair. In addition to shaping the program, I was also allowed to become active myself and pass on my knowledge to the attendees in a total of four sessions, one evening event and one workshop.


Here is an overview of my contributions:

  • Session: Effective architectures with workflows
  • Session: Architecture for practice 2.0 (substitute talk)
  • Session: Metrics - how good is your software?
  • Session: Testing Everything
  • Evening event: SmartHome - the house of the future!
  • Workshop: Architecture for practice 2.0

TV interview with BR / ARD:


In connection with my talk on the topic of "Smarthome" on Monday evening, I was interviewed about the topic in advance by various media outlets such as BR, ARD, Nürnberger Nachrichten and Radio Gong. The BR's report is still available in their media library.

Contents of my workshops and sessions

As discussed with the attendees in my sessions, I am now providing all relevant materials of my sessions here. These contents include my code samples from Visual Studio, my notes from OneNote as PDF and, above all, my articles from my dotnetpro column "Davids Deep Dive" on this topic:

The password for both areas was announced at the conference and can alternatively be requested from me by email.

See you at the Developer Week 2019!

Next year the Developer Week will open its doors again, and I am already looking forward to it! I would like to take this opportunity to thank all attendees of my sessions once again for the great atmosphere and the interesting discussions - it was great fun once again and, as always, an honor for me. A huge thank you also goes to the organizer Developer Media, who once again put together an even better event than the year before. See you next year!

Golo Roden: Shorthand syntax for the console

The shorthand syntax of ES2015 allows the simplified definition of objects whose values correspond to variables of the same name. This syntax can be used for console output to get output that is easier to read and to trace.

Jürgen Gutsch: Four times in a row

One year later, it is July 1st and I got the email from the Global MVP Administrator: I received the MVP award for the fourth time in a row :)

I'm pretty proud of and honored by that, and I'm really happy to be part of the great MVP community for one more year. I'm also looking forward to the Global MVP Summit next year, to meet all the other MVPs from around the world.

Still not really a fan-boy...!?

I'm also proud of being an MVP, because I never called myself a Microsoft fan-boy. And sometimes I also criticize some tools and platforms built by Microsoft (I feel like a bad boy). But I like most of the development tools built by Microsoft, I like to use the tools and frameworks, and I really like the new and open Microsoft - the way Microsoft now supports more than its own technologies and platforms. I like using VSCode, TypeScript and Webpack to create NodeJS applications. I like VSCode and .NET Core on Linux to build applications on a different platform than Windows. I also like to play around with UWP apps on Windows for IoT on a Raspberry Pi.

There are many more possibilities, many more platforms and many more customers to reach using the current Microsoft development stack. And it is really fun to play with it, to use it in real projects, to write about it in .NET magazines and in this blog, and to talk about it in user groups and at conferences.

In the last year of being an MVP, I also learned that it is kinda fun to contribute to Microsoft's open source projects, to be part of those projects and to see my own work in them. If you like open source as well, contribute to the open source projects. Make the projects better, make the documentation better.

I also need to say Thanks

But I wouldn't have been honored again without such a great development community. I wouldn't continue to contribute to the community without that positive feedback and without those great people. This is why the biggest "Thank You" goes to the development community :)

And like last year, I also need to say "Thank You" to my great family (my lovely wife and my three kids), who support me in spending so much time contributing to the community. I also need to say thanks to YooApplications AG, my colleagues and my boss for supporting me and allowing me to use part of my working time to contribute to the community.

Norbert Eder: The importance of automated software testing

Much has already been written about automated software testing – by me as well. Still, there are so many developers out there who say automated tests only cost money and time, so they would rather put their time into building more functionality – because that is what the customer needs, they think. Some developers even think they are godlike and their code can't have any defects, so there is no need for automated software testing. My experience tells another story though …

Automated testing is so expensive

Many years ago, I was developing a SaaS solution with a team of developers. We created a monolithic service with many features, but we wrote no automated tests. Every three or four months there was a "testing phase": everyone in the company (~ ten people) had to test the software all day long. This lasted for about two weeks and resulted in lists of hundreds of bugs. The following few weeks were used to fix all those bugs; no new features were implemented.

Some years later, we decided to move to a new technology for our user interface (ASP.NET web forms to AngularJS). As part of this change my team got the chance to transform the monolithic application into several microservices. As the lead of this team, I promoted the implementation of automated tests. We wrote unit tests as well as integration tests for our web API endpoints. Even our Angular front-end code had been tested. Each team member had to run all tests before committing changes to our source control.

What do you think was the result of introducing automated tests?

We only had three to four bugs a month reported by colleagues or customers, and we could focus on building new features and a better user experience, improving performance and doing research. No more testing and bug-fixing weeks. That saved us so much money! Unbelievable. And we really loved it.

But writing tests costs time/money

This is the number one argument. Of course, it does! But: The more tests you write and the better you get at it, the less time you need. You will not only learn how to write tests very fast, you’ll also gain a sense for what is important to test.

So, writing unit and/or integration tests costs money. Right now. However, it saves money in the future. Neglecting to write tests saves money now, but costs dearly in the future. Let us have a look into the details.

Costs for writing tests

Again, writing tests takes time. However, every test helps to avoid failures in the future. Of course, bugs can still arise and you have to fix them, but you can do that in a test-driven manner: write a test which asserts the correct behavior, then fix the bug until the test passes. This failure will not bother you anymore in the future.
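
A minimal sketch of such a regression test in C# with xUnit - the InvoiceCalculator class and its discount rule are made up for illustration:

using Xunit;

public class InvoiceCalculator
{
    // Hypothetical business rule: 5% discount from 100 items upwards.
    public decimal CalculateTotal(int quantity, decimal unitPrice)
    {
        var total = quantity * unitPrice;
        return quantity >= 100 ? total * 0.95m : total;
    }
}

public class InvoiceCalculatorTests
{
    [Fact]
    public void Total_includes_discount_for_large_orders()
    {
        // The test describes the expected behavior first; it fails while the
        // bug exists and passes once it is fixed, guarding against regressions.
        var calculator = new InvoiceCalculator();

        var total = calculator.CalculateTotal(quantity: 100, unitPrice: 10m);

        Assert.Equal(950m, total);
    }
}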

Depending on your software, your test coverage and the used types of automated tests, you still may need to perform manual tests as well though.

Costs for not writing tests

If you don't want to deliver untested software, you have to do manual testing to find bugs. For these tests, detailed test cases are necessary. In reality, they hardly exist either if there was a decision against automated testing – costs are the basis of such decisions.

Doing manual tests is very time-consuming. It is reasonable to test manually even if you have automated tests, but the expenditure of time will be much higher if there are no automated tests.

Relying only on manual tests causes some negative effects (and they are costly):

  • The software developer has to try to understand the code again when bugs are found weeks or months after the implementation. The sooner a bug is found, the better.
  • Generally, more bugs will be reported by customers if there is no automated testing. This ties up a lot of resources on the customer's side and of course also at your own company (someone has to file the ticket, try to reproduce the bug, manage development planning, another testing phase has to be planned, a new release has to be scheduled and delivered, release notes have to be communicated and so forth).
  • In the case of unit tests, the software design will be clearer and coupling will be lower. If you work actively with unit tests, you tend to take more care of the software design because you think about the usage of your classes and methods. This leads to less complexity and higher maintainability.

Deployment could be relatively easy in the case of a SaaS solution, but it becomes much more complex (and expensive) if you have a lot of on-premises installations, especially in complex industrial infrastructures.

Positive side effects

I run all tests before committing my changes to the source control. This provides instant feedback and things can get fixed immediately.

Automated tests are an investment in the future of your software as well as your company. Someday a refactoring will be necessary. Automated tests greatly minimize the risk that accidental behavior changes occur, even when the software design or the implementation changes.

There is another cool thing about automated tests: run them with a nightly build and log the execution times into a database. This makes it possible to track the performance of your software. Doing this daily provides great feedback about the impact of the changes made.

Conclusion

In reality, it is much more expensive to neglect writing tests. Some ignore the fact that the costs will be incurred much later. They just see lower costs during development, but this is – in my opinion – a big mistake. My advice is to write unit tests whenever reasonable, as well as integration tests for your entire API.

The post The importance of automated software testing first appeared on Norbert Eder.
