Holger Schwichtenberg: Community conference in Magdeburg in April, from 40 euros per day

The Magdeburg Developer Days are going into their third edition, this time spanning three days, from April 9 to 11, 2018.

Holger Schwichtenberg: GroupBy still does not quite work in Entity Framework Core 2.1 Preview 1

Aggregate operators such as Min(), Max(), Sum() and Average() work, but Count() does not.

Manfred Steyer: Custom Schematics - Part IV: Frictionless Library Setup with the Angular CLI and Schematics

Table of Contents

This blog post is part of an article series.

Thanks a lot to Hans Larsen from the Angular CLI team for reviewing this article.

It's always the same: after npm installing a new library, we have to follow a readme step by step to include it in our application. Usually this involves creating configuration objects, referencing CSS files, and importing Angular modules. As such tasks aren't fun at all, it would be nice to automate this.

This is exactly what the Angular CLI supports beginning with Version 6 (Beta 5). It gives us a new ng add command that fetches an npm package and sets it up with a schematic -- a code generator written with the CLI's scaffolding tool Schematics. To support this, the package just needs to name this schematic ng-add.

In this article, I show you how to create such a package. For this, I'll use ng-packagr and a custom schematic. You can find the source code in my GitHub account.

If you don't have an overview of Schematics yet, you should look up the well-written introduction in the Angular Blog before proceeding here.


To demonstrate how to leverage ng add, I'm using an example with a very simple logger library here. It is complex enough to explain how everything works, but not intended for production. After installing it, one has to import it into the root module using forRoot:

[...]
import { LoggerModule } from '@my/logger-lib';

@NgModule({
  imports: [
    [...],
    LoggerModule.forRoot({ enableDebug: true })
  ],
  [...]
})
export class AppModule { }

As you see in the previous listing, forRoot takes a configuration object. After this, the application can get hold of the LoggerService and use it:

[...]
import { LoggerService } from '@my/logger-lib';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  constructor(private logger: LoggerService) {
    logger.debug('Hello World!');
    logger.log('Application started');
  }
}

To prevent the need for importing the module manually and for remembering the structure of the configuration object, the following sections present a schematic for this.

Schematics is currently an Angular Labs project. Its public API is experimental and may change in the future.

Angular Labs

Getting Started

To get started, you need to install version 6 of the Angular CLI. Make sure to fetch Beta 5 or higher:

npm i -g @angular/cli@~6.0.0-beta

You also need the Schematics CLI:

npm install -g @angular-devkit/schematics-cli

The above mentioned logger library can be found in the start branch of my sample:

git clone https://github.com/manfredsteyer/schematics-ng-add
cd schematics-ng-add
git checkout start

After checking out the start branch, npm install its dependencies:

npm install

If you want to learn more about setting up a library project from scratch, I recommend the resources outlined in the readme of ng-packagr.

Adding an ng-add Schematic

As we have everything in place now, let's add a schematics project to the library. For this, we just need to run the blank schematic in the project's root:

schematics blank --name=schematics

This generates the following folder structure:

Generated Schematic

The folder src/schematics contains an empty schematic. As ng add looks for an ng-add schematic, let's rename it:

Renamed Schematic

In the index.ts file in the ng-add folder we find a factory function. It returns a Rule for code generation. I've adjusted its name to ngAdd and added a line for generating a hello.txt.

import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';

export function ngAdd(): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    tree.create('hello.txt', 'Hello World!');
    return tree;
  };
}

The generation of the hello.txt file represents the tasks for setting up the library. We will replace it later with a respective implementation.
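To make the Tree/Rule abstraction more tangible, here is a deliberately tiny, hypothetical model of what a rule does. MiniTree and MiniRule are illustrative stand-ins only and have nothing to do with the real, much richer types from @angular-devkit/schematics:

```typescript
// A tiny stand-in for the Schematics Tree: a map of file paths to contents.
type MiniTree = Map<string, string>;

// A rule takes a tree and returns the (possibly modified) tree.
type MiniRule = (tree: MiniTree) => MiniTree;

// Mirrors the ngAdd factory above: it returns a rule that creates hello.txt.
function miniNgAdd(): MiniRule {
  return (tree: MiniTree) => {
    tree.set('hello.txt', 'Hello World!');
    return tree;
  };
}

// Applying the rule to an empty "project" creates the file.
const demoTree: MiniTree = new Map();
miniNgAdd()(demoTree);
console.log(demoTree.get('hello.txt')); // prints "Hello World!"
```

The real Tree additionally records whether a change is a create, an overwrite, or a delete, which is what makes dry runs possible.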

As our schematic will be looked up in the collection.json later, we also have to adjust that file:

{
  "$schema": "../node_modules/@angular-devkit/schematics/collection-schema.json",
  "schematics": {
    "ng-add": {
      "description": "Initializes Library",
      "factory": "./ng-add/index#ngAdd"
    }
  }
}

Now, the name ng-add points to our rule -- the ngAdd function in the ng-add/index.ts file.

Adjusting the Build Script

In the current project, ng-packagr is configured to build the library from our sources into the folder dist/lib. The respective settings can be found within the ngPackage node in the package.json. When I mention the package.json here, I'm referring to the one in the project root and not to the generated one in the schematics folder.

To make use of our schematic, we have to make sure it is compiled and copied over to this folder. For the latter task, I'm using the cpr npm package, which we need to install in the project's root:

npm install cpr --save-dev

In order to automate the mentioned tasks, add the following scripts to the package.json:

[...]
"scripts": {
  [...],
  "build:schematics": "tsc -p schematics/tsconfig.json",
  "copy:schematics": "cpr schematics/src dist/lib/schematics --deleteFirst",
  [...]
},
[...]

Also, extend the build:lib script so that the newly introduced scripts are called:

[...]
"scripts": {
  [...]
  "build:lib": "ng-packagr -p package.json && npm run build:schematics && npm run copy:schematics",
  [...]
},
[...]

When the CLI tries to find our ng-add schematic, it looks up the schematics field in the package.json. By definition it points to the collection.json which in turn points to the provided schematics. Hence, let's add this field to our package.json too:

{
  [...],
  "schematics": "./schematics/collection.json",
  [...]
}

Please note that the mentioned path is relative to the folder dist/lib, where ng-packagr copies the package.json over.
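Conceptually, this lookup chain (package.json → collection.json → factory) boils down to following two pointers. The sketch below is illustrative only; the interfaces and the resolveSchematicFactory function are hypothetical stand-ins, not the CLI's actual code:

```typescript
// Hypothetical, simplified shapes of the two files involved in the lookup.
interface PackageJson { schematics?: string; }
interface CollectionJson {
  schematics: { [name: string]: { factory: string; description?: string } };
}

// Resolves the factory reference for a named schematic, as ng add would.
function resolveSchematicFactory(
  pkg: PackageJson,
  loadCollection: (path: string) => CollectionJson,
  name: string
): string | undefined {
  if (!pkg.schematics) { return undefined; }          // no collection declared
  const collection = loadCollection(pkg.schematics);  // follow the pointer
  const entry = collection.schematics[name];          // look up the schematic
  return entry ? entry.factory : undefined;
}

// Example using the values from the listings above:
const pkgJson: PackageJson = { schematics: './schematics/collection.json' };
const collectionJson: CollectionJson = {
  schematics: { 'ng-add': { factory: './ng-add/index#ngAdd' } }
};
console.log(resolveSchematicFactory(pkgJson, () => collectionJson, 'ng-add'));
// prints "./ng-add/index#ngAdd"
```

If the schematics field is missing, the lookup simply yields nothing, which is why adding it to the package.json is essential.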

Test the Schematic Directly

For testing the schematic, let's build the library:

npm run build:lib

After this, move to the dist/lib folder and run the schematic:

schematics .:ng-add

Testing the ng-add schematic

Even though the output mentions that a hello.txt has been generated, you won't find it, because executing a schematic locally performs a dry run by default. To really get the file, set the dry-run switch to false:

schematics .:ng-add --dry-run false

After we've seen that this works, let's generate a new project with the CLI to find out whether our library plays together with the new ng add:

ng new demo-app
cd demo-app
ng add ..\logger-lib\dist\lib

ng add with relative path

Make sure that you point to the dist/lib folder. Because I'm working on Windows, I've used backslashes here. On Linux or Mac, replace them with forward slashes.

When everything worked, we should see a hello.txt.

As ng add currently does not add the installed dependency to your package.json, you should do this manually. This might change in future releases.

Test the Schematic via an npm Registry

As we know now that everything works locally, let's also check whether it works when we install it via an npm registry. For this, we can for instance use verdaccio -- a very lightweight Node.js-based npm registry. You can install it directly via npm:

npm install -g verdaccio

After this, it is started by simply running the verdaccio command:

Running verdaccio

Before we can publish our library to verdaccio, we have to remove the private flag from our package.json or at least set it to false:

{
  [...]
  "private": false,
  [...]
}

To publish the library, move to your project's dist/lib folder and run npm publish:

npm publish --registry http://localhost:4873

Don't forget to point to verdaccio using the registry switch.

Now, let's switch over to the generated demo-app. To make sure our registry is used, create an .npmrc file in the project's root:

@my:registry=http://localhost:4873
This entry causes npm to look up each library with the @my scope in our verdaccio instance.
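The scope matching npm performs here is simple prefix logic. As a hedged illustration (this is not npm's actual implementation; registryFor is a hypothetical helper):

```typescript
// Maps package scopes to registries, like the .npmrc entry above.
const scopeRegistries: { [scope: string]: string } = {
  '@my': 'http://localhost:4873',
};

const defaultRegistry = 'https://registry.npmjs.org';

// Picks the registry for a package name based on its scope, if any.
function registryFor(pkgName: string): string {
  const scope = pkgName.startsWith('@') ? pkgName.split('/')[0] : '';
  return scopeRegistries[scope] || defaultRegistry;
}

console.log(registryFor('@my/logger-lib')); // prints "http://localhost:4873"
console.log(registryFor('left-pad'));       // prints "https://registry.npmjs.org"
```

Everything outside the @my scope keeps resolving against the default registry, so the rest of the install is unaffected.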

After this, we can install our logger library:

ng add @my/logger-lib

ng add

When everything worked, we should find our library in the node_modules/@my/logger-lib folder and the generated hello.txt in the root.

Extend our Schematic

So far, we've created a library with a prototypical ng-add schematic that is automatically executed when installing the library with ng add. As we know that our setup works, let's extend the schematic to set up the LoggerModule as shown in the beginning.

Frankly, modifying existing code in a safe way is a bit more complicated than what we've seen before. But I'm sure we can accomplish this together ;-).

For this endeavour, our schematic has to modify the project's app.module.ts file. The good news is that this is a common task the CLI performs, and hence its schematics already contain the necessary logic. However, at the time of writing, the respective routines were not part of the public API, so we have to fork them.

For this, I've checked out the Angular DevKit and copied the contents of its packages/schematics/angular/utility folder to my library project's schematics/src/utility folder. Because those files are subject to change, I've conserved the current state here.

Now, let's add a Schematics rule for modifying the AppModule. For this, move to our schematics/src/ng-add folder and add a add-declaration-to-module.rule.ts file. This file gets an addDeclarationToAppModule function that takes the path of the app.module.ts and creates a Rule for updating it:

import { Rule, Tree, SchematicsException } from '@angular-devkit/schematics';
import { normalize } from '@angular-devkit/core';
import * as ts from 'typescript';

import { addSymbolToNgModuleMetadata } from '../utility/ast-utils';
import { InsertChange } from '../utility/change';

export function addDeclarationToAppModule(appModule: string): Rule {
  return (host: Tree) => {
    if (!appModule) {
      return host;
    }

    // Part I: Construct path and read file
    const modulePath = normalize('/' + appModule);
    const text = host.read(modulePath);
    if (text === null) {
      throw new SchematicsException(`File ${modulePath} does not exist.`);
    }
    const sourceText = text.toString('utf-8');
    const source = ts.createSourceFile(modulePath, sourceText, ts.ScriptTarget.Latest, true);

    // Part II: Find out what to change
    const changes = addSymbolToNgModuleMetadata(
      source, modulePath, 'imports', 'LoggerModule', '@my/logger-lib',
      'LoggerModule.forRoot({ enableDebug: true })');

    // Part III: Apply changes
    const recorder = host.beginUpdate(modulePath);
    for (const change of changes) {
      if (change instanceof InsertChange) {
        recorder.insertLeft(change.pos, change.toAdd);
      }
    }
    host.commitUpdate(recorder);

    return host;
  };
}

Most of this function has been "borrowed" from the Angular DevKit. It reads the module file and calls the addSymbolToNgModuleMetadata utility function copied from the DevKit. This function finds out what to modify. Those changes are applied to the file using the recorder object and its insertLeft method.
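The recorder pattern itself is easy to model: each InsertChange carries an offset and the text to add, and applying the changes from the highest offset down keeps the lower offsets valid. Here is a self-contained sketch, with MiniInsertChange and applyInserts as hypothetical stand-ins for the DevKit's types:

```typescript
// A minimal stand-in for the DevKit's InsertChange.
interface MiniInsertChange { pos: number; toAdd: string; }

// Applies all insertions to the source text. Sorting descending by position
// means earlier offsets stay valid while we splice from the back.
function applyInserts(source: string, changes: MiniInsertChange[]): string {
  const sorted = [...changes].sort((a, b) => b.pos - a.pos);
  let result = source;
  for (const c of sorted) {
    result = result.slice(0, c.pos) + c.toAdd + result.slice(c.pos);
  }
  return result;
}

const moduleSrc = 'imports: []';
console.log(applyInserts(moduleSrc, [{ pos: 10, toAdd: 'LoggerModule' }]));
// prints "imports: [LoggerModule]"
```

This is why the changes can be computed against the original file's offsets before any of them are applied.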

To make this work, I had to tweak the copied addSymbolToNgModuleMetadata function a bit. Originally, it imported the mentioned Angular module just by mentioning its name. My modified version has an additional parameter which takes an expression like LoggerModule.forRoot({ enableDebug: true }). This expression is put into the module's imports array.

Even though this just takes some minor changes, the whole addSymbolToNgModuleMetadata method is rather long. That's why I'm not printing it here but you can look it up in my solution.

After this modification, we can call addDeclarationToAppModule in our schematic:

import { Rule, SchematicContext, Tree, chain, branchAndMerge } from '@angular-devkit/schematics';
import { addDeclarationToAppModule } from './add-declaration-to-module.rule';

export function ngAdd(): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    const appModule = '/src/app/app.module.ts';
    let rule = branchAndMerge(addDeclarationToAppModule(appModule));
    return rule(tree, _context);
  };
}

Now, we can test our Schematic as shown above. To re-publish it to the npm registry, we have to increase the version number in the package.json. For this, you can make use of npm version:

npm version minor

After re-building it (npm run build:lib) and publishing the new version to verdaccio (npm publish --registry http://localhost:4873), we can add it to our demo app:

Add extended library


An Angular-based library can provide an ng-add Schematic for setting it up. When installing the library using ng add, the CLI calls this schematic automatically. This innovation has a lot of potential and will dramatically lower the entry barrier for installing libraries in the future.

MSDN Team Blog AT [MS]: New Azure Regions

Microsoft today announced a major expansion of its Azure data centers in Europe. Two new Azure regions in Switzerland were announced, two additional regions in Germany, and the go-live of the two completed regions in France. Unlike the existing ones, the two additional regions in Germany will be part of the international Azure data centers. Dedicated data storage in Germany remains possible, as does using the scalability and resilience in combination with Ireland, the Netherlands, France and, soon, Switzerland.

Besides Europe, two regions in the United Arab Emirates were also announced.


Holger Schwichtenberg: First preview version of .NET Core 2.1 & Co.

Just barely within the time frame Microsoft had planned, the first preview version was released on February 27.

Jürgen Gutsch: Creating a chat application using React and ASP.NET Core - Part 5

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks.


In this post I will write about the deployment of the app to Azure App Services. I will use CAKE to build, pack, and deploy the apps, both the identity server and the actual chat app. I will run the build on AppVeyor, which is a free build server for open source projects and works great for projects hosted on GitHub.

I'll not go deep into the AppVeyor configuration; the important topics here are CAKE, Azure, and the app itself.

BTW: SignalR moved to its next version over the last few weeks. It is no longer alpha; the current version is 1.0.0-preview1-final. I updated the version in the package.json and in the ReactChatDemo.csproj. Also, the NPM package name changed from "@aspnet/signalr-client" to "@aspnet/signalr", so I needed to update the import statement in the WebsocketService.ts file as well. After updating SignalR I got some small breaking changes, which were easily fixed. (Please see the GitHub repo to learn about the changes.)

Setup CAKE

CAKE is a build DSL that is built on top of Roslyn to use C#. CAKE is open source and has a huge community that creates a ton of add-ins for it. It also has a lot of built-in features.

Setting up CAKE is easily done. Just open PowerShell and cd to the solution folder. Now you need to download a PowerShell script that bootstraps the CAKE build and loads more dependencies if needed:

Invoke-WebRequest https://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

Later on, you need to run the build.ps1 to start your build script. Now the Setup is complete and I can start to create the actual build script.

I created a new file called build.cake. To edit the file, it makes sense to use Visual Studio Code, because VS Code also has IntelliSense for CAKE. In Visual Studio 2017 you only get syntax highlighting. Currently I don't know of an add-in for VS that enables IntelliSense.

My starting point for every new build script is the simple example from the quick start demo:

var target = Argument("target", "Default");

Task("Default")
  .Does(() =>
{
  Information("Hello World!");
});

RunTarget(target);


The script then gets started by calling build.ps1 in PowerShell:

.\build.ps1
If this is working, I'm able to start hacking the CAKE script. Usually the build steps I use look like this:

  • Cleaning the workspace
  • Restoring the packages
  • Building the solution
  • Running unit tests
  • Publishing the app
    • In the context of non-web application this means packaging the app
  • Deploying the app

To deploy the app I use the CAKE Kudu client add-in, and I need to pass in some Azure App Service credentials. You get these credentials by downloading the publish profile from the Azure App Service; you can just copy them out of the file. Be careful not to save the secrets in a file in the repository. I usually store them in environment variables and read them from there. Because I have two apps (the actual chat app and the identity server), I need to do this twice:

#addin nuget:?package=Cake.Kudu.Client

string  baseUriApp     = EnvironmentVariable("KUDU_CLIENT_BASEURI_APP"),
        userNameApp    = EnvironmentVariable("KUDU_CLIENT_USERNAME_APP"),
        passwordApp    = EnvironmentVariable("KUDU_CLIENT_PASSWORD_APP"),
        baseUriIdent   = EnvironmentVariable("KUDU_CLIENT_BASEURI_IDENT"),
        userNameIdent  = EnvironmentVariable("KUDU_CLIENT_USERNAME_IDENT"),
        passwordIdent  = EnvironmentVariable("KUDU_CLIENT_PASSWORD_IDENT");

var target = Argument("target", "Default");

Task("Clean")
    .Does(() =>
    {
        CleanDirectory("./publish/");
    });

Task("Restore")
    .IsDependentOn("Clean")
    .Does(() =>
    {
        DotNetCoreRestore("./react-chat-demo.sln");
    });

Task("Build")
    .IsDependentOn("Restore")
    .Does(() =>
    {
        var settings = new DotNetCoreBuildSettings
        {
            NoRestore = true,
            Configuration = "Release"
        };
        DotNetCoreBuild("./react-chat-demo.sln", settings);
    });

Task("Test")
    .IsDependentOn("Build")
    .Does(() =>
    {
        var settings = new DotNetCoreTestSettings
        {
            NoBuild = true,
            Configuration = "Release",
            NoRestore = true
        };
        var testProjects = GetFiles("./**/*.Tests.csproj");
        foreach (var project in testProjects)
        {
            DotNetCoreTest(project.FullPath, settings);
        }
    });

Task("Publish")
    .IsDependentOn("Test")
    .Does(() =>
    {
        var settings = new DotNetCorePublishSettings
        {
            Configuration = "Release",
            OutputDirectory = "./publish/ReactChatDemo/",
            NoRestore = true
        };
        DotNetCorePublish("./ReactChatDemo/ReactChatDemo.csproj", settings);

        settings.OutputDirectory = "./publish/ReactChatDemoIdentities/";
        DotNetCorePublish("./ReactChatDemoIdentities/ReactChatDemoIdentities.csproj", settings);
    });

Task("Deploy")
    .IsDependentOn("Publish")
    .Does(() =>
    {
        var kuduClient = KuduClient(baseUriApp, userNameApp, passwordApp);
        var sourceDirectoryPath = "./publish/ReactChatDemo/";
        var remoteDirectoryPath = "/site/wwwroot/";
        kuduClient.ZipUploadDirectory(sourceDirectoryPath, remoteDirectoryPath);

        kuduClient = KuduClient(baseUriIdent, userNameIdent, passwordIdent);
        sourceDirectoryPath = "./publish/ReactChatDemoIdentities/";
        remoteDirectoryPath = "/site/wwwroot/";
        kuduClient.ZipUploadDirectory(sourceDirectoryPath, remoteDirectoryPath);
    });

Task("Default")
    .IsDependentOn("Deploy")
    .Does(() =>
    {
        Information("Your build is done :-)");
    });

RunTarget(target);


To get this script running locally, you need to set each of the environment variables in the current PowerShell session:

$env:KUDU_CLIENT_PASSWORD_APP = "super secret password"
# and so on...
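The CAKE script itself is C#, but the fail-fast pattern for required secrets is language-neutral. As a small TypeScript illustration (requireEnv is a hypothetical helper; only the variable name is taken from the script above, and the environment is passed in explicitly so the sketch is easy to test):

```typescript
// Reads a required secret from the given environment map and fails fast
// if it is missing, instead of silently deploying with empty credentials.
// In Node, you would pass process.env here.
function requireEnv(env: { [name: string]: string | undefined }, name: string): string {
  const value = env[name];
  if (!value) {
    throw new Error('Missing required environment variable: ' + name);
  }
  return value;
}

console.log(requireEnv(
  { KUDU_CLIENT_PASSWORD_APP: 'super secret password' },
  'KUDU_CLIENT_PASSWORD_APP'
));
// prints "super secret password"
```

Failing fast on a missing secret surfaces configuration mistakes at the start of the build instead of in the middle of a deployment.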

If you only want to test the compile and publish steps, just set the dependency of the default target to "Publish" instead of "Deploy". This way the deploy part will not run, you don't deploy by accident, and you save some time while trying this out.

Use CAKE in AppVeyor

On AppVeyor the environment variables are set in the UI. Don't set them in the YAML configuration, because there they are not properly protected and everybody can see them.

The simplest appveyor.yml file looks like this:

version: 1.0.0-preview1-{build}
pull_requests:
  do_not_increment_build_number: true
branches:
  only:
    - master
    - develop
skip_tags: true
image: Visual Studio 2017 Preview
build_script:
  - ps: .\build.ps1
test: off
deploy: off
# this is needed to install the latest node version
environment:
  nodejs_version: "8.9.4"
install:
  - ps: Install-Product node $env:nodejs_version
  # write out version
  - node --version
  - npm --version

This configuration only builds the master and the develop branches, which makes sense if you use Git Flow, as I do. Otherwise change it to just use the master branch or whatever branches you want to build. Tags and all other branches are skipped.

The image is Visual Studio 2017 (the Preview image only if you want to try the latest features).

I can switch off tests, because this is done in the CAKE script. The good thing is that the XUnit test output produced by the test runs in CAKE gets published to the AppVeyor reports anyway. Deploy is also switched off, because it's done in CAKE too.

The last thing that needs to be done is to install the latest Node.js version. Otherwise the preinstalled, pretty outdated version is used. This is needed to download the React dependencies and to run Webpack to compile and bundle the React app.

You could also configure the CAKE script in a way that test, deploy, and build call different targets inside CAKE. But this is not really needed and would make the build a little less readable.

If you now push the entire repository to your repository on GitHub, you need to go to AppVeyor and set up a new build project by selecting your GitHub repository. A new AppVeyor account is easily set up using an existing GitHub account. When the build project is created, you don't need to set up anything more. Just start a new build and see what happens. Hopefully you'll also get a green build like this:

Closing words

This post was finished one day after the Global MVP Summit 2018, on a pretty sunny day in Seattle.

I spent the two nights before the summit started in downtown Seattle, and the two nights after. Both times it was unexpectedly sunny.

I finish this series with this fifth blog post. I learned a little bit about React and how it behaves in an ASP.NET Core project, and I really like it. I wouldn't build a whole single-page application using React -- that seems to be much easier and faster using Angular -- but I will definitely use React in the future to create rich HTML UIs.

It works great using the React ASP.NET Core project in Visual Studio. It is great that Webpack is used here, because it saves a lot of time and avoids hacking around in the VS environment.

MSDN Team Blog AT [MS]: Mobile Developer After-Work #17


Progressive Web Apps, Blockchain and Mixed Reality

Wednesday, March 21, 2018, 17:30 – 21:30
Raum D, Museumsquartier, 1070 Vienna
Register now!

The tech world is currently dominated by three topics. At #mdaw17 you'll find out what's behind them:

  • How do you build a Progressive Web App in 30 minutes?
  • What do blockchains offer for serious business cases?
  • Mixed Reality: the latest trends and a practical example from the arts


17:00 – 17:30: Registration

17:30 – 17:35: Welcome
Andreas Jakl, FH St. Pölten
Helmut Krämer, Tieto

17:35 – 18:10: To a Progressive Web App in 30 minutes (35 min)
Stefan Baumgartner, fettblog.eu
The web as an application platform for mobile devices hasn't quite caught on the way everyone always wanted. There have been many attempts, but just as many have sunk into oblivion again. Progressive Web Applications are meant to change that. Behind them stand a series of web standards, a new mindset and, very importantly, a collaborative approach by all browser and device vendors. In this talk we'll use an example to look at the fastest way to get to a PWA, and which technologies are needed for it.

18:15 – 18:50: Blockchain - proof and trust in the supply chain (35 min)
Sebastian Nigl, Tieto
With companies' growing interest in blockchain technology beyond private finance, the heavily hyped topic has also found its way into the business world. To get from hype to serious business case, many companies now need to find out how blockchain can actually be integrated into their operational processes. Proof and trust in the supply chain, from procurement to the delivery of the finished product, are areas where blockchain has a solution. Using the example of FSC certification, the talk shows what a blockchain solution in logistics can look like.

18:50 – 19:10: Break

19:10 – 19:30: Hands-On Mixed Reality (20 min)
Andreas Jakl, FH St. Pölten
With Google ARCore and Apple ARKit, almost all smartphone users will soon be carrying AR-capable devices with them at all times. On the other side, Windows Mixed Reality makes getting started with VR easier and cheaper than ever. What do the current platforms offer on the technical side? Using short examples, we take a look at developing Mixed Reality apps for the latest platforms with Unity.

19:35 – 20:05: AR in an art context (30 min)
Reinhold Bidner
Augmented Reality has become suitable for the mass market. Beyond games and product presentations, artists are increasingly experimenting in this area as well. In his talk, Reinhold Bidner will talk about some of his first artistic steps with AR: as an independent animation artist, as part of the collective gold extra, and currently as a member of an AR project for the Volkstheater/Volx Margareten called "Vienna - All Tomorrows" (directed by Georg Hobmeier).

20:05: Networking with snacks and drinks


You can reach the event directly via the U2 / U3 lines: Museumsquartier or Volkstheater. The event takes place in Raum D in Quartier 21 of the Museumsquartier in 1070 Vienna.

Raum D / Museumsquartier Wien

Organization & Partners

Many thanks to our wonderful partners! The event is organized by mobility.builders and supported by FH St. Pölten and Tieto. Microsoft provides the great catering. Special thanks also go to ASIFA Austria for the particularly beautiful event location in the Museumsquartier!

Partners: FH St. Pölten, Tieto, ASIFA Austria, Microsoft

Register for #mdaw17

The number of participants is limited – register now for free!

MSDN Team Blog AT [MS]: Computer Vision Hackfest in Paris


For everyone working with computer vision technologies, we are offering a great opportunity in Paris from April 18 – 20 to keep working on your own computer-vision-based scenario / project together with software engineers from CSE (Commercial Software Engineering). Our experts can help you build a prototype of your solution.

As part of the hackfest, we'll show you how to use Microsoft AI/open source technologies, Azure Machine Learning, Cognitive Services and other Microsoft offerings to implement your project.

Since the hackfest will be held in English, the further details are in English as well:

As a part of the Hackfest you will:

  • Hear from Engineering and Microsoft Technical teams on the tools and services to build great Computer Vision solutions.
  • Engage in architecture and coding sessions with Microsoft teams
  • Get to code side-by-side with the Engineering teams to help implement your scenario
  • Learn from the other attendees who will work on different scenarios but on the same set of technologies.

What’s required?

  • Bring a specific business problem and a related dataset where Image Classification and/or Object Detection is required.
  • Commit at least 2 members of your development team to attend the Hackfest and code together with the software engineers
  • Your company must have an Azure subscription

What’s provided?

  • Space to work as a team with Microsoft engineers
  • Coffee breaks + Lunch + Afternoon snacks
  • Network connectivity


  • The hackfest will happen in Paris, in a venue to be confirmed
  • It will start on Monday, April 16th at 10am and will close on Friday, April 20th at 12pm.
  • The event is invitation-based and free.


How do I register myself and my team?

For registrations or details about the Computer Vision Hackfest in Paris, please send an e-mail to Gerhard.Goeschl@Microsoft.com. We will then set up a scoping call with you for your project, so that we can understand it better in advance and support you better at the hackfest.

MSDN Team Blog AT [MS]: Artificial Intelligence Hackfest in Belgien



From April 9 – 13 we are hosting an Artificial Intelligence Hackfest in Belgium. For everyone already working on AI, it is a great opportunity to keep working on your own project together with software engineers from CSE (Commercial Software Engineering). Our experts can help you build a prototype of your AI solution.

As part of the hackfest, we'll show you how to use Azure Machine Learning, Cognitive Services and other Microsoft AI offerings to turn your company data into intelligent insights.

Since the hackfest will be held in English, the further details are in English as well:

As a part of the Hackfest you will:

  • Hear from Engineering and Microsoft Technical Evangelism teams on the tools and services to build great AI solutions.
  • Engage in architecture and coding sessions with Microsoft teams
  • Get to code side-by-side with the Engineering and Technical Evangelist teams to help implement AI scenarios with your apps or sites.

What’s required?

  • Bring a specific business problem and a dataset around which this problem is centered. (briefing document attached)
  • Commit at least 2 members of your development team to attend the Hackfest and work together with the software engineers to build your idea into a working prototype
  • Your company must have an Azure subscription

What’s provided?

  • Network connectivity
  • Coffee breaks + Lunch + Space to work as a team with a Microsoft engineer at the Van Der Valk hotel

For registrations or details about the Artificial Intelligence Hackfest in Belgium, please send an e-mail to Gerhard.Goeschl@Microsoft.com.

Jürgen Gutsch: Creating a chat application using React and ASP.NET Core - Part 4

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks.


My idea for this app is to split the storage between a store for flexible objects and one for immutable objects. The flexible objects are the users and the users' metadata in this case. The immutable objects are the chat messages.

The messages are just stored one by one and will never change. Storing a message doesn't need to be super fast, but reading the messages needs to be as fast as possible. This is why I want to go with Azure Table Storage, one of the fastest storage services on Azure. In the past, at YooApps, we also used it as an event store for CQRS-based applications.

Handling the users doesn't need to be super fast either, because we only handle one user at a time. We don't read all of the users in one go and we don't do batch operations on them. So using SQL storage with IdentityServer4, e.g. on an Azure SQL Database, should be fine.

The users online will be stored in memory only, which is the third storage. Memory is safe in this case because, if the app shuts down, the users need to log on again anyway and the list of online users gets refilled. And it is not even really critical if the list of online users is out of sync with the logged-on users.

This leads into three different storages:

  • Users: Azure SQL Database, handled by IdentityServer4
  • Users online: Memory, handled by the chat app
    • A singleton instance of a user tracker class
  • Messages: Azure Table Storage, handled by the chat app
    • Using the SimpleObjectStore and the Azure Table Storage provider
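The user tracker itself is covered in the GitHub repository rather than in this post; just to illustrate the idea, a minimal sketch of such a singleton could look like this (the class name and members are my assumptions, not the actual implementation):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: a thread-safe, in-memory tracker of online users.
// Registered as a singleton, so all hub instances share the same state.
public class UserTracker
{
    // Maps SignalR connection ids to user names.
    private readonly ConcurrentDictionary<string, string> _users =
        new ConcurrentDictionary<string, string>();

    public void Add(string connectionId, string userName) =>
        _users[connectionId] = userName;

    public void Remove(string connectionId) =>
        _users.TryRemove(connectionId, out _);

    public IEnumerable<string> OnlineUsers => _users.Values.Distinct();
}

// Startup.cs: services.AddSingleton<UserTracker>();
```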

Setup IdentityServer4

To keep the samples easy, I do the logon of the users on the server side only. (I'll go through the SPA logon using React and IdentityServer4 in another blog post.) That means we are validating and using the sender's name on the server side only - in the MVC controller, the API controller and in the SignalR Hub.

It is recommended to set up IdentityServer4 in a separate web application, and we will do it the same way. So I followed the quickstart documentation on the IdentityServer4 web site, created a new empty ASP.NET Core project and added the IdentityServer4 NuGet packages, as well as the MVC and StaticFiles packages. I first planned to use ASP.NET Core Identity with IdentityServer4 to store the identities, but I changed that to keep the samples simple. For now I only use the in-memory configuration you can see in the quickstart tutorials; I'm able to switch to ASP.NET Identity or any other custom SQL storage implementation later on. I also copied the IdentityServer4 UI code from the IdentityServer4.Quickstart.UI repository into that project.

The Startup.cs of the IdentityServer project looks pretty clean. It adds IdentityServer to the service collection and uses the IdentityServer middleware. While adding the services, I also add the configuration for IdentityServer. As recommended and shown in the quickstart, the configuration is wrapped in a Config class, which is used here:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // configure identity server with in-memory stores, keys, clients and scopes
        services.AddIdentityServer()
            .AddDeveloperSigningCredential()
            .AddInMemoryClients(Config.GetClients())
            .AddInMemoryApiResources(Config.GetApiResources())
            .AddInMemoryIdentityResources(Config.GetIdentityResources())
            .AddTestUsers(Config.GetUsers());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment()) app.UseDeveloperExceptionPage();

        // use identity server
        app.UseIdentityServer();
        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}

The next step is to configure the IdentityServer4. As you can see in the snippet above, this is done in a class called Config:

public class Config
{
    public static IEnumerable<Client> GetClients()
    {
        return new List<Client>
        {
            new Client
            {
                ClientId = "reactchat",
                ClientName = "React Chat Demo",

                AllowedGrantTypes = GrantTypes.Implicit,
                RedirectUris = { "http://localhost:5001/signin-oidc" },
                PostLogoutRedirectUris = { "http://localhost:5001/signout-callback-oidc" },

                AllowedScopes = { "openid", "profile" }
            }
        };
    }

    internal static List<TestUser> GetUsers()
    {
        return new List<TestUser>
        {
            new TestUser
            {
                SubjectId = "1",
                Username = "juergen@gutsch-online.de",
                Claims = new[] { new Claim("name", "Juergen Gutsch") },
                Password = "Hello01!"
            }
        };
    }

    public static IEnumerable<ApiResource> GetApiResources()
    {
        return new List<ApiResource>
        {
            new ApiResource("reactchat", "React Chat Demo")
        };
    }

    public static IEnumerable<IdentityResource> GetIdentityResources()
    {
        return new List<IdentityResource>
        {
            new IdentityResources.OpenId(),
            new IdentityResources.Profile(),
        };
    }
}

The client id is called reactchat. I configured both projects, the chat application and the identity server application, to run on specific ports. The chat application runs on port 5001 and the identity server uses port 5002. So the redirect URIs in the client configuration point to port 5001.

Later on we are able to replace this configuration with a custom storage for the users and the clients.

We also need to setup the client (the chat application) to use this identity server.

Adding authentication to the chat app

To add authentication, I need to add some configuration to the Startup.cs. The first thing is to add the authentication middleware to the Configure method. This does all the authentication magic and handles multiple kinds of authentication:
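The actual snippet is missing here; it is presumably the standard one-liner that enables the authentication middleware:

```csharp
// Startup.cs, Configure method - before UseSignalR, UseMvc and UseStaticFiles:
app.UseAuthentication();
```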


Be sure to add this line before the usage of MVC and SignalR. I also put it before the StaticFiles middleware.

Now I need to add and to configure the needed services for this middleware.

services.AddAuthentication(options =>
    {
        options.DefaultScheme = "Cookies";
        options.DefaultChallengeScheme = "oidc";
    })
    .AddCookie("Cookies")
    .AddOpenIdConnect("oidc", options =>
    {
        options.SignInScheme = "Cookies";

        options.Authority = "http://localhost:5002";
        options.RequireHttpsMetadata = false;
        options.TokenValidationParameters.NameClaimType = "name";

        options.ClientId = "reactchat";
        options.SaveTokens = true;
    });

We add cookie authentication as well as OpenID Connect authentication. The cookie is used to temporarily store the user's information, to avoid an OIDC login on every request. To keep the samples simple, I switched off HTTPS.

I need to specify the NameClaimType, because IdentityServer4 provides the user's name in a simpler claim name instead of the long default one.

That's it for the authentication part. We now need to secure the chat. This is done by adding the AuthorizeAttribute to the HomeController. Now the app will redirect to the identity server's login page if we try to access the view created by the secured controller:
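Roughly like this (the AuthorizeAttribute is from the source; the controller body is a minimal assumption):

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace ReactChatDemo.Controllers
{
    [Authorize] // every action of this controller now requires a logged-on user
    public class HomeController : Controller
    {
        public IActionResult Index() => View();
    }
}
```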

After entering the credentials, we need to authorize the app to get the needed profile information from the identity server:

If this is done, we can start using the user's name in the chat. To do this, we need to change the AddMessage method in the ChatHub a little bit:

public void AddMessage(string message)
{
    var username = Context.User.Identity.Name;
    var chatMessage = _chatService.CreateNewMessage(username, message);
    // Call the MessageAdded method to update clients.
    Clients.All.InvokeAsync("MessageAdded", chatMessage);
}

I removed the magic string with my name in it and replaced it with the username I get from the current Context. Now the chat uses the logged-on user to add chat messages:

I'll not go into the user tracker here, to keep this post short. Please follow the GitHub repo to learn more about tracking the users' online state.

Storing the messages

The idea is to keep the messages stored permanently on the server. The current in-memory implementation doesn't survive a restart of the application: every time the app restarts, the memory gets cleared and the messages are gone. I want to use Azure Table Storage here, because it is pretty simple to use and reading from the storage is amazingly fast. We need to add another NuGet package to our app, the Azure Storage client library (WindowsAzure.Storage).

To encapsulate the Azure Storage, I will create a ChatMessageRepository that contains the code to connect to the tables.

Let's quickly set up a new storage account on Azure. Log on to the Azure portal and go to the storage section. Create a new storage account and follow the wizard to complete the setup. After that, you need to copy the storage credentials ("Account Name" and "Account Key") from the portal. We need them to connect to the storage account later on.

Be careful with the secrets

Never ever store secret information in a configuration or settings file that is stored in the source code repository. You don't need to do this anymore, thanks to user secrets and the Azure app settings.

All the secret information and the database connection strings should be stored in the user secrets during development. To set up new user secrets, just right-click the project that needs to use the secrets and choose the "Manage User Secrets" entry from the context menu:

Visual Studio then opens a secrets.json file for that specific project, which is stored somewhere in the current user's AppData folder. You can see the actual location if you hover over the tab in Visual Studio. Add your secret data there and save the file.
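For this project, the secrets.json could look like this; the key names match what the ChatMessageRepository reads from the configuration, the values are placeholders:

```json
{
  "accountName": "<your-storage-account-name>",
  "accountKey": "<your-storage-account-key>",
  "tableName": "<your-table-name>"
}
```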

The data then gets passed as configuration entries into the app:

// ChatMessageRepository.cs
private readonly string _tableName;
private readonly CloudTableClient _tableClient;
private readonly IConfiguration _configuration;

public ChatMessageRepository(IConfiguration configuration)
{
    _configuration = configuration;

    var accountName = configuration.GetValue<string>("accountName");
    var accountKey = configuration.GetValue<string>("accountKey");
    _tableName = _configuration.GetValue<string>("tableName");

    var storageCredentials = new StorageCredentials(accountName, accountKey);
    var storageAccount = new CloudStorageAccount(storageCredentials, true);
    _tableClient = storageAccount.CreateCloudTableClient();
}

On Azure, there is an app settings section in every Azure Web App. Configure the secrets there. These settings get passed as configuration items to the app as well. This is the most secure approach to store the secrets.

Using the table storage

You don't really need to create the actual table using the Azure portal. I create it in code if the table doesn't exist. To do this, I needed to create a table entity class first. This defines the available fields in the Azure Table Storage:

public class ChatMessageTableEntity : TableEntity
{
    public ChatMessageTableEntity(Guid key)
    {
        PartitionKey = "chatmessages";
        RowKey = key.ToString("X");
    }

    public ChatMessageTableEntity() { }

    public string Message { get; set; }

    public string Sender { get; set; }
}
The TableEntity has three default properties: a Timestamp, a unique RowKey (a string) and a PartitionKey (a string). The RowKey needs to be unique within a partition. In a users table, the RowKey could be the user's email address. In our case we don't have a unique value in the chat messages, so we'll use a Guid instead. The PartitionKey is not unique and bundles several items into something like a storage unit. Reading entries from a single partition is quite fast: data inside a partition never gets split across multiple storage locations; it is kept together. In the current phase of the project it doesn't make sense to use more than one partition. Later on it would make sense to use e.g. one partition key per chat room.

The ChatMessageTableEntity has one constructor we will use to create a new entity and an empty constructor that is used by the TableClient to create it out of the table data. I also added two properties for the Message and the Sender. I will use the Timestamp property of the parent class for the time shown in the chat window.

Add a message to the Azure Table Storage

To add a new message to the Azure Table Storage, I created a new method in the repository:

// ChatMessageRepository.cs
public async Task<ChatMessageTableEntity> AddMessage(ChatMessage message)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();

    var chatMessage = new ChatMessageTableEntity(Guid.NewGuid())
    {
        Message = message.Message,
        Sender = message.Sender
    };

    // Create the TableOperation object that inserts the message entity.
    TableOperation insertOperation = TableOperation.Insert(chatMessage);

    // Execute the insert operation.
    await table.ExecuteAsync(insertOperation);

    return chatMessage;
}

This method uses the TableClient created in the constructor.

Read messages from the Azure Table Storage

Reading the messages is done using the method ExecuteQuerySegmentedAsync. With this method it is possible to read the table entities in chunks from the Table Storage. This makes sense, because there is a limit of 1000 table entities per query. In my case I don't want to load all the data anyway, but only the latest 100:

// ChatMessageRepository.cs
public async Task<IEnumerable<ChatMessage>> GetTopMessages(int number = 100)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();

    // Only read from the single partition used for the chat messages.
    string filter = TableQuery.GenerateFilterCondition(
        "PartitionKey", QueryComparisons.Equal, "chatmessages");
    var query = new TableQuery<ChatMessageTableEntity>()
        .Where(filter)
        .Take(number);

    var entities = await table.ExecuteQuerySegmentedAsync(query, null);

    var result = entities.Results.Select(entity =>
        new ChatMessage
        {
            Id = entity.RowKey,
            Date = entity.Timestamp,
            Message = entity.Message,
            Sender = entity.Sender
        });

    return result;
}

Using the repository

In the Startup.cs I changed the registration of the ChatService from singleton to transient, because we don't need to store the messages in memory anymore. I also added a transient registration for the IChatMessageRepository:

services.AddTransient<IChatMessageRepository, ChatMessageRepository>();
services.AddTransient<IChatService, ChatService>();

The IChatMessageRepository gets injected into the ChatService. Since the repository is async, I also need to change the signatures of the service methods a little bit to support the async calls. The service looks cleaner now:

public class ChatService : IChatService
{
    private readonly IChatMessageRepository _repository;

    public ChatService(IChatMessageRepository repository)
    {
        _repository = repository;
    }

    public async Task<ChatMessage> CreateNewMessage(string senderName, string message)
    {
        var chatMessage = new ChatMessage(Guid.NewGuid())
        {
            Sender = senderName,
            Message = message
        };
        await _repository.AddMessage(chatMessage);

        return chatMessage;
    }

    public async Task<IEnumerable<ChatMessage>> GetAllInitially()
    {
        return await _repository.GetTopMessages();
    }
}

Also the controller action and the Hub method need to change to support the async calls. It is only about making the methods async, returning Tasks and awaiting the service methods.

// ChatController.cs
public async Task<IEnumerable<ChatMessage>> InitialMessages()
{
    return await _chatService.GetAllInitially();
}

Almost done

The authentication and the storing of the messages are done now. What needs to be done in the last step is to add the logged-on user to the UserTracker and to push the new user to the client. I'll not cover that in this post, because it already has more than 410 lines and more than 2700 words. Please visit the GitHub repository in the next days to learn how I did this.

Closing words

Even though this post wasn't really about React: the authentication is only done server side, since this isn't really a single page application.

To finish this post, I needed some more time to get the authentication using IdentityServer4 running. I got stuck on an "Invalid redirect URL" error. In the end it was just a small typo in the RedirectUris property of the client configuration of the IdentityServer, but it took some hours to find it.

In the next post I will come back a little bit to React and Webpack while writing about the deployment. I'm going to write about automated deployment to an Azure Web App using CAKE, running on AppVeyor.

I'm attending the MVP Summit next week, so the last post of this series will be written and published from Seattle, Bellevue or Redmond :-)

Holger Sirtl: Use Azure Automation to create Resource Groups despite having only limited permissions

In Azure-related engagements I often observe that Azure users are only assigned the Contributor role on one single Azure Resource Group. In general, the motivation for this is…

  • Users can provision resources within this Resource Group
  • Users are protected from accidentally deleting resources in other Resource Groups (e.g. settings of a Virtual Network, security settings, …)

On the other hand, this approach leads to a few issues:

  • With only having Contributor role on one Resource Group, you cannot create additional Resource Groups (which would be nice to structure your Azure resources).
  • Some Azure Marketplace solutions must be installed into empty Resource Groups. So, if a user has already provisioned resources in “his” Resource Group, he just can’t provision these solutions.


Azure Automation lets you – among other features – configure PowerShell scripts (called Runbooks) that can run under elevated privileges, allowing Contributors to call these Runbooks and perform actions they normally wouldn’t have sufficient permissions for. Such a Runbook can perform the following actions (under the Subscription Owner role):

  1. Create a new Resource Group (name specified by a parameter)
  2. Assign an AD Group (name specified by a parameter) as Contributor to this Resource Group

Necessary steps for implementing the solution

To implement this, the following steps must be taken:

  • Step 1: Create an Azure Automation Account
  • Step 2: Create a Run As Account with sufficient access permissions
  • Step 3: Create and test a new Automation Runbook that creates Resource Groups
  • Step 4: Publish the Runbook

Description of the Solution

Step 1: Create an Azure Automation Account

  • In Azure Portal click on New / Monitoring + Management / Automation


  • Fill in the required parameters and make sure you create a Run As account


  • Confirm with Create.

For more information about creating Azure Automation Accounts see:

Step 2: Create a Run As Account with sufficient access permissions

  • If you haven’t created the Run As Account during the creation of the Azure Automation Account, create one following this description: https://docs.microsoft.com/en-us/azure/automation/automation-create-runas-account

  • Go to your Azure Automation Account

  • Navigate to Connections / AzureRunAsConnection


  • Now you see all the information about the service principal that the Runbook will later run under. Copy the Application ID to the clipboard.


  • Navigate to the Subscription. Type “Subscriptions” into the search field of the portal.


    Click on Subscriptions.

  • Select your Subscription. On the Subscription overview page click Access Control (IAM) / Add.


  • Choose the “Owner”-Role. Enter the Application ID from the clipboard to the Select field and click on the User that gets displayed. That is the service principal from your Azure AD.


  • Confirm with Save.

You have now successfully assigned Subscription Owner rights to the Service Principal created for the Azure Automation RunAs-Account the Runbook will run under.

Step 3: Create and test a new Automation Runbook that creates Resource Groups

Go back to your Automation Account.

  • Select Runbooks and click on Add a Runbook.


  • Select Create a new runbook.


    Fill in the requested fields:
    Name: <a name for your runbook>
    Runbook type: PowerShell
    Description: <some description for the runbook>

  • Confirm with Create

In the editor add the following lines of code


param(
    [Parameter(Mandatory=$true)]
    [string] $ResourceGroupName,

    [Parameter(Mandatory=$true)]
    [string] $RGGroupID
)

$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName

    $tenantID = $servicePrincipalConnection.TenantId
    $applicationId = $servicePrincipalConnection.ApplicationId
    $certificateThumbprint = $servicePrincipalConnection.CertificateThumbprint

    "Logging in to Azure..."
    Login-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $tenantID `
        -ApplicationId $applicationId `
        -CertificateThumbprint $certificateThumbprint

    # Create the new Resource Group
    New-AzureRmResourceGroup -Name $ResourceGroupName -Location 'West Europe'

    # Set the scope to the Resource Group created above
    $scope = (Get-AzureRmResourceGroup -Name $ResourceGroupName).ResourceId

    # Assign Contributor role to the group
    New-AzureRmRoleAssignment -ObjectId $RGGroupID -Scope $scope -RoleDefinitionName "Contributor"
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}

  • Save the Runbook and click on Test Pane.


  • Fill in the two parameter fields (Resource Group Name and Group ID) and click on Start.


This creates the Resource Group and assigns the Contributor role to the Azure AD group specified. You might get an “Authentication_Unauthorized” error message because of a bug that occurs when working with a service principal. Ignore this message, as the script (according to my tests) does the job anyway.

Step 4: Publish the Runbook

  • Close the test pane and click on Publish.


    Confirm with Yes.

  • That’s it. Users can now go to the Automation Account, select the Runbook and click on Start.


  • This opens the form for entering the parameters. After clicking OK, the Resource Group will be created and the group assignment will be done.


Final Words

One extension to this runbook could be to allow the user to enter a group name instead of the group id. This would require one additional step, an AD lookup. See the code here:

$groupID = (Get-AzureRmADGroup -SearchString $RGGroup).Id
New-AzureRmRoleAssignment -ObjectId $groupID -Scope $scope -RoleDefinitionName "Contributor"

This, however, would require giving the service principal (Run As account) access permissions to the Azure AD. That’s something I wanted to avoid here.

Code-Inside Blog: Windows Fall Creators Update 1709 and Docker Windows Containers

Who shrunk my Windows Docker image?

We started to package our ASP.NET/WCF/Full-.NET Framework based web app into Windows Containers, which we then publish to the Docker Hub.

Someday we discovered that one of our new build machines produced Windows Containers only half the size: instead of an 8GB Docker image we only got a 4GB Docker image. Yeah, right?

The problem with Windows Server 2016

I was able to run the 4GB Docker image on my development machine without any problems and I thought that this might be a great new feature (it is… but!). My boss then told me that he was unable to run it on our Windows Server 2016.

The issue: Windows 10 Fall Creators Update

After some googling we found the problem: our build machine was running Windows 10 with the most recent “Fall Creators Update” (v1709) (which was a bad idea from the beginning, because if you want to run Docker as a service you need a Windows Server!). The older build machine, which produced the much larger Docker image, was running the normal Creators Update from March(?).

Docker resolves the base images for Windows like this:

Compatibility issue

As it turns out: You can’t run the smaller Docker images on Windows Server 2016. Currently it is only possible to do it via the preview “Windows Server, version 1709” or on the Windows 10 Client OS.

Oh… and the new Windows Server is not a simple update to Windows Server 2016, instead it is a completely new version. Thanks Microsoft.


Because we need to run our images on Windows Server 2016, we just target the LTSC2016 base image, which produces 8GB Docker images (which sucks, but works for us).
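In practice that just means pinning the base image tag in the Dockerfile instead of relying on :latest (the exact base image depends on your app, e.g. one of the microsoft/aspnet variants):

```dockerfile
# Pin the Windows Server 2016 LTSC base image, so the resulting
# image also runs on Windows Server 2016 hosts.
FROM microsoft/windowsservercore:ltsc2016
```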

This post could also be in the RTFM category, because there are some notes on the Docker page about this, but they were quite easy to overlook ;)

MSDN Team Blog AT [MS]: Global Azure Bootcamp, April 21, 2018, Linz

On April 21, 2018, hundreds of workshops on cloud computing and Microsoft Azure will take place all over the world as part of the Global Azure Bootcamp. The past years were a great success, so in 2018 we will again take part in Austria, at the Wissensturm in Linz. In 2017 we clearly broke the 140-attendee mark for the first time. This year we want to top that record and hope that many of you will join again.

The event is 100% from the community for the community. Participation is free of charge; the sponsors cover all costs. A big thank you for that! Thousands of participants worldwide have confirmed over the past years that the Global Azure Bootcamp is a great opportunity to either get started with Azure or to deepen existing knowledge.

Global Azure Bootcamp 2018

More information about talks, speakers and the location, as well as the registration, can be found on our event page. Tickets are limited, so better register right away!

To the event page of the Global Azure Bootcamp Austria ...

Here, as an appetizer, an excerpt from the topics that will be covered in sessions at the GAB:

  • Machine Learning and Deep Learning
  • Web APIs with GraphQL
  • NoSQL databases with CosmosDB
  • Power BI
  • Microservices with Service Fabric
  • IoT
  • Serverless workflows with Logic Apps
  • and much more...

Martin Richter: Advanced Developers Conference C++ / ADC++ 2018 in Burghausen, May 15-16, 2018

The next Advanced Developers Conference C++ will take place on May 15 and 16, 2018.
Full-day workshops will be held on May 14.

This time in the home town of the ppedv team, Burghausen, under the proven direction of Hannes Preishuber.
With the home advantage of the ppedv team, one may be curious what kind of evening event is planned. At least for me, these events have an almost "legendary" ;) reputation and have always offered lots of fun and variety.

Once again interesting guests are invited. Especially the well-known book author Nicolai Josuttis stands out from the crowd of speakers for me. Some old and also some new faces can be found in the speaker list.

The main topic will be C++17, and C++20 is already going through standardization.
Besides the (always good) social event, where you can meet many colleagues, the talks will surely shed light on the next evolutionary steps of C++.

And given the now rapid update cadence of Visual Studio, we may also learn some interesting news from the Microsoft speaker Steve Caroll.

More information at http://www.adcpp.de/2018

Copyright © 2017 Martin Richter
This feed is intended for personal, non-commercial use only. Using this feed or the posts published here on other websites requires the express permission of the author.

Uli Armbruster: Examples of target agreements


This post is meant as a small aid for all those who struggle with finding good goals. Those who, like us, work towards employees deriving their individual goals from the company strategy themselves, with the manager only exercising a supporting role (e.g. providing resources), will first have to communicate and advise a lot.

While one side of successful target agreements is shifting the goal definition to the employee, the other is making the operational and strategic company goals visible. This naturally goes hand in hand with a continuous conversation about the shared company culture. For example, it becomes difficult to talk about monetary result goals if the key figures are not made transparently available and explained. Not every boss is thrilled when employees know the exact margins of orders.

  • The core of good goals are the S.M.A.R.T. criteria, which will not be discussed further here.
  • The goals must not impair each other. Instead, they should rather mutually reinforce each other.
  • Achieving a goal should always have a direct or indirect benefit for the company (keyword: return on investment).


In a nutshell: employees must know about the company goals and support them. A shared understanding of the modalities and a matching transparency of information are just as elementary as the individual's will to contribute.


Examples of individual goals

  • Community appearances
    • By 2018-12-31 the employee gives talks at the user groups in Berlin, Dresden, Leipzig, Chemnitz, Karlsruhe and Lucerne
  • Technical articles
    • By 2018-12-31 the employee publishes 3 technical articles in trade journals (e.g. in dotnetpro)
  • Public appearances at conferences
    • By 2018-12-31 the employee gives talks or trainings at at least 3 different commercial conferences (e.g. DWX, Karlsruher Entwicklertage, etc.)
  • Commercial workshops
    • By 2018-12-31 the employee generates total revenue of at least €20,000 through workshops
  • Billable hours
    • By 2018-12-31 the employee bills at least 800 hours of consulting.
  • Certification

MSDN Team Blog AT [MS]: Register for Microsoft Build - the countdown is on


Microsoft Build takes place this year from May 7 to 9 in Seattle. Registration opens tomorrow, February 15, at 18:00. Time is running out!!

Experience shows that the event sells out within a very short time. So if you still have to ask your boss first, you should do so quickly. ;)

To register, simply go to the Microsoft Build website: https://microsoft.com/build

To get in the mood, here is a little teaser video:

Jürgen Gutsch: Creating a chat application using React and ASP.​NET Core - Part 3

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set-up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about that topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors, by dropping a comment or by creating an Issue on GitHub. Thanks.

About SignalR

SignalR for ASP.NET Core is a framework to enable Websocket communication in ASP.NET Core applications. Modern browsers already support Websocket, which is part of the HTML5 standard. For older browsers, SignalR provides a fallback based on standard HTTP 1.1. SignalR is basically a server-side implementation based on ASP.NET Core and Kestrel. It uses the same dependency injection mechanism and can be added to the application via a NuGet package. Additionally, SignalR provides various client libraries to consume Websockets in client applications. In this chat application, I use @aspnet/signalr-client loaded via NPM. The package also contains the TypeScript definitions, which makes it easy to use in a TypeScript application like this one.

I added the React NuGet package in the first part of this blog series. To enable SignalR, I need to add it to the ServiceCollection:
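The registration snippet is missing here; with the SignalR alpha packages of that time it is presumably the usual call in ConfigureServices:

```csharp
// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSignalR(); // registers the SignalR services in the DI container
}
```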


The server part

In C#, I created a ChatService that will later be used to connect to the data storage. For now it uses a dictionary to store the messages. I don't show this service here, because the implementation is not relevant and will change later on. But I use this service in the code I show here. It is mainly used in the ChatController, the Web API controller that loads some initial data, and in the ChatHub, which is the Websocket endpoint for this chat. The service gets injected via dependency injection, which is configured in the Startup.cs:

services.AddSingleton<IChatService, ChatService>();


The ChatController is simple; it just contains GET methods. Do you remember the last posts? The initial data of the logged-on users and the first chat messages were defined in the React components. I moved this to the ChatController on the server side:

public class ChatController : Controller
{
    private readonly IChatService _chatService;

    public ChatController(IChatService chatService)
    {
        _chatService = chatService;
    }

    // GET: api/<controller>
    public IEnumerable<UserDetails> LoggedOnUsers()
    {
        return new[] {
            new UserDetails { Id = 1, Name = "Joe" },
            new UserDetails { Id = 3, Name = "Mary" },
            new UserDetails { Id = 2, Name = "Pete" },
            new UserDetails { Id = 4, Name = "Mo" } };
    }

    public IEnumerable<ChatMessage> InitialMessages()
    {
        return _chatService.GetAllInitially();
    }
}

The method LoggedOnUsers simply creates the users list. I will change that once authentication is done. The method InitialMessages loads the first 50 messages from the faked data storage.


The Websocket endpoints are defined in so-called Hubs. One Hub defines one single Websocket endpoint. I created a ChatHub, which is the endpoint for this application. The methods in the ChatHub are handler methods that handle incoming messages through a specific channel.

The ChatHub needs to be added to the SignalR middleware:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("chat");
});

In SignalR, the methods in the Hub are the channel definitions and the handlers at the same time, while in NodeJS' socket.io channels are defined separately and a handler is bound to each channel.
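On the client, this means `connection.on(...)` essentially registers a handler for such a channel. A library-free sketch of that idea (the ChannelDispatcher here is a hypothetical stand-in, not the SignalR client):

```typescript
// A hypothetical, library-free stand-in for the client connection: a tiny
// dispatcher that maps channel names to handlers, which is conceptually what
// connection.on(...) registers and what an incoming frame triggers.
type Handler = (...args: any[]) => void;

class ChannelDispatcher {
    private handlers: { [channel: string]: Handler } = {};

    // register a handler for a channel (like connection.on)
    on(channel: string, handler: Handler): void {
        this.handlers[channel] = handler;
    }

    // simulate an incoming message on a channel
    dispatch(channel: string, ...args: any[]): void {
        const handler = this.handlers[channel];
        if (handler) {
            handler(...args);
        }
    }
}

const conn = new ChannelDispatcher();
let received = '';
conn.on('MessageAdded', (msg: string) => { received = msg; });
conn.dispatch('MessageAdded', 'Hello!');
console.log(received); // → "Hello!"
```

The real SignalR client additionally handles the transport, serialization and reconnects, but the channel-to-handler mapping is the core of the programming model.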

The currently used data are still fake data and authentication is not yet implemented. This is why the user's name is still hard coded:

using Microsoft.AspNetCore.SignalR;
using ReactChatDemo.Services;

namespace ReactChatDemo.Hubs
{
    public class ChatHub : Hub
    {
        private readonly IChatService _chatService;

        public ChatHub(IChatService chatService)
        {
            _chatService = chatService;
        }

        public void AddMessage(string message)
        {
            var chatMessage = _chatService.CreateNewMessage("Juergen", message);
            // Call the MessageAdded method to update clients.
            Clients.All.InvokeAsync("MessageAdded", chatMessage);
        }
    }
}

This Hub only contains a method AddMessage that gets the actual message as a string. Later on we will replace the hard-coded user name with the name of the logged-on user. Then a new message gets created and added to the data store via the ChatService. The new message is an object that contains a unique id, the name of the authenticated user, a creation date and the actual message text.

Then the message gets sent to the clients through the Websocket channel "MessageAdded".

The client part

On the client side, I want to use the socket in two different components, but I want to avoid creating two different Websocket clients. The idea is to create a WebsocketService class that is used in both components. Usually I would create two instances of this WebsocketService, but this would create two different clients too. So I need to think about dependency injection in React and a singleton instance of that service.

SignalR Client

While googling for dependency injection in React, I read a lot about the fact that DI is not needed in React. I was kind of confused. DI is everywhere in Angular, but it is not necessarily needed in React? There are packages you can load to support DI, but I tried to find another way. And actually there is one: in ES6 and in TypeScript it is possible to immediately create an instance of an object and to import this instance everywhere you need it.

import { HubConnection, TransportType, ConsoleLogger, LogLevel } from '@aspnet/signalr-client';

import { ChatMessage } from './Models/ChatMessage';

class ChatWebsocketService {
    private _connection: HubConnection;

    constructor() {
        var transport = TransportType.WebSockets;
        let logger = new ConsoleLogger(LogLevel.Information);

        // create the connection
        this._connection = new HubConnection(`http://${document.location.host}/chat`,
            { transport: transport, logging: logger });
        // start the connection
        this._connection.start().catch(err => console.error(err, 'red'));
    }

    // more methods here ...
}

const WebsocketService = new ChatWebsocketService();

export default WebsocketService;

Inside this class the Websocket (HubConnection) client gets created and configured. The transport type needs to be WebSockets. Also, a ConsoleLogger gets added to the client to send log information to the browser's console. In the last line of the constructor, I start the connection and add an error handler that writes to the console. The instance of the connection is stored in a private variable inside the class. Right after the class, I create an instance and export it. This way the instance can be imported in any class:

import WebsocketService from './WebsocketService'
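This works because a module's top-level code runs only once per application, so every importer receives the same exported instance. A standalone sketch of the pattern (CounterService is a hypothetical example, not part of the chat code):

```typescript
// CounterService is a hypothetical example - the pattern is the point:
// the instance is created once, at module load time, and exported.
class CounterService {
    private count = 0;

    increment(): number {
        return ++this.count;
    }
}

// Top-level module code runs exactly once, no matter how many files
// import this module - so every importer shares this single instance.
const counterService = new CounterService();

export default counterService;
```

Every `import counterService from './CounterService'` then yields the very same object with the same shared state, which is exactly what the WebsocketService relies on.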

To keep the Chat component and the Users component clean, I created additional service classes for each of the components. These service classes encapsulate the calls to the Web API endpoints and the usage of the WebsocketService. Please have a look into the GitHub repository to see the complete services.

The WebsocketService contains three methods. One is to handle incoming messages when a user logs on to the chat:

registerUserLoggedOn(userLoggedOn: (id: number, name: string) => void) {
    // get the new user from the server
    this._connection.on('UserLoggedOn', (id: number, name: string) => {
        userLoggedOn(id, name);
    });
}

This is not yet used. I need to add the authentication first.

The other two methods are to send a chat message to the server and to handle incoming chat messages:

registerMessageAdded(messageAdded: (msg: ChatMessage) => void) {
    // get the new chat message from the server
    this._connection.on('MessageAdded', (message: ChatMessage) => {
        messageAdded(message);
    });
}

sendMessage(message: string) {
    // send the chat message to the server
    this._connection.invoke('AddMessage', message);
}

In the Chat component I pass a handler method to the ChatService and the service passes the handler on to the WebsocketService. The handler then gets called every time a message comes in:

let that = this;
this._chatService = new ChatService((msg: ChatMessage) => {
    this.handleOnSocket(that, msg);
});

In this case the passed-in handler is only an anonymous method, a lambda expression, that calls the actual handler method defined in the component. I need to pass a local variable with the current instance of the chat component to the handleOnSocket method, because this is not available when the handler is called. It is called outside of the context where it is defined.

The handler then loads the existing messages from the component's state, adds the new message and updates the state:

handleOnSocket(that: Chat, message: ChatMessage) {
    let messages = that.state.messages;
    messages.push(message);
    that.setState({
        messages: messages,
        currentMessage: ''
    });
}

At the end, I need to scroll to the latest message and focus the text field again.

Web API client

The UsersService.ts and the ChatService.ts, both contain a method to fetch the data from the Web API. As preconfigured in the ASP.NET Core React project, I am using isomorphic-fetch to call the Web API:

public fetchInitialMessages(fetchInitialMessagesCallback: (msg: ChatMessage[]) => void) {
    // the URL points to the InitialMessages action of the ChatController
    fetch('api/Chat/InitialMessages')
        .then(response => response.json() as Promise<ChatMessage[]>)
        .then(data => {
            fetchInitialMessagesCallback(data);
        });
}

The method fetchLogedOnUsers in the UsersService looks almost the same. The method gets a callback from the Chat component that gets the ChatMessages passed in; inside the Chat component it is called with such a handler method.
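The actual call isn't shown here, but the callback pattern the services follow can be sketched in isolation (all names in this sketch are hypothetical):

```typescript
// The component hands a callback to the service; the service calls it back
// once the data has arrived. All names here are hypothetical examples.
type MessagesCallback = (messages: string[]) => void;

class MessageStore {
    fetchMessages(callback: MessagesCallback): void {
        // stand-in for the fetch(...) call against the Web API
        const data = ['hello', 'world'];
        callback(data);
    }
}

let received: string[] = [];
new MessageStore().fetchMessages(msgs => { received = msgs; });
console.log(received.length); // → 2
```

The real services do the same, except that the callback fires asynchronously after the HTTP response arrives.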


The handler then updates the state with the new list of ChatMessages and scrolls the chat area down to the latest message:

handleOnInitialMessagesFetched(messages: ChatMessage[]) {
    this.setState({
        messages: messages
    });
    this.panel.scrollTop = this.panel.scrollHeight - this.panel.clientHeight;
}


Let's try it

Now it is time to try it out. F5 starts the application and opens the configured browser:

This is almost the same view as in the last post about the UI. To be sure the Websocket connection is working, I had a look into the network tab in the browser developer tools:

Here it is. Here you can see the message history of the web socket endpoint. The second line displays the message sent to the server and the third line is the answer from the server containing the ChatMessage object.

Closing words

This post was less easy than the posts before. Not the technical part, but I refactored the client part a little bit to keep the React components as simple as possible. For the functional components, I used regular TypeScript files and not TSX files. This worked great.

I'm still impressed by React.

In the next post I'm going to add authorization, to get the logged-on user and to restrict the chat to logged-on users only. I'll also add a permanent storage for the chat messages.

Holger Schwichtenberg: Enabling TCP connections for SQL Server via PowerShell

On one machine, the "SQL Server Configuration Manager" would no longer start. It was needed to enable the TCP protocol as a "Client Protocol" for accessing Microsoft SQL Server.

Jürgen Gutsch: Creating a chat application using React and ASP.​NET Core - Part 2

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

Basic Layout

First let's have a quick look into the hierarchy of the React components in the folder ClientApp.

The app gets bootstrapped within the boot.tsx file. This is the first sort of component, where the AppContainer gets created and the router is placed. This file also contains the call to render the React app in the relevant HTML element, which is a div with the ID react-app in this case. It is a div in the Views/Home/Index.cshtml.

This component also renders the content of the routes.tsx. This file contains the route definitions wrapped inside a Layout element. This Layout element is defined in the layout.tsx inside the components folder. The routes.tsx also references three more components out of the components folder: Home, Counter and FetchData. So the router renders the specific components, depending on the requested path, inside the Layout element:

// routes.tsx
import * as React from 'react';
import { Route } from 'react-router-dom';
import { Layout } from './components/Layout';
import { Home } from './components/Home';
import { FetchData } from './components/FetchData';
import { Counter } from './components/Counter';

export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

As expected, the Layout component then defines the basic layout and renders the contents into a Bootstrap grid column element. I changed that a little bit to render the contents directly into the fluid container and to place the menu outside the fluid container. This component now contains less code than before:

import * as React from 'react';
import { NavMenu } from './NavMenu';

export interface LayoutProps {
    children?: React.ReactNode;
}

export class Layout extends React.Component<LayoutProps, {}> {
    public render() {
        return <div>
            <NavMenu />
            <div className='container-fluid'>
                {this.props.children}
            </div>
        </div>;
    }
}

I also changed the NavMenu component to place the menu on top of the page using the typical Bootstrap styles. (Visit the repository for more details.)

My chat goes into the Home component, because this is the most important feature of my app ;-) This is why I removed all the contents of the Home component and placed the layout for the actual chat there.

import * as React from 'react';
import { RouteComponentProps } from 'react-router';

import { Chat } from './home/Chat';
import { Users } from './home/Users';

export class Home extends React.Component<RouteComponentProps<{}>, {}> {
    public render() {
        return <div className='row'>
            <div className='col-sm-3'>
                <Users />
            </div>
            <div className='col-sm-9'>
                <Chat />
            </div>
        </div>;
    }
}

This component uses two new components: Users to display the online users and Chat to add the main chat functionality. It seems to be a common way in React to store sub-components inside a subfolder with the same name as the parent component. So I created a Home folder inside the components folder and placed the Users component and the Chat component inside of that new folder.

The Users Component

Let's have a look into the simpler Users component first. This component doesn't have any interaction yet. It only fetches and displays the users online. To keep the first snippet simple, I removed the methods inside. This file imports everything from the module 'react' as the React object. Using this, we are able to access the Component type we need to derive from:

// components/Home/Users.tsx
import * as React from 'react';

interface UsersState {
    users: User[];
}

interface User {
    id: number;
    name: string;
}

export class Users extends React.Component<{}, UsersState> {
    // methods removed for brevity ...
}

This base class also defines a state property. The type of that state is defined in the second generic argument of the React.Component base class. (The first generic argument is not needed here.) The state is a kind of a container type that contains data you want to store inside the component. In this case I just need a UsersState with a list of users inside. To display a user in the list, we only need an identifier and a name. A unique key or id is required by React to create a list of items in the DOM.

I don't fetch the data from the server side yet. This post is only about the UI components, so I'm going to mock the data in the constructor:

constructor() {
    super();
    this.state = {
        users: [
            { id: 1, name: 'juergen' },
            { id: 3, name: 'marion' },
            { id: 2, name: 'peter' },
            { id: 4, name: 'mo' }]
    };
}

Now the list of users is available in the current state and I'm able to use this list to render the users:

public render() {
    return <div className='panel panel-default'>
        <div className='panel-body'>
            <h3>Users online:</h3>
            <ul className='chat-users'>
                {this.state.users.map(user =>
                    <li key={user.id}>{user.name}</li>
                )}
            </ul>
        </div>
    </div>;
}

JSX is a weird thing: HTML-like XML syntax, completely mixed with JavaScript (or TypeScript in this case), but it works. It reminds me a little bit of Razor. this.state.users.map iterates through the users and renders a list item per user.
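The transformation the map performs can be shown without JSX; in this sketch plain strings stand in for the React elements the real code produces:

```typescript
// The same data as in the Users component; plain strings stand in for the
// React elements that the JSX version produces.
const users = [
    { id: 1, name: 'juergen' },
    { id: 3, name: 'marion' },
];

// one rendered item per entry - in JSX each would carry its key attribute
const items = users.map(user => `<li key="${user.id}">${user.name}</li>`);

console.log(items[0]); // → '<li key="1">juergen</li>'
```

In real JSX the expression inside the braces evaluates to an array of elements, and React inserts them into the surrounding `<ul>`.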

The Chat Component

The Chat component is similar, but contains more details and some logic to interact with the user. Initially we have almost the same structure:

// components/Home/chat.tsx
import * as React from 'react';
import * as moment from 'moment';

interface ChatState {
    messages: ChatMessage[];
    currentMessage: string;
}

interface ChatMessage {
    id: number;
    date: Date;
    message: string;
    sender: string;
}

export class Chat extends React.Component<{}, ChatState> {
    // methods removed for brevity ...
}

I also imported the module moment, which is moment.js, installed using NPM:

npm install moment --save

moment.js is a pretty useful library to easily work with dates and times in JavaScript. It has a ton of features, like formatting dates, displaying times and creating relative time expressions, and it also provides proper localization of dates.
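For example, the 'HH:mm:ss' formatting used in the render method can be sketched with the plain Date API; moment does this (plus parsing, relative times and localization) for you:

```typescript
// A sketch of what moment(date).format('HH:mm:ss') produces, written with the
// plain Date API - moment adds parsing, relative times and localization on top.
function formatTime(date: Date): string {
    const pad = (n: number) => (n < 10 ? '0' : '') + n;
    return `${pad(date.getHours())}:${pad(date.getMinutes())}:${pad(date.getSeconds())}`;
}

console.log(formatTime(new Date(2018, 0, 15, 9, 5, 3))); // → "09:05:03"
```

As soon as you need more than one format, or localized output, pulling in moment is the easier choice.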

Now it makes sense to have a look into the render method first:

// components/Home/chat.tsx
public render() {
    return <div className='panel panel-default'>
        <div className='panel-body panel-chat'
                ref={this.handlePanelRef}>
            <ul>
                {this.state.messages.map(message =>
                    <li key={message.id}><strong>{message.sender} </strong>
                        ({moment(message.date).format('HH:mm:ss')})<br />
                        {message.message}</li>
                )}
            </ul>
        </div>
        <div className='panel-footer'>
            <form className='form-inline' onSubmit={this.onSubmit}>
                <label className='sr-only' htmlFor='msg'>Message</label>
                <div className='input-group col-md-12'>
                    <button className='chat-button input-group-addon'>:-)</button>
                    <input type='text' value={this.state.currentMessage}
                        onChange={this.handleMessageChange}
                        placeholder='Your message'
                        ref={this.handleMessageRef} />
                    <button className='chat-button input-group-addon'>Send</button>
                </div>
            </form>
        </div>
    </div>;
}

I defined a Bootstrap panel that has the chat area in the panel-body and the input fields in the panel-footer. In the chat area we also have an unordered list and the code to iterate through the messages. This is almost similar to the user list; we only display some more data here. You can see the usage of moment.js to easily format the message date.

The panel-footer contains the form to compose the message. I used an input group to add a button in front of the input field and another one after that field. The first button is used to select an emoji. The second one is to send the message (for people who cannot use the enter key to submit it).

The ref attributes are used for a cool feature. Using them, you are able to get an instance of the element in the backing code. This is nice to work with instances of elements directly. We will see the usage later on. The ref attributes point to methods that get an instance of that element passed in:

msg: HTMLInputElement;
panel: HTMLDivElement;

// ...

handlePanelRef(div: HTMLDivElement) {
    this.panel = div;
}

handleMessageRef(input: HTMLInputElement) {
    this.msg = input;
}

I save the instances globally in the class. One thing I didn't expect is a weird behavior of this. This is typical JavaScript behavior, but I expected it to be solved in TypeScript. I also didn't see this in Angular. The keyword this is not set; it is undefined. If you want to access this in methods called by the DOM, you need to 'inject' or 'bind' an instance of the current object to get this set. This is typical for JavaScript and makes absolute sense. It needs to be done in the constructor:

constructor() {
    super();
    this.state = { messages: [], currentMessage: '' };

    this.handlePanelRef = this.handlePanelRef.bind(this);
    this.handleMessageRef = this.handleMessageRef.bind(this);
    // ...
}

This is the current constructor, including the initialization of the state. As you can see, we bind the current instance to those methods. We need to do this for all methods that need to use the current instance.
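To see why the binding is necessary, here is the behavior in isolation (Widget is a hypothetical class, not from the chat app): a method handed around as a plain function loses its receiver, while a bound function keeps it.

```typescript
// Widget is a hypothetical example class, not part of the chat app.
class Widget {
    label = 'chat';

    describe() {
        return this.label;
    }
}

const w = new Widget();

// Handing the method around as a plain function loses the receiver:
// calling detached() throws, because `this` is undefined inside describe().
const detached = w.describe;

// Binding restores the receiver - this is what the constructor calls do.
const bound = w.describe.bind(w);

console.log(bound()); // → "chat"
```

React calls the ref and event handlers exactly like `detached` above, which is why the constructor binds them up front.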

To get the message text from the text field, we need to bind an onChange method. This method collects the value from the event target:

handleMessageChange(event: any) {
    this.setState({ currentMessage: event.target.value });
}

Don't forget to bind the current instance in the constructor:

this.handleMessageChange = this.handleMessageChange.bind(this);

With this code we get the current message into the state to use it later on. The current state is also bound to the value of that text field, just to clear this field after submitting that form.

The next important event is onSubmit in the form. This event gets triggered by pressing the send button or by pressing enter inside the text field:

onSubmit(event: any) {
    event.preventDefault();
    this.addMessage();
}

This method stops the default behavior of HTML forms to avoid a reload of the entire page, and calls the method addMessage, which creates and adds the message to the current state's messages list:

addMessage() {
    let currentMessage = this.state.currentMessage;
    if (currentMessage.length === 0) {
        return;
    }
    let id = this.state.messages.length;
    let date = new Date();

    let messages = this.state.messages;
    messages.push({
        id: id,
        date: date,
        message: currentMessage,
        sender: 'juergen'
    });
    this.setState({
        messages: messages,
        currentMessage: ''
    });
    this.panel.scrollTop = this.panel.scrollHeight - this.panel.clientHeight;
    this.msg.focus();
}

Currently the id and the sender of the message are faked. Later on, in the next posts, we'll send the message to the server using Websockets and we'll get a message including a valid id back. We'll also have an authenticated user later on. As mentioned, the current post is just about getting the UI running.

We get the currentMessage and the messages list out of the current state. Then we add the new message to the list and assign a new state with the updated list and an empty currentMessage. Setting the state triggers an event to update the UI; if I just update the fields inside the state, the UI doesn't get notified. It is also possible to only update a single property of the state.
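The merge semantics of such partial updates can be sketched with plain objects; here Object.assign stands in for what React does with the fields passed to setState (outside React, just to show the idea):

```typescript
// A component-like state shape, outside React (hypothetical sketch).
interface ChatState {
    messages: string[];
    currentMessage: string;
}

let state: ChatState = { messages: ['hi'], currentMessage: 'typ' };

// Updating a single property: fields that are not mentioned survive,
// similar to how React merges partial setState updates into the state.
state = Object.assign({}, state, { currentMessage: '' });

console.log(state.messages.length); // → 1
```

Producing a fresh object instead of mutating the old one is also what lets React detect that something changed.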

If the state is updated, I need to focus the text field and scroll the panel down to the latest message. This is the only reason why I need the instances of the elements and why I used the ref methods.

That's it :-)

After pressing F5, I see the working chat UI in the browser.

Closing words

By closing this post, the basic UI is working. This was easier than expected; I only got stuck a little bit when accessing the HTML elements to focus the text field and scroll the chat area, and when I tried to access the current instance using this. React is heavily used and the React community is huge, which is why it is easy to get help pretty fast.

In the next post, I'm going to integrate SignalR to get the Websockets running. I'll also add two Web APIs to fetch the initial data: the currently logged-on users and the latest 50 chat messages don't need to be pushed over the Websocket. For this I need to get into the first functional components in React and inject them into the UI components of this post.

Holger Schwichtenberg: Upcoming Talks

The Dotnet-Doktor will give several public talks over the coming three months. Here is an overview of the dates.

Golo Roden: Introduction to React, Part 6: Reusing Code

React components can be reused, where a distinction is made between presentational and functional components. There are several approaches to this, including container components and higher-order components. How do they work?

Jürgen Gutsch: Creating a chat application using React and ASP.​NET Core - Part 1

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.


I want to create a small chat application that uses React, SignalR and ASP.NET Core 2.0. The frontend should be created using React. The backend serves a Websocket end-point using SignalR and some basic Web API end-points to fetch some initial data, some lookup data and to do the authentication (I'll use IdentityServer4 to do the authentication). The project setup should be based on the Visual Studio React Project I introduced in one of the last posts.

The UI should be clean and easy to use. It should be possible to use the chat without a mouse, so the focus is also on usability and basic accessibility. We will have a large chat area to display the messages, with an input field for the messages below. The return key should be the primary method to send a message. There's one additional button to select emojis using the mouse, but basic emojis should also be available as text symbols.

On the left side, I'll create a list of online users. Every new logged on user should be mentioned in the chat area. The user list should be auto updated after a user logs on. We will use SignalR here too.

  • User list using SignalR
    • small area on the left hand side
    • Initially fetching the logged on users using Web API
  • Chat area using SignalR
    • large area on the right hand side
    • Initially fetching the last 50 messages using Web API
  • Message field below the chat area
    • Enter key should send the message
    • Emojis using text symbols
  • Storing the chat history in a database (using Azure Table Storage)
  • Authentication using IdentityServer4
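The "emojis using text symbols" item could be covered by a simple replacement map on the client. This is a hypothetical sketch, not code from the repository:

```typescript
// Hypothetical mapping of text symbols to emojis - not taken from the repository.
const emojiMap: { [symbol: string]: string } = {
    ':-)': '🙂',
    ':-(': '🙁',
};

// replace every occurrence of each text symbol in a message
function replaceTextEmojis(message: string): string {
    return Object.keys(emojiMap).reduce(
        (msg, symbol) => msg.split(symbol).join(emojiMap[symbol]),
        message);
}

console.log(replaceTextEmojis('Hi :-)')); // → "Hi 🙂"
```

Such a function would run on each message before it is rendered into the chat area.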

Project setup

The initial project setup is easy and already described in one of the last posts. I'll just do a quick introduction here.

You can either use Visual Studio 2017 to create a new project

or the .NET CLI

dotnet new react -n react-chat-app

It takes some time to fetch the dependent packages, especially the numerous NPM packages. The node_modules folder contains around 10k files and requires 85 MB on disk.

I also added the "@aspnet/signalr-client": "1.0.0-alpha2-final" to the package.json.

Don't be confused by the documentation. In the GitHub repository they wrote that the NPM name signalr-client should no longer be used and that the new name is just signalr. But when I wrote these lines, the package with the new name was not yet available on NPM. So I'm still using the signalr-client package.

After adding that package, an optional dependency wasn't found and the NPM dependency node in Visual Studio displays a yellow exclamation mark. This is annoying and seems to be a critical error, but it works anyway:

The NuGet packages are fine. To use SignalR I used the Microsoft.AspNetCore.SignalR package with the version 1.0.0-alpha2-final.

The project compiles without errors. And after pressing F5, the app starts as expected.

A while ago I configured Edge as the start-up browser to run ASP.NET Core projects, because Chrome got very slow. Once IISExpress or Kestrel is running, you can easily use Chrome or any other browser to call and debug the web app. Which makes sense, since the React developer tools are not yet available for Edge and IE.

This is all there is to set up and configure. All the preconfigured TypeScript and Webpack stuff is fine and runs as expected. If there's no critical issue, you don't really need to know about it; it just works. I would anyway recommend learning about the TypeScript configuration and Webpack, to be safe.

Closing words

Now the requirements are clear and the project is set up. In this series I will not set up an automated build using CAKE. I'll also not write about unit tests. The focus is React, SignalR and ASP.NET Core only.

In the next chapter I'm going to build the UI React components and implement the basic client logic to get the UI working.

Jürgen Gutsch: Another GraphQL library for ASP.​NET Core

I recently read an interesting tweet by Glenn Block about a GraphQL app running on the Linux Subsystem for Windows:

It is impressive to run a .NET Core app in Linux on Windows, which is not a virtual machine on Windows. I never had the chance to try that; I've just played a little bit with the Linux Subsystem for Windows. The second thing that came to mind was: "Wow, did he use my GraphQL middleware library or something else?"

He uses different libraries, as you can see in his repository on GitHub: https://github.com/glennblock/orders-graphql

  • GraphQL.Server.Transports.AspNetCore
  • GraphQL.Server.Transports.WebSockets

These libraries are built by the makers of graphql-dotnet. The project is hosted in the graphql-dotnet organization on GitHub: https://github.com/graphql-dotnet/server. They also provide a middleware that can be used in ASP.NET Core projects. The cool thing about that project is a WebSocket endpoint for GraphQL.

What about the GraphQL middleware I wrote?

Because my GraphQL middleware is also based on graphql-dotnet, I'm not sure whether to continue maintaining it or to retire the project. I'm not yet sure what to do, but I'll try the other implementation to find out more.

I'm pretty sure the contributors of the graphql-dotnet project know a lot more about GraphQL and their library than I do. Both projects will work the same way and will return the same result - hopefully. The only difference is the API and the configuration. The only reason to continue working on the project is to learn more about GraphQL or to maybe provide a better API ;-)

If I retire my project, I would try to contribute to the graphql-dotnet projects.

What do you think? Drop me a comment and tell me.

Code-Inside Blog: WCF Global Fault Contracts

If you are still using WCF, you might have stumbled upon this problem: WCF allows you to throw certain faults in your operations, but unfortunately it is a bit awkward to configure if you want "global fault contracts". With the solution here it should be pretty easy to get "global faults":

Define the Fault on the Server Side:

Let’s say we want to throw the following fault in all our operations:

public class FoobarFault
{
}


Register the Fault

The tricky part in WCF is to "configure" WCF so that it will populate the fault. You can do this manually via the [FaultContract] attribute on each operation, but if you are looking for a global WCF fault configuration, you need to apply it as a contract behavior like this:

[AttributeUsage(AttributeTargets.Interface, AllowMultiple = false, Inherited = true)]
public class GlobalFaultsAttribute : Attribute, IContractBehavior
{
    // this is a list of our global fault detail classes.
    static Type[] Faults = new Type[]
    {
        typeof(FoobarFault),
    };

    public void AddBindingParameters(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyClientBehavior(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint,
        ClientRuntime clientRuntime)
    {
    }

    public void ApplyDispatchBehavior(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint,
        DispatchRuntime dispatchRuntime)
    {
    }

    public void Validate(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint)
    {
        foreach (OperationDescription op in contractDescription.Operations)
        {
            foreach (Type fault in Faults)
            {
                op.Faults.Add(MakeFault(fault));
            }
        }
    }

    private FaultDescription MakeFault(Type detailType)
    {
        string action = detailType.Name;
        DescriptionAttribute description = (DescriptionAttribute)
            Attribute.GetCustomAttribute(detailType, typeof(DescriptionAttribute));
        if (description != null)
            action = description.Description;
        FaultDescription fd = new FaultDescription(action);
        fd.DetailType = detailType;
        fd.Name = detailType.Name;
        return fd;
    }
}

Now we can apply this ContractBehavior in the Service just like this:

[ServiceBehavior(...), GlobalFaults]
public class FoobarService
{
    // ...
}

To use our Fault, just throw it as a FaultException:

throw new FaultException<FoobarFault>(new FoobarFault(), "Foobar happened!");
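Since MakeFault above falls back to a DescriptionAttribute for the fault action, the action name can be customized on the detail class itself. A minimal sketch (the attribute text is just an example, not from the original post):

```csharp
using System.ComponentModel;

// The Description text becomes the SOAP fault action
// instead of the type name "FoobarFault".
[Description("http://my.namespace/faults/foobar")]
public class FoobarFault
{
}
```

Without the attribute, MakeFault simply uses the class name as the action, which is what the client-side check below relies on.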

Client Side

On the client side you should now be able to catch this exception just like this:

	try
	{
		// call the service operation here
	}
	catch (Exception ex)
	{
		if (ex is FaultException faultException)
		{
			if (faultException.Action == nameof(FoobarFault))
			{
				// handle the FoobarFault
			}
		}
	}

Hope this helps!

(This old topic was still on my “to blog” list. Even though WCF is quite old, maybe someone is looking for something like this.)

Jürgen Gutsch: The ASP.​NET Core React Project

In the last post I had a first look into a plain, clean and lightweight React setup. I'm still impressed by how easy the setup is and how fast a React app really loads. Before trying to push this setup into an ASP.NET Core application, it makes sense to have a look into the ASP.NET Core React project.

Create the React project

You can either use the "File New Project ..." dialog in Visual Studio 2017 or the .NET CLI to create a new ASP.NET Core React project:

dotnet new react -n MyPrettyAwesomeReactApp

This creates a ready-to-go React project.

The first impression

At first glance I saw the webpack.config.js, which is cool. I really love Webpack: how it works, how it bundles the relevant files recursively and how it saves a lot of time. A tsconfig.json is also available in the project, which means the React code will be written in TypeScript. Webpack compiles the TypeScript into JavaScript and bundles it into an output file called main.js.

Remember: In the last post the JavaScript code was written in ES6 and transpiled using Babel.

The TypeScript files are in the ClientApp folder and the transpiled and bundled Webpack output gets moved to the wwwroot/dist/ folder. This is nice. The build in VS2017 runs Webpack; this is hidden in MSBuild tasks inside the project file. To see more, you need to have a look into the project file by right-clicking the project and selecting Edit projectname.csproj.

You'll then find an ItemGroup with the removed ClientApp folder:

  <ItemGroup>
    <!-- Files not to publish (note that the 'dist' subfolders are re-added below) -->
    <Content Remove="ClientApp\**" />
  </ItemGroup>

And there are two Targets which define the Debug and Publish builds:

<Target Name="DebugRunWebpack" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('wwwroot\dist') ">
  <!-- Ensure Node.js is installed -->
  <Exec Command="node --version" ContinueOnError="true">
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode" />
  </Exec>
  <Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." />

  <!-- In development, the dist files won't exist on the first run or when cloning to
        a different machine, so rebuild them if not already present. -->
  <Message Importance="high" Text="Performing first-run Webpack build..." />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js" />
  <Exec Command="node node_modules/webpack/bin/webpack.js" />
</Target>

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec Command="npm install" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js --env.prod" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --env.prod" />

  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="wwwroot\dist\**; ClientApp\dist\**" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>

As you can see, it runs Webpack twice: once for vendor dependencies like Bootstrap, jQuery, etc., and once for the React app in the ClientApp folder.

Take a look at the ClientApp

The first thing you'll see when you look into the ClientApp folder: there are *.tsx files instead of *.ts files. These are TypeScript files which support JSX, the weird XML/HTML syntax inside JavaScript code. VS 2017 already knows about the JSX syntax and doesn't show any errors. That's awesome.

This client app is bootstrapped in the boot.tsx (we had the index.js in the other blog post). The app supports routing via the react-router-dom component. The boot.tsx defines an AppContainer that primarily hosts the route definitions, which are stored in the routes.tsx. The routes then call the different components depending on the path in the browser's address bar. This routing concept is a little more intuitive to use than the Angular one: the routing is defined in the component that hosts the routed contents. In this case the Layout component contains the dynamic contents:

// routes.tsx
export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

Inside the Layout.tsx you see that the routed components are rendered in a specific div tag that renders the children defined in the routes.tsx:

// Layout.tsx
export class Layout extends React.Component<LayoutProps, {}> {
  public render() {
    return <div className='container-fluid'>
      <div className='row'>
        <div className='col-sm-3'>
          <NavMenu />
        </div>
        <div className='col-sm-9'>
          { this.props.children }
        </div>
      </div>
    </div>;
  }
}

Using this approach, it should be possible to add sub routes for specific small areas of the app. Some kind of "nested routes".

There's also an example available on how to fetch data from a Web API. This sample uses isomorphic-fetch to fetch the data from the Web API:

constructor() {
  super();
  this.state = { forecasts: [], loading: true };

  fetch('api/SampleData/WeatherForecasts')
    .then(response => response.json() as Promise<WeatherForecast[]>)
    .then(data => {
      this.setState({ forecasts: data, loading: false });
    });
}

Since React doesn't provide a library to load data via HTTP request, you are free to use any library you want. Some other libraries used with React are axios, fetch or Superagent.
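Stripped of React specifics, the template's data loading boils down to a simple promise chain that produces the next component state. As a hedged sketch in plain JavaScript (loadForecasts and fakeFetch are made-up names for illustration, not part of the template):

```javascript
// Returns a promise for the next component state, mirroring the
// { forecasts, loading } shape used by the template's FetchData component.
function loadForecasts(fetchJson) {
  return fetchJson().then(data => ({ forecasts: data, loading: false }));
}

// Usage with a stubbed fetch standing in for the real Web API call:
const fakeFetch = () => Promise.resolve([{ temperatureC: 21, summary: 'Mild' }]);
loadForecasts(fakeFetch).then(state =>
  console.log(state.loading, state.forecasts.length));
```

Because the data access is just such a function of a promise-returning dependency, swapping fetch for axios or Superagent only changes the injected function.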

A short look into the ASP.NET Core parts

The Startup.cs is a little special. Not much, actually, but you'll find some differences in the Configure method. There is the WebpackDevMiddleware, which helps while debugging: it calls Webpack on every change in the used TypeScript files and reloads the scripts in the browser while debugging. Using this middleware, you don't need to recompile the whole application or restart debugging:

if (env.IsDevelopment())
{
    app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
    {
        HotModuleReplacement = true,
        ReactHotModuleReplacement = true
    });
}

And the route configuration contains a fallback route that gets used if the requested path doesn't match any MVC route:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");

    routes.MapSpaFallbackRoute(
        name: "spa-fallback",
        defaults: new { controller = "Home", action = "Index" });
});

The Integration in the views is interesting as well. In the _Layout.cshtml:

  • There is a base href set to the current base URL.
  • The vendor.css and a site.css is referenced in the head of the document.
  • The vendor.js is referenced at the bottom.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>@ViewData["Title"] - ReactWebApp</title>
    <base href="~/" />

    <link rel="stylesheet" href="~/dist/vendor.css" asp-append-version="true" />
    <environment exclude="Development">
        <link rel="stylesheet" href="~/dist/site.css" asp-append-version="true" />
    </environment>
</head>
<body>
    @RenderBody()

    <script src="~/dist/vendor.js" asp-append-version="true"></script>
    @RenderSection("scripts", required: false)
</body>
</html>

The actual React app isn't referenced here, but in the Index.cshtml:

@{
    ViewData["Title"] = "Home Page";
}

<div id="react-app">Loading...</div>

@section scripts {
    <script src="~/dist/main.js" asp-append-version="true"></script>
}

This makes absolute sense. Done like this, you are able to create one React app per view. Routing probably doesn't work this way, because there is only one SpaFallbackRoute, but if you just want to make single views more dynamic, it makes sense to create multiple views, each hosting a specific React app.

This is exactly what I expect when using React. E.g., I have many old ASP.NET applications, and I want to get rid of the old client script and modernize those applications step by step. In many cases a rewrite costs too much, and it would be easier to replace the old code with clean React apps.

The other changes in that project are not really related to React in general. They are just implementation details of this React demo application:

  • There is a simple API controller to serve the weather forecasts
  • The HomeController only contains the Index and the Error actions

Some concluding words

I didn't really expect such a clearly and transparently configured project template. If I tried to put the setup of the last post into an ASP.NET Core project, I would do it almost the same way: using Webpack to transpile and bundle the files and saving them somewhere in the wwwroot folder.

From my perspective, I would use this project template as a starter for small to medium-sized projects (whatever that means). For medium to bigger-sized projects, I would, again, propose dividing the client app and the server part into two different projects, to host and develop them independently. Hosting independently also means scaling independently. Developing independently means both scaling the teams independently and focusing only on the technology and tools used for that part of the application.

To learn more about React and how it works with ASP.NET Core in Visual Studio 2017, I will create a chat app. I will also write a small series about it:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set-up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo

Jürgen Gutsch: Trying React the first time

For the last two years I worked a lot with Angular. I learned a lot and also wrote some blog posts about it. While I worked with Angular, I always had React in mind and wanted to learn about it. But I never had the time or a real reason to look at it. I still have no reason to try it, but a little bit of time left. So why not? :-)

This post is just a small overview of what I learned during the setup and in the very first tries.

The Goal

It is not only about developing using React; later I will also see how React works with ASP.NET and ASP.NET Core and how it behaves in Visual Studio. I also want to try the different benefits (compared to Angular) I heard and read about:

  • It is not a huge framework like Angular but just a library
  • Because it's a library, it should be easy to extend existing web apps
  • You should be freer to use different libraries, since not all the stuff is built in


My first idea was to follow the tutorials on https://reactjs.org/. Using this tutorial, some other tools came along and some hidden configuration happened. The worst thing from my perspective: I needed to use a package manager to install another package manager to load the packages. Yarn was installed using NPM and then used. Webpack was installed and used in some way, but there was no configuration, no hint about it. This tutorial uses the create-react-app starter kit, which hides a lot of stuff.

Project setup

What I like while working with Angular is the really transparent way of using it and working with it. Because of this, I searched for a pretty simple tutorial to set up React in a simple, clean and lightweight way. I found this great tutorial by Robin Wieruch: https://www.robinwieruch.de/minimal-react-Webpack-babel-setup/

This setup uses NPM to get the packages. It uses Webpack to bundle the needed JavaScript; Babel is integrated into Webpack to transpile the JavaScript from ES6 to more browser-compatible JavaScript.

I also use the Webpack dev server to run the React app during development, and react-hot-loader is used to speed up development a little bit. The main difference to Angular development is the use of ES6-based JavaScript and Babel instead of TypeScript. It should also work with TypeScript, but it doesn't really seem to matter, because they are pretty similar. I'll try using ES6 to see how it works. The only thing I will possibly miss is the type checking.

As you can see, there is not really a difference to Typescript yet, only the JSX thing takes getting used to:

// index.js
import React from 'react';
import ReactDOM from 'react-dom';

import Layout from './components/Layout';

const app = document.getElementById('app');

ReactDOM.render(<Layout/>, app);


I can also use classes in JavaScript:

// Layout.js
import React from 'react';
import Header from './Header';
import Footer from './Footer';

export default class Layout extends React.Component {
    render() {
        return (
            <div>
                <Header/>
                {/* page content goes here */}
                <Footer/>
            </div>
        );
    }
}

With this setup, I believe I can easily continue to play around with React.

Visual Studio Code

To support ES6, React and JSX in VSCode I installed some extensions for it:

  • Babel JavaScript by Michael McDermott
    • Syntax-Highlighting for modern JavaScripts
  • ESLint by Dirk Baeumer
    • To lint the modern JavaScripts
  • JavaScript (ES6) code snippets by Charalampos Karypidis
  • Reactjs code snippets by Charalampos Karypidis


Webpack is configured to build a bundle.js into the ./dist folder. This folder is also the root folder for the Webpack dev server, so it will serve all the files from within this folder.

To start building and running the app, there is a start script added to the package.json:

"start": "webpack-dev-server --progress --colors --config ./webpack.config.js",

With this I can easily call npm start from a console or from the terminal inside VSCode. The Webpack dev server will rebuild the code and reload the app in the browser whenever a code file changes.

const webpack = require('webpack');

module.exports = {
    entry: [
        'react-hot-loader/patch',
        './src/index.js'
    ],
    module: {
        rules: [{
            test: /\.(js|jsx)$/,
            exclude: /node_modules/,
            use: ['babel-loader']
        }]
    },
    resolve: {
        extensions: ['*', '.js', '.jsx']
    },
    output: {
        path: __dirname + '/dist',
        publicPath: '/',
        filename: 'bundle.js'
    },
    plugins: [
        new webpack.HotModuleReplacementPlugin()
    ],
    devServer: {
        contentBase: './dist',
        hot: true
    }
};

React Developer Tools

For Chrome and Firefox there are add-ins available to inspect and debug React apps in the browser. For Chrome I installed the React Developer Tools, which is really useful to see the component hierarchy:

Hosting the app

The React app is hosted in an index.html, which is stored inside the ./dist folder. It references the bundle.js. The React process starts in the index.js. React puts the app inside a div with the id app (as you can see in the first code snippet in this post).

<!DOCTYPE html>
<html>
  <head>
    <title>The Minimal React Webpack Babel Setup</title>
  </head>
  <body>
    <div id="app"></div>
    <script src="bundle.js"></script>
  </body>
</html>

The index.js imports the Layout.js. Here a basic layout is defined by adding a Header and a Footer component, which are also imported from other components.

// Header.js
import React from 'react';

export default class Header extends React.Component {
    constructor(props) {
        super(props);
        this.title = 'Header';
    }
    render() {
        return (
            <div>{this.title}</div>
        );
    }
}

// Footer.js
import React from 'react';

export default class Footer extends React.Component {
    constructor(props) {
        super(props);
        this.title = 'Footer';
    }
    render() {
        return (
            <div>{this.title}</div>
        );
    }
}

The resulting HTML looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>The Minimal React Webpack Babel Setup</title>
  </head>
  <body>
    <div id="app">
      <!-- the rendered Layout, Header and Footer markup ends up here -->
    </div>
    <script src="bundle.js"></script>
  </body>
</html>


My current impression is that React starts up much faster than Angular. This is just a kind of hello-world app, but even for such an app Angular needs some time to start a few lines of code. Maybe that changes as the app gets bigger. But I'm sure it will stay fast, because of less overhead in the framework.

The setup was easy and worked on the first try. The experience with Angular helped a lot here; I already knew the tools. Anyway, Robin's tutorial is pretty clear, simple and easy to read: https://www.robinwieruch.de/minimal-react-Webpack-babel-setup/

To get started with React, there's also a nice video series on YouTube which covers the real basics and how to get started creating components and adding dynamic behavior to them: https://www.youtube.com/watch?v=MhkGQAoc7bc&list=PLoYCgNOIyGABj2GQSlDRjgvXtqfDxKm5b

Marco Scheel: What’s in your bag, Marco?

I'm a fan of this type of blog post. There are great examples of it over at The Verge. Today I'd like to give you a look into my bag and show what a 100% cloud consultant and Lead Cloud Architect needs for a successful day.

We often talk about the modern workplace, but for me in particular the modern workplace is nowhere and everywhere. So what do I carry around? Every now and then there are blogs in which the author describes what he carries with him and the motivation behind it. In many meetings you can see that I use a Microsoft Surface Book (GEN1), but I'll now show you what else is in my bag. For the interesting items, there is also a sentence on the why.


The picture again in XL on Twitter.

Let's start with the most obvious: my bag (1) is a Vaude Albert M. I try to travel "light". A bag with wheels may be practical, but for my requirements and my budget this one hits the spot exactly. Laptops up to 13'' had to fit in, and for longer trips it can be slipped onto a trolley. The compartment layout is sufficient for me, and there are four mesh pockets inside for all my small stuff.

My laptop (2), or rather my 2-in-1, is a Microsoft Surface Book (GEN1). Since joining Glück & Kanja I have been in the lucky position of using pen PCs: I started with a Toshiba Portege m200, moved on to a Microsoft Surface Pro (the original) and then straight to the Surface Book. The only slip was a Dell Latitude E6400. In the bag, my laptop is protected by a Belkin neoprene sleeve (3). The sleeve also gets used outside the bag and protects the device from the occasional rain when I have a meeting at a customer's site in another building or part of town.

The battery life of the Surface Book is OK, but nobody dares leave the house without the power supply (4). The power supply is great. What can be great about a power supply? It has a USB charging port! I always keep a short micro-USB cable with a USB-C adapter attached to it. Phone, headset, power bank and the like can be charged quickly and easily this way.

As a mobile mouse I use a Microsoft Arc Touch Mouse Surface Edition (i.e. with Bluetooth). The mouse was essential in the Surface Pro days, since there was only one USB port and the trackpad didn't deserve its name. With the Surface Book I now have a very good trackpad and the mouse gets used less and less, but I still can't do entirely without it. Thanks to the fold-flat mechanism it takes up practically no space in the bag.

For the numerous conference calls each week a proper headset (6) is irreplaceable. Glück & Kanja is not unknown in the UC space, so I have many colleagues who keep telling me how important a good headset is (and a LAN cable). With the Plantronics Voyager Focus UC I have a top-class headset. It even has active noise cancelling and can therefore also be used on the train for watching videos. I use this headset both on the road and at my desk in Offenbach. It has a very easy-to-reach mute switch, so at least I don't disturb conference calls with unnecessary noise. Due to a bug with my Surface Book and the current Windows Insider build, I'm currently using my "old" mobile headset (7): the Jabra Stealth UC has always accompanied me on the road and never clogged up my bag. Privately I use the Jaybird Freedom Sprint Bluetooth earphones (8). They are usually paired only with my smartphone, and I use them mostly for podcasts. The button at the ear lets me pause and resume a podcast at any time. You can also wear them for jogging.

My bag has plenty of mesh pockets and compartments, but the small stuff was always flying around somehow, and the wired earphones got tangled up in everything. In a "What's in my bag" post on a US blog I found this great gadget: a "stuff" organizer (9) called Cocoon GRID-IT. These are quite affordable by now and available in different sizes. I have the following things organized in it:

  • Wired earphones
  • Wired headset (still from my Lumia 920)
  • MicroSD card with adapter
  • USB 2.0 stick (small) + USB 3.0 stick (large)
  • Surface Pen tip kit
  • USB-C to headphone jack adapter
  • Mini DisplayPort to VGA
  • Mini DisplayPort to HDMI, DVI, DisplayPort
  • Micro-USB to USB adapter

A power bank simply belongs in every bag these days. In the last Amazon Xmas sale I picked up another "small" Anker, the PowerCore 10000. I used to carry an Anker PowerCore with 20,100 mAh, but it was just unnecessarily heavy. Charging a headset or phone on the go is much more elegant with the small battery. For this purpose I also keep short cables (micro-USB and USB-C) in the small pocket.

I always carry my cable collection and an Anker 24 W 2-port charger in the front pocket. Long cables work best at the charger. My favourite cable (because it's long) is the white micro-USB cable left over from my Kindle Keyboard days. More recent devices need a USB-C cable, and I still have one from my Lumia 950 XL. My wife uses an iPad as her "computer", so it never hurts to also have a Lightning cable (from Anker, of course) with me.

That leaves the rest (12): hand cream, a pen, lozenges for a scratchy throat, hay fever tablets, batteries for the Surface Pen, an umbrella and, as a father of two boys, always a few wet wipes.

André Krämer: Klickbare Labels Mit Xamarin Forms

During a prototype workshop, a customer recently asked me why the Xamarin.Forms Label control doesn't expose a Click or Tap event. The answer is relatively simple: most Xamarin.Forms controls don't expose a Click or Tap event themselves. Instead, in such a case you use a TapGestureRecognizer. With tap gesture recognizers, pretty much any control can be made clickable. Clicks / taps with event handlers: in a small example, the whole thing looks as follows:
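The original code sample did not survive the feed. As a hedged sketch of the approach described above (the label text and handler are made up for illustration), wiring a TapGestureRecognizer to a Label in code looks roughly like this:

```csharp
using Xamarin.Forms;

var label = new Label { Text = "Tap me" };

// A TapGestureRecognizer raises the Tapped event; adding it to the
// control's GestureRecognizers collection makes the label clickable.
var tapRecognizer = new TapGestureRecognizer();
tapRecognizer.Tapped += (sender, e) => label.Text = "Tapped!";
label.GestureRecognizers.Add(tapRecognizer);
```

The same recognizer can be attached in XAML via the GestureRecognizers element of almost any view.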

Uli Armbruster: Ideen für das Format des NOSSUED 2018 gesucht

The NOSSUED will take place in Karlsruhe again from 16 to 17 June 2018. The date was moved forward to accommodate the different summer holidays of the individual German states.

As every year, the conference will be run as an open space. Beyond that, we as the organizing team are looking for ideas to extend the format and bring in new incentives.

For example, a mix of charity development, hackathon and open space would be possible. A project that schools or clubs could use would be selected in advance, e.g. via an exchange on Facebook or on the evening before NOSSUED. A web-based membership administration, including membership fees, for sports clubs would be one such project. We would then discuss which technologies should be used for it and how the architecture should be structured. The open space sessions would be aligned to these topics. In parallel, a hackathon takes place in groups in which the product is developed. If questions, discussions or difficulties come up, a new open space session can be generated from them and held spontaneously.

What do you say to this approach? Do you have other ideas? Or would you rather not change the existing format at all? Let me know in the comments.

Stefan Henneken: IEC 61131-3: Unit-Tests

Unit tests are an essential tool for every programmer to ensure that their software works. Software bugs cost time and money, so you need an automated solution to find these bugs, preferably before the software is used. Unit tests should be used wherever software is professionally developed. This article is intended to provide a quick introduction and an understanding of the benefits of unit tests.


Separate test programs are often written to test function blocks. In such a test program, an instance of the desired function block is created and called. The output variables are observed and manually checked for correctness. If these do not match the expected values, the function block is adjusted until it works as intended.

But testing software once is not enough. Changes and extensions to a program often cause functions or function blocks that were previously tested and working without errors to suddenly stop working correctly. Correcting program errors can also affect other parts of the program and lead to malfunctions elsewhere in the code. Previously executed and completed tests must therefore be repeated manually.

One possible way to improve this is to automate the tests. For this purpose, a test program is developed which calls up the functionality of the program to be tested and checks the return values. A test program written once offers a number of advantages:

– The tests are automated and can therefore be repeated at any time with the same framework conditions (timings, …).

– Once written tests are retained for other team members.

Unit Tests

A unit test checks a very small and self-sufficient part (unit) of a software. In IEC 61131-3, this is a single function block or a function. Each test calls the unit to be tested (function block, method or function) with test data (parameters) and checks its reaction to this test data. If the delivered result matches the expected result, the test is considered passed. A test generally consists of a whole series of test cases that not only check one target/actual pair, but several of them.

Developers decide for themselves which test scenarios to implement. However, it makes sense to test with values that typically occur in practice. Considering limit values (extremely large or small values) or special values (null pointer, empty string) is also useful. If all these test scenarios deliver the expected values, the developer can assume that the implementation is correct.

A positive side effect is that developers have fewer headaches when making complex changes to their code. After all, they can check the system at any time after making such changes. If no errors occur after such a change, it was most likely successful.

However, the risk of poor test implementation must not be ignored. If the tests are inadequate or even wrong, but produce a positive result, this deceptive certainty will sooner or later lead to major problems.

The Unit Test Framework TcUnit

Unit test frameworks offer necessary functionalities to create unit tests quickly and effectively. They offer further advantages:

– Everyone in the team can extend the tests quickly and easily.

– Everyone is able to start the tests and check the results of the tests for correctness.

The unit test framework TcUnit was developed as part of a project. In fact, it is a PLC library that provides methods for verifying variables (assert methods). If a check was not successful, a status message is displayed in the output window. The assert methods are contained in the function block FB_Assert.

There is a method for each data type, and the structure is always similar: there is always a parameter that contains the actual value and a parameter for the expected value. If both match, the method returns TRUE, otherwise FALSE. The parameter sMessage specifies the output text to be displayed in the event of an error; this allows you to assign the messages to the individual test cases. The names of the assert methods always begin with AreEqual.

Here, for example, the method checks a variable of type integer for validity.
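The original post shows this method as a screenshot. As a sketch in Structured Text (the exact parameter names are an assumption derived from the naming scheme described above), a call could look like this:

```
// Checks an INT variable; sMessage is shown in the output window on mismatch.
bResult := Assert.AreEqualINT(nExpectedValue := 5,
                              nActualValue   := nTestValue,
                              sMessage       := 'result of the calculation');
```

The methods with additional parameters mentioned below follow the same pattern, extended, for example, by a tolerance or a length argument.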


Some methods contain additional parameters.


All standard data types (BOOL, BYTE, INT, WORD, STRING, TIME, …) have corresponding assert methods. Some special cases, such as AreEqualMEM for checking a memory area or AreEqualGUID, are also supported.

The first Example

Unit tests are used to check individual function blocks independently of other components. These function blocks can be located in a PLC library or in a PLC project.

For the first example, the FB to be tested should be located in a PLC project. This is the function block FB_Foo.


bSwitch: A positive edge sets the output bOut to TRUE. It remains active for the time tDuration. If the output is already set, the time tDuration is restarted.
bOff: The output bOut is immediately reset by a positive edge.
tDuration: Defines the time that the output bOut remains set if no further positive edges are applied to bSwitch.

Unit tests are intended to prove that the FB_Foo function block behaves as expected. The code for testing is implemented directly in the TwinCAT project.

Project Setup

To separate the test code from the application, the folder TcUnit_Tests is created. The POU P_Unit_Tests is stored in this folder from which the respective test cases are called.

A corresponding test FB is created for each FB. This has the same name plus the postfix _Tests. In our example, the name is FB_Foo_Tests.


In P_Unit_Tests, an instance of FB_Foo_Tests is created and called.

PROGRAM P_Unit_Tests
VAR
   fbFoo_Tests : FB_Foo_Tests;
END_VAR

fbFoo_Tests();


FB_Foo_Tests contains the entire test code for checking FB_Foo. In FB_Foo_Tests, an instance of FB_Foo is created for each test case. These are called with different parameters and the return values are validated using the assert methods.

The execution of the individual test cases takes place in a state machine, which is also managed by the PLC library TcUnit. This means, for example, that the test is automatically terminated as soon as an error has been detected.

Definition of Test Cases

The individual test cases must first be defined. Each test case occupies a certain area in the state machine.

For the naming of the individual test cases, several naming rules have proven useful, which help to make the test setup more transparent.

The name of a test case that checks an input of FB_Foo is composed of [input name]_[test condition]_[expected behaviour]. Test cases that test methods of FB_Foo are named similarly, i.e. [method name]_[test condition]_[expected behaviour].

The following test cases are defined according to this scheme:


Switch_RisingEdgeAndDuration1s_OutIsTrueFor1s: Tests whether the output bOut is set for 1 s by a positive edge at bSwitch, if tDuration is set to t#1s.


Switch_RisingEdgeAndDuration1s_OutIsFalseAfter1100ms: Tests whether a positive edge at bSwitch causes the output bOut to become FALSE again after 1100 ms, if tDuration has been set to t#1s.


Switch_RetriggerSwitch_OutKeepsTrue
Tests whether the time tDuration is restarted by a new positive edge at bSwitch.


Off_RisingEdgeAndOutIsTrue_OutIsFalse
Tests whether the output bOut is set to FALSE by a positive edge at bOff.

Test Case Implementation

Each test case occupies at least one step in the state machine. In this example, the increment between the individual test cases is 16#0100: the first test case starts at 16#0100, the second at 16#0200, etc. In step 16#0000, initializations are performed; step 16#FFFF must also be present, since the state machine jumps to it as soon as an assert method has detected an error. If the test runs without errors, a message is displayed in step 16#FF00 and the unit test for FB_Foo is finished.

The pragma region is very helpful to simplify navigation in the source code.

VAR
   bError : BOOL;
   bDone : BOOL;
   Assert : FB_ASSERT('FB_Foo');
   tonDelay : TON;
   fbFoo_0100 : FB_Foo;
   fbFoo_0200 : FB_Foo;
   fbFoo_0300 : FB_Foo;
   fbFoo_0400 : FB_Foo;
END_VAR

CASE Assert.State OF
{region 'start'}
16#0000:
   bError := FALSE;
   bDone := FALSE;
   Assert.State := 16#0100;
{endregion}

{region 'Switch_RisingEdgeAndDuration1s_OutIsTrueFor1s'}
16#0100:
   // ... test code ...
   Assert.State := 16#0200;
{endregion}

{region 'Switch_RisingEdgeAndDuration1s_OutIsFalseAfter1100ms'}
16#0200:
   // ... test code ...
   Assert.State := 16#0300;
{endregion}

{region 'Switch_RetriggerSwitch_OutKeepsTrue'}
16#0300:
   // ... test code ...
   Assert.State := 16#0400;
{endregion}

{region 'Off_RisingEdgeAndOutIsTrue_OutIsFalse'}
16#0400:
   // ... test code ...
   Assert.State := 16#FF00;
{endregion}

{region 'done'}
16#FF00:
   // output success message
   Assert.State := 16#FF10;

16#FF10:
   bDone := TRUE;
{endregion}

{region 'error'}
16#FFFF:
   bError := TRUE;
{endregion}
END_CASE


There is a separate instance of FB_Foo for each test case. This ensures that each test case works with a newly initialized instance of FB_Foo. This avoids mutual influence of the test cases.

16#0100:
   fbFoo_0100(bSwitch := TRUE, tDuration := T#1S);
   Assert.AreEqualBOOL(TRUE, fbFoo_0100.bOut, 'Switch_RisingEdgeAndDuration1s_OutIsTrueFor1s');

   tonDelay(IN := TRUE, PT := T#900MS);
   IF (tonDelay.Q) THEN
      tonDelay(IN := FALSE);
      Assert.State := 16#0200;
   END_IF

The function block under test is called for 900 ms. During this time, bOut must be TRUE, because bSwitch has been set to TRUE and tDuration is 1 s. The assert method AreEqualBOOL checks the output bOut; if it does not have the expected state, an error message is output. After 900 ms, the test switches to the next test case by setting the State property of FB_Assert.

A test case can also consist of several steps:

16#0300:
   fbFoo_0300(bSwitch := TRUE, tDuration := T#500MS);
   Assert.AreEqualBOOL(TRUE, fbFoo_0300.bOut, 'Switch_RetriggerSwitch_OutKeepsTrue');

   tonDelay(IN := TRUE, PT := T#400MS);
   IF (tonDelay.Q) THEN
      tonDelay(IN := FALSE);
      fbFoo_0300(bSwitch := FALSE);
      Assert.State := 16#0310;
   END_IF

16#0310:
   fbFoo_0300(bSwitch := TRUE, tDuration := T#500MS);
   Assert.AreEqualBOOL(TRUE, fbFoo_0300.bOut, 'Switch_RetriggerSwitch_OutKeepsTrue');

   tonDelay(IN := TRUE, PT := T#400MS);
   IF (tonDelay.Q) THEN
      tonDelay(IN := FALSE);
      Assert.State := 16#0400;
   END_IF

bSwitch is triggered at the beginning of step 16#0300 and retriggered at the beginning of step 16#0310, while the two calls to AreEqualBOOL check whether the output remains set.

Output of Messages

After executing all test cases for FB_Foo, a message is output (step 16#FF00).


If an assert method detects an error, it is also displayed as a message.


If the AbortAfterFail property of FB_Assert is set to TRUE, the step 16#FFFF is called in case of an error and the test is terminated.

The assert methods prevent the same message from being output more than once within a single step. Repeated output of the same message, e.g. in a loop, is thus suppressed. Setting the MultipleLog property to TRUE deactivates this filter, so that every message is output.
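The described filter behaviour can be sketched in a few lines. The following TypeScript snippet is purely illustrative (TcUnit itself is written in Structured Text, and all names below are made up); it only mirrors the duplicate suppression and the MultipleLog switch described above:

```typescript
// Illustrative sketch of the message filtering, NOT the library's source.
// Within one step, an identical message is emitted only once, unless
// multipleLog (mirroring the MultipleLog property) is set to true.
class MessageFilter {
  multipleLog = false;
  private lastMessage: string | null = null;

  // Returns true if the message was actually emitted.
  log(message: string): boolean {
    if (!this.multipleLog && message === this.lastMessage) {
      return false; // same message repeated within the step: suppress it
    }
    this.lastMessage = message;
    console.log(message);
    return true;
  }

  // Reset the filter when the state machine advances to the next step.
  nextStep(): void {
    this.lastMessage = null;
  }
}
```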

Due to the structure shown above, the unit tests are clearly separated from the actual application. FB_Foo remains completely unchanged.

This TwinCAT solution is kept in source code management (such as TFS or Git) together with the TwinCAT solution for the PLC library. Thus, the tests are available to all team members of a project. With the unit test framework, tests can be extended by anyone, and existing tests can be started and easily evaluated.

Even though the term unit test framework is somewhat ambitious for the PLC library TcUnit, it shows that automated tests are possible with IEC 61131-3 using only a few tools. Commercial unit test frameworks go far beyond what a PLC library can do: they contain dialogs to start the tests and display the results, and they often mark the areas of the source code that were covered by the individual test cases.

Library TcUnit (TwinCAT 3.1.4022) on GitHub

Sample (TwinCAT 3.1.4022) on GitHub


The biggest obstacle to unit testing is often one's weaker self. Once it has been overcome, the unit tests almost write themselves. The second hurdle is the question of which parts of the software to test. It makes little sense to try to test everything. Instead, you should concentrate on essential areas of the software and thoroughly test the function blocks that form the basis of the application.

A unit test is generally considered good when its execution covers as many branches as possible. When writing unit tests, the test cases should therefore be chosen so that as many branches of the function block as possible are exercised.

If an error nevertheless occurs in practice, it can be advantageous to write a test for that error case. This ensures that an error that has occurred once does not reappear.

The mere fact that two or more function blocks work correctly, and that this has been proven by unit tests, does not mean that an application also applies these function blocks correctly. Unit tests do not replace integration and acceptance tests in any way; such test methods validate and evaluate the overall system. Even when unit tests are applied, the complete system still has to be tested. However, a significant share of potential errors is eliminated in advance by unit tests, which saves time and money in the end.

Further Information

During the preparation of this post, Jakob Sagatowski published the first part of an article series on test-driven development in TwinCAT in his blog AllTwinCAT. For all those who want to go deeper into the topic, I can highly recommend the blog. It is encouraging that other PLC programmers also engage with the testing of their software. The book The Art of Unit Testing by Roy Osherove is also a good introduction to the topic. Although the book was not written for IEC 61131-3, it contains some interesting approaches that can be implemented in the PLC without any problems.

Finally, I would like to thank my colleagues Birger Evenburg and Nils Johannsen. The basis for this post was a PLC library, kindly provided by them.

MSDN Team Blog AT [MS]: Wanted: Kinder lernen programmieren – Mentoren gesucht

Do you enjoy working with children? The CoderDojos are looking for mentors!

The CoderDojos are clubs for children and teenagers between 8 and 17 years of age, in which they can learn about technology and programming. Various exercises are worked on in regular meetings.

We are in contact with the CoderDojos in Vienna and Linz, https://wien.coderdojo.net/ and http://coderdojo-linz.github.io/, and on February 16 two related events take place at our office:

  1. The CoderDojo Hackathon, where, after a tour of the Microsoft Learning Hub, new exercises for the children and teenagers are developed: https://www.eventbrite.de/e/coderdojo-wien-hackathon-at-microsoft-tickets-41918976788?aff=erelpanelorg
  2. The actual CoderDojo for the kids themselves: https://www.eventbrite.de/e/coderdojo-wien-at-microsoft-tickets-41919099154

For both events we are still looking for people interested in helping to create new exercises for the kids and, subsequently, in joining the CoderDojos from time to time to support the interested kids.


MSDN Team Blog AT [MS]: Microsoft DevOps tools win Tool Challenge @ Software Quality Days

Yes, we did it (again). Microsoft won the Tool Challenge at the Software Quality Days and was presented the BEST TOOL AWARD 2018. The beauty of this award is that the conference participants themselves voted for the best tool among vendors like CA Technologies, Micro Focus, Microsoft, and Tricentis. Rainer Stropek presented for Microsoft on the Future of Visual Studio & Visual Studio Team Services, covering topics like DevOps, mobile DevOps, Live Unit Testing, and how Machine Learning will affect testing.


During the conference we presented our DevOps solution based on Visual Studio Team Services, the new Visual Studio App Center service for mobile DevOps and the Cloud platform Microsoft Azure as a place for every tester and developer, regardless of platform or language used, to run their applications or test environments.


Software Quality Days is the brand of a yearly two-day conference (plus two workshop days) focusing on software quality and testing technologies, with about 400 attendees. The conference is held in Vienna, Austria, and celebrated its 20th anniversary in 2018. Five tracks (practical, scientific, and tool-oriented) make up the conference agenda. In the three practical tracks there are presentations of application-oriented experiences and lectures, from users for users. The scientific track presents a corresponding level of innovation and research results, and how they relate to practical usage scenarios. The leading vendors of the industry present their latest services and tools in the exhibition and showcase practical examples and implementations in the Solution Provider Forum.

Tool challenge
As part of the Software Quality Days, the Tool Challenge is a special format on the first day of the conference. Participating vendors get questions or a practical challenge that needs to be "solved" during the day. In the late afternoon the solution is presented back to the audience of the conference. For the participating vendors the challenge lies in developing the solution and content at the conference location with limited time available, as well as presenting it to the audience in a slot of only 12 minutes. Each conference participant gets one voting card and selects his or her favorite solution or presentation. The vendor with the highest number of voting cards wins the Tool Challenge.

The slides of our contribution are posted on SlideShare: http://www.slideshare.net/rstropek/software-quality-days-2018-tools-challenge

Video of the Tool Challenge presentation: https://www.youtube.com/watch?v=STr0ZiBtfPQ

Special thanks go to Rainer Stropek for the superior presentation at the Tool Challenge!


Rainer Stropek, Regional Director & MVP, Azure (right in the picture)
Gerwald Oberleitner, Technical Sales, Intelligent Cloud, Microsoft (left in the picture)

André Krämer: Fehler: Xamarin.Forms legt in Visual Studio 2017 Update 5 leere Projektmappe an

Kürzlich stieß ich auf einen sehr unschönen Fehler in Visual Studio 2017 Update 5. Beim Testen der neuen Xamarin.Forms Projektvorlage, die nun auch .NET Standard für das Teilen des Codes unterstützt, erhielt ich als Ergebnis in Visual Studio eine leere Projektmappe. Sowohl das geteilte Projekt, als auch die plattformspezifischen Projekte fehlten. Ein Blick in den Dateiexplorer zeigte, dass es sich nicht um einen Anzeigefehler in Visual Studio handelte, sondern dass tatsächlich auch im Dateisystem keine Dateien angelegt wurden.

Jürgen Gutsch: Book Review: ASP.​NET Core 2 and Angular 5

Last fall, I did my first technical review of a book written by Valerio De Sanctis, called ASP.NET Core 2 and Angular 5. The book is about using Visual Studio 2017 to create a Single Page Application with ASP.NET Core and Angular.

About this book

The full title is "ASP.NET Core 2 and Angular 5: Full-Stack Web Development with .NET Core and Angular". It was published by PacktPub and is also available on Amazon, both as a printed version and in various e-book formats.

This book doesn't cover both technologies in depth, but it gives you a good introduction to how both technologies work together. It leads you step by step from the initial setup to the finished application. Don't expect a book for expert developers; this book is great for ASP.NET developers who want to get started with ASP.NET Core and Angular. It is a step-by-step tutorial covering all parts of an application that manages tests, their questions, answers, and results. It describes the database as well as the Web APIs, the Angular parts and the HTML, the authentication, and finally the deployment to a web server.

Valerio uses the Angular-based SPA project which is available in Visual Studio 2017 with the .NET Core 2.0 SDK. This project template is not the best solution for bigger projects, but it fits well for small projects like the one described in this book.

About the technical review

It was my first technical review of an entire book, and it was kind of fun to do. I'm pretty sure it was a pretty hard job for Valerio, because the technologies changed while he was working on the chapters. ASP.NET Core 2.0 was released after he had finished four or five chapters, and he needed to rewrite them. He changed the whole Angular integration in the ASP.NET project because of the new Angular SPA project template. Angular 5 also came out during the writing; fortunately, there weren't many relevant changes between version 4 and version 5. I know these issues of writing good content while the technology changes. I did an article series for a developer magazine about ASP.NET Core and Angular 2, and both ASP.NET Core and Angular changed many times, and changed again right after I finished the articles. I rewrote that stuff a lot and worked almost six months on only three articles. Even my Angular posts in this blog are pretty much outdated and don't work anymore with the latest versions.

Kudos to Valerio, he really did a great job.

I got one chapter after another to review. My job wasn't just to read the chapters, but also to find logical errors, mistakes that would possibly confuse the readers, and code parts that don't work. I followed the chapters as written by Valerio to build the sample application, following all instructions and samples to find errors. I reported a lot of errors, I think, and I'm sure that all of them were fixed. After I finished the review of the last chapter, I had also finished the coding and got a running application deployed on a web server.

Readers reviews on Amazon and PacktPub

I just had a look at the readers' reviews on Amazon and PacktPub. There are not many reviews yet, but unfortunately 4 out of the current 9 talk about errors in the code samples, mostly in the client-side Angular code. This is a lot, IMHO, and it makes me sad. I really apologize for that. I was pretty sure I had found almost all mistakes, at least those that prevent a running application, because I got it running in the end. Additionally, I wasn't the only technical reviewer: Ramchandra Vellanki also did a great job, for sure.

So why did some readers find errors? Two reasons came to mind first:

  1. The readers didn't follow the instructions carefully enough. Especially experienced developers think they already know how it works, or how it should work, from their perspective. They don't read exactly, because they assume they know where things are going. I did so as well during the first three or four chapters and had to start again from the beginning.
  2. Dependencies changed after the book was published, especially where package versions inside the package.json were not pinned to a specific version. npm install then loads the latest version, which may contain breaking changes. The package.json in the book has fixed versions, but the sources on GitHub don't.

I'm pretty sure there are some errors left in the code, but in the end the application should run.

There are also conceptual differences. While writing about and working with Angular and ASP.NET Core, I learned a lot, and from my current point of view I would not host an Angular app inside an ASP.NET Core application anymore. (Maybe I'll still consider it for a really small application.) Anyway, there is that ASP.NET Core Angular SPA project, and it is really easy to set up a SPA with it. So why not use this project template to describe the concepts and interaction of Angular and ASP.NET Core? It keeps the book simple and short for beginners.


I would definitely do a technical review again, if needed. As I said, it is fun and an honor to help an author to write a book like this.

Too bad that some readers struggled with errors anyway and couldn't get the code running. But writing a book is hard work. And we developers all know that no application is really bug-free, so even a book about quickly changing technologies cannot be free of errors.

Manfred Steyer: Microservice Clients with Web Components using Angular Elements: Dreams of the (near) future?

In one of my last blog posts I compared several approaches for using Single Page Applications, especially Angular-based ones, in a microservice-based environment. Some people call such SPAs micro frontends; others call them micro apps. As you can read in the mentioned post, there is no single perfect approach, but several feasible concepts with different advantages and disadvantages.

In this post I'm looking at one of those approaches in more detail: Using Web Components. For this, I'm leveraging the new Angular Elements library (@angular/elements) the Core Team is currently working on. Please note that it's still an Angular Labs Project which means that it's experimental and that there can be breaking changes anytime.

Angular Labs

Angular Elements

To get started with @angular/elements you should have a look at Vincent Ogloblinsky's blog post. It explains the ideas behind it very well. If you prefer a video, have a look at Rob Wormald's presentation from Angular Connect 2017. Also, my buddy Pascal Precht gave a great talk about this topic at ng-be 2017.

As those resources are really awesome, I won't repeat the information they provide here. Instead, I'm showing how to leverage this know-how to implement microservice clients.

Case Study

The case study presented here is as simple as possible. It contains a shell app that activates microservice clients as well as routes within those microservice clients. They are just called Client A and Client B. In addition, Client B also contains a widget from Client A.

Client A is activated

Client B with widget from Client A

The whole source code can be found in my GitHub repo.

Routing within Microservice Clients

One thing that is rather unusual here is that whole clients are implemented as Web Components and therefore use routing:

@NgModule({
  imports: [
    ReactiveFormsModule,
    BrowserModule,
    RouterModule.forRoot([
      { path: 'client-a/page1', component: Page1Component },
      { path: 'client-a/page2', component: Page2Component },
      { path: '**', component: Page1Component }
    ], { useHash: true })
  ],
  declarations: [
    ClientAComponent,
    Page1Component,
    Page2Component,
    [...]
  ],
  entryComponents: [
    ClientAComponent,
    [...]
  ]
})
export class AppModule {
  ngDoBootstrap() { }
}

When bootstrapping such components as Web Components we have to initialize the router manually:

@Component([...])
export class ClientAComponent {
  constructor(private router: Router) {
    router.initialNavigation(); // Manually triggering initial navigation for @angular/elements
  }
}

Excluding zone.js

Normally, Angular leverages zone.js for change detection. It provides a lot of convenience by informing Angular about all browser events. To be capable of this, it monkey-patches all browser objects. Especially when we want to use several microservice clients within a single page, it can be desirable to avoid such behavior. This also leads to smaller bundle sizes.

Beginning with Angular 5 we can exclude zone.js by setting the property ngZone to noop during bootstrapping:

registerAsCustomElements(
  [ClientAComponent, ClientAWidgetComponent],
  () => platformBrowserDynamic().bootstrapModule(AppModule, { ngZone: 'noop' })
);

After this, we have to trigger change detection manually. But this is cumbersome and error-prone. There are some ideas for dealing with this. A prototypical (!) one comes from Fabian Wiles, who is an active community member. It uses a custom push pipe that triggers change detection when an observable yields a new value. It works similar to the async pipe, but unlike async, push also works without zone.js:

@Component({
  selector: 'client-a-widget',
  template: `
    <div id="widget">
      <h1>Client-A Widget</h1>
      <input [formControl]="control">
      {{ value$ | push }}
    </div>
  `,
  styles: [`
    #widget { padding:10px; border: 2px darkred dashed }
  `],
  encapsulation: ViewEncapsulation.Native
})
export class ClientAWidgetComponent implements OnInit {
  control = new FormControl();
  value$: Observable<string>;

  ngOnInit(): void {
    this.value$ = this.control.valueChanges;
  }
}

You can find Fabian's push pipe within my github repo.
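Stripped of all Angular specifics, the core idea behind such a pipe is small: subscribe to the observable, remember the latest value, and request change detection manually on every emission. The sketch below is illustrative only; Observable and the markForCheck callback are hand-rolled stand-ins, not Angular's or RxJS's actual types:

```typescript
// Hand-rolled stand-ins, for illustration only.
type Unsubscribe = () => void;
interface Observable<T> {
  subscribe(next: (value: T) => void): Unsubscribe;
}

// Core idea of a zone-less "push" mechanism: store the latest value and
// explicitly trigger change detection whenever the source emits.
class PushSketch<T> {
  latest: T | undefined;
  private unsubscribe: Unsubscribe;

  constructor(source: Observable<T>, markForCheck: () => void) {
    this.unsubscribe = source.subscribe(value => {
      this.latest = value; // value the template would render
      markForCheck();      // manual change detection, no zone.js needed
    });
  }

  destroy(): void {
    this.unsubscribe();
  }
}
```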

Build Process

For building the web components, I'm using a modified version of the webpack configuration from Vincent Ogloblinsky's blog post. I've modified it to create a bundle for each microservice client. Normally, they would be built within separate projects, but for the sake of simplicity I've put everything into my sample:

const AotPlugin = require('@ngtools/webpack').AngularCompilerPlugin;
const path = require('path');

var clientA = {
  entry: {
    'client-a': './src/client-a/main.ts'
  },
  resolve: {
    mainFields: ['es2015', 'browser', 'module', 'main']
  },
  module: {
    rules: [{
      test: /\.ts$/,
      loaders: ['@ngtools/webpack']
    }]
  },
  plugins: [
    new AotPlugin({
      tsConfigPath: './tsconfig.json',
      entryModule: path.resolve(__dirname, './src/client-a/app.module#AppModule')
    })
  ],
  output: {
    path: __dirname + '/dist',
    filename: '[name].bundle.js'
  }
};

var clientB = {
  entry: {
    'client-b': './src/client-b/main.ts'
  },
  resolve: {
    mainFields: ['es2015', 'browser', 'module', 'main']
  },
  module: {
    rules: [{
      test: /\.ts$/,
      loaders: ['@ngtools/webpack']
    }]
  },
  plugins: [
    new AotPlugin({
      tsConfigPath: './tsconfig.json',
      entryModule: path.resolve(__dirname, './src/client-b/app.module#AppModule')
    })
  ],
  output: {
    path: __dirname + '/dist',
    filename: '[name].bundle.js'
  }
};

module.exports = [clientA, clientB];

Loading bundles

After creating the bundles, we can load them into a shell application:

<client-a></client-a>
<client-b></client-b>

<script src="dist/client-a.bundle.js"></script>
<script src="dist/client-b.bundle.js"></script>

In this example the bundles are located via relative paths, but you could also load them from different origins. The latter allows for separate development and deployment of microservice clients.

In addition to that, we need some kind of meta-routing that makes sure that the microservice clients are only displayed when specific menu items are activated. I've implemented this in VanillaJS. You can look it up in the example provided.
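Reduced to its essence, such meta-routing is just a lookup from the current URL hash to the custom element that should be visible. The sketch below is hypothetical (route table and fallback are invented for illustration; the actual VanillaJS code in the repository differs):

```typescript
// Hypothetical route table: URL hash -> tag name of the microservice client.
const routes: Record<string, string> = {
  '#client-a': 'client-a',
  '#client-b': 'client-b'
};

// Decide which custom element should be shown for the current hash,
// falling back to the first client for unknown routes.
function activeElement(hash: string): string {
  return routes[hash] ?? 'client-a';
}

// In the browser, one would toggle visibility on every hashchange, e.g.:
// window.addEventListener('hashchange', () => {
//   const active = activeElement(location.hash);
//   for (const el of document.querySelectorAll<HTMLElement>('client-a, client-b')) {
//     el.style.display = el.localName === active ? '' : 'none';
//   }
// });
```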

Providing Widgets for other Microservice Clients

A bundle can provide several Web Components. For instance, the bundle for Client A also contains a ClientAWidgetComponent which is used in Client B:

registerAsCustomElements(
  [ClientAComponent, ClientAWidgetComponent],
  () => platformBrowserDynamic().bootstrapModule(AppModule, { ngZone: 'noop' })
);

When calling it, there is one challenge: in Client B, Angular doesn't know anything about Client A's ClientAWidgetComponent. Calling it would therefore cause Angular to throw an exception. To avoid this, we can make use of the CUSTOM_ELEMENTS_SCHEMA:

@NgModule({
  [...]
  schemas: [CUSTOM_ELEMENTS_SCHEMA],
  [...]
})
export class AppModule {
  ngDoBootstrap() { }
}

After this, we can call the widget anywhere within Client B:

<h2>Client B - Page 2</h2>
<client-a-widget></client-a-widget>


As mentioned, @angular/elements is currently experimental. Therefore this approach is more or less a dream of the (near) future. Besides this, there are some advantages and disadvantages:


Advantages:

  • Styling is isolated from other Microservice Clients due to Shadow DOM
  • Allows for separate development and separate deployment
  • Mixing widgets from different Microservice Clients is possible
  • The shell can be a Single Page Application too
  • We can use different SPA frameworks in different versions for our Microservice Clients


Disadvantages:

  • Microservice Clients are not completely isolated as it would be the case when using hyperlinks or iframes instead. This means that they could influence each other in an unplanned way. This also means that there can be conflicts when using different frameworks in different versions.
  • Shadow DOM doesn't work with IE 11
  • We need polyfills for some browsers

Holger Schwichtenberg: Tupel in Tupeln in C# 7.x

Tuples are used to bind structured, typed pieces of information together without having to declare a class or struct for them. Tuples can be nested.

Holger Schwichtenberg: User-Group-Vortrag zu .NET 4.7 und Visual Studio 2017 am 10. Januar in Dortmund

On this evening, the Dotnet-Doktor shows the latest features in .NET, C#, and Visual Studio.

Manfred Steyer: Generating custom Angular Code with the CLI and Schematics, Part III: Extending existing Code with the TypeScript Compiler API

Table of Contents

This blog post is part of an article series.

In my two previous blog posts, I've shown how to leverage Schematics to generate custom code with the Angular CLI as well as how to update an existing NgModule with declarations for generated components. The latter was not that difficult, because this is a task the CLI performs too, and hence there are already helper functions we can use.

But, as one can imagine, we are not always that lucky and won't always find existing helper functions. In those cases we need to do the heavy lifting ourselves, and that is what this post is about: showing how to directly modify existing source code in a safe way.

When we look into the helper functions used in the previous article, we see that they use the TypeScript Compiler API, which, for example, gives us a syntax tree for TypeScript files. By traversing this tree and looking at its nodes, we can analyse existing code and find out where a modification is needed.

Using this approach, this post extends the schematic from the last article so that the generated Service is injected into the AppComponent where it can be configured:

[...]
import { SideMenuService } from './core/side-menu/side-menu.service';

@Component({
  [...]
})
export class AppComponent {

  constructor(private sideMenuService: SideMenuService) {
    // sideMenuService.show = true;
  }
}

I think providing boilerplate for configuring a library this way can lower the barrier to getting started with it. However, please note that this simple example stands for many situations where modifying existing code provides more convenience.

The source code for the examples used for this can be found here in my GitHub repository.

Schematics is currently an Angular Labs project. Its public API is experimental and can change in the future.

Angular Labs

Walking a Syntax Tree with the TypeScript Compiler API

To get familiar with the TypeScript Compiler API, let's start with a simple NodeJS example that demonstrates its fundamental usage. All we need for this is TypeScript itself. As I'm going to use it within a simple NodeJS application, let's also install the typings for it. For this, we can use the following commands in a new folder:

npm init
npm install typescript --save
npm install @types/node --save-dev

In addition to that, we need a tsconfig.json with respective compiler settings:

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "lib": ["dom", "es2017"],
    "moduleResolution": "node"
  }
}

Now we have everything in place for our first experiment with the Compiler API. Let's create a new file index.ts:

import * as ts from 'typescript';
import * as fs from 'fs';

function showTree(node: ts.Node, indent: string = '    '): void {
  console.log(indent + ts.SyntaxKind[node.kind]);

  if (node.getChildCount() === 0) {
    console.log(indent + '    Text: ' + node.getText());
  }

  for (let child of node.getChildren()) {
    showTree(child, indent + '    ');
  }
}

let buffer = fs.readFileSync('demo.ts');
let content = buffer.toString('utf-8');

let node = ts.createSourceFile('demo.ts', content, ts.ScriptTarget.Latest, true);

showTree(node);

The showTree function recursively traverses the syntax tree beginning with the passed node. For each node, it logs the node's kind to the console. This property tells us whether the node represents, for instance, a class name, a constructor, or a parameter list. If the node doesn't have any children, the program also prints out the node's textual content, e.g. the represented class name. The function repeats this for each child node with an increased indent.

At the end, the program reads a TypeScript file and constructs a new SourceFile object with its content. As the type SourceFile is also a node, we can pass it to showTree.

In addition to this, we also need the demo.ts file the application is loading. For the sake of simplicity, let's go with the following simple class:

class Demo { constructor(otherDemo: Demo) {} }

To compile and run the application, we can use the following commands:

tsc index.ts
node index.js

Of course, it would make sense to create an npm script for this.

When running, the application should show the following syntax tree:

SourceFile
    SyntaxList
        ClassDeclaration
            ClassKeyword
                Text: class
            Identifier
                Text: Demo
            FirstPunctuation
                Text: {
            SyntaxList
                Constructor
                    ConstructorKeyword
                        Text: constructor
                    OpenParenToken
                        Text: (
                    SyntaxList
                        Parameter
                            Identifier
                                Text: otherDemo
                            ColonToken
                                Text: :
                            TypeReference
                                Identifier
                                    Text: Demo
                    CloseParenToken
                        Text: )
                    Block
                        FirstPunctuation
                            Text: {
                        SyntaxList
                            Text:
                        CloseBraceToken
                            Text: }
            CloseBraceToken
                Text: }
    EndOfFileToken
        Text:

Take some time to look at this tree. As you see, it contains a node for every aspect of our demo.ts. For instance, there is a node of the kind ClassDeclaration for our class, and it contains a ClassKeyword and an Identifier with the text Demo. You also see a Constructor with nodes that represent all the pieces a constructor consists of. It contains a SyntaxList with a sub tree for the constructor argument otherDemo.
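The same recursive pattern that showTree uses for printing can also be used for searching a tree like this. The helper below is a self-contained sketch: it works on a hand-rolled mini node type with string kinds instead of the real ts.Node with numeric ts.SyntaxKind values:

```typescript
// Minimal stand-in for the parts of ts.Node this sketch needs.
interface MiniNode {
  kind: string;          // the real API uses numeric ts.SyntaxKind values
  text?: string;
  children: MiniNode[];
}

// Depth-first search for the first node of a given kind, mirroring how
// one locates e.g. a ClassDeclaration or Constructor in a real syntax tree.
function findFirst(node: MiniNode, kind: string): MiniNode | undefined {
  if (node.kind === kind) {
    return node;
  }
  for (const child of node.children) {
    const hit = findFirst(child, kind);
    if (hit) {
      return hit;
    }
  }
  return undefined;
}
```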

When we combine what we've learned when writing this example with the things we already know about Schematics from the previous articles, we have everything to implement the initially described endeavor. The next sections describe the necessary steps.

Providing Key Data

When writing a Schematics rule, a good first step is thinking about all the data it needs and creating a type for it. In our case, this type looks like this:

export interface AddInjectionContext {
  appComponentFileName: string;     // e. g. /src/app/app.component.ts
  relativeServiceFileName: string;  // e. g. ./core/side-menu/side-menu.service
  serviceName: string;              // e. g. SideMenuService
}

To get this data, let's create a function createAddInjectionContext:

function createAddInjectionContext(options: ModuleOptions): AddInjectionContext {

  let appComponentFileName = '/' + options.sourceDir + '/' + options.appRoot + '/app.component.ts';
  let destinationPath = constructDestinationPath(options);
  let serviceName = classify(`${options.name}Service`);
  let serviceFileName = join(normalize(destinationPath), `${dasherize(options.name)}.service`);
  let relativeServiceFileName = buildRelativePath(appComponentFileName, serviceFileName);

  return {
    appComponentFileName,
    relativeServiceFileName,
    serviceName
  }
}

As this listing shows, createAddInjectionContext takes an instance of ModuleOptions. This type is part of the utilities Schematics ships with and represents the parameters the CLI passes. The three needed fields are derived from this instance. To find out in which folder the generated files are placed, it uses the custom helper constructDestinationPath:

export function constructDestinationPath(options: ModuleOptions): string {
    return '/' + (options.sourceDir ? options.sourceDir + '/' : '')
               + (options.path || '')
               + (options.flat ? '' : '/' + dasherize(options.name));
}
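To see which folders typical options map to, here is a self-contained sketch of the same logic with a simplified, inlined dasherize (the sample option values are made up for illustration):

```typescript
// Self-contained sketch of constructDestinationPath with a simplified
// dasherize, just to show which folders typical options map to.
interface ModuleOptions {
  sourceDir?: string;
  path?: string;
  flat?: boolean;
  name: string;
}

// Simplified dasherize: inserts a dash before upper-case letters and lower-cases.
const dasherize = (s: string) =>
  s.replace(/([a-z\d])([A-Z])/g, '$1-$2').toLowerCase();

function constructDestinationPath(options: ModuleOptions): string {
  return '/' + (options.sourceDir ? options.sourceDir + '/' : '')
             + (options.path || '')
             + (options.flat ? '' : '/' + dasherize(options.name));
}

// e. g. yields /src/app/core/side-menu
console.log(constructDestinationPath(
  { sourceDir: 'src', path: 'app/core', flat: false, name: 'sideMenu' }));
```

With flat: true, the name-based subfolder is skipped and the files end up directly in the given path.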

In addition to this, it uses several further helper functions Schematics provides:

  • classify: Creates a class name, e. g. SideMenu when passing side-menu.
  • normalize: Normalizes a path in order to compensate for platform specific characters like \ under Windows.
  • dasherize: Converts to Kebab case, e. g. it returns side-menu for SideMenu.
  • join: Combines two paths.
  • buildRelativePath: Builds a relative path that points from the first passed absolute path to the second one.
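To illustrate what the naming helpers do, here is a minimal re-implementation of classify and dasherize. This is a sketch for illustration only; the real implementations ship with the Angular DevKit and handle more edge cases:

```typescript
// Sketch of the naming helpers, for illustration only.
// The real versions are part of the Angular DevKit.

// classify: 'side-menu' -> 'SideMenu'
function classify(name: string): string {
  return name
    .split(/[-_ ]+/)
    .map(part => part.charAt(0).toUpperCase() + part.slice(1))
    .join('');
}

// dasherize: 'SideMenu' -> 'side-menu'
function dasherize(name: string): string {
  return name
    .replace(/([a-z\d])([A-Z])/g, '$1-$2')
    .replace(/[ _]+/g, '-')
    .toLowerCase();
}

console.log(classify('side-menu'));  // SideMenu
console.log(dasherize('SideMenu'));  // side-menu
```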

Please note that some of the helper functions used here are not part of the public API. To prevent breaking changes, I've copied the respective files. More about this wrinkle can be found in my previous article about this topic.

Adding a new constructor

In cases where the AppComponent does not have a constructor, we have to create one. The Schematics way of doing this is to create a Change object that describes this modification. For this task, I've created the function createConstructorForInjection. Although it is a bit long because we have to include several null/undefined checks, it is quite straightforward:

function createConstructorForInjection(context: AddInjectionContext, nodes: ts.Node[], options: ModuleOptions): Change {

    let classNode = nodes.find(n => n.kind === ts.SyntaxKind.ClassKeyword);

    if (!classNode) {
        throw new SchematicsException(`expected class in ${context.appComponentFileName}`);
    }

    if (!classNode.parent) {
        throw new SchematicsException(`expected constructor in ${context.appComponentFileName} to have a parent node`);
    }

    let siblings = classNode.parent.getChildren();
    let classIndex = siblings.indexOf(classNode);

    siblings = siblings.slice(classIndex);

    let classIdentifierNode = siblings.find(n => n.kind === ts.SyntaxKind.Identifier);

    if (!classIdentifierNode) {
        throw new SchematicsException(`expected class in ${context.appComponentFileName} to have an identifier`);
    }

    if (classIdentifierNode.getText() !== 'AppComponent') {
        throw new SchematicsException(`expected first class in ${context.appComponentFileName} to have the name AppComponent`);
    }

    // Find opening curly brace (FirstPunctuation means '{' here).
    let curlyNodeIndex = siblings.findIndex(n => n.kind === ts.SyntaxKind.FirstPunctuation);

    siblings = siblings.slice(curlyNodeIndex);

    let listNode = siblings.find(n => n.kind === ts.SyntaxKind.SyntaxList);

    if (!listNode) {
        throw new SchematicsException(`expected first class in ${context.appComponentFileName} to have a body`);
    }

    let toAdd = `
  constructor(private ${camelize(context.serviceName)}: ${classify(context.serviceName)}) {
    // ${camelize(context.serviceName)}.show = true;
  }
`;
    return new InsertChange(context.appComponentFileName, listNode.pos + 1, toAdd);
}

The parameter nodes contains all nodes of the syntax tree as a flat array. This structure is also used by some default rules Schematics comes with and makes it easy to search the tree with array methods. The function looks for the first node of the kind ClassKeyword. Compare this with the syntax tree above, which was displayed by the first example.

After this, it gets an array with the ClassKeyword's siblings (= its parent's children) and searches it from left to right in order to find a position for the new constructor. To search from left to right, it repeatedly truncates everything to the left of the current position using slice. Admittedly, this is not the best decision in terms of performance, but it should be fast enough, and I think it makes the code more readable.

Using this approach, the function walks to the right until it finds a SyntaxList (= the class body) that follows a FirstPunctuation node (= the character '{' in this case), which in turn follows an Identifier (= the class name). Then it uses the position of this SyntaxList to create an InsertChange object describing that a constructor should be inserted there.
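The slicing technique can be seen in isolation with a plain array. The following sketch uses strings standing in for ts.Node kinds rather than real compiler nodes:

```typescript
// Demonstrates the slice-based left-to-right search, with plain strings
// standing in for the kinds of real ts.Node objects.
let siblings = ['ClassKeyword', 'Identifier', 'FirstPunctuation', 'SyntaxList', 'CloseBraceToken'];

// Find the class name first ...
const idIndex = siblings.indexOf('Identifier');
siblings = siblings.slice(idIndex);

// ... then only search to the right of it for the opening brace ...
const braceIndex = siblings.indexOf('FirstPunctuation');
siblings = siblings.slice(braceIndex);

// ... and finally for the class body that follows it.
const body = siblings.find(k => k === 'SyntaxList');
console.log(body); // SyntaxList
```

Each slice discards everything already visited, so every subsequent find can only match nodes further to the right.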

Of course, we could also search the body of the class to find a more fitting place for the constructor -- e. g. between the property declarations and the method declarations -- but for the sake of simplicity and demonstration, I've dropped this idea.

Adding a constructor argument

If there already is a constructor, we have to add another argument for our service. The following function takes care of this task. Among other parameters, it takes the node that represents the constructor. You can again compare this with the syntax tree of our first example at the beginning.

function addConstructorArgument(context: AddInjectionContext, ctorNode: ts.Node, options: ModuleOptions): Change {
    let siblings = ctorNode.getChildren();

    let parameterListNode = siblings.find(n => n.kind === ts.SyntaxKind.SyntaxList);

    if (!parameterListNode) {
        throw new SchematicsException(`expected constructor in ${context.appComponentFileName} to have a parameter list`);
    }

    let parameterNodes = parameterListNode.getChildren();

    let paramNode = parameterNodes.find(p => {
        let typeNode = findSuccessor(p, [ts.SyntaxKind.TypeReference, ts.SyntaxKind.Identifier]);
        if (!typeNode) return false;
        return typeNode.getText() === context.serviceName;
    });

    // There is already a respective constructor argument --> nothing to do for us here ...
    if (paramNode) return new NoopChange();

    // Is the new argument the first one?
    if (!paramNode && parameterNodes.length == 0) {
        let toAdd = `private ${camelize(context.serviceName)}: ${classify(context.serviceName)}`;
        return new InsertChange(context.appComponentFileName, parameterListNode.pos, toAdd);
    }
    else if (!paramNode && parameterNodes.length > 0) {
        let toAdd = `, private ${camelize(context.serviceName)}: ${classify(context.serviceName)}`;
        let lastParameter = parameterNodes[parameterNodes.length - 1];
        return new InsertChange(context.appComponentFileName, lastParameter.end, toAdd);
    }

    return new NoopChange();
}

This function retrieves all child nodes of the constructor and searches for a SyntaxList node (= the parameter list) having a TypeReference child which in turn has an Identifier child. For this, it uses the helper function findSuccessor displayed below. The found identifier holds the type of the argument in question. If there is already an argument that points to the type of our service, we don't need to do anything. Otherwise, the function checks whether we are inserting the first argument or a subsequent one. In each case, it locates the correct position for the new argument and returns a respective InsertChange object for the needed modification.

function findSuccessor(node: ts.Node, searchPath: ts.SyntaxKind[]) {
    let children = node.getChildren();
    let next: ts.Node | undefined = undefined;

    for (let syntaxKind of searchPath) {
        next = children.find(n => n.kind == syntaxKind);
        if (!next) return null;
        children = next.getChildren();
    }

    return next;
}
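To see findSuccessor in action without the TypeScript compiler, one can mimic the relevant part of the ts.Node contract with a small mock. Only kind and getChildren are used by the search; the node factory and the string kinds below are made up for this sketch:

```typescript
// Minimal mock of the parts of ts.Node that findSuccessor relies on.
interface MockNode {
  kind: string;               // stands in for ts.SyntaxKind
  text?: string;
  getChildren(): MockNode[];
}

function node(kind: string, children: MockNode[] = [], text?: string): MockNode {
  return { kind, text, getChildren: () => children };
}

// Same walking logic as findSuccessor, typed against the mock.
function findSuccessor(n: MockNode, searchPath: string[]): MockNode | null {
  let children = n.getChildren();
  let next: MockNode | undefined = undefined;
  for (const kind of searchPath) {
    next = children.find(c => c.kind === kind);
    if (!next) return null;
    children = next.getChildren();
  }
  return next ?? null;
}

// A parameter like "private sideMenuService: SideMenuService"
const param = node('Parameter', [
  node('TypeReference', [node('Identifier', [], 'SideMenuService')])
]);

const typeNode = findSuccessor(param, ['TypeReference', 'Identifier']);
console.log(typeNode && typeNode.text); // SideMenuService
```

At every step the search descends one level, so the path describes a chain of ancestor kinds down to the node of interest.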

Deciding whether to create or modify a Constructor

The good news first: We've done the heavy lifting. What we need now is a function that decides which of the two possible changes -- adding a constructor or modifying it -- needs to be performed:

function buildInjectionChanges(context: AddInjectionContext, host: Tree, options: ModuleOptions): Change[] {
    let text = host.read(context.appComponentFileName);
    if (!text) throw new SchematicsException(`File ${options.module} does not exist.`);
    let sourceText = text.toString('utf-8');
    let sourceFile = ts.createSourceFile(context.appComponentFileName, sourceText, ts.ScriptTarget.Latest, true);

    let nodes = getSourceNodes(sourceFile);
    let ctorNode = nodes.find(n => n.kind == ts.SyntaxKind.Constructor);

    let constructorChange: Change;

    if (!ctorNode) {
        // No constructor found
        constructorChange = createConstructorForInjection(context, nodes, options);
    }
    else {
        constructorChange = addConstructorArgument(context, ctorNode, options);
    }

    return [
        constructorChange,
        insertImport(sourceFile, context.appComponentFileName, context.serviceName, context.relativeServiceFileName)
    ];
}

Like the first sample in this post, it uses the TypeScript Compiler API to create a SourceFile object for the file containing the AppComponent. Then it uses the function getSourceNodes, which is part of Schematics, to traverse the whole tree and create a flat array with all nodes. These nodes are searched for a constructor. If there is none, we use our function createConstructorForInjection to create a Change object; otherwise we go with addConstructorArgument. At the end, the function returns this Change together with another one created by insertImport, which also comes with Schematics and creates the needed import statement at the beginning of the TypeScript file.

Please note that the order of these two changes is vital, because each of them adds text to the source file and would therefore shift the positions the other one refers to.
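This effect is easy to reproduce with plain strings. The following sketch uses a simplified InsertChange and applies the changes back to front, so that the positions recorded against the original text stay valid (note that this is only an illustration; Schematics itself applies changes through an update recorder, as shown in the next section):

```typescript
// Simplified InsertChange: a position in the *original* text plus the text to add.
interface InsertChange { pos: number; toAdd: string; }

// Applying changes back to front keeps the recorded positions valid,
// because an insert never shifts the text to its left.
function applyChanges(source: string, changes: InsertChange[]): string {
  return changes
    .slice()
    .sort((a, b) => b.pos - a.pos)  // right-most change first
    .reduce((text, c) => text.slice(0, c.pos) + c.toAdd + text.slice(c.pos), source);
}

const source = 'class AppComponent { }';
const result = applyChanges(source, [
  { pos: 0, toAdd: 'import { SideMenuService } from "./side-menu.service";\n' },
  { pos: 20, toAdd: ' constructor() { }' },
]);
console.log(result);
```

Applying the same changes front to back with positions taken literally would insert the constructor at the wrong offset, because the import line shifts everything behind it.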

Putting it all together

Now we just need a factory function for a rule that calls buildInjectionChanges and applies the returned changes:

export function injectServiceIntoAppComponent(options: ModuleOptions): Rule {
    return (host: Tree) => {
        let context = createAddInjectionContext(options);
        let changes = buildInjectionChanges(context, host, options);

        const declarationRecorder = host.beginUpdate(context.appComponentFileName);
        for (let change of changes) {
            if (change instanceof InsertChange) {
                declarationRecorder.insertLeft(change.pos, change.toAdd);
            }
        }
        host.commitUpdate(declarationRecorder);

        return host;
    };
};

This function takes the ModuleOptions holding the parameters the CLI passes and returns a Rule function. It creates the context object with the key data and delegates to buildInjectionChanges. The received changes are then iterated and applied.

Adding the Rule to the Schematic

To get our new injectServiceIntoAppComponent rule called, we have to invoke it in the schematic's index.ts:

[...]
export default function (options: MenuOptions): Rule {
    return (host: Tree, context: SchematicContext) => {
        [...]
        const rule = chain([
            branchAndMerge(chain([
                mergeWith(templateSource),
                addDeclarationToNgModule(options, options.export),
                injectServiceIntoAppComponent(options)
            ]))
        ]);

        return rule(host, context);
    }
}

Testing the extended Schematic

To try the modified Schematic out, compile it and copy everything to the node_modules folder of an example application. As in the former blog article, I've decided to copy it to node_modules/nav. Please make sure to exclude the Schematic collection's own node_modules folder, so that there is no folder node_modules/nav/node_modules.

After this, switch to the example application's root and call the Schematic:

Calling the Schematic, which generates the component and registers it with the module

This not only creates the SideMenu but also injects its service into the AppComponent:

import { Component } from '@angular/core';
import { OnChanges, OnInit } from '@angular/core';
import { SideMenuService } from './core/side-menu/side-menu.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {

  constructor(private sideMenuService: SideMenuService) {
    // sideMenuService.show = true;
  }

  title = 'app';
}

Code-Inside Blog: First steps to enable login with Microsoft or Azure AD account for your application

It is quite common these days to "Login with Facebook/Google/Twitter". Of course, Microsoft has something similar. If I remember correctly, the first version was called "Live SDK", with the possibility to log in with your personal Microsoft account.

With Office 365 and the introduction of Azure AD, we were able to build an application that signs in with a personal account via the "Live SDK" and with an organizational account via "Azure AD".

However: The developer and end-user UX was far from perfect, because the implementation for each account type was different, and for the user it was not clear which one to choose.

Microsoft Graph & Azure AD 2.0

Fast forward to the right way: Use the Azure AD 2.0 endpoint.

Step 1: Register your own application

You just need to register your own application in the Application Registration Portal. The registration itself is a typical OAuth application registration, and you get a ClientId and a Secret for your application.

Warning: If you have "older" LiveSDK applications registered under your account, you need to choose Converged Applications. LiveSDK applications are more or less legacy and I wouldn't use them anymore.

Step 2: Choose a platform

Now you need to choose your application platform. If you want to enable sign-in for your web application, you need to choose "Web" and insert the redirect URL. After the sign-in process, the token will be sent to this URL.
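Under the hood this is a standard OAuth 2.0 authorization code flow against the Azure AD v2.0 endpoint. As a sketch, the sign-in URL your web application redirects the user to could be built like this (the client id and redirect URI values are placeholders for whatever your registration produced):

```typescript
// Builds the Azure AD v2.0 authorize URL for the OAuth authorization code flow.
// clientId and redirectUri are placeholders for the values from your registration.
function buildAuthorizeUrl(clientId: string, redirectUri: string): string {
  const params = new URLSearchParams({
    client_id: clientId,
    response_type: 'code',
    redirect_uri: redirectUri,
    response_mode: 'query',
    scope: 'openid profile User.Read',
  });
  return 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize?' + params.toString();
}

console.log(buildAuthorizeUrl('<client-id>', 'https://localhost:5001/signin'));
```

The code that comes back to the redirect URL is then exchanged for tokens at the corresponding token endpoint; most apps let a middleware or library handle that part.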


Step 3: Choose Microsoft Graph Permissions (Scopes)

In the last step you need to select the permissions your application needs. A first-time user has to accept your permission requests. The "Microsoft Graph" is a collection of APIs that works for personal Microsoft accounts as well as Office 365/Azure AD accounts.


The "User.Read" permission is the most basic one and allows a user to sign in, but if you want to access other APIs as well, you just need to add those permissions to your application:
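Once the user has consented and your app has obtained an access token, calling the Graph is just an HTTPS request with a bearer token. A minimal sketch of the basic "who am I" call covered by User.Read (token acquisition itself is omitted; the token value is a placeholder):

```typescript
// Builds the request for the basic "who am I" Microsoft Graph call.
// The access token comes from the OAuth flow; here it is just a placeholder.
function buildMeRequest(accessToken: string) {
  return {
    url: 'https://graph.microsoft.com/v1.0/me',
    headers: { Authorization: `Bearer ${accessToken}` },
  };
}

const req = buildMeRequest('<access-token>');
console.log(req.url); // https://graph.microsoft.com/v1.0/me
```

Issuing this request with any HTTP client returns a JSON document with the signed-in user's profile.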



After the application registration and the selection of the needed permissions you are ready to go. You can even generate a sample application on the portal. For a quick start, check this page.

Microsoft Graph Explorer


As I already said: The Graph is the center of Microsoft's cloud data, and the easiest way to play around with the different scopes and possibilities is the Microsoft Graph Explorer.

Hope this helps.

Alexander Schmidt: Unit tests against automatically provisioned SQL databases

Automatic distribution of seeded databases when running unit tests.

Manfred Steyer: A software architect's approach towards using Angular (and SPAs in general) for microservices aka microfrontends

TL;DR: To choose a strategy for implementing micro frontends with SPAs, you need to know your architectural goals, prioritize them, and evaluate them against the options available. This article does this for some common goals and presents a matrix which reflects the results of the evaluation. To provide some orientation, you might also find this decision tree useful:

Some general advice for choosing an implementation strategy for microfrontends

People ask me on a regular basis how to use SPAs and/or Angular in a microservice-based environment. The need for such microfrontends is no surprise, as microservices are quite popular nowadays. The underlying idea of microservices is quite simple: Create several tiny applications -- so-called microservices -- instead of one big monolithic application. This leads, for instance (but not only), to smaller teams (per microservice) that can make decisions faster and choose the "best" technology that suits their needs.

But when we want to use several microservices that form a bigger software system in the browser, we need a way to load them side by side and to isolate them from each other so that they cannot interact in an unplanned manner. The fact that each team can use different frameworks in different versions brings additional complexity into play.

Fortunately, there are several approaches for this. Unfortunately, no approach is perfect -- each of them has its own pros and cons.

To decide on one, a software architect would evaluate those so-called architectural candidates against the architectural goals given for the software system in question. Typical (but not the only) goals for SPAs in microservice-based environments are shown in the next section.

Architectural Goals

Architectural Goal Description
a) Isolation Can the clients influence each other in an unplanned way?
b) Separate Deployment Can the microservices be deployed separately, without the need to coordinate with the teams responsible for other microservices?
c) Single Page Shell Is the shell composing the loaded microfrontends a SPA -- or does it at least feel like one for the user (no postbacks, deep linking, holding state)?
d) Different SPA Frameworks Can we use different SPA frameworks (or libraries) in different versions?
e) Tree Shaking Can we make use of tree shaking?
f) Vendor Bundles Can we reuse already loaded vendor bundles, or do we need to load the same framework several times if it's used by several microfrontends?
g) Several microfrontends at the same time Can we display several microfrontends at the same time, e. g. a product list and a shopping basket?
h) Prevents version conflicts Does the approach prevent version conflicts between used libraries?
i) Separate development Can separate teams develop their microfrontends independently of one another?
j) One optimized solution (bundle) Everything is compiled into one optimized solution. You don't have to duplicate libraries or frameworks for different parts of the system.

Normally, you would break those abstract goals down into concrete, measurable goals for your project. As this is not possible in an overview like this, I'm sticking with the abstract ones.


The following table evaluates some architectural candidates for microfrontends against the discussed goals (a - j).

Architectural Candidate a b c d e f g h i j
I) Just using Hyperlinks x x x x x x
II) Using iframes x x x x x x x x
III) Loading different SPAs into the same page S x x x x x x
IV) Plugins * S x x x x
V) Packages (npm, etc.) * S x x x x x
VI) Monorepo Approach * S x x x x x x
VII) Web Components S x x x x x x

An x means that the goal in question is supported. The S in the column Isolation means that you can leverage Shadow DOM as well as EcmaScript Modules to achieve some amount of isolation.

* This doesn't lead to a typical microservice-based solution. But as the point is to reach the defined goals, it would be wrong not to consider these options.

If you are interested in some of those candidates, the next table provides some additional thoughts on them:

Nr Remarks
I) Just using Hyperlinks We could save the state before navigating to another microfrontend. Using something like Redux (@ngrx/store) could come in handy, because it manages the state centrally. Also consider that the user can open several microservices in several browser windows/tabs.
II) Using iframes We need something like a meta router that synchronizes the URL with the iframes' ones. We also need to solve some issues, like resizing the iframes to prevent scrolling within them, and elements cannot overlap the iframes' borders. Also, iframes are not the most popular feature ;-)
III) Loading different SPAs into the same page A popular framework that loads several SPAs into the browser is Single SPA. The main drawback seems to be the lack of isolation, because all applications share the same global namespace and the same global browser objects. If the latter are monkey-patched by a framework (like zone.js), this affects all the loaded SPAs.
IV) Plugins Dynamically loading parts of a SPA can be done with Angular, but webpack -- and hence the CLI -- demands compiling everything together. Switching to SystemJS would allow loading parts that have been compiled separately.
V) Packages This means providing each frontend as a package via a (private) npm registry or the monorepo approach and consuming it in a shell application. This also means that there is one compilation step that goes through each frontend.
VI) Monorepo Approach This approach, heavily used at Google and Facebook, is similar to using packages, but instead of distributing code via an npm registry, everything is put into one source code repository. In addition, all projects in the repository share the same dependencies. Hence, there are no version conflicts, because everyone has to use the same/latest version. And you don't need to deal with a registry when you just want to use your own libraries. A good post motivating this can be found here. To get started with this idea in the world of Angular, you should have a look at Nrwl's Nx -- a carefully thought-through library and code generator that helps (not only) with monorepos.
VII) Web Components This is similar to III), but Web Components seem to be a good fit here, because they can be used with any framework -- at least in theory. They also provide a bit of isolation when it comes to rendering and CSS due to the usage of Shadow DOM (not supported by IE 11). Currently, the Angular team is working on a very promising Labs project called Angular Elements. The idea is to compile Angular components down to Web Components.

To make one thing clear: There is no perfect solution, and it really depends on your current situation and on how important the architectural goals are for you. E. g., I've seen many teams writing successful applications leveraging libraries, and at companies like Google and Facebook there is a long tradition of using monorepos. Also, I expect that Web Components will be used more and more due to the growing framework and browser support.

Some guidance

One way to work with the presented matrix is to extend it with all the other goals you have and evaluate it against your specific situation. Even though this seems straightforward, in practice it can be quite difficult. That's why I've decided to give you some (biased) guidance by pointing out the strengths of the approaches presented:

If ... Try ...
1) You just need one microfrontend at a time and there is little/no communication between them on the UI level Hyperlinks
2) Otherwise: Legacy systems and the need for very strong isolation iframes
3) Otherwise: One optimized and fully integrated UI, just one technology and no separate deployment Packages and/or Monorepo
4) Otherwise Bootstrap several SPAs in one browser window, use Shadow DOM for isolation, consider Web Components if possible

If you don't have a legacy system and you've already decided to go with microservices, you should consider 1) and 4) in this very order.

It's not an "either/or thing"!

Mixing those approaches can also be a good idea. For instance, you could go with hyperlinks for the general routing, and when you have to display widgets from one microfrontend within another one, you could choose libraries or web components.

MSDN Team Blog AT [MS]: Christmas greetings from Codefest.AT and the PowerShell UserGroup Austria

Hello PowerShell community!

A lot has been going on with us over the last month as well.

Upcoming events

Newsletter - the "Schnipseljagd": Our weekly PowerShell newsletters came out regularly and covered the following topics:

  • Triggering a webhook
  • No passwords in code – the Azure way!
  • Try/catch/error blocks in PowerShell – error handling like the pros
  • Service accounts: changing passwords automatically
  • GPO – detecting conflicts

  • FTP
  • Editing multiple CSV files
  • Reading PowerShell Gallery information – with PowerShell
  • Images matter to command-line enthusiasts, too
  • Arranging multiple PowerShell windows
  • What information is out there about me?

  • News and differences in PowerShell 6.0
  • Faster with templates
  • Reading the number of items in an Office 365 folder
  • Adding custom properties
  • SharePoint recovery

  • PowerShell & Azure
  • Colour your console
  • Winners of the PowerShell contest
  • Advent calendar
  • Learn to build tools, not to code
  • Exporting multi-valued properties to a CSV
  • Deep learning

  • PowerShell countdown timer
  • How to post a script to the PowerShell Gallery
  • Reading a user's manager from AD
  • Doing math with PowerShell

  • PowerShell in the cloud (Microsoft Flow)
  • Reading operating system information
  • PowerShell modules
  • Microsoft MVP
  • Where is my network vulnerable?
  • Pester and loops
  • Advent calendar

We hope there was something in it for you, too!

We wish you a merry Christmas and a happy new year 2018!

CodeFest.AT and the PowerShell UserGroup Austria - www.powershell.co.at

MSDN Team Blog AT [MS]: SQL Saturday Vienna (2018)

On Friday, January 19, 2018, everything will once again revolve around the Microsoft Data Platform. SQL Saturday Vienna (2018) enters its fifth edition with even more sessions, even more speakers and many interested attendees!

SQL Saturday is a full-day event organized by the community (SQL Pass Austria) for the community. Friday (the main conference) features, among other things, a keynote by Lindsey Allen (MS Corp Redmond): exciting news from the Data Platform universe included. After that, 30 sessions are on the agenda – DBA, developer, BI and Azure topics are all covered. More information on the schedule: www.sqlsaturday.com/679/Sessions/Schedule.aspx.

On the day before the conference, you can attend one of the three pre-cons (full-day workshops) on offer. The planned topics are:

Seats are filling up slowly but steadily – registration is mandatory for both days!

The key facts:

  • Thursday, January 18, 2018: pre-cons (full-day workshops)
  • Friday, January 19, 2018: SQL Saturday Vienna 2018
  • Location: Jufa Wien, Mautner-Markhof-Gasse 50, 1110 Wien
  • Organizer: SQL Pass Austria (http://austria.sqlpass.org, @sqlsatvienna)

We are looking forward to exciting days full of Data Platform news!

SQL Pass Austria, the #SQLSatVienna orga team


Holger Schwichtenberg: When Entity Framework Core migrations cannot find the context class

A difference in the third digit of the version number can cause schema migrations to stop working.

Golo Roden: Introduction to React, part 5: unidirectional data flow

Applications built with React use a unidirectional data flow. This means that data is always passed on and processed in one direction only. This raises some questions, for example how to deal with state. What do you need to watch out for?

Manfred Steyer: A lightweight and solid approach towards micro frontends (micro service clients) with Angular and/or other frameworks

Even though the word iframe causes bad feelings for most web devs, it turns out that using them for building SPAs for microservices -- aka microfrontends -- is a good choice. For instance, they allow for perfect isolation between clients and for separate deployment. Because of the isolation, they also allow using different SPA frameworks. Besides iframes, there are other approaches to using SPAs in microservice architectures -- of course, each of them has its own pros and cons. A good overview can be found here. Another great resource comparing the options available is Brecht Billiet's presentation about this topic.

In addition to this, I've written another blog post comparing several approaches by evaluating them against some selected architectural goals.

As Asim Hussain shows in this blog article, using iframes can also be a nice solution for migrating an existing AngularJS application to Angular.

For the approach described here, I've written a "meta router" to load different SPA clients for microservices in iframes. It takes care of the iframes' creation and of synchronizing their routes with the shell's URL. It also resizes the iframes dynamically to prevent a scroll bar within them. The library is written in a framework-agnostic way.

The router can be installed via npm:

npm install meta-spa-router --save

The source code and an example can be found in my GitHub account.

In the example I'm using VanillaJS for the shell application and Angular for the routed child apps.

This is how to set up the shell with VanillaJS:

var MetaRouter = require('meta-spa-router').MetaRouter;

var config = [
    {
        path: 'a',
        app: '/app-a/dist'
    },
    {
        path: 'b',
        app: '/app-b/dist'
    }
];

window.addEventListener('load', function() { 

    var router = new MetaRouter();
    router.config(config);
    router.init();

    document.getElementById('link-a')
            .addEventListener('click', function() { router.go('a') });

    document.getElementById('link-b')
            .addEventListener('click', function() { router.go('b') });

    document.getElementById('link-aa')
            .addEventListener('click', function() { router.go('a', 'a') });

    document.getElementById('link-ab')
            .addEventListener('click', function() { router.go('a', 'b') });        
});


And here is the HTML for the shell:

    <a id="link-a">Route to A</a> |
    <a id="link-b">Route to B</a> |
    <a id="link-aa">Jump to A within A</a> |
    <a id="link-ab">Jump to B within A</a>

<!-- placeholder for routed apps -->
<div id="outlet"></div>

The router creates the iframes as children of the element with the id outlet and allows switching between them using the method go. As you see in the example, it also allows jumping to a subroute within an application.

The routed applications use the RoutedApp class to establish a connection with the shell. This is necessary to sync the client app's router with the shell's. As I'm using Angular in my example, I'm registering it as a service. Alternatively, one could instantiate it directly when going with other frameworks.

To register this service -- which comes without Angular metadata for AOT because it's framework agnostic -- I'm creating a token in a new file app.tokens.ts:

import { RoutedApp } from 'meta-spa-router';
import { InjectionToken } from '@angular/core';

export const ROUTED_APP = new InjectionToken<RoutedApp>('ROUTED_APP');

Then I'm using it to create a service provider for the RoutedApp class:

import { RoutedApp } from 'meta-spa-router';
[...]

@NgModule({
  [...]
  providers: [{ provide: ROUTED_APP, useFactory: () => new RoutedApp() }],
  bootstrap: [AppComponent]
})
export class AppModule { }

In the AppComponent I'm getting hold of a RoutedApp instance by using dependency injection:

// app.component.ts in routed app

import { Component, Inject } from '@angular/core';
import { Router, NavigationEnd } from '@angular/router';
import { filter } from 'rxjs/operators';
import { RoutedApp } from 'meta-spa-router';
import { ROUTED_APP } from './app.tokens';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';

  constructor(
    private router: Router, 
    @Inject(ROUTED_APP) private routedApp: RoutedApp) {
    this.initRoutedApp();
  }

  initRoutedApp() {
    this.routedApp.config({ appId: 'a' });

    this.router.events.pipe(filter(e => e instanceof NavigationEnd)).subscribe((e: NavigationEnd) => {
      this.routedApp.setRoute(e.url);
    });

    this.routedApp.registerForRouteChange(url => this.router.navigateByUrl(url));
  }
}
I'm assigning an appId which is, by convention, the same as the child app's path in the shell. In addition to that, I'm also synchronizing the meta router with the child app's router.

Jürgen Gutsch: Trying BitBucket Pipelines with ASP.NET Core

BitBucket provides a continuous integration tool called Pipelines. It is based on Docker containers running on a Linux-based Docker machine. In this post I want to try BitBucket Pipelines with an ASP.NET Core application.

In the past I preferred BitBucket over GitHub, because I used Mercurial more than Git. But that changed five years ago. Since then I use GitHub for almost every new personal project that doesn't need to be private. But at the YooApps we use the entire Atlassian ALM stack, including Jira, Confluence and BitBucket. (We don't use Bamboo yet, because we also use Azure a lot and we didn't get Bamboo running on Azure.) BitBucket is a good choice if you use the other Atlassian tools anyway, because the integration with Jira and Confluence is awesome.

For a while now, Atlassian has provided Pipelines as a simple continuous integration tool directly on BitBucket. You don't need to set up Bamboo to build and test just a simple application. At the YooApps we actually use Pipelines in various projects which are not using .NET. For .NET projects we are currently using CAKE or FAKE on Jenkins, hosted on an Azure VM.

Pipelines can also be used to build and test branches and pull requests, which is great. So why shouldn't we use Pipelines for .NET Core based projects? BitBucket even provides a ready-made Pipelines configuration for .NET Core projects, using the microsoft/dotnet Docker image. So let's try Pipelines.

The project to build

As usual, I just set up a simple ASP.NET Core project and added an XUnit test project to it. In this case I use the same project as shown in the Unit testing ASP.NET Core post. I imported that project from GitHub to BitBucket. If you also want to try Pipelines, feel free to do the same, or just download my solution and commit it to your repository on BitBucket. Once the sources are in the repository, you can start setting up Pipelines.

Setup Pipelines

Setting up Pipelines is actually pretty easy. In your repository on BitBucket.com there is a menu item called Pipelines. After clicking it you'll see the setup page, where you can select a technology-specific configuration. .NET Core is not a first-class choice on BitBucket: the .NET Core configuration is hidden under "More". But it is available, which is really nice. After selecting the configuration type, you'll see the configuration in an editor inside the browser. It is a YAML file called bitbucket-pipelines.yml, which is pretty easy to read. This configuration is prepared to use the microsoft/dotnet:onbuild Docker image and already contains the most common .NET CLI commands used with ASP.NET Core projects. You just need to configure the project names for the build and test commands.

The completed configuration for my current project looks like this:

# This is a sample build configuration for .NET Core.
# Check our guides at https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: microsoft/dotnet:onbuild

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=WebApiDemo
          - export TEST_NAME=WebApiDemo.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME

If you don't have tests yet, comment out the last line by adding a #-sign in front of it.

After pressing "Commit file", this configuration file is stored in the root of your repository, which makes it available to all developers on the project.

Let's try it

After that config was saved, the build started immediately... and failed!

Why? Because that Docker image was pretty much outdated. It contains an older SDK that still uses project.json for .NET Core projects.

Changing the name of the Docker image from microsoft/dotnet:onbuild to microsoft/dotnet:sdk helps. You can change the bitbucket-pipelines.yml either in your local Git workspace or with the editor on BitBucket directly. After committing the changes, the build starts again immediately and is now green.

Even the tests pass. As expected, I get a pretty detailed output for every step configured in the "script" node of the bitbucket-pipelines.yml.

You don't need to know how to configure Docker to use Pipelines. This is awesome.

Let's try the PR build

To create a PR, I need a feature branch first. I created one locally named "feature/build-test" and pushed it to the origin. You can now see that this branch gets built by Pipelines:
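Branch builds like this use the default pipeline. If you need branch-specific steps later, bitbucket-pipelines.yml also supports a branches section; the following is only a sketch with placeholder script lines based on the project above:

```yaml
pipelines:
  default:
    - step:
        script:
          - dotnet build WebApiDemo
  branches:
    develop:
      - step:
          script:
            - dotnet build WebApiDemo
            - dotnet test WebApiDemo.Tests
```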

Now let's create the PR using the BitBucket web UI. It automatically selects my latest feature branch and the main branch, which is develop in my case:

Here we see that both branches were successfully built and tested previously. After pressing save, we see the build state in the PR overview:

This is actually not a dedicated build for that PR, but the build of the feature branch. So in this case, Pipelines doesn't really build the PR itself. (Maybe it does if the PR comes from a fork and the branch wasn't tested before; I haven't tried that yet.)

After merging the PR back to develop (in this case), we see that the merge commit was successfully built too:

We now have four builds: the failing one, one from 11 hours ago, and two from 52 minutes ago on two different branches.

The Continuous Deployment pipeline

With this, it would be safe to trigger a deployment on every successful build of the main branches. As you may know, it is super simple to deploy a web application to an Azure Web App by connecting it directly to a Git repository. Usually this is pretty dangerous if you don't build and test before you deploy the code. But in this case, we know the PRs and the branches build and test successfully.

We just need to ensure that the deployment is only triggered if the build succeeds. Does this work with Pipelines? I'm pretty curious. Let's try it.

To test this, I created a new Web App on Azure and connected it to the Git repository on BitBucket. I'll now add a failing test and commit it to the repository. What should happen is that the build starts before the code gets pushed to Azure, and the failing build should prevent the push to Azure.

I'm skeptical whether this works or not. We will see.

The Azure Web App is created and running at http://build-with-bitbucket-pipelines.azurewebsites.net/. The deployment is configured to listen on the develop branch. That means every time we push changes to that branch, the deployment to Azure starts.

I'll now create a new feature branch called "feature/failing-test" and push it to BitBucket. To keep the test simple, I don't follow the PR workflow described in the previous section: I merge the feature branch directly into develop, without a PR, and push all the changes to BitBucket. Yes, I'm a rebel... ;-)

The build starts immediately and fails as expected:

But what about the deployment? Let's have a look at the deployments on Azure. We should only see the initial successful deployment. Unfortunately, there is another successful deployment with the same commit message as the failing build on BitBucket:

This is bad. We now have an unstable application running on Azure. Unfortunately, there is no option on BitBucket to trigger the webhook only on a successful build. We can trigger the hook on a build state change, but it is not possible to define which state should trigger it.

Too bad; this doesn't seem to be the right way to configure a continuous deployment pipeline as easily as the continuous integration process. Sure, there are other ways to do that, but they are more complex.

Update 12/8/2017

There is a simple option after all to set up a deployment after a successful build: triggering the Azure webhook from inside the Pipelines script. A sample bash script for this can be found here: https://bitbucket.org/mojall/bitbucket-pipelines-deploy-to-azure/ Without the comments it looks like this:

curl -X POST "https://\$$SITE_NAME:$FTP_PASSWORD@$SITE_NAME.scm.azurewebsites.net/deploy" \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --header "Transfer-encoding: chunked" \
  --data "{\"format\":\"basic\", \"url\":\"https://$BITBUCKET_USERNAME:$BITBUCKET_PASSWORD@bitbucket.org/$BITBUCKET_USERNAME/$REPOSITORY_NAME.git\"}"

echo Finished uploading files to site $SITE_NAME.

I now need to set the environment variables in the Pipelines configuration:

Be sure to check the "Secured" checkbox for every password variable, to hide the value in this UI and in the log output of Pipelines.

And we need to add two script commands to the bitbucket-pipelines.yml:

- chmod +x ./deploy-to-azure.bash
- ./deploy-to-azure.bash

The last step is to remove the Azure webhook from the webhook configuration in BitBucket and to remove the failing test. After pushing the changes to BitBucket, the build and the first successful deployment start immediately.

I then added the failing test again to verify the failing-deployment scenario, and it worked as expected. The test fails and the subsequent commands are not executed. The webhook is never triggered and the unstable app is not deployed.

Now there is a failing build on Pipelines:

(See the commit messages)

And that failing commit is not deployed to Azure:

Continuous deployment is successfully set up.


Isn't it super easy to set up continuous integration? ~~Unfortunately we are not able to complete the deployment using this.~~ Either way, we now have a build on every branch and on every pull request. That helps a lot.


  • (+++) super easy to setup
  • (++) almost fully integrated
  • (+++) flexibility based on Docker


  • (--) runs only on Linux; I would love to see Windows containers working
  • (---) not fully integrated into web hooks. "trigger on successful build state" is missing for the hooks

I would like to have something like this on GitHub too. The usage is similar to AppVeyor, but much simpler to configure, less complex, and it just works. The reason is Docker, I think. To be fair, AppVeyor can do a lot more and can't really be compared to Pipelines. Anyway, I will compare it to AppVeyor and do the same experiment with it in one of the next posts.

Currently there is one big downside to BitBucket Pipelines: it only works with Docker images running on Linux. It is not yet possible to use it for full .NET Framework projects. This is the reason why we never used it at YooApps for .NET projects. I'm sure we need to think about doing more projects with .NET Core ;-)

David Tielke: DDC 2017 - Materials from my keynote, DevSession and workshops

As every year, the Dotnet Developer Conference took place at the Pullman Hotel in Cologne. For the first time it ran four days, from 27.11.2017 to 30.11.2017, offering two workshop days in addition to the DevSessions and the actual conference day.

I would like to take this opportunity to make the materials of my individual sessions available to all attendees.

Keynote "C# - vNow & vNext"

The conference opened on Wednesday with my keynote on the current and future state of Microsoft's programming language C#. In 55 minutes I showed the audience the history and evolution of the language, the recently added features (C# 6.0 and C# 7.0), the recent updates 7.1 and 7.2, and the upcoming language version 8.0. I then argued why, in my opinion, C# is one of the best and safest platforms for the future.


DevSession "Mehr Softwarequalität durch Tests"

One day before the actual conference, attendees could choose between four parallel DevSessions, each four hours long. My session on the topic "Mehr Softwarequalität durch Tests" (more software quality through testing) presented testing from a somewhat different angle: after a fundamental look at the topic of software quality, I showed the attendees what can be tested in software development. Besides testing architectures, teams, functionality and code, particular attention was paid to the topic of processes.


Workshop "Scrum mit dem Team Foundation Server 2018"

On Monday the conference opened with the first workshop day. There I presented the agile project management framework Scrum in combination with Team Foundation Server 2018. The focus was first on processes in general and later on the practical implementation of such a process with Scrum. Besides the basics, the introduction of Scrum was demonstrated with numerous practical examples, pointing out risks and possible problems along the way. Finally, using Team Foundation Server as an example, I showed what a tool-supported implementation of agile planning can look like.

Slides & notes

Workshop "Composite Component Architecture 2.0"

On the last day of the conference I presented the newest version of my "Composite Component Architecture" to the attendees. After a quick overview of the basics, the new aspects in the areas of logging, configuration, EventBroker, bootstrapping and the available tooling were covered. Finally, the attendees got an exclusive look at an early alpha of "CoCo.Core", the long-planned framework for a simpler and more flexible implementation of the CoCo architecture.

Since the framework ran anything but stably in the workshop, I will announce the release separately on this blog as soon as a stable first alpha is available.


I would like to thank all attendees once again for their feedback, whether in person, via Twitter, email or any other channel. The conference was once again great fun! A big thank you also goes to the team at Developer Media, who once again put together a terrific conference.

See you at DDC 2018 :)

Manfred Steyer: Automatically Updating Angular Modules with Schematics and the CLI

Table of Contents

This blog post is part of an article series.

Thanks to Hans Larsen from the Angular CLI Team for providing valuable feedback

In my last blog article, I showed how to leverage Schematics, the Angular CLI's code generator, to scaffold custom components. This article goes one step further and shows how to register generated building blocks like components, directives, pipes, or services with an existing NgModule. For this, I'll extend the example from the last article that generates a SideMenuComponent. The source code shown here can also be found in my GitHub repository.

Schematics is currently experimental and can change in the future.
Angular Labs


To register the generated SideMenuComponent we need to perform several tasks. For instance, we have to look up the file with the respective NgModule. After this, we have to insert several lines into this file:

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

// Add this line to reference the component
import { SideMenuComponent } from './side-menu/side-menu.component';

@NgModule({
  imports: [
    CommonModule
  ],

  // Add this line
  declarations: [SideMenuComponent],

  // Add this line if we want to export the component too
  exports: [SideMenuComponent]
})
export class CoreModule { }

As you've seen in the last listing, we have to create an import statement at the beginning of the file. Then we have to add the imported component to the declarations array and, if the caller requests it, to the exports array too. If those arrays don't exist, we have to create them as well.

The good news is that the Angular CLI contains existing code for such tasks. Hence, we don't have to build everything from scratch. The next section shows some of those existing utility functions.

Utility Functions provided by the Angular CLI

The Schematics collection @schematics/angular, used by the Angular CLI for generating components or services, turns out to be a real gold mine for modifying existing NgModules. For instance, you'll find functions to look up modules within @schematics/angular/utility/find-module. The following table shows two of them, which I will use in the course of this article:

Function Description
findModuleFromOptions Looks up the current module file. For this, it starts in a given folder and looks for a file with the suffix .module.ts while the suffix .routing.module.ts is not accepted. If nothing has been found in the current folder, its parent folders are searched.
buildRelativePath Builds a relative path that points from one file to another one. This function comes in handy for generating the import statement pointing from the module file to the file with the component to register.
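To illustrate what buildRelativePath computes, here is a simplified, self-contained sketch of its behavior (the real implementation in @schematics/angular covers more edge cases):

```typescript
// Simplified sketch of buildRelativePath: derive the relative import
// path that points from one file to another one.
function buildRelativePathSketch(from: string, to: string): string {
  const fromParts = from.split('/').slice(0, -1); // drop the file name
  const toParts = to.split('/');

  // strip the common leading segments
  while (fromParts.length && toParts.length && fromParts[0] === toParts[0]) {
    fromParts.shift();
    toParts.shift();
  }

  // go up once per remaining source segment, then down to the target
  const ups = fromParts.map(() => '..');
  const segments = [...ups, ...toParts];
  return ups.length ? segments.join('/') : './' + segments.join('/');
}
```

For example, pointing from /src/app/core/core.module.ts to /src/app/core/side-menu/side-menu.component yields ./side-menu/side-menu.component, which is exactly the path the generated import statement needs.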

Another file containing useful utility functions is @schematics/angular/utility/ast-utils. It helps with modifying existing TypeScript files by leveraging services provided by the TypeScript compiler. The next table shows some of its functions used here:

Function Description
addDeclarationToModule Adds a component, directive or pipe to the declarations array of an NgModule. If necessary, this array is created
addExportToModule Adds an export to the NgModule

There are also other methods that add entries to the other sections of an NgModule (addImportToModule, addProviderToModule, addBootstrapToModule).

Please note that those files are currently not part of the package's public API. Therefore, they can change in the future. To be on the safe side, Hans Larsen from the Angular CLI team suggested forking it. My fork of the DevKit repository containing those functions can be found here.

After forking, I copied the contents of the folder packages\schematics\angular\utility containing the functions in question to the folder schematics-angular-utils in my project and adjusted some import statements. For the time being, you can also copy my folder with these adjustments for your own projects. I think that sooner or later the API will stabilize and be published as a public one, so that we won't need this workaround anymore.

Creating a Rule for adding a declaration to an NgModule

After we've seen that there are handy utility functions, let's use them to build a Rule for our endeavor. For this, we use a folder utils with the following two files:

Utils for custom Rule

The file add-to-module-context.ts contains a context class holding the data for the planned modifications:

import * as ts from 'typescript';

export class AddToModuleContext {
    // source of the module file
    source: ts.SourceFile;

    // the relative path that points from  
    // the module file to the component file
    relativePath: string;

    // name of the component class
    classifiedName: string;
}

In the other file, ng-module-utils.ts, a factory function for the needed rule is created:

import { Rule, Tree, SchematicsException } from '@angular-devkit/schematics';
import { AddToModuleContext } from './add-to-module-context';
import * as ts from 'typescript';
import { dasherize, classify } from '@angular-devkit/core';

import { ModuleOptions, buildRelativePath } from '../schematics-angular-utils/find-module';
import { addDeclarationToModule, addExportToModule } from '../schematics-angular-utils/ast-utils';
import { InsertChange } from '../schematics-angular-utils/change';

const stringUtils = { dasherize, classify };

export function addDeclarationToNgModule(options: ModuleOptions, exports: boolean): Rule {
  return (host: Tree) => {
    [...]
  };
}

This function takes a ModuleOptions instance that describes the NgModule in question. It can be deduced from the options object containing the command line arguments the caller passes to the CLI.

It also takes a flag exports that indicates whether the declared component should be exported too. The returned Rule is just a function that gets a Tree object representing the part of the file system it modifies. To implement this Rule, I looked up the implementation of similar rules within the CLI's Schematics in @schematics/angular and "borrowed" the patterns found there. Especially the Rule triggered by ng generate component was very helpful for this.
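The idea that a Rule is just a function transforming a Tree can be modeled in a few lines. This is a self-contained sketch; the real Rule and Tree types come from @angular-devkit/schematics and do much more (staging, transactions, error handling), and a plain Map stands in for the virtual file tree here:

```typescript
// Minimal model of the Rule pattern: a rule factory takes options
// and returns a function that transforms the (virtual) file tree.
type FileTree = Map<string, string>;
type SimpleRule = (tree: FileTree) => FileTree;

function addFile(path: string, content: string): SimpleRule {
  return (tree: FileTree) => {
    tree.set(path, content); // modify the tree ...
    return tree;             // ... and hand it back to the pipeline
  };
}

const tree: FileTree = new Map([['/src/app/app.module.ts', '...']]);
const modified = addFile(
  '/src/app/side-menu.component.ts',
  'export class SideMenuComponent {}')(tree);
```

Chaining rules, as the CLI's chain helper does, then simply means composing such functions.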

Before we discuss how this function is implemented, let's have a look at some helper functions I've put in the same file. The first one collects the context information we've talked about before:

function createAddToModuleContext(host: Tree, options: ModuleOptions): AddToModuleContext {

  const result = new AddToModuleContext();

  if (!options.module) {
    throw new SchematicsException(`Module not found.`);
  }

  // Reading the module file
  const text = host.read(options.module);

  if (text === null) {
    throw new SchematicsException(`File ${options.module} does not exist.`);
  }

  const sourceText = text.toString('utf-8');
  result.source = ts.createSourceFile(options.module, sourceText, ts.ScriptTarget.Latest, true);

  const componentPath = `/${options.sourceDir}/${options.path}/`
                      + stringUtils.dasherize(options.name) + '/'
                      + stringUtils.dasherize(options.name)
                      + '.component';

  result.relativePath = buildRelativePath(options.module, componentPath);

  result.classifiedName = stringUtils.classify(`${options.name}Component`);

  return result;
}


The second helper function is addDeclaration. It delegates to addDeclarationToModule, located within the package @schematics/angular, to add the component to the module's declarations array:

function addDeclaration(host: Tree, options: ModuleOptions) {

  const context = createAddToModuleContext(host, options);
  const modulePath = options.module || '';

  const declarationChanges = addDeclarationToModule(
    context.source,
    modulePath,
    context.classifiedName,
    context.relativePath);

  const declarationRecorder = host.beginUpdate(modulePath);
  for (const change of declarationChanges) {
    if (change instanceof InsertChange) {
      declarationRecorder.insertLeft(change.pos, change.toAdd);
    }
  }
  host.commitUpdate(declarationRecorder);
}

The addDeclarationToModule function takes the retrieved context information and the modulePath from the passed ModuleOptions. Instead of directly updating the module file, it returns an array with the necessary modifications. These are iterated and applied to the module file within a transaction, started with beginUpdate and completed with commitUpdate.
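The following self-contained sketch shows why recording InsertChange objects and applying them afterwards works. It is an assumption-laden model: the real UpdateRecorder operates on buffers and batches the changes, while here a plain string stands in for the file:

```typescript
// Minimal model of an InsertChange and of applying a batch of them.
interface InsertChangeLike { pos: number; toAdd: string; }

function applyInserts(source: string, changes: InsertChangeLike[]): string {
  // Apply changes from the highest position down, so that earlier
  // positions are not shifted by insertions made before them.
  return [...changes]
    .sort((a, b) => b.pos - a.pos)
    .reduce((text, c) => text.slice(0, c.pos) + c.toAdd + text.slice(c.pos), source);
}

const src = 'declarations: []';
const out = applyInserts(src, [{ pos: 15, toAdd: 'SideMenuComponent' }]);
// out is now 'declarations: [SideMenuComponent]'
```

Separating "compute the changes" from "apply the changes" is what makes the transaction pattern with beginUpdate/commitUpdate possible.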

The next helper function is addExport. It adds the component to the module's exports array and works exactly like addDeclaration:

function addExport(host: Tree, options: ModuleOptions) {
  const context = createAddToModuleContext(host, options);
  const modulePath = options.module || '';

  const exportChanges = addExportToModule(
    context.source,
    modulePath,
    context.classifiedName,
    context.relativePath);

  const exportRecorder = host.beginUpdate(modulePath);

  for (const change of exportChanges) {
    if (change instanceof InsertChange) {
      exportRecorder.insertLeft(change.pos, change.toAdd);
    }
  }
  host.commitUpdate(exportRecorder);
}

Now that we've looked at these helper functions, let's finish the implementation of our Rule:

export function addDeclarationToNgModule(options: ModuleOptions, exports: boolean): Rule {
  return (host: Tree) => {
    addDeclaration(host, options);
    if (exports) {
      addExport(host, options);
    }
    return host;
  };
}

As you've seen, it just delegates to addDeclaration and addExport. After this, it returns the modified file tree represented by the variable host.

Extending the used Options Class and its JSON schema

Before we put our new Rule in place, we have to extend the class MenuOptions, which describes the passed (command line) arguments. As usual in Schematics, it's defined in the file schema.ts. For our purpose, it gets two new properties:

export interface MenuOptions {
    name: string;
    appRoot: string;
    path: string;
    sourceDir: string;
    menuService: boolean;

    // New Properties:
    module: string;
    export: boolean;
}

The property module holds the path of the module file to modify, and export defines whether the generated component should be exported too.

After this, we have to declare these additional properties in the file schema.json:

{
    "$schema": "http://json-schema.org/schema",
    "id": "SchemanticsForMenu",
    "title": "Menu Schema",
    "type": "object",
    "properties": {
      "module": {
        "type": "string",
        "description": "The declaring module.",
        "alias": "m"
      },
      "export": {
        "type": "boolean",
        "default": false,
        "description": "Export component from module?"
      }
    }
}

As mentioned in the last blog article, we could also generate the file schema.ts from the information provided by schema.json.

Calling the Rule

Now that we've created our rule, let's put it in place. For this, we have to call it within the Rule function in index.ts:

export default function (options: MenuOptions): Rule {

    return (host: Tree, context: SchematicContext) => {

      options.path = options.path ? normalize(options.path) : options.path;

      // Infer module path, if not passed:
      options.module = options.module || findModuleFromOptions(host, options) || '';

      const rule = chain([
        [...],

        // Call new rule
        addDeclarationToNgModule(options, options.export)
      ]);

      return rule(host, context);
    };
}


As the passed MenuOptions object is structurally compatible with the needed ModuleOptions, we can directly pass it to addDeclarationToNgModule. This is the way the CLI currently deals with option objects.
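This structural compatibility is a core TypeScript feature and can be demonstrated in isolation. The interface and function names below are illustrative only, not part of the CLI's API:

```typescript
// Structural typing: an object with at least the members of the target
// type is accepted wherever that type is expected.
interface ModuleOptionsLike { name: string; module?: string; }
interface MenuOptionsLike { name: string; module?: string; export: boolean; }

function describeModule(o: ModuleOptionsLike): string {
  return `${o.name} -> ${o.module}`;
}

const menuOptions: MenuOptionsLike = {
  name: 'menu',
  module: 'core.module.ts',
  export: true
};

// Compiles without a cast: MenuOptionsLike has every member that
// ModuleOptionsLike requires, plus the extra 'export' flag.
const description = describeModule(menuOptions);
```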

In addition to that, we infer the module path at the beginning using findModuleFromOptions.

Testing the extended Schematic

To try out the modified Schematic, compile it and copy everything to the node_modules folder of an example application. As in the previous blog article, I decided to copy it to node_modules/nav. Please make sure to exclude the collection's node_modules folder, so that there is no folder node_modules/nav/node_modules.

After this, switch to the example application's root, generate a module core and navigate to its folder:

ng g module core
cd src\app\core

Now call the custom Schematic:

Calling the Schematic, which generates the component and registers it with the module

This not only generates the SideMenuComponent but also registers it with the CoreModule:

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { SideMenuComponent } from './side-menu/side-menu.component';

@NgModule({
  imports: [
    CommonModule
  ],
  declarations: [SideMenuComponent],
  exports: [SideMenuComponent]
})
export class CoreModule { }

Code-Inside Blog: Signing with SignTool.exe - don't forget the timestamp!

If you don't touch signtool.exe at all or have nothing to do with code signing, you can just skip this blog post, because this is more or less a simple "Today I learned I made a mistake" post.


We use Authenticode code signing for our software just to prove that the installer is from us and safe to use; otherwise you might see a big warning from Windows that the application is from an "unknown publisher":


To avoid this, you need a code signing certificate and need to sign your program (e.g. the installer and the .exe).

The problem…

We have been doing this code signing since the first version of our application. Last year we needed to buy a new certificate because the first code signing certificate was about to expire. Sadly, after the first certificate expired, we got a call from a customer who had recently tried to install our software; the installer was signed with the "old" certificate. The result was the big "Warning" screen from above.

I checked the file, compared it to other installers (with expired certificates), and noticed that our signature didn't have a timestamp:


The solution

I stumbled upon this great blog post about Authenticode code signing, and the timestamp was indeed important:

When signing your code, you have the opportunity to timestamp your code; you should definitely do this. Time-stamping adds a cryptographically-verifiable timestamp to your signature, proving when the code was signed. If you do not timestamp your code, the signature will be treated as invalid upon the expiration of your digital certificate. Since it would probably be cumbersome to re-sign every package you’ve shipped when your certificate expires, you should take advantage of time-stamping. A signed, time-stamped package remains valid indefinitely, so long as the timestamp marks the package as having been signed during the validity period of the certificate.

Time-stamping itself is pretty easy; only one parameter was missing all this time. Now we invoke signtool.exe like this and get a digital signature with a timestamp:

signtool.exe sign /tr http://timestamp.digicert.com /sm /n "Subject..." /d "Description..." file.msi


  • Our code signing cert is from Digicert and they provide the timestamp URL.
  • SignTool.exe is part of the Windows SDK and currently resides in the ClickOnce folder (e.g. C:\Program Files (x86)\Microsoft SDKs\ClickOnce\SignTool)

Hope this helps.
