Golo Roden: Introduction to React, Part 2: Setting up React

How do you install React and build a first application? These topics are covered in the second episode of the React video course by the native web, which is available for free as of now.

Holger Schwichtenberg: PowerShell 7: Improved error output

Since PowerShell 7.0, "ConciseView" is the new, more readable default for error output.
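As a quick illustration beyond the teaser: the error view is controlled by the `$ErrorView` preference variable, so it can be inspected and switched at any time (a small sketch, not from the original post):

```powershell
# ConciseView is the default in PowerShell 7+; NormalView restores the old multi-line output
$ErrorView                    # show the current setting
$ErrorView = 'NormalView'     # switch back to the pre-7.0 style
$ErrorView = 'ConciseView'    # and back to the concise default
```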

Golo Roden: My reaction to "How do you stay up to date?"

In the current episode of "Götz & Golo", the question was how to stay up to date. Golo's reaction to Götz's blog post, in this video.

Stefan Lieser: Slides for the webinar "Clean Code Development with Flow Design"

On April 30th I gave my webinar "Clean Code Development mit Flow Design" at GFU. It revolved around the question of how to get from the requirements to the code with the help of Flow Design. You can find the slides for this webinar below. Unfortunately, the webinar was not recorded, but there will be more webinars ... Read more

The post "Folien zum Webinar Clean Code Development mit Flow Design" appeared first on Refactoring Legacy Code.

Golo Roden: Introduction to React, Part 1: Overview and introduction

The new React video course by the native web starts today. The course is completely free and takes developers from the first line of code all the way to writing complex React applications.

Holger Schwichtenberg: Developer update 2020 on .NET 5.0, C# 9.0 and Blazor on May 26

This year's software developer info day for .NET and web developers takes place on May 26 as an online event via web conferencing software and chat.

Golo Roden: How to filter relevant news out of the background noise

For developers it is important to stay informed about new technologies. But where does this information come from? If you follow too few sources, you risk missing important news. If you follow too many, important topics drown in the noise. How do you strike the right balance?

Holger Schwichtenberg: PowerShell 7: Generating multiple random numbers

Since PowerShell 7.0, Get-Random accepts -Count to request more than one random number. This is a quick way to generate lottery numbers or a random password.
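A small sketch of what that can look like (note the difference: -Minimum/-Maximum may repeat values, while picking from -InputObject does not):

```powershell
# Six random numbers between 1 and 49 (PowerShell 7+; values may repeat)
Get-Random -Minimum 1 -Maximum 50 -Count 6

# Six unique "lottery" numbers: pick from the input range without repetition
Get-Random -InputObject (1..49) -Count 6

# A simple 12-character random password from an unambiguous character set
-join (Get-Random -InputObject ([char[]]'abcdefghjkmnpqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ23456789') -Count 12)
```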

Code-Inside Blog: Blazor for Office Add-ins: First look

Last week I did some research and tried to build a pretty basic Office Add-in (using the “new” web-based add-in model) with Blazor.

Side note: Last year I blogged about how to build Office Add-ins with ASP.NET Core.

Why Blazor?

My daily work lives in C# and .NET land, so it would be great to use Blazor for Office Add-ins, right? An Office Add-in is just a web application with a “communication tunnel” to the hosting Office application - not very different from the real web.

What (might) work: Server-side Blazor

My first try was with a “standard” server-side Blazor application. I just pointed the dummy Office Add-in manifest file at the site, and it (obviously) worked:

I assume that server-side Blazor is not very demanding for the client, so it would probably work fine.

After my initial tweet, Manuel Sidler jumped in and built a simple demo project, which even invokes the Office.js APIs from C#!

Check out his repository on GitHub for further information.

What won’t work: WebAssembly (unless I’m missing something)

Server-side Blazor is cool, but has some drawbacks (e.g. a server connection is needed and scaling is not that easy) - so what about WebAssembly?

Well… Blazor WebAssembly is still in preview, and I tried the same setup that worked for server-side Blazor.


The desktop PowerPoint (I tried to build a PowerPoint add-in) keeps crashing after I add the add-in. On Office Online it seems to work, but not for very long:

Possible reasons:

The default Blazor WebAssembly template installs a service worker. I removed that part, but I’m not 100% sure I did it correctly. Service workers are currently not supported by the Office Add-in Edge WebView. However, my experiment with Office Online and the Blazor add-in failed as well, so I don’t think service workers are the real problem.

I’m not really sure why it’s not working, but it’s quite early days for Blazor WebAssembly, so… time will tell.

What does the Office Dev Team think of Blazor?

So far I have found just one comment regarding Blazor, on this blog post:

Will Blazor be supported for Office Add-ins?

No, it will be a React Office.js add-in. We don’t have any plans to support Blazor yet. For that, please put a note on our UserVoice channel: https://officespdev.uservoice.com. There are several UserVoice items already on this, so know that we are listening to your feedback and prioritizing based on customer requests. The more requests we get for particular features, the more we will consider moving forward with developing it. 

Well… vote for it! ;)

Golo Roden: Week in review: GitHub, Apple, SpaceX & co.

The past week brought some news, among others regarding GitHub, Apple and SpaceX. Golo comments on the most important items in his review.

Holger Schwichtenberg: PowerShell 7: Parallel execution with ForEach-Object -Parallel

Since PowerShell 7.0, the -Parallel parameter lets you run loop iterations on separate threads (via multithreading).
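A minimal sketch of the syntax (the throttle limit of 5 is PowerShell's default and is shown here only for illustration):

```powershell
# Each iteration runs in its own runspace/thread; -ThrottleLimit caps the concurrency (PowerShell 7+)
1..10 | ForEach-Object -Parallel {
    "Item $_ on thread $([System.Threading.Thread]::CurrentThread.ManagedThreadId)"
    Start-Sleep -Milliseconds 200
} -ThrottleLimit 5
```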

Golo Roden: My reaction to "Pillars of a good working environment"

In the previous episode of "Götz & Golo", the question was what the pillars of a good working environment are. Golo's reaction to Götz's blog post, in this video.

Golo Roden: Pillars of a good working environment

Probably everyone wants a good working environment, but what actually characterizes one? And what are the goals for improvement?

Holger Schwichtenberg: PowerShell 7.0: Feature scope

In terms of its command set, PowerShell 7.0 has moved much closer to PowerShell 5.1. However, 95 commands are still missing, and on Linux and macOS only a fraction of the functionality is available.

Code-Inside Blog: Escape environment variables in MSIEXEC parameters


Customers can install our product on Windows with a standard MSI package. To automate the installation administrators can use MSIEXEC and MSI parameters to configure our client.

A simple installation can look like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/OneOffixx/"

The “CACHEFOLDER” parameter is written to the .exe.config file; our program reads it and stores offline content at the given location.

So far, so good.

For Terminal Server installations or “multi-user” scenarios this will not work, because each cache is bound to a local account. To solve this we could just insert the “%username%” environment variable, right?

Well… no… at least not with the obvious call, because this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%username%/OneOffixx/"

will result in a call like this, because the variable is already expanded under the account running the setup:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/admin/OneOffixx/"


I needed a few hours and some Google-fu to find the answer.

To “escape” those variables we need to invoke it like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%%username%%/OneOffixx/"

Be aware: this stuff is a mess and depends on your scenario. Check out this Stackoverflow answer to learn more. The double percent sign did the trick for us, so I guess it is “ok-ish”.

Hope this helps!

Holger Schwichtenberg: PowerShell 7.0: Technical foundation and installation

With this post, the Dotnet-Doktor starts a blog series on PowerShell 7.0, the new version of the .NET-based shell for Windows, Linux and macOS.

Golo Roden: What makes code readable?

While writing code, developers primarily make sure that it works. But readability is what determines how maintainable code is.

Code-Inside Blog: TLS/SSL problem: 'Could not create SSL/TLS secure channel'


Last week I had some fun debugging a weird bug. Within our application, one module makes HTTP requests to a 3rd-party service, and depending on the running Windows version this call either worked or failed with:

'Could not create SSL/TLS secure channel'

I knew that older TLS/SSL versions are deprecated and that many services refuse those protocols, but we still didn’t fully understand the issue:

  • The HTTPS call worked without any issues on a Windows 10 1903 machine
  • The HTTPS call didn’t work on a Windows 7 SP1 (yeah… customers…) and a Windows 10 1803 machine.

Our software uses .NET Framework 4.7.2, so I thought that should be enough.

Root cause

Neither system (or at least the two customer environments they represent) had TLS 1.2 enabled.

On Windows 7 (and, I think, on the older Windows 10 releases) there are multiple ways to fix this. One way is to set a registry key to enable the newer protocols.
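For reference, a sketch of the registry entries commonly used for this (enable TLS 1.2 in Schannel and let .NET Framework applications use strong crypto; verify against the Microsoft documentation before applying this to your own systems):

```reg
Windows Registry Editor Version 5.00

; Enable TLS 1.2 for client connections in Schannel (Windows 7 SP1 / older Windows 10 builds)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

; Let .NET Framework 4.x applications default to the OS-supported TLS versions
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001
```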

Our setup was a bit more complex than that, and I needed about a day to figure everything out. A big mystery was that some services were reachable even from the old systems, until I realized that some sites even accept a plain HTTP connection without any TLS.

Well… to summarize: keep your systems up to date, and if you have any issues with TLS/SSL, make sure your system actually supports it.

Hope this helps!

Christina Hirth: You Don’t Need To Work In Silos If You Don’t Want To

… but if you do, then you should stop reading here. That’s fine with me.

How many of you have built features in backend services that were never used by any application? Or implemented requests the wrong way because nobody cared to give you the whole story, the whole problem the feature was supposed to solve? Or felt demotivated by the lack of feedback on whether what you do makes an impact or is wasted energy and time? How many of you are still working under these unsatisfying circumstances? This article is for you.

I did all of this. One case I will never forget: I was asked to implement a feature request that boiled down to returning some object property as a string. The property contained a URL, but the feature request didn’t say “I need to know how to navigate to X or Y” but “please include the URL X in the result”.

It turned out that two other teams used this “string” to build navigation on top of it or to include it in e-mails, without ever telling me. Why should they? I was done with the feature; it was their turn. Both of them validated this string, built URLs from it (using information exclusively owned by the backend service…), etc.

Let me be more explicit:

Failure No. 1: If I had changed some internals in the backend service, I could have broken the UI code without knowing. My colleagues relied on things they had no chance to control. We were dependent on each other without being able to see it.

Failure No. 2: the company paid 3 different developers to write the same validation functions, and the customer flow had to pass the same validations 3 times instead of only once. A totally wrong decision from an economic point of view.

I think that was the moment I decided to change the way we deliver features, the way we work together. This was 6 or 7 years ago, and since then I have followed the same method to reorganize not only the teams but also the source code. Because one thing is certain: changing one without the other only leads to bigger pains and even more frustration.

Step 1. Visit the “other side” of that wall and learn what they are doing and how they are doing it. You will spot bottlenecks and wasted time and energy in your value stream (the road a feature travels from idea to customer).

Step 2. Get buy-in from the next level in your hierarchy: in most situations (it was like this in both cases I experienced) you are not the first one to notice these problems, but you could be the first one to offer a solution. Grab this chance, don’t hesitate!

Step 3. Remove the wall between the silos: find a good time to make your move, after the biggest project has ended or before the next one starts. Don’t wait too long; there will always be unfinished features.

Step 4. This depends on how many team members we are talking about. In both cases we were around 15 people, and nobody wants stand-ups or even meetings with 15 people! You become even slower and even less capable of making decisions. But this step is important for a few things:

  • both “parties” should learn and understand what the others do, how the parts are connected, and what language, concepts and design are used to build them
  • all members should understand and accept that it is important to split up into teams – and this is always hard because it means “we have to change”! Developers are – against all expectations – very reluctant to change. Even more reluctant when they realize that they won’t be working with their buddies anymore, but with people they hardly know and do not yet really trust.
  • you and/or your boss, your colleagues, your buddy in this change must start to discover how the domain is shaped and how the teams can be split up – because this will be the next step.

Up to this point you haven’t improved the developer experience; it will rather get worse. What you have improved is the life of the product manager or CTO or whoever brings the requests to the teams: instead of explaining the two parts of a feature (cut in the “middle” between backend and frontend) to two teams, he or she must explain it only once. At the same time, the delivery lead time (the first key metric for measuring team performance) becomes shorter, because all the ping-pong between backend and frontend can be eliminated before feature development starts.

After you have all spent a longer or shorter time together, it is time to take the next step: align the organization to the business.

Designing Autonomous Services & Teams Together – Nick Tune – KanDDDinsky 2017

The most important part is to find the natural boundaries of the domain and create business teams who OWN these (sub)domains.

I did this 3 times in all kinds of environments: brownfield monolith or greenfield new business, it doesn’t matter. Having a monolith as your cash cow doesn’t make this change easy, of course, but it can be done, with discipline and a good plan for how to take over control. (This topic is much too complex to be covered in this article.)

The last thing that must be said is when NOT to start this transformation:

  • If you can’t find anyone to support you. In this case, either the problem isn’t big enough to be felt by the others, or you are in the wrong company and should perhaps start thinking about transforming yourself instead (and leaving).
  • If you, your fellow supporters and/or your boss aren’t patient people. Change is hard and should be accompanied carefully and patiently – so that you don’t have to repeat it later after even greater frustration and chaos (been there, seen that :-/ ).
  • If you expect that this is all. Because it isn’t: every change toward more transparency – and that is what happens when you break up silos and let others look at the existing solutions – will make issues visible. A few of these issues will be technical (like CI/CD, code coupling, infrastructure coupling, etc.). But the hard problems will be missing communication skills and missing trust. Nothing that cannot be solved – but it will take time, that’s for sure.

If you reach this point, you can start to form an autonomous team: one which not only decides what to do, but is also in charge of doing it. Working in an environment created by you and your team allows all of you to discover and live up to your creativity, to make mistakes and to learn from them.

This ownership and responsibility make the difference between somebody hired to type lines of code and somebody solving problems.

What do you think? Could you start this change in your company? What would you need?

Now you know about my experience. I would be really happy to hear about yours – here or on Twitter.

One last question: what would you like to read more about: how to find the right boundaries, or how your team can become a REALLY autonomous team – and how autonomous it can actually be?

Holger Schwichtenberg: Migration script for moving from .NET Framework to .NET Core

The Dotnet-Doktor offers a script-based migration tool for moving to .NET Core.

Golo Roden: Cheaper software through fewer tests

Software development is considered an expensive discipline. Not only outsiders are often surprised by how much money professional software development requires. Code, tests, documentation and integration all have to be paid for. So where can you cut costs?

Holger Schwichtenberg: The state of the .NET family at the beginning of 2020

The current state of .NET Framework, .NET Core and Mono in a single diagram.

Code-Inside Blog: Accessibility Insights: Spot accessibilities issues easily for Web Apps and Windows Apps


Accessibility is a huge and important topic nowadays. Keep in mind that in some sectors (e.g. government, public services) accessibility is required by law (in Europe: the European standard EN 301 549).

If you want to learn more about accessibility in general this might be handy: MDN Web Docs: What is accessibility?

Tooling support

In my day-to-day job for OneOffixx I was looking for a good tool to spot accessibility issues in our Windows and web apps. I knew there must be good tools for web development, but I was not sure about Windows app support.

Accessibility itself has many aspects, but these were some non-obvious key aspects in our application that we needed to address:

  • Good contrast: This one is easy to understand, but sometimes colors or hints in the software didn’t meet the required contrast ratios. High-contrast modes are even harder.
  • Keyboard navigation: This one is also easy to understand, but can be really hard. Some elements are nice to look at, but hard to focus with the keyboard alone.
  • Screen readers: Once your application can be navigated with the keyboard, you can check out screen reader support.

Accessibility Insights

Then I found this app from Microsoft: Accessibility Insights


The tool scans active applications for accessibility issues. Side note: the UX is a bit strange, but OK - you get used to it.

Live inspect:

The starting point is to select a window or a visible element on the screen and Accessibility Insights will highlight it:


Then you can click “Test”, which gives you detailed test results:


(I’m not 100% sure every reported error is really problematic, because a lot of Microsoft’s very own applications have many issues here.)

Tab Stops:

As already mentioned: keyboard navigation is a key aspect. The tool has a nice way to visualize “Tab” navigation and may help you better understand how your app behaves with a keyboard:



The third nice helper in Accessibility Insights is the contrast checker. It highlights contrast issues and has an easy-to-use color picker integrated.


Behind the scenes this tool uses the Windows Automation API / Windows UI Automation API.

Accessibility Insights for Chrome

Accessibility Insights can also be used in Chrome (or Edge) to check web apps. The extension is similar to its Windows counterpart, but has a much better “assessment” story:





This tool was a real time saver. The UX might not be the best on Windows, but it gives you some good hints. After we discovered the app for our Windows application, we used the Chrome version for our web application as well.

If you use or have used other tools in the past, please let me know. I’m pretty sure there are some good apps out there that help build better applications.

Hope this helps!

Holger Schwichtenberg: The tax authorities and their poor Elster software

The migration from "ElsterFormular" to the Elster web portal is riddled with annoying bugs and necessary calls to the hotline.

Norbert Eder: Tinkering with the kids: programming NFC tags

My son is interested in technology, computers and everything that goes with them. Of course he likes to play, but slowly he is thirsting for more. Small projects are a quick way to introduce technology, teach it and explore interests.

NFC tags were a big hit. They can be had for little money, yet they allow for quite a few nice projects.

A simple NFC tag

Different kinds of information can be stored on such an NFC chip. In the simplest case this is a link, a Wi-Fi or Bluetooth connection, an e-mail address or phone number, a location, or an instruction to send an SMS or launch an app. As soon as an NFC-capable phone touches the NFC tag, that action is executed.

NXP TagWriter reads and writes NFC tags

To read and write NFC tags, all you need is a smartphone app. There is a wide variety of apps for this; one of the simplest is NXP TagWriter (Android, Apple).

Reading an NFC tag

Beyond these standard functions there are other apps (e.g. NFC Tools) that support additional features. With them it is possible to set conditions and, for example, configure the following:

  • Put the smartphone on silent
  • Enable airplane mode
  • On Sunday through Thursday, set an alarm for 6:00 a.m. the next day

For many people, that makes a good NFC tag for the nightstand.

There are many more possibilities, which above all invite experimentation. My boy had lots of ideas and implemented some of them right away. Fun was guaranteed, and he learned a lot, too.

Note: NFC tags come in different sizes (storage capacity), with and without password protection, as self-adhesive stickers or as key fobs.

Have fun experimenting. The kids will really enjoy it.

The post "Basteln mit dem Nachwuchs: NFC Tags programmieren" appeared first on Norbert Eder.

Jürgen Gutsch: Using the .editorconfig in VS2019 and VSCode

In the backend developer team at the YOO we are currently discussing coding style guidelines and ways to enforce them. Since we are developers with different mindsets and backgrounds, we need a way to enforce the rules that works across different editors.

BTW: C# developers often come from other languages and technologies before they start working with this awesome language. Universities mostly teach Java, or developers worked on the front end in the past, or started with PHP. .NET developers often start with VB.NET and switch to C# later. Me as well: I also started as a front-end developer with HTML4, CSS2 and JavaScript, used VBScript and VB6 on the server side in 2001, later used VB.NET on the server, and switched to C# in 2007.

In our company we use ASP.NET Core more and more. This also means we are increasingly free to choose the editor we want to use, and the platform we want to work on. Some of us already use and prefer VSCode for ASP.NET Core projects. Maybe we'll have a colleague in the future who prefers VSCode on Linux, or VS on a Mac. This also makes our development environments diverse.

Back when we used only Visual Studio, StyleCop was the tool for enforcing coding style guidelines. For a couple of years now, there has been a new mechanism that works in almost every editor out there.

The .editorconfig is a text file that overrides the settings of the editor of your choice.

Almost every code editor has settings to style the code the way you or your team like. If the editor supports the .editorconfig, you can override these settings with a simple text file that is usually checked in with your source code and thus available to every developer working on those sources.

Visual Studio 2019 supports the .editorconfig by default, VS for Mac also supports it and VSCode supports it with a few special settings.

The only downside of the .editorconfig I can see

Since the .editorconfig is a settings file that overrides the settings of the code editor, only those settings the editor supports can take effect. So it may be that not all settings work in all code editors.

But there is a workaround, at least on the NodeJS side and on the .NET side. Both technologies support the .editorconfig on the code-analysis side instead of the editor side, which means NodeJS or the .NET compiler checks the code and enforces the rules instead of the editor. The editor only displays the errors and helps the author fix them.
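To make this concrete, a minimal example of what such a file can look like (the concrete rules below are just placeholders for whatever your team decides on):

```ini
# top-most EditorConfig file
root = true

[*]
indent_style = space
indent_size = 4
insert_final_newline = true

[*.cs]
# a Roslyn-checked C# style rule: prefer 'var' when the type is apparent
csharp_style_var_when_type_is_apparent = true:suggestion
```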

As far as I understand it, on the .NET side it is VS2019 on the one hand and OmniSharp on the other. OmniSharp is a project that supports .NET development in many code editors, including VSCode. Even though VSCode carries "Visual Studio" in its name, it doesn't support .NET and C# natively. It is the OmniSharp extension that enables .NET and brings the Roslyn compiler to the editor.

"CROSS PLATFORM .NET DEVELOPMENT! OmniSharp is a family of Open Source projects, each with one goal: To enable a great .NET experience in YOUR editor of choice" http://www.omnisharp.net/

So in VSCode the .editorconfig is supported via OmniSharp. This means the support for the .editorconfig may differ between VS2019 and VSCode.

Enable the .editorconfig in VSCode

As I wrote, the .editorconfig is enabled by default in VS2019; there is nothing to do. If VS2019 finds an .editorconfig, it uses it immediately and checks your code on every change; it will also tell you about it and propose adding it to a solution folder to make it more easily accessible in the editor.

In VSCode you need to install an extension called EditorConfig. This alone doesn't enable the .editorconfig for C#, even though it tells you about it, because OmniSharp handles C# separately. However, the extension does help you create and edit your .editorconfig.

To actually enable the support of the .editorconfig in VSCode you need to change two Omnisharp settings in VSCode:

Open the settings in VSCode and search for Omnisharp. Then enable "Enable Editor Config Support" and "Enable Roslyn Analyzers".
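In the settings.json, those two switches look roughly like this (setting names as used by the OmniSharp-based C# extension at the time of writing):

```json
{
  "omnisharp.enableEditorConfigSupport": true,
  "omnisharp.enableRoslynAnalyzers": true
}
```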

After changing those settings, you need to restart VSCode so the OmniSharp server in the background restarts.

That's it!


Now the .editorconfig works in VSCode almost the same way as in VS2019. And it works great. I tried it by opening the same project in VSCode and in VS2019 and changing some settings in the .editorconfig. The changes took effect immediately in both editors, and both helped me adjust the code to match the code styles.

We at the YOO still need to discuss some coding styles, but for now we use the recommended styles and will change the things we discuss as soon as we reach a decision.

Have you ever discussed coding styles in a team? If so, you know the debates: whether to enforce var over the explicit type, whether to use simple usings or not, whether to always use curly braces with if statements... This can be annoying, but it is really important to reach a common understanding, and it is important that everybody agrees on it.

Golo Roden: Choosing modules for JavaScript and Node.js

Choosing npm modules is essentially a matter of experience, but a module's adoption and the activity of its authors can serve as indicators of solid, future-proof modules.

Uli Armbruster: Using your own domain as a Gmail alias

In this step-by-step guide I explain how to define additional sender addresses for your Gmail address. This is useful, for example, if you run your own domain (as I do with http://www.uliarmbruster.de) and want to receive and send e-mail under that domain with your Gmail account.

I use this, for example, to create e-mail addresses for my family and redirect all of them to one central Gmail account. Among other things, that is handy when you manage things like mobile, internet and electricity contracts for several people.

Step 1: Allow less secure apps

If not already enabled, you have to turn on "Allow less secure apps" at this address.

Step 2: Enable two-factor authentication

Simply follow this link and enable it.

Step 3: Create an app password

Via this link, proceed as follows:


Under "Select app", choose "Other (custom name)"


Then enter a name for it, e.g. the external e-mail address you are adding.


Step 4: Configure Gmail

Now go to your Gmail account and perform the following steps:

  • Click the gear icon in Gmail
  • Select Settings
  • Select the Accounts & Import tab

E-Mail-Settings 1

Under "Send mail as", add another address. The following dialog then appears. The name you enter there is the one recipients will see as the alias. As the e-mail address, select the external address you want to add.

E-Mail-Settings 2

In the next step, enter your Gmail address (i.e. the one you currently use) as well as the app password generated in step 3. The SMTP server and port can be taken from the screenshot.

E-Mail-Settings 2-3

In the last step you have to enter the code that was sent to you. The e-mail you should have received looks like this:

You have requested that alias@euredomain.com be added to your
Gmail account.
Confirmation code: 694072788

Before you can send messages from alias@euredomain.com via your
Gmail account (eure-gmail-adresse@gmail.com), please click the
following link to confirm your request:

E-Mail-Settings 3


That should be it.

Christian Dennig [MS]: VS Code Git integration with ssh stops working

The Problem

I recently updated the ssh keys for my GitHub account on my Mac, and after adding the public key to GitHub everything worked as expected from the command line. When I did a…

$ git pull

…I was asked for the passphrase of my private key and the pull was executed. Great.

$ git pull
Enter passphrase for key '/Users/christiandennig/.ssh/id_rsa':
Already up to date.

But when I did the same from Visual Studio Code, I got the following error:

As you can see in the git logs, it says “Permission denied (publickey)”. This is odd, because VS Code uses the same git executable as the command line (see the log output).

It seems that VS Code isn’t able to prompt for the passphrase while accessing the git repo?!

ssh-agent FTW

The solution is simple: just use ssh-agent on your machine to enter the passphrase for your ssh private key once; all subsequent calls that need your ssh key will use the saved passphrase.

Add your key to ssh-agent (storing the passphrase in the macOS keychain!):

$ ssh-add -K ~/.ssh/id_rsa
Enter passphrase for /Users/christiandennig/.ssh/id_rsa:
Identity added: /Users/christiandennig/.ssh/id_rsa (XXXX@microsoft.com)

The result in Keychain will look like this:

When you now open Visual Studio Code and synchronize your git repository, the calls to GitHub will use the credentials saved via ssh-agent, and all commands will succeed.


( BTW: Happy New Year 🙂 )

Code-Inside Blog: T-SQL Pagination

The problem

This is pretty trivial: let’s say you have a blog with 1000 posts in your database, but you only want to show 10 entries “per page”. You need a way to slice this dataset into smaller pieces.

The solution

In theory you could load everything from the database and filter the results “in memory”, but that would be quite stupid for many reasons (e.g. you load much more data than you need, and the computing resources could serve other requests instead).

If you use plain T-SQL (on Microsoft SQL Server 2012 or higher) you can express a paging query like this:
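A sketch of such an OFFSET/FETCH paging query, with placeholder table and column names:

```sql
SELECT Id, Title, CreatedAt
FROM BlogPosts
ORDER BY CreatedAt DESC        -- ORDER BY is mandatory for OFFSET/FETCH
OFFSET 0 ROWS                  -- page 1; page N would use OFFSET (N - 1) * 10 ROWS
FETCH NEXT 10 ROWS ONLY;       -- 10 entries "per page"
```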


Read it like this: Return the first 10 entries from the table. To get the next 10 entries use OFFSET 10 and so on.

If you use Entity Framework (or Entity Framework Core or any other O/R mapper), chances are high they do exactly the same thing internally for you.

All currently “supported” SQL Server versions support this syntax. If you try it on SQL Server 2008 or SQL Server 2008 R2, you will receive a SQL error.


Check out the documentation for further information.

This topic might seem “simple”, but during my developer life I was surprised how “hard” paging was with SQL Server. Some 10 years ago (… I’m getting old!) I was using MySQL, and the OFFSET and FETCH syntax was only introduced with Microsoft SQL Server 2012. This Stackoverflow.com question shows the different ways to implement it. The “older” ways are quite weird and complicated.

I also recommend this blog for everyone who needs to write T-SQL.

Hope this helps!

Holger Schwichtenberg: Entwickler-Events 2020 für .NET- und Webentwickler

Ein Sammlung der wichtigsten Konferenz- und Eventtermine für .NET- und Webentwickler im nächsten Jahr.

Jürgen Gutsch: ASP.NET Hack Advent Post 24: When environments are not enough, use sub-environments!

ASP.NET Core knows the concept of runtime environments like Development, Staging and Production. But sometimes those environments are not enough. To solve this, you can use sub-environments. This is not a built-in feature, but it is easy to implement in ASP.NET Core. Thomas Levesque describes how:


Thomas Levesque is a French developer living in Paris, France. He has been a Microsoft MVP since 2012 and is pretty involved in the open source community.

Twitter: https://twitter.com/thomaslevesque

GitHub: https://github.com/thomaslevesque

Jürgen Gutsch: ASP.NET Hack Advent Post 23: Setting up Azure DevOps CI/CD for a .NET Core 3.1 Web App hosted in Azure App Service for Linux

After you have migrated your ASP.NET Core application to a Linux-based App Service, you should set up a CI/CD pipeline, ideally on Azure DevOps. And again it is Scott Hanselman who wrote a great post about it:

Setting up Azure DevOps CI/CD for a .NET Core 3.1 Web App hosted in Azure App Service for Linux

So, read this post to learn more about ASP.NET Core on Linux.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 22: User Secrets in Docker-based .NET Core Worker Applications

Do you want to know how to manage user secrets in Docker-based .NET Core Worker applications? As part of his Message Endpoints in Azure series, Jimmy Bogard wrote an awesome blog post about exactly this.

User Secrets in Docker-based .NET Core Worker Applications

Jimmy Bogard is chief architect at Headspring, creator of AutoMapper and MediatR, author of the MVC in Action books, international speaker and prolific OSS developer. Expert in distributed systems, REST, messaging, domain-driven design and CQRS.

Twitter: https://twitter.com/jbogard

GitHub: https://github.com/jbogard

Blog: https://jimmybogard.com/

LinkedIn: https://linkedin.com/in/jimmybogard

Christian Dennig [MS]: Keep your AKS worker nodes up-to-date with kured


When you are running several AKS / Kubernetes clusters in production, keeping your application(s), their dependencies, and Kubernetes itself with its worker nodes up to date turns into a time-consuming task for (sometimes) more than one person. Looking at the worker nodes that form your AKS cluster, Microsoft helps you by applying the latest OS / security updates on a nightly basis. Great, but the downside is: when a worker node needs a restart to fully apply these patches, Microsoft will not reboot that particular machine. The reason is obvious: they simply don’t know when it is best to do so. So basically, you have to do this on your own.

Luckily, there is a project from Weaveworks called “Kubernetes Reboot Daemon”, or kured, that gives you the ability to define a timeslot in which it is okay to automatically pull a node from your cluster and reboot it.

Under the hood, kured works by adding a DaemonSet to your cluster that watches for a reboot sentinel, e.g. the file /var/run/reboot-required. If that file is present on a node, kured “cordons and drains” that particular node, initiates a reboot and uncordons it afterwards. Of course, there are situations where you want to suppress that behavior, and fortunately kured gives us a few options to do so (Prometheus alerts or the presence of specific pods on a node…).

So, let’s give it a try…

Installation of kured

I assume you already have a running Kubernetes cluster, so we start by installing kured.

$ kubectl apply -f https://github.com/weaveworks/kured/releases/download/1.2.0/kured-1.2.0-dockerhub.yaml

clusterrole.rbac.authorization.k8s.io/kured created
clusterrolebinding.rbac.authorization.k8s.io/kured created
role.rbac.authorization.k8s.io/kured created
rolebinding.rbac.authorization.k8s.io/kured created
serviceaccount/kured created
daemonset.apps/kured created

Let’s have a look at what has been installed.

$ kubectl get pods -n kube-system -o wide | grep kured
kured-5rd66                             1/1     Running   0          4m18s    aks-npstandard-11778863-vmss000001   <none>           <none>
kured-g9nhc                             1/1     Running   0          4m20s    aks-npstandard-11778863-vmss000000   <none>           <none>
kured-vfzjk                             1/1     Running   0          4m20s   aks-npstandard-11778863-vmss000002   <none>           <none>

As you can see, we now have three kured pods running.

Test kured

To test the installation, we simply simulate a “node reboot required” situation by creating the corresponding file on one of the worker nodes. We need to access a node via ssh. Just follow the official documentation on docs.microsoft.com:


Once you have access to a worker node via ssh, create the file via:

$ sudo touch /var/run/reboot-required

Now exit the pod, wait for the kured daemon to trigger a reboot and watch the cluster nodes by executing kubectl get nodes -w

$ kubectl get nodes -w
NAME                                 STATUS   ROLES   AGE   VERSION
aks-npstandard-11778863-vmss000000   Ready    agent   34m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready    agent   34m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000000   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready                      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000000   Ready                      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   NotReady,SchedulingDisabled   agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   NotReady,SchedulingDisabled   agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready                         agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready                         agent   36m   v1.15.5

Corresponding output of the kured pod on that particular machine:

$ kubectl logs -n kube-system kured-ngb5t -f
time="2019-12-23T12:39:25Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:39:25Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:25Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:39:25Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:39:25Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:39:30Z" level=info msg="Holding lock"
time="2019-12-23T12:39:30Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:31Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:39:31Z" level=info msg="Releasing lock"
time="2019-12-23T12:41:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:43:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:45:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:47:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:49:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:51:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:53:04Z" level=info msg="Reboot required"
time="2019-12-23T12:53:04Z" level=info msg="Acquired reboot lock"
time="2019-12-23T12:53:04Z" level=info msg="Draining node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:53:06Z" level=info msg="node/aks-npstandard-11778863-vmss000000 cordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:06Z" level=warning msg="WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: aks-ssh; Ignoring DaemonSet-managed pods: kube-proxy-7rhfs, kured-ngb5t" cmd=/usr/bin/kubectl std=err
time="2019-12-23T12:53:42Z" level=info msg="pod/aks-ssh evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="node/aks-npstandard-11778863-vmss000000 evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="Commanding reboot"
time="2019-12-23T12:53:42Z" level=info msg="Waiting for reboot"
time="2019-12-23T12:54:15Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:54:15Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:15Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:54:15Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:54:15Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:54:21Z" level=info msg="Holding lock"
time="2019-12-23T12:54:21Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:22Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:54:22Z" level=info msg="Releasing lock"

As you can see, the pods have been drained off the node (SchedulingDisabled), which has then been successfully rebooted, uncordoned afterwards and is now ready to run pods again.

Customize kured Installation / Best Practices

Reboot only on certain days/hours

Of course, it is not always a good option to reboot your worker nodes during “office hours”. If you want to limit the timeslot in which kured is allowed to reboot your machines, you can make use of the following parameters during the installation:

  • reboot-days – the days kured is allowed to reboot a machine
  • start-time – reboot is possible after specified time
  • end-time – reboot is possible before specified time
  • time-zone – timezone for start-time/end-time
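For example, restricting reboots to weekend nights could look like this in the kured DaemonSet manifest (a sketch; the days, times and timezone are placeholders you should adapt):

```yaml
# excerpt from the kured DaemonSet container spec
command:
  - /usr/bin/kured
  - --reboot-days=sat,sun       # only reboot on weekends
  - --start-time=1am            # no reboots before 1am...
  - --end-time=5am              # ...and none after 5am
  - --time-zone=Europe/Berlin   # timezone for start/end time
```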

Skip rebooting when certain pods are on a node

Another option that is very useful for production workloads is the possibility to skip a reboot when certain pods are present on a node. The reason could be that the service is very critical to your application and therefore pretty “expensive” when not available. You may want to monitor the process of rebooting such a node and be able to intervene quickly if something goes wrong.

As always in the Kubernetes environment, you can achieve this by using label selectors for kured – an option set during installation called blocking-pod-selector.
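A sketch of what such a flag could look like (the label app=critical-db is just an example):

```yaml
# kured will not reboot a node while pods matching this selector run on it
- --blocking-pod-selector=app=critical-db
```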

Notify via WebHook

kured also offers the possibility to call a Slack webhook when nodes are about to be rebooted. Well, we can “misuse” that webhook to trigger our own action, because such a webhook is just a simple HTTPS POST with a predefined body, e.g.:

{
   "text": "Rebooting node aks-npstandard-11778863-vmss000000",
   "username": "kured"
}

To be as flexible as possible, we leverage the 200+ Azure Logic Apps connectors that are currently available to basically do anything we want. In the current sample, we want to receive a Teams notification to a certain team/channel and send a mail to our Kubernetes admins whenever kured triggers an action.

You can find the important parts of the sample Logic App on my GitHub account. Here is a basic overview of it:

What you basically have to do is create an Azure Logic App with an HTTP trigger, parse the JSON body of the POST request and trigger “Send Email” and “Post a Teams Message” actions. When you save the Logic App for the first time, the webhook endpoint will be generated for you. Take that URL and use it as the value for the slack-hook-url parameter during the installation of kured.
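Since the payload only contains the two fields shown above, the schema for the “Parse JSON” action can be kept minimal, e.g.:

```json
{
  "type": "object",
  "properties": {
    "text":     { "type": "string" },
    "username": { "type": "string" }
  }
}
```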

If you need more information on creating an Azure Logic App, please see the official documentation: https://docs.microsoft.com/en-us/azure/connectors/connectors-native-reqres.

When everything is set up, the Teams notifications and emails you receive will look like this:


In this sample, we got to know the Kubernetes Reboot Daemon, which helps you keep your AKS cluster up to date by simply specifying a timeslot in which the daemon is allowed to reboot your cluster/worker nodes and apply security patches to the underlying OS. We also saw how you can use the “Slack” webhook feature to do basically anything you want with kured notifications by using Azure Logic Apps.

Tip: if you have a huge cluster, you should think about running multiple DaemonSets where each of them is responsible for certain nodes/nodepools. It is pretty easy to set this up, just by using Kubernetes node affinities.

Jürgen Gutsch: ASP.NET Hack Advent Post 21: Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first

Scott Hanselman again writes about ASP.NET Core applications on Linux. This time the post is about moving an ASP.NET Core application from a Windows to a Linux based App Service:

Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first

Again this is one of his pretty detailed and deep dive posts. You definitely have to read it, if you want to run your ASP.NET Core application on Linux.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 20: The ultimate guide to secure cookies with web.config in .NET

For today’s ASP.NET Hack Advent, I found an awesome post about cookie security. This post is the latest part of a series about ASP.NET security. Cookie security is important to avoid cookie hijacking via cross-site scripting and similar attacks.

The ultimate guide to secure cookies with web.config in .NET

This post was written by Thomas Ardal, who is a speaker, software consultant and the founder of elma.io.

Twitter: https://twitter.com/thomasardal

Website: https://thomasardal.com/

Jürgen Gutsch: ASP.NET Hack Advent Post 19: Migrate a real project from ASP.NET Core 2.2 to 3.1

Because I got a lot of questions about migrating ASP.NET Core applications to 3.1, I will introduce another really good blog post about it. This time it is a post about a real project that needed to be migrated from ASP.NET Core 2.2 to 3.1. The author writes about how to update the project file and what needs to be changed in the Startup.cs:

Migration from Asp.Net Core 2.2 to 3.1 — Real project

This post was written by Alexandre Malavasi on December 16. He is a consultant and .NET developer from Brazil who is currently working and living in Dublin, Ireland.

Twitter: https://twitter.com/alemalavasi

Medium: https://medium.com/@alexandre.malavasi

LinkedIn: https://www.linkedin.com/in/alexandremalavasi/

Jürgen Gutsch: ASP.NET Hack Advent Post 18: The .NET Foundation has a new Executive Director

On December 16th, Jon Galloway announced that Oren Novotny will follow him as the new Executive Director of the .NET Foundation. Jon started as Executive Director in February 2016. Since then, the .NET Foundation has added a lot of value for the .NET community. It brought many more awesome projects into the Foundation and provided many services for them. The .NET Foundation launched a worldwide Meetup program, where .NET-related meetups get Meetup Pro for free and are marked as part of the .NET Foundation. It also supports the local communities with content and sponsorships. In March 2019, the .NET Foundation ran an election for the board’s first elected directors. Oren will officially take over at the start of January. Jon will continue supporting the community as a Vice President of the .NET Foundation and as a member of the voluntary Advisory Council.

Welcoming Oren Novotny as the new Executive Director of .NET Foundation

On the same day, Oren announced that he will follow Jon Galloway as Executive Director of the .NET Foundation. He also announced that he is joining Microsoft as a Program Manager on the .NET team under Scott Hanselman. So he is one of the many, many MVPs who have joined Microsoft. Congratulations :-)

.NET Foundation Executive Director, Joining Microsoft

I'm really looking forward to seeing how the .NET Foundation evolves.

Jürgen Gutsch: ASP.NET Hack Advent Post 17: Creating Common Intermediate Language projects with .NET SDK

For today’s ASP.NET Hack Advent post, I found a link to one of the awesome posts of Filip W. In this post, Filip describes the new project type that allows you to write .NET projects in IL code directly. He shows how to create a new Microsoft.NET.Sdk.IL project and how to write IL code. He also answers the most important question: why you might need to write IL code directly at all.


Filip is working as a senior software developer and lead developer near Zurich in Switzerland. He has been a Microsoft MVP since 2013 and is one of the most important, influential and well-known members of the .NET developer community. He is the creator of and main contributor to scriptcs, and he contributes to Roslyn, OmniSharp and many more open source projects. You should definitely follow him on Twitter and have a look at his other open source projects on GitHub: https://github.com/filipw/

Golo Roden: Das war die CODEx 2019

Am 4. November 2019 veranstaltete die HDI in Hannover die erste CODEx-Konferenz, eine Veranstaltung für Entwickler und IT-Experten. Golo Roden, Autor bei heise Developer, war als Sprecher dort.

Jürgen Gutsch: ASP.NET Hack Advent Post 16: ConfigureAwait & System.Threading.Channels

Stephen Toub published two really good blog posts in the Microsoft .NET blog.

The first one is a really good and detailed FAQ style post about ConfigureAwait. If you would like to learn about ConfigureAwait, you should read it:

ConfigureAwait FAQ

The second one is an introduction to System.Threading.Channels. This post is a really good introduction that then dives deeper and deeper into the topic:

An Introduction to System.Threading.Channels

Christian Dennig [MS]: Fully automated creation of an AAD-integrated Kubernetes cluster with Terraform


Running your Kubernetes cluster in Azure integrated with Azure Active Directory as your identity provider is a best practice in terms of security and compliance. You can give (and remove – when people leave your organisation) fine-grained permissions to your team members, to resources and/or namespaces as they need them. Sounds good? Well, you have to do a lot of manual steps to create such a cluster. If you don’t believe me, follow the official documentation 🙂 https://docs.microsoft.com/en-us/azure/aks/azure-ad-integration.

So, we developers are known to be lazy folks…then how can this be achieved automatically, e.g. with Terraform (one of the most popular tools out there to automate the creation/management of your cloud resources)? It took me a while to figure out, but here’s a working example of how to create an AAD-integrated AKS cluster with “near-zero” manual work.

The rest of this blog post will guide you through the complete Terraform script which can be found on my GitHub account.

Create the cluster

To work with Terraform (TF), it is best practice to store the Terraform state somewhere other than your workstation, as other team members also need the state information to be able to work on the same environment. So first, let’s create a storage account in your Azure subscription to store the TF state.

Basic setup

With the commands below, we will be creating a resource group in Azure, a basic storage account and a corresponding container where the TF state will be put in.

# Resource Group

$ az group create --name tf-rg --location westeurope

# Storage Account

$ az storage account create -n tfstatestac -g tf-rg --sku Standard_LRS

# Storage Account Container

$ az storage container create -n tfstate --account-name tfstatestac --account-key `az storage account keys list -n tfstatestac -g tf-rg --query "[0].value" -otsv`

Terraform Providers + Resource Group

Of course, we need a few Terraform providers for our example. First and foremost, we need the Azure and also the Azure Active Directory resource providers.

One of the first things we need is – as always in Azure – a resource group where we will be deploying our AKS cluster.

provider "azurerm" {
  version = "=1.38.0"
}

provider "azuread" {
  version = "~> 0.3"
}

terraform {
  backend "azurerm" {
    resource_group_name  = "tf-rg"
    storage_account_name = "tfstatestac"
    container_name       = "tfstate"
    key                  = "org.terraform.tfstate"
  }
}

data "azurerm_subscription" "current" {}

# Resource Group creation
resource "azurerm_resource_group" "k8s" {
  name     = "${var.rg-name}"
  location = "${var.location}"
}

AAD Applications for K8s server / client components

To be able to integrate AKS with Azure Active Directory, we need to register two applications in the directory. The first AAD application is the server component (Kubernetes API) that provides user authentication. The second application is the client component (e.g. kubectl) that’s used when you’re prompted by the CLI for authentication.

We will assign certain permissions to these two applications that need “admin consent”. Therefore, the Terraform script needs to be executed by someone who is able to grant that for the whole AAD.

# AAD K8s Backend App

resource "azuread_application" "aks-aad-srv" {
  name                       = "${var.clustername}srv"
  homepage                   = "https://${var.clustername}srv"
  identifier_uris            = ["https://${var.clustername}srv"]
  reply_urls                 = ["https://${var.clustername}srv"]
  type                       = "webapp/api"
  group_membership_claims    = "All"
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = false

  required_resource_access {
    resource_app_id = "00000003-0000-0000-c000-000000000000"

    resource_access {
      id   = "7ab1d382-f21e-4acd-a863-ba3e13f7da61"
      type = "Role"
    }

    resource_access {
      id   = "06da0dbc-49e2-44d2-8312-53f166ab848a"
      type = "Scope"
    }

    resource_access {
      id   = "e1fe6dd8-ba31-4d61-89e7-88639da4683d"
      type = "Scope"
    }
  }

  required_resource_access {
    resource_app_id = "00000002-0000-0000-c000-000000000000"

    resource_access {
      id   = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "aks-aad-srv" {
  application_id = "${azuread_application.aks-aad-srv.application_id}"
}

resource "random_password" "aks-aad-srv" {
  length  = 16
  special = true
}

resource "azuread_application_password" "aks-aad-srv" {
  application_object_id = "${azuread_application.aks-aad-srv.object_id}"
  value                 = "${random_password.aks-aad-srv.result}"
  end_date              = "2024-01-01T01:02:03Z"
}

# AAD AKS kubectl app

resource "azuread_application" "aks-aad-client" {
  name       = "${var.clustername}client"
  homepage   = "https://${var.clustername}client"
  reply_urls = ["https://${var.clustername}client"]
  type       = "native"

  required_resource_access {
    resource_app_id = "${azuread_application.aks-aad-srv.application_id}"

    resource_access {
      id   = "${azuread_application.aks-aad-srv.oauth2_permissions.0.id}"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "aks-aad-client" {
  application_id = "${azuread_application.aks-aad-client.application_id}"
}

The important parts regarding the permissions are highlighted above. If you wonder what these “magic permission GUIDs” stand for, here’s a list of what will be assigned.

Microsoft Graph (AppId: 00000003-0000-0000-c000-000000000000) Permissions

  • 7ab1d382-f21e-4acd-a863-ba3e13f7da61 – Read directory data (Application Permission)
  • 06da0dbc-49e2-44d2-8312-53f166ab848a – Read directory data (Delegated Permission)
  • e1fe6dd8-ba31-4d61-89e7-88639da4683d – Sign in and read user profile

Windows Azure Active Directory (AppId: 00000002-0000-0000-c000-000000000000) Permissions

  • 311a71cc-e848-46a1-bdf8-97ff7156d8e6 – Sign in and read user profile

After a successful run of the Terraform script, it will look like this in the portal.

AAD applications
Server app permissions

By the way, you can query the permissions of the applications (MS Graph/Azure Active Directory) mentioned above. Here’s a quick sample for one of the MS Graph permissions:

$ az ad sp show --id 00000003-0000-0000-c000-000000000000 | grep -A 6 -B 3 06da0dbc-49e2-44d2-8312-53f166ab848a
      "adminConsentDescription": "Allows the app to read data in your organization's directory, such as users, groups and apps.",
      "adminConsentDisplayName": "Read directory data",
      "id": "06da0dbc-49e2-44d2-8312-53f166ab848a",
      "isEnabled": true,
      "type": "Admin",
      "userConsentDescription": "Allows the app to read data in your organization's directory.",
      "userConsentDisplayName": "Read directory data",
      "value": "Directory.Read.All"

Cluster Admin AAD Group

Now that we have the script for the applications we need to integrate our cluster with Azure Active Directory, let’s also add a default AAD group for our cluster admins.

# AAD K8s cluster admin group / AAD

resource "azuread_group" "aks-aad-clusteradmins" {
  name = "${var.clustername}clusteradmin"
}

Service Principal for AKS Cluster

Last but not least, before we can finally create the Kubernetes cluster, a service principal is required. That’s basically the technical user Kubernetes uses to interact with Azure (e.g. to acquire a public IP at the Azure load balancer). We will assign the role “Contributor” (for the whole subscription – please adjust to your needs!) to that service principal.

# Service Principal for AKS

resource "azuread_application" "aks_sp" {
  name                       = "${var.clustername}"
  homepage                   = "https://${var.clustername}"
  identifier_uris            = ["https://${var.clustername}"]
  reply_urls                 = ["https://${var.clustername}"]
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = false
}

resource "azuread_service_principal" "aks_sp" {
  application_id = "${azuread_application.aks_sp.application_id}"
}

resource "random_password" "aks_sp_pwd" {
  length  = 16
  special = true
}

resource "azuread_service_principal_password" "aks_sp_pwd" {
  service_principal_id = "${azuread_service_principal.aks_sp.id}"
  value                = "${random_password.aks_sp_pwd.result}"
  end_date             = "2024-01-01T01:02:03Z"
}

resource "azurerm_role_assignment" "aks_sp_role_assignment" {
  scope                = "${data.azurerm_subscription.current.id}"
  role_definition_name = "Contributor"
  principal_id         = "${azuread_service_principal.aks_sp.id}"

  depends_on = [
    azuread_service_principal_password.aks_sp_pwd
  ]
}

Create the AKS cluster

Everything is now ready for the provisioning of the cluster. But hey, we created the AAD applications, but haven’t granted admin consent?! We can also do this via our Terraform script, and that’s what we will be doing before finally creating the cluster.

Azure is sometimes a bit too fast in sending a 200 and signalling that a resource is ready. In the background, not all services already have access to e.g. newly created applications. So it happens that things fail although they shouldn’t 🙂 Therefore, we simply wait a few seconds and give AAD time to distribute application information before kicking off the cluster creation.

# K8s cluster

# Before giving consent, wait. Sometimes Azure returns a 200, but not all services have access to the newly created applications/services.

resource "null_resource" "delay_before_consent" {
  provisioner "local-exec" {
    command = "sleep 60"
  }
  depends_on = [
    azuread_service_principal.aks-aad-srv,
    azuread_service_principal.aks-aad-client
  ]
}

# Give admin consent - SP/az login user must be AAD admin

resource "null_resource" "grant_srv_admin_constent" {
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.aks-aad-srv.application_id}"
  }
  depends_on = [
    null_resource.delay_before_consent
  ]
}

resource "null_resource" "grant_client_admin_constent" {
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.aks-aad-client.application_id}"
  }
  depends_on = [
    null_resource.delay_before_consent
  ]
}

# Again, wait for a few seconds...

resource "null_resource" "delay" {
  provisioner "local-exec" {
    command = "sleep 60"
  }
  depends_on = [
    null_resource.grant_srv_admin_constent,
    null_resource.grant_client_admin_constent
  ]
}

# Create the cluster

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "${var.clustername}"
  location            = "${var.location}"
  resource_group_name = "${var.rg-name}"
  dns_prefix          = "${var.clustername}"

  default_node_pool {
    name            = "default"
    type            = "VirtualMachineScaleSets"
    node_count      = 2
    vm_size         = "Standard_B2s"
    os_disk_size_gb = 30
    max_pods        = 50
  }

  service_principal {
    client_id     = "${azuread_application.aks_sp.application_id}"
    client_secret = "${random_password.aks_sp_pwd.result}"
  }

  role_based_access_control {
    azure_active_directory {
      client_app_id     = "${azuread_application.aks-aad-client.application_id}"
      server_app_id     = "${azuread_application.aks-aad-srv.application_id}"
      server_app_secret = "${random_password.aks-aad-srv.result}"
      tenant_id         = "${data.azurerm_subscription.current.tenant_id}"
    }
    enabled = true
  }

  depends_on = [
    null_resource.delay
  ]
}

Assign the AAD admin group to be cluster-admin

When the cluster is finally created, we need to assign the Kubernetes cluster role cluster-admin to our AAD cluster admin group. We simply get access to the Kubernetes cluster by adding the Kubernetes Terraform provider. Because we already have a working integration with AAD, we need to use the admin credentials of our cluster! But that will be the last time we ever need them.

To be able to use the admin credentials, we point the Kubernetes provider to use kube_admin_config which is automatically provided for us.

In the last step, we bind the cluster role to the aforementioned AAD cluster admin group id.

# Role assignment

# Use ADMIN credentials
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.aks.kube_admin_config.0.host}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)}"
}

# Cluster role binding to AAD group

resource "kubernetes_cluster_role_binding" "aad_integration" {
  metadata {
    name = "${var.clustername}admins"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind = "Group"
    name = "${azuread_group.aks-aad-clusteradmins.id}"
  }
  depends_on = [
    azurerm_kubernetes_cluster.aks
  ]
}

Run the Terraform script

We have now discussed all the relevant parts of the script, so it’s time to let the Terraform magic happen 🙂 Run the script via…

$ terraform init

# ...and then...

$ terraform apply

Access the Cluster

When the script has finished, it’s time to access the cluster and try to log on. First, let’s do the “negative check” and try to access it without having been added as a cluster admin (AAD group member).

After downloading the user credentials and querying the cluster nodes, the OAuth 2.0 Device Authorization Grant flow kicks in and we need to authenticate against our Azure directory (as you might know it from logging in with Azure CLI).

$ az aks get-credentials --resource-group <RESOURCE_GROUP> -n <CLUSTER_NAME>

$ kubectl get nodes
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code DP9JA76WS to authenticate.

Error from server (Forbidden): nodes is forbidden: User "593736cb-1f95-4f23-bfbd-75891886b05f" cannot list resource "nodes" in API group "" at the cluster scope

Great, we get the expected authorization error!

Now add a user from the Azure Active Directory to the AAD admin group in the portal. Navigate to “Azure Active Directory” –> “Groups” and select your cluster-admin group. In the left navigation, select “Members” and add e.g. your own Azure user.
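If you prefer to stay on the command line, the same group assignment can be sketched with the Azure CLI. The group name and user object id below are placeholders you have to replace with your own values:

```shell
# Show the signed-in user; the object id needed below is part of the output
az ad signed-in-user show

# Add that user to the cluster-admin AAD group created by the Terraform script
az ad group member add --group "<CLUSTER_ADMIN_GROUP_NAME_OR_ID>" \
  --member-id "<USER_OBJECT_ID>"
```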

Now go back to the command line and try again. One last time, download the user credentials with az aks get-credentials (it will simply overwrite the former entry in your kubeconfig to make sure we get the latest information from AAD).

$ az aks get-credentials --resource-group <RESOURCE_GROUP> -n <CLUSTER_NAME>

$ kubectl get nodes
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code ASGRA765S to authenticate.

NAME                              STATUS   ROLES   AGE   VERSION
aks-default-41331054-vmss000000   Ready    agent   18m   v1.13.12
aks-default-41331054-vmss000001   Ready    agent   18m   v1.13.12

Wrap Up

So, that’s all we wanted to achieve! We have created an AKS cluster with fully-automated Azure Active Directory integration, added a default AAD group for our Kubernetes admins and bound it to the “cluster-admin” role of Kubernetes – all done by a Terraform script which can now be integrated with your CI/CD pipeline to create compliant and AAD-secured AKS clusters (as many as you want ;)).

Well, we also could have added a user to the admin group, but that’s the only manual step in our scenario…but hey, you would have needed to do it anyway 🙂

You can find the complete script, including the variables.tf file, on my GitHub account. Feel free to use it in your own projects.


To remove all of the provisioned resources (service principals, AAD groups, Kubernetes service, storage accounts etc.) simply…

$ terraform destroy

# ...and then...

$ az group delete -n tf-rg

Jürgen Gutsch: ASP.NET Hack Advent Post 15: About silos and hierarchies in software development

This post is a special one. It is not really related to .NET Core or ASP.NET Core, but to software development in general. I recently stumbled upon this post, and while reading it I found myself remembering the days back when I needed to write code that others had estimated and specified for me.

About silos and hierarchies in software development

The woman who wrote this post lives in Cologne, Germany, and has worked in really special environments, like self-organizing teams and companies. I met Krisztina Hirth several times at community events in Germany. I really like her ideas, the way she thinks, and the way she writes. You should definitely also read the other posts on her blog: https://yellow-brick-code.org/

Twitter: https://twitter.com/yellowbrickc

Jürgen Gutsch: ASP.NET Hack Advent Post 14: MailKit

This fourteenth post is about MailKit, a cross-platform .NET library for IMAP, POP3, and SMTP.

On Twitter I was asked about sending emails from a worker service. So I searched for the documentation about System.Net.Mail and the SmtpClient class and was really surprised that this class is marked as obsolete. It seems I missed the announcement about this.

The .NET team recommends using MailKit and MimeKit to send emails.

Both libraries are open source under the MIT license and free to use in commercial projects. It seems that these libraries are really complete and provide a lot of useful features.
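For illustration, here is a minimal sketch of sending a mail with MailKit and MimeKit; the server, port, credentials, and addresses are placeholder assumptions, not values from this post:

```csharp
using MailKit.Net.Smtp;
using MailKit.Security;
using MimeKit;

// Build the message (MimeKit)
var message = new MimeMessage();
message.From.Add(new MailboxAddress("Sender", "sender@example.com"));
message.To.Add(new MailboxAddress("Recipient", "recipient@example.com"));
message.Subject = "Hello from MailKit";
message.Body = new TextPart("plain") { Text = "This mail was sent via MailKit." };

// Connect, authenticate, and send via SMTP (MailKit; placeholder server and credentials)
using (var client = new SmtpClient())
{
    client.Connect("smtp.example.com", 587, SecureSocketOptions.StartTls);
    client.Authenticate("smtp-user", "smtp-password");
    client.Send(message);
    client.Disconnect(true);
}
```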

Website: http://www.mimekit.net/


GitHub: https://github.com/jstedfast/MailKit

NuGet: https://www.nuget.org/packages/MailKit/


GitHub: https://github.com/jstedfast/MimeKit

NuGet: https://www.nuget.org/packages/MimeKit/

Jürgen Gutsch: ASP.NET Hack Advent Post 13: .NET Conf: Focus on Blazor

The .NET Conf in September this year was great, and it was a pleasure to also do a talk on the 25th, which was the community day with a lot of awesome talks from community folks around the world. I'm really looking forward to the next one and hope I can do another talk next year.

Yesterday I was surprised when I stumbled upon the announcement about another special .NET Conf that is scheduled for January 14th. This is really a special one with the focus on Blazor:


The schedule isn't online yet, but will be there soon, as they wrote.

I like the idea of having special, focused .NET Confs. The infrastructure with the Channel9 studios is already available, so it is cheap to set up a virtual conference like this. And I can imagine a few more topics to focus on:

  • Entity Framework Core
  • ASP.NET Core
  • Async/Await
  • Desktop
  • And maybe a MVP driven .NET Conf during the MVP Summit 2020 in March ;-)

Holger Schwichtenberg: .NET Core 3.1 is an unusual release

Almost no new features, essentially just bug fixes, and even breaking changes, which actually should not occur at all in a release that only changes the second digit of the version number.

Jürgen Gutsch: ASP.NET Hack Advent Post 12: .NET Rocks Podcasts

Do you like podcasts? Do you like entertaining and funny technical podcasts about .NET? I definitely do. I like to listen to them while commuting. The best and (I guess) the most famous .NET related podcast is .NET Rocks:


Carl Franklin and Richard Campbell really do a great show; they invite a lot of cool and well-known experts to their shows and discuss cutting-edge topics around .NET and Microsoft technologies.

Jürgen Gutsch: ASP.NET Hack Advent Post 11: Updating an ASP.NET Core 2.2 Web Site to .NET Core 3.1

.NET Core 3.1 is out, but how do you update your ASP.NET Core 2.2 application? Scott Hanselman recently wrote a pretty detailed and complete post about it.


This post also includes details on how to update the deployment and hosting part on Azure DevOps and Azure App Service.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 10: Wasmtime

WebAssembly is currently pretty popular among .NET developers. With Blazor we have the possibility to run .NET assemblies inside WebAssembly in the browser.

But did you know that you can run WebAssembly outside the web, without a browser? This can be done with the open-source, cross-platform application runtime called Wasmtime. With Wasmtime you are able to load and execute WebAssembly code directly from your program. Wasmtime is programmed and maintained by the Bytecode Alliance.

The Bytecode Alliance is an open source community dedicated to creating secure new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI).

Website: https://wasmtime.dev/

GitHub: https://github.com/bytecodealliance/wasmtime/

I wouldn't write about it if it weren't somehow related to .NET Core. The Bytecode Alliance just added a preview of an API for .NET Core. That means you can now execute WebAssembly code from your .NET Core application. For more details see this blog post by Peter Huene:


He wrote a pretty detailed blog post about Wasmtime and how to use it within a .NET Core application. The Bytecode Alliance also added a .NET Core sample and created a NuGet package:



So Wasmtime is the opposite of Blazor: instead of running .NET code inside WebAssembly, you are now also able to run WebAssembly inside .NET Core.
