Holger Schwichtenberg: Migration script for moving from .NET Framework to .NET Core

The Dotnet-Doktor offers a script-based migration tool for moving to .NET Core.

Golo Roden: Cheaper software through fewer tests

Software development is considered an expensive discipline. Not only outsiders are often surprised by how much money the professional development of a piece of software requires. Code, tests, documentation and integration all have to be paid for. So where can you cut costs?

Holger Schwichtenberg: The state of the .NET family at the start of 2020

The current state of .NET Framework, .NET Core and Mono in a single diagram.

Code-Inside Blog: Accessibility Insights: Spot accessibility issues easily for Web Apps and Windows Apps

Accessibility

Accessibility is a huge and important topic nowadays. Keep in mind that in some sectors (e.g. government, public service etc.) accessibility is a requirement by law (in Europe the European Standards EN 301 549).

If you want to learn more about accessibility in general this might be handy: MDN Web Docs: What is accessibility?

Tooling support

In my day-to-day job at OneOffixx I was looking for a good tool to spot accessibility issues in our Windows and web app. I knew that there must be some good tools for web development, but I was not sure about Windows app support.

Accessibility itself has many aspects, but these were some non-obvious key aspects in our application that we needed to address:

  • Good contrasts: This one is easy to understand, but sometimes some colors or hints in the software didn’t match the required contrast ratios. High contrast modes are even harder.
  • Keyboard navigation: This one is also easy to understand, but can be really hard. Some elements are nice to look at, but hard to focus with pure keyboard commands.
  • Screen reader: After your application can be navigated with the keyboard, you can check out screen reader support.

Accessibility Insights

Then I found this app from Microsoft: Accessibility Insights


The tool scans active applications for any accessibility issues. Side note: The UX is a bit strange, but OK - you get used to it.

Live inspect:

The starting point is to select a window or a visible element on the screen and Accessibility Insights will highlight it:


Then you can click on “Test”, which gives you a detailed test result:


(I’m not 100% sure whether each reported error is really problematic, because a lot of Microsoft’s very own applications have many issues here.)

Tab Stops:

As already written: keyboard navigation is a key aspect. This tool has a nice way to visualize “Tab” navigation and might help you better understand how your application is navigated with a keyboard:


Contrasts:

The third nice helper in Accessibility Insights is the contrast checker. It highlights contrast issues and has an easy-to-use color picker integrated.


Behind the scenes this tool uses the Windows Automation API / Windows UI Automation API.

Accessibility Insights for Chrome

Accessibility Insights can be used in Chrome (or Edge) as well to check web apps. The extension is similar to the Windows counterpart, but has a much better “assessment” story:


Summary

This tool was really a time saver. The UX might not be the best on Windows, but it gives you some good hints. After we discovered this app for our Windows application, we used the Chrome version for our web application as well.

If you use or have used other tools in the past: please let me know. I’m pretty sure there are some good apps out there that help build better applications.

Hope this helps!

Holger Schwichtenberg: The tax authorities and their poor Elster software

The switch from "ElsterFormular" to the Elster web portal is riddled with annoying bugs and necessary calls to the hotline.

Norbert Eder: Tinkering with the kids: programming NFC tags

My kid is interested in technology, computers and everything that goes with them. Of course he likes to play, but slowly he is thirsting for more. Small projects are a quick way to introduce technology, teach it and explore interests.

NFC tags were a big hit. They can be had for little money, but you can build quite nice projects with them.

NFC Tag | Norbert Eder

A simple NFC tag

Different kinds of information can be stored on such an NFC chip. In the simplest case this is a link, a Wi-Fi or Bluetooth connection, an e-mail address or phone number, a location, or an instruction to send an SMS or start an app. When an NFC-capable phone touches the NFC tag, that action is executed.

NXP TagWriter | Norbert Eder

NXP TagWriter reads and writes NFC tags

To read and write NFC tags, all you need is a smartphone app. There is a wide variety of apps for this. One of the simplest is NXP TagWriter (Android, Apple).

NFC Tags auslesen | Norbert Eder

Reading an NFC tag

Besides these standard functions, there are other apps (e.g. NFC Tools) that support additional features. They make it possible to define conditions and, for example, set up configurations like these:

  • Put the smartphone on silent
  • Enable airplane mode
  • If it is Sunday through Thursday, set an alarm for 6:00 a.m. the next day

For many people this can make a good NFC tag for the nightstand.

There are many more possibilities, and above all they invite experimentation. My boy had lots of ideas and implemented some of them right away. Fun was guaranteed, and he learned a lot too.

Note: NFC tags come in different sizes (storage capacity), with and without password protection, as self-adhesive stickers or as key fobs.

Have fun experimenting. The kids will really enjoy it.

The post Tinkering with the kids: programming NFC tags appeared first on Norbert Eder.

Jürgen Gutsch: Using the .editorconfig in VS2019 and VSCode

In the backend developer team of the YOO we are currently discussing coding style guidelines and ways to enforce them. Since we are developers with different mindsets and backgrounds, we need to find a way to enforce the rules that works in different editors too.

BTW: C# developers often come from other languages and technologies before they start to work with this awesome language. Universities mostly teach Java, or the developers were front-end developers in the past, or started with PHP. Often .NET developers start with VB.NET and switch to C# later. Me as well: I started as a front-end developer with HTML4, CSS2 and JavaScript, used VBScript and VB6 on the server side in 2001, later used VB.NET on the server and switched to C# in 2007.

In our company we use ASP.NET Core more and more. This also means we are increasingly free to choose the editor we want to use, and increasingly free to choose the platform we want to work on. Some of us already use and prefer VSCode to work on ASP.NET Core projects. Maybe we'll have a colleague in the future who prefers VSCode on Linux or VS on a Mac. This also makes the development environments diverse.

Back when we used Visual Studio only, StyleCop was the tool to enforce coding style guidelines. For a couple of years now there has been a new tool that works in almost every editor out there.

The .editorconfig is a text file that overrides the settings of the editor of your choice.

Almost every code editor has settings to style the code in a way you like, or the way your team likes. If this editor supports the .editorconfig you are able to override these settings with a simple text file that usually is checked in with your source code and available for all developers who work on those sources.

Visual Studio 2019 supports the .editorconfig by default, VS for Mac also supports it and VSCode supports it with a few special settings.
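
Just to give you an idea what such a file looks like, here is a small example with a few C# related rules. (The concrete rules and severities are just examples I picked, not a complete or recommended rule set.)

# top-most EditorConfig file
root = true

[*]
indent_style = space
indent_size = 4
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true

[*.cs]
# prefer 'var' when the type is apparent from the right side of the assignment
csharp_style_var_when_type_is_apparent = true:suggestion
# always use curly braces, even for single-line if statements
csharp_prefer_braces = true:warning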

The only downside of the .editorconfig I can see

Since the .editorconfig is a settings file that overrides the settings of the code editor, only those settings that the editor actually supports will take effect. So it might be that not all of the settings work in all code editors.

But there is a workaround, at least on the NodeJS side and on the .NET side. Both technologies support the .editorconfig on the code-analysis side instead of the editor side, which means NodeJS or the .NET compiler will check the code and enforce the rules instead of the editor. The editor then only displays the errors and helps the author to fix them.

As far as I understand it: on the .NET side it is VS2019 on the one hand and OmniSharp on the other. OmniSharp is a project that supports .NET development in many code editors, including VSCode. Even though VSCode is called a Visual Studio, it doesn't support .NET and C# natively. It is the OmniSharp add-in that enables .NET and brings the Roslyn compiler to the editor.

"CROSS PLATFORM .NET DEVELOPMENT! OmniSharp is a family of Open Source projects, each with one goal: To enable a great .NET experience in YOUR editor of choice" http://www.omnisharp.net/

So the .editorconfig is supported by OmniSharp in VSCode. This means the support for the .editorconfig might differ between VS2019 and VSCode.

Enable the .editorconfig in VSCode

As I wrote, the .editorconfig is enabled by default in VS2019. There is nothing to do about it. If VS2019 finds an .editorconfig, it will use it immediately and check your code on every code change. If VS2019 finds an .editorconfig in your solution, it will tell you about it and propose to add it to a solution folder to make it more easily accessible for you in the editor.

In VSCode you need to install an add-in called EditorConfig. This alone doesn't enable the .editorconfig, even if it tells you about it. Maybe it actually does for other languages, but it doesn't work with C#, because OmniSharp does the analysis there. What this add-in does help with is creating or editing your .editorconfig.

To actually enable the support of the .editorconfig in VSCode you need to change two Omnisharp settings in VSCode:

Open the settings in VSCode and search for Omnisharp. Then you need to "Enable Editor Config Support" and "Enable Roslyn Analyzers".
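
If you prefer to edit the settings.json directly instead of using the settings UI, the two settings should look something like this (this is how they are named in my installation, so double-check the names in your version):

{
  // let OmniSharp read and apply the .editorconfig
  "omnisharp.enableEditorConfigSupport": true,
  // enable the Roslyn analyzers that actually enforce the rules
  "omnisharp.enableRoslynAnalyzers": true
}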

After you have changed those settings, you need to restart VSCode so that the OmniSharp server in the background restarts as well.

That's it!

Conclusion

Now the .editorconfig works in VSCode almost the same way as in VS2019. And it works great. I tried it by opening the same project in VSCode and in VS2019 and changing some settings in the .editorconfig. The changed settings were immediately picked up by both editors. Both editors helped me to change the code to match the code styles.

We at the YOO still need to discuss some coding styles, but for now we use the recommended styles and will change the settings as soon as we have a decision.

Have you ever discussed coding styles in a team? If so, you know the kind of debate: whether to enforce var over the explicit type, whether to use simple usings or not, or whether to always use curly braces with if statements or not... This might be annoying, but it is really important to reach a common understanding, and it is important that everybody agrees on it.

Golo Roden: Choosing modules for JavaScript and Node.js

Choosing npm modules is essentially a matter of experience, but a module's adoption and the activity of its authors can serve as indicators of solid, future-proof modules.

Uli Armbruster: Using your own domain as a Gmail alias

In this step-by-step guide I explain how to define additional sender addresses for your Gmail address. This is useful, for example, if you run your own domain (as I do with http://www.uliarmbruster.de) and want to receive and send e-mails under that domain with your Gmail account.

I use this, for example, to create e-mail addresses for my family and forward them all to a central Gmail account. Among other things, this is useful when you manage things like mobile phone, internet and electricity contracts for several people.

Step 1: Allow less secure apps

If not already enabled, you need to turn on "Allow less secure apps" at this address.

Step 2: Enable two-factor authentication

Simply follow this link and enable it.

Step 3: Create an app password

Via this link, proceed as follows:


App_Settings_1

Under "Select app" choose "Other (custom name)"


App_Settings_2

Then enter a name for it, e.g. the external e-mail address you are adding.


 

Step 4: Configure Gmail

Now go to your Gmail account and perform the following steps:


  • Click the gear icon in Gmail
  • Select Settings
  • Select the Accounts & Import tab

E-Mail-Settings 1


Under "Send mail as" add another address. The following dialog then appears. The name you enter there is the one that is shown to recipients as the alias. As the e-mail address, choose the external address you want to add.

E-Mail-Settings 2


In the next step, enter your Gmail address (i.e. the one you are currently using) as well as the app password generated in step 3. You can copy the SMTP server and port from the screenshot.

E-Mail-Settings 2-3


In the last step you have to enter the code that was sent to you. The e-mail you should receive looks like this:

You have requested that alias@euredomain.com be added to your
Gmail account.
Confirmation code: 694072788

Before you can send messages from alias@euredomain.com via your
Gmail account (eure-gmail-adresse@gmail.com), please click the
following link to confirm your request:

E-Mail-Settings 3

 

That should be it.

Christian Dennig [MS]: VS Code Git integration with ssh stops working

The Problem

I recently updated my ssh keys for my GitHub account on my Mac, and after adding the public key on GitHub everything worked as expected from the command line. When I did a…

$ git pull

…I was asked for the passphrase of my private key and the pull was executed. Great.

$ git pull
Enter passphrase for key '/Users/christiandennig/.ssh/id_rsa':
Already up to date.

But when I did the same from Visual Studio Code, I got the following error:

As you can see in the git logs, it says “Permission denied (publickey)”. This is odd, because VS Code is using the same git executable as the command line (see the log output).

It seems that VS Code isn’t able to ask for the passphrase during the access to the git repo?!

ssh-agent FTW

The solution is simple. Just use ssh-agent on your machine to enter the passphrase for your ssh private key once…all subsequent calls that need your ssh key will use the saved passphrase.

Add your key to ssh-agent (storing the passphrase in the macOS Keychain!)

$ ssh-add -K ~/.ssh/id_rsa
Enter passphrase for /Users/christiandennig/.ssh/id_rsa:
Identity added: /Users/christiandennig/.ssh/id_rsa (XXXX@microsoft.com)

The result in the Keychain will look like this:

When you now open Visual Studio Code and start to synchronize your git repository the calls to GitHub will use the credentials saved via ssh-agent and all commands executed will be successful.
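
If you don't want to run ssh-add again after every reboot, macOS can load the key and the stored passphrase automatically. A small addition to ~/.ssh/config should do the trick (assuming a recent macOS / OpenSSH; adjust the key path to yours):

# ~/.ssh/config
Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_rsa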

HTH.

( BTW: Happy New Year 🙂 )

Code-Inside Blog: T-SQL Pagination

The problem

This is pretty trivial: let’s say you have a blog with 1000 posts in your database, but you only want to show 10 entries “per page”. You need to find a way to slice this dataset into smaller pieces.

The solution

In theory you could load everything from the database and filter the results “in memory”, but this would be quite stupid for many reasons (e.g. you load much more data than you need, and the computing resources could be used for other requests, etc.).

If you use plain T-SQL (and Microsoft SQL Server 2012 or higher) you can express a query with paging like this:

SELECT * FROM TableName ORDER BY id OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;

Read it like this: Return the first 10 entries from the table. To get the next 10 entries use OFFSET 10 and so on.
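
A slightly more generic version with the page number and page size as variables could look like this (table and column names are of course placeholders):

DECLARE @PageNumber INT = 3;
DECLARE @PageSize   INT = 10;

SELECT *
FROM TableName
ORDER BY id
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;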

If you use Entity Framework (or Entity Framework Core or any other O/R mapper), chances are high they do exactly the same thing internally for you.
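
With Entity Framework Core, for example, the paging is typically expressed with Skip and Take. A rough sketch, assuming a Posts DbSet on your context:

// loads page 3 with 10 posts per page; on SQL Server 2012+ this translates to OFFSET/FETCH
var page = 3;
var pageSize = 10;
var posts = dbContext.Posts
    .OrderBy(p => p.Id)
    .Skip((page - 1) * pageSize)
    .Take(pageSize)
    .ToList();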

All currently supported SQL Server versions understand this syntax. If you try it on SQL Server 2008 or SQL Server 2008 R2 you will receive a SQL error.

Links

Checkout the documentation for further information.

This topic might seem “simple”, but during my developer life I was surprised how “hard” paging was with SQL Server. Some 10 years ago (… I’m getting old!) I was using MySQL, and the OFFSET and FETCH syntax was only introduced with Microsoft SQL Server 2012. This Stackoverflow.com question shows the different ways to implement it. The “older” ways are quite weird and complicated.

I also recommend this blog for everyone who needs to write T-SQL.

Hope this helps!

Holger Schwichtenberg: Developer events 2020 for .NET and web developers

A collection of the most important conference and event dates for .NET and web developers in the coming year.

Jürgen Gutsch: ASP.NET Hack Advent Post 24: When environments are not enough, use sub-environments!

ASP.NET Core knows the concept of runtime environments like Development, Staging and Production. But sometimes those environments are not enough. To solve this, you could use sub-environments. This is not a built-in feature, but is easily implemented in ASP.NET Core. Thomas Levesque describes how:

ASP.NET CORE: WHEN ENVIRONMENTS ARE NOT ENOUGH, USE SUB-ENVIRONMENTS!

Thomas Levesque is a French developer living in Paris, France. He has been a Microsoft MVP since 2012 and is pretty involved in the open source community.

Twitter: https://twitter.com/thomaslevesque

GitHub: https://github.com/thomaslevesque

Jürgen Gutsch: ASP.NET Hack Advent Post 23: Setting up Azure DevOps CI/CD for a .NET Core 3.1 Web App hosted in Azure App Service for Linux

After you have migrated your ASP.NET Core application to a Linux-based App Service, you should set up a CI/CD pipeline, ideally on Azure DevOps. And again it is Scott Hanselman who wrote a great post about it:

Setting up Azure DevOps CI/CD for a .NET Core 3.1 Web App hosted in Azure App Service for Linux

So, read this post to learn more about ASP.NET Core on Linux.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Marco Scheel: Microsoft Teams - Known issues... no more!?

Microsoft maintains a list of things (about Microsoft Teams) that are not working as expected by Microsoft or customers. The list has been around for quite some time. New things are added, but I noticed that things which are clearly working as of this writing are not removed from the list. So let’s have a look together, and maybe you will support my pull request.

The article: Known issues for Microsoft Teams

My pull request:  Known issues that are no longer known issues

Audit logs may report an incorrect username as initiator when someone has been removed from a Team

Teams team is a modern group in AAD. When you add/remove a member through the Teams user interface, the flow knows exactly which user initiated the change, and the Audit log reflects the correct info. However, if a user adds/removes a member through AAD, the change is synced to the Teams backend without telling Teams who initiated the action. Microsoft Teams picks the first owner of team as the initiator, which is eventually reflected in the Audit log as well.
https://docs.microsoft.com/en-us/microsoftteams/known-issues#administration

Related Issue: Audit logs may report an incorrect username as initiator when someone has been removed from a Team

I validated group membership edits from the new Microsoft Admin Portal (https://admin.microsoft.com) and from the Azure Portal (https://portal.azure.com). In both cases the audit logs showed the correct user. In both cases I used my cloudadmin account, which is not part of the team, and the audit logs documented the operation with it as the executing user.


Unable to delete connectors as a team owner

Attempting to delete a connector as an owner, that can otherwise add a connector, while “Allow members to create, update, and remove connectors” is disabled throws an error indicating the user does not have permission to do so.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#apps

I tested this in my lab environment in various combinations and I did not run into this issue. For example:

  1. Leia created a team and added Luke as a member
  2. Luke added an Incoming Webhook as a connector
  3. Leia didn’t like Luke’s webhook so she decided to remove the member permission to configure connectors for the team
  4. Luke wanted to add another connector, but the option is now missing from his context menu for the channel
  5. Leia deleted the Incoming Webhook that Luke created without a problem

Planner on single sign-on (SSO) build

SSO does not apply to Planner. You will have to sign in again the first time you use Planner on each client.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#authentication

I’m using planner within Microsoft Teams on a weekly basis (not on a daily basis as some of my colleagues would like me to use it) and it is working as expected.

Wiki not created for channels created by guests

When a guest creates a new channel, the Wiki tab is not created. There isn’t a way to manually attach a Wiki tab to the channel.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#guest-access

I did check this in my lab tenant using my work account as a guest.

  1. Luke created a team
  2. Luke added my work account as a guest to the team
  3. Luke configured the team’s guest permissions to allow channel creation
  4. I opened Teams in my browser and switched to the lab tenant (friends don’t let friends switch tenants in the real Teams app, even with fast tenant switching!)
  5. I opened the team and created a new channel
  6. Wiki tab was present and working

Teams Planner integration with Planner online

Tasks buckets in Planner do not show up in Planner online experience.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#tabs

This is a core feature of Planner and the issue was created 2+ years ago. It is just working as expected.


Unable to move, delete or rename files after editing

After a file is edited in Teams it cannot be moved or renamed or deleted immediately
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#teams

I tried this with a mix of accounts and apps (Teams app and Teams in the browser) today. I could not reproduce it. It is still a “common” issue in SharePoint Online, but I never experienced it myself or had clients report this issue regarding Teams.

A team name with an & symbol in it breaks connector functionality

When a team name is created with the & symbol, connectors within the Team/Group cannot be established.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#teams

I created two teams:

  1. Good & Bad Characters
  2. Only Good Characters

The connector option for both teams showed up at about the same time. Maybe it is related to the special character, but it took quite some time until the Exchange Online mailbox was provisioned, longer than for any of the other use cases I tested today. But in the end I had no problems managing connectors for a team with an “&” in the name or without it.

I find the Microsoft article in general very valuable and I noticed a few other things I want to talk about in the future. So stay tuned.

Jürgen Gutsch: ASP.NET Hack Advent Post 22: User Secrets in Docker-based .NET Core Worker Applications

Do you want to know how to manage user secrets in Docker-based .NET Core Worker applications? As part of his Message Endpoints in Azure series, Jimmy Bogard wrote an awesome blog post about this.

User Secrets in Docker-based .NET Core Worker Applications

Jimmy Bogard is chief architect at Headspring, creator of AutoMapper and MediatR, author of the MVC in Action books, international speaker and prolific OSS developer. Expert in distributed systems, REST, messaging, domain-driven design and CQRS.

Twitter: https://twitter.com/jbogard

GitHub: https://github.com/jbogard

Blog: https://jimmybogard.com/

LinkedIn: https://linkedin.com/in/jimmybogard

Christian Dennig [MS]: Keep your AKS worker nodes up-to-date with kured

Introduction

When you are running several AKS / Kubernetes clusters in production, the process of keeping your application(s), their dependencies, and Kubernetes itself with its worker nodes up to date turns into a time-consuming task for (sometimes) more than one person. Looking at the worker nodes that form your AKS cluster, Microsoft helps you by applying the latest OS / security updates on a nightly basis. Great, but the downside is that when a worker node needs a restart to fully apply these patches, Microsoft will not reboot that particular machine. The reason is obvious: they simply don’t know when it is best to do so. So basically, you would have to do this on your own.

Luckily, there is a project from WeaveWorks called “Kubernetes Restart Daemon” or kured, that gives you the ability to define a timeslot where it will be okay to automatically pull a node from your cluster and do a simple reboot on it.

Under the hood, kured works by adding a DaemonSet to your cluster that watches for a reboot sentinel, e.g. the file /var/run/reboot-required. If that file is present on a node, kured “cordons and drains” that particular node, initiates a reboot and uncordons it afterwards. Of course, there are situations where you want to suppress that behavior, and fortunately kured gives us a few options to do so (Prometheus alerts or the presence of specific pods on a node…).

So, let’s give it a try…

Installation of kured

I assume, you already have a running Kubernetes cluster, so we start by installing kured.

$ kubectl apply -f https://github.com/weaveworks/kured/releases/download/1.2.0/kured-1.2.0-dockerhub.yaml

clusterrole.rbac.authorization.k8s.io/kured created
clusterrolebinding.rbac.authorization.k8s.io/kured created
role.rbac.authorization.k8s.io/kured created
rolebinding.rbac.authorization.k8s.io/kured created
serviceaccount/kured created
daemonset.apps/kured created

Let’s have a look at what has been installed.

$ kubectl get pods -n kube-system -o wide | grep kured
kured-5rd66                             1/1     Running   0          4m18s   10.244.1.6    aks-npstandard-11778863-vmss000001   <none>           <none>
kured-g9nhc                             1/1     Running   0          4m20s   10.244.2.5    aks-npstandard-11778863-vmss000000   <none>           <none>
kured-vfzjk                             1/1     Running   0          4m20s   10.244.0.10   aks-npstandard-11778863-vmss000002   <none>           <none>

As you can see, we now have three kured pods running.

Test kured

To be able to test the installation, we simply simulate the “node reboot required” by creating the corresponding file on one of the worker nodes. We need to access a node by ssh. Just follow the official documentation on docs.microsoft.com:

https://docs.microsoft.com/en-us/azure/aks/ssh

Once you have access to a worker node via ssh, create the file via:

$ sudo touch /var/run/reboot-required

Now exit the pod, wait for the kured daemon to trigger a reboot and watch the cluster nodes by executing kubectl get nodes -w

$ kubectl get nodes -w
NAME                                 STATUS   ROLES   AGE   VERSION
aks-npstandard-11778863-vmss000000   Ready    agent   34m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready    agent   34m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000000   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready                      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000000   Ready                      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   NotReady,SchedulingDisabled   agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   NotReady,SchedulingDisabled   agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready                         agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready                         agent   36m   v1.15.5

Corresponding output of the kured pod on that particular machine:

$ kubectl logs -n kube-system kured-ngb5t -f
time="2019-12-23T12:39:25Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:39:25Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:25Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:39:25Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:39:25Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:39:30Z" level=info msg="Holding lock"
time="2019-12-23T12:39:30Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:31Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:39:31Z" level=info msg="Releasing lock"
time="2019-12-23T12:41:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:43:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:45:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:47:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:49:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:51:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:53:04Z" level=info msg="Reboot required"
time="2019-12-23T12:53:04Z" level=info msg="Acquired reboot lock"
time="2019-12-23T12:53:04Z" level=info msg="Draining node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:53:06Z" level=info msg="node/aks-npstandard-11778863-vmss000000 cordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:06Z" level=warning msg="WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: aks-ssh; Ignoring DaemonSet-managed pods: kube-proxy-7rhfs, kured-ngb5t" cmd=/usr/bin/kubectl std=err
time="2019-12-23T12:53:42Z" level=info msg="pod/aks-ssh evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="node/aks-npstandard-11778863-vmss000000 evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="Commanding reboot"
time="2019-12-23T12:53:42Z" level=info msg="Waiting for reboot"
...
...
<AFTER_THE_REBOOT>
...
...
time="2019-12-23T12:54:15Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:54:15Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:15Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:54:15Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:54:15Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:54:21Z" level=info msg="Holding lock"
time="2019-12-23T12:54:21Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:22Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:54:22Z" level=info msg="Releasing lock"

As you can see, the pods have been drained off the node (SchedulingDisabled), which has then been successfully rebooted, uncordoned afterwards and is now ready to run pods again.

Customize kured Installation / Best Practices

Reboot only on certain days/hours

Of course, it is not always a good option to reboot your worker nodes during “office hours”. When you want to limit the time slot where kured is allowed to reboot your machines, you can make use of the following parameters during the installation (see the example after the list):

  • reboot-days – the days kured is allowed to reboot a machine
  • start-time – reboot is possible after specified time
  • end-time – reboot is possible before specified time
  • time-zone – timezone for start-time/end-time
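
A hypothetical excerpt of the kured DaemonSet container spec with these flags set could look like this (the values are just examples, please check the flag names against the kured version you install):

      containers:
        - name: kured
          # only reboot at night on weekends, using the Berlin timezone
          command:
            - /usr/bin/kured
            - --reboot-days=sat,sun
            - --start-time=1am
            - --end-time=5am
            - --time-zone=Europe/Berlin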

Skip rebooting when certain pods are on a node

Another option that is very useful for production workloads is the possibility to skip a reboot when certain pods are present on a node. The reason for that could be that the service is very critical to your application and therefore pretty “expensive” when not available. You may want to supervise the process of rebooting such a node and be able to intervene quickly if something goes wrong.

As always in the Kubernetes environment, you can achieve this by using label selectors for kured – an option set during installation called blocking-pod-selector.

Notify via WebHook

kured also offers the possibility to call a Slack webhook when nodes are about to be rebooted. Well, we can “misuse” that webhook to trigger our own action, because such a webhook is just a simple HTTPS POST with a predefined body, e.g.:

{
   "text": "Rebooting node aks-npstandard-11778863-vmss000000",
   "username": "kured"
}

To be as flexible as possible, we leverage the 200+ Azure Logic Apps connectors that are currently available to basically do anything we want. In the current sample, we want to receive a Teams notification to a certain team/channel and send a mail to our Kubernetes admins whenever kured triggers an action.

You can find the important parts of the sample Logic App on my GitHub account. Here is a basic overview of it:

What you basically have to do is create an Azure Logic App with an HTTP trigger, parse the JSON body of the POST request and trigger “Send Email” and “Post a Teams Message” actions. When you save the Logic App for the first time, the webhook endpoint will be generated for you. Take that URL and use it as the value for the slack-hook-url parameter during the installation of kured.

If you need more information on creating an Azure Logic App, please see the official documentation: https://docs.microsoft.com/en-us/azure/connectors/connectors-native-reqres.

When everything is set up, the Teams notifications and emails you receive will look like this:

Wrap-Up

In this sample, we got to know the Kubernetes Reboot Daemon, which helps you keep your AKS cluster up to date by simply specifying a time slot where the daemon is allowed to reboot your cluster/worker nodes and apply security patches to the underlying OS. We also saw how you can make use of the “Slack” webhook feature to do basically anything you want with kured notifications by using Azure Logic Apps.

Tip: if you have a huge cluster, you should think about running multiple DaemonSets where each of them is responsible for certain nodes/nodepools. It is pretty easy to set this up, just by using Kubernetes node affinities.
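
Just as a sketch of that idea (the label key and value are assumptions and depend on how your node pools are labeled), a node affinity in the kured DaemonSet pod spec could look like this:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: agentpool
                    operator: In
                    values:
                      - npstandard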

Jürgen Gutsch: ASP.NET Hack Advent Post 21: Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first

Scott Hanselman again writes about ASP.NET Core applications on Linux. This time the post is about moving an ASP.NET Core application from a Windows to a Linux based App Service:

Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first

Again this is one of his pretty detailed and deep dive posts. You definitely have to read it, if you want to run your ASP.NET Core application on Linux.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 20: The ultimate guide to secure cookies with web.config in .NET

For today's ASP.NET Hack Advent, I found an awesome post about cookie security. This post is the latest part of a series about ASP.NET security. Cookie security is important to avoid cookie hijacking via cross-site scripting and similar attacks.

The ultimate guide to secure cookies with web.config in .NET

This post was written by Thomas Ardal, who is a speaker, software consultant and the founder of elma.io.

Twitter: https://twitter.com/thomasardal

Website: https://thomasardal.com/

Jürgen Gutsch: ASP.NET Hack Advent Post 19: Migrate a real project from ASP.NET Core 2.2 to 3.1

Because I got a lot of questions about migrating ASP.NET Core applications to 3.1, I will introduce another really good blog post about it. This time it is a post about a real project that needed to be migrated from ASP.NET Core 2.2 to 3.1. The author writes about how to update the project file and about what needs to be changed in the Startup.cs:

Migration from Asp.Net Core 2.2 to 3.1 — Real project

This post was written by Alexandre Malavasi on December 16. He is a consultant and .NET developer from Brazil, who is currently working and living in Dublin, Ireland.

Twitter: https://twitter.com/alemalavasi

Medium: https://medium.com/@alexandre.malavasi

LinkedIn: https://www.linkedin.com/in/alexandremalavasi/

Jürgen Gutsch: ASP.NET Hack Advent Post 18: The .NET Foundation has a new Executive Director

On December 16th, Jon Galloway announced that Oren Novotny will follow him as the new Executive Director of the .NET Foundation. Jon started as Executive Director in February 2016. Since then, the .NET Foundation has added a lot of value for the .NET community. It added many more awesome projects to the Foundation and provided many services for them. The .NET Foundation launched a worldwide Meetup program, where .NET related meetups get Meetup Pro for free and are marked as part of the .NET Foundation. It also supports the local communities with content and sponsorships. In March 2019 the .NET Foundation ran an election for the board's first elected directors. Oren will officially take over at the start of January. Jon will continue supporting the community as a Vice President of the .NET Foundation and as a member of the voluntary Advisory Council.

Welcoming Oren Novotny as the new Executive Director of .NET Foundation

On the same day, Oren also announced that he will follow Jon Galloway as Executive Director of the .NET Foundation. He also announced that he is joining Microsoft as a Program Manager on the .NET team under Scott Hanselman. So he is one of the many, many MVPs that join Microsoft. Congratulations :-)

.NET Foundation Executive Director, Joining Microsoft

I'm really looking forward to seeing how the .NET Foundation evolves.

Jürgen Gutsch: ASP.NET Hack Advent Post 17: Creating Common Intermediate Language projects with .NET SDK

For today's ASP.NET Hack Advent post, I found a link to one of the awesome posts of Filip W. In this post Filip describes the new project type that allows you to write .NET projects in IL code directly. He shows how to create a new Microsoft.NET.Sdk.IL project and how to write IL code. He also answers the most important question: why you might need to write IL code directly.

https://www.strathweb.com/2019/12/creating-common-intermediate-language-projects-with-net-sdk/

Filip is working as a senior software developer and lead developer near Zurich in Switzerland. He has been a Microsoft MVP since 2013 and is one of the most important, influential and well-known members of the .NET developer community. He is the creator of and main contributor to scriptcs, contributes to Roslyn, OmniSharp and many more open source projects. You should definitely follow him on Twitter and have a look into his other open source projects on GitHub: https://github.com/filipw/

Golo Roden: That was CODEx 2019

On November 4, 2019, HDI hosted the first CODEx conference in Hanover, an event for developers and IT experts. Golo Roden, author at heise Developer, was there as a speaker.

Jürgen Gutsch: ASP.NET Hack Advent Post 16: ConfigureAwait & System.Threading.Channels

Stephen Toub published two really good blog posts in the Microsoft .NET blog.

The first one is a really good and detailed FAQ style post about ConfigureAwait. If you would like to learn about ConfigureAwait, you should read it:

ConfigureAwait FAQ

The second one is an introduction into System.Threading.Channels. This post is a really good introduction and then goes deeper and deeper into the topic:

An Introduction to System.Threading.Channels

Christian Dennig [MS]: Fully automated creation of an AAD-integrated Kubernetes cluster with Terraform

Introduction

To run your Kubernetes cluster in Azure integrated with Azure Active Directory as your identity provider is a best practice in terms of security and compliance. You can give (and remove – when people are leaving your organisation) fine-grained permissions to your team members, to resources and/or namespaces as they need them. Sounds good? Well, you have to do a lot of manual steps to create such a cluster. If you don’t believe me, follow the official documentation 🙂 https://docs.microsoft.com/en-us/azure/aks/azure-ad-integration.

So, we developers are known to be lazy folks… then how can this be achieved automatically, e.g. with Terraform (which is one of the most popular tools out there to automate the creation/management of your cloud resources)? It took me a while to figure out, but here’s a working example of how to create an AAD-integrated AKS cluster with “near-zero” manual work.

The rest of this blog post will guide you through the complete Terraform script which can be found on my GitHub account.

Create the cluster

To work with Terraform (TF), it is best practice not to store the Terraform state on your workstation, as other team members also need the state information to be able to work on the same environment. So, first… let’s create a storage account in your Azure subscription to store the TF state.

Basic setup

With the commands below, we will be creating a resource group in Azure, a basic storage account and a corresponding container where the TF state will be put in.

# Resource Group

$ az group create --name tf-rg --location westeurope

# Storage Account

$ az storage account create -n tfstatestac -g tf-rg --sku Standard_LRS

# Storage Account Container

$ az storage container create -n tfstate --account-name tfstatestac --account-key `az storage account keys list -n tfstatestac -g tf-rg --query "[0].value" -otsv`

Terraform Providers + Resource Group

Of course, we need a few Terraform providers for our example. First and foremost, we need the Azure and also the Azure Active Directory resource providers.

One of the first things we need is – as always in Azure – a resource group where we will be deploying our AKS cluster to.

provider "azurerm" {
  version = "=1.38.0"
}

provider "azuread" {
  version = "~> 0.3"
}

terraform {
  backend "azurerm" {
    resource_group_name  = "tf-rg"
    storage_account_name = "tfstatestac"
    container_name       = "tfstate"
    key                  = "org.terraform.tfstate"
  }
}

data "azurerm_subscription" "current" {}

# Resource Group creation
resource "azurerm_resource_group" "k8s" {
  name     = "${var.rg-name}"
  location = "${var.location}"
}

AAD Applications for K8s server / client components

To be able to integrate AKS with Azure Active Directory, we need to register two applications in the directory. The first AAD application is the server component (Kubernetes API) that provides user authentication. The second application is the client component (e.g. kubectl) that’s used when you’re prompted by the CLI for authentication.

We will assign certain permissions to these two applications, that need “admin consent”. Therefore, the Terraform script needs to be executed by someone who is able to grant that for the whole AAD.

# AAD K8s Backend App

resource "azuread_application" "aks-aad-srv" {
  name                       = "${var.clustername}srv"
  homepage                   = "https://${var.clustername}srv"
  identifier_uris            = ["https://${var.clustername}srv"]
  reply_urls                 = ["https://${var.clustername}srv"]
  type                       = "webapp/api"
  group_membership_claims    = "All"
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = false
  required_resource_access {
    resource_app_id = "00000003-0000-0000-c000-000000000000"
    resource_access {
      id   = "7ab1d382-f21e-4acd-a863-ba3e13f7da61"
      type = "Role"
    }
    resource_access {
      id   = "06da0dbc-49e2-44d2-8312-53f166ab848a"
      type = "Scope"
    }
    resource_access {
      id   = "e1fe6dd8-ba31-4d61-89e7-88639da4683d"
      type = "Scope"
    }
  }
  required_resource_access {
    resource_app_id = "00000002-0000-0000-c000-000000000000"
    resource_access {
      id   = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "aks-aad-srv" {
  application_id = "${azuread_application.aks-aad-srv.application_id}"
}

resource "random_password" "aks-aad-srv" {
  length  = 16
  special = true
}

resource "azuread_application_password" "aks-aad-srv" {
  application_object_id = "${azuread_application.aks-aad-srv.object_id}"
  value                 = "${random_password.aks-aad-srv.result}"
  end_date              = "2024-01-01T01:02:03Z"
}

# AAD AKS kubectl app

resource "azuread_application" "aks-aad-client" {
  name       = "${var.clustername}client"
  homepage   = "https://${var.clustername}client"
  reply_urls = ["https://${var.clustername}client"]
  type       = "native"
  required_resource_access {
    resource_app_id = "${azuread_application.aks-aad-srv.application_id}"
    resource_access {
      id   = "${azuread_application.aks-aad-srv.oauth2_permissions.0.id}"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "aks-aad-client" {
  application_id = "${azuread_application.aks-aad-client.application_id}"
}

The important parts regarding the permissions are highlighted above. If you wonder what these “magic permission GUIDs” stand for, here’s a list of what will be assigned.

Microsoft Graph (AppId: 00000003-0000-0000-c000-000000000000) Permissions

GUID | Permission
7ab1d382-f21e-4acd-a863-ba3e13f7da61 | Read directory data (Application Permission)
06da0dbc-49e2-44d2-8312-53f166ab848a | Read directory data (Delegated Permission)
e1fe6dd8-ba31-4d61-89e7-88639da4683d | Sign in and read user profile

Windows Azure Active Directory (AppId: 00000002-0000-0000-c000-000000000000) Permissions

GUID | Permission
311a71cc-e848-46a1-bdf8-97ff7156d8e6 | Sign in and read user profile

After a successful run of the Terraform script, it will look like that in the portal.

AAD applications
Server app permissions

By the way, you can query the permissions of the applications (MS Graph/Azure Active Directory) mentioned above. Here’s a quick sample for one of the MS Graph permissions:

$ az ad sp show --id 00000003-0000-0000-c000-000000000000 | grep -A 6 -B 3 06da0dbc-49e2-44d2-8312-53f166ab848a
    
{
      "adminConsentDescription": "Allows the app to read data in your organization's directory, such as users, groups and apps.",
      "adminConsentDisplayName": "Read directory data",
      "id": "06da0dbc-49e2-44d2-8312-53f166ab848a",
      "isEnabled": true,
      "type": "Admin",
      "userConsentDescription": "Allows the app to read data in your organization's directory.",
      "userConsentDisplayName": "Read directory data",
      "value": "Directory.Read.All"
}

Cluster Admin AAD Group

Now that we have the script for the applications we need to integrate our cluster with Azure Active Directory, let’s also add a default AAD group for our cluster admins.

# AAD K8s cluster admin group / AAD

resource "azuread_group" "aks-aad-clusteradmins" {
  name = "${var.clustername}clusteradmin"
}

Service Principal for AKS Cluster

Last but not least, before we can finally create the Kubernetes cluster, a service principal is required. That’s basically the technical user Kubernetes uses to interact with Azure (e.g. acquire a public IP at the Azure load balancer). We will assign the role “Contributor” (for the whole subscription – please adjust to your needs!) to that service principal.

# Service Principal for AKS

resource "azuread_application" "aks_sp" {
  name                       = "${var.clustername}"
  homepage                   = "https://${var.clustername}"
  identifier_uris            = ["https://${var.clustername}"]
  reply_urls                 = ["https://${var.clustername}"]
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = false
}

resource "azuread_service_principal" "aks_sp" {
  application_id = "${azuread_application.aks_sp.application_id}"
}

resource "random_password" "aks_sp_pwd" {
  length  = 16
  special = true
}

resource "azuread_service_principal_password" "aks_sp_pwd" {
  service_principal_id = "${azuread_service_principal.aks_sp.id}"
  value                = "${random_password.aks_sp_pwd.result}"
  end_date             = "2024-01-01T01:02:03Z"
}

resource "azurerm_role_assignment" "aks_sp_role_assignment" {
  scope                = "${data.azurerm_subscription.current.id}"
  role_definition_name = "Contributor"
  principal_id         = "${azuread_service_principal.aks_sp.id}"

  depends_on = [
    azuread_service_principal_password.aks_sp_pwd
  ]
}

Create the AKS cluster

Everything is now ready for the provisioning of the cluster. But hey, we created the AAD applications but haven’t granted admin consent yet?! We can also do this via our Terraform script, and that’s what we will be doing before finally creating the cluster.

Azure is sometimes a bit too fast in sending a 200 and signalling that a resource is ready. In the background, not all services already have access to e.g. newly created applications. So it happens that things fail although they shouldn’t 🙂 Therefore, we simply wait a few seconds and give AAD time to distribute application information before kicking off the cluster creation.

# K8s cluster

# Before giving consent, wait. Sometimes Azure returns a 200, but not all services have access to the newly created applications/services.

resource "null_resource" "delay_before_consent" {
  provisioner "local-exec" {
    command = "sleep 60"
  }
  depends_on = [
    azuread_service_principal.aks-aad-srv,
    azuread_service_principal.aks-aad-client
  ]
}

# Give admin consent - SP/az login user must be AAD admin

resource "null_resource" "grant_srv_admin_constent" {
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.aks-aad-srv.application_id}"
  }
  depends_on = [
    null_resource.delay_before_consent
  ]
}
resource "null_resource" "grant_client_admin_constent" {
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.aks-aad-client.application_id}"
  }
  depends_on = [
    null_resource.delay_before_consent
  ]
}

# Again, wait for a few seconds...

resource "null_resource" "delay" {
  provisioner "local-exec" {
    command = "sleep 60"
  }
  depends_on = [
    null_resource.grant_srv_admin_constent,
    null_resource.grant_client_admin_constent
  ]
}

# Create the cluster

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "${var.clustername}"
  location            = "${var.location}"
  resource_group_name = "${var.rg-name}"
  dns_prefix          = "${var.clustername}"

  default_node_pool {
    name            = "default"
    type            = "VirtualMachineScaleSets"
    node_count      = 2
    vm_size         = "Standard_B2s"
    os_disk_size_gb = 30
    max_pods        = 50
  }
  service_principal {
    client_id     = "${azuread_application.aks_sp.application_id}"
    client_secret = "${random_password.aks_sp_pwd.result}"
  }
  role_based_access_control {
    azure_active_directory {
      client_app_id     = "${azuread_application.aks-aad-client.application_id}"
      server_app_id     = "${azuread_application.aks-aad-srv.application_id}"
      server_app_secret = "${random_password.aks-aad-srv.result}"
      tenant_id         = "${data.azurerm_subscription.current.tenant_id}"
    }
    enabled = true
  }
  depends_on = [
    azurerm_role_assignment.aks_sp_role_assignment,
    azuread_service_principal_password.aks_sp_pwd
  ]
}

Assign the AAD admin group to be cluster-admin

When the cluster is finally created, we need to assign the Kubernetes cluster role cluster-admin to our AAD cluster admin group. We simply get access to the Kubernetes cluster by adding the Kubernetes Terraform provider. Because we already have a working integration with AAD, we need to use the admin credentials of our cluster! But that is the last time we will ever need them.

To be able to use the admin credentials, we point the Kubernetes provider to use kube_admin_config which is automatically provided for us.

In the last step, we bind the cluster role to the fore-mentioned AAD cluster group id.

# Role assignment

# Use ADMIN credentials
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.aks.kube_admin_config.0.host}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)}"
}

# Cluster role binding to AAD group

resource "kubernetes_cluster_role_binding" "aad_integration" {
  metadata {
    name = "${var.clustername}admins"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind = "Group"
    name = "${azuread_group.aks-aad-clusteradmins.id}"
  }
  depends_on = [
    azurerm_kubernetes_cluster.aks
  ]
}

Run the Terraform script

Now that we have discussed all the relevant parts of the script, it’s time to let the Terraform magic happen 🙂 Run the script via…

$ terraform init

# ...and then...

$ terraform apply

Access the Cluster

When the script has finished, it’s time to access the cluster and try to log on. First, let’s do the “negative check” and try to access it without having been added as a cluster admin (AAD group member).

After downloading the user credentials and querying the cluster nodes, the OAuth 2.0 Device Authorization Grant flow kicks in and we need to authenticate against our Azure directory (as you might know it from logging in with Azure CLI).

$ az aks get-credentials --resource-group <RESOURCE_GROUP> -n <CLUSTER_NAME>

$ kubectl get nodes
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code DP9JA76WS to authenticate.

Error from server (Forbidden): nodes is forbidden: User "593736cb-1f95-4f23-bfbd-75891886b05f" cannot list resource "nodes" in API group "" at the cluster scope

Great, we get the expected authorization error!

Now add a user from the Azure Active Directory to the AAD admin group in the portal. Navigate to “Azure Active Directory” –> “Groups” and select your cluster-admin group. On the left navigation, select “Members” and add e.g. your own Azure user.

Now go back to the command line and try again. One last time, download the user credentials with az aks get-credentials (it will simply overwrite the former entry in your .kubeconfig to make sure we get the latest information from AAD).

$ az aks get-credentials --resource-group <RESOURCE_GROUP> -n <CLUSTER_NAME>

$ kubectl get nodes
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code ASGRA765S to authenticate.

NAME                              STATUS   ROLES   AGE   VERSION
aks-default-41331054-vmss000000   Ready    agent   18m   v1.13.12
aks-default-41331054-vmss000001   Ready    agent   18m   v1.13.12

Wrap Up

So, that’s all we wanted to achieve! We have created an AKS cluster with fully automated Azure Active Directory integration, added a default AAD group for our Kubernetes admins and bound it to the “cluster-admin” role of Kubernetes – all done by a Terraform script which can now be integrated with your CI/CD pipeline to create compliant and AAD-secured AKS clusters (as many as you want ;)).

Well, we also could have added a user to the admin group, but that’s the only manual step in our scenario…but hey, you would have needed to do it anyway 🙂

You can find the complete script including the variables.tf file on my GitHub account. Feel free to use it in your own projects.

House-Keeping

To remove all of the provisioned resources (service principals, AAD groups, Kubernetes service, storage accounts etc.) simply…

$ terraform destroy

# ...and then...

$ az group delete -n tf-rg

Jürgen Gutsch: ASP.NET Hack Advent Post 15: About silos and hierarchies in software development

This post is a special one. Not really related to .NET Core or ASP.NET Core, but to software development in general. I recently stumbled upon this post and while reading it I found myself remembering the days back when I needed to write code others estimated and specified for me.

About silos and hierarchies in software development

The woman who wrote this post lives in Cologne, Germany, and has worked in really special environments, like self-organizing teams and companies. I met Krisztina Hirth several times at community events in Germany. I really like her ideas, the way she thinks and the way she writes. You should definitely also read her other posts on her blog: https://yellow-brick-code.org/

Twitter: https://twitter.com/yellowbrickc

Jürgen Gutsch: ASP.NET Hack Advent Post 14: MailKit

This fourteenth post is about a cross-platform .NET library for IMAP, POP3, and SMTP.

On Twitter I got asked about sending emails out from a worker service. So I searched for the documentation about System.Net.Mail and the SmtpClient Class and was really surprised that this class was marked as obsolete. It seems I missed the announcement about this.

The .NET team recommends using MailKit and MimeKit to send emails.

Both libraries are open sourced under the MIT license and free to use in commercial projects. It seems that these libraries are really complete and provide a lot of useful features.
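To give you an idea of what sending a mail with MailKit looks like, here is a minimal sketch (SMTP host, credentials and addresses are placeholders):

using MailKit.Net.Smtp;
using MailKit.Security;
using MimeKit;

// Build the message
var message = new MimeMessage();
message.From.Add(new MailboxAddress("Sender", "sender@example.com"));
message.To.Add(new MailboxAddress("Recipient", "recipient@example.com"));
message.Subject = "Hello from MailKit";
message.Body = new TextPart("plain") { Text = "This mail was sent via MailKit." };

// Send it via SMTP
using (var client = new SmtpClient())
{
    client.Connect("smtp.example.com", 587, SecureSocketOptions.StartTls);
    client.Authenticate("username", "password");
    client.Send(message);
    client.Disconnect(true);
}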

Website: http://www.mimekit.net/

MailKit:

GitHub: https://github.com/jstedfast/MailKit

NuGet: https://www.nuget.org/packages/MailKit/

MimeKit:

GitHub: https://github.com/jstedfast/MimeKit

NuGet: https://www.nuget.org/packages/MimeKit/

Jürgen Gutsch: ASP.NET Hack Advent Post 13: .NET Conf: Focus on Blazor

The .NET Conf in September this year was great and it was a pleasure to also do a talk on the 25th, which was the community day with a lot of awesome talks from community folks around the world. I'm really looking forward to the next one and hope I can do another talk next year.

Yesterday I was surprised when I stumbled upon the announcement about another special .NET Conf that is scheduled for January 14th. This is really a special one with the focus on Blazor:

https://focus.dotnetconf.net/

The schedule isn't online yet, but will be there soon, as they wrote.

I like the idea of having special focused .NET Confs. The infrastructure with the Channel9 studios is already available, so it is cheap to set up a virtual conference like this. And I can imagine a few more topics to focus on:

  • Entity Framework Core
  • ASP.NET Core
  • Async/Await
  • Desktop
  • And maybe a MVP driven .NET Conf during the MVP Summit 2020 in March ;-)

Holger Schwichtenberg: .NET Core 3.1 ist ein ungewöhnliches Release

Almost no new features, essentially just bug fixes, and even breaking changes that actually shouldn't exist at all in a release that only changes the second digit of the version number.

Jürgen Gutsch: ASP.NET Hack Advent Post 12: .NET Rocks Podcasts

Do you like podcasts? Do you like entertaining and funny technical podcasts about .NET? I definitely do. I like to listen to them while commuting. The best and (I guess) the most famous .NET related podcast is .NET Rocks:

https://www.dotnetrocks.com/

Carl Franklin and Richard Campbell really do a great show; they invite a lot of cool and well-known experts to their shows and discuss cutting-edge topics around .NET and Microsoft technologies.

Jürgen Gutsch: ASP.NET Hack Advent Post 11: Updating an ASP.NET Core 2.2 Web Site to .NET Core 3.1

.NET Core 3.1 is out, but how to update your ASP.NET Core 2.2 application? Scott Hanselman recently wrote a pretty detailed and complete post about it.

https://www.hanselman.com/blog/UpdatingAnASPNETCore22WebSiteToNETCore31LTS.aspx

This post also includes details on how to update the deployment and hosting part on Azure DevOps and Azure App Service.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 10: Wasmtime

WebAssembly is pretty popular among .NET developers these days. With Blazor we have the possibility to run .NET assemblies inside WebAssembly in the browser.

But did you know that you can run WebAssembly outside the web and that you can run WebAssembly code without a browser? This can be done with the open-source, cross-platform application runtime called Wasmtime. With Wasmtime you are able to load and execute WebAssembly code directly from your program. Wasmtime is programmed and maintained by the Bytecode Alliance.

The Bytecode Alliance is an open source community dedicated to creating secure new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI).

Website: https://wasmtime.dev/

GitHub: https://github.com/bytecodealliance/wasmtime/

I wouldn't write about it if it weren't somehow related to .NET Core. The Bytecode Alliance just added a preview of an API for .NET Core. That means that you can now execute WebAssembly code from your .NET Core application. For more details see this blog post by Peter Huene:

https://hacks.mozilla.org/2019/12/using-webassembly-from-dotnet-with-wasmtime/

He wrote a pretty detailed blog post about Wasmtime and how to use it within a .NET Core application. Also the Bytecode Alliance added a .NET Core sample and created a NuGet package:

https://github.com/bytecodealliance/wasmtime-demos/tree/master/dotnet

https://www.nuget.org/packages/Wasmtime

So Wasmtime is the opposite of Blazor. Instead of running .NET code inside WebAssembly, you are now also able to run WebAssembly inside .NET Core.

Jürgen Gutsch: ASP.NET Hack Advent Post 09: November 2019 .NET/ASP.NET Documentation Update

For the ninth post I found a pretty useful blog post about .NET Core and ASP.NET Core documentation updates for version 3.0. This post was written by Maxime Rouiller, a former MVP, who now works for Microsoft as a Microsoft Cloud Developer Advocate.

In this post he shows all the important updates related to version 3.0, structured by topic and including links to the updated documentation. He mentions a lot of stuff that you should definitely read:

https://blog.maximerouiller.com/post/november-2019-net-aspnet-documentation-update/

BTW: I personally met Maxime during the MVP Summit back when he still was an MVP. I first met him during breakfast at one of the summit hotels. He asked the MVPs at the breakfast table to try to pronounce his name, and I was one of those who pronounced it the French way, which was right. This guy is so cool and funny. It was a pleasure to meet him.

Blog: https://blog.maximerouiller.com

Twitter: https://twitter.com/MaximRouiller

GitHub: https://github.com/MaximRouiller

Christian Dennig [MS]: Using Rook / Ceph with PVCs on Azure Kubernetes Service

Introduction

As you all know by now, Kubernetes is a quite popular platform for running cloud-native applications at scale. A common recommendation when doing so is to outsource as much state as possible, because managing state in Kubernetes is not a trivial task. It can be quite hard, especially when you have a lot of attach/detach operations on your workloads. Things can go terribly wrong and – of course – your application and your users will suffer from that. A solution that becomes more and more popular in that space is Rook in combination with Ceph.

Rook is described on their homepage rook.io as follows:

Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.

Rook is a project of the Cloud Native Computing Foundation, at the time of writing in status “incubating”.

Ceph in turn is a free-software storage platform that implements storage on a cluster, and provides interfaces for object-, block- and file-level storage. It has been around for many years in the open-source space and is a battle-proven distributed storage system. Huge storage systems have been implemented with Ceph.

So in a nutshell, Rook enables Ceph storage systems to run on Kubernetes using Kubernetes primitives. The basic architecture for that inside a Kubernetes cluster looks as follows:

rook-architecture
Rook in-cluster architecture

I won’t go into all of the details of Rook / Ceph, because I’d like to focus on simply running and using it on AKS in combination with PVCs. If you want to have a step-by-step introduction, there is a pretty good “Getting Started” video by Tim Serewicz on Vimeo:

First, we need a Cluster!

So, let’s start by creating a Kubernetes cluster on Azure. We will be using different nodepools for running our storage (nodepool: npstorage) and application workloads (nodepool: npstandard).

# Create a resource group

$ az group create --name rooktest-rg --location westeurope

# Create the cluster

$ az aks create \
--resource-group rooktest-rg \
--name myrooktestclstr \
--node-count 3 \
--kubernetes-version 1.14.8 \
--enable-vmss \
--nodepool-name npstandard \
--generate-ssh-keys

Add Storage Nodes

After the cluster has been created, add the npstorage nodepool:

az aks nodepool add --cluster-name myrooktestclstr \
--name npstorage --resource-group rooktest-rg \ 
--node-count 3 \
--node-taints storage-node=true:NoSchedule

Please be aware that we add taints to our nodes to make sure that no pods will be scheduled on this nodepool unless they explicitly tolerate the taint. We want to have these nodes exclusively for storage pods!

If you need a refresh regarding the concept of “taints and tolerations”, please see the Kubernetes documentation.
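Just as an illustration (this snippet is mine, not part of the setup below): a pod that should be allowed to run on these tainted storage nodes would need a toleration like the following in its spec:

# Hypothetical pod spec excerpt: tolerate the storage-node=true:NoSchedule taint
tolerations:
- key: "storage-node"
  operator: "Exists"
  effect: "NoSchedule"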

So, now that we have a cluster and a dedicated nodepool for storage, we can download the cluster config.

az aks get-credentials \
--resource-group rooktest-rg \
--name myrooktestclstr

Let’s look at the nodes of our cluster:

$ kubectl get nodes

NAME                                 STATUS   ROLES   AGE    VERSION
aks-npstandard-33852324-vmss000000   Ready    agent   10m    v1.14.8
aks-npstandard-33852324-vmss000001   Ready    agent   10m    v1.14.8
aks-npstandard-33852324-vmss000002   Ready    agent   10m    v1.14.8
aks-npstorage-33852324-vmss000000    Ready    agent   2m3s   v1.14.8
aks-npstorage-33852324-vmss000001    Ready    agent   2m9s   v1.14.8
aks-npstorage-33852324-vmss000002    Ready    agent   119s   v1.14.8

So, we now have three nodes for storage and three nodes for our application workloads. From an infrastructure level, we are now ready to install Rook.

Install Rook

Let’s start installing Rook by cloning the repository from GitHub:

$ git clone https://github.com/rook/rook.git

After we have downloaded the repo to our local machine, there are three steps we need to perform to install Rook:

  1. Add Rook CRDs / namespace / common resources
  2. Add and configure the Rook operator
  3. Add the Rook cluster

So, switch to the /cluster/examples/kubernetes/ceph directory and follow the steps below.

1. Add Common Resources

$ kubectl apply -f common.yaml

The common.yaml contains the namespace rook-ceph, common resources (e.g. clusterroles, bindings, service accounts etc.) and some Custom Resource Definitions from Rook.

2. Add the Rook Operator

The operator is responsible for managing Rook resources and needs to be configured to run on Azure Kubernetes Service. To manage Flex Volumes, AKS uses a directory that’s different from the “default directory”. So, we need to tell the operator which directory to use on the cluster nodes.

Furthermore, we need to adjust the settings for the CSI plugin so that the corresponding daemonsets also run on the storage nodes (remember, we added taints to these nodes; by default, the daemonset pods Rook needs won’t be scheduled on our storage nodes – we need to “tolerate” the taint).

So, here’s the full operator.yaml file (→ important parts)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph
  labels:
    operator: rook
    storage-backend: ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      serviceAccountName: rook-ceph-system
      containers:
      - name: rook-ceph-operator
        image: rook/ceph:master
        args: ["ceph", "operator"]
        volumeMounts:
        - mountPath: /var/lib/rook
          name: rook-config
        - mountPath: /etc/ceph
          name: default-config-dir
        env:
        - name: ROOK_CURRENT_NAMESPACE_ONLY
          value: "false"
        - name: FLEXVOLUME_DIR_PATH
          value: "/etc/kubernetes/volumeplugins"
        - name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS
          value: "false"
        - name: ROOK_LOG_LEVEL
          value: "INFO"
        - name: ROOK_CEPH_STATUS_CHECK_INTERVAL
          value: "60s"
        - name: ROOK_MON_HEALTHCHECK_INTERVAL
          value: "45s"
        - name: ROOK_MON_OUT_TIMEOUT
          value: "600s"
        - name: ROOK_DISCOVER_DEVICES_INTERVAL
          value: "60m"
        - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
          value: "false"
        - name: ROOK_ENABLE_SELINUX_RELABELING
          value: "true"
        - name: ROOK_ENABLE_FSGROUP
          value: "true"
        - name: ROOK_DISABLE_DEVICE_HOTPLUG
          value: "false"
        - name: ROOK_ENABLE_FLEX_DRIVER
          value: "false"
        # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
        # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs. --> CHANGED to false
        - name: ROOK_ENABLE_DISCOVERY_DAEMON
          value: "false"
        - name: ROOK_CSI_ENABLE_CEPHFS
          value: "true"
        - name: ROOK_CSI_ENABLE_RBD
          value: "true"
        - name: ROOK_CSI_ENABLE_GRPC_METRICS
          value: "true"
        - name: CSI_ENABLE_SNAPSHOTTER
          value: "true"
        - name: CSI_PROVISIONER_TOLERATIONS
          value: |
            - effect: NoSchedule
              key: storage-node
              operator: Exists
        - name: CSI_PLUGIN_TOLERATIONS
          value: |
            - effect: NoSchedule
              key: storage-node
              operator: Exists
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: rook-config
        emptyDir: {}
      - name: default-config-dir
        emptyDir: {}

3. Create the Cluster

Deploying the Rook cluster is as easy as installing the Rook operator. As we are running our cluster with the Azure Kubernetes Service – a managed service – we don’t want to manually add disks to our storage nodes. Also, we don’t want to use a directory on the OS disk (which most of the examples out there will show you), as this will be deleted when the node is upgraded to a new Kubernetes version.

In this sample, we want to leverage Persistent Volumes / Persistent Volume Claims that will be used to request Azure Managed Disks which will in turn be dynamically attached to our storage nodes. Thankfully, when we installed our cluster, a corresponding storage class for using Premium SSDs from Azure was also created.

$ kubectl get storageclass

NAME                PROVISIONER                AGE
default (default)   kubernetes.io/azure-disk   15m
managed-premium     kubernetes.io/azure-disk   15m

Now, let’s create the Rook cluster. Again, we need to adjust the tolerations and add a node affinity so that our OSDs will be scheduled on the storage nodes (→ important parts):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
    volumeClaimTemplate:
      spec:
        storageClassName: managed-premium
        resources:
          requests:
            storage: 10Gi
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
    allowUnsupported: false
  dashboard:
    enabled: true
    ssl: true
  network:
    hostNetwork: false
  storage:
    storageClassDeviceSets:
    - name: set1
      # The number of OSDs to create from this device set
      count: 4
      # IMPORTANT: If volumes specified by the storageClassName are not portable across nodes
      # this needs to be set to false. For example, if using the local storage provisioner
      # this should be false.
      portable: true
      # Since the OSDs could end up on any node, an effort needs to be made to spread the OSDs
      # across nodes as much as possible. Unfortunately the pod anti-affinity breaks down
      # as soon as you have more than one OSD per node. If you have more OSDs than nodes, K8s may
      # choose to schedule many of them on the same node. What we need is the Pod Topology
      # Spread Constraints, which is alpha in K8s 1.16. This means that a feature gate must be
      # enabled for this feature, and Rook also still needs to add support for this feature.
      # Another approach for a small number of OSDs is to create a separate device set for each
      # zone (or other set of nodes with a common label) so that the OSDs will end up on different
      # nodes. This would require adding nodeAffinity to the placement here.
      placement:
        tolerations:
        - key: storage-node
          operator: Exists
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - npstorage
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd-prepare
              topologyKey: kubernetes.io/hostname
      resources:
        limits:
          cpu: "500m"
          memory: "4Gi"
        requests:
          cpu: "500m"
          memory: "2Gi"
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          resources:
            requests:
              storage: 100Gi
          storageClassName: managed-premium
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

So, after a few minutes, you will see some pods running in the rook-ceph namespace. Make sure that the OSD pods are running before continuing with configuring the storage pool.

$ kubectl get pods -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-4qxsv                                            3/3     Running     0          28m
csi-cephfsplugin-d2klt                                            3/3     Running     0          28m
csi-cephfsplugin-jps5r                                            3/3     Running     0          28m
csi-cephfsplugin-kzgrt                                            3/3     Running     0          28m
csi-cephfsplugin-provisioner-dd9775cd6-nsn8q                      4/4     Running     0          28m
csi-cephfsplugin-provisioner-dd9775cd6-tj826                      4/4     Running     0          28m
csi-cephfsplugin-rt6x2                                            3/3     Running     0          28m
csi-cephfsplugin-tdhg6                                            3/3     Running     0          28m
csi-rbdplugin-6jkx5                                               3/3     Running     0          28m
csi-rbdplugin-clfbj                                               3/3     Running     0          28m
csi-rbdplugin-dxt74                                               3/3     Running     0          28m
csi-rbdplugin-gspqc                                               3/3     Running     0          28m
csi-rbdplugin-pfrm4                                               3/3     Running     0          28m
csi-rbdplugin-provisioner-6dfd6db488-2mrbv                        5/5     Running     0          28m
csi-rbdplugin-provisioner-6dfd6db488-2v76h                        5/5     Running     0          28m
csi-rbdplugin-qfndk                                               3/3     Running     0          28m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000c8gdp   1/1     Running     0          16m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000tfk2s   1/1     Running     0          13m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000xfnhx   1/1     Running     0          13m
rook-ceph-crashcollector-aks-npstorage-33852324-vmss000001c6cbd   1/1     Running     0          5m31s
rook-ceph-crashcollector-aks-npstorage-33852324-vmss000002t6sgq   1/1     Running     0          2m48s
rook-ceph-mgr-a-5fb458578-s2lgc                                   1/1     Running     0          15m
rook-ceph-mon-a-7f9fc6f497-mm54j                                  1/1     Running     0          26m
rook-ceph-mon-b-5dc55c8668-mb976                                  1/1     Running     0          24m
rook-ceph-mon-d-b7959cf76-txxdt                                   1/1     Running     0          16m
rook-ceph-operator-5cbdd65df7-htlm7                               1/1     Running     0          31m
rook-ceph-osd-0-dd74f9b46-5z2t6                                   1/1     Running     0          13m
rook-ceph-osd-1-5bcbb6d947-pm5xh                                  1/1     Running     0          13m
rook-ceph-osd-2-9599bd965-hprb5                                   1/1     Running     0          5m31s
rook-ceph-osd-3-557879bf79-8wbjd                                  1/1     Running     0          2m48s
rook-ceph-osd-prepare-set1-0-data-sv78n-v969p                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-1-data-r6d46-t2c4q                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-2-data-fl8zq-rrl4r                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-3-data-qrrvf-jjv5b                     0/1     Completed   0          15m

Configuring Storage

Before Rook can provision persistent volumes, either a filesystem or a storage pool should be configured. In our example, a Ceph Block Pool is used:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3

Next, we also need a storage class that will be using the Rook cluster / storage pool. In our example, we will not be using Flex Volume (which will be deprecated in future versions of Rook/Ceph); instead we use the Container Storage Interface (CSI).

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    clusterID: rook-ceph
    pool: replicapool
    imageFormat: "2"
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
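Apply the block pool and the storage class – the file names here are placeholders, use whatever you saved the two manifests as:

$ kubectl apply -f pool.yaml
$ kubectl apply -f storageclass.yaml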

Test

Now, let’s have a look at the dashboard, which was also installed when we created the Rook cluster. To access it, we port-forward the dashboard service to our local machine. The service itself is secured by username/password. The default username is admin and the password is stored in a Kubernetes secret. To get the password, simply run the following command.

$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password \ 
    -o jsonpath="{['data']['password']}" | base64 --decode && echo
# copy the password

$ kubectl port-forward svc/rook-ceph-mgr-dashboard 8443:8443 \ 
    -n rook-ceph

Now access the dashboard by heading to https://localhost:8443/#/dashboard

Screenshot 2019-12-08 at 22.25.01
Ceph Dashboard

As you can see, everything looks healthy. Now let’s create a pod that’s using a newly created PVC leveraging that Ceph storage class.

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pv-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Pod

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pv-pod
spec:
  volumes:
    - name: ceph-pv-claim
      persistentVolumeClaim:
        claimName: ceph-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: ceph-pv-claim
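Apply the claim and the pod (the file names again are placeholders) and check that the claim gets bound:

$ kubectl apply -f ceph-pvc.yaml
$ kubectl apply -f ceph-pod.yaml

$ kubectl get pvc ceph-pv-claim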

As a result, you will now have an NGINX pod running in your Kubernetes cluster with a PV attached/mounted under /usr/share/nginx/html.

Wrap Up

So…what exactly did we achieve with this solution now? We have created a Ceph storage cluster on an AKS that uses PVCs to manage storage. Okay, so what? Well, the usage of volume mounts in your deployments with Ceph is now super-fast and rock-solid, because we do not have to attach physical disks to our worker nodes anymore. We just use the ones we have created during Rook cluster provisioning (remember these four 100GB disks?)! We minimized the amount of “physical attach/detach” actions on our nodes. That’s why now, you won’t see these popular “WaitForAttach”- or “Can not find LUN for disk”-errors anymore.

Hope this helps someone out there! Have fun with it.

Update: Benchmarks

Short update on this. Today, I did some benchmarking with dbench (https://github.com/leeliu/dbench/), comparing Rook Ceph and “plain” PVCs with the same Azure Premium SSD disks (default AKS StorageClass managed-premium, VM types: Standard_DS2_v2). Here are the results…as you can see, it depends on your workload…so, judge for yourself.
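If you want to reproduce the numbers: dbench is essentially a Kubernetes job. Roughly (this is from memory – check the project README for the exact steps), you download dbench.yaml from the repository, set storageClassName to the class you want to test (e.g. rook-ceph-block or managed-premium) and run:

$ kubectl apply -f dbench.yaml

# follow the benchmark output
$ kubectl logs -f job/dbench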

Rook Ceph

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 10.6k/571. BW: 107MiB/s / 21.2MiB/s
Average Latency (usec) Read/Write: 715.53/31.70
Sequential Read/Write: 100MiB/s / 43.2MiB/s
Mixed Random Read/Write IOPS: 1651/547

PVC with Azure Premium SSD

A 100 GB disk was used to have a fair comparison

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 8155/505. BW: 63.7MiB/s / 63.9MiB/s
Average Latency (usec) Read/Write: 505.73/
Sequential Read/Write: 63.6MiB/s / 65.3MiB/s
Mixed Random Read/Write IOPS: 1517/505

Jürgen Gutsch: ASP.NET Hack Advent Post 08: Hanselman debugs a .NET Core Linux app in WSL2 with VS on Windows

Scott Hanselman also loves hacking – hacking on small devices, on Windows and on Linux. In the post I want to introduce here, he shows how to debug a .NET Core Linux app that runs in WSL2 with Visual Studio on Windows:

Remote Debugging a .NET Core Linux app in WSL2 from Visual Studio on Windows

This is one of those posts where he puts things together that might not match, or things that didn't match in the past. Even though the fact that Linux runs natively inside Windows was hard to imagine in the past, the fact that we as developers are able to remote-debug a .NET Core app on any platform is incredibly awesome. Hacking things together that might not match is the most interesting topic for me as well. Things like getting .NET apps running on Linux-based small devices like the Raspberry Pi, or hosting Mono-based ASP.NET WebForms apps on an Apache running on SUSE Linux, were things I did in the past and still do whenever I find some time. This is why I really love those posts written by Hanselman.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 07: Blazorise

Recently I stumbled upon a really cool project that provides frontend components for Blazor. It supports Blazor Server as well as Blazor WebAssembly on the client side. I found that project while I was searching for a chart component for a Blazor demo application I'm currently working on.

This project is called Blazorise; it is completely open source and hosted on GitHub. It is built on top of Blazor and CSS frameworks like Bootstrap, Material and Bulma. (Actually, I've never heard of Bulma before.)

Blazorise contains a lot of useful components, including a library to create Charts and DataGrids. It is actively maintained, well documented and also has demo pages for all three CSS Framework implementations.

If you are working with Blazor, you should have a look at it:

Website: https://blazorise.com/

GitHub: https://github.com/stsrki/Blazorise

Jürgen Gutsch: ASP.NET Hack Advent Post 06: Andrew Lock's blog

This sixth post is about a blog that is full of different, but detailed posts about .NET Core and ASP.NET Core. The blog's name ".NET Escapades" kind of says it all: the author writes about almost everything he experiences related to .NET Core and ASP.NET Core.

This blog is run by Andrew Lock, a full-stack ASP.NET developer living in Devon (UK). Like the other blog authors I introduced in the previous advent posts, he is a Microsoft MVP and pretty much involved and well known in the .NET developer community.

He also published the book ASP.NET Core in Action in June last year.

Blog: https://andrewlock.net/

GitHub: https://github.com/andrewlock

Twitter: https://twitter.com/andrewlocknet

Golo Roden: Wie viele Programmiersprachen sind zu viel?

Various approaches increasingly make it possible to combine different programming languages within one project. But not everything that is technically possible also makes sense.

Code-Inside Blog: Did you know that you can build .NET Core apps with MSBuild.exe?

The problem

We recently updated a bunch of our applications to .NET Core 3.0. Because of the compatibility changes to the “old framework” we are trying to move more and more projects to .NET Core, but some projects still target .NET Framework 4.7.2 – they should still work “ok-ish” when used from .NET Core 3.0 applications.

The first tests were quite successful, but unfortunately when we tried to build and publish the updated .NET Core 3.0 app via ‘dotnet publish’ (with a reference to a .NET Framework 4.7.2 app) we faced this error:

C:\Program Files\dotnet\sdk\3.0.100\Microsoft.Common.CurrentVersion.targets(3639,5): error MSB4062: The "Microsoft.Build.Tasks.AL" task could not be loaded from the assembly Microsoft.Build.Tasks.Core, Version=15.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a.  Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. 

The root cause

After some experiments we saw a pattern:

Each .NET Framework 4.7.2 based project with a ‘.resx’ file would result in the above error.

The solution

‘.resx’ files are still a valid thing to use, so we checked whether we could work around this problem, but unfortunately this was not super successful. We moved some resources, but in the end some resources had to stay in the corresponding file.

We used the ‘dotnet publish…’ command to build and publish .NET Core based applications, but then I tried to build the .NET Core application from MSBuild.exe and discovered that this worked.

Lessons learned

If you have a mixed environment with “old” .NET Framework based applications with resources in use and want to use this in combination with .NET Core: Try to use the “old school” MSBuild.exe way.

MSBuild.exe is capable of building .NET Core applications and it is more or less the same.
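A minimal sketch of what that can look like, run from a Visual Studio 2019 Developer Command Prompt – the solution and project names are of course placeholders:

REM Restore and publish with "classic" MSBuild instead of dotnet publish
msbuild MySolution.sln /t:Restore /p:Configuration=Release
msbuild MyWebApp\MyWebApp.csproj /t:Publish /p:Configuration=Release /p:PublishDir=..\publish\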

Be aware

Regarding ASP.NET Core applications: The ‘dotnet publish’ command will create a web.config file - if you use the MSBuild approach this file will not be created automatically. I’m not sure if there is a hidden switch, but if you just treat .NET Core apps like .NET Framework console applications the web.config file is not generated. This might lead to some problems when you deploy this to an IIS.
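For reference, the web.config that ‘dotnet publish’ generates for an ASP.NET Core 3.0 app hosted in IIS looks roughly like this (the assembly name is a placeholder), so you may need to add something similar by hand:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.webServer>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <!-- arguments points to the main application assembly -->
      <aspNetCore processPath="dotnet" arguments=".\MyWebApp.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>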

Hope this helps!

Holger Schwichtenberg: Ist ASP.NET Core Blazor nun fertig oder noch nicht?

The Dotnet-Doktor explains the difference between Blazor Server (RTM status) and Blazor WebAssembly (preview status).

Golo Roden: Tools für Web- und Cloud-Entwicklung

The last episode of "Götz & Golo" dealt with the question of when teams work well together, focusing on remote versus on-site work. But what about the tools being used?

Holger Schwichtenberg: User-Group-Vortrag und Workshop zu Continuous Delivery mit Azure DevOps

The Dotnet-Doktor gives a talk on November 7 and offers a workshop in Essen from December 2 to 4.

Norbert Eder: MySQL-Queries mitloggen

With Microsoft SQL Server you can log SQL queries quite easily by simply starting the SQL Server Profiler. MySQL does not offer such a tool – at least MySQL Workbench cannot do it. Nevertheless, the executed queries can be recorded.

For example, you can write all queries to a log file:

SET global general_log_file='c:/Temp/mysql.log'; 
SET global general_log = on; 
SET global log_output = 'file';

Of course, this can be deactivated again:

SET global general_log = off; 

Further information can be found in the MySQL documentation.
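As a small addition from my side (not from the original post): instead of a file, MySQL can also write the general log to a table, which is sometimes easier to query:

-- Log to the mysql.general_log table instead of a file
SET global log_output = 'table';
SET global general_log = on;

-- Inspect the recorded queries
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 50;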


Christina Hirth : My Reading List @KDDDConf

(formerly known as KanDDDinsky 😉)

Accelerate - Building and Scaling High Performing Technology Organizations

Accelerate by Nicole Forsgren, Gene Kim, Jez Humble

This book was referenced in a lot of talks, mostly with the same phrase: “hey folks, you have to read this!”


Domain Modeling Made Functional by Scott Wlaschin

The book was called the only real, currently published reference work on DDD for functional programming.

More books and videos to find on fsharpforfunandprofit


Functional Core, Imperative Shell by Gary Bernhardt – a talk

The comments on this tweet tell me that watching this video is long overdue …


37 Things One Architect Knows About IT Transformation by Gregor Hohpe

The name @ghohpe was also mentioned a few times at @KDDDconf


Domain Storytelling

A Collaborative Modeling Method

by Stefan Hofer and Henning Schwentner


Drive: The surprising truth about what motivates us by Daniel H Pink

There is also a TLDR-Version: a talk on vimeo


Sapiens – A Brief History of Humankind by Yuval Noah Harari

This book was recommended by @weltraumpirat after our short discussion about how broken our industry is. Thank you Tobias! I’m afraid the book will give me no happy ending.

UPDATE:

It is not a take-away from KDDD-Conf but still a must-have book (thank you Thomas): The Phoenix Project

Code-Inside Blog: IdentityServer & Azure AD Login: Unkown Response Type text/html

The problem

Last week we had some problems with our Microsoft Graph / Azure AD login based system. From a user perspective it was all good until the redirect from the Microsoft Account to our IdentityServer.

As the STS and for all auth-related stuff we use the excellent IdentityServer4.

We used the following configuration:

services.AddAuthentication()
            .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
            {
                options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                options.ClientSecret = office365Config.MicrosoftAppClientSecret;    // Client-Secret from the AppRegistration 
                options.Authority = office365Config.AuthorizationEndpoint;          // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                options.ResponseType = "code id_token";
                options.GetClaimsFromUserInfoEndpoint = true;
                options.SaveTokens = true;
                options.CallbackPath = "/oidc-signin"; 
                
                foreach (var scope in office365Scopes)
                {
                    options.Scope.Add(scope);
                }
            });

The “office365config” contains the basic OpenId Connect configuration entries like ClientId and ClientSecret and the needed scopes.

Unfortunately, with this configuration we couldn’t log in to our system, because after we successfully signed in to the Microsoft Account this error occurred:

System.Exception: An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
   at IdentityServer4.Hosting.FederatedSignOut.AuthenticationRequestHandlerWrapper.HandleRequestAsync() in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\FederatedSignOut\AuthenticationRequestHandlerWrapper.cs:line 38
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Cors.Infrastructure.CorsMiddleware.InvokeCore(HttpContext context)
   at IdentityServer4.Hosting.BaseUrlMiddleware.Invoke(HttpContext context) in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\BaseUrlMiddleware.cs:line 36
   at Microsoft.AspNetCore.Server.IIS.Core.IISHttpContextOfT`1.ProcessRequestAsync()

Fix

After some code research I found the problematic code:

We just needed to disable “GetClaimsFromUserInfoEndpoint” and everything worked. I’m not sure why the error occurred, because this code was more or less untouched for a couple of months and worked as intended. I’m not even sure what “GetClaimsFromUserInfoEndpoint” really does in combination with a Microsoft Account.

I wasted one or two hours with this behavior and maybe this will help someone in the future. If someone knows why this happened: Use the comment section or write me an email :)

Full code:

   services.AddAuthentication()
                .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
                {
                    options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                    options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                    options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                    options.ClientSecret = office365Config.MicrosoftAppClientSecret;  // Client-Secret from the AppRegistration 
                    options.Authority = office365Config.AuthorizationEndpoint;        // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                    options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                    options.ResponseType = "code id_token";
                    // Don't enable the UserInfoEndpoint, otherwise this may happen
                    // An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
                    // at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
                    options.GetClaimsFromUserInfoEndpoint = false; 
                    options.SaveTokens = true;
                    options.CallbackPath = "/oidc-signin"; 
                    
                    foreach (var scope in office365Scopes)
                    {
                        options.Scope.Add(scope);
                    }
                });

Hope this helps!

Martin Richter: Note 1 für den Support von Schaudin / RC-WinTrans

For years we have been using RC-WinTrans from Schaudin.com for the multilingual support of our software.

Due to a change in VC-2019 16.3.3, RC files are no longer saved with ANSI codepage 1252 but always as UTF-8 files. That means all RC files that are not already UTF-8 or UTF-16 are forcibly converted to UTF-8.

Now we had a problem: our tools from Schaudin (RC-WinTrans) cannot handle UTF-8 in the version we use. First I opened a case with Microsoft, because such a forced encoding is a no-go for me.

A question on Stack Overflow brought no insight, except that the problem is already known under several incidents:
Link1, Link2, Link3

So I turned to Schaudin's support. Newer versions of their tools cannot process UTF-8, but they can process UTF-16, so we would simply have to buy an update.
After a few emails back and forth, Schaudin offered me the next version after mine (which also supports UTF-16) free of charge.

I am a bit speechless! Getting the next version like that (for free) is not exactly common in our world.

I say thank you and give the company Schaudin the top grade ("Note 1") for goodwill and support.


Copyright © 2017 Martin Richter
This feed is intended for personal, non-commercial use only. Using this feed or the posts published here on other websites requires the express permission of the author.

Holger Schwichtenberg: Aktuelle Fachbücher zu C# 8.0 und Entity Framework Core 3.0

The Dotnet-Doktor has updated his books on C# 8.0 and Entity Framework Core 3.0 to the final versions released on September 23, 2019.

Norbert Eder: Cascadia Code: Neuer Font für Visual Studio Code

Microsoft has released a new monospaced font (for Visual Studio Code, Windows Terminal, etc.): Cascadia Code.

This is a fun, new monospaced font that includes programming ligatures and is designed to enhance the modern look and feel of the Windows Terminal.

I have tested the font and can recommend it. Here is how you can use it, too:

Installation

Open the Cascadia Code releases page. Click on Cascadia.ttf to download the file to your computer, then open the font with the Windows font viewer.

At the top left, the font can now be installed and registered on the system via "Install".

After that, the font can be used in any application.

Changing the font in Visual Studio Code

Under File > Preferences > Settings > Text Editor > Font you can change the font used in Visual Studio Code. Simply enter 'Cascadia Code', Consolas, 'Courier New', monospace in the Font Family field. To use ligatures, the corresponding flag has to be enabled:

Configuring Cascadia Code and ligatures in Visual Studio Code
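The same can also be configured directly in settings.json – a minimal sketch of the two relevant settings:

{
  "editor.fontFamily": "'Cascadia Code', Consolas, 'Courier New', monospace",
  "editor.fontLigatures": true
}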


Christina Hirth : About silos and hierarchies in software development

Disclaimer: this is NOT a rant about people. In most situations, all the devs I know want to deliver good work. This is a rant about organisations imposing such structures while calling themselves “an agile company”.

To give you some context: a digital product, sold online as a subscription. The application in my scenario is the usual admin portal to manage customers, get an overview of their payment situation, like balance, etc.
The application is built and maintained by a frontend team. The team is using the GraphQL API built and maintained by a backend team. Every team has a team lead and over all of them is at least one other lead. (Of course there are also a lot of other middle-management, etc.) 

Some time ago somebody must have decided to include in the API a field called “total” containing the balance of the customer, so that it can be displayed in the portal. Obviously I cannot know what happened (I’m just a user of this product), but the fact is, this total was implemented as an integer. Do you see the problem? We are talking about money displayed on the website, about a balance which is almost never an integer. This small mistake made the whole feature unusable.

Point 1: Devs implement technical requests instead of improving the product 
I don’t know if the developer who implemented this made an error by not thinking about what this total should represent, or if he/she simply didn’t have the experience in e-commerce, but that is not my point. My point is that this person was obviously not involved in the discussion about this feature, why it is needed, what the benefit is. I can see it with my spiritual eyes how this feature was turned into code: the team lead, software lead (xyz lead) decided that this task has to be done. The task didn’t refer to the customer benefit; it stripped everything down to “include a new property called total having as value the sum of some other numbers”. I can see it because I had a lot of meetings like this. I delivered a string to the other team and this string was sometimes a URL and sometimes a name. But I did this in a company which didn’t call itself agile.

Point 2: No chance for feedback, no chance for commitment for the product
Again: I wasn’t there when this feature was requested and built, I can only imagine that this is what happened, but it really doesn’t matter. It is not about a specific company or specific people but about the ability to deliver features instead of just some lines of code sold as a product. Back to my “total”: this code was reviewed, integrated, deployed to development, then to some in-between stages and finally to production. NOBODY in this whole chain asked himself whether the new field included in a public(!) API was implemented as it should be. And I would bet that nobody from the frontend team was asked to review the API to see if their needs could be fulfilled.

Point 3: Power play, information hiding makes teams slow artificially (and kills innovation and the wish to commit themselves to the product they build) 
If this structure weren’t built on power and position and titles, then the first person observing the error could have talked to the very first developer in the team responsible for the feature to correct it. They could have changed it in a few minutes (this was the first person noticing the error, ergo nobody was using it yet) and everybody would have been happy. But not if you have leads of every kind who must be involved in everything (because this is why they have their position, isn’t it?). Then somebody young and enthusiastic wanting to deliver a good product would create a JIRA ticket. In a week or two this ticket will eventually be discussed (by the leads, of course) and analyzed, and it will eventually be moved forward in the backlog – or not. It doesn’t matter anyway, because the frontend team had a deadline and had to solve their problem somehow.

Epilogue: the culture of “talk only to the leads” bans the cooperation between teams
At this moment I finally understood the reason behind another annoying behavior in the admin panel: the balance is calculated in the frontend and is equal to the sum of the shown items. I needed some time to discover this and was always wondering WTF… Now I can see what happened: the total in the API was not a total (only the integer part of the balance) and the ticket had to be finished, so somebody had the idea to create a total by adding up the values from the displayed items. Unfortunately this was a very short-sighted idea, because it only works if you have fewer than 25 payments, the default number of items per page. Or you can use the calculator app to add up the single totals on every page…

All this is wrong on so many levels! For everyone involved it is a lose-lose situation.

What do you think? Is it only me arguing for a better “habitat for devs”, or is it time for this kind of structure to disappear?

Golo Roden: Virtuell vereint: Wann Teams remote und/oder vor Ort gut zusammenarbeiten

There is no blanket answer to the question of on-site versus remote work. Teams work well when they pursue shared goals of their own accord.
