Marco Scheel: Microsoft Teams Live Events for (crisis) presentations

Online meetings have become indispensable in today's companies. Microsoft Teams is the meeting solution in the Microsoft 365 service. A Teams meeting lets every participant take an active part in the conversation, and the meeting organizer has only little control. Participants have to bring the discipline to behave "correctly" in the meeting. The current corona crisis shows that many participants struggle with this more than expected. Especially for newcomers, the various options in the software are unfamiliar and the ground rules for a good meeting may be unknown. It would be great if the software offered better support here, but the current state ("Mute all", …) is not going to change in the short term.

Today I want to show you an alternative to the classic meeting. It will not fit every situation, but you should get to know it and decide for yourself. I was approached by a school asking what options there are for teaching students online. That audience is not particularly well trained when it comes to good meeting culture :) Parents and students are often beginners, and mistakes simply happen. In this situation it can make sense to switch to a Microsoft Teams live event.

Microsoft Teams Live Event

A live event can be scheduled and run from the Teams client. Unlike a normal meeting, there is a clear separation between the meeting organizer and the attendees. The organizer becomes the producer of the event (meeting) and has to use the Microsoft Teams client. They can invite additional people as producers, which makes sense if you want to pull off such a meeting professionally. Attendees use the browser (no plug-ins or anything similar) or, if available, the Teams client. In the school scenario you can draw very few conclusions about the attendees' technical capabilities, so it is good that the solution copes with basically every option.

There are two links for joining the meeting. One is the producer link (similar to a normal Teams meeting link); the attendee link is only available through the Teams client. After the producers have joined, the meeting (better: live event) still has to be started in the software, otherwise the attendees will not see anything.

The meeting is streamed to the attendees with a delay (10-40 seconds). Attendees can only communicate with the producers through the built-in question and answer feature (Q&A). Viewers can pause the meeting at any time and rewind. More than 50 languages are available for optional live captions.

Overview of the limits

Preparing a live event

The user needs an Office 365 license (E1 or higher, A3 or higher), and the administrator can control who is allowed to create events through a live event policy in the admin center. By default, external users are not allowed to join. For the school scenario it is therefore important to change this setting.

Teams Admin Center - Meetings - Live event policies - Global (Org-wide default)

"Who can join scheduled live events" = "Everyone" instead of "Everyone in your organization".
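If you prefer scripting, the same org-wide default can presumably be changed with the Teams PowerShell cmdlets for live event (broadcast) policies. A minimal sketch, assuming a connected Teams/Skype for Business Online PowerShell session; cmdlet and parameter names are from memory, so verify them against your module version:

# Hedged sketch: allow everyone (including anonymous/external users) to join scheduled live events.
Set-CsTeamsMeetingBroadcastPolicy -Identity Global -BroadcastAttendeeVisibilityMode Everyone

# Check the result
Get-CsTeamsMeetingBroadcastPolicy -Identity Global | Select-Object Identity, BroadcastAttendeeVisibilityMode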

image

Scheduling a live event

In the Teams app, switch to the calendar, open the drop-down in the upper right corner and choose "Live event":

image

You can also start a live event from Yammer. Go to the corresponding group/community and you will find the option to schedule a live event on the right in the "Group Actions" area.

image

Microsoft Stream can start live events as well, but there you have to use dedicated encoder software and cannot use Microsoft Teams as the producer.

image

We take the Microsoft Teams route and create a live event. You can find Microsoft's documentation here. As with any meeting, you have to enter a title plus a start and end time. The optional location field can be left empty or set to Microsoft Teams or Online. This is also where you invite the additional presenters. These people come from your organization and will later help you produce the content in the meeting.

image

On the next page you are asked about permissions. If your administrator has set the right defaults, you can select "Public". For the school scenario, Public is mandatory. If you set up a live event for your colleagues, you can of course also choose org-wide or individual people. In the latter two cases a company login is required. Important: this is about permissions, not about who gets invited. Even in an org-wide event, only those who received the attendee link can join!

image

The live event is already marked as a Teams event and therefore only offers the Teams-relevant settings. Decide whether the recording should be made available to all attendees. Depending on the audience it can make sense to offer live captions. More than 50 languages are available, but a single event can only offer 6 at a time. You have to decide which language is spoken and into which languages it can be translated. The attendee report lets you see after the meeting (as a CSV) who attended, as well as when and how often they joined. The question and answer feature (Q&A) should always be enabled so attendees and producers can interact.

image

Creating the live event is now complete and it is displayed.

image

From this view you can also copy the attendee link. This link has to be forwarded to all external (or internal) attendees. Ideally you pick the communication channel that is common for the audience (Teams, email, WhatsApp, …).

image

The attendee link looks roughly like any other meeting link:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_…
but the attendee link ends with
IsBroadcastMeeting=true
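If you juggle both links, a trivial check like the following (a sketch, the URL is just a placeholder) helps to make sure you are handing out the attendee link and not the producer link:

# Hedged sketch: attendee links for live events carry the IsBroadcastMeeting=true suffix.
$link = "https://teams.microsoft.com/l/meetup-join/19%3ameeting_..."   # placeholder, not a real link

if ($link -like "*IsBroadcastMeeting=true*") {
    Write-Output "Attendee link - safe to share with the audience."
} else {
    Write-Output "Producer link - do NOT share with attendees."
}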

Producing a live event

All producers should join the meeting well before the actual live event starts and make the necessary preparations. For a producer, joining looks just like any other meeting (the difference is in the title: "Join as a producer"). If you want to transmit your video, you obviously have to activate the webcam. Note: unfortunately there is no background blur in live events, so watch your surroundings!

image

Via the settings (the gear icon labelled with the current audio device) you can pick the right audio device. It is extremely important to use the highest-quality components here. Laptop microphones and speakers usually produce the worst result. A headset with a decent microphone makes it easier for everyone and reduces background noise. The auditorium mode should not be relevant in times of social distancing, but it may be of interest for future events so that Teams does not filter out the audience. Note: if you are unsure, test your options with a test call.

image

Once you have joined the live event you see the producer interface, and at the latest now it becomes clear that this is not a normal meeting and why you should practice beforehand :) There are two content panes. On the left is the content that is not yet being broadcast; this is where you prepare the scene that will go live next. On the right you see what the attendees will see, i.e. what is currently "on air". So joining the live event does not start the event yet! First you have to arrange the content.

image

If you want to broadcast your webcam, switch to the corresponding layout at the bottom left.

image

Now you can drag your webcam from the lower area up into the small section of the left pane. The same works for the content you want to share. You can share a single window or the entire desktop. I always recommend sharing the desktop, because it gets complicated as soon as you also want to show something else. And that happens faster than you think.

image

Top: sharing a window | Bottom: sharing the desktop

image

As in any other meeting, the shared element gets a red border, and at the top there are controls to adjust the sharing or cancel it.

image

In the producer view you now have video and content lined up. When the time has come, the content has to be sent live.

image

The live event is still not visible to the attendees, though. Only another click on "Start" transmits the content (audio and video) to the attendees. On the left side another producer can, for example, prepare the next layout (speaker, speaker round, …) and send it live right away if needed.

image

It is a Microsoft product, so of course you are asked to confirm once more!

image

In the producer view, the part that is visible to the attendees has a red border.

image

If the current user stops sharing, their video is switched to full screen.

image

In the right-hand pane of the producer view there are several tabs with information about the meeting.

Status and performance

image

Questions and answers

image

Attendees can ask questions here and producers can answer them privately. If a question (and its answer) is interesting for all attendees, the producer can publish it and make it visible to everyone.

Meeting notes

image

The notes work the same way as in regular meetings, via the wiki feature.

image

There is only rudimentary structure and formatting of the content for the attendees. Notes can only be created by producers and are immediately visible to attendees.

Meeting chat (producers only)

image

The chat is only for producers and cannot be seen by attendees. Attendees can only interact through the Q&A feature. Interaction between attendees is not possible.

Contacts

image

All producers are listed here. Attendees are not shown.

Device settings

image

The audio and webcam setup (for the producer) can be changed here. The live captions can also be switched off later on.

Meeting details

image

As in every meeting, the dial-in information for the meeting is shown here. Note: this invitation is only meant for producers, not for attendees!

Attending a live event

You can attend a live event with a browser or with the Microsoft Teams client. You can find the system requirements here. The attendee link leads to a website that closely resembles the normal meeting join page. Via "Watch on the web instead" you can join from any modern browser without installing extra software.

image

If you have a Teams account (Azure AD) you can use it, or you join anonymously.

image

After joining you see the content that is currently being shared. The view resembles a normal meeting, but the attendee can pause at any time or even rewind.

image

The optional captions can be switched on via the settings in the playback window.

image

The available choices correspond to the settings made when the live event was set up.

image

Here we see the English captions.

image

Q&A

This is the attendee view of the Q&A feature. Attendees can ask their questions and may get answers from the producer team. For each question an attendee can enter a name or write anonymously.

image

Questions can be refined further.

image

Here we see the producer view of the submitted question. The question can be dismissed or published (made visible to all attendees).

image

If the question (including the answer) is published, it shows up in the "Featured" area. There, attendees can "like" a question and give feedback to the group.

image

Once the live event has ended, it is shown accordingly in Stream.

image

Summary

Live events are a very special kind of meeting in Microsoft 365. They only make sense in a few scenarios, but it is important to know the option exists. Especially with many meeting beginners it can simplify things to set up two meetings: a live event to deliver the content with full control over the presentation and without disruptive interjections, followed by an ad-hoc Q&A session in normal Teams meeting mode, so everyone can speak and show themselves on video.

Holger Schwichtenberg: PowerShell 7.0: Feature scope

In terms of its command set, PowerShell 7.0 has moved much closer to PowerShell 5.1. However, 95 commands are still missing, and on Linux and macOS only a fraction of the functionality is available.

Code-Inside Blog: Escape environment variables in MSIEXEC parameters

Problem

Customers can install our product on Windows with a standard MSI package. To automate the installation administrators can use MSIEXEC and MSI parameters to configure our client.

A simple installation can look like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/OneOffixx/"

The “CACHEFOLDER” parameter will be written to the .exe.config file, and our program will read it and store offline content at the given location.

So far, so good.

For Terminal Server installations or “multi-user” scenarios this will not work, because each cache is bound to a local account. To solve this we could just insert the “%username%” environment variable, right?

Well… no… at least not with the obvious call, because this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%username%/OneOffixx/"

will result in a call like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/admin/OneOffixx/"

Solution

I needed a few hours and some Google-Fu to find the answer.

To “escape” those variables we need to invoke it like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%%username%%/OneOffixx/"

Be aware: This stuff is a mess and depends on your scenario. Check out this Stackoverflow answer to learn more. The double percent did the trick for us, so I guess it is “ok-ish”.
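For what it's worth, when the installation is scripted from PowerShell instead of a cmd/batch file, the expansion problem does not occur in the first place, because PowerShell does not expand %...% placeholders. A hedged sketch (package name and property taken from the example above):

# Hedged sketch: PowerShell leaves cmd-style %username% untouched, so the literal
# placeholder reaches msiexec the same way the double-percent trick achieves it.
$cacheFolder = 'D:/%username%/OneOffixx/'   # single quotes keep the value literal

Start-Process -FilePath "msiexec.exe" -ArgumentList '/qb', '/i', '"OneOffixx.msi"', "CACHEFOLDER=`"$cacheFolder`"" -Wait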

Hope this helps!

Marco Scheel: Is Microsoft Teams a remote support tool?

My buddy Oliver Kieselbach did a blog post about the capabilities of Microsoft Quick Assist (as part of the current operating system). In his post he raised the question whether Microsoft Teams is enough for these kinds of IT support scenarios. Check out his blog to see it live in action and what the biggest shortcoming is. Microsoft Teams is not a good option for anything UAC related. Even without the so-called secure desktop feature, Microsoft Teams will not allow the support staff to enter admin credentials if needed. I would also suggest (for most customers) picking a proper IT support tool for these scenarios.

Microsoft Teams is the hub for teamwork, but that doesn’t mean you can’t support your colleagues if they are experiencing non-admin-related issues. But first things first: we should check whether your tenant settings allow remote control during a desktop sharing session. In quite a few customer environments I’ve noticed that remote sharing is restricted or completely disabled.

Desktop sharing and remote control are configured through the tenant meeting policies. Check out your Teams admin center:
Meeting policies - Pick a policy (”Global - Org Wide Default”) - Content sharing

image

Ensure that “Screen sharing mode“ is set to “Entire screen” (1), otherwise remote control will be limited to a single window. I’ve seen people struggle with the single app sharing mode more than once. For a remote support scenario please enable “Allow a participant to give or request control” (2)! If your users are asking for support from a valuable and skilled colleague, don’t waste their time yelling which button to press next. The last option needs a decision: do you want the same privilege to be granted to people outside of your organization? I’m a fan of enabling “Allow an external participant to give or request control” (3), because I’m often the external user trying to help, but please align this with your corporate security requirements. By the way: settings (1) and (2) are configured as shown by default in any tenant!
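The same three options can also be checked or set with the Teams PowerShell module. A hedged sketch; the parameter names are how I remember the meeting policy cmdlets, so verify them against your module version:

# Hedged sketch: configure the org-wide default meeting policy for remote support scenarios.
Set-CsTeamsMeetingPolicy -Identity Global `
    -ScreenSharingMode EntireScreen `
    -AllowParticipantGiveRequestControl $true `
    -AllowExternalParticipantGiveRequestControl $true   # align this one with your security requirements

Get-CsTeamsMeetingPolicy -Identity Global |
    Select-Object ScreenSharingMode, AllowParticipantGiveRequestControl, AllowExternalParticipantGiveRequestControl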

1:1

Now that we have set up the prerequisites, let’s have a look at the user experience. In my scenario Luke is trying to organize a new funding round to order some spaceships. He is preparing a nice Excel sheet to present at the next procurement meeting, but he is not happy with the visual display of one of his charts. He needs help from an expert, and he is in contact with Leia (she is running the rebellion, so she is awesome at Excel!). Luke starts a chat to make his point:

image

To start the screen sharing in a 1:1 session you will find the icon for screen sharing (1) in the top right corner. Luke needs to start it and share the complete screen (2):

image

Leia will receive a request to accept the screen sharing session. You should only accept a request if you talked/chatted with the person! Otherwise you could end up seeing things you don’t want to see. 

image

If you started through chat, the system will ask you whether you want to add audio to the conversation. Normally this is a good idea, especially if you are not willing to give control to the person you are requesting help from.

image

Leia is a busy employee with lots of stuff to do, and the particular action is hard to describe and may take some poking around in various settings, so she requests control over the screen/application that was shared. In the far right part of the call control bar you will find the option to request control. If this option is missing, talk to your Teams admins! They didn’t set up all the prerequisites described above:

image

Luke will see the request at the top of the shared content and has the option to accept or deny it. Sometimes meeting participants accidentally hit the button, so think twice whether this is what you want:

image

Now comes a really impressive upgrade from previous Skype for Business based screen sharing. Both parties are represented by their Teams profile avatar (in my case the Office 365 profile pictures). You will always see what the other is pointing at or clicking.

image

In a support case I often prefer that the requesting party does all the clicking while I just advise what’s up next and where to find it on screen. With this solution that becomes a great learning opportunity, and I can gently show where to click instead of yelling where not to click :)

Here are the two screens side by side (Luke on the left vs Leia on the right):

image

Leia fixed the issue by switching to a logarithmic scale, Luke is happy, and the session can be ended. Leia can click on “Stop control”:

image

Or Luke can end the session (”Cancel Control”):

image

Just for completeness, this is the way to give control if the person being supported doesn’t find the “Request control” option:

image

Extra: Meetings

In a normal meeting everything works the same way, but the UX looks a bit different. If you are in a scheduled meeting, the share button is located in the call control bar:

image

If you select the icon, a new pane appears from the bottom of the Teams app:

image

(1) Share the complete screen. If you have more than one screen, you can only share one at a time.
(2) Share a window.
(3) You can also upload a PowerPoint presentation, but this is beyond a remote control/support session.
(4) Open a whiteboard, but this is beyond a remote control/support session.
(5) While sharing your screen or an application, also include audio (for example for a Microsoft Stream video that you want to trim/edit). Note: This option is only available in scheduled meetings and not in 1:1 support sessions started from chat.

Conclusion

The capabilities for remote support in Microsoft Teams are there and very useful. Things like the AAD profile picture next to the mouse cursor are a great addition and help a lot. Is Teams a better remote support tool than Quick Assist?

For your IT staff: No! A proper Remote Assist tool will be a better choice.

For your typical information worker: Yes! Sharing your desktop to get support from a colleague is a quick and proper solution! No need to walk to someone’s desk and touch a possibly filthy mouse/keyboard. And you don’t always have the right person in the same building.

I definitely know that Microsoft Quick Assist is not a proper collaboration solution. Try to co-author an Excel document for the next #FreeCookiesFriday campaign ;)

image

Holger Schwichtenberg: PowerShell 7.0: Technical basis and installation

With this post the Dotnet-Doktor starts a blog series on PowerShell 7.0, the new version of the .NET-based shell for Windows, Linux and macOS.

Golo Roden: What makes code readable?

While writing code, developers primarily pay attention to whether it works. But readability is what decides how maintainable code will be.

Code-Inside Blog: TLS/SSL problem: 'Could not create SSL/TLS secure channel'

Problem

Last week I had some fun debugging a weird bug. Within our application one module makes HTTP requests to a 3rd party service and depending on the running Windows version this call worked or failed with:

'Could not create SSL/TLS secure channel'

I knew that older TLS/SSL versions are deprecated and that many services refuse those protocols, but at first we still didn’t understand the issue:

  • The HTTPS call worked without any issues on a Windows 10 1903 machine
  • The HTTPS call didn’t work on a Windows 7 SP1 (yeah… customers…) and a Windows 10 1803 machine.

Our software uses the .NET Framework 4.7.2 and therefore I thought that this should be enough.

Root cause

Both systems (or at least the two different customer environments they represent) didn’t have TLS 1.2 enabled.

On Windows 7 (and I think on the older Windows 10 releases) there are multiple ways to fix this. One way is to set a registry key to enable the newer protocols.
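For .NET Framework applications the documented registry values are SchUseStrongCrypto and SystemDefaultTlsVersions. A sketch of how they could be set with PowerShell (run elevated; test before rolling this out to customers):

# Hedged sketch: let .NET Framework 4.x apps pick up the OS defaults and strong crypto (TLS 1.2).
$paths = @(
    'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
    'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319'
)

foreach ($path in $paths) {
    Set-ItemProperty -Path $path -Name 'SchUseStrongCrypto'       -Value 1 -Type DWord
    Set-ItemProperty -Path $path -Name 'SystemDefaultTlsVersions' -Value 1 -Type DWord
}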

Our setup was a bit more complex than this and I needed about a day to figure everything out. A big mystery was that some services were reachable even on the old systems, until I figured out that some sites still allow a plain HTTP connection without any TLS.

Well… to summarize it: Keep your systems up to date. If you have any issues with TLS/SSL make sure your system does support it.

Hope this helps!

Kazim Bahar: ML.NET: interesting blog articles, projects and talks

ML.NET – machine learning for C# .NET developers. A practical starting point for every C# developer,...

Christina Hirth: You Don’t Need To Work In Silos If You Don’t Want To

… but if you do, then you should stop reading here. That is OK with me.

How many of you have built features in backend services that were never used in any application? Or implemented requests the wrong way because nobody cared to give you the whole story, the whole problem the feature should solve? Or felt demotivated by the lack of feedback about whether what you do makes an impact or was wasted energy and time? How many of you are still working under these unsatisfying circumstances? This article is for you.

I did all of this. One case I will never forget: I was supposed to implement a feature request that boiled down to returning some object property as a string. This property contained a URL, but the request didn’t say “I need to know how to navigate to X or Y”, it said “please include the URL X in the result”.

It turned out that two other teams used this “string” to build navigation on top of it or to include it in emails, without ever telling me. Why should they? I was done with the feature: it was their turn. Both of them validated this string, built URLs with it (using information exclusively owned by the backend service…), etc.

Let me be more explicit:

Failure No. 1: If I had changed some internals in the backend service, I could have broken the UI code without knowing. My colleagues relied on things they had no chance to control. We were dependent on each other without being able to see it.

Failure No. 2: the company paid three different developers to write the same validation functions, and the customer flow had to pass the same validations three times instead of only once. A totally wrong decision from an economic point of view.

I think that was the moment I decided to change the way we deliver features, the way we work together. This was 6 or 7 years ago, and since then I have followed the same method to reorganize not only the teams but also the source code. Because one thing is certain: changing one without the other only leads to bigger pains and even more frustration.

Step 1. Visit the “other side” of that wall and learn what they are doing and how they are doing it. You will observe bottlenecks and wasted time and energy in your value stream (the road a feature travels from the idea to the customer).

Step 2. Get buy-in from the next level in your hierarchy: in most situations (it was that way in both cases I was involved in) you are not the first one to notice these problems, but you could be the first one to offer a solution. Grab this chance, don’t hesitate!

Step 3. Remove the wall between the silos: find a good time to make your move, after the biggest project has ended or before the next one starts. Don’t wait too long; there will always be unfinished features.

Step 4. This depends on how many team members we are talking about. In both cases we were around 15 people, and nobody wants stand-ups or even meetings with 15 people! You become even slower and even less capable of making decisions. But this step is important for a few things:

  • both “parties” should learn and understand what the others do, how the parts are connected, what language, concept, design is used to build them
  • all members should understand and accept that it is important to split up into teams – and this is always hard because it means “we have to change”! Developers are – against all expectations – very reluctant to change. Even more reluctant when they realize that they won’t work with their buddies anymore but with people they hardly know and do not really trust.
  • you and/or your boss, your colleagues, your buddy in this change must start to discover how the domain is shaped and how the teams can be split up – because this will be the next step.

Up to this point you haven’t improved the developer experience; it will rather get worse. What you have improved is the life of the product manager or CTO or whoever brings the requests to the teams: instead of explaining the two parts of a feature (cut in the “middle” between backend and frontend) to two teams, they only have to explain it once. At the same time, the delivery lead time (the first key metric in measuring team performance) becomes shorter, because all the ping-pong between BE and FE can be eliminated before feature development starts.

After you have all spent a longer or shorter time together, it is time to take the next step: align the organization to the business.

Designing Autonomous Services & Teams Together – Nick Tune – KanDDDinsky 2017

The most important part is to find the natural boundaries of the domain and create business teams who OWN these (sub)domains.

I did this three times in all kinds of environments: brownfield monolith or greenfield new biz, it doesn’t matter. Having a monolith as a cash cow doesn’t make this change easy, of course, but it can be done with discipline and a good plan for how to take over control. (This topic is much too complex to be included in this article.)

The last thing that must be said is when NOT to start this transformation:

  • If you don’t find anyone to support you. In this case, either the problem isn’t big enough to be felt by the others, or you are in the wrong company and should maybe start thinking about transforming yourself instead (and leaving).
  • If you or your fellow and/or boss aren’t patient people. Change is hard and should be accompanied carefully and patiently – so that it does not have to be repeated after even greater frustration and chaos (been there, seen that :-/ ).
  • If you expect that this is all. Because it isn’t: every change toward more transparency – and that is what happens when you break up silos and let others look at the existing solutions – will make issues visible. A few of these issues will be technical (CI/CD, code coupling, infrastructure coupling, etc.). But the hard problems will be missing communication skills and missing trust. Nothing that cannot be solved – but it will take time, that is for sure.

If you reach this point, you can start to form an autonomous team: one which not only decides what to do but is also in charge of doing it. Working in an environment created by you and your team allows all of you to discover and live out your creativity, to make mistakes and to learn from them.

This ownership and responsibility make the difference between somebody hired to type lines of code and somebody solving problems.

What do you think? Could you start this change in your company? What would you need?

Now you know about my experience. I would be really happy to hear about yours – here or on Twitter.

One last question: what would you like to read more about – how to find the right boundaries, or how your team can become a REALLY autonomous team (and how autonomous that can be)?

Holger Schwichtenberg: Migration script for switching from .NET Framework to .NET Core

The Dotnet-Doktor offers a script-based migration tool for switching to .NET Core.

Golo Roden: Cheaper software through fewer tests

Software development is considered an expensive discipline. Not only outsiders are often surprised how much money the professional development of a piece of software requires. Code, tests, documentation and integration all have to be paid for. So where can you cut costs?

Holger Schwichtenberg: The state of the .NET family at the beginning of 2020

The current state of .NET Framework, .NET Core and Mono in one diagram.

Code-Inside Blog: Accessibility Insights: Spot accessibility issues easily for Web Apps and Windows Apps

Accessibility

Accessibility is a huge and important topic nowadays. Keep in mind that in some sectors (e.g. government, public service etc.) accessibility is required by law (in Europe, the European standard EN 301 549).

If you want to learn more about accessibility in general this might be handy: MDN Web Docs: What is accessibility?

Tooling support

In my day-to-day job for OneOffixx I was looking for a good tool to spot accessibility issues in our Windows and web app. I knew that there must be some good tools for web development, but I was not sure about Windows app support.

Accessibility itself has many aspects, but these were some non-obvious key aspects in our application that we needed to address:

  • Good contrasts: This one is easy to understand, but sometimes some colors or hints in the software didn’t match the required contrast ratios. High contrast modes are even harder.
  • Keyboard navigation: This one is also easy to understand, but can be really hard. Some elements are nice to look at, but hard to focus with pure keyboard commands.
  • Screen reader: Once your application can be navigated with the keyboard, you can check out screen reader support.

Accessibility Insights

Then I found this app from Microsoft: Accessibility Insights

x

The tool scans active applications for accessibility issues. Side note: the UX is a bit strange, but OK - you get used to it.

Live inspect:

The starting point is to select a window or a visible element on the screen and Accessibility Insights will highlight it:

x

Then you can click on “Test”, which gives you detailed test results:

x

(I’m not 100% sure whether each error is really problematic, because a lot of Microsoft’s very own applications have many issues here.)

Tab Stops:

As already written: Keyboard navigation is a key aspect. This tool has a nice way to visualize “Tab” navigation and might help you to better understand the navigation with a keyboard:

x

Contrasts:

The third nice helper in Accessibility Insights is the contrast checker. It highlights contrast issues and has an easy-to-use color picker integrated.

x

Behind the scenes this tool uses the Windows Automation API / Windows UI Automation API.

Accessibility Insights for Chrome

Accessibility Insights can be used in Chrome (or Edge) as well to check web apps. The extension is similar to the Windows counterpart, but has a much better “assessment” story:

x

x

x

Summary

This tool was really a time saver. The UX might not be the best on Windows, but it gives you some good hints. After we discovered this app for our Windows Application we used the Chrome version for our Web Application as well.

If you use or used other tools in the past: Please let me know. I’m pretty sure there are some good apps out there to help build better applications.

Hope this helps!

Holger Schwichtenberg: The tax authorities and their poor Elster software

The switch from "ElsterFormular" to the Elster web portal is peppered with annoying bugs and necessary calls to the hotline.

Norbert Eder: Tinkering with the kids: programming NFC tags

My kid is interested in technology, computers and everything that goes with it. Of course he likes to play games, but slowly he is thirsting for more. Small projects are a good way to introduce technology quickly, teach a few things and explore interests.

NFC tags went down really well. They can be had for little money, and you can build quite nice projects with them.

NFC Tag | Norbert Eder

A simple NFC tag

Different kinds of information can be stored on such an NFC chip. In the simplest case this is a link, a Wi-Fi or Bluetooth connection, an email address or phone number, a location, or an instruction to send an SMS or start an app. When an NFC-capable phone touches the NFC tag, that action is executed.

NXP TagWriter | Norbert Eder

NXP TagWriter reads and writes NFC tags

To read and write NFC tags, all you need is a smartphone app. There are all kinds of apps for this; one of the simplest is NXP TagWriter (Android, Apple).

NFC Tags auslesen | Norbert Eder

Reading an NFC tag

Besides these standard functions there are other apps (e.g. NFC Tools) that support additional features. With them it is possible to define conditions and, for example, set up configurations like these:

  • Set the smartphone to silent
  • Enable airplane mode
  • If it is Sunday through Thursday, set an alarm for 6:00 the next morning

For many people that makes a good NFC tag for the nightstand.

There are many more possibilities, and above all they invite experimentation. My boy had lots of ideas and implemented some of them right away. Fun was guaranteed, and he learned a lot, too.

Note: NFC tags come in different sizes (storage capacity), with and without password protection, as self-adhesive stickers or as key fobs.

Have fun experimenting. The kids will really enjoy it.

The post Basteln mit dem Nachwuchs: NFC Tags programmieren first appeared on Norbert Eder.

Jürgen Gutsch: Using the .editorconfig in VS2019 and VSCode

In the backend developer team of the YOO we are currently discussing coding style guidelines and ways to enforce them. Since we are developers with different mindsets and backgrounds, we need to find a way to enforce the rules that works in different editors too.

BTW: C# developers often come from other languages and technologies before they start to work with this awesome language. Universities mostly teach Java, or the developers were front-end developers in the past, or started with PHP. Often .NET developers start with VB.NET and switch to C# later. Me as well: I also started as a front-end developer with HTML4, CSS2 and JavaScript, used VBScript and VB6 on the server side in 2001. Later I used VB.NET on the server and switched to C# in 2007.

In our company we use ASP.NET Core more and more. This also means we are increasingly free to use the editor we like best, and increasingly free to choose the platform we want to work on. Some of us already use and prefer VSCode to work on ASP.NET Core projects. Maybe we'll have a colleague in the future who prefers VSCode on Linux or VS on a Mac. This also makes the development environments diverse.

Back when we used Visual Studio only, StyleCop was the tool to enforce coding style guidelines. For a couple of years now there has been a new tool that works in almost all editors out there.

The .editorconfig is a text file that overrides the settings of the editor of your choice.

Almost every code editor has settings to style the code in the way you like, or the way your team likes. If the editor supports the .editorconfig, you can override these settings with a simple text file that is usually checked in with your source code and available to all developers who work on those sources.

Visual Studio 2019 supports the .editorconfig by default, VS for Mac also supports it and VSCode supports it with a few special settings.
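Just as an illustration (the rule names below come from the public EditorConfig/.NET code style documentation, not from our actual file), a minimal .editorconfig could look like this:

# Minimal sketch of an .editorconfig, placed next to the solution file
root = true

[*]
indent_style = space
indent_size = 4
insert_final_newline = true

[*.cs]
# C#/.NET code style rules in the form name = value:severity
csharp_style_var_for_built_in_types = true:suggestion
csharp_style_var_when_type_is_apparent = true:suggestion
csharp_prefer_braces = true:warning
dotnet_sort_system_directives_first = true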

The only downside of the .editorconfig I can see

Since the .editorconfig is a settings file that overrides the settings of the code editor, only the settings that are supported by the editor will take effect. So not all of the settings may work in all code editors.

But there is a workaround, at least on the NodeJS side and on the .NET side. Both technologies support the .editorconfig on the code analysis side instead of the editor side, which means NodeJS or the .NET compiler will check the code and enforce the rules instead of the editor. The editor only displays the errors and helps the author to fix them.

As far as I understand it: on the .NET side it is VS2019 on the one hand and OmniSharp on the other. OmniSharp is a project that supports .NET development in many code editors, including VSCode. Even though VSCode is called a Visual Studio, it doesn't support .NET and C# natively. It is the OmniSharp add-in that enables .NET and brings the Roslyn compiler to the editor.

"CROSS PLATFORM .NET DEVELOPMENT! OmniSharp is a family of Open Source projects, each with one goal: To enable a great .NET experience in YOUR editor of choice" http://www.omnisharp.net/

So the .editorconfig is supported by OmniSharp in VSCode. This means that the level of .editorconfig support may differ between VS2019 and VSCode.

Enable the .editorconfig in VSCode

As I wrote, the .editorconfig is enabled by default in VS2019; there is nothing to do. If VS2019 finds an .editorconfig it will use it immediately and check your code on every code change. If VS2019 finds an .editorconfig in your solution, it will tell you about it and propose to add it to a solution folder to make it more easily accessible for you in the editor.

In VSCode you need to install an add-in called EditorConfig. This alone doesn't enable the .editorconfig for C#, even if it tells you about it. Maybe it actually does, but it doesn't take effect for C#, because OmniSharp handles that part. What this add-in does help with is creating and editing your .editorconfig.

To actually enable the support of the .editorconfig in VSCode you need to change two Omnisharp settings in VSCode:

Open the settings in VSCode and search for OmniSharp. Then you need to "Enable Editor Config Support" and "Enable Roslyn Analyzers".
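In the settings.json these two options should map to the following keys (setting IDs as I remember them from the C#/OmniSharp extension, so double-check them in your VSCode):

{
    "omnisharp.enableEditorConfigSupport": true,
    "omnisharp.enableRoslynAnalyzers": true
}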

After you have changed those settings, you need to restart VSCode so that the OmniSharp server in the background is restarted.

That's it!

Conclusion

Now the .editorconfig works in VSCode almost the same way as in VS2019. And it works great. I tried it by opening the same project in VSCode and in VS2019 and changing some settings in the .editorconfig. The changed settings were picked up immediately by both editors, and both helped me to change the code to match the code styles.

We at the YOO still need to discuss some coding styles, but for now we use the recommended styles and will change the things we discuss as soon as we have a decision.

Have you ever discussed coding styles in a team? If yes, you know the kind of debates: whether to enforce var over the explicit type, whether to use simple usings or not, whether to always use curly braces with if statements or not... This might be annoying, but it is really important to reach a common understanding, and it is important that everybody agrees on it.

Golo Roden: Choosing modules for JavaScript and Node.js

Choosing npm modules is essentially a matter of experience, but a module's adoption and the activity of its authors can serve as indicators of solid and future-proof modules.

Uli Armbruster: Using your own domain as a Gmail alias

In this step-by-step guide I explain how to define additional sender addresses for your Gmail address. This is useful, for example, if you run your own domain (as I do with http://www.uliarmbruster.de) and want to receive and send emails under that domain with your Gmail account.

I use this, for instance, to create email addresses for my family and redirect all of them to one central Gmail account. Among other things, that is handy when you manage things like mobile, internet and electricity contracts for several people.

Step 1: Allow less secure apps

If it is not already enabled, you have to turn on "Allow less secure apps" at this address.

Step 2: Enable 2-factor authentication

Just follow this link and enable it.

Step 3: Create an app password

Via this link, proceed as follows:


App_Settings_1

Under "Select app" choose "Other (custom name)".


App_Settings_2

Then enter a name for it, e.g. the external email address you are adding.


Step 4: Configure Gmail

Now go to your Gmail account and perform the following steps:


  • Click the gear icon in Gmail
  • Select Settings
  • Select the Accounts & Import tab

E-Mail-Settings 1


Under "Send mail as", add another address. The following dialog appears. The name you enter there is the one shown to recipients as the alias. As the email address, select the external address you want to add.

E-Mail-Settings 2


In the next step, enter your Gmail address (i.e. the one you are currently using) and the app password generated in step 3. The SMTP server and port can be taken from the screenshot.

E-Mail-Settings 2-3


In the last step you have to enter the confirmation code that was sent to you. The email you should have received looks like this:

You have requested that alias@euredomain.com be added to your
Gmail account.
Confirmation code: 694072788

Before you can send messages from alias@euredomain.com via your
Gmail account (eure-gmail-adresse@gmail.com), please click the
following link to confirm your request:

E-Mail-Settings 3

That should be it.

Christian Dennig [MS]: VS Code Git integration with ssh stops working

The Problem

I recently updated the ssh keys for my GitHub account on my Mac, and after adding the public key in GitHub everything worked as expected from the command line. When I did a…

$ git pull

…I was asked for the passphrase of my private key and the pull was executed. Great.

$ git pull
Enter passphrase for key '/Users/christiandennig/.ssh/id_rsa':
Already up to date.

But when I did the same from Visual Studio Code, I got the following error:

As you can see in the git logs, it says “Permission denied (publickey)”. This is odd, because VS Code is using the same git executable as the command line (see the log output).

It seems that VS Code isn’t able to ask for the passphrase during the access to the git repo?!

ssh-agent FTW

The solution is simple. Just use ssh-agent on your machine to enter the passphrase for your ssh private key once…all subsequent calls that need your ssh key will use the saved passphrase.

Add your key to ssh-agent (storing the passphrase in MacOS Keychain!)

$ ssh-add -K ~/.ssh/id_rsa
Enter passphrase for /Users/christiandennig/.ssh/id_rsa:
Identity added: /Users/christiandennig/.ssh/id_rsa (XXXX@microsoft.com)

The result in Keychain will look like that:

When you now open Visual Studio Code and start to synchronize your git repository, the calls to GitHub will use the key loaded into ssh-agent, and all commands will succeed.
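If you don't want to run ssh-add again after every reboot, an entry in ~/.ssh/config is the usual approach on macOS. This is a sketch and not part of the original setup, so verify the options against your OpenSSH version:

# ~/.ssh/config - load the key into the agent automatically and use the macOS Keychain
Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_rsa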

HTH.

( BTW: Happy New Year 🙂 )

Code-Inside Blog: T-SQL Pagination

The problem

This is pretty trivial: Let’s say you have a blog with 1000 posts in your database, but you only want to show 10 entries “per page”. You need to find a way to slice this dataset into smaller pieces.

The solution

In theory you could load everything from the database and filter the results “in memory”, but this would be quite stupid for many reasons (e.g. you load much more data than you need, and the computing resources could be used for other requests, etc.).

If you use plain T-SQL (and Microsoft SQL Server 2012 or higher) you can express a query with paging like this:

SELECT * FROM TableName ORDER BY id OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;

Read it like this: Return the first 10 entries from the table. To get the next 10 entries use OFFSET 10 and so on.
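To generalize this for an arbitrary page, the offset is simply (page - 1) * page size. A small sketch using the SqlServer PowerShell module; server, database, table and column names are just placeholders:

# Hedged sketch: build and run a paged query for a 1-based page number.
$pageNumber = 3
$pageSize   = 10
$offset     = ($pageNumber - 1) * $pageSize   # page 3 with size 10 -> skip 20 rows

$query = "SELECT * FROM TableName ORDER BY id OFFSET $offset ROWS FETCH NEXT $pageSize ROWS ONLY;"

Invoke-Sqlcmd -ServerInstance "localhost" -Database "Blog" -Query $query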

If you use Entity Framework (or Entity Framework Core or any other O/R mapper), chances are high that it does exactly the same thing internally for you.

All currently supported SQL Server versions handle this syntax. If you try it on SQL Server 2008 or SQL Server 2008 R2 you will receive a SQL error.

Links

Check out the documentation for further information.

This topic might seem “simple”, but during my developer life I was surprised how “hard” paging was with SQL Server. Some 10 years ago (… I’m getting old!) I was using MySQL, and the OFFSET and FETCH syntax was only introduced with Microsoft SQL Server 2012. This Stackoverflow.com question shows the different ways to implement it. The “older” ways are quite weird and complicated.

I also recommend this blog for everyone who needs to write T-SQL.

Hope this helps!

Holger Schwichtenberg: Developer events 2020 for .NET and web developers

A collection of the most important conference and event dates for .NET and web developers in the coming year.

Jürgen Gutsch: ASP.NET Hack Advent Post 24: When environments are not enough, use sub-environments!

ASP.NET Core knows the concept of runtime environments like Development, Staging and Production. But sometimes those environments are not enough. To solve this, you could use sub-environments. This is not a built-in feature, but is easily implemented in ASP.NET Core. Thomas Levesque describes how:

ASP.NET CORE: WHEN ENVIRONMENTS ARE NOT ENOUGH, USE SUB-ENVIRONMENTS!

Thomas Levesque is a French developer living in Paris, France. He has been a Microsoft MVP since 2012 and is quite involved in the open source community.

Twitter: https://twitter.com/thomaslevesque

GitHub: https://github.com/thomaslevesque

Jürgen Gutsch: ASP.NET Hack Advent Post 23: Setting up Azure DevOps CI/CD for a .NET Core 3.1 Web App hosted in Azure App Service for Linux

After you have migrated your ASP.NET Core application to a Linux-based App Service, you should set up a CI/CD pipeline, ideally on Azure DevOps. And again it is Scott Hanselman who wrote a great post about it:

Setting up Azure DevOps CI/CD for a .NET Core 3.1 Web App hosted in Azure App Service for Linux

So, read this post to learn more about ASP.NET Core on Linux.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Marco Scheel: Microsoft Teams - Known issues... no more!?

Microsoft maintains a list of things (about Microsoft Teams) that are not working as expected by Microsoft or customers. The list has been around for quite some time. New things are added, but I noticed items on the list that are clearly working as of this writing and have not been removed. So let’s have a look together, and maybe you will support my pull request.

The article: Known issues for Microsoft Teams

My pull request:  Known issues that are no longer known issues

Audit logs may report an incorrect username as initiator when someone has been removed from a Team

Teams team is a modern group in AAD. When you add/remove a member through the Teams user interface, the flow knows exactly which user initiated the change, and the Audit log reflects the correct info. However, if a user adds/removes a member through AAD, the change is synced to the Teams backend without telling Teams who initiated the action. Microsoft Teams picks the first owner of team as the initiator, which is eventually reflected in the Audit log as well.
https://docs.microsoft.com/en-us/microsoftteams/known-issues#administration

Related Issue: Audit logs may report an incorrect username as initiator when someone has been removed from a Team

I validated group membership edits from the new Microsoft Admin Portal (https://admin.microsoft.com) and from the Azure Portal (https://portal.azure.com). In both cases the audit logs showed the correct user. In both cases I used my cloudadmin account, which is not part of the team, and the audit logs recorded it as the executing user.
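If you want to reproduce this check yourself, the unified audit log can be queried from Exchange Online PowerShell. A rough sketch; the filter is simplified and the operation names may differ in your tenant, so treat it only as a starting point:

# Hedged sketch: pull recent Azure AD audit records and look at who changed group membership.
$records = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-1) -EndDate (Get-Date) -RecordType AzureActiveDirectory -ResultSize 500

$records | Where-Object { $_.Operations -like "*member*group*" } |
    Select-Object CreationDate, UserIds, Operations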

image

Unable to delete connectors as a team owner

Attempting to delete a connector as an owner, that can otherwise add a connector, while “Allow members to create, update, and remove connectors” is disabled throws an error indicating the user does not have permission to do so.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#apps

I tested this in my lab environment in various combinations and I did not run into this issue. For example:

  1. Leia created a team and added Luke as a member
  2. Luke added an Incoming Webhook as a connector
  3. Leia didn’t like Luke’s webhook so she decided to remove the member permission to configure connectors for the team
  4. Luke wanted to add another connector, but the option is now missing from his context menu for the channel
  5. Leia deleted the Incoming Webhook that Luke created, without a problem
image

Planner on single sign-on (SSO) build

SSO does not apply to Planner. You will have to sign in again the first time you use Planner on each client.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#authentication

I’m using Planner within Microsoft Teams on a weekly basis (not on a daily basis, as some of my colleagues would like me to) and it is working as expected.

Wiki not created for channels created by guests

When a guest creates a new channel, the Wiki tab is not created. There isn’t a way to manually attach a Wiki tab to the channel.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#guest-access

I did check this in my lab tenant using my work account as a guest.

  1. Luke created a team
  2. Luke added my work account as a guest to the team
  3. Luke configured the team’s guest permissions to allow channel creation
  4. I opened Teams in my browser and switched to the lab tenant (friends don’t let friends switch tenants in the real Teams app, even with fast tenant switching!)
  5. I opened the team and created a new channel
  6. Wiki tab was present and working
image

Teams Planner integration with Planner online

Tasks buckets in Planner do not show up in Planner online experience.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#tabs

This is a core feature of Planner, and the issue was created more than two years ago. It is just working as expected.

image

Unable to move, delete or rename files after editing

After a file is edited in Teams it cannot be moved or renamed or deleted immediately
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#teams

I tried this today with a mix of accounts and apps (Teams app and Teams in the browser). I could not reproduce it. It is still a “common” issue in SharePoint Online, but I never experienced it or had clients report this issue regarding Teams.

A team name with an & symbol in it breaks connector functionality

When a team name is created with the & symbol, connectors within the Team/Group cannot be established.
Source: https://docs.microsoft.com/en-us/microsoftteams/known-issues#teams

I created two teams:

  1. Good & Bad Characters
  2. Only Good Characters

The connector option showed up for both teams at about the same time. Maybe it is related to the special character, but it took quite some time until the Exchange Online mailbox was provisioned, longer than in any other test today. But in the end I had no problems managing connectors for a team with an “&” in the name or without it.

I find the Microsoft article in general very valuable and I noticed a few other things I want to talk about in the future. So stay tuned.

Jürgen Gutsch: ASP.NET Hack Advent Post 22: User Secrets in Docker-based .NET Core Worker Applications

Do you want to know how to manage user secrets in Docker-based .NET Core worker applications? As part of the Message Endpoints in Azure series, Jimmy Bogard has written an awesome blog post about this.

User Secrets in Docker-based .NET Core Worker Applications

Jimmy Bogard is chief architect at Headspring, creator of AutoMapper and MediatR, author of the MVC in Action books, international speaker and prolific OSS developer. Expert in distributed systems, REST, messaging, domain-driven design and CQRS.

Twitter: https://twitter.com/jbogard

GitHub: https://github.com/jbogard

Blog: https://jimmybogard.com/

LinkedIn: https://linkedin.com/in/jimmybogard

Christian Dennig [MS]: Keep your AKS worker nodes up-to-date with kured

Introduction

When you are running several AKS / Kubernetes clusters in production, keeping your application(s), their dependencies, and Kubernetes itself including the worker nodes up to date turns into a time-consuming task for (sometimes) more than one person. Looking at the worker nodes that form your AKS cluster, Microsoft helps you by applying the latest OS / security updates on a nightly basis. Great, but the downside is: when a worker node needs a restart to fully apply these patches, Microsoft will not reboot that particular machine. The reason is obvious: they simply don’t know when it is best to do so. So basically, you have to do this on your own.

Luckily, there is a project from Weaveworks called “Kubernetes Reboot Daemon”, or kured, that gives you the ability to define a time slot in which it is okay to automatically pull a node from your cluster and do a simple reboot on it.

Under the hood, kured works by adding a DaemonSet to your cluster that watches for a reboot sentinel, e.g. the file /var/run/reboot-required. If that file is present on a node, kured “cordons and drains” that particular node, initiates a reboot and uncordons it afterwards. Of course there are situations where you want to suppress that behavior, and fortunately kured gives us a few options to do so (Prometheus alerts or the presence of specific pods on a node…).
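The time slot mentioned above is configured through command line flags on the kured DaemonSet container. A hedged sketch of the relevant args (flag names as I remember them from the kured README, so verify them for your release):

# Hedged sketch: DaemonSet container args that only allow kured-triggered reboots on weekday nights.
command:
  - /usr/bin/kured
  - --period=1h
  - --reboot-days=mon,tue,wed,thu,fri
  - --start-time=3am
  - --end-time=5am
  - --time-zone=Europe/Berlin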

So, let’s give it a try…

Installation of kured

I assume, you already have a running Kubernetes cluster, so we start by installing kured.

$ kubectl apply -f https://github.com/weaveworks/kured/releases/download/1.2.0/kured-1.2.0-dockerhub.yaml

clusterrole.rbac.authorization.k8s.io/kured created
clusterrolebinding.rbac.authorization.k8s.io/kured created
role.rbac.authorization.k8s.io/kured created
rolebinding.rbac.authorization.k8s.io/kured created
serviceaccount/kured created
daemonset.apps/kured created

Let’s have a look at what has been installed.

$ kubectl get pods -n kube-system -o wide | grep kured
kured-5rd66                             1/1     Running   0          4m18s   10.244.1.6    aks-npstandard-11778863-vmss000001   <none>           <none>
kured-g9nhc                             1/1     Running   0          4m20s   10.244.2.5    aks-npstandard-11778863-vmss000000   <none>           <none>
kured-vfzjk                             1/1     Running   0          4m20s   10.244.0.10   aks-npstandard-11778863-vmss000002   <none>           <none>

As you can see, we now have three kured pods running.

Test kured

To be able to test the installation, we simply simulate a “node reboot required” state by creating the corresponding sentinel file on one of the worker nodes. We need to access a node via SSH; just follow the official documentation on docs.microsoft.com:

https://docs.microsoft.com/en-us/azure/aks/ssh

Once you have access to a worker node via ssh, create the file via:

$ sudo touch /var/run/reboot-required

Now exit the pod, wait for the kured daemon to trigger a reboot and watch the cluster nodes by executing kubectl get nodes -w.

$ kubectl get nodes -w
NAME                                 STATUS   ROLES   AGE   VERSION
aks-npstandard-11778863-vmss000000   Ready    agent   34m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready    agent   34m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000000   Ready    agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled   agent   35m   v1.15.5
aks-npstandard-11778863-vmss000001   Ready                      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000000   Ready                      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   NotReady,SchedulingDisabled   agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   NotReady,SchedulingDisabled   agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready,SchedulingDisabled      agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready                         agent   36m   v1.15.5
aks-npstandard-11778863-vmss000002   Ready                         agent   36m   v1.15.5

Corresponding output of the kured pod on that particular machine:

$ kubectl logs -n kube-system kured-ngb5t -f
time="2019-12-23T12:39:25Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:39:25Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:25Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:39:25Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:39:25Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:39:30Z" level=info msg="Holding lock"
time="2019-12-23T12:39:30Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:39:31Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:39:31Z" level=info msg="Releasing lock"
time="2019-12-23T12:41:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:43:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:45:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:47:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:49:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:51:04Z" level=info msg="Reboot not required"
time="2019-12-23T12:53:04Z" level=info msg="Reboot required"
time="2019-12-23T12:53:04Z" level=info msg="Acquired reboot lock"
time="2019-12-23T12:53:04Z" level=info msg="Draining node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:53:06Z" level=info msg="node/aks-npstandard-11778863-vmss000000 cordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:06Z" level=warning msg="WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: aks-ssh; Ignoring DaemonSet-managed pods: kube-proxy-7rhfs, kured-ngb5t" cmd=/usr/bin/kubectl std=err
time="2019-12-23T12:53:42Z" level=info msg="pod/aks-ssh evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="node/aks-npstandard-11778863-vmss000000 evicted" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:53:42Z" level=info msg="Commanding reboot"
time="2019-12-23T12:53:42Z" level=info msg="Waiting for reboot"
...
...
<AFTER_THE_REBOOT>
...
...
time="2019-12-23T12:54:15Z" level=info msg="Kubernetes Reboot Daemon: 1.2.0"
time="2019-12-23T12:54:15Z" level=info msg="Node ID: aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:15Z" level=info msg="Lock Annotation: kube-system/kured:weave.works/kured-node-lock"
time="2019-12-23T12:54:15Z" level=info msg="Reboot Sentinel: /var/run/reboot-required every 2m0s"
time="2019-12-23T12:54:15Z" level=info msg="Blocking Pod Selectors: []"
time="2019-12-23T12:54:21Z" level=info msg="Holding lock"
time="2019-12-23T12:54:21Z" level=info msg="Uncordoning node aks-npstandard-11778863-vmss000000"
time="2019-12-23T12:54:22Z" level=info msg="node/aks-npstandard-11778863-vmss000000 uncordoned" cmd=/usr/bin/kubectl std=out
time="2019-12-23T12:54:22Z" level=info msg="Releasing lock"

As you can see, the pods have been drained off the node (SchedulingDisabled), which has then been successfully rebooted, uncordoned afterwards and is now ready to run pods again.

Customize kured Installation / Best Practices

Reboot only on certain days/hours

Of course, it is not always a good option to reboot your worker nodes during “office hours”. If you want to limit the timeslot in which kured is allowed to reboot your machines, you can make use of the following parameters during the installation (a small sketch of how to set them follows the list):

  • reboot-days – the days kured is allowed to reboot a machine
  • start-time – reboot is possible after specified time
  • end-time – reboot is possible before specified time
  • time-zone – timezone for start-time/end-time
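
As a minimal sketch, assuming the kured DaemonSet from the manifest above (name kured in the kube-system namespace, with the flags passed on the container command) and the flag names from the kured documentation, the parameters could be appended to the already installed DaemonSet like this – adjust days, times and timezone to your own maintenance window:

$ kubectl -n kube-system patch daemonset kured --type='json' -p='[
  {"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--reboot-days=sat,sun"},
  {"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--start-time=2am"},
  {"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--end-time=6am"},
  {"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--time-zone=Europe/Berlin"}
]'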

Skip rebooting when certain pods are on a node

Another option that is very useful for production workloads is the possibility to skip a reboot when certain pods are present on a node. The reason could be that such a service is very critical to your application and therefore pretty “expensive” when not available. You may want to supervise the reboot of such a node yourself and be able to intervene quickly if something goes wrong.

As always in the Kubernetes environment, you can achieve this by using label selectors for kured – an option set during installation called blocking-pod-selector.
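
Again a small sketch, following the same pattern as above – e.g. to skip reboots while pods carrying the (purely exemplary) label app=database are running on a node:

$ kubectl -n kube-system patch daemonset kured --type='json' -p='[
  {"op":"add","path":"/spec/template/spec/containers/0/command/-","value":"--blocking-pod-selector=app=database"}
]'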

Notify via WebHook

kured also offers the possibility to call a Slack webhook when nodes are about to be rebooted. Well, we can “misuse” that webhook to trigger our own action, because such a webhook is just a simple HTTPS POST with a predefined body, e.g.:

{
   "text": "Rebooting node aks-npstandard-11778863-vmss000000",
   "username": "kured"
}

To be as flexible as possible, we leverage the 200+ Azure Logic Apps connectors that are currently available to basically do anything we want. In the current sample, we want to receive a Teams notification in a certain team/channel and send a mail to our Kubernetes admins whenever kured triggers an action.

You can find the important parts of the sample Logic App on my GitHub account. Here is a basic overview of it:

What you basically have to do is create an Azure Logic App with an HTTP trigger, parse the JSON body of the POST request and trigger “Send Email” and “Post a Teams Message” actions. When you save the Logic App for the first time, the webhook endpoint will be generated for you. Take that URL and use it as the value for the slack-hook-url parameter during the installation of kured.

If you need more information on creating an Azure Logic App, please see the official documentation: https://docs.microsoft.com/en-us/azure/connectors/connectors-native-reqres.
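
Before wiring the endpoint into kured, you can test the Logic App with a kured-style payload. A quick sketch – replace the placeholder with the URL generated for your Logic App:

$ curl -X POST "<YOUR_LOGIC_APP_URL>" \
    -H "Content-Type: application/json" \
    -d '{"text":"Rebooting node aks-npstandard-11778863-vmss000000","username":"kured"}'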

When everything is set up, the Teams notifications and emails you receive will look like this:

Wrap-Up

In this sample, we got to know the Kubernetes Reboot Daemon, which helps you keep your AKS cluster up to date by simply specifying a timeslot in which the daemon is allowed to reboot your cluster/worker nodes and apply security patches to the underlying OS. We also saw how you can make use of the “Slack” webhook feature to basically do anything you want with kured notifications by using Azure Logic Apps.

Tip: if you have a huge cluster, you should think about running multiple DaemonSets where each of them is responsible for certain nodes/nodepools. It is pretty easy to set this up, just by using Kubernetes node affinities.

Jürgen Gutsch: ASP.NET Hack Advent Post 21: Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first

Scott Hanselman again writes about ASP.NET Core applications on Linux. This time the post is about moving an ASP.NET Core application from a Windows to a Linux based App Service:

Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first

Again, this is one of his pretty detailed deep-dive posts. You definitely have to read it if you want to run your ASP.NET Core application on Linux.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 20: The ultimate guide to secure cookies with web.config in .NET

For today's ASP.NET Hack Advent, I found an awesome post about cookie security. The post is the latest part of a series about ASP.NET security. Cookie security is important to prevent attacks like cookie hijacking via cross-site scripting.

The ultimate guide to secure cookies with web.config in .NET

This post was written by Thomas Ardal, who is a speaker, software consultant and the founder of elma.io.

Twitter: https://twitter.com/thomasardal

Website: https://thomasardal.com/

Jürgen Gutsch: ASP.NET Hack Advent Post 19: Migrate a real project from ASP.NET Core 2.2 to 3.1

Because I got a lot of questions about migrating ASP.NET Core applications to 3.1, I will introduce another really good blog post about it. This time it is a post about a real project that needs to be migrated from ASP.NET Core 2.2 to 3.1. The author writes about how to update the project file and about what needs to be changed in the Startup.cs:

Migration from Asp.Net Core 2.2 to 3.1 — Real project

This post was written by Alexandre Malavasi on December 16. He is a consultant and .NET developer from Brazil, who is currently working and living in Dublin Ireland.

Twitter: https://twitter.com/alemalavasi

Medium: https://medium.com/@alexandre.malavasi

LinkedIn: https://www.linkedin.com/in/alexandremalavasi/

Jürgen Gutsch: ASP.NET Hack Advent Post 18: The .NET Foundation has a new Executive Director

On December 16th, Jon Galloway announced that Oren Novotny will follow him as the new Executive Director of the .NET Foundation. Jon started as Executive Director in February 2016. Since then, the .NET Foundation has added a lot of value for the .NET community: it brought many more awesome projects into the Foundation and provided many services for them. The .NET Foundation launched a worldwide Meetup program, where .NET-related meetups get a Meetup Pro subscription for free and are marked as part of the .NET Foundation. It also supports the local communities with content and sponsorships. In March 2019, the .NET Foundation ran an election for the board's first elected directors. Oren will officially take over at the start of January. Jon will continue supporting the community as a Vice President of the .NET Foundation and as a member of the voluntary Advisory Council.

Welcoming Oren Novotny as the new Executive Director of .NET Foundation

On the same day, Oren announced that he will follow Jon Galloway as Executive Director of the .NET Foundation. He also announced that he is joining Microsoft as a Program Manager on the .NET team under Scott Hanselman. So he is one of the many, many MVPs that have joined Microsoft. Congratulations :-)

.NET Foundation Executive Director, Joining Microsoft

I'm really looking forward to seeing how the .NET Foundation evolves.

Jürgen Gutsch: ASP.NET Hack Advent Post 17: Creating Common Intermediate Language projects with .NET SDK

For today's ASP.NET Hack Advent post, I found a link to one of the awesome posts by Filip W. In this post, Filip describes the new project type that allows you to write .NET projects directly in IL code. He shows how to create a new Microsoft.NET.Sdk.IL project and how to write IL code. He also answers the most important question: why you might want to write IL code directly at all.

https://www.strathweb.com/2019/12/creating-common-intermediate-language-projects-with-net-sdk/

Filip works as a senior software developer and lead developer near Zurich, Switzerland. He has been a Microsoft MVP since 2013 and is one of the most important, influential and well-known members of the .NET developer community. He is the creator and main contributor of scriptcs and contributes to Roslyn, OmniSharp and many more open source projects. You should definitely follow him on Twitter and have a look at his other open source projects on GitHub: https://github.com/filipw/

Golo Roden: Das war die CODEx 2019

On November 4, 2019, HDI hosted the first CODEx conference in Hanover, an event for developers and IT experts. Golo Roden, author at heise Developer, was there as a speaker.

Jürgen Gutsch: ASP.NET Hack Advent Post 16: ConfigureAwait & System.Threading.Channels

Stephen Toub published two really good blog posts on the Microsoft .NET Blog.

The first one is a really good and detailed FAQ style post about ConfigureAwait. If you would like to learn about ConfigureAwait, you should read it:

ConfigureAwait FAQ

The second one is an introduction to System.Threading.Channels. It starts as a really good introduction and then goes deeper and deeper into the topic:

An Introduction to System.Threading.Channels

Christian Dennig [MS]: Fully automated creation of an AAD-integrated Kubernetes cluster with Terraform

Introduction

Running your Kubernetes cluster in Azure integrated with Azure Active Directory as your identity provider is a best practice in terms of security and compliance. You can give your team members fine-grained permissions to resources and/or namespaces as they need them (and remove them again when people leave your organisation). Sounds good? Well, you have to do a lot of manual steps to create such a cluster. If you don’t believe me, follow the official documentation 🙂 https://docs.microsoft.com/en-us/azure/aks/azure-ad-integration.

We developers are known to be lazy folks…so how can this be achieved automatically, e.g. with Terraform (one of the most popular tools out there to automate the creation/management of your cloud resources)? It took me a while to figure it out, but here’s a working example of how to create an AAD-integrated AKS cluster with “near-zero” manual work.

The rest of this blog post will guide you through the complete Terraform script which can be found on my GitHub account.

Create the cluster

When working with Terraform (TF), it is best practice not to store the Terraform state on your workstation, as other team members also need the state information to be able to work on the same environment. So first, let’s create a storage account in your Azure subscription to store the TF state.

Basic setup

With the commands below, we will be creating a resource group in Azure, a basic storage account and a corresponding container in which the TF state will be stored.

# Resource Group

$ az group create --name tf-rg --location westeurope

# Storage Account

$ az storage account create -n tfstatestac -g tf-rg --sku Standard_LRS

# Storage Account Container

$ az storage container create -n tfstate --account-name tfstatestac --account-key `az storage account keys list -n tfstatestac -g tf-rg --query "[0].value" -otsv`

Terraform Providers + Resource Group

Of course, we need a few Terraform providers for our example. First and foremost, we need the Azure and also the Azure Active Directory resource providers.

One of the first things we need is – as always in Azure – a resource group to which we will be deploying our AKS cluster.

provider "azurerm" {
  version = "=1.38.0"
}

provider "azuread" {
  version = "~> 0.3"
}

terraform {
  backend "azurerm" {
    resource_group_name  = "tf-rg"
    storage_account_name = "tfstatestac"
    container_name       = "tfstate"
    key                  = "org.terraform.tfstate"
  }
}

data "azurerm_subscription" "current" {}

# Resource Group creation
resource "azurerm_resource_group" "k8s" {
  name     = "${var.rg-name}"
  location = "${var.location}"
}

AAD Applications for K8s server / client components

To be able to integrate AKS with Azure Active Directory, we need to register two applications in the directory. The first AAD application is the server component (Kubernetes API) that provides user authentication. The second application is the client component (e.g. kubectl) that’s used when you’re prompted by the CLI for authentication.

We will assign certain permissions to these two applications that need “admin consent”. Therefore, the Terraform script needs to be executed by someone who is able to grant that consent for the whole AAD tenant.

# AAD K8s Backend App

resource "azuread_application" "aks-aad-srv" {
  name                       = "${var.clustername}srv"
  homepage                   = "https://${var.clustername}srv"
  identifier_uris            = ["https://${var.clustername}srv"]
  reply_urls                 = ["https://${var.clustername}srv"]
  type                       = "webapp/api"
  group_membership_claims    = "All"
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = false
  required_resource_access {
    resource_app_id = "00000003-0000-0000-c000-000000000000"
    resource_access {
      id   = "7ab1d382-f21e-4acd-a863-ba3e13f7da61"
      type = "Role"
    }
    resource_access {
      id   = "06da0dbc-49e2-44d2-8312-53f166ab848a"
      type = "Scope"
    }
    resource_access {
      id   = "e1fe6dd8-ba31-4d61-89e7-88639da4683d"
      type = "Scope"
    }
  }
  required_resource_access {
    resource_app_id = "00000002-0000-0000-c000-000000000000"
    resource_access {
      id   = "311a71cc-e848-46a1-bdf8-97ff7156d8e6"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "aks-aad-srv" {
  application_id = "${azuread_application.aks-aad-srv.application_id}"
}

resource "random_password" "aks-aad-srv" {
  length  = 16
  special = true
}

resource "azuread_application_password" "aks-aad-srv" {
  application_object_id = "${azuread_application.aks-aad-srv.object_id}"
  value                 = "${random_password.aks-aad-srv.result}"
  end_date              = "2024-01-01T01:02:03Z"
}

# AAD AKS kubectl app

resource "azuread_application" "aks-aad-client" {
  name       = "${var.clustername}client"
  homepage   = "https://${var.clustername}client"
  reply_urls = ["https://${var.clustername}client"]
  type       = "native"
  required_resource_access {
    resource_app_id = "${azuread_application.aks-aad-srv.application_id}"
    resource_access {
      id   = "${azuread_application.aks-aad-srv.oauth2_permissions.0.id}"
      type = "Scope"
    }
  }
}

resource "azuread_service_principal" "aks-aad-client" {
  application_id = "${azuread_application.aks-aad-client.application_id}"
}

The important parts regarding the permissions are highlighted above. If you wonder what these “magic permission GUIDs” stand for, here’s a list of what will be assigned.

Microsoft Graph (AppId: 00000003-0000-0000-c000-000000000000) Permissions

GUID – Permission
7ab1d382-f21e-4acd-a863-ba3e13f7da61 – Read directory data (Application Permission)
06da0dbc-49e2-44d2-8312-53f166ab848a – Read directory data (Delegated Permission)
e1fe6dd8-ba31-4d61-89e7-88639da4683d – Sign in and read user profile

Windows Azure Active Directory (AppId: 00000002-0000-0000-c000-000000000000) Permissions

GUID – Permission
311a71cc-e848-46a1-bdf8-97ff7156d8e6 – Sign in and read user profile

After a successful run of the Terraform script, it will look like this in the portal.

AAD applications
Server app permissions

By the way, you can query the permissions of the applications (MS Graph/Azure Active Directory) mentioned above. Here’s a quick sample for one of the MS Graph permissions:

$ az ad sp show --id 00000003-0000-0000-c000-000000000000 | grep -A 6 -B 3 06da0dbc-49e2-44d2-8312-53f166ab848a
    
{
      "adminConsentDescription": "Allows the app to read data in your organization's directory, such as users, groups and apps.",
      "adminConsentDisplayName": "Read directory data",
      "id": "06da0dbc-49e2-44d2-8312-53f166ab848a",
      "isEnabled": true,
      "type": "Admin",
      "userConsentDescription": "Allows the app to read data in your organization's directory.",
      "userConsentDisplayName": "Read directory data",
      "value": "Directory.Read.All"
}

Cluster Admin AAD Group

Now that we have the script for the applications we need to integrate our cluster with Azure Active Directory, let’s also add a default AAD group for our cluster admins.

# AAD K8s cluster admin group / AAD

resource "azuread_group" "aks-aad-clusteradmins" {
  name = "${var.clustername}clusteradmin"
}

Service Principal for AKS Cluster

Last but not least, before we can finally create the Kubernetes cluster, a service principal is required. That’s basically the technical user Kubernetes uses to interact with Azure (e.g. acquire a public IP at the Azure load balancer). We will assign the role “Contributor” (for the whole subscription – please adjust to your needs!) to that service principal.

# Service Principal for AKS

resource "azuread_application" "aks_sp" {
  name                       = "${var.clustername}"
  homepage                   = "https://${var.clustername}"
  identifier_uris            = ["https://${var.clustername}"]
  reply_urls                 = ["https://${var.clustername}"]
  available_to_other_tenants = false
  oauth2_allow_implicit_flow = false
}

resource "azuread_service_principal" "aks_sp" {
  application_id = "${azuread_application.aks_sp.application_id}"
}

resource "random_password" "aks_sp_pwd" {
  length  = 16
  special = true
}

resource "azuread_service_principal_password" "aks_sp_pwd" {
  service_principal_id = "${azuread_service_principal.aks_sp.id}"
  value                = "${random_password.aks_sp_pwd.result}"
  end_date             = "2024-01-01T01:02:03Z"
}

resource "azurerm_role_assignment" "aks_sp_role_assignment" {
  scope                = "${data.azurerm_subscription.current.id}"
  role_definition_name = "Contributor"
  principal_id         = "${azuread_service_principal.aks_sp.id}"

  depends_on = [
    azuread_service_principal_password.aks_sp_pwd
  ]
}

Create the AKS cluster

Everything is now ready for the provisioning of the cluster. But hey, we created the AAD applications, but haven’t granted admin consent?! We can also do this via our Terraform script and that’s what we will be doing before finally creating the cluster.

Azure is sometimes a bit too fast in sending a 200 and signalling that a resource is ready. In the background, not all services already have access to e.g. newly created applications. So it happens that things fail although they shouldn’t 🙂 Therefore, we simply wait a few seconds and give AAD time to distribute the application information before kicking off the cluster creation.

# K8s cluster

# Before giving consent, wait. Sometimes Azure returns a 200, but not all services have access to the newly created applications/services.

resource "null_resource" "delay_before_consent" {
  provisioner "local-exec" {
    command = "sleep 60"
  }
  depends_on = [
    azuread_service_principal.aks-aad-srv,
    azuread_service_principal.aks-aad-client
  ]
}

# Give admin consent - SP/az login user must be AAD admin

resource "null_resource" "grant_srv_admin_constent" {
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.aks-aad-srv.application_id}"
  }
  depends_on = [
    null_resource.delay_before_consent
  ]
}
resource "null_resource" "grant_client_admin_constent" {
  provisioner "local-exec" {
    command = "az ad app permission admin-consent --id ${azuread_application.aks-aad-client.application_id}"
  }
  depends_on = [
    null_resource.delay_before_consent
  ]
}

# Again, wait for a few seconds...

resource "null_resource" "delay" {
  provisioner "local-exec" {
    command = "sleep 60"
  }
  depends_on = [
    null_resource.grant_srv_admin_constent,
    null_resource.grant_client_admin_constent
  ]
}

# Create the cluster

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "${var.clustername}"
  location            = "${var.location}"
  resource_group_name = "${var.rg-name}"
  dns_prefix          = "${var.clustername}"

  default_node_pool {
    name            = "default"
    type            = "VirtualMachineScaleSets"
    node_count      = 2
    vm_size         = "Standard_B2s"
    os_disk_size_gb = 30
    max_pods        = 50
  }
  service_principal {
    client_id     = "${azuread_application.aks_sp.application_id}"
    client_secret = "${random_password.aks_sp_pwd.result}"
  }
  role_based_access_control {
    azure_active_directory {
      client_app_id     = "${azuread_application.aks-aad-client.application_id}"
      server_app_id     = "${azuread_application.aks-aad-srv.application_id}"
      server_app_secret = "${random_password.aks-aad-srv.result}"
      tenant_id         = "${data.azurerm_subscription.current.tenant_id}"
    }
    enabled = true
  }
  depends_on = [
    azurerm_role_assignment.aks_sp_role_assignment,
    azuread_service_principal_password.aks_sp_pwd
  ]
}

Assign the AAD admin group to be cluster-admin

When the cluster is finally created, we need to assign the Kubernetes cluster role cluster-admin to our AAD cluster admin group. We simply get access to the Kubernetes cluster by adding the Kubernetes Terraform provider. Because we already have a working integration with AAD, we need to use the admin credentials of our cluster! But that will be the last time we ever need them.

To be able to use the admin credentials, we point the Kubernetes provider to use kube_admin_config which is automatically provided for us.

In the last step, we bind the cluster role to the aforementioned AAD cluster admin group id.

# Role assignment

# Use ADMIN credentials
provider "kubernetes" {
  host                   = "${azurerm_kubernetes_cluster.aks.kube_admin_config.0.host}"
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config.0.cluster_ca_certificate)}"
}

# Cluster role binding to AAD group

resource "kubernetes_cluster_role_binding" "aad_integration" {
  metadata {
    name = "${var.clustername}admins"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind = "Group"
    name = "${azuread_group.aks-aad-clusteradmins.id}"
  }
  depends_on = [
    azurerm_kubernetes_cluster.aks
  ]
}

Run the Terraform script

Now that we have discussed all the relevant parts of the script, it’s time to let the Terraform magic happen 🙂 Run the script via…

$ terraform init

# ...and then...

$ terraform apply

Access the Cluster

When the script has finished, it’s time to access the cluster and try to log on. First, let’s do the “negative check” and try to access it without having been added as a cluster admin (AAD group member).

After downloading the user credentials and querying the cluster nodes, the OAuth 2.0 Device Authorization Grant flow kicks in and we need to authenticate against our Azure directory (as you might know it from logging in with Azure CLI).

$ az aks get-credentials --resource-group <RESOURCE_GROUP> -n <CLUSTER_NAME>

$ kubectl get nodes
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code DP9JA76WS to authenticate.

Error from server (Forbidden): nodes is forbidden: User "593736cb-1f95-4f23-bfbd-75891886b05f" cannot list resource "nodes" in API group "" at the cluster scope

Great, we get the expected authorization error!

Now add a user from the Azure Active Directory to the AAD admin group in the portal. Navigate to “Azure Active Directory” –> “Groups” and select your cluster-admin group. On the left navigation, select “Members” and add e.g. your own Azure user.
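
If you don’t want to click through the portal, the same can be achieved with the Azure CLI. A small sketch – the group name follows the <CLUSTER_NAME>clusteradmin pattern from the Terraform variables, and the property names are the ones the Azure CLI returned at the time of writing:

# Object id of the user you want to make a cluster admin (here: the signed-in user)
$ USER_ID=$(az ad signed-in-user show --query objectId -o tsv)

# Add that user to the cluster admin group created by the Terraform script
$ az ad group member add --group "<CLUSTER_NAME>clusteradmin" --member-id $USER_ID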

Now go back to the command line and try again. One last time, download the user credentials with az aks get-credentials (it will simply overwrite the former entry in your .kubeconfig to make sure we get the latest information from AAD).

$ az aks get-credentials --resource-group <RESOURCE_GROUP> -n <CLUSTER_NAME>

$ kubectl get nodes
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code ASGRA765S to authenticate.

NAME                              STATUS   ROLES   AGE   VERSION
aks-default-41331054-vmss000000   Ready    agent   18m   v1.13.12
aks-default-41331054-vmss000001   Ready    agent   18m   v1.13.12

Wrap Up

So, that’s all we wanted to achieve! We have created an AKS cluster with fully-automated Azure Active Directory integration, added a default AAD group for our Kubernetes admins and bound it to the “cluster-admin” role of Kubernetes – all done by a Terraform script which can now be integrated with you CI/CD pipeline to create compliant and AAD-secured AKS clusters (as many as you want ;)).

Well, we could also have added a user to the admin group automatically, but that’s the only manual step in our scenario…and hey, you would have needed to do it anyway 🙂

You can find the complete script including the variables.tf file on my GitHub account. Feel free to use it in your own projects.

House-Keeping

To remove all of the provisioned resources (service principals, AAD groups, Kubernetes service, storage accounts etc.) simply…

$ terraform destroy

# ...and then...

$ az group delete -n tf-rg

Jürgen Gutsch: ASP.NET Hack Advent Post 15: About silos and hierarchies in software development

This post is a special one. It is not really related to .NET Core or ASP.NET Core, but to software development in general. I recently stumbled upon this post, and while reading it I found myself remembering the days when I had to write code that others had estimated and specified for me.

About silos and hierarchies in software development

The woman who wrote this post lives in Cologne, Germany, and has worked in really special environments, like self-organizing teams and companies. I met Krisztina Hirth several times at community events in Germany. I really like her ideas, the way she thinks and the way she writes. You should definitely also read her other posts on her blog: https://yellow-brick-code.org/

Twitter: https://twitter.com/yellowbrickc

Jürgen Gutsch: ASP.NET Hack Advent Post 14: MailKit

This fourteenth post is about a cross-platform .NET library for IMAP, POP3, and SMTP.

On Twitter I got asked about sending emails from a worker service. So I searched for the documentation about System.Net.Mail and the SmtpClient class and was really surprised that this class is marked as obsolete. It seems I missed the announcement about this.

The .NET team recommends using MailKit and MimeKit to send emails.

Both libraries are open source under the MIT license and free to use in commercial projects. These libraries seem to be really complete and provide a lot of useful features.

Website: http://www.mimekit.net/

MailKit:

GitHub: https://github.com/jstedfast/MailKit

NuGet: https://www.nuget.org/packages/MailKit/

MimeKit:

GitHub: https://github.com/jstedfast/MimeKit

NuGet: https://www.nuget.org/packages/MimeKit/
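
Getting started is easy: both packages are on NuGet and can be added to a project right from the command line (MailKit pulls in MimeKit as a dependency anyway):

$ dotnet add package MailKit
$ dotnet add package MimeKit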

Jürgen Gutsch: ASP.NET Hack Advent Post 13:.NET Conf: Focus on Blazor

The .NET Conf in September this year was great, and it was a pleasure to also do a talk on the 25th, which was the community day with a lot of awesome talks from community folks around the world. I'm really looking forward to the next one and hope I can do another talk next year.

Yesterday I was surprised when I stumbled upon the announcement about another special .NET Conf that is scheduled for January 14th. This is really a special one with the focus on Blazor:

https://focus.dotnetconf.net/

The schedule isn't online yet, but will be there soon, as they wrote.

I like the idea of having specially focused .NET Confs. The infrastructure with the Channel 9 studios is already available, so it is cheap to set up a virtual conference like this. And I can imagine a few more topics to focus on:

  • Entity Framework Core
  • ASP.NET Core
  • Async/Await
  • Desktop
  • And maybe a MVP driven .NET Conf during the MVP Summit 2020 in March ;-)

Holger Schwichtenberg: .NET Core 3.1 ist ein ungewöhnliches Release

Almost no new features, essentially just bug fixes, and even breaking changes, which should not exist at all in a release that only changes the second digit of the version number.

Jürgen Gutsch: ASP.NET Hack Advent Post 12:.NET Rocks Podcasts

Do you like podcasts? Do you like entertaining and funny technical podcasts about .NET? I definitely do. I like to listen to them while commuting. The best and (I guess) the most famous .NET related podcast is .NET Rocks:

https://www.dotnetrocks.com/

Carl Franklin and Richard Campbell really do a great show. They invite a lot of cool and well-known experts to their shows and discuss cutting-edge topics around .NET and Microsoft technologies.

Jürgen Gutsch: ASP.NET Hack Advent Post 11: Updating an ASP.NET Core 2.2 Web Site to .NET Core 3.1

.NET Core 3.1 is out, but how do you update your ASP.NET Core 2.2 application? Scott Hanselman recently wrote a pretty detailed and complete post about it.

https://www.hanselman.com/blog/UpdatingAnASPNETCore22WebSiteToNETCore31LTS.aspx

This post also includes details on how to update the deployment and hosting part on Azure DevOps and Azure App Service.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 10: Wasmtime

WebAssembly is pretty popular among .NET developers these days. With Blazor, we have the possibility to run .NET assemblies in WebAssembly inside a browser.

But did you know that you can run WebAssembly outside the web and that you can run WebAssembly code without a browser? This can be done with the open-source, cross-platform application runtime called Wasmtime. With Wasmtime you are able to load and execute WebAssembly code directly from your program. Wasmtime is developed and maintained by the Bytecode Alliance.

The Bytecode Alliance is an open source community dedicated to creating secure new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI).

Website: https://wasmtime.dev/

GitHub: https://github.com/bytecodealliance/wasmtime/

I wouldn't write about it if it weren't somehow related to .NET Core. The Bytecode Alliance just added a preview of an API for .NET Core. That means you can now execute WebAssembly code from your .NET Core application. For more details, see this blog post by Peter Huene:

https://hacks.mozilla.org/2019/12/using-webassembly-from-dotnet-with-wasmtime/

He wrote a pretty detailed blog post about Wasmtime and how to use it within a .NET Core application. Also the Bytecode Alliance added a .NET Core sample and created a NuGet package:

https://github.com/bytecodealliance/wasmtime-demos/tree/master/dotnet

https://www.nuget.org/packages/Wasmtime
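
If you want to give the preview a try yourself, wiring up a new .NET Core console project with the package is just a few commands (the project name is arbitrary, of course):

$ dotnet new console -n WasmtimeDemo
$ cd WasmtimeDemo
$ dotnet add package Wasmtime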

So Wasmtime is kind of the opposite of Blazor: instead of running .NET code inside WebAssembly, you are now also able to run WebAssembly inside .NET Core.

Jürgen Gutsch: ASP.NET Hack Advent Post 09: November 2019 .NET/ASP.NET Documentation Update

For the ninth post, I found a pretty useful blog post about .NET Core and ASP.NET Core documentation updates for version 3.0. This post was written by Maxime Rouiller, a former MVP who now works for Microsoft as a Microsoft Cloud Developer Advocate.

In this post, he shows all the important updates related to version 3.0, structured by topic and including links to the updated documentation. There is definitely a lot of stuff he mentions that you should read:

https://blog.maximerouiller.com/post/november-2019-net-aspnet-documentation-update/

BTW: I personally met Maxime during the MVP Summit back when he was still an MVP. I first met him during breakfast at one of the summit hotels. He asked the MVPs at the breakfast table to try to pronounce his name, and I was one of those who tried the French pronunciation, which turned out to be right. This guy is so cool and funny. It was a pleasure to meet him.

Blog: https://blog.maximerouiller.com

Twitter: https://twitter.com/MaximRouiller

GitHub: https://github.com/MaximRouiller

Christian Dennig [MS]: Using Rook / Ceph with PVCs on Azure Kubernetes Service

Introduction

As you all know by now, Kubernetes is a quite popular platform for running cloud-native applications at scale. A common recommendation when doing so is to outsource as much state as possible, because managing state in Kubernetes is not a trivial task. It can be quite hard, especially when you have a lot of attach/detach operations on your workloads. Things can go terribly wrong and – of course – your application and your users will suffer from that. A solution that is becoming more and more popular in that space is Rook in combination with Ceph.

Rook is described on their homepage rook.io as follows:

Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.

Rook is a project of the Cloud Native Computing Foundation, at the time of writing in “incubating” status.

Ceph in turn is a free-software storage platform that implements storage on a cluster, and provides interfaces for object-, block- and file-level storage. It has been around for many years in the open-source space and is a battle-proven distributed storage system. Huge storage systems have been implemented with Ceph.

So in a nutshell, Rook enables Ceph storage systems to run on Kubernetes using Kubernetes primitives. The basic architecture for that inside a Kubernetes cluster looks as follows:

rook-architecture
Rook in-cluster architecture

I won’t go into all of the details of Rook / Ceph, because I’d like to focus on simply running and using it on AKS in combination with PVCs. If you want to have a step-by-step introduction, there is a pretty good “Getting Started” video by Tim Serewicz on Vimeo:

First, we need a Cluster!

So, let’s start by creating a Kubernetes cluster on Azure. We will be using different nodepools for running our storage (nodepool: npstorage) and application workloads (nodepool: npstandard).

# Create a resource group

$ az group create --name rooktest-rg --location westeurope

# Create the cluster

$ az aks create \
--resource-group rooktest-rg \
--name myrooktestclstr \
--node-count 3 \
--kubernetes-version 1.14.8 \
--enable-vmss \
--nodepool-name npstandard \
--generate-ssh-keys

Add Storage Nodes

After the cluster has been created, add the npstorage nodepool:

az aks nodepool add --cluster-name myrooktestclstr \
--name npstorage --resource-group rooktest-rg \ 
--node-count 3 \
--node-taints storage-node=true:NoSchedule

Please be aware that we add taints to our nodes to make sure that no pods will be scheduled on this nodepool unless they explicitly tolerate the taint. We want these nodes exclusively for storage pods!

If you need a refresh regarding the concept of “taints and tolerations”, please see the Kubernetes documentation.

So, now that we have a cluster and a dedicated nodepool for storage, we can download the cluster config.

az aks get-credentials \
--resource-group rooktest-rg \
--name myrooktestclstr

Let’s look at the nodes of our cluster:

$ kubectl get nodes

NAME                                 STATUS   ROLES   AGE    VERSION
aks-npstandard-33852324-vmss000000   Ready    agent   10m    v1.14.8
aks-npstandard-33852324-vmss000001   Ready    agent   10m    v1.14.8
aks-npstandard-33852324-vmss000002   Ready    agent   10m    v1.14.8
aks-npstorage-33852324-vmss000000    Ready    agent   2m3s   v1.14.8
aks-npstorage-33852324-vmss000001    Ready    agent   2m9s   v1.14.8
aks-npstorage-33852324-vmss000002    Ready    agent   119s   v1.14.8

So, we now have three nodes for storage and three nodes for our application workloads. From an infrastructure perspective, we are now ready to install Rook.
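
If you want to double-check that the taint is really in place on the storage nodes, you can query it with kubectl (a quick sketch, node name taken from the listing above):

$ kubectl describe node aks-npstorage-33852324-vmss000000 | grep -i taints
# expected: storage-node=true:NoSchedule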

Install Rook

Let’s start installing Rook by cloning the repository from GitHub:

$ git clone https://github.com/rook/rook.git

After we have downloaded the repo to our local machine, there are three steps we need to perform to install Rook:

  1. Add Rook CRDs / namespace / common resources
  2. Add and configure the Rook operator
  3. Add the Rook cluster

So, switch to the /cluster/examples/kubernetes/ceph directory and follow the steps below.

1. Add Common Resources

$ kubectl apply -f common.yaml

The common.yaml contains the namespace rook-ceph, common resources (e.g. clusterroles, bindings, service accounts etc.) and some Custom Resource Definitions from Rook.
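
If you want to verify that everything from common.yaml has been created before moving on, a quick check could look like this:

$ kubectl get ns rook-ceph
$ kubectl get crds | grep rook.io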

2. Add the Rook Operator

The operator is responsible for managing Rook resources and needs to be configured to run on Azure Kubernetes Service. To manage Flex Volumes, AKS uses a directory that’s different from the “default directory”. So, we need to tell the operator which directory to use on the cluster nodes.

Furthermore, we need to adjust the settings for the CSI plugin so that the corresponding daemonsets also run on the storage nodes (remember, we added taints to those nodes; by default, the pods of the daemonsets Rook needs won’t be scheduled on our storage nodes, so we need to “tolerate” the taint).

So, here’s the full operator.yaml file (→ important parts)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph
  labels:
    operator: rook
    storage-backend: ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      serviceAccountName: rook-ceph-system
      containers:
      - name: rook-ceph-operator
        image: rook/ceph:master
        args: ["ceph", "operator"]
        volumeMounts:
        - mountPath: /var/lib/rook
          name: rook-config
        - mountPath: /etc/ceph
          name: default-config-dir
        env:
        - name: ROOK_CURRENT_NAMESPACE_ONLY
          value: "false"
        - name: FLEXVOLUME_DIR_PATH
          value: "/etc/kubernetes/volumeplugins"
        - name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS
          value: "false"
        - name: ROOK_LOG_LEVEL
          value: "INFO"
        - name: ROOK_CEPH_STATUS_CHECK_INTERVAL
          value: "60s"
        - name: ROOK_MON_HEALTHCHECK_INTERVAL
          value: "45s"
        - name: ROOK_MON_OUT_TIMEOUT
          value: "600s"
        - name: ROOK_DISCOVER_DEVICES_INTERVAL
          value: "60m"
        - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
          value: "false"
        - name: ROOK_ENABLE_SELINUX_RELABELING
          value: "true"
        - name: ROOK_ENABLE_FSGROUP
          value: "true"
        - name: ROOK_DISABLE_DEVICE_HOTPLUG
          value: "false"
        - name: ROOK_ENABLE_FLEX_DRIVER
          value: "false"
        # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
        # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs. --> CHANGED to false
        - name: ROOK_ENABLE_DISCOVERY_DAEMON
          value: "false"
        - name: ROOK_CSI_ENABLE_CEPHFS
          value: "true"
        - name: ROOK_CSI_ENABLE_RBD
          value: "true"
        - name: ROOK_CSI_ENABLE_GRPC_METRICS
          value: "true"
        - name: CSI_ENABLE_SNAPSHOTTER
          value: "true"
        - name: CSI_PROVISIONER_TOLERATIONS
          value: |
            - effect: NoSchedule
              key: storage-node
              operator: Exists
        - name: CSI_PLUGIN_TOLERATIONS
          value: |
            - effect: NoSchedule
              key: storage-node
              operator: Exists
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: rook-config
        emptyDir: {}
      - name: default-config-dir
        emptyDir: {}

3. Create the Cluster

Deploying the Rook cluster is as easy as installing the Rook operator. As we are running our cluster with the Azure Kubernetes Service – a managed service – we don’t want to manually add disks to our storage nodes. Also, we don’t want to use a directory on the OS disk (which most of the examples out there will show you) as this will be deleted when the node is upgraded to a new Kubernetes version.

In this sample, we want to leverage Persistent Volumes / Persistent Volume Claims that will be used to request Azure Managed Disks which will in turn be dynamically attached to our storage nodes. Thankfully, when we installed our cluster, a corresponding storage class for using Premium SSDs from Azure was also created.

$ kubectl get storageclass

NAME                PROVISIONER                AGE
default (default)   kubernetes.io/azure-disk   15m
managed-premium     kubernetes.io/azure-disk   15m

Now, let’s create the Rook Cluster. Again, we need to adjust the tolerations and add a node affinity that our OSDs will be scheduled on the storage nodes (→ important parts):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
    volumeClaimTemplate:
      spec:
        storageClassName: managed-premium
        resources:
          requests:
            storage: 10Gi
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
    allowUnsupported: false
  dashboard:
    enabled: true
    ssl: true
  network:
    hostNetwork: false
  storage:
    storageClassDeviceSets:
    - name: set1
      # The number of OSDs to create from this device set
      count: 4
      # IMPORTANT: If volumes specified by the storageClassName are not portable across nodes
      # this needs to be set to false. For example, if using the local storage provisioner
      # this should be false.
      portable: true
      # Since the OSDs could end up on any node, an effort needs to be made to spread the OSDs
      # across nodes as much as possible. Unfortunately the pod anti-affinity breaks down
      # as soon as you have more than one OSD per node. If you have more OSDs than nodes, K8s may
      # choose to schedule many of them on the same node. What we need is the Pod Topology
      # Spread Constraints, which is alpha in K8s 1.16. This means that a feature gate must be
      # enabled for this feature, and Rook also still needs to add support for this feature.
      # Another approach for a small number of OSDs is to create a separate device set for each
      # zone (or other set of nodes with a common label) so that the OSDs will end up on different
      # nodes. This would require adding nodeAffinity to the placement here.
      placement:
        tolerations:
        - key: storage-node
          operator: Exists
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - npstorage
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd-prepare
              topologyKey: kubernetes.io/hostname
      resources:
        limits:
          cpu: "500m"
          memory: "4Gi"
        requests:
          cpu: "500m"
          memory: "2Gi"
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          resources:
            requests:
              storage: 100Gi
          storageClassName: managed-premium
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api

So, after a few minutes, you will see some pods running in the rook-ceph namespace. Make sure that the OSD pods are running before continuing with configuring the storage pool.

$ kubectl get pods -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-4qxsv                                            3/3     Running     0          28m
csi-cephfsplugin-d2klt                                            3/3     Running     0          28m
csi-cephfsplugin-jps5r                                            3/3     Running     0          28m
csi-cephfsplugin-kzgrt                                            3/3     Running     0          28m
csi-cephfsplugin-provisioner-dd9775cd6-nsn8q                      4/4     Running     0          28m
csi-cephfsplugin-provisioner-dd9775cd6-tj826                      4/4     Running     0          28m
csi-cephfsplugin-rt6x2                                            3/3     Running     0          28m
csi-cephfsplugin-tdhg6                                            3/3     Running     0          28m
csi-rbdplugin-6jkx5                                               3/3     Running     0          28m
csi-rbdplugin-clfbj                                               3/3     Running     0          28m
csi-rbdplugin-dxt74                                               3/3     Running     0          28m
csi-rbdplugin-gspqc                                               3/3     Running     0          28m
csi-rbdplugin-pfrm4                                               3/3     Running     0          28m
csi-rbdplugin-provisioner-6dfd6db488-2mrbv                        5/5     Running     0          28m
csi-rbdplugin-provisioner-6dfd6db488-2v76h                        5/5     Running     0          28m
csi-rbdplugin-qfndk                                               3/3     Running     0          28m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000c8gdp   1/1     Running     0          16m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000tfk2s   1/1     Running     0          13m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000xfnhx   1/1     Running     0          13m
rook-ceph-crashcollector-aks-npstorage-33852324-vmss000001c6cbd   1/1     Running     0          5m31s
rook-ceph-crashcollector-aks-npstorage-33852324-vmss000002t6sgq   1/1     Running     0          2m48s
rook-ceph-mgr-a-5fb458578-s2lgc                                   1/1     Running     0          15m
rook-ceph-mon-a-7f9fc6f497-mm54j                                  1/1     Running     0          26m
rook-ceph-mon-b-5dc55c8668-mb976                                  1/1     Running     0          24m
rook-ceph-mon-d-b7959cf76-txxdt                                   1/1     Running     0          16m
rook-ceph-operator-5cbdd65df7-htlm7                               1/1     Running     0          31m
rook-ceph-osd-0-dd74f9b46-5z2t6                                   1/1     Running     0          13m
rook-ceph-osd-1-5bcbb6d947-pm5xh                                  1/1     Running     0          13m
rook-ceph-osd-2-9599bd965-hprb5                                   1/1     Running     0          5m31s
rook-ceph-osd-3-557879bf79-8wbjd                                  1/1     Running     0          2m48s
rook-ceph-osd-prepare-set1-0-data-sv78n-v969p                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-1-data-r6d46-t2c4q                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-2-data-fl8zq-rrl4r                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-3-data-qrrvf-jjv5b                     0/1     Completed   0          15m

Configuring Storage

Before Rook can provision persistent volumes, either a filesystem or a storage pool should be configured. In our example, a Ceph Block Pool is used:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3

Next, we also need a storage class that will be using the Rook cluster / storage pool. In our example, we will not be using Flex Volumes (which will be deprecated in future versions of Rook/Ceph); instead, we use the Container Storage Interface (CSI).

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    clusterID: rook-ceph
    pool: replicapool
    imageFormat: "2"
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete

Test

Now, let’s have a look at the dashboard which was also installed when we created the Rook cluster. Therefore, we are port-forwarding the dashboard service to our local machine. The service itself is secured by username/password. The default username is admin and the password is stored in a K8s secret. To get the password, simply run the following command.

$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password \ 
    -o jsonpath="{['data']['password']}" | base64 --decode && echo
# copy the password

$ kubectl port-forward svc/rook-ceph-mgr-dashboard 8443:8443 \ 
    -n rook-ceph

Now access the dashboard by heading to https://localhost:8443/#/dashboard

Screenshot 2019-12-08 at 22.25.01
Ceph Dashboard

As you can see, everything looks healthy. Now let’s create a pod that’s using a newly created PVC leveraging that Ceph storage class.

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pv-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Pod

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pv-pod
spec:
  volumes:
    - name: ceph-pv-claim
      persistentVolumeClaim:
        claimName: ceph-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: ceph-pv-claim

As a result, you will now have an NGINX pod running in your Kubernetes cluster with a PV attached/mounted under /usr/share/nginx/html.
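
To verify that everything worked as expected, you can check that the PVC has been bound and that the volume is actually mounted inside the pod:

$ kubectl get pvc ceph-pv-claim
$ kubectl exec ceph-pv-pod -- df -h /usr/share/nginx/html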

Wrap Up

So…what exactly did we achieve with this solution now? We have created a Ceph storage cluster on AKS that uses PVCs to manage storage. Okay, so what? Well, the usage of volume mounts in your deployments with Ceph is now super-fast and rock-solid, because we do not have to attach physical disks to our worker nodes anymore. We just use the ones we created during Rook cluster provisioning (remember those four 100GB disks?)! We minimized the number of “physical attach/detach” operations on our nodes. That’s why you won’t see those popular “WaitForAttach” or “Cannot find LUN for disk” errors anymore.

Hope this helps someone out there! Have fun with it.

Update: Benchmarks

Short update on this. Today, I did some benchmarking with dbench (https://github.com/leeliu/dbench/), comparing Rook Ceph and “plain” PVCs with the same Azure Premium SSD disks (default AKS StorageClass managed-premium, VM type: Standard_DS2_v2). Here are the results. As you can see, it depends on your workload, so judge for yourself.
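
Running dbench essentially means deploying a Kubernetes Job that requests a PVC from the storage class you want to measure. A minimal sketch of the workflow could look like this (the manifest URL, field names and Job name follow the dbench README from memory, so treat them as assumptions and double-check against the repository):

# download the dbench Job manifest and point it at the storage class under test
$ curl -sLO https://raw.githubusercontent.com/leeliu/dbench/master/dbench.yaml
$ sed -i 's/storageClassName: .*/storageClassName: rook-ceph-block/' dbench.yaml

# run the benchmark and follow its output; the summary is printed at the end
$ kubectl apply -f dbench.yaml
$ kubectl logs -f job/dbench

# clean up afterwards (the Job creates its own PVC)
$ kubectl delete -f dbench.yaml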

Rook Ceph

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 10.6k/571. BW: 107MiB/s / 21.2MiB/s
Average Latency (usec) Read/Write: 715.53/31.70
Sequential Read/Write: 100MiB/s / 43.2MiB/s
Mixed Random Read/Write IOPS: 1651/547

PVC with Azure Premium SSD

A 100GB disk was used to have a fair comparison

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 8155/505. BW: 63.7MiB/s / 63.9MiB/s
Average Latency (usec) Read/Write: 505.73/
Sequential Read/Write: 63.6MiB/s / 65.3MiB/s
Mixed Random Read/Write IOPS: 1517/505

Jürgen Gutsch: ASP.NET Hack Advent Post 08: Hanselman debugs a .NET Core Linux app in WSL2 with VS on Windows

Scott Hanselman also loves hacking: hacking on small devices, on Windows and on Linux. In the post I want to introduce today, he shows how to debug a .NET Core Linux app that runs in WSL2 with Visual Studio on Windows:

Remote Debugging a .NET Core Linux app in WSL2 from Visual Studio on Windows

This is one of those posts where he puts things together that might not seem to match, or that didn't match in the past. The fact that Linux runs natively inside Windows was hard to imagine not long ago, and the fact that we as developers are able to remote debug a .NET Core app on any platform is incredibly awesome. Hacking things together that might not match is one of the most interesting topics for me as well. Getting .NET apps running on Linux-based small devices like the Raspberry Pi, or hosting Mono-based ASP.NET WebForms apps on an Apache running on SUSE Linux, are things I did in the past and still do whenever I find some time. This is why I really love those posts written by Hanselman.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 07: Blazorise

Recently I stumbled upon a really cool project that provides frontend components for Blazor. It supports Blazor server side and Blazor WebAssembly on the client side. I found the project while I was searching for a chart component for a Blazor demo application I'm currently working on.

The project is called Blazorise; it is completely open source and hosted on GitHub. It is built on top of Blazor and CSS frameworks like Bootstrap, Material and Bulma. (Actually, I had never heard of Bulma before.)

Blazorise contains a lot of useful components, including a library to create Charts and DataGrids. It is actively maintained, well documented and also has demo pages for all three CSS Framework implementations.

If you are working with Blazor, you should have a look at it:

Website: https://blazorise.com/

GitHub: https://github.com/stsrki/Blazorise

Jürgen Gutsch: ASP.NET Hack Advent Post 06: Andrew Lock's blog

This sixth post is about a blog that is full of diverse but detailed posts about .NET Core and ASP.NET Core. The blog's name, ".NET Escapades", pretty much says it: the author writes about almost everything he experiences around .NET Core and ASP.NET Core.

The blog is run by Andrew Lock, a full-stack ASP.NET developer living in Devon (UK). Like the other blog authors I introduced in the previous advent posts, he is a Microsoft MVP and deeply involved and well known in the .NET developer community.

He also published the book ASP.NET Core in Action in June last year.

Blog: https://andrewlock.net/

GitHub: https://github.com/andrewlock

Twitter: https://twitter.com/andrewlocknet

Golo Roden: How many programming languages are too many?

Various approaches increasingly make it possible to combine different programming languages within a single project. But not everything that is technically possible is also sensible.

Code-Inside Blog: Did you know that you can build .NET Core apps with MSBuild.exe?

The problem

We recently updated a bunch of our applications to .NET Core 3.0. Because of the improved compatibility with the “old framework”, we are trying to move more and more projects to .NET Core. Some projects still target .NET Framework 4.7.2, but they should work “ok-ish” when used from .NET Core 3.0 applications.

The first tests were quite successful, but unfortunately, when we tried to build and publish the updated .NET Core 3.0 app via ‘dotnet publish’ (with a reference to a .NET Framework 4.7.2 project), we faced this error:

C:\Program Files\dotnet\sdk\3.0.100\Microsoft.Common.CurrentVersion.targets(3639,5): error MSB4062: The "Microsoft.Build.Tasks.AL" task could not be loaded from the assembly Microsoft.Build.Tasks.Core, Version=15.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a.  Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. 

The root cause

After some experiments we saw a pattern:

Each .NET Framework 4.7.2 based project with a ‘.resx’ file would result in the above error.

The solution

‘.resx’ files are still a perfectly valid thing to use, so we checked whether we could work around the problem, but unfortunately that was not very successful. We moved some resources, but in the end some resources had to stay in their corresponding files.

So far we had used the ‘dotnet publish…’ command to build and publish .NET Core based applications. Then I tried to build the .NET Core application with MSBuild.exe instead and discovered that this worked.

Lessons learned

If you have a mixed environment with “old” .NET Framework based projects that use resources and want to combine them with .NET Core: try the “old school” MSBuild.exe way.

MSBuild.exe is capable of building .NET Core applications, and the result is more or less the same.
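
A minimal sketch of such a build, assuming an SDK-style project and a recent MSBuild.exe (Visual Studio 2019 / 16.3 or later, with the .NET Core 3.0 SDK installed); the project path and publish directory are just placeholders:

REM run from a Developer Command Prompt so MSBuild.exe is on the PATH
MSBuild.exe MyApp\MyApp.csproj /restore /t:Publish /p:Configuration=Release /p:PublishDir=..\publish\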

Be aware

Regarding ASP.NET Core applications: The ‘dotnet publish’ command will create a web.config file; if you use the MSBuild approach, this file will not be created automatically. I’m not sure if there is a hidden switch, but if you just treat .NET Core apps like .NET Framework console applications, the web.config file is not generated. This might lead to problems when you deploy to IIS.

Hope this helps!

Holger Schwichtenberg: Is ASP.NET Core Blazor finished now or not?

The Dotnet-Doktor explains the difference between Blazor Server (in RTM status) and Blazor WebAssembly (in preview status).

Golo Roden: Tools for web and cloud development

The previous episode of "Götz & Golo" dealt with the question of when teams work well together. The focus there was on working remotely versus on-site. But what about the tools being used?
