Golo Roden: Was man über React wissen sollte

Was ist React? Kurz gefasst handelt es sich bei React um eine Bibliothek zur Entwicklung grafischer Oberflächen für Webanwendungen. Doch was gibt es darüber hinaus über React zu wissen?

Holger Schwichtenberg: Infonachmittag: Eine moderne User Experience (UX) für Ihre Software am 26. November 2020

Bei diesem Online-Event präsentieren User-Experience-Experten ihre Erfahrungen aus spannenden Softwareanwendungen von Versicherungen, Energieversorgern, dem Deutschen Notruf sowie aus dem militärischen Umfeld.

Golo Roden: Zwölf Regeln für Web- und Cloud-Anwendungen

Das Entwickeln skalierbarer und verlässlicher Anwendungen für Web und Cloud ist ein komplexes Thema, das eine gewisse Erfahrung fordert. Dennoch gibt es Leitplanken für den einfachen Einstieg, allen voran die Regeln der 12-Factor-Apps. Was hat es damit auf sich?

Holger Schwichtenberg: Über 80 Neuerungen in Entity Framework Core 5.0

Die am 10. November erschienene Version des OR-Mappers enthält zahlreiche Neuerungen.

Holger Schwichtenberg: .NET 5.0 erscheint am 10. November im Rahmen der .NET Conf

Microsoft wird morgen im Rahmen der Online-Konferenz ".NET Conf 2020" die fertige Version von .NET 5.0 veröffentlichen.

Golo Roden: Extreme Programming, Scrum, Kanban – was für wen?

Extreme Programming (XP), Scrum und Kanban sind die am weitesten verbreiteten agilen Methoden. Doch obwohl es sich bei allen drei um agile Methoden handelt, unterscheiden sie sich teilweise gravierend voneinander. Was sind die Gemeinsamkeiten, was die Unterschiede, und welche Methode eignet sich für wen?

Marco Scheel: Eigene Vorlagen für Microsoft Teams

Microsoft hat im Mai angekündigt, dass man bei der Anlage in Kürze auf von Microsoft definierte Templates zurückgreifen kann und in Zukunft auch eigene Templates im Admin-Center erstellen kann. In der Vergangenheit brauchte man eine Teams-Provisioning-Lösung und konnte nicht auf die eingebauten Dialoge zurückgreifen. Hier ein Beispiel, wie man über ein Site Design einen Microsoft Flow startet, um mit Teams zu interagieren.

In meinem Lab-Tenant ist nun endlich die Erstellung eigener Templates angekommen. Ich zeige euch, was es mit den Templates von Microsoft auf sich hat und was ihr mit den eigenen Templates erreichen könnt.

image

Microsoft Templates

Microsoft bietet aktuell 13 Templates an, die man im Tenant auswählen kann. Die Templates werden mit einer “Industry” Information versehen. Auch wenn euer Unternehmen nicht aus dem Bereich stammt, sind die Templates trotzdem sinnvoll. Hier findet ihr die Dokumentation, was das einzelne Template ausmacht:

Get started with Teams templates using Microsoft Graph - Teams template capabilities

Aus der User-Sicht startet ihr über den normalen Dialog zum Erstellen eines Teams. Habt ihr die Self-Service Creation abgeschaltet, dann sollten wir mal ein ernstes Wort reden. Wenn ihr alles “richtig” gemacht habt, dann sieht der User Folgendes:

image

Auf dieser Seite kann sich der Benutzer anhand einer kurzen Beschreibung über das Template informieren. Ist die Entscheidung gefallen, dann kommen beim Klick weitere Details zum Template:

image

Ab hier geht es wie gewohnt weiter. Klassifizierung + Privacy auswählen und den Namen für das Team festlegen:

image

Jetzt kommen die ersten Unterschiede. Ohne Template ist das Team in Sekunden erstellt und ich kann weitere Benutzer auswählen. Bei der Verwendung einer Vorlage dauert das Erstellen deutlich länger. Es geht jetzt nicht um Stunden, aber der Prozess ist ab jetzt asynchron. Der Benutzer sieht folgende Meldung, die sich in meinem Test auch nach Minuten nicht verändert hat. Das Schließen des Dialoges ist also nicht optional, wie angedeutet.

image

Das fertige Team sieht dann so aus:

image

In meinem Test hat das System am Ende immer “undefined” an den gewählten Namen angehängt, aber als Owner kann man den Namen ja jederzeit ändern.

In der aktuellen Implementierung kann man keines der Microsoft Templates ausblenden. Es gibt auch keine Möglichkeit, Templates an eine Zielgruppe zu verteilen. Alle Templates sind immer für alle Benutzer zu sehen. Ich bin gespannt, wie es hier weitergeht.

Templates administrieren

Als Microsoft Teams Administrator kann ich ab sofort eigene Templates erstellen und in den gezeigten Dialog integrieren. Die eigenen Templates werden immer vor den Microsoft Templates angezeigt. Hier könnt ihr die Microsoft Dokumentation einsehen:

Get started with Teams templates in the admin center

In der aktuellen Implementierung ist der Funktionsumfang recht überschaubar. Ihr könnt Kanäle vordefinieren, Tabs in die Kanäle hängen und generell Apps in das Team einfügen.

Im Admin Center gibt es im Bereich “Teams” einen neuen Navigationspunkt:

image

Hier kann man ein neues Template erstellen. Aktuell hat man drei Optionen:

image

Eigenes Template definieren

Die erste Option “Create a custom team template” führt uns durch den folgenden Dialog. Name des Templates, Beschreibung für den Endbenutzer und Hinweise für die Kollegen im Admin-Team sind verpflichtend:

image

Auf der folgenden Seite kann ich jetzt einen Kanal anlegen und gleichzeitig auch Tabs (Apps) hinzufügen:

image

Sollte eine App nicht über einen Tab hinzugefügt werden, dann kann ich diese auch direkt in das Team einfügen:

image

Ihr könnt den erstellten Kanälen (z. B. dem Standard-Kanal) nachträglich Apps hinzufügen:

image

Hier seht ihr die fertige Definition des Templates:

image

Eigenes Template von einem bestehenden Template

Die zweite Option “Create a team template from an existing team template” schaltet der eigentlichen Definition des Templates die Auswahl eines anderen Templates (Microsoft oder eigenes) vor:

image

Dann geht es weiter wie bei der Definition eines leeren Templates.

Eigenes Template von einem bestehenden Team

Die dritte Option “Create a template from an existing team” erlaubt es, ein bestehendes Team auszuwählen:

image

Dann geht es weiter wie bei der Definition eines leeren Templates.

In dieser Auswahl werden aber auch nur die Channels, Tabs und die Apps übernommen. Inhalte wie Chat, Tab-Konfigurationen oder Dateien kommen nicht mit.

Zusammenfassung

Es ist eine willkommene Option mit viel Potential für die Zukunft. Wenn ihr kein eigenes Teams Provisioning machen wollt, ist das die erste Möglichkeit, euren Usern vorgefertigte Strukturen an die Hand zu geben. Es fehlen aber ganz essenzielle Dinge. Ich kann keine Dateien vorprovisionieren und zum Beispiel als Tab integrieren. Tabs können nicht mit Inhalten gefüllt werden. Ein “Website”-Tab bekommt einen Namen, aber man kann nicht die URL hinterlegen.

Es ist gut zu wissen, dass man hier „out-of-the-box“ Funktionen hat, aber solange Microsoft folgende Punkte nicht adressiert, wird es eine Nischenlösung bleiben:

  • Teams Eigenschaften verändern (Moderation, etc.)
  • Erstellungsdialog muss schneller werden
  • Kein User/Group Targeting für Templates -> Heute sehen alle User alle Templates
  • Die Verwendung eines Templates kann nicht erzwungen werden, der User kann immer noch “From scratch” wählen und sein eigenes Ding machen
  • Microsoft Templates können nicht ausgeblendet werden
  • Keine Inhalte (Tabs, Dateien, …)

Holger Schwichtenberg: Zwangsweises Herunterfahren einer Reihe von Windows-Systemen per PowerShell

Dieses PowerShell-Skript fährt eine Reihe von Windows-Rechnern herunter. Die Rechnernamen kann man im Skript hinterlegen oder aus einer Datei oder dem Active Directory auslesen.
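Eine minimale Skizze, wie ein solches Skript aussehen könnte (Rechnernamen und Dateiname sind frei erfunden; das Original-Skript kann anders aufgebaut sein):

# Rechnernamen direkt im Skript hinterlegen ...
$computer = "PC01", "PC02", "PC03"
# ... oder aus einer Textdatei (ein Name pro Zeile) bzw. aus dem Active Directory lesen
# $computer = Get-Content -Path .\rechner.txt
# $computer = (Get-ADComputer -Filter 'Name -like "PC*"').Name   # benötigt das ActiveDirectory-Modul

# Zwangsweises Herunterfahren aller angegebenen Rechner
Stop-Computer -ComputerName $computer -Force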

Golo Roden: Node.js und Deno im Vergleich

Mit Node.js und Deno stehen inzwischen zwei Laufzeitumgebungen auf Basis von V8 zur Verfügung, die JavaScript (und TypeScript) ausführen können. Während Node.js seit über 10 Jahren existiert, ist Deno noch verhältnismäßig jung. Wie unterscheiden sich die beiden Plattformen, und was spricht für Node.js, was für Deno?

Marco Scheel: Microsoft Teams Recording mit Externen teilen

Microsoft arbeitet an einer neuen Version von Microsoft Stream. Microsoft Teams nutzte bis vor kurzem genau dieses Video-Backend für die Ablage der Meeting Recordings. Seit heute (01.11.2020) beginnt der Rollout für alle Tenants, es sei denn, ihr habt per Meeting Policy ein Opt-Out für eure Benutzer gesetzt. Was das Recording in OneDrive/Teams bedeutet, habe ich euch in folgendem Blogpost demonstriert.

Das Meeting Recording liegt also im SharePoint (für Channel Meetings) oder im OneDrive (für alle anderen Meetings). Ist damit der Blogpost bereits zu Ende? Teilen auf SharePoint kann doch jeder, oder? Hier gibt es die Doku von Microsoft. Natürlich kann ich die Datei einfach über einen neuen Sharing Link teilen. Wenn der externe Benutzer aber im Meeting-Chat auf das Recording klickt, dann gibt es folgenden Fehler, den wir aber “einfach” lösen können.

image

Meeting Recordings richtig berechtigen

Microsoft nutzt die normalen Share-Features für das Teilen des Recordings für interne Benutzer. Wenn ihr jetzt einfach den Share-Dialog am Video nutzt, dann kann der Externe über den Link aus diesem Sharing (wird normal per E-Mail verschickt) zugreifen. Versucht er aber vielleicht später, über Teams und den Meeting-Chat zuzugreifen, dann kommt es wieder zum Fehler, da dort ein anderer Sharing-Link hinterlegt ist:

image

Jetzt zeige ich euch, wie man den Link im Meeting-Chat auch für Externe konfiguriert. Wenn ihr die Datei im SharePoint oder OneDrive geöffnet habt, dann könnt ihr über “Manage access” die aktuellen Sharing Links einsehen. Es gibt viele Möglichkeiten, “Manage access” zu erreichen. In der Ordneransicht klickt ihr “…” auf der entsprechenden Videodatei und wählt dann “Manage access” aus:

image

Ihr seht die aktuell konfigurierten Freigaben:

image

Die Freigabe für die Anzeige der Datei (“View”-Berechtigung) könnt ihr über die “…” Option aufrufen:

image

Gebt die E-Mail des Gasts ein und bestätigt mit “Save”:

image

Wenn ich (als Gast der Besprechung) nun über den Meeting-Chat den Link aufrufe, dann bekomme ich keine Fehlermeldung beim Zugriff:

image

Zusammenfassung

Die meisten Benutzer werden einfach im SharePoint auf “Share” klicken. Der Zugriff für externe Benutzer ist ein tolles Feature und ich will nicht meckern. Sollten sich eure Benutzer über das Verhalten beschweren, dann könnt ihr sie nun über den richtigen Weg aufklären und das Nutzererlebnis verbessern.

Code-Inside Blog: DllRegisterServer 0x80020009 Error

Last week I had a very strange issue and the solution was really “easy”, but took me a while.

Scenario

For our products we build Office COM add-ins with a C++ based “shim” that boots up our .NET code (e.g. something like this). As is the nature of COM: it requires some pretty dumb registry entries to work, and in theory our toolchain should “build” and automatically “register” the output.
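For context, the registration step the toolchain runs boils down to a plain regsvr32 call; a sketch with a placeholder path (the real build uses its own output location):

# /s = silent registration; the DLL path is a placeholder
regsvr32.exe /s "C:\path\to\xxx.dll"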

Problem

The registration process just failed with an error message like this:

The module xxx.dll was loaded but the call to DllRegisterServer failed with error code 0x80020009

After some research you will find some very old stuff or only general advice like in this Stackoverflow.com question, e.g. “run it as administrator”.

The solution

Luckily we had another project where we use the same approach and this worked without any issues. After comparing the files I noticed some subtle differences: the file encoding was different!

In my failing project some C++ files were encoded as UTF-8 with BOM. I changed everything to UTF-8 without BOM and after this change it worked.

My reaction:

(╯°□°)╯︵ ┻━┻

I’m not a C++ dev and I’m not even sure why some files had the wrong encoding in the first place. It “worked” - at least Visual Studio 2019 was able to build the stuff, but registering it with “regsvr32” just failed.

I needed some hours to figure that out.

Hope this helps!

Marco Scheel: Microsoft Teams Recording jetzt in SharePoint statt Microsoft Stream

Auf der Ignite 2020 wurde angekündigt, dass man Microsoft Stream neu erfinden wird. Ich bin Feuer und Flamme für die Idee, wie ihr hier sehen könnt:

Den Microsoft Blogpost mit allen Details findet ihr hier. Heute wollen wir uns die Auswirkungen auf die Meeting Recordings in Microsoft Teams anschauen. In der “Vergangenheit” hatten wir folgende Probleme mit der Ablage in Microsoft Stream:

Mit dem neuen Microsoft Stream gehören diese Probleme der Vergangenheit an und es werden in der nahen Zukunft noch viele Funktionen ergänzt. Zum Start bekommen wir aber eine sehr rudimentäre Implementierung mit ihren eigenen Problemen. Wir schauen einmal auf die entsprechende Implementierung Stand Oktober 2020. Microsoft hat zur Ignite eine dedizierte Session zum Thema Besprechungsaufzeichnung erstellt, in der ihr viele Details findet.

image

Im Meeting

Ich habe für euch ein Meeting dokumentiert und zeige, wo die Unterschiede liegen. Im Microsoft Teams Client bleibt während der Besprechung alles beim Alten. Über die erweiterten Funktionen (…) kann jeder Moderator aus dem einladenden Unternehmen über “Start recording” die Aufzeichnung starten: image

Im Meeting sehen wir:

  • Luke (Meeting Organizer) - luke ät gkmm.org
  • Leia - leia ät gkmm.org
  • Rey - rey ät gkmm.org
  • Kylo - kylo ät gkmm.org
  • Marco (Gast) - marco.scheel ät glueckkanja.com

Die Benutzer werden wie üblich mit einem Banner über den Start der Aufnahme informiert: image

Wird die Aufzeichnung während des Meetings beendet, dann werden die Benutzer über das Speichern informiert: image

Die Aufzeichnung wird im Meeting Chat verlinkt. image

Recording ansehen

Hier kommt die erste Neuerung! Das Video ist deutlich schneller verfügbar. Microsoft Teams erzeugt das Video als MP4 in der Cloud und hat es in der Vergangenheit an Microsoft Stream übergeben. Stream hat dann die Azure Media Services bemüht, um das Video aufzubereiten und für ein adaptives Streaming auszuliefern. Simpel gesagt: Stream hat das Video in verschiedenen Auflösungen gerechnet und kann nahtlos zwischen den Bitraten hin und her wechseln. Das neue Stream wirft einfach das MP4 in SharePoint (oder OneDrive for Business) und stellt es dann über einen einfachen HTML-Player zur Verfügung. Es sollte klar werden, dass ohne die Integration der Azure Media Services in dieser ersten Version einige Funktionen entfallen:

  • Wiedergabegeschwindigkeit. Nicht jeder redet so schnell wie ich und dann kann man schon ein 60 Minuten Video auf 45 Minuten reduzieren, wenn man es in 1,5x wiedergibt.
  • Transkription (Sprache zu Text). Für ein Meeting Recording ziemlich relevant, um zum Beispiel im 2 Stunden Meeting den Moment zu finden, als es um Produkt X ging.
  • Bandbreiten- und performanceabhängiges adaptives Streaming (Wechsel zwischen 1.1 Mbps und 58 Kbps). Für ein Meeting Recording nicht besonders relevant.

Anfang 2020 hatten unsere Video Recordings in Stream noch alle Bitraten. Aktuell sind auch alle Stream-Videos (Meeting Recordings) nur in der originalen Auflösung mit 1.1 Mbps (1080p) verfügbar, es findet also kein adaptives Streaming statt. Corona lässt grüßen?

Der folgende Screenshot zeigt Lukes Browser bei der Wiedergabe, nachdem er im Chat auf “Open in OneDrive” klickt. image

Genau so wird heute bereits jedes andere Video in SharePoint und OneDrive dargestellt. In der Zukunft wird der Content Typ für Video im SharePoint aufgewertet und die Darstellung, die Metadaten und der Lifecycle werden optimiert.

Für alle Meeting Teilnehmer aus der einladenden Organisation wird das Sharing der Datei automatisch eingerichtet: image

Teams vergibt immer zwei Berechtigungen für die Videodatei. Es wird unterschieden in Moderator und Teilnehmer. Moderatoren erhalten Edit-Berechtigungen, Teilnehmer des Meetings erhalten nur View-Berechtigungen.

Gäste werden nicht berücksichtigt. Für mich als Gast endet der Versuch, das Recording anzusehen, so: image

In diesem Blogpost gehe ich auf den richtigen Umgang mit Gästen und dem Recording ein.

Die Videodatei

Es bleibt zu beantworten, wo die Datei dann eigentlich liegt! Wenn man im Chat ein Video anklickt, dann öffnet Teams die Videodatei und spielt sie ab. Keine gute Idee, denn jede Interaktion mit Teams führt zum Abbruch der Wiedergabe und man muss von vorne anfangen. Also am besten gleich die “…” anklicken und “Open in OneDrive” auswählen: image

Die Datei liegt also im OneDrive und wird dort in einem Ordner mit dem Namen “Recordings” abgelegt (bestimmt auch irgendwo “Aufzeichnungen” oder “grabación”). OneDrive ist immer eine persönliche Ablage. Sie gehört einem Benutzer! Keine schöne Lösung. Ein “Meet now” oder ein geplanter Termin landet also im OneDrive eines Benutzers. Welches Benutzers? Des Meeting-Organizers oder des Benutzers, der “Start recording” klickt? Es ist tatsächlich das OneDrive des Benutzers, der am schnellsten klicken kann. Ich hätte den Meeting-Organizer bevorzugt, da er auch der Besitzer des Termins ist. Hier hatte Stream mit seiner “neutralen” Ablageplattform klare Vorteile. Verlässt ein Benutzer das Unternehmen und sein OneDrive wird gelöscht, verschwinden alle Meeting Recordings mit seinem OneDrive! Microsoft hat angekündigt, dass es Mechanismen geben wird, die automatisch “alte” Aufzeichnungen löschen. Der Speicherverbrauch auf SharePoint soll so reduziert werden. Eventuell kommen hier Retention Policies zum Einsatz. Diese Policies können Dateien nicht nur löschen, sondern auch für einen bestimmten Zeitraum garantiert vorhalten. Es kann also sein, dass es hier auch eine Lösung für das Löschen eines OneDrives geben wird.

Meetings können aber auch alternativ in einem Teams-Kanal geplant werden. Diese Channel-Meetings speichern ihr Recording in dem entsprechenden Ordner im Kanal. Wenn ich also im “General”-Kanal aufzeichne, dann liegt die Datei hier: “/sites/YOURTEAMSITE/Shared Documents/General/Recordings”. Ich bin semi-zufrieden. Die Ablage im Team löst die Probleme beim Zugriff und die Frage, wem die Datei gehört. Solltet ihr aber zum Beispiel den Ordner General per OneDrive Client auf eurem Rechner syncen und immer alles offline wollen… dann kommen jetzt auch größere Videodateien mit. Aber man kann nicht alles haben und ich hoffe, dass Microsoft diese Szenarien in Zukunft weiter optimiert.

Das Label der Schaltfläche im Meeting Chat heißt immer “Open in OneDrive” und ändert sich für ein Channel Meeting NICHT in “Open in SharePoint”. Auch hier gibt es die Chance auf Besserung in der Zukunft.

Schauen wir mal direkt auf die Datei. Hier ein Screenshot der erweiterten MP4 Eigenschaften: image

Die Aufzeichnung ist etwas unter 5 Minuten lang und verbraucht ca. 25 MB. Das Video besitzt eine FullHD-Auflösung (1080p). Wenn ich mir den Dateinamen anschaue, dann bin ich super happy. In Stream waren besonders die Titel der Channel Meetings oft nichtssagend, z. B. “Meeting in General”. So baut sich der Dateiname im neuen Stream auf:

  • Demo Luke and Leia-20201024_152149-Meeting Recording.mp4
    • TitelDesMeeting-yyyyMMdd_HHmmss-Meeting Recording
    • TitelDesMeeting = Demo Luke and Leia
    • yyyyMMdd_HHmmss = Start der Aufnahme (Lokale Zeit und nicht UTC), also der Moment in dem “Start recording” geklickt wird
  • Meeting in _Workplace_-20201023_100348-Meeting Recording.mp4
    • MeetingInKanal–yyyyMMdd_HHmmss-Meeting Recording
    • MeetingInKanal = Der Kanal heißt Workplace und in dem Fall wurde leider der Titel des Meetings nicht übernommen.
    • yyyyMMdd_HHmmss = Start der Aufnahme (Lokale Zeit und nicht UTC), also der Moment in dem “Start recording” geklickt wird

Einschalten im Tenant

Microsoft hat eine aktuelle Dokumentation, wann es wie weitergeht. Aktuell kann ein Tenant Admin den Opt-In durchführen, und in wenigen Stunden ist der Tenant bereit und alle neuen Meeting Recordings landen direkt im OneDrive/SharePoint. Wenn der Admin kein Opt-In oder Opt-Out macht, dann wird die Funktion ab Mitte Q4 2020 für den Tenant konfiguriert. Habt ihr für euren Tenant ein Opt-Out konfiguriert, dann ist trotzdem im Q1 2021 Schluss und das Recording auf OneDrive/SharePoint wird auch für euren Tenant umgestellt. Das bedeutet, dass mit Start Q2 2021 alle Meeting Recordings nur noch im neuen Microsoft Stream (aka SharePoint) landen.

Für den Opt-In braucht ihr die Skype for Business PowerShell, um die entsprechende Meeting Policy zu setzen. Seit der Ignite 2020 sind die Cmdlets aber auch in das Microsoft Teams PowerShell Modul integriert.

Import-Module SkypeOnlineConnector
$sfbSession = New-CsOnlineSession
Import-PSSession $sfbSession
Set-CsTeamsMeetingPolicy -Identity Global -RecordingStorageMode "OneDriveForBusiness"

Da es eine Meeting Policy ist, kann man das Feature auch erstmal nur für einzelne Benutzer freischalten.
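Eine ungetestete Skizze, wie so ein Pilot-Rollout aussehen könnte (Policy-Name und Benutzer sind frei gewählt; vorausgesetzt, die Session ist wie oben aufgebaut):

# Eigene Policy anlegen, die Aufzeichnungen nach OneDrive/SharePoint umleitet
New-CsTeamsMeetingPolicy -Identity "RecordToOneDrive" -RecordingStorageMode "OneDriveForBusiness"

# Policy nur einzelnen (Pilot-)Benutzern zuweisen, statt die globale Policy zu ändern
Grant-CsTeamsMeetingPolicy -Identity "pilotuser@contoso.com" -PolicyName "RecordToOneDrive"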

Für den Opt-Out setzt ihr den Wert einfach auf “Stream”:

Set-CsTeamsMeetingPolicy -Identity Global -RecordingStorageMode "Stream"

Zusammenfassung

Microsoft Teams läutet eine neue Video-Ära in Microsoft 365 ein. Die Meeting Recordings können ab sofort für alle oder einzelne Benutzer nach SharePoint/OneDrive umgeleitet werden. Der Funktionsverlust ist angesichts der neuen Freiheit beim Teilen der Aufzeichnungen zu verschmerzen. Folgende Punkte sind also zu beachten:

  • Aufzeichnungen im Benutzer-OneDrive können verloren gehen, wenn das OneDrive gelöscht wird (Mitarbeiter verlässt das Unternehmen)
  • Keine x-fache Wiedergabegeschwindigkeit
  • Keine adaptive Bandbreitenanpassung (aktuell auch in Stream nicht gegeben)
  • Keine Integration in die aktuelle Stream Mobile App
  • Wird eine Datei umbenannt, verschoben oder gelöscht, dann geht der Link im Meeting Chat kaputt
  • Das richtige Teilen mit Externen erfordert einige Klicks (Blogpost coming soon)
  • Keine Möglichkeit alle Meeting Recordings auf einen Blick zu sehen, man muss jedes Mal den Meeting Chat suchen

Holger Schwichtenberg: PowerShell 7: Ternärer Operator ? :

In PowerShell 7.0 hat Microsoft auch den ternären Operator ? : als Alternative zu if … else eingeführt.
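Ein minimales Beispiel (frei gewählte Werte, nur zur Illustration der Syntax):

# PowerShell 7: ternärer Operator statt if/else
$zahl = 42
$text = ($zahl % 2 -eq 0) ? "gerade" : "ungerade"
$text   # Ausgabe: gerade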

Golo Roden: Node.js 15 – das ist alles neu

Node.js 15 ist erschienen – und Node.js 14 LTS erscheint nächste Woche. Was gibt es Neues, was hat sich geändert, und welche Besonderheiten sind beim Installieren zu beachten?

Golo Roden: Was ist Systemarchitektur?

Architektur ist die Beschäftigung mit der Frage, wie man Code strukturiert und orchestriert. Welche Architekturmöglichkeiten gibt es, und welche Vor- und Nachteile haben diese?

Marco Scheel: Teilnehmer in Microsoft Teams für immer stumm schalten

Wer kennt es nicht: Das Meeting startet und plötzlich “piiiiiiiiep-piiiiiep-piiep-piep” … der Projektleiter parkt rückwärts ein. Die Flexibilität in der Teilnahme an einem Teams Meeting ist “nahezu” unbegrenzt. Die Freiheitsgrade sind aber für viele neu und ungeübt. Der geübte Umgang mit dem Meeting-Equipment ist noch in weiter Ferne. Im optimalen Fall setzen wir alle nur für Microsoft Teams zertifizierte Geräte ein und Hardware + Software arbeiten in Harmonie. Wenn ich zu spät in ein Meeting komme, dann hoffentlich “on mute”. Die Realität sieht noch immer anders aus. Besonders in großen Meetings war es ein Problem, dass Teilnehmer jederzeit das Mikrofon öffnen konnten. Ein Microsoft Teams Live-Event war oft keine Lösung, da die Interaktivität in einem späteren Moment fehlte. Durch die Einführung des Roadmap Items 66575 ist das Problem lösbar:

Prevent attendees from unmuting in Teams Meetings

Gives meeting organizers the ability to mute one to many meeting participants while removing the muted participants' ability to unmute themselves.

Hier der Screenshot zum Feature:

image

Teilnehmer (Attendee) vs. Moderator (Presenter)

Microsoft Teams kennt in einem Meeting drei wesentliche Rollen. Für unser Szenario sind nur zwei Rollen interessant. Das Unternehmen kann vorgeben, wie strikt die Moderator-Rolle (Presenter) gehandhabt wird. Im Standard ist jeder Benutzer dieser Rolle zugeordnet und kann damit das Meeting unterstützen oder empfindlich stören. Im Meeting selbst kann jeder Moderator andere Benutzer zum Teilnehmer herabstufen. Teilnehmer können nicht präsentieren und keine Benutzer aus der Konferenz werfen. Für die genaue Übersicht checkt diesen Link.

Welche Optionen das Unternehmen zur Vorgabe hat, findet ihr mit allen Details hier.

  • Jeder
  • Jeder aus dem Unternehmen
  • Nur der Organisator

Meeting planen

Wenn ihr ein Meeting fertig geplant habt, könnt ihr nachträglich in die Meeting-Optionen schauen. In Skype for Business konnte man das schon direkt beim Planen, aber Microsoft hat es sich hier “einfach” gemacht und springt einfach im Browser auf diese Optionen. Über den Beschreibungstext des Termins, neben dem Teilnahme-Link, kann der Organisator auch aus allen anderen Programmen auf diese Optionen zugreifen:

image

Hier die Meeting Optionen im Browser:

image

Wer kann präsentieren? Hier ist der Unternehmensstandard vorgegeben und ihr könnt den Wert nach euren Vorlieben anpassen.

image

Solltet ihr die Option “Specific people” auswählen: In meinem Test konnte ich nur eingeladene Personen aus dem eigenen Unternehmen auswählen. Je nachdem, wer den Termin tatsächlich steuert, müsst ihr hier aufpassen, wenn ihr zum Beispiel durch einen externen Projektleiter unterstützt werdet, der normalerweise das Meeting leitet.

Jetzt kommen wir zum eigentlichen Feature: Wenn ihr schon bei der Planung die Entscheidung treffen könnt, dann stellt ihr hier die Option “Allow attendees to unmute” aus.

image

Im Meeting

Während des Teams Meetings kann ein Moderator die Option zum Stummschalten der Teilnehmer direkt im Client bedienen. In der Teilnehmerliste kann man über das “…”-Menü die Benutzergruppe stumm schalten.

image

Für den Teilnehmer wird im Client kurz signalisiert, dass er aktuell stumm ist. Ein “Unmute”, zum Beispiel über einen zertifizierten Hardware-Button, wird sofort wieder zurückgesetzt und das Headset (Dongle) zeigt weiterhin Mute (Rot) an.

image

Für mich als Teilnehmer ist das Symbol zum “Unmuten” ausgegraut und nicht bedienbar. Der Rest der Teilnehmer wird ebenfalls als deaktiviert angezeigt.

image

In meinem Test kann man übrigens sehen, dass noch nicht alles rund läuft. Für einen kurzen Moment war mein Mikrofon noch offen (wie beim berühmten Double-Mute), aber Teams hatte mich im Client schon stumm geschaltet. In dem Moment hat mich der Client dann darauf hingewiesen, dass ich noch mute bin, obwohl ich es ja nicht ändern kann :) Wird schon noch werden.

Ein Moderator kann jederzeit die Option wieder für alle entfernen oder einzelne Teilnehmer zum Moderator befördern.

Abschluss

Ich kannte die Funktion bisher nur von WebEx und wurde schon das eine oder andere Mal darauf angesprochen. Bisher war meine Antwort eine Kombination aus Live-Event und Teams Meeting im Anschluss. Durch diese neue Funktion wird es für alle viel einfacher. Ich würde trotzdem sparsam mit dem Setting umgehen, da die Tücke im Detail liegt. Aktuell können zum Beispiel nur die Web- und Desktop-Versionen damit umgehen. Auf einem Microsoft Teams Room System oder den Mobile Clients (Android/iOS) gibt es die Funktion Stand Oktober 2020 noch nicht. Es kann also passieren, dass ein Meeting nicht stattfinden kann, weil die Teilnehmer nicht zu Wort kommen können. Wie so oft kann man durch gute User-Erziehung mehr erreichen als durch harte Regulation.

Microsoft hat wie so oft in letzter Zeit hervorragende Dokumentation zum Feature online. Schaut also selbst nochmal rein.

Golo Roden: Einführung in React, Folge 9: Hands-on

Nach fünf Monaten, acht Folgen und mehr als sieben Stunden Laufzeit ist es an der Zeit für eine Retrospektive. Die neunte Folge der Einführung in React ist ein mehr als zweistündiges Hands-On, in dem alle bisher vorgestellten Konzepte miteinander kombiniert werden.

Marco Scheel: My blog has moved - From Tumblr to Hugo on GitHub Pages

My blog now has a new home. It is no longer hosted on Tumblr.com and is now hosted on GitHub Pages. The main reason to get off Tumblr is the poor image handling. The overall experience was OK. I liked the editor and, best of all, it is all free, including running on your own domain! Having my own name was a key driver. I was running my blog on my own v-server back in the days. I tried a lot of platforms (blogger.com, wordpress.com, and prior to Tumblr I ran a “self”-hosted WordPress instance). The only constant was and will be my RSS hosting. Believe it or not, I’m still running FeedBurner from Google. One service that is still not (yet?) killed by the search giant (RIP Google Reader). With all the previous choices there was also one driving factor: I’m cheap, can I get it for free? Yes, and it will stay 100% free for you and me!

Today is the day I switched to a static website! It is 2020 and the hipster way to go. So, what does it take to run a blog on a static site generator?

Main benefits:

  • Still free
  • I own my content 100%
  • Better image handling (high-res with zoom)
  • Better inline code handling and highlighting
  • Learning new stuff

HUGO

Hugo is one of the most popular open-source static site generators. With its amazing speed and flexibility, Hugo makes building websites fun again.

image

Why Hugo and not Jekyll? Because there are blogs out there that I’m reading, and I liked the idea of being one of them :) Who?

There is even content on Microsoft Docs on hosting Hugo on Azure Static Websites: https://docs.microsoft.com/en-us/azure/static-web-apps/publish-hugo

It was easy to start. Just follow the steps in the Getting started guide, using the choco installation for Windows users.

I’ve chosen the Fuji theme as a great starting point and integrated it as a git submodule. As mentioned in the docs I copied the settings into my config.toml and I was ready to go.

hugo new post/hugo/2020/10/my-blog-has-moved.md
hugo server -D

Open localhost:1313 in your browser of choice and check the result.

image

My tweaks

To get the result in the picture above I needed some tweaks. Also, some other settings are notable if you are like me :)

The chosen theme is not very colorful and I really wanted a site image. I’m sure it is my missing knowledge about Hugo and theming, but I ended up messing with the CSS to get a header image. I have put a classic CSS file in my “static/css” folder.

header {
    background-image: url(/bg-skate-2020.jpg);
    background-size: cover;
}

body header a {
    color: #ffffff;
    white-space: normal;
}

body header .title-sub{
    color: #ffffff;
    white-space: normal;
}

body .markdown-body strong{
    color: #000000;
}

To integrate this into your theme we use partials. To not mess with my theme (it is a submodule and controlled by the original author) I had to copy the “head.html” from my theme into “layouts/_partials” and added the link to my CSS at the end of the file. While I was in there I also added the RSS tag for my FeedBurner account.

...
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/disqusjs@1.3/dist/disqusjs.css" />
{{ end }}
{{ partial "analytic-gtag.html" . }}
{{ partial "analytic-cfga.html" . }}

<link rel="stylesheet" href="/css/custom.css">
<link rel="alternate" type="application/rss+xml" href="http://feeds.marcoscheel.de/marcoscheel">

I also modified the Google Analytics integration in the same way. I copied the “analytic-gtag.html” file to my partials folder and added the attribute “anonymize_ip” to anonymize the IP address.

...
        dataLayer.push(arguments);
    }
    gtag('js', new Date());
    gtag('config', '{{ . }}', {'anonymize_ip': true});
</script>
<script async src="https://www.googletagmanager.com/gtag/js?id={{ . }}"></script>

To get a favicon I followed the instructions on my theme site doc.

By default, the generated RSS feed will include only a summary (I HATE THAT) and return all items. I’ve found this post about solving my RSS “problem”. This time we had to grab the content from the Hugo website and copy the file into “layouts/_default/rss.xml”. I switched from “.Summary” to “.Content” and changed the description of the RSS feed to my site description. Also, I configured the XML feed to only return 25 items.

...
<description>{{.Site.Params.subTitle}}</description>
...
<pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
{{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
<guid>{{ .Permalink }}</guid>
<description>{{ .Content | html }}</description>

config.toml

rssLimit = 25

Content migration

I also needed to take care of my old content living on Tumblr and, if possible, on WordPress. It was kind of easy. I checked the migration article on the Hugo docs site.

Tumblr: https://gohugo.io/tools/migrations/#tumblr
All of the solutions require a Tumblr app registration, so I created one. To not mess with my fresh Windows install I enabled WSL2 and used the Ubuntu distro. This way I was able to clone the tumblr-importr repo and build the application. The important part was to place the Go binary into the right location; otherwise the command was unknown. After that I was able to generate the needed files.

git clone https://github.com/carlmjohnson/tumblr-importr
cd tumblr-importr
go build
sudo cp tumblr-importr $GOPATH/bin
tumblr-importr -api-key 'MYAPIKEYHERE' -blog 'marcoscheel.de'

I copied the files into a subfolder named “tmblr” in my “content/post” folder. My main problem was that the content was not markdown; the files used HTML. I ended up opening all the blog posts on Tumblr in edit mode, switched to markdown mode and copied the source to the corresponding .md file. I only had 12 posts, so the work was doable and the result is clean. The main benefit of the conversion was that the front-matter attributes were pre-generated, so I did not have to recreate those (title, old URL as alias, tags, date, …).

date = 2019-08-02T19:41:30Z
title = "Manage Microsoft Teams membership with Azure AD Access Review"
slug = "manage-microsoft-teams-membership-with-azure-ad"
id = "186728523052"
aliases = [ "/post/186728523052/manage-microsoft-teams-membership-with-azure-ad" ]
tags = [ "Microsoft 365", "Azure AD", "Microsoft Teams"]

The Tumblr export generated an image mapping JSON. I used the JSON (converted to a CSV) to rewrite my images to the downloaded (still too small) version.

"OldURI":"NewURI"
"https://64.media.tumblr.com/023c5bd633c51521feede1808ee7fc20/eb22dd4fa3026290-d8/s540x810/36e4547d82122343bec6a09acf4075bb15eae1c1.png": "tmblr/6b/23/64d506172093d1d548651e196cf7.png"
$images = Import-Csv -Delimiter ":" -Path ".\image-rewrites.csv";

Get-ChildItem -Filter "*.md" -Recurse | ForEach-Object {
    $file = $_;
    $content = get-content -Path $file.FullName -Raw
    foreach ($image in $images) {
        $content = $content -replace $image.OldURI, $image.NewURI
    }
    Set-Content -Value $content -Path ($file.FullName)
}

WordPress: https://gohugo.io/tools/migrations/#wordpress
Once again, I used my handy WSL2 instance to not mess with a language I do not love. So a safe route was to use the WordPress export feature and the repo exitwp-for-hugo. I cloned the repo and a few “sudo apt-get” later I was ready to run the Python script. I placed my downloaded XML into the “wordpress-xml” folder. I ended up changing the exitwp.py file to ignore all tags and replace them with a single “xArchived” tag.

git clone https://github.com/wooni005/exitwp-for-hugo.git
cd exitwp-for-hugo
./exitwp.py

At the end, my “content/post” folder looks like the following.

image

GitHub

Now the content is available on my local drive and I’m able to generate the static files. It is already a git repo, so where should the primary authority be hosted? The Hugo site with all config and logic will go to GitHub. There are only two choices for me: GitHub or Azure DevOps. Microsoft owns both services and private repos are free in both. It looks like Azure DevOps will not get all the love in the future, and that is why my website source code is hosted on GitHub: https://github.com/marcoscheel/marcoscheel-de

image

GitHub Pages

Next up is to generate the final HTML and put it out there on the internet. Generating the content is as easy as running this command.

image

Now we need to decide how to host the content. My first try was to set up a new Azure Pay-As-You-Go subscription with a $200 starting budget for the first month and my personal credit card from here. Based on Andrew Connell’s blog I set up a storage account and enabled the static website. I could have set up a custom domain for the blob store directly, but I created an Azure CDN (MS Standard) to optimize traffic and reduce potential cost. I also checked out the Cloudflare CDN. All options allowed a custom domain and easy HTTPS with built-in certificates. In the end it was my credit card, and if something went really wrong (too much traffic due to non-paid internet fame?) I would be paying for a life lesson with real money. I took the easy route instead. GitHub Pages to the rescue.

Websites for you and your projects. Hosted directly from your GitHub repository. Just edit, push, and your changes are live.

GitHub offers one GitHub Pages repository per account. I created the repository at: https://github.com/marcoscheel/marcoscheel.github.io

Normally the content will be served on the github.io domain, but through the settings we can add a CNAME to the site. To achieve this we need to put a file called “CNAME” into the root directory. For my Hugo site and the publish process I placed the file into the “static” folder, so every time the site is generated the file will be copied to the root of the site. Once the CNAME is in place we configure the HTTPS redirect.
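To make that concrete: the CNAME file is just a one-line text file containing the custom domain. A sketch of creating it in the “static” folder (the domain is borrowed from the Tumblr import above; use your own):

# One line, no protocol, no trailing slash; Hugo copies everything in static/ to the site root
Set-Content -Path .\static\CNAME -Value "marcoscheel.de"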

image

Custom domain. HTTPS. No credit card. Everything is good.

Publishing

In the future I’m looking forward to enabling GitHub Actions to publish my site. For the moment I rely on my local environment pushing content from my Hugo site to the GitHub Pages repository. I integrated the GitHub Pages repo as a submodule and with the publish process I put the files into “public/docs”.

publishDir = "public/docs"
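A hedged sketch of that wiring (the submodule path “public” is an assumption derived from the publishDir above and the “cd public” in the publish steps below):

# Add the GitHub Pages repo as a submodule; Hugo then writes the site into public/docs
git submodule add https://github.com/marcoscheel/marcoscheel.github.io.git public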

A quick “hugo” on the Windows Terminal and a fresh version is ready to be pushed into the right repo.

hugo
cd public
git add -A && git commit -m "Ready for go live"
git push

Holger Schwichtenberg: Erstes Buch zu C# 9.0 erschienen

Das Buch behandelt die wesentlichen Neuerungen in der neunten Sprachversion.

Golo Roden: Einführung in Docker, Folge 4: Images bauen

Ein wesentlicher Bestandteil der Arbeit mit Docker ist das Bauen eigener Images. Die vierte Folge der Einführung in Docker zeigt, wie das funktioniert.

Code-Inside Blog: How to share an Azure subscription in a team

We at Sevitec are moving more and more workloads for us or our customers to Azure.

So the basic question needs an answer:

How can a team share an Azure subscription?

Be aware: This approach works for us. There might be better options. If we do something stupid, just tell me in the comments or via email - help is appreciated.

Step 1: Create a directory

We have a “company directory” with a fully configured Azure Active Directory (incl. user sync with our on-prem system, Office 365 licenses, etc.).

Our rule of thumb is: We create an individual directory for each product team and all team members are invited into the new directory.

Keep in mind: A directory itself costs you nothing but might help you to keep things manageable.

Create a new tenant directory

Step 2: Create a group

This step might be optional, but all team members - except the “Administrator” - have the same rights and permissions in our company. To keep things simple, we created a group with all team members.

Put all invited users in a group

Step 3: Create a subscription

Now create a subscription. The typical “Pay-as-you-go” offer will work. Be aware that the user who creates the subscription is initially set up as the Administrator.

Create a subscription

Step 4: “Share” the subscription

This is the most important step:

You need to grant the individual users or the group (from step 2) the “Contributor” role for this subscription via “Access control (IAM)”. The hard part is to understand how those “Role assignments” affect the subscription. I’m not even sure if “Contributor” is the best fit, but it works for us.
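If you prefer scripting over the portal, the same role assignment can be sketched with the Az PowerShell module (the IDs below are placeholders, not real values):

# Assign the "Contributor" role to the team group on the whole subscription
# (group object ID and subscription ID are placeholders)
New-AzRoleAssignment `
    -ObjectId "00000000-0000-0000-0000-000000000000" `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/11111111-1111-1111-1111-111111111111"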

Pick the correct role assignment

Summary

I’m not really sure why such a basic concept is labeled so poorly, but you really need to pick the correct role assignment, and then the other person should be able to use the subscription.

Hope this helps!

Holger Schwichtenberg: Rückschau auf die BASTA! hybrid 2020

Die Herbst-BASTA! fand als Hybrid-Konferenz, das heißt vor Ort in Mainz und online im Internet, mit in Summe rund 400 Teilnehmern, davon 200 vor Ort, statt.

Golo Roden: Einführung in Docker, Folge 3: Container verwenden

Das Verwenden von Containern gehört zu den tagtäglichen Aufgaben im Umgang mit Docker. Dazu zählen unter anderem das Starten, Beenden und Aufräumen von Containern. Stefan Scherer zeigt in der dritten Folge der Einführung in Docker, wie das alles funktioniert.

Stefan Henneken: IEC 61131-3: abstrakter FB vs. Schnittstelle

Seit TwinCAT V3.1 Build 4024 können Funktionsblöcke, Methoden und Eigenschaften als abstract gekennzeichnet werden. Abstrakte FBs können nur als Basis-FB für die Vererbung genutzt werden. Ein direktes Instanziieren von abstrakten FBs ist nicht möglich. Somit haben abstrakte FBs eine gewisse Ähnlichkeit zu Schnittstellen. Es stellt sich nun die Frage, wann eine Schnittstelle und wann ein abstrakter FB zum Einsatz kommen sollte.

Eine sehr gute Beschreibung zu abstract liefert der Post The ABSTRACT keyword aus dem Blog PLCCoder.com oder das Beckhoff Information System. Deshalb soll das Wichtigste nur kurz wiederholt werden.

abstrakte Methoden

METHOD PUBLIC ABSTRACT DoSomething : LREAL
  • bestehen ausschließlich aus der Deklaration und enthalten keine Implementierung. Der Methodenrumpf ist leer.
  • können public, protected oder internal sein. Der Zugriffsmodifizierer private ist nicht erlaubt.
  • können nicht zusätzlich als final deklariert werden.

abstrakte Eigenschaften

PROPERTY PUBLIC ABSTRACT nAnyValue : UINT
  • können Getter oder Setter oder beides enthalten.
  • Getter und Setter bestehen ausschließlich aus der Deklaration und enthalten keine Implementierung.
  • können public, protected oder internal sein. Der Zugriffsmodifizierer private ist nicht erlaubt.
  • können nicht zusätzlich als final deklariert werden.

abstrakte Funktionsblöcke

FUNCTION_BLOCK PUBLIC ABSTRACT FB_Foo
  • Sobald eine Methode oder eine Eigenschaft mit abstract deklariert wurde, muss auch der Funktionsblock mit abstract deklariert werden.
  • Von abstrakten FBs können keine Instanzen angelegt werden. Abstrakte FBs können nur bei der Vererbung als Basis-FB verwendet werden.
  • Alle abstrakten Methoden und alle abstrakten Eigenschaften müssen überschrieben werden, damit ein konkreter FB entsteht. Aus einer abstrakten Methode oder einer abstrakten Eigenschaft wird durch das Überschreiben eine konkrete Methode oder eine konkrete Eigenschaft.
  • Abstrakte Funktionsblöcke können zusätzlich konkrete Methoden und/oder konkrete Eigenschaften enthalten.
  • Werden bei der Vererbung nicht alle abstrakten Methoden oder nicht alle abstrakten Eigenschaften überschrieben, so kann der erbende FB auch wieder nur ein abstrakter FB sein (schrittweise Konkretisierung).
  • Zeiger oder Referenzen vom Typ eines abstrakten FBs sind erlaubt. Diese können aber auf konkrete FBs referenzieren und somit deren Methoden oder Eigenschaften aufrufen (Polymorphismus).

Unterschiede abstrakter FB und Schnittstelle

Besteht ein Funktionsblock ausschließlich aus abstrakten Methoden und abstrakten Eigenschaften, so enthält dieser Funktionsblock keinerlei Implementierungen und hat dadurch eine gewisse Ähnlichkeit mit Schnittstellen. Im Detail gibt es allerdings einige Besonderheiten zu beachten.

                                                            Schnittstelle     abstrakter FB
unterstützt Mehrfachvererbung                               +                 –
kann lokale Variablen enthalten                             –                 +
kann konkrete Methoden enthalten                            –                 +
kann konkrete Eigenschaften enthalten                       –                 +
unterstützt neben public noch weitere Zugriffsmodifizierer  –                 +
Verwendung bei Arrays                                       +                 nur über POINTER

Durch die Tabelle kann der Eindruck entstehen, dass Schnittstellen nahezu komplett durch abstrakte FBs austauschbar sind. Allerdings bieten Schnittstellen eine größere Flexibilität durch die Möglichkeit, in unterschiedlichen Vererbungshierarchien verwendet zu werden. In dem Post IEC 61131-3: Objektkomposition mit Hilfe von Interfaces wird hierzu ein Beispiel gezeigt.

Als Entwickler stellt sich somit die Frage, wann eine Schnittstelle und wann ein abstrakter FB genutzt werden sollte. Die einfache Antwort lautet: am besten beides gleichzeitig. Hierdurch steht eine Standardimplementierung im abstrakten Basis-FB zur Verfügung, wodurch das Ableiten erleichtert wird. Jedem Entwickler bleibt aber die Freiheit erhalten, die Schnittstelle direkt zu implementieren.

Beispiel

Für die Datenverwaltung von Angestellten sind Funktionsblöcke zu entwerfen. Hierbei wird unterschieden zwischen Festangestellten (FB_FullTimeEmployee) und Vertragsmitarbeiter (FB_ContractEmployee). Jeder Mitarbeiter wird durch seinen Vornamen (sFirstName), Nachnamen (sLastName) und der Personalnummer (nPersonnelNumber) identifiziert. Hierzu werden entsprechende Eigenschaften bereitgestellt. Außerdem wird eine Methode benötigt, die den vollständigen Namen inklusive Personalnummer als formatierten String ausgibt (GetFullName()). Die Berechnung des Monatseinkommens erfolgt durch die Methode GetMonthlySalary().

Die Unterschiede beider Funktionsblöcke bestehen in der Berechnung des Monatseinkommens. Während der Festangestellte ein Jahreseinkommen (nAnnualSalary) bezieht, ergibt sich das Monatseinkommen des Vertragsmitarbeiters aus dem Stundenlohn (nHourlyPay) und der Monatsarbeitszeit (nMonthlyHours). Somit besitzen die beiden Funktionsblöcke für die Berechnung des Monatseinkommens unterschiedliche Eigenschaften. Die Methode GetMonthlySalary() ist in beiden Funktionsblöcken enthalten, unterscheidet sich aber in der Implementierung.

1. Lösungsansatz: abstrakter FB

Da beide FBs etliche Gemeinsamkeiten haben, liegt es nahe, einen Basis-FB (FB_Employee) zu erstellen. Dieser Basis-FB enthält alle Methoden und Eigenschaften, die in beiden FBs enthalten sind. Da sich aber die Methoden GetMonthlySalary() in der Implementierung unterscheiden, wird diese in FB_Employee als abstract gekennzeichnet. Dadurch müssen alle FBs, die von diesem Basis-FB erben, GetMonthlySalary() überschreiben.

(abstrakte Elemente werden in kursiver Schriftart dargestellt)

Beispiel 1 (TwinCAT 3.1.4024) auf GitHub

Nachteile

Der Lösungsansatz sieht auf den ersten Blick sehr solide aus. Wie aber weiter oben schon erwähnt, kann der Einsatz von Vererbung auch Nachteile mit sich bringen. Besonders dann, wenn FB_Employee Teil einer Vererbungskette ist. Alles, was FB_Employee über diese Kette erbt, wird auch an FB_FullTimeEmployee und FB_ContractEmployee vererbt. Kommt FB_Employee in einem anderen Zusammenhang zum Einsatz, so kann eine umfangreiche Vererbungs-Hierarchie zu weiteren Problemen führen.

Auch gibt es Einschränkungen bei dem Versuch, alle Instanzen in einem Array als Referenzen abzulegen. Folgende Deklaration wird vom Compiler nicht zugelassen:

aEmployees : ARRAY [1..2] OF REFERENCE TO FB_Employee; // error

Statt Referenzen müssen Zeiger verwendet werden:

aEmployees : ARRAY [1..2] OF POINTER TO FB_Employee;

Allerdings ist bei der Verwendung von Zeigern einiges zu beachten (z.B. beim Online-Change). Aus diesem Grund versuche ich Zeiger so weit wie möglich zu vermeiden.

Vorteile

Es ist zwar nicht möglich, direkt eine Instanz eines abstrakten FB anzulegen, allerdings kann per Referenz auf die Methoden und Eigenschaften eines abstrakten FB zugegriffen werden.

VAR
  fbFullTimeEmployee :  FB_FullTimeEmployee;
  refEmployee        :  REFERENCE TO FB_Employee;
  sFullName          :  STRING;
END_VAR
refEmployee REF= fbFullTimeEmployee;
sFullName := refEmployee.GetFullName();

Auch kann es ein Vorteil sein, dass die Methode GetFullName() und die Eigenschaften sFirstName, sLastName und nPersonnelNumber im abstrakten Basis-FB schon vollständig implementiert und dort nicht als abstract deklariert wurden. Ein Überschreiben dieser Elemente in den abgeleiteten FBs ist nicht mehr notwendig. Soll z.B. die Formatierung für den Namen angepasst werden, so ist dieses nur an einer Stelle durchzuführen.

2. Lösungsansatz: Schnittstelle

Ein Ansatz mit Schnittstellen ähnelt sehr stark der vorherigen Variante. Die Schnittstelle enthält alle Methoden und Eigenschaften, die bei beiden FBs (FB_FullTimeEmployee und FB_ContractEmployee) gleich sind.

Beispiel 2 (TwinCAT 3.1.4024) auf GitHub

Nachteile

Dadurch, dass FB_FullTimeEmployee und FB_ContractEmployee die Schnittstelle I_Employee implementieren, muss jeder FB alle Methoden und alle Eigenschaften aus der Schnittstelle auch enthalten. Das betrifft auch die Methode GetFullName(), die in beiden Fällen die gleiche Berechnung durchführt.

Wurde eine Schnittstelle veröffentlicht (z. B. durch eine Bibliothek) und in verschiedenen Projekten eingesetzt, so sind Änderungen an dieser Schnittstelle nicht mehr möglich. Wird eine Methode oder eine Eigenschaft hinzugefügt, so müssen auch alle Funktionsblöcke angepasst werden, die diese Schnittstelle implementieren. Bei der Vererbung von FBs ist das nicht notwendig. Wird ein Basis-FB erweitert, so müssen die FBs, die davon erben, nicht verändert werden, es sei denn, die neuen Methoden oder Eigenschaften sind abstrakt.

Tipp: Kommt es doch vor, dass man eine Schnittstelle später anpassen muss, so kann man eine neue Schnittstelle anlegen. Diese erbt von der ursprünglichen Schnittstelle und wird um die notwendigen Elemente erweitert.

Vorteile

Funktionsblöcke können mehrere Schnittstellen implementieren. Schnittstellen sind dadurch in vielen Fällen flexibler einsetzbar.

Bei einem Funktionsblock kann zur Laufzeit per __QUERYINTERFACE() eine bestimmte Schnittstelle abgefragt werden. Wurde diese implementiert, so ist über diese Schnittstelle ein Zugriff auf den FB möglich. Das macht den Einsatz von Schnittstellen sehr flexibel.

Ist die Implementierung einer bestimmten Schnittstelle bekannt, so kann der Zugriff über die Schnittstelle auch direkt erfolgen.

VAR
  fbFullTimeEmployee :  FB_FullTimeEmployee;
  ipEmployee         :  I_Employee;
  sFullName          :  STRING;
END_VAR
ipEmployee := fbFullTimeEmployee;
sFullName := ipEmployee.GetFullName();

Auch können Schnittstellen als Datentyp für ein Array verwendet werden. Alle FBs, welche die Schnittstelle I_Employee implementieren, können zu dem folgenden Array hinzugefügt werden.

aEmployees : ARRAY [1..2] OF I_Employee;

3. Lösungsansatz: Kombination abstrakter FB und Schnittstelle

Warum nicht beide Ansätze miteinander kombinieren und somit von den Vorteilen beider Varianten profitieren?

(abstrakte Elemente werden in kursiver Schriftart dargestellt)

Beispiel 3 (TwinCAT 3.1.4024) auf GitHub

Bei der Kombination der beiden Ansätze wird zunächst die Schnittstelle zur Verfügung gestellt. Anschließend wird die Verwendung der Schnittstelle durch den abstrakten Funktionsblock FB_Employee vereinfacht. Gleiche Implementierungen von gemeinsamen Methoden können in dem abstrakten FB bereitgestellt werden. Eine mehrfache Implementierung ist nicht notwendig. Kommen neue FBs hinzu, können diese auch direkt die Schnittstelle I_Employee nutzen.

Der Aufwand für die Umsetzung ist erstmal etwas höher als bei den beiden vorherigen Varianten. Aber gerade bei Bibliotheken, die von mehreren Programmierern eingesetzt und über Jahre weiterentwickelt werden, kann sich dieser Mehraufwand lohnen.

  • Wenn der Anwender keine eigene Instanz des FBs anlegen soll (weil dieses nicht sinnvoll erscheint), dann sind abstrakte FBs oder Schnittstellen hilfreich.
  • Wenn man die Möglichkeit haben will, in mehr als einen Basistyp zu verallgemeinern, dann sollte eine Schnittstelle zum Einsatz kommen.
  • Wenn ein FB ohne die Implementierung der Methoden oder Eigenschaften vereinbart werden kann, dann sollte man eine Schnittstelle dem abstrakten FB vorziehen.

Golo Roden: Einführung in React, Folge 8: Fortgeschrittenes JSX

Eines der Kernkonzepte von React ist die JavaScript-Spracherweiterung JSX, weshalb es wichtig ist, sich auch mit deren fortgeschrittenen Konzepten zu beschäftigen.

Golo Roden: Einführung in Docker, Folge 2: Docker installieren

Bevor man Docker verwenden kann, muss man es zunächst installieren und konfigurieren. Die zweite Folge der Einführung in Docker zeigt, wie das funktioniert.

Golo Roden: Einführung in Docker, Folge 1: Grundkonzepte

Docker ist als Werkzeug aus der modernen Web- und Cloud-Entwicklung nicht mehr wegzudenken. Daher veröffentlichen Docker und the native web in enger Zusammenarbeit einen kostenlosen Videokurs, mit dem man den Einsatz von Docker auf einfachem Weg lernen kann.

Golo Roden: Götz & Golo: Ein Jahr später

Am 19. August 2019 hatten Götz und ich unsere Blogserie "Götz & Golo" angekündigt. Inzwischen sind zwölf Folgen erschienen. Daher ist es an der Zeit für einen kritischen Rück- und einen konstruktiven Ausblick.

Code-Inside Blog: How to run a legacy WCF .svc Service on Azure AppService

Last month we wanted to run a good old WCF-powered service on Azure’s “App Service”.

WCF… what’s that?

If you are not familiar with WCF: Good! For the interested ones: WCF is, or was, a framework to build mostly SOAP-based services in the .NET Framework 3.0 timeframe. Some parts were “good”, but most developers would call it a complex monster.

Even in the glory days of WCF I tried to avoid it at all costs, but unfortunately I need to maintain a WCF-based service.

For the curious: The project template and the tech is still there. Search for “WCF”.

VS WCF Template

The template will produce something like that:

The actual “service endpoint” is the Service1.svc file.

WCF structure

Running on Azure: The problem

Let’s assume we have an application with a .svc endpoint. In theory we can deploy this application to a standard Windows Server/IIS without major problems.

Now we try to deploy this very same application to Azure AppService and this is the result after we invoke the service from the browser:

"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable." (HTTP Response was 404)

Strange… very strange. In theory a blank HTTP 400 should appear, but not an HTTP 404. The service itself was not “triggered”: we had some logging in place, but the request didn’t get to the actual service.
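As a side note, the same check can be scripted instead of clicking through the browser; a sketch with a placeholder URL (-SkipHttpErrorCheck needs PowerShell 7+, otherwise the 404 would throw):

# Placeholder app name; before the web.config fix below this returns 404, afterwards 200
$response = Invoke-WebRequest -Uri "https://yourapp.azurewebsites.net/Service1.svc" -SkipHttpErrorCheck
$response.StatusCode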

After hours of debugging, testing and googling around I created a new “blank” WCF service from the Visual Studio template and got the same error.

The good news: It was not just my code; something was blocking the request.

After some hours I found a helpful switch in the Azure Portal and activated the “Failed Request tracing” feature (yeah… I could have found it sooner) and I discovered this:

Failed Request tracing image

Running on Azure: The solution

My initial thoughts were correct: The request was blocked. It was treated as “static content” and the actual WCF module was not mapped to the .svc extension.

To “re-map” the .svc extension to the correct handler I needed to add this to the web.config:

...
<system.webServer>
    ...
	<handlers>
		<remove name="svc-integrated" />
		<add name="svc-integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler" resourceType="File" preCondition="integratedMode" />
	</handlers>
</system.webServer>
...

With this configuration everything worked as expected on Azure AppService.

Be aware:

I’m really not 100% sure why this is needed in the first place. I’m also not 100% sure if the name svc-integrated is correct or important.

This blogpost is a result of these tweets.

That was a tough ride… Hope this helps!

Golo Roden: Einführung in React, Folge 7: React-Forms

Die vergangenen beiden Folgen haben gezeigt, wie das Verarbeiten von Eingaben und das Verwalten von Zustand in React funktionieren. Wie lassen sich mit diesem Wissen Formulare erstellen?

Holger Schwichtenberg: GPX-Dateien verbinden mit der PowerShell

Dieses PowerShell-Skript verbindet eine beliebige Anzahl von GPX-Dateien zu einer Datei anhand von Datum und Uhrzeit.
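Eine stark vereinfachte Skizze der Idee (Annahmen: alle GPX-Dateien liegen im aktuellen Ordner und enthalten Trackpunkte mit Zeitstempel; das Original-Skript kann anders aufgebaut sein):

# Alle Trackpunkte aus allen GPX-Dateien einsammeln ...
$punkte = foreach ($datei in Get-ChildItem -Filter *.gpx) {
    ([xml](Get-Content -Path $datei.FullName -Raw)).gpx.trk.trkseg.trkpt
}
# ... und anhand von Datum und Uhrzeit sortieren
$punkte = $punkte | Sort-Object { [datetime]$_.time }

# Sortierte Punkte als eine zusammenhängende GPX-Datei schreiben
$kopf = '<?xml version="1.0" encoding="UTF-8"?><gpx version="1.1" creator="merge" xmlns="http://www.topografix.com/GPX/1/1"><trk><trkseg>'
$fuss = '</trkseg></trk></gpx>'
Set-Content -Path .\merged.gpx -Value ($kopf + (($punkte | ForEach-Object { $_.OuterXml }) -join '') + $fuss)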

Jürgen Gutsch: ASP.NET Core Health Checks

For a while I have planned to write about the ASP.NET Health Checks, which are actually pretty cool. The development of the ASP.NET Core Health Checks started in fall 2016. At that time it was an architectural draft. In November 2016, during the Global MVP Summit in Redmond, we got asked to hack some health checks based on the architectural draft. It was Damien Bowden and me who met Glen Condron and Andrew Nurse during the hackathon on the last summit day to get into the ASP.NET Health Checks, write the very first checks and try the framework.

Actually, I prepared a talk about the ASP.NET Health Checks. And I would be happy to do the presentation at your user group or your conference.

What are the health checks for?

Imagine that you are creating an ASP.NET Core application that is pretty much dependent on some subsystems, like a database, a file system, an API, or something like that. This is a pretty common scenario. Almost every application is dependent on a database. If the connection to the database is lost for whatever reason, the application will definitely break. This is how applications have been developed for years. The database is the simplest scenario to imagine what the ASP.NET Core Health Checks are good for, even if it is not the real reason why they were developed. So let's continue with the database scenario.

  • What if you were able to check whether the database is available before you actually connect to it?
  • What if you were able to tell your application to show a user-friendly message when the database is not available?
  • What if you could simply switch to a fallback database in case the actual one is not available?
  • What if you could tell a load balancer to switch to a different fallback environment, in case your application is unhealthy because of the missing database?

You can do exactly this with the ASP.NET Core Health Checks:

Check the health and availability of your sub-systems, provide an endpoint that tells other systems about the health of the current application, and consume health check endpoints of other systems.

Health checks are mainly made for microservice environments, where loosely coupled applications need to know the health state of the systems they depend on. But they are also useful in more monolithic applications that depend on some kind of subsystem or infrastructure.

How to enable health checks?

I'd like to show the health check configuration in a new, plain and simple ASP.NET Core MVC project that I will create using the .NET CLI in my favorite console:

dotnet new mvc -n HealthCheck.MainApp -o HealthCheck.MainApp

The health checks are already in the framework, so you don't need to add a separate NuGet package to use them. They live in the Microsoft.Extensions.Diagnostics.HealthChecks package, which should already be available after installing the latest version of .NET Core.

To enable the health checks you need to add the related services to the DI container:

public void ConfigureServices(IServiceCollection services)
{
    services.AddHealthChecks();
    services.AddControllersWithViews();
}

This is also the place where we add the checks later on. But this should be good for now.

To also provide an endpoint to tell other applications about the state of the current system you need to map a route to the health checks inside the Configure method of the Startup class:

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health");
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");
});

This will give you a URL where you can check the health state of your application. Let's quickly run the application and call this endpoint with a browser:

Calling the endpoint:

Our application is absolutely healthy. Of course it is, because there is no health check yet that actually checks anything.

Writing health checks

As in many other APIs (e.g. the middlewares), there are many ways to add health checks. The simplest way, and the best way to understand how it works, is to use lambda methods:

services.AddHealthChecks()
    .AddCheck("Foo", () =>
        HealthCheckResult.Healthy("Foo is OK!"), tags: new[] { "foo_tag" })
    .AddCheck("Bar", () =>
        HealthCheckResult.Degraded("Bar is somewhat OK!"), tags: new[] { "bar_tag" })
    .AddCheck("FooBar", () =>
        HealthCheckResult.Unhealthy("FooBar is not OK!"), tags: new[] { "foobar_tag" });

Those lines add three different health checks. Each one is named, and the actual check is a lambda expression that returns a specific HealthCheckResult. The result can be Healthy, Degraded, or Unhealthy.

  • Healthy: All is fine obviously.
  • Degraded: The system is not really healthy, but it's not critical. Maybe a performance problem or something like that.
  • Unhealthy: Something critical isn't working.

Usually a health check has at least one tag to group checks by topic. The message should be meaningful enough to easily identify the actual problem.

Those lines are not really useful, but they show how the health checks are working. If we run the app again and call the endpoint, we will see an Unhealthy state, because the endpoint always reports the lowest state, which here is Unhealthy. Feel free to play around with the different HealthCheckResult values.

Now let's demonstrate a more useful health check. This one pings a needed resource on the internet and checks its availability:

services.AddHealthChecks()
    .AddCheck("ping", () =>
    {
        try
        {
            using (var ping = new Ping())
            {
                var reply = ping.Send("asp.net-hacker.rocks");
                if (reply.Status != IPStatus.Success)
                {
                    return HealthCheckResult.Unhealthy("Ping is unhealthy");
                }

                if (reply.RoundtripTime > 100)
                {
                    return HealthCheckResult.Degraded("Ping is degraded");
                }

                return HealthCheckResult.Healthy("Ping is healthy");
            }
        }
        catch
        {
            return HealthCheckResult.Unhealthy("Ping is unhealthy");
        }
    });

This particular check actually won't work, because my blog runs on Azure and Microsoft doesn't allow pinging the App Services. Anyway, this demo shows you how to handle the specific results and how to return the right HealthCheckResult depending on the state of the actual check.

But it doesn't really make sense to write those checks as lambda expressions and to mess with the Startup class. Luckily, there is also a way to add class-based health checks.

The next one is also simple and useless, but it demonstrates the basic concept:

public class ExampleHealthCheck : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default(CancellationToken))
    {
        var healthCheckResultHealthy = true;

        if (healthCheckResultHealthy)
        {
            return Task.FromResult(
                HealthCheckResult.Healthy("A healthy result."));
        }

        return Task.FromResult(
            HealthCheckResult.Unhealthy("An unhealthy result."));
    }
}

This class implements the CheckHealthAsync method from the IHealthCheck interface. The HealthCheckContext exposes the registration of the current check (its name, tags, and configured failure status) in the Registration property, which can be useful inside the check itself.
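
For example, the registration can be used to reuse the configured name or failure status inside the check. A small sketch (not part of the original sample) of how the CheckHealthAsync body could use it:

public Task<HealthCheckResult> CheckHealthAsync(
    HealthCheckContext context,
    CancellationToken cancellationToken = default(CancellationToken))
{
    // Registration carries the name, tags and configured failure status of this check
    var name = context.Registration.Name;
    var failureStatus = context.Registration.FailureStatus;

    var healthy = true; // placeholder for a real check
    return Task.FromResult(healthy
        ? HealthCheckResult.Healthy($"{name} is healthy.")
        : new HealthCheckResult(failureStatus, $"{name} failed."));
}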

To add this class as a health check in the application we need to use the generic AddCheck method:

services.AddHealthChecks()
    .AddCheck<ExampleHealthCheck>("class based", null, new[] { "class" });

We also need to specify a name and, optionally, some tags. With the second argument I'm able to set a default failure status that is used if the check throws an unhandled exception. But null is fine in case I handle all exceptions inside the health check.
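
If you prefer to let the framework handle exceptions, the second argument can map any unhandled exception to a fixed status instead of passing null. A sketch using the same check (HealthStatus comes from Microsoft.Extensions.Diagnostics.HealthChecks):

services.AddHealthChecks()
    // An unhandled exception inside the check is then reported as Degraded
    .AddCheck<ExampleHealthCheck>(
        "class based",
        failureStatus: HealthStatus.Degraded,
        tags: new[] { "class" });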

Expose the health state

As mentioned, I'm able to provide an endpoint to expose the health state of my application to systems that depend on the current app. But by default it responds with just a simple string that shows the overall state. It would be nice to see some more details to tell the consumer what is actually happening.

Fortunately this is also possible by passing HealthCheckOptions into the MapHealthChecks method:

app.UseEndpoints(endpoints =>
{
    endpoints.MapHealthChecks("/health", new HealthCheckOptions()
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");
});

With the Predicate you are able to filter specific health checks to execute and to get the state of those. In this case I want to execute them all. The ResponseWriter is needed to write the health information of the specific checks to the response. In that case I used a ResponseWriter from a community project that provides some cool UI features and a ton of ready-to-use health checks.

dotnet add package AspNetCore.HealthChecks.UI

The UIResponseWriter of that project writes a JSON output to the HTTP response that includes many details about the used health checks:

{
  "status": "Unhealthy",
  "totalDuration": "00:00:00.7348450",
  "entries": {
    "Foo": {
      "data": {},
      "description": "Foo is OK!",
      "duration": "00:00:00.0010118",
      "status": "Healthy"
    },
    "Bar": {
      "data": {},
      "description": "Bar is somewhat OK!",
      "duration": "00:00:00.0009935",
      "status": "Degraded"
    },
    "FooBar": {
      "data": {},
      "description": "FooBar is not OK!",
      "duration": "00:00:00.0010034",
      "status": "Unhealthy"
    },
    "ping": {
      "data": {},
      "description": "Ping is degraded",
      "duration": "00:00:00.7165044",
      "status": "Degraded"
    },
    "class based": {
      "data": {},
      "description": "A healthy result.",
      "duration": "00:00:00.0008822",
      "status": "Healthy"
    }
  }
}

In case the overall state is Unhealthy the endpoint sends the result with a 503 HTTP response status, otherwise it is a 200. This is really useful if you just want to handle the HTTP response status.
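
If those defaults don't fit your consumers, the mapping from health status to HTTP status code can be adjusted via the ResultStatusCodes dictionary of the HealthCheckOptions. A sketch that, for example, also reports Degraded as an error:

endpoints.MapHealthChecks("/health", new HealthCheckOptions()
{
    ResultStatusCodes =
    {
        [HealthStatus.Healthy] = StatusCodes.Status200OK,
        [HealthStatus.Degraded] = StatusCodes.Status500InternalServerError,
        [HealthStatus.Unhealthy] = StatusCodes.Status503ServiceUnavailable
    }
});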

The community project provides a lot more features. Also a nice UI to visualize the health state to humans. I'm going to show you this in a later section.

Handle the states inside the application

In most cases you don't just want to expose the state to consumers that depend on your app. It might also be the case that you need to handle the different states inside your application: by showing a message in case the application is not working properly, disabling parts of the application that are not working, switching to a fallback source, or whatever is needed to run the application in a degraded state.

To do things like this, you can use the HealthCheckService that is already registered in the IoC container by the AddHealthChecks() method. You can inject the abstract HealthCheckService class wherever you need it.

Let's see how this is working!

In the HomeController I created a constructor that injects the HealthCheckService the same way as any other service. I also created a new action called Health that uses the HealthCheckService and calls CheckHealthAsync() to execute the checks and retrieve a HealthReport. The HealthReport is then passed to the view:

public class HomeController : Controller
{
    private readonly HealthCheckService _healthCheckService;

    public HomeController(
        HealthCheckService healthCheckService)
    {
        _healthCheckService = healthCheckService;
    }

    public async Task<IActionResult> Health()
    {
        var healthReport = await _healthCheckService.CheckHealthAsync();

        return View(healthReport);
    }
}

Optionally, you are able to pass a predicate to the CheckHealthAsync() method. With the predicate you can filter which health checks to execute and get the state of just those. In this case I want to execute them all.
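
For example, to run only the checks tagged with "foo_tag" from the registrations above, a call could look like this (a small sketch):

// Execute only the health checks that carry the "foo_tag" tag
var fooReport = await _healthCheckService.CheckHealthAsync(
    registration => registration.Tags.Contains("foo_tag"));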

I also created a view called Health.cshtml. This view retrieves the HealthReport and displays the results:

@using Microsoft.Extensions.Diagnostics.HealthChecks;
@model HealthReport

@{
    ViewData["Title"] = "Health";
}
<h1>@ViewData["Title"]</h1>

<p>Use this page to detail your site's health.</p>

<p>
    <span>@Model.Status</span> - <span>Duration: @Model.TotalDuration.TotalMilliseconds</span>
</p>
<ul>
    @foreach (var entry in Model.Entries)
    {
    <li>
        @entry.Value.Status - @entry.Value.Description<br>
        Tags: @String.Join(", ", entry.Value.Tags)<br>
        Duration: @entry.Value.Duration.TotalMilliseconds
    </li>
    }
</ul>

To try it out, I just need to run the application using dotnet run in the console and call https://localhost:5001/home/health in the browser:

You could also analyze the HealthReport in the controller or in your services to do something specific in case the application isn't healthy anymore.

A pretty health state UI

The already mentioned GitHub project AspNetCore.Diagnostics.HealthChecks also provides a pretty UI to display the results in a nice and human readable way.

This just needs a little more configuration in the Startup.cs

Inside the method ConfigureServices() I needed to add the health checks UI services

services.AddHealthChecksUI();

And inside the method Configure() I need to map the health checks UI Middleware right after the call of MapHealthChecks:

endpoints.MapHealthChecksUI();

This adds a new route to our application to call the UI: /healthchecks-ui

We also need to register our health API with the UI. This is done with a small setting in the appsettings.json:

{
  ... ,
  "HealthChecksUI": {
    "HealthChecks": [
      {
        "Name": "HTTP-Api",
        "Uri": "https://localhost:5001/health"
      }
    ],
    "EvaluationTimeOnSeconds": 10,
    "MinimumSecondsBetweenFailureNotifications": 60
  }
}

This way you are able to register as many health endpoints to the UI as you like. Think about a separate application that only shows the health states of all your microservices. This would be the way to go.

Let's call the UI using this route /healthchecks-ui

(Wow... actually, the ping seemed to work when I took this screenshot.)

This is awesome. This is a really great user interface to display the health of all your services.

About the Webhooks and customization of the UI, you should read the great docs in the repository.

Conclusion

The health checks are definitely a thing you should look into. No matter what kind of web application you are writing, they can help you create more stable and more responsive applications. Applications that know about their health can handle degraded or unhealthy states in a way that won't break the whole application. This is very useful, at least from my perspective ;-)

To play around with the demo application used for this post visit the repository on GitHub: https://github.com/JuergenGutsch/healthchecks-demo

Marco Scheel: Enable Unified Labeling for Microsoft 365 Groups (and Teams) in your tenant via PowerShell script

Microsoft announced the “General Availability” of the Microsoft Information Protection integration for group labeling at the end of June 2020. Unified labeling is now available for all Microsoft 365 Groups (Teams, SharePoint, …).

Microsoft Information Protection is a built-in, intelligent, unified, and extensible solution to protect sensitive data across your enterprise – in Microsoft 365 cloud services, on-premises, third-party SaaS applications, and more. Microsoft Information Protection provides a unified set of capabilities to know your data, protect your data, and prevent data loss across Microsoft 365 apps (e.g. Word, PowerPoint, Excel, Outlook) and services (e.g. Teams, SharePoint, and Exchange).

Source: https://techcommunity.microsoft.com/t5/microsoft-security-and/general-availability-microsoft-information-protection/ba-p/1497769

The feature is currently an opt-in solution. The previous Azure AD based group classification is still available and supported. If you want to switch to the new solution to apply sensitivity labels to your groups you need to run some lines of PowerShell. This is the Microsoft documentation:

https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/groups-assign-sensitivity-labels#enable-sensitivity-label-support-in-powershell

The feature is configured with the same commands as the AAD-based classification. You have to set the value for “EnableMIPLabels“ to true. The documentation expects that you already have Azure AD directory settings for the template “Group.Unified“. If this is not the case, you can also follow the instructions on the Azure AD directory settings for Groups:

https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/groups-settings-cmdlets#create-settings-at-the-directory-level

To make it easier for my customers and for you, I’ve created a PowerShell script that helps and works in any configuration. Check out the latest version of my script in this GitHub repository:

https://github.com/marcoscheel/snippets/blob/master/m365-enable-ul-groups/enable-ulclassification.ps1

$tenantdetail = $null;
$tenantdetail = Get-AzureADTenantDetail -ErrorAction SilentlyContinue; 
if ($null -eq $tenantdetail)
{
    #connect as global admin
    Connect-AzureAD
    $tenantdetail = Get-AzureADTenantDetail -ErrorAction SilentlyContinue; 
}
if ($null -eq $tenantdetail)
{
    Write-Host "Error connecting to tenant" -ForegroundColor Red;
    Exit
}

$settingIsNew = $false;
$setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified"};
if ($null -eq $setting){
    Write-Host "No directory settings for Group.Unified found. Creating new!" -ForegroundColor Green;
    $settingIsNew = $true;
    $aaddirtempid = (Get-AzureADDirectorySettingTemplate | Where-Object { $_.DisplayName -eq "Group.Unified" }).Id;
    $template = Get-AzureADDirectorySettingTemplate -Id $aaddirtempid;
    $setting = $template.CreateDirectorySetting();
}
else{
    Write-Host "Directory settings for Group.Unified found. Current value for EnableMIPLabels:" -ForegroundColor Green;
    Write-Host $setting["EnableMIPLabels"];
}

$setting["EnableMIPLabels"] = "true";
if (-not $settingIsNew){
    #Reset AAD based classification?
    #$setting["ClassificationList"] = "";
    #$setting["DefaultClassification"] = "";
    #$setting["ClassificationDescriptions"] = "";
}

if ($settingIsNew){

    New-AzureADDirectorySetting -DirectorySetting $setting;
    Write-Host "New directory settings for Group.Unified applied." -ForegroundColor Green;
    $setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified"};
}
else{
    Set-AzureADDirectorySetting -Id $setting.Id -DirectorySetting $setting;
    Write-Host "Updated directory settings for Group.Unified." -ForegroundColor Green;
    $setting = Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified"};
}
$setting.Values;

Holger Schwichtenberg: Listing the contents of a ZIP archive with PowerShell without extracting it

There is no cmdlet for listing the contents of ZIP archives, but you can use the .NET class System.IO.Compression.ZipFile in PowerShell.

Christian Dennig [MS]: Azure DevOps Terraform Provider

Not too long ago, the first version of the Azure DevOps Terraform Provider was released. In this article I will show you with several examples which features are currently supported in terms of build pipelines and how to use the provider – also in conjunction with Azure. The provider is the last “building block” for many people working in the “Infrastructure As Code” space to create environments (including Git Repos, service connections, build + release pipelines etc.) completely automatically.

The provider was released in June 2020 in version 0.0.1, but to be honest: the feature set is quite rich already at this early stage.

The features I would like to discuss with the help of examples are as follows:

  • Creating a DevOps project including a hosted Git repo
  • Creating a build pipeline
  • Using variables and variable groups
  • Creating an Azure service connection and using variables/secrets from an Azure KeyVault

Example 1: Basic Usage

The Azure DevOps provider can be integrated in a script like any other Terraform provider. All that’s required is the URL to the DevOps organisation and a Personal Access Token (PAT) with which the provider can authenticate itself against Azure DevOps.

The PAT can be easily created via the UI of Azure DevOps by creating a new token via User Settings --> Personal Access Token --> New Token. For the sake of simplicity, in this example I give “Full Access” to it…of course this should be adapted for your own purposes.

Create a personal access token

The documentation of the Terraform Provider contains information about the permissions needed for the respective resource.

Defining relevant scopes

Once the access token has been created, the Azure DevOps provider can be referenced in the terraform script as follows:

provider "azuredevops" {
  version               = ">= 0.0.1"
  org_service_url       = var.orgurl
  personal_access_token = var.pat
}

The two variables orgurl and pat should be exposed as environment variables:

$ export TF_VAR_orgurl="https://dev.azure.com/<ORG_NAME>"
$ export TF_VAR_pat="<PAT_FROM_AZDEVOPS>"
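
With the provider block in place and the variables exported, the usual Terraform workflow applies (a minimal sketch; run it in the directory that contains your .tf files):

$ terraform init      # downloads the azuredevops provider
$ terraform plan      # preview of the resources to be created
$ terraform apply     # create them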

So, this is basically all that is needed to work with Terraform and Azure DevOps. Let’s start by creating a new project and a git repository. Two resources are needed for this, azuredevops_project and azuredevops_git_repository:

resource "azuredevops_project" "project" {
  project_name       = "Terraform DevOps Project"
  description        = "Sample project to demonstrate AzDevOps <-> Terraform integration"
  visibility         = "private"
  version_control    = "Git"
  work_item_template = "Agile"
}

resource "azuredevops_git_repository" "repo" {
  project_id = azuredevops_project.project.id
  name       = "Sample Empty Git Repository"

  initialization {
    init_type = "Clean"
  }
}

Additionally, we also need an initial pipeline that will be triggered on a git push to master.
In a pipeline, you usually work with variables that come from different sources. These can be pipeline variables, values from a variable group or from external sources such as an Azure KeyVault. The first, simple build definition uses pipeline variables (mypipelinevar):

resource "azuredevops_build_definition" "build" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline"

  ci_trigger {
    use_yaml = true
  }

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline.yaml"
  }

  variable {
    name      = "mypipelinevar"
    value     = "Hello From Az DevOps Pipeline!"
    is_secret = false
  }
}

The corresponding pipeline definition looks as follows:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Pipeline is running!
    echo And here is the value of our pipeline variable
    echo $(mypipelinevar)
  displayName: 'Run a multi-line script'

The pipeline just executes some scripts – for demo purposes – and outputs the variable stored in the definition to the console.

Running the Terraform script, it creates an Azure DevOps project, a git repository and a build definition.

Azure DevOps Project
Git Repository
Pipeline

As soon as the file azure-pipeline.yaml discussed above is pushed into the repo, the corresponding pipeline is triggered and the results can be found in the respective build step:

Running pipeline
Output of build pipeline

Example 2: Using variable groups

Normally, variables are not directly stored in a pipeline definition, but rather put into Azure DevOps variable groups. This allows you to store individual variables centrally in Azure DevOps and then reference and use them in different pipelines.

Fortunately, variable groups can also be created using Terraform. For this purpose, the resource azuredevops_variable_group is used. In our script this looks like this:

resource "azuredevops_variable_group" "vars" {
  project_id   = azuredevops_project.project.id
  name         = "my-variable-group"
  allow_access = true

  variable {
    name  = "var1"
    value = "value1"
  }

  variable {
    name  = "var2"
    value = "value2"
  }
}

resource "azuredevops_build_definition" "buildwithgroup" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline with VarGroup"

  ci_trigger {
    use_yaml = true
  }

  variable_groups = [
    azuredevops_variable_group.vars.id
  ]

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline-with-vargroup.yaml"
  }
}

The first part of the terraform script creates the variable group in Azure DevOps (name: my-variable-group) including two variables (var1 and var2), the second part – a build definition – uses the variable group, so that the variables can be accessed in the corresponding pipeline file (azure-pipeline-with-vargroup.yaml).

It has the following content:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: my-variable-group

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Var1: $(var1)
    echo Var2: $(var2)
  displayName: 'Run a multi-line script'

If you run the Terraform script, the corresponding Azure DevOps resources will be created: a variable group and a pipeline.

Variable Group

If you push the build YAML file to the repo, the pipeline will be executed and you should see the values of the two variables as output on the build console.

Output of the variables from the variable group

Example 3: Using an Azure KeyVault and Azure DevOps Service Connections

For security reasons, critical values are neither stored directly in a pipeline definition nor in Azure DevOps variable groups. You would normally use an external vault like Azure KeyVault. Fortunately, with Azure DevOps you have the possibility to access an existing Azure KeyVault directly and access secrets which are then made available as variables within your build pipeline.

Of course, Azure DevOps must be authenticated/authorized against Azure for this. Azure DevOps uses the concept of service connections for this purpose. Service connections are used to access e.g. Bitbucket, GitHub, Jira, Jenkins… or – as in our case – Azure. You define a user – for Azure this is a service principal – which is used by DevOps pipelines to perform various tasks – in our example fetching a secret from a KeyVault.

To demonstrate this scenario, various things must first be set up on Azure:

  • Creating an application / service principal in the Azure Active Directory, which is used by Azure DevOps for authentication
  • Creation of an Azure KeyVault (including a resource group)
  • Authorizing the service principal to the Azure KeyVault to be able to read secrets (no write access!)
  • Creating a secret that will be used in a variable group / pipeline

With the Azure Provider, Terraform offers the possibility to manage Azure services. We will be using it to create the resources mentioned above.

AAD Application + Service Principal

First of all, we need a service principal that can be used by Azure DevOps to authenticate against Azure. The corresponding Terraform script looks like this:

data "azurerm_client_config" "current" {
}

provider "azurerm" {
  version = "~> 2.6.0"
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

## Service Principal for DevOps

resource "azuread_application" "azdevopssp" {
  name = "azdevopsterraform"
}

resource "random_string" "password" {
  length  = 24
}

resource "azuread_service_principal" "azdevopssp" {
  application_id = azuread_application.azdevopssp.application_id
}

resource "azuread_service_principal_password" "azdevopssp" {
  service_principal_id = azuread_service_principal.azdevopssp.id
  value                = random_string.password.result
  end_date             = "2024-12-31T00:00:00Z"
}

resource "azurerm_role_assignment" "contributor" {
  principal_id         = azuread_service_principal.azdevopssp.id
  scope                = "/subscriptions/${data.azurerm_client_config.current.subscription_id}"
  role_definition_name = "Contributor"
}

With the script shown above, both an AAD Application and a service principal are generated. Please note that the service principal is assigned the role Contributor – on subscription level, see the scope assignment. This should be restricted accordingly in your own projects (e.g. to the respective resource group)!

Azure KeyVault

The KeyVault is created the same way as the previous resources. It is important to note that the user working against Azure is given full access to the secrets in the KeyVault. Further down in the script, permissions for the Azure DevOps service principal are also granted within the KeyVault – but in that case only read permissions! Last but not least, a corresponding secret called kvmysupersecretsecret is created, which we can use to test the integration.

resource "azurerm_resource_group" "rg" {
  name     = "myazdevops-rg"
  location = "westeurope"
}

resource "azurerm_key_vault" "keyvault" {
  name                        = "myazdevopskv"
  location                    = "westeurope"
  resource_group_name         = azurerm_resource_group.rg.name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_enabled         = true
  purge_protection_enabled    = false

  sku_name = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    secret_permissions = [
      "backup",
      "get",
      "list",
      "purge",
      "recover",
      "restore",
      "set",
      "delete",
    ]
    certificate_permissions = [
    ]
    key_permissions = [
    ]
  }

}

## Grant DevOps SP permissions

resource "azurerm_key_vault_access_policy" "azdevopssp" {
  key_vault_id = azurerm_key_vault.keyvault.id

  tenant_id = data.azurerm_client_config.current.tenant_id
  object_id = azuread_service_principal.azdevopssp.object_id

  secret_permissions = [
    "get",
    "list",
  ]
}

## Create a secret

resource "azurerm_key_vault_secret" "mysecret" {
  key_vault_id = azurerm_key_vault.keyvault.id
  name         = "kvmysupersecretsecret"
  value        = "KeyVault for the Win!"
}

If you have followed the steps described above, the result in Azure is a newly created KeyVault containing one secret:

Azure KeyVault

Service Connection

Now, we need the integration into Azure DevOps, because we finally want to access the newly created secret in a pipeline. Azure DevOps is “by nature” able to access a KeyVault and the secrets it contains. To do this, however, you have to perform some manual steps – when not using Terraform – to enable access to Azure. Fortunately, these can now be automated with Terraform. The following resources are used to create a service connection to Azure in Azure DevOps and to grant access to our project:

## Service Connection

resource "azuredevops_serviceendpoint_azurerm" "endpointazure" {
  project_id            = azuredevops_project.project.id
  service_endpoint_name = "AzureRMConnection"
  credentials {
    serviceprincipalid  = azuread_service_principal.azdevopssp.application_id
    serviceprincipalkey = random_string.password.result
  }
  azurerm_spn_tenantid      = data.azurerm_client_config.current.tenant_id
  azurerm_subscription_id   = data.azurerm_client_config.current.subscription_id
  azurerm_subscription_name = "<SUBSCRIPTION_NAME>"
}

## Grant permission to use service connection

resource "azuredevops_resource_authorization" "auth" {
  project_id  = azuredevops_project.project.id
  resource_id = azuredevops_serviceendpoint_azurerm.endpointazure.id
  authorized  = true 
}
Service Connection

Creation of an Azure DevOps variable group and pipeline definition

The last step necessary to use the KeyVault in a pipeline is to create a corresponding variable group and “link” the existing secret.

## Pipeline with access to kv secret

resource "azuredevops_variable_group" "kvintegratedvargroup" {
  project_id   = azuredevops_project.project.id
  name         = "kvintegratedvargroup"
  description  = "KeyVault integrated Variable Group"
  allow_access = true

  key_vault {
    name                = azurerm_key_vault.keyvault.name
    service_endpoint_id = azuredevops_serviceendpoint_azurerm.endpointazure.id
  }

  variable {
    name    = "kvmysupersecretsecret"
  }
}
Variable Group with KeyVault integration

Test Pipeline

All prerequisites are now in place, but we still need a pipeline with which we can test the scenario.

Script for the creation of the pipeline:

resource "azuredevops_build_definition" "buildwithkeyvault" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline with KeyVault Integration"

  ci_trigger {
    use_yaml = true
  }

  variable_groups = [
    azuredevops_variable_group.kvintegratedvargroup.id
  ]

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline-with-keyvault.yaml"
  }
}

Pipeline definition (azure-pipeline-with-keyvault.yaml):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: kvintegratedvargroup

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo KeyVault secret value: $(kvmysupersecretsecret)
  displayName: 'Run a multi-line script'

If you have run the Terraform script and pushed the pipeline file into the repo, you will get the following result in the next build (the secret is not shown in the console for security reasons, of course!):

Output: KeyVault integrated variable group

Wrap-Up

Setting up new Azure DevOps projects was not always the easiest task, as sometimes manual steps were required. With the release of the first Terraform provider version for Azure DevOps, this has changed almost dramatically 🙂 You can now – as one of the last building blocks for automation in a dev project – create many things via Terraform in Azure DevOps. In the example shown here, the access to an Azure KeyVault including the creation of the corresponding service connection could be achieved. However, only one module was shown here – frankly, one for a task that “annoyed” me every now and then, as most of it had to be set up manually before having a Terraform provider. The provider can also manage branch policies, set up groups and group memberships etc. With this first release you are still “at the beginning of the journey”, but in my opinion, it is a “very good start” with which you can achieve a lot.

I am curious what will be supported next!

Sample files can be found here: https://gist.github.com/cdennig/4866a74b341a0079b5a59052fa735dbc

Golo Roden: Standard solution or custom development?

In software development, you often face the choice of using an off-the-shelf standard solution or building your own. What is advisable?

Code-Inside Blog: EWS, Exchange Online and OAuth with a Service Account

This week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.

But here is the full story:

Our goal

We wanted to access contact information via a web service from the organization, just like the traditional “Global Address List” in Exchange/Outlook. We knew that EWS was an option for the OnPrem Exchange, but what about Exchange Online?

The big problem: Authentication is tricky. We wanted to use a “traditional” Service Account approach (think of username/password). Unfortunately the “basic auth” way will be blocked in the near future because of security concerns (makes sense TBH). There is an alternative approach available, but at first it seems not to work as we would like.

So… what now?

EWS is… old. Why?

The Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and OnPrem Exchanges. On the other hand we could use the Microsoft Graph, but - at least currently - there is not a single “contact” API available.

To mimic the GAL we would need to query List Users and List orgContacts, which would be ok, but the “orgContacts” has a “flaw”. “Hidden” contacts (“msexchhidefromaddresslists”) are returned from this API and we thought that this might be a NoGo for our customers.

Another argument for using EWS was, that we could support OnPrem and Online with one code base.

Docs from Microsoft

The good news is that EWS and the auth problem are more or less well documented here.

There are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.

Delegation:

Delegation means that we can write a desktop app and all actions are executed in the name of the signed-in user.

Application:

Application means that the app itself can perform some actions without any user involved.

EWS and the application way

At first we thought that we might need to use the “application” way.

The good news is that this was easy and worked. The bad news is that the application needs the EWS permission “full_access_as_app”, which means that our application can access all mailboxes of this tenant. This might be ok for certain apps, but it scared us.

Back to the delegation way:

EWS and the delegation way

The documentation from Microsoft is good, but our “Service Account” use case was not mentioned. In the example from Microsoft a user needs to log in manually.

Solution / TL;DR

After some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:

  1. Follow the normal “delegate” steps from the Microsoft Docs

  2. Instead of this code, which will trigger the login UI:

...
// The permission scope required for EWS access
var ewsScopes = new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" };

// Make the interactive token request
var authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();
...

Use the “AcquireTokenByUsernamePassword” method:

...
// Non-interactive token request using the resource owner password credentials (ROPC) flow
var cred = new NetworkCredential("UserName", "Password");
var authResult = await pca.AcquireTokenByUsernamePassword(new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" }, cred.UserName, cred.SecurePassword).ExecuteAsync();
...
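
For completeness: pca in both snippets is the MSAL public client created in the “delegate” steps. A minimal sketch of how it could be built – the client and tenant IDs are placeholders for your own AAD app registration:

// Placeholders - replace with the values of your AAD app registration
var pca = PublicClientApplicationBuilder
    .Create("00000000-0000-0000-0000-000000000000") // application (client) ID
    .WithTenantId("contoso.onmicrosoft.com")        // tenant ID or verified domain
    .Build();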

To make this work you need to enable “Treat application as public client” under “Authentication” > “Advanced settings” in your AAD application, because this uses the “resource owner password credentials” flow.

Now you should be able to get the AccessToken and do some EWS magic.

I posted a shorter version on Stackoverflow.com

Hope this helps!

Holger Schwichtenberg: Book on Blazor WebAssembly and Blazor Server

The Dotnet-Doktor's current book provides an introduction to Microsoft's new web framework.

Jürgen Gutsch: Getting the .editorconfig working with the .NET Framework and MSBuild

I demonstrated the results of my last post about the .editorconfig to the team last week. They were quite happy about the fact that the build fails on a code style error, but there was one question I couldn't really answer. The question was:

Does this also work for .NET Framework?

It should, because it is Roslyn that analyzes the code, not any framework.

To try it out I created three different class libraries that have the same class file linked into them, with the same code style errors:

    using System;

namespace ClassLibraryNetFramework
{
    public class EditorConfigTests
    {
    public int MyProperty { get; } = 1;
    public EditorConfigTests() { 
    if(this.MyProperty == 2){
        Console.WriteLine("Hallo Welt");
        }        }
    }
}

This code file has at least eleven code style errors in it:

I created a .NET Standard library, a .NET Core library, and a .NET Framework library in VS2019 this time. The solution in VS2019 now looks like this:

I also added the MyGet Roslyn NuGet Feed to the NuGet sources and referenced the code style analyzers:

This is the URL and the package name for you to copy:

  • https://dotnet.myget.org/F/roslyn/api/v3/index.json
  • Microsoft.CodeAnalysis.CSharp.CodeStyle Version: 3.8.0-1.20330.5

I also set the global.json to the latest preview of the .NET 5 SDK to be sure to use the latest tools:

{
  "sdk": {
    "version": "5.0.100-preview.6.20318.15"
  }
}

It didn't really work - My fault in the last blog post!

I saw some code style errors in VS2019, but not all eleven errors I expected. I tried a build and the build didn't fail. Because I knew it had worked the last time I tried it using the dotnet CLI, I did the same here: I ran dotnet build and dotnet msbuild, but the build didn't fail.

This is exactly what you don't need as a software developer: doing exactly the same thing twice, where one time it works and the other time it fails, and you have no idea why.

I tried a lot of things and compared project files, solution files, and .editorconfig files. Actually, I compared them with the Weather Stats application I used in the last post. In the end I found one line in the PropertyGroup of the weather application's project files that shouldn't be there but was actually the reason why it worked.

<CodeAnalysisRuleSet>..\editorconfig.ruleset</CodeAnalysisRuleSet>

While trying to get it running for the last post, I also experimented with a ruleset file. The ruleset file is an XML file that can be used to enable or disable analysis rules in VS2019. I added a ruleset file to the solution and linked it into the projects, but forgot about that.

So it seemed the failing builds of the last post weren't because of the .editorconfig but because of this ruleset file.

It also seemed the ruleset file is needed to get it working. That shouldn't be the case and I asked the folks via the GitHub Issue about that. The answer was fast:

  • Fact #1: The ruleset file isn't needed

  • Fact #2: The regular .editorconfig entries don't work yet

The solution

Currently the ruleset entries were moved to the .editorconfig. This means you need to add IDE-specific entries to the .editorconfig to get it running, which also means you will have redundant entries until all the code style analyzers are moved to Roslyn and mapped to the .editorconfig:

# IDE0007: Use 'var' instead of explicit type
dotnet_diagnostic.IDE0007.severity = error

# IDE0055 Fix formatting
dotnet_diagnostic.IDE0055.severity = error

# IDE005_gen: Remove unnecessary usings in generated code
dotnet_diagnostic.IDE0005_gen.severity = error

# IDE0065: Using directives must be placed outside of a namespace declaration
dotnet_diagnostic.IDE0065.severity = error

# IDE0059: Unnecessary assignment
dotnet_diagnostic.IDE0059.severity = error

# IDE0003: Name can be simplified
dotnet_diagnostic.IDE0003.severity = error  

As mentioned, these entries are already in the .editorconfig but written differently.
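
For comparison, the “regular” notation for one of these rules (IDE0007, prefer var) looks roughly like this – the exact option values depend on what you already configured:

# IDE0007 expressed via the regular code style options
csharp_style_var_for_built_in_types = true:error
csharp_style_var_when_type_is_apparent = true:error
csharp_style_var_elsewhere = true:error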

In the GitHub issue they also suggested adding a specific line in case you don't know all the IDE numbers. This line writes out warnings for all possible code style failures. You'll see the numbers in the warning output and can then configure how each code style failure should be handled:

# C# files
[*.cs]
dotnet_analyzer_diagnostic.category-Style.severity = warning

This solves the problem and it actually works really well.

Conclusion

Even if it solves the problem, I really hope this is an intermediate solution only, because of the redundant entries in the .editorconfig. I would prefer to not have the IDE-specific entries, but I guess this needs some more time and a lot of work done by Microsoft.

Holger Schwichtenberg: AsNoTrackingWithIdentityResolution() in Entity Framework Core 5.0 as of Preview 7

Microsoft has changed the name for enforced identity resolution when loading in "no-tracking" mode.

Jürgen Gutsch: .NET Interactive in Jupyter Notebooks

For almost a year now I have been doing a lot of Python projects. Actually, Python isn't that bad. Building web applications with Python and Flask works quite similarly to NodeJS and ExpressJS. And as with NodeJS, Python development is really great using Visual Studio Code.

People who are used to Python know Jupyter Notebooks as a way to create interactive documentation. Interactive documentation means that the code snippets are executable and that you can use Python code to draw charts or to calculate and display data.

If I got it right, Jupyter Notebook was IPython in the past. Now Jupyter Notebook is a standalone project and the IPython project focuses on Python Interactive and Python kernels for Jupyter Notebook.

The so-called kernels extend Jupyter Notebook to execute a specific language. The Python kernel is the default, but you are able to install a lot more kernels; there are kernels for NodeJS and others.

Microsoft is working on .NET Interactive and kernels for Jupyter Notebook. You are now able to write interactive documentation in Jupyter Notebook using C#, F#, and PowerShell as well.

In this blog post I'll try to show you how to install and to use it.

Install Jupyter Notebook

You need to have Python3 installed on your machine. The best way to install Python on Windows is to use Chocolatey:

choco install python

I have been using Chocolatey as a Windows package manager for many years and never had any problems.

Alternatively, you could download and install Python 3 directly or by using the Anaconda installer.

If Python is installed you can install Jupyter Notebook using the Python package manager PIP:

pip install notebook

You can now use Jupyter by just typing jupyter notebook in the console. This starts the notebook server with the default Python3 kernel. The following command shows the installed kernels:

jupyter kernelspec list

We'll see the python3 kernel in the console output:

Install .NET Interactive

The goal is to have the .NET Interactive kernels running in Jupyter. To get this done you first need to install the latest build of .NET Interactive from MyGet:

dotnet tool install -g --add-source "https://dotnet.myget.org/F/dotnet-try/api/v3/index.json" Microsoft.dotnet-interactive

Since NuGet is not the place to publish continuous integration build artifacts, Microsoft uses MyGet as well to publish previews, nightly builds, and continuous integration build artifacts.

Or install the latest stable version from NuGet:

dotnet tool install -g Microsoft.dotnet-interactive

If this is installed, you can use dotnet interactive to install the kernels for Jupyter Notebook:

dotnet interactive jupyter install

Let's see whether the kernels are installed:

jupyter kernelspec list

listkernels02

That's it. We now have four different kernels installed.

Run Jupyter Notebook

Let's run Jupyter by calling the next command. Be sure to navigate into a folder where your notebooks are or where you want to save your notebooks:

cd \git\hub\dotnet-notebook
jupyter notebook

startnotebook

It now starts a web server that serves the notebooks from the current location and opens a browser. The current folder will be the working folder for the currently running Jupyter instance. I don't have any files in that folder yet.

Here we have the Python3 and the three new .NET notebook types available:

notebook01

I now want to start playing around with a C# based notebook. So I create a new .NET (C#) notebook:

Try .NET Interactive

Let's add some content and a code snippet. At first I added a Markdown cell.

The so-called "cells" are content elements that support specific content types. A Markdown cell is one type, and a code cell is another. The latter executes a code snippet and shows the output underneath:

notebook02

That is an easy one. Now let's play with variables. I placed two more code cells and some small Markdown cells below:

notebook03

And re-run the entire notebook:

notebook04

As in Python notebooks, variables are valid in the entire notebook and not only in the single code cell where they were declared.
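
A quick sketch of what that looks like with two separate C# cells:

// Cell 1: declare a variable
var greeting = "Hello from .NET Interactive";

// Cell 2 (a separate code cell further down): the variable is still in scope
Console.WriteLine($"{greeting} - executed at {DateTime.Now}");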

What else?

Inside a .NET Interactive notebook you can do the same stuff as in a regular code file. You are able to connect to a database, to Azure or just open a file locally on your machine. You can import namespaces as well as reference NuGet packages:

#r "nuget:NodaTime,2.4.8"
#r "nuget:Octokit,0.47.0"

using Octokit;
using NodaTime;
using NodaTime.Extensions;
using XPlot.Plotly;

VS Code

VS Code also supports Jupyter Notebooks using Microsoft's Python add-in:

vscode01

Actually, it needs a couple of seconds until the Jupyter server is started. Once it is up and running, it works like a charm in VS Code. I really prefer VS Code over the browser interface to write notebooks.

GitHub

If you use a notepad to open a notebook file, you will see that it is a JSON file that also contains the outputs of the code cells:

vscode01

Because of that, I was really surprised that GitHub supports Jupyter Notebooks as well and displays them in a human-readable format including the outputs. I expected to see the raw source of the notebook instead of the output:

vscode01

The rendering is limited but good enough to read the document. This means, it could make sense to write a notebook instead of a simple markdown file on GitHub.

Conclusion

I really like the concept of interactive documentation. It is pretty common in the data science, analytics, and statistics universe; Python developers as well as MATLAB developers know the concept.

Personally, I see a great benefit in other areas, too: learning material, library and API documentation, and any documentation that focuses on code.

I also see a benefit for documentation about production lines, where several machines work together in a chain. Since you are able to use and execute .NET code, you could connect to machine sensors to read the state of the machines and display it in the documentation. The maintenance people would then be able to see the state directly in the documentation of the production line.

Marco Scheel: Beware of the Teams Admin center to create new teams (and assign owners)

The Microsoft Teams Admin Center can be used to create a new Team. The initial dialog allows you to set multiple owners for the Team. This feature was added over time and is a welcome addition to make the life of an administrator easier. But the implementation has a big shortcoming: the owners specified in this dialog will not become members of the underlying Microsoft 365 Group in Azure Active Directory. As a result, all Microsoft 365 Group services checking for members will not behave as expected. For example: these owners will not be able to access Planner. Other services like Teams and SharePoint work by accident. image

Let’s start with some basic information. Office 365 (now Microsoft 365) Groups use a special AAD group type, but it is still a group in Azure Active Directory. A group in AAD is very similar to the old-school AD group in our on-premises directories. The group is made of members. In most on-premises cases these groups are managed by your directory admins. But also in AD you can specify owners of a group who will then be able to manage the group… if they have the right tool (dsa.msc, …). In the cloud, Microsoft (and myself) is pushing towards self-service for group management. This “self-service-first approach” has been obvious since the introduction of Office 365 (now Microsoft 365) Groups. Teams, being one of the most famous M365 Group services, is also pushing the owner/member model where end users are owners of a Team (and therefore of an AAD group). All end-user-facing UX from Microsoft abstracts away how the underlying AAD group is managed. Teams, for example, will group owners and members in two sections: image

But if you look at the underlying AAD group you will find, that every owner is also a member:

image

And the Azure AD portal shows a dedicated section to manage owners of the group: image

The Microsoft Admin portal also has dedicated sections for members and owners: image

In general it is important that your administrative staff is aware of how group membership (including ownership) works. The problem with the Teams Admin portal mentioned in the beginning is that this initial dialog leaves the group in an inconsistent state without making the admin aware of this misalignment. Let's check the group created in the initial screenshot using the Teams admin center. We specified 5 admin users and one member after the initial dialog. None of the admin users was added as a member in AAD (Microsoft 365 Admin portal screenshot): image

But looking at the Teams Admin center won’t show this “misconfiguration”: image

If Leia wants to access the associated Planner service for this Team/Group the following error will show: image

Planner checks against group membership. Only if the user is a member will Planner (and other services) check whether the user is also an owner and then show administrative controls. Teams itself looks ok. SharePoint implemented a hack: the owners of the AAD group are granted Site Collection Admin permission, so every item is accessible, because that is what Site Collection Admins are for. If your users report problems like this and everything looks ok based on the Teams Admin center, go check “a better” portal like AAD.

To fix the problem in the Teams Admin center, the owner has to be demoted to member and then promoted to owner again.
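
If you prefer to fix it via script instead of clicking through the portal, something along these lines with the AzureAD PowerShell module should work (a sketch; the group object ID is a placeholder):

# Add every owner of the affected group as a member as well (assumes Connect-AzureAD was already run)
$groupId = "00000000-0000-0000-0000-000000000000";
$members = Get-AzureADGroupMember -ObjectId $groupId -All $true;
Get-AzureADGroupOwner -ObjectId $groupId -All $true | ForEach-Object {
    if (-not ($members.ObjectId -contains $_.ObjectId)) {
        Add-AzureADGroupMember -ObjectId $groupId -RefObjectId $_.ObjectId;
    }
}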

A quick test with the SharePoint admin center shows that the system is managing membership as intended and every owner will also be a member. SharePoint requires one owner (Leia) in the first dialog and the second dialog allows additional owners and members: image

The result for members in AAD is correct as all owners are also added as members: image

Until this design flaw is fixed, we recommend not adding more owners in the initial dialog. The currently logged-in admin user will be added as the only owner of the created group. In the next step, remove your admin, add the requested users as members, and promote them to owners. If you rely on self-service, this should not be a problem. If you have automated a custom creation process, hopefully you already add all owners as members, so you should be good most of the time. We at glueckkanja-gab are supporting many customers with our custom lifecycle solution. We are adding an option to report this problem and an optional config switch to fix any detected misalignments.

I’ve created a UserVoice “idea” for this “bug”. So please, let's go: vote!

https://microsoftteams.uservoice.com/forums/555103-public/suggestions/40951714-add-owners-also-as-members-aad-group-in-the-init

Christian Dennig [MS]: Release to Kubernetes like a Pro with Flagger

Introduction

When it comes to running applications on Kubernetes in production, you will sooner or later have the challenge to update your services with a minimum amount of downtime for your users…and – at least as important – to be able to release new versions of your application with confidence…that means, you discover unhealthy and “faulty” services very quickly and are able to rollback to previous versions without much effort.

When you search the internet for best practices or Kubernetes addons that help you with these challenges, you will stumble upon Flagger from WeaveWorks, as I did.

Flagger is basically a controller that will be installed in your Kubernetes cluster. It helps you with canary and A/B releases of your services by handling all the hard stuff like automatically adding services and deployments for your “canaries”, shifting load over time to these and rolling back deployments in case of errors.

As if that wasn’t good enough, Flagger also works in combination with popular Service Meshes like Istio and Linkerd. If you don’t want to use Flagger with such a product, you can also use it on “plain” Kubernetes, e.g. in combination with an NGINX ingress controller. Many choices here…

I like linkerd very much, so I’ll choose that one in combination with Flagger to demonstrate a few of the possibilities you have when releasing new versions of your application/services.

Prerequisites

linkerd

I already set up a plain Kubernetes cluster on Azure for this sample, so I'll start by adding linkerd to it (you can find a complete guide on how to install linkerd and the CLI at https://linkerd.io/2/getting-started/):

$ linkerd install | kubectl apply -f -

After the command has finished, let’s check if everything works as expected:

$ linkerd check && kubectl -n linkerd get deployments
...
...
control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
flagger                  1/1     1            1           3h12m
linkerd-controller       1/1     1            1           3h14m
linkerd-destination      1/1     1            1           3h14m
linkerd-grafana          1/1     1            1           3h14m
linkerd-identity         1/1     1            1           3h14m
linkerd-prometheus       1/1     1            1           3h14m
linkerd-proxy-injector   1/1     1            1           3h14m
linkerd-sp-validator     1/1     1            1           3h14m
linkerd-tap              1/1     1            1           3h14m
linkerd-web              1/1     1            1           3h14m

If you want to open the linkerd dashboard and see the current state of your service mesh, execute:

$ linkerd dashboard

After a few seconds, the dashboard will be shown in your browser.

Microsoft Teams Integration

For alerting and notification, we want to leverage the MS Teams integration of Flagger to get notified each time a new deployment is triggered or a canary release is “promoted” to be the primary release.

Therefore, we need to set up a webhook in an MS Teams channel:

  1. In Teams, choose More options (…) next to the channel name you want to use and then choose Connectors.
  2. Scroll through the list of Connectors to Incoming Webhook, and choose Add.
  3. Enter a name for the webhook, upload an image and choose Create.
  4. Copy the webhook URL. You’ll need it when adding Flagger in the next section.
  5. Choose Done.

Install Flagger

Time to add Flagger to your cluster. For that, we will be using Helm (version 3, so there is no need for a Tiller deployment upfront).

$ helm repo add flagger https://flagger.app

$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

[...]

$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090 \
--set msteams.url=<YOUR_TEAMS_WEBHOOK_URL>

Check if everything has been installed correctly:

$ kubectl get pods -n linkerd -l app.kubernetes.io/instance=flagger

NAME                       READY   STATUS    RESTARTS   AGE
flagger-7df95884bc-tpc5b   1/1     Running   0          0h3m

Great, looks good. So, now that Flagger has been installed, let’s have a look at where it will help us and what kind of objects will be created for canary analysis and promotion. Remember that we use linkerd in this sample, so all objects and features discussed in the following section are only relevant for linkerd.

How Flagger works

The sample application we will be deploying shortly consists of a VueJS Single Page Application that is able to display quotes from the Star Wars movies – and it’s able to request the quotes in a loop (to be able to put some load on the service). When requesting a quote, the web application talks to a service (proxy) within the Kubernetes cluster, which in turn talks to another service (quotesbackend) that is responsible for creating the quote (simulating service-to-service calls in the cluster). The SPA as well as the proxy are accessible through an NGINX ingress controller.

After the application has been successfully deployed, we also add a canary object which takes care of the promotion of a new revision of our backend deployment. The Canary object will look like this:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: quotesbackend
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quotesbackend
  progressDeadlineSeconds: 60
  service:
    port: 3000
    targetPort: 3000
  analysis:
    interval: 20s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 70
    stepWeight: 10
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s

What this configuration basically does is watch for new revisions of the quotesbackend deployment. When a new revision appears, Flagger starts a canary deployment for it. Every 20s, it will increase the weight of the traffic split by 10% until it reaches 70%. If no errors occur during the promotion, the new revision will be scaled up to 100% and the old version will be scaled down to zero, making the canary the new primary. Flagger will monitor the request success rate and the request duration (linkerd Prometheus metrics). If one of them drops below the threshold set in the Canary object, a rollback to the old version will be started and the new deployment will be scaled back to zero pods.

To achieve all of the analysis mentioned above, Flagger will create several new objects for us:

  • backend-primary deployment
  • backend-primary service
  • backend-canary service
  • SMI / linkerd traffic split configuration (see the sketch after this list)
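
The SMI TrafficSplit is the piece that actually shifts traffic between the primary and the canary service. Flagger creates and updates it automatically, so you never apply it yourself; purely as an illustration (names taken from our sample, weights made up), such an object looks roughly like this:

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: quotesbackend
  namespace: quotes
spec:
  # apex service that clients address
  service: quotesbackend
  backends:
  - service: quotesbackend-primary
    weight: 90
  - service: quotesbackend-canary
    weight: 10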

The resulting architecture will look like this:

So, enough of theory, let’s see how Flagger works with the sample app mentioned above.

Sample App Deployment

If you want to follow the sample on your machine, you can find all the code snippets, deployment manifests etc. on https://github.com/cdennig/flagger-linkerd-canary

Git Repo

First, we will deploy the application in a basic version. This includes the backend and frontend components as well as an Ingress Controller which we can use to route traffic into the cluster (to the SPA app + backend services). We will be using the NGINX ingress controller for that.

To get started, let’s create a namespace for the application and deploy the ingress controller:

$ kubectl create ns quotes

# Enable linkerd integration with the namespace
$ kubectl annotate ns quotes linkerd.io/inject=enabled

# Deploy ingress controller
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ kubectl create ns ingress
$ helm install my-ingress ingress-nginx/ingress-nginx -n ingress

Please note that we annotate the quotes namespace to automatically get the Linkerd sidecar injected at deployment time. Any pod that is created within this namespace will be part of the service mesh and controlled via Linkerd.

As soon as the first part is finished, let’s get the public IP of the ingress controller. We need this IP address to configure the endpoint the VueJS app calls, which is set in a file called settings.js in the frontend/Single Page Application pod. This file is referenced when the index.html page gets loaded. The file itself is not present in the Docker image. We mount it at deployment time from a Kubernetes secret to the appropriate location within the running container.

One more thing: to have a proper DNS name to call our service (instead of using the plain IP), I chose to use NIP.io. The service is dead simple! E.g. you can simply use the DNS name 123-456-789-123.nip.io and the service will resolve it to the host with IP 123.456.789.123. Nothing to configure, no more editing of /etc/hosts…

So first, let’s determine the IP address of the ingress controller…

# get the IP address of the ingress controller...

$ kubectl get svc -n ingress
NAME                                            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
my-ingress-ingress-nginx-controller             LoadBalancer   10.0.93.165   52.143.30.72   80:31347/TCP,443:31399/TCP   4d5h
my-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.157.46   <none>         443/TCP                      4d5h

Please open the file settings_template.js and adjust the endpoint property to point to the cluster (in this case, the IP address is 52.143.30.72, so the DNS name will be 52-143-30-72.nip.io).

Next, we need to add the corresponding Kubernetes secret for the settings file:

$ kubectl create secret generic uisettings --from-file=settings.js=./settings_template.js -n quotes

As mentioned above, this secret will be mounted to a special location in the running container. Here’s the deployment file for the frontend – please see the sections for volumes and volumeMounts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: quotesfrontend
spec:
  selector:
      matchLabels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
  replicas: 1
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
    spec:
      containers:
      - name: quotesfrontend
        image: csaocpger/quotesfrontend:4
        volumeMounts:
          - mountPath: "/usr/share/nginx/html/settings"
            name: uisettings
            readOnly: true
      volumes:
      - name: uisettings
        secret:
          secretName: uisettings

Last but not least, we also need to adjust the ingress definition to be able to work with the DNS / hostname. Open the file ingress.yaml and adjust the hostnames for the two ingress definitions. In this case, the resulting manifest looks like this:
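
The actual file is part of the sample repository; the following is only a rough sketch of its shape, and the API version, paths and backend service names are assumptions based on the sample rather than the original manifest:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quotesfrontend
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: 52-143-30-72.nip.io   # adjust to the IP of your ingress controller
    http:
      paths:
      - path: /
        backend:
          serviceName: quotesfrontend
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quotesproxy
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: 52-143-30-72.nip.io
    http:
      paths:
      - path: /quotes
        backend:
          serviceName: quotesproxy
          servicePort: 80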

Now we are set to deploy the whole application:

$ kubectl apply -f base-backend-infra.yaml -n quotes
$ kubectl apply -f base-backend-app.yaml -n quotes
$ kubectl apply -f base-frontend-app.yaml -n quotes
$ kubectl apply -f ingress.yaml -n quotes

After a few seconds, you should be able to point your browser to the hostname and see the “Quotes App”:

Basic Quotes app

If you click on the “Load new Quote” button, the SPA will call the backend (here: http://52-143-30-72.nip.io/quotes), request a new “Star Wars” quote and show the result of the API Call in the box at the bottom. You can also request quotes in a loop – we will need that later to simulate load.

Flagger Canary Settings

We need to configure Flagger and make it aware of our deployment – remember, we only target the backend API that serves the quotes.

Therefore, we deploy the canary configuration (the canary.yaml file) discussed before:

$ kubectl apply -f canary.yaml -n quotes

Wait a few seconds, then check the services, deployments and pods to see if everything has been installed correctly:

$ kubectl get svc -n quotes

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quotesbackend           ClusterIP   10.0.64.206    <none>        3000/TCP   51m
quotesbackend-canary    ClusterIP   10.0.94.94     <none>        3000/TCP   70s
quotesbackend-primary   ClusterIP   10.0.219.233   <none>        3000/TCP   70s
quotesfrontend          ClusterIP   10.0.111.86    <none>        80/TCP     12m
quotesproxy             ClusterIP   10.0.57.46     <none>        80/TCP     51m

$ kubectl get po -n quotes
NAME                                     READY   STATUS    RESTARTS   AGE
quotesbackend-primary-7c6b58d7c9-l8sgc   2/2     Running   0          64s
quotesfrontend-858cd446f5-m6t97          2/2     Running   0          12m
quotesproxy-75fcc6b6c-6wmfr              2/2     Running   0          43m

$ kubectl get deploy -n quotes
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
quotesbackend           0/0     0            0           50m
quotesbackend-primary   1/1     1            1           64s
quotesfrontend          1/1     1            1           12m
quotesproxy             1/1     1            1           43m

That looks good! Flagger has created new services, deployments and pods for us to be able to control how traffic will be directed to existing/new versions of our “quotes” backend. You can also check the canary definition in Kubernetes, if you want:

$ kubectl describe canaries -n quotes

Name:         quotesbackend
Namespace:    quotes
Labels:       <none>
Annotations:  API Version:  flagger.app/v1beta1
Kind:         Canary
Metadata:
  Creation Timestamp:  2020-06-06T13:17:59Z
  Generation:          1
  Managed Fields:
    API Version:  flagger.app/v1beta1
[...]

You will also receive a notification in Teams that a new deployment for Flagger has been detected and initialized:

Kick-Off a new deployment

Now comes the part where Flagger really shines. We want to deploy a new version of the backend quote API – switching from “Star Wars” quotes to “Star Trek” quotes! What will happen is the following:

  • as soon as we deploy a new “quotesbackend”, Flagger will recognize it
  • new versions will be deployed, but no traffic will be directed to them at the beginning
  • after some time, Flagger will start to redirect traffic to the new version through the canary service via the Linkerd TrafficSplit configuration, starting – according to our canary definition – at a rate of 10%. So 90% of the traffic will still hit our “Star Wars” quotes
  • it will monitor the request success rate and advance the canary weight by 10% every 20 seconds
  • if the 70% traffic split is reached without a significant amount of errors, the deployment will be scaled up to 100% and promoted as the “new primary”

Before we deploy it, let’s request new quotes in a loop (set the frequency e.g. to 300ms via the slider and press “Load in Loop”).

Base deployment: Load quotes in a loop.

Then, deploy the new version:

$ kubectl apply -f st-backend-app.yaml -n quotes

$ kubectl describe canaries quotesbackend -n quotes
[...]
[...]
Events:
  Type     Reason  Age                   From     Message
  ----     ------  ----                  ----     -------
  Warning  Synced  14m                   flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  14m                   flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  4m7s                  flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Starting canary analysis for quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Advance quotesbackend.quotes canary weight 10
  Warning  Synced  3m7s (x2 over 3m27s)  flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Normal   Synced  2m47s                 flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  2m27s                 flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  2m7s                  flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  107s                  flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  87s                   flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  67s                   flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  7s (x3 over 47s)      flagger  (combined from similar events): Promotion completed! Scaling down quotesbackend.quotes

You will notice in the UI that every now and then a quote from “Star Trek” will appear…and that the frequency will increase every 20 seconds as the canary deployment receives more traffic over time. As stated above, when the traffic split reaches 70% and no errors occurred in the meantime, the “canary/new version” will be promoted to the “new primary version” of the quotes backend. From that point on, you will only receive quotes from “Star Trek”.

Canary deployment: new quotes backend servicing “Star Trek” quotes.

Because of the Teams integration, we also get a notification that a new version is being rolled out and – after the promotion to “primary” – that the rollout has finished successfully.

Starting a new version rollout with Flagger
Finished rollout with Flagger

What happens when errors occur?

So far, we have been following the “happy path”…but what happens if there are errors during the rollout of a new canary version? Let’s say we have introduced a bug in our new service that throws an error when requesting a new quote from the backend. Let’s see how Flagger behaves then…

The version that will be deployed now starts throwing errors after a certain amount of time. Because Flagger monitors the “request success rate” via the linkerd metrics, it will notice that something is “not the way it is supposed to be”, stop the promotion of the new “error-prone” version, scale it back to zero pods and keep the current primary backend (meaning the “Star Trek” quotes) in place.

$ kubectl apply -f error-backend-app.yaml -n quotes

$ kubectl describe canaries.flagger.app quotesbackend
[...]
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  23m                    flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  23m                    flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  13m                    flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  3m43s (x4 over 9m43s)  flagger  (combined from similar events): New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m23s (x2 over 12m)    flagger  Advance quotesbackend.quotes canary weight 10
  Normal   Synced  3m23s (x2 over 12m)    flagger  Starting canary analysis for quotesbackend.quotes
  Warning  Synced  2m43s (x4 over 12m)    flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Warning  Synced  2m3s (x2 over 2m23s)   flagger  Halt quotesbackend.quotes advancement success rate 0.00% < 99%
  Warning  Synced  103s                   flagger  Halt quotesbackend.quotes advancement success rate 50.00% < 99%
  Warning  Synced  83s                    flagger  Rolling back quotesbackend.quotes failed checks threshold reached 5
  Warning  Synced  81s                    flagger  Canary failed! Scaling down quotesbackend.quotes

As you can see in the event log, the success rate drops by a significant amount, and Flagger halts the promotion of the new version, scales it down to zero pods and keeps the current version as the “primary” backend.

New backend version throwing errors
Teams notification: service rollout stopped!

Conclusion

With this article, I have certainly only covered the features of Flagger very briefly. But this small example shows what a great relief Flagger can be when it comes to the rollout of new Kubernetes deployments. Flagger can do a lot more than shown here, and it is definitely worth taking a look at this product from Weaveworks.

I hope I could give you some insight and make you want to try more…and have fun with Flagger 🙂

As mentioned above, all the sample files, manifests etc. can be found here: https://github.com/cdennig/flagger-linkerd-canary.

Jürgen Gutsch: Getting the .editorconfig working with MSBuild

UPDATE: While trying out the .editorconfig and writing this post, I made a fundamental mistake. I added a ruleset file to the projects, and this is the reason why it worked. It wasn't really the .editorconfig in this case. I'm really sorry about that. Please see this post to learn how it really works.

In January I wrote a post about setting up VS2019 and VSCode to use the .editorconfig. In this post I'm going to write about how to get the .editorconfig settings checked during build time.

It works like it should in the editors. And it works in VS2019 at build time. But it doesn't work at build time using MSBuild. This means it won't work with the .NET CLI, it won't work with VSCode, and it won't work on any build server that uses MSBuild.

Actually this is a huge downside of the .editorconfig. Why should we use the .editorconfig to enforce the coding style, if a build in VSCode doesn't fail but the same build fails in VS2019? Why should we use the .editorconfig, if the build on a build server doesn't fail? Not all developers are using VS2019; sometimes VSCode is the better choice. And we don't want to install VS2019 on a build server and don't want to call vs.exe to build the sources.

The reason why it is like this is as simple as bad: The Roslyn analyzers that check the code using the .editorconfig are not yet done.

Actually, Microsoft is working on that and is porting the VS2019 coding-style analyzers to Roslyn analyzers that can be downloaded and used via NuGet. Currently, about half of the work is done and some of the analyzers can already be used in a project. See here: #33558

With this post I'd like to try it out. We need this for our projects at the YOO, the company I work for, and I'm really curious how this is going to work in a real project.

Code Analyzers

To try it out, I'm going to use the Weather Stats App I created in previous posts. Feel free to clone it from GitHub and follow the steps I do within this post.

At first you need to add a NuGet package:

Microsoft.CodeAnalysis.CSharp.CodeStyle

This is currently a development version and hosted on MyGet. This requires you to follow the installation instructions on MyGet. Currently it is the following .NET CLI command:

dotnet add package Microsoft.CodeAnalysis.CSharp.CodeStyle --version 3.8.0-1.20330.5 --source https://dotnet.myget.org/F/roslyn/api/v3/index.json

The version number might change in the future. Currently I use version 3.8.0-1.20330.5, which has been out since June 30th.

You need to execute this command for every project in your solution.

After executing this command you'll have the following new lines in the project files:

<PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeStyle" Version="3.8.0-1.20330.5">
    <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    <PrivateAssets>all</PrivateAssets>
</PackageReference>

If not, just copy these lines into the project file and run dotnet restore to actually load the package.

This should be enough to get it running.

Adding coding style errors

To try it out, I need to add some coding style errors. I simply added some like these:
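
The screenshot of those errors isn't reproduced here, but a hypothetical snippet like the following would trigger the coding style rules, assuming the .editorconfig contains something like dotnet_style_qualification_for_field = false:error (class and member names are made up):

public class WeatherStation
{
    private readonly string _name;

    public WeatherStation(string name)
    {
        // Unnecessary "this." qualification - flagged (e.g. as IDE0003)
        // when the .editorconfig forbids member qualification.
        this._name = name;
    }

    public string Describe()
    {
        // Second use of "this." - should also be reported.
        return $"Station: {this._name}";
    }
}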

Roslyn conflicts

Maybe you will get a lot of warnings saying that an instance of the analyzers cannot be created because of a missing Microsoft.CodeAnalysis 3.6.0 assembly, like this:

Could not load file or assembly 'Microsoft.CodeAnalysis, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

This might seem strange because the code analysis assemblies should already be available if Roslyn is used. Actually this error happens if you do a dotnet build while VSCode is running the Roslyn analyzers. Strange but reproducible. Maybe the Roslyn analyzers can only run once at the same time.

To get it running without those warnings, you can simply close VSCode or wait for a few seconds.

Get it running

Actually it didn't work on my machine the first few times. The reason was that I forgot to update the global.json. I was still using a 3.0 runtime to run the analyzers, and this doesn't work.

After updating the global.json to a 5.0 runtime (preview 6 in my case) it failed as expected:

Since the migration of the IDE analyzers to Roslyn analyzers is only half done, not all of the errors will fail the build. This is why the IDE0003 rule doesn't appear here. I used the this keyword twice in the code above, which should also fail the build.

Conclusion

Actually I was wondering why Microsoft didn't start earlier to convert the VS2019 analyzers into Roslyn code analyzers. This is really valuable for teams where developers use VSCode, VS2019, VS for Mac or any other tool to write .NET Core applications. It is not only about showing coding style errors in an editor; it should also fail the build in case coding style errors get checked in.

Anyway, it is working well. And hopefully Microsoft will complete the set of analyzers as soon as possible.

Holger Schwichtenberg: Handling the VAT rates of 5 and 16 percent in the Elster advance VAT return

The tax authorities now simply forgo the separation by tax rate. Developers of accounting solutions, however, have to be considerably more agile.

Code-Inside Blog: Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?

Within our product we are moving more and more stuff into the .NET Core world. Last week we had a discussion around the required software prerequisites, and in the .NET Framework world this question was always easy to answer:

.NET Framework 4.5 or higher.

With .NET Core the answer is slightly different:

In theory, versions within the same major version are compatible: e.g. if you compiled your app with .NET Core 3.0 and a .NET Core 3.1 runtime is the only installed 3.x runtime on the machine, this runtime is used.

This system is called “Framework-dependent apps roll forward” and sounds good.
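
The behavior can also be pinned explicitly via the RollForward MSBuild property (or the equivalent rollForward setting in runtimeconfig.json). A minimal sketch of a project file; the project shape and target framework are just an example:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <!-- Allowed values: LatestPatch, Minor (default), Major, LatestMinor, LatestMajor, Disable -->
    <RollForward>Minor</RollForward>
  </PropertyGroup>

</Project>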

The bad part

Unfortunately this didn’t work for us. We are not sure why, but our app refuses to work because a .dll is not found or missing. The exact reason is currently not clear. Be aware that Microsoft has written a hint that such things might occur:

It’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.

The good part

With .NET Core we could ship the framework with our app and it should run fine wherever we deploy it.
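
Such a self-contained deployment is produced with a normal publish plus a runtime identifier; the RID below is just an example for 64-bit Windows:

$ dotnet publish -c Release -r win-x64 --self-contained true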

Summary

Read the docs about the “app roll forward” approach if you have similar concerns, but test your app with that combination.

As a sidenote: 3.0 is not supported anymore, so it would be good to upgrade it to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.

Hope this helps!

Jürgen Gutsch: Exploring Orchard Core - Part 1

For a while I have been planning to try out the Orchard Core Application Framework. Back then I saw an awesome video where Sébastien Ros showed an early version of Orchard Core. If I remember right, it was this ASP.NET Community Standup: ASP.NET Community Standup - November 27, 2018 - Sebastien Ros on Headless CMS with Orchard Core

Why a blog series

Actually this post wasn't planned to be a series, but as usual the posts are getting longer and longer. The more I write, the more comes to mind to write about. Bloggers know this, I guess. So I needed to decide whether I want to write one monster blog post or a series of smaller posts. Maybe the latter is easier to read and to write.

What is Orchard Core?

Orchard Core is an open-source modular and multi-tenant application framework built with ASP.NET Core, and a content management system (CMS) built on top of that application framework.

Orchard Core is not a new version of the Orchard CMS. It is a completely new thing written in ASP.NET Core. The Orchard CMS was designed as a CMS, but Orchard Core was designed to be an application framework that can be used to build a CMS, a blog or whatever you want to build. I really like the idea of having a framework like this.

I don't want to repeat the stuff that is already on the website. To learn more about it, just visit: https://www.orchardcore.net/

I had a look at the Orchard CMS back when I was evaluating a new blog. It was good, but I didn't really feel confident with it.

The RC2 has been out for a couple of days now, and version 1 should be released in September 2020. The roadmap already defines features for future releases.

Let's have a first glimpse

When I try a CMS or something like this, I try to follow the quick start guide. I want to start the application up to get a first look and feel. As a .NET Core fan-boy I decided to use the .NET CLI to run the application. But first I have to clone the repository to have a more detailed look later on and to run the sample application:

git clone https://github.com/OrchardCMS/OrchardCore.git

This clones the current RC2 into a local repository.

Then we need to cd into the repository and into the web sample:

cd OrchardCore\
cd src\OrchardCore.Cms.Web\

Since this should be an ASP.NET Core application, it should be possible to run the dotnet run command:

dotnet run

As usual in ASP.NET Core I get two URLs to call the app. The HTTP version on port 5000 and the HTTPS version on port 5001.

I should now be able to call the CMS in the browser. Et voilà:

Since every CMS has an admin area, I tried /admin for sure.

At the first start it asks you to set initial credentials and stuff like that. I already did this before. On every subsequent start I just see the log-in screen:

After the log-in I feel warmly welcomed... kinda :-D

Actually this screenshot is a little small because it hides the administration menu, which is the last item in the menu. You should definitely have a look at the /admin/features page, which has a ton of features to enable. Stuff like the GraphQL API, Lucene search indexing, Markdown editing, templating, authentication providers and a lot more.

But I won't go through all the menu items here. You can just have a look by yourself. I actually want to explore the application framework.

I want to see some code

This is why I stopped the application and opened it in VS Code, and this is where the fascinating stuff is.

OK, this is where I thought the fascinating stuff is. There is almost nothing. There are a ton of language files, an almost empty wwwroot folder, some configuration files and the common files like a *.csproj, the Startup.cs and the Program.cs. Except for the localization part, it looks exactly like an empty ASP.NET Core project.

Where is all the Orchard stuff? I expected a lot more to see.

The Program.cs looks pretty common, except for the usage of NLog, which is provided via the OrchardCore.Logging package:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using OrchardCore.Logging;

namespace OrchardCore.Cms.Web
{
    public class Program
    {
        public static Task Main(string[] args)
            => BuildHost(args).RunAsync();

        public static IHost BuildHost(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureLogging(logging => logging.ClearProviders())
                .ConfigureWebHostDefaults(webBuilder => webBuilder
                    .UseStartup<Startup>()
                    .UseNLogWeb())
                .Build();
    }
}

This clears the default logging providers and adds the NLog web logger. It also uses the common Startup class, which is really clean and doesn't need a lot of configuration.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace OrchardCore.Cms.Web
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddOrchardCms();
        }

        public void Configure(IApplicationBuilder app, IHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseStaticFiles();

            app.UseOrchardCore();
        }
    }
}

It only adds the services for the Orchard CMS in the method ConfigureServices and uses Orchard Core stuff in the method Configure.

Actually this Startup configures Orchard Core as a CMS. It seems I would also be able to add Orchard Core to the ServiceCollection by using AddOrchardCore(). I guess this would just add the core functionality to the application. Let's see if I'm right.

Both the AddOrchardCms() and the AddOrchardCore() methods are overloaded and can be configured using an OrchardCoreBuilder. Using this overloads you can add Orchard Core features to your application. I guess the method AddOrchardCms() has a set of features preconfigured to behave like a CMS:
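
Just to illustrate the shape of these overloads, here is a minimal sketch of what such a call can look like in the ConfigureServices method shown above. Which extension methods are actually available on the OrchardCoreBuilder depends on the OrchardCore packages you reference, so treat this as an assumption rather than the definitive API:

public void ConfigureServices(IServiceCollection services)
{
    // CMS flavour: the overload hands you an OrchardCoreBuilder to tweak the preconfigured setup
    services.AddOrchardCms(builder =>
    {
        // extension methods on the builder enable additional features/modules here
    });

    // Framework flavour (instead of the CMS): only the core services,
    // plus whatever you add yourself, e.g. modular MVC support
    // services.AddOrchardCore().AddMvc();
}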

It is a lot of guessing and trying right now. But I haven't read any documentation yet. I just want to play around.

I also wanted to see what is possible with the UseOrchardCore() method, but this one just has one optional parameter that takes an action which receives the IApplicationBuilder. I'm not sure why this action is really needed. I mean, I would be able to configure ASP.NET Core features inside this action. I could also nest a lot of UseOrchardCore() calls. But why?

I think it is time to have a look into the docs at https://docs.orchardcore.net/en/dev/. Don't confuse it with the docs on https://docs.orchardcore.net/. Those are the Orchard CMS docs, which might be outdated now.

The docs are pretty clear. Orchard Core comes in two different targets: The Orchard Core Framework and the Orchard Core CMS. The sample I opened here is the Orchard Core CMS sample. To learn how the Framework works, I need to clone the Orchard Core Samples repository: https://github.com/OrchardCMS/OrchardCore.Samples

I will write about this in the next part of this series.

Not a conclusion yet

I will continue exploring the Orchard Core Framework within the next days and continue writing about it in parallel. The stuff I have seen so far is really promising, and I like the fact that it simply works without a lot of configuration. Exploring the new CMS would be another topic and really interesting as well. Maybe I will find some time for that in the future.

Mario Noack: NDepend 2020.1

When I was recently asked whether I would like to renew my review of the new NDepend version, I gladly agreed. The software describes itself as a "Swiss army knife for .NET and .NET Core development teams". It offers a wide range of options for examining a project for code-level problems and technical debt, for visualizing these over time, and for determining the concrete changes.

The installation is simple, and the Visual Studio integration is optional, quickly done, and a real added value for me, since I get fast access to all relevant functions already during development. Although NDepend then regularly collects analysis results automatically, you notice nothing of it during normal work in Visual Studio. Unfortunately, that is not always the case with Visual Studio extensions from other vendors.

NDepend Dashboard

With a simple click on a large circle you get a small overview, can start an analysis manually, or simply switch to the dashboard. There you get an assessment of the project quality, based on a very sophisticated set of rules the vendor has developed. I did not change the rule set myself. After all, I picked up the code-analysis project again because I see the tool as a neutral and incorruptible auditor of my software projects, one with considerably more experience in this area than I have. The dashboard contains a customizable overview. It includes charts showing how the key metrics develop over time. In addition, especially after selecting a comparison baseline, you find a presentation of the current project rating and a direct comparison. This includes an overview of the number of issues grouped by severity. A click on such a number takes you straight to a concrete list of the issues or problems. This is particularly helpful when a metric did not develop as you expected and you now want to find out the cause.

NDepend issue list

This is also where the demanding part begins! You can very easily pick out the individual issues and even get the locations displayed when they were only present in the comparison baseline, i.e. have since been removed or fixed. Furthermore, you see the definition of the issue in a LINQ-like language, with a detailed explanation and further external references. The descriptions are very good, but by their nature they are not easy reading, especially at the beginning. You need a firm will to raise your personal level here; having no trouble with English texts also helps. Then, however, you will find an ideal partner in NDepend.

A showpiece the vendor is particularly proud of is the Dependency Graph. The name says it all! There seem to be no real limits for this module. You can inspect the interplay of your own or external classes, namespaces, or functions at very high speed and arrange the view clearly thanks to many options, often with very sensible defaults. That can be very helpful when reviewing or restructuring areas of code. Its capabilities are demonstrated impressively by the presentation of the .NET Core 3 classes. In my own current projects, however, I am extremely familiar with the structure, so for me this module sits only in the second row; for many other developers that will certainly not be the case.

Let's come to the price. It is roughly in the range of the JetBrains tools. Since the customer base will be noticeably smaller, that is more than fair given the range of functions. On the positive side, NDepend is under constant development, and yet you never get the impression that the product only gets finished at the customer's site. The obligatory subscription model is therefore justified. Compared to the older versions I used, I find the integration within Visual Studio considerably more extensive and more polished, for example.

Conclusion: The only thing I missed was combining the analysis with a version control system such as SVN or Git. That is a pity and certainly still holds great potential for the future. My personal highlight of NDepend clearly remains the large set of predefined rules. They are cleanly defined, well grouped, flexibly adjustable and, above all, well linked to the matching help topics.

Holger Schwichtenberg: PowerShell 7: Null conditional operators ?. and ?[]

PowerShell 7.0 includes, as an experimental feature, the null conditional operator ?. for single objects and ?[] for collections.
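
A small sketch of the syntax: on 7.0 the experimental feature has to be enabled first, and the variable needs braces because ? is a legal character in variable names; the Get-Service call is just an example.

# One-time opt-in on PowerShell 7.0, then restart the shell
Enable-ExperimentalFeature -Name PSNullConditionalOperators

$svc = Get-Service -Name 'Spooler' -ErrorAction SilentlyContinue
${svc}?.Status    # member access only happens if $svc is not $null

$list = $null
${list}?[0]       # index access only happens if $list is not $null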

Stefan Henneken: 10-year Anniversary

Exactly 10 years ago today, I published the first article here on my blog. The idea was born in 2010 during a customer training in Switzerland. The announced extensions of IEC 61131-3 were lively discussed at the dinner. I had promised the participants that evening to show a small example on this topic at the end of the training. At that time, Edition 3 of IEC 61131-3 had not yet been released, but CODESYS had its first beta versions, so that the participants could familiarize themselves with the language extensions. So later in the hotel room I started to keep my promise and prepared a small example.

Pleased about the interest in the new features of IEC 61131-3, I later sat at the gate at the airport and was able to think a little about the last few days. I asked myself again and again whether and how I could pass on the example to everyone else who is interested. Since I was following certain blogs regularly at that time, and I still do, the idea came up to run a blog myself.

At the same time, Microsoft offered an appropriate platform to run your own blog without having to deal with the technical details yourself. With the Live Writer, Microsoft also provided a free editor with which texts could be created very easily and loaded directly onto the weblog publication system. At the time, I wanted to save myself the effort of administering the blogging software on a web host. I preferred to invest the time in the content of the articles.

After a few considerations and a number of discussions, I published ‘test articles’ on C# and .NET. After these exercises and the experiences from the training, I created and published the first articles on IEC 61131-3. I also noticed that writing the articles deepened and consolidated my knowledge of the respective topic. In addition to IEC 61131-3, I also wanted to deal with topics related to .NET, and therefore I started a series on MEF and the TPL. But I also realized that I had to set priorities.

In the meantime Microsoft stopped its blog service, but offered a migration to WordPress. There is also the possibility to host the blog for free. The statistics functions are very helpful: they provide information about the number of hits for each article and also list the countries from which the articles are retrieved. Fortunately, I saw the number of hits increase each year:

In 2014, I also made a decision to publish the articles not only in German but also in English. So in the last 10 years, about 70 posts have been published, 20 of which are in English. Most of the hits still come from the German-speaking countries. Here are the top 5 from 2019:

Germany: 44.7 %
Switzerland: 6.5 %
United States: 6.3 %
Netherlands: 4.3 %
Austria: 4.1 %

Asian countries and India are hardly represented so far. Either the access to WordPress is limited or the search engines there rate my site differently.

After all these years, I decided to switch to a paid plan at WordPress. One reason is the free choice of my own URL: instead of https://wordpress.StefanHenneken.com my blog is now accessible via https://StefanHenneken.net. Furthermore, the advertising, which I didn’t always find suitable and had no influence on, is now turned off. On this occasion, I also slightly changed the design of the site.

I will continue to publish posts on IEC 61131-3 in German and English. In the medium term, however, new topics may be included.

At this point, I would like to thank all readers. I am always glad about a comment or if my page is recommended via LinkedIn, Xing, or whatever other means. My thanks also go to the people who have helped with the creation of the texts through comments, suggestions for improvement or proofreading.
