Daniel Schädler: My First Steps with Kubernetes / Minikube

Goal

The goal is to install Minikube on my Windows machine in order to familiarize myself with the world of containers and Kubernetes. This is going to become a series of articles based on the following tutorials on Kubernetes.io.

Prerequisites

First, the following prerequisites must be met so that Minikube can be installed:

  • kubectl must be installed

I did this with PowerShell, which has to be run as administrator for this. The installation can then be initiated with the following commands:

Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force
install-kubectl.ps1 -DownloadLocation "C:\Program Files\kubectl"

It may be that the NuGet provider has to be updated, as shown in the image below. Confirm this with "Y" and run the command again as described above.

Installing Minikube with PowerShell

I myself had problems with the BITS file transfer preventing the installation. To make progress anyway, I downloaded the binary manually, copied it into the "C:\Program Files\kubectl" folder and specified that path.

The version was displayed to me with the following command:

kubectl version --client
Kubectl Client version

Now all prerequisites for the Minikube installation, as described [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/), have been completed.

Procedure

For the installation of Minikube, I opted for the Windows installer, which then has to be run as administrator.

Minikube installation as administrator

Minikube Setup

Once the executable has been started as administrator, the installation steps are walked through:

  1. First, select the language "English".
Setup: select language
  2. Then the license agreement has to be confirmed.

  3. Then choose the installation location. I left it at the default value, as shown below.

Installation target for Minikube

Minikube installation confirmation

My setup is Windows 10 with Hyper-V, so Minikube has to be started accordingly. A list of the available drivers can be found here.

For this purpose, PowerShell must again be run with administrator rights and the following command executed:

minikube start --driver=hyperv

If everything has been entered correctly, Minikube starts and sets up its environment for use with Hyper-V, as shown in the image below.

Initializing Minikube with Hyper-V
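To verify afterwards that the cluster is really up, a quick sanity check can be run in the same PowerShell session:

minikube status
kubectl get nodes

Both commands should report a running host and a single node in the Ready state.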

Conclusion

This way I can have a look at Kubernetes locally in combination with Hyper-V and familiarize myself with it, without incurring big costs on my private Azure portal. In further articles I will work my way through the tutorials mentioned here and later build an environment that roughly resembles most real-world deployments.

Golo Roden: Introduction to React, Part 5: React State

Applications do not just display data statically, they also change it from time to time, not least due to user input. How do you deal with dynamic data in React?

Christian Dennig [MS]: Release to Kubernetes like a Pro with Flagger

Introduction

When it comes to running applications on Kubernetes in production, you will sooner or later face the challenge of updating your services with a minimum amount of downtime for your users and – at least as important – of being able to release new versions of your application with confidence. That means you discover unhealthy and "faulty" services very quickly and are able to roll back to previous versions without much effort.

When you search the internet for best practices or Kubernetes addons that help with these challenges, you will stumble upon Flagger from Weaveworks, as I did.

Flagger is basically a controller that will be installed in your Kubernetes cluster. It helps you with canary and A/B releases of your services by handling all the hard stuff like automatically adding services and deployments for your “canaries”, shifting load over time to these and rolling back deployments in case of errors.

As if that wasn’t good enough, Flagger also works in combination with popular Service Meshes like Istio and Linkerd. If you don’t want to use Flagger with such a product, you can also use it on “plain” Kubernetes, e.g. in combination with an NGINX ingress controller. Many choices here…

I like linkerd very much, so I’ll choose that one in combination with Flagger to demonstrate a few of the possibilities you have when releasing new versions of your application/services.

Prerequisites

linkerd

I have already set up a plain Kubernetes cluster on Azure for this sample, so I'll start by adding linkerd to it (you can find a complete guide on how to install linkerd and the CLI at https://linkerd.io/2/getting-started/):

$ linkerd install | kubectl apply -f -

After the command has finished, let’s check if everything works as expected:

$ linkerd check && kubectl -n linkerd get deployments
...
...
control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
flagger                  1/1     1            1           3h12m
linkerd-controller       1/1     1            1           3h14m
linkerd-destination      1/1     1            1           3h14m
linkerd-grafana          1/1     1            1           3h14m
linkerd-identity         1/1     1            1           3h14m
linkerd-prometheus       1/1     1            1           3h14m
linkerd-proxy-injector   1/1     1            1           3h14m
linkerd-sp-validator     1/1     1            1           3h14m
linkerd-tap              1/1     1            1           3h14m
linkerd-web              1/1     1            1           3h14m

If you want to open the linkerd dashboard to see the current state of your service mesh, execute:

$ linkerd dashboard

After a few seconds, the dashboard will be shown in your browser.

Microsoft Teams Integration

For alerting and notifications, we want to leverage the MS Teams integration of Flagger to get notified each time a new deployment is triggered or a canary release is "promoted" to be the primary release.

To do so, we need to set up a webhook in an MS Teams channel:

  1. In Teams, choose More options (…) next to the channel name you want to use and then choose Connectors.
  2. Scroll through the list of Connectors to Incoming Webhook, and choose Add.
  3. Enter a name for the webhook, upload an image and choose Create.
  4. Copy the webhook URL. You’ll need it when adding Flagger in the next section.
  5. Choose Done.
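If you want to verify the webhook before wiring it up, a quick curl test works (the payload is a minimal example; use the URL you just copied):

$ curl -X POST -H 'Content-Type: application/json' \
    -d '{"text": "Hello from the Flagger setup"}' \
    <YOUR_TEAMS_WEBHOOK_URL>

A short test message should appear in the channel.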

Install Flagger

Time to add Flagger to your cluster. For that, we will be using Helm (version 3, so no need for a Tiller deployment upfront).

$ helm repo add flagger https://flagger.app

$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

[...]

$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090 \
--set msteams.url=<YOUR_TEAMS_WEBHOOK_URL>

Check if everything has been installed correctly:

$ kubectl get pods -n linkerd -l app.kubernetes.io/instance=flagger

NAME                       READY   STATUS    RESTARTS   AGE
flagger-7df95884bc-tpc5b   1/1     Running   0          0h3m

Great, looks good. Now that Flagger has been installed, let's have a look at where it will help us and what kind of objects will be created for canary analysis and promotion. Remember that we use linkerd in this sample, so all objects and features discussed in the following section are only relevant for linkerd.

How Flagger works

The sample application we will be deploying shortly consists of a VueJS Single Page Application that is able to display quotes from the Star Wars movies – and it's able to request the quotes in a loop (to be able to put some load on the service). When requesting a quote, the web application talks to a service (proxy) within the Kubernetes cluster, which in turn talks to another service (quotesbackend) that is responsible for creating the quote (simulating service-to-service calls in the cluster). The SPA as well as the proxy are accessible through an NGINX ingress controller.

After the application has been deployed successfully, we also add a Canary object which takes care of the promotion of a new revision of our backend deployment. The Canary object will look like this:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: quotesbackend
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quotesbackend
  progressDeadlineSeconds: 60
  service:
    port: 3000
    targetPort: 3000
  analysis:
    interval: 20s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 70
    stepWeight: 10
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s

What this configuration basically does is watch for new revisions of the quotesbackend deployment. When that happens, it starts a canary deployment for it. Every 20s, it will increase the weight of the traffic split by 10% until it reaches 70%. If no errors occur during the promotion, the new revision will be scaled up to 100% and the old version will be scaled down to zero, making the canary the new primary. Flagger will monitor the request success rate and the request duration (linkerd Prometheus metrics). If one of them drops below the threshold set in the Canary object, a rollback to the old version is started and the new deployment is scaled back to zero pods.

To achieve all of the analysis mentioned above, Flagger will create several new objects for us:

  • backend-primary deployment
  • backend-primary service
  • backend-canary service
  • SMI / linkerd traffic split configuration
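During an analysis you can watch the generated TrafficSplit shifting weights. A sketch of what this could look like (the resource name follows the target deployment; the exact weight format is an assumption, output abbreviated):

$ kubectl get trafficsplit quotesbackend -n quotes -o yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: quotesbackend
spec:
  service: quotesbackend
  backends:
  - service: quotesbackend-primary
    weight: 90
  - service: quotesbackend-canary
    weight: 10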

The resulting architecture will look like this:

So, enough of theory, let’s see how Flagger works with the sample app mentioned above.

Sample App Deployment

If you want to follow the sample on your machine, you can find all the code snippets, deployment manifests etc. on https://github.com/cdennig/flagger-linkerd-canary

Git Repo

First, we will deploy the application in a basic version. This includes the backend and frontend components as well as an Ingress Controller which we can use to route traffic into the cluster (to the SPA app + backend services). We will be using the NGINX ingress controller for that.

To get started, let’s create a namespace for the application and deploy the ingress controller:

$ kubectl create ns quotes

# Enable linkerd integration with the namespace
$ kubectl annotate ns quotes linkerd.io/inject=enabled

# Deploy ingress controller
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ kubectl create ns ingress
$ helm install my-ingress ingress-nginx/ingress-nginx -n ingress

Please note that we annotate the quotes namespace so that the Linkerd sidecar is automatically injected at deployment time. Any pod created within this namespace will be part of the service mesh and controlled via Linkerd.
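A quick way to double-check that the annotation is in place (the command should print "enabled"):

$ kubectl get ns quotes -o jsonpath='{.metadata.annotations.linkerd\.io/inject}'
enabled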

As soon as the first part is finished, let's get the public IP of the ingress controller. We need this IP address to configure the endpoint the VueJS app calls, which is set in a file called settings.js of the frontend/Single Page Application pod. This file is referenced when the index.html page gets loaded. The file itself is not present in the Docker image – we mount it at deployment time from a Kubernetes secret to the appropriate location within the running container.

One more thing: To have a proper DNS name to call our service (instead of using the plain IP), I chose to use NIP.io. The service is dead simple: a DNS name like 10-0-0-1.nip.io simply resolves to the host with IP 10.0.0.1. Nothing to configure, no more editing of /etc/hosts…
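You can convince yourself with a single DNS lookup (assuming dig is available):

# nip.io encodes the IP address in the DNS name itself
$ dig +short 52-143-30-72.nip.io
52.143.30.72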

So first, let’s determine the IP address of the ingress controller…

# get the IP address of the ingress controller...

$ kubectl get svc -n ingress
NAME                                            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
my-ingress-ingress-nginx-controller             LoadBalancer   10.0.93.165   52.143.30.72   80:31347/TCP,443:31399/TCP   4d5h
my-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.157.46   <none>         443/TCP                      4d5h

Please open the file settings_template.js and adjust the endpoint property to point to the cluster (in this case, the IP address is 52.143.30.72, so the DNS name will be 52-143-30-72.nip.io).

Next, we need to add the corresponding Kubernetes secret for the settings file:

$ kubectl create secret generic uisettings --from-file=settings.js=./settings_template.js -n quotes

As mentioned above, this secret will be mounted to a special location in the running container. Here's the deployment manifest for the frontend – please note the volumes and volumeMounts sections:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: quotesfrontend
spec:
  selector:
      matchLabels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
  replicas: 1
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
    spec:
      containers:
      - name: quotesfrontend
        image: csaocpger/quotesfrontend:4
        volumeMounts:
          - mountPath: "/usr/share/nginx/html/settings"
            name: uisettings
            readOnly: true
      volumes:
      - name: uisettings
        secret:
          secretName: uisettings

Last but not least, we also need to adjust the ingress definition to work with the DNS hostname. Open the file ingress.yaml and adjust the hostnames for the two ingress definitions (in this case, to 52-143-30-72.nip.io).
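A rough sketch of what such an adjusted ingress could look like (structure, names and paths are assumptions for illustration – see ingress.yaml in the repository for the real manifest):

$ cat ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quotes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: 52-143-30-72.nip.io
    http:
      paths:
      - path: /quotes
        backend:
          serviceName: quotesproxy
          servicePort: 80
      - path: /
        backend:
          serviceName: quotesfrontend
          servicePort: 80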

Now we are set to deploy the whole application:

$ kubectl apply -f base-backend-infra.yaml -n quotes
$ kubectl apply -f base-backend-app.yaml -n quotes
$ kubectl apply -f base-frontend-app.yaml -n quotes
$ kubectl apply -f ingress.yaml -n quotes

After a few seconds, you should be able to point your browser to the hostname and see the “Quotes App”:

Basic Quotes app

If you click the "Load new Quote" button, the SPA calls the backend (here: http://52-143-30-72.nip.io/quotes), requests a new "Star Wars" quote and shows the result of the API call in the box at the bottom. You can also request quotes in a loop – we will need that later to simulate load.
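The same load can also be generated from the command line instead of the UI – a throwaway sketch using the hostname configured above:

# request a new quote roughly every 300ms, mirroring the "Load in Loop" button
$ while true; do curl -s http://52-143-30-72.nip.io/quotes > /dev/null; sleep 0.3; done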

Flagger Canary Settings

We need to configure Flagger and make it aware of our deployment – remember, we only target the backend API that serves the quotes.

To do this, we deploy the canary configuration (canary.yaml) discussed before:

$ kubectl apply -f canary.yaml -n quotes

Wait a few seconds, then check the services, deployments and pods to see if everything has been installed correctly:

$ kubectl get svc -n quotes

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quotesbackend           ClusterIP   10.0.64.206    <none>        3000/TCP   51m
quotesbackend-canary    ClusterIP   10.0.94.94     <none>        3000/TCP   70s
quotesbackend-primary   ClusterIP   10.0.219.233   <none>        3000/TCP   70s
quotesfrontend          ClusterIP   10.0.111.86    <none>        80/TCP     12m
quotesproxy             ClusterIP   10.0.57.46     <none>        80/TCP     51m

$ kubectl get po -n quotes
NAME                                     READY   STATUS    RESTARTS   AGE
quotesbackend-primary-7c6b58d7c9-l8sgc   2/2     Running   0          64s
quotesfrontend-858cd446f5-m6t97          2/2     Running   0          12m
quotesproxy-75fcc6b6c-6wmfr              2/2     Running   0          43m

$ kubectl get deploy -n quotes
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
quotesbackend           0/0     0            0           50m
quotesbackend-primary   1/1     1            1           64s
quotesfrontend          1/1     1            1           12m
quotesproxy             1/1     1            1           43m

That looks good! Flagger has created new services, deployments and pods that allow it to control how traffic is directed to existing/new versions of our "quotes" backend. You can also check the canary definition in Kubernetes, if you want:

$ kubectl describe canaries -n quotes

Name:         quotesbackend
Namespace:    quotes
Labels:       <none>
Annotations:  API Version:  flagger.app/v1beta1
Kind:         Canary
Metadata:
  Creation Timestamp:  2020-06-06T13:17:59Z
  Generation:          1
  Managed Fields:
    API Version:  flagger.app/v1beta1
[...]

You will also receive a notification in Teams that a new deployment for Flagger has been detected and initialized:

Kick-Off a new deployment

Now comes the part where Flagger really shines. We want to deploy a new version of the backend quote API – switching from "Star Wars" quotes to "Star Trek" quotes! What will happen is the following:

  • as soon as we deploy a new “quotesbackend”, Flagger will recognize it
  • new versions will be deployed, but no traffic will be directed to them at the beginning
  • after some time, Flagger will start to redirect traffic via Linkerd / TrafficSplit configurations to the new version via the canary service, starting – according to our canary definition – at a rate of 10%. So 90% of the traffic will still hit our “Star Wars” quotes
  • it will monitor the request success rate and advance the rate by 10% every 20 seconds
  • if a 70% traffic split is reached without a significant number of errors, the deployment will be scaled up to 100% and promoted as the "new primary"

Before we deploy it, let’s request new quotes in a loop (set the frequency e.g. to 300ms via the slider and press “Load in Loop”).

Base deployment: Load quotes in a loop.

Then, deploy the new version:

$ kubectl apply -f st-backend-app.yaml -n quotes

$ kubectl describe canaries quotesbackend -n quotes
[...]
[...]
Events:
  Type     Reason  Age                   From     Message
  ----     ------  ----                  ----     -------
  Warning  Synced  14m                   flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  14m                   flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  4m7s                  flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Starting canary analysis for quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Advance quotesbackend.quotes canary weight 10
  Warning  Synced  3m7s (x2 over 3m27s)  flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Normal   Synced  2m47s                 flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  2m27s                 flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  2m7s                  flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  107s                  flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  87s                   flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  67s                   flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  7s (x3 over 47s)      flagger  (combined from similar events): Promotion completed! Scaling down quotesbackend.quotes

You will notice in the UI that every now and then a quote from "Star Trek" appears… and that the frequency increases every 20 seconds, as the canary deployment receives more traffic over time. As stated above, when the traffic split reaches 70% and no errors occurred in the meantime, the "canary/new version" is promoted to be the "new primary version" of the quotes backend. From that moment on, you will only receive quotes from "Star Trek".

Canary deployment: new quotes backend servicing “Star Trek” quotes.
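If you prefer the terminal over the UI, the state and current weight of the canary can be watched there as well (the output line is illustrative):

$ kubectl get canary quotesbackend -n quotes --watch
NAME            STATUS        WEIGHT   LASTTRANSITIONTIME
quotesbackend   Progressing   30       2020-06-06T14:22:33Z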

Because of the Teams integration, we also get a notification about the new version being rolled out and – after the promotion to "primary" – that the rollout has finished successfully.

Starting a new version rollout with Flagger
Finished rollout with Flagger

What happens when errors occur?

So far, we have been following the "happy path"… but what happens if there are errors during the rollout of a new canary version? Let's say we have produced a bug in our new service that throws an error when a new quote is requested from the backend. Let's see how Flagger behaves then…

The version we will deploy now starts throwing errors after a certain amount of time. Because Flagger monitors the "request success rate" via Linkerd metrics, it will notice that something is "not the way it is supposed to be", stop the promotion of the new "error-prone" version, scale it back to zero pods and keep the current primary backend (meaning: "Star Trek" quotes) in place.

$ kubectl apply -f error-backend-app.yaml -n quotes

$ kubectl describe canaries.flagger.app quotesbackend
[...]
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  23m                    flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  23m                    flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  13m                    flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  3m43s (x4 over 9m43s)  flagger  (combined from similar events): New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m23s (x2 over 12m)    flagger  Advance quotesbackend.quotes canary weight 10
  Normal   Synced  3m23s (x2 over 12m)    flagger  Starting canary analysis for quotesbackend.quotes
  Warning  Synced  2m43s (x4 over 12m)    flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Warning  Synced  2m3s (x2 over 2m23s)   flagger  Halt quotesbackend.quotes advancement success rate 0.00% < 99%
  Warning  Synced  103s                   flagger  Halt quotesbackend.quotes advancement success rate 50.00% < 99%
  Warning  Synced  83s                    flagger  Rolling back quotesbackend.quotes failed checks threshold reached 5
  Warning  Synced  81s                    flagger  Canary failed! Scaling down quotesbackend.quotes

As you can see in the event log, the success rate drops significantly and Flagger halts the promotion of the new version, scales it down to zero pods and keeps the current version as the "primary" backend.

New backend version throwing errors
Teams notification: service rollout stopped!

Conclusion

With this article, I have certainly only covered the features of Flagger very briefly. But this small example shows what a great relief Flagger can be when it comes to the rollout of new Kubernetes deployments. Flagger can do a lot more than shown here, and it is definitely worth taking a look at this product from Weaveworks.

I hope I could give you some insights and made you want to explore more… have fun with Flagger 🙂

As mentioned above, all the sample files, manifests etc. can be found here: https://github.com/cdennig/flagger-linkerd-canary.

Jürgen Gutsch: Getting the .editorconfig working with MSBuild

In January I wrote a post about setting up VS2019 and VSCode to use the .editorconfig. In this post I'm going to write about how to get the .editorconfig settings checked during build time.

It works like it should work: In the editors. And it works in VS2019 at build-time. But it doesn't work at build time using MSBuild. This means it won't work with the .NET CLI, it won't work with VSCode and it won't work on any build server that uses MSBuild.

Actually this is a huge downside of the .editorconfig. Why should we use the .editorconfig to enforce the coding style if a build fails in VS2019 but doesn't fail in VSCode? Why should we use the .editorconfig if the build on a build server doesn't fail? Not all developers are using VS2019; sometimes VSCode is the better choice. And we don't want to install VS2019 on a build server and don't want to call vs.exe to build the sources.

The reason why it is like this is as simple as it is bad: the Roslyn analyzers that check the code against the .editorconfig are not yet done.

Actually, Microsoft is working on that and is porting the VS2019 coding style analyzers to Roslyn analyzers that can be downloaded and used via NuGet. Currently, about half of the work is done and some of the analyzers can already be used in a project. See here: #33558

With this post I'd like to try it out. We need this for our projects at the YOO, the company I work for, and I'm really curious how this is going to work in a real project.

Code Analyzers

To try it out, I'm going to use the Weather Stats App I created in previous posts. Feel free to clone it from GitHub and follow the steps I do within this post.

First you need to add a NuGet package:

Microsoft.CodeAnalysis.CSharp.CodeStyle

This is currently a development version hosted on MyGet, so you need to follow the installation instructions on MyGet. Currently it is the following .NET CLI command:

dotnet add package Microsoft.CodeAnalysis.CSharp.CodeStyle --version 3.8.0-1.20330.5 --source https://dotnet.myget.org/F/roslyn/api/v3/index.json

The version number might change in the future. Currently I use version 3.8.0-1.20330.5, which has been out since June 30th.

You need to execute this command for every project in your solution.

After executing this command you'll have the following new lines in the project files:

<PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeStyle" Version="3.8.0-1.20330.5">
    <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    <PrivateAssets>all</PrivateAssets>
</PackageReference>

If not, just copy these lines into the project file and run dotnet restore to actually load the package.

This should be enough to get it running.

Adding coding style errors

To try it out I need to add some coding style errors. I simply added a few.
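Whether a violated rule only warns or actually fails the build depends on the severity configured in the .editorconfig. A minimal excerpt (the rule and severity are chosen just for illustration):

$ cat .editorconfig
[*.cs]
# IDE0011: require braces around statement blocks; "error" fails the build
csharp_prefer_braces = true:error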

Roslyn conflicts

Maybe you will get a lot of warnings saying that an instance of the analyzers cannot be created because of a missing Microsoft.CodeAnalysis 3.6.0 assembly, like this:

Could not load file or assembly 'Microsoft.CodeAnalysis, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

This might seem strange because the code analysis assemblies should already be available when Roslyn is used. Actually, this error happens if you do a dotnet build while VSCode is running the Roslyn analyzers. Strange but reproducible. Maybe the Roslyn analyzers can only run once at a time.

To get it running without those warnings, you can simply close VSCode or wait for a few seconds.

Get it running

Actually it didn't work on my machine the first few times. The reason was that I forgot to update the global.json: I was still using a 3.0 runtime to run the analyzers, and that doesn't work.

After updating the global.json to a 5.0 runtime (preview 6 in my case) it failed as expected:

Since the migration of the IDE analyzers to Roslyn analyzers is only half done, not all of the errors will fail the build. This is why the IDE0003 rule doesn't appear here: I used the this keyword twice in the code above, which should also fail the build.

Conclusion

Actually I was wondering why Microsoft didn't start converting the VS2019 analyzers into Roslyn code analyzers earlier. This is really valuable for teams where developers use VSCode, VS2019, VS for Mac or any other tool to write .NET Core applications. It is not only about showing coding style errors in an editor; it should also fail the build in case coding style errors are checked in.

Anyway, it is working well. And hopefully Microsoft will complete the set of analyzers as soon as possible.

Golo Roden: A Journey of Discovery Through Your Own Programming Language

No matter which programming language you develop in, for almost every developer there is still something new to discover from time to time. But how can a systematic journey of discovery through your own programming language be reconciled with everyday work?

Stefan Lieser: Recording of the webinar "Softwareentwicklung ohne Abhängigkeiten" published

I have just published the recording of the webinar "Softwareentwicklung ohne Abhängigkeiten" (software development without dependencies) from June 2, 2020.

The post "Aufzeichnung zum Webinar 'Softwareentwicklung ohne Abhängigkeiten' veröffentlicht" appeared first on Refactoring Legacy Code.

Stefan Lieser: Slides for the webinar "Flow Design am Beispiel"

On June 30, 2020 I held the webinar "Flow Design am Beispiel" (flow design by example). Below you will find the slides as well as the links to the examples. The two examples shown in the webinar are available under the following links: CSV Viewer – https://github.com/slieser/flowdesignbuch/tree/master/csharp/csvviewer/csvviewer, MyStocks – https://github.com/slieser/flowdesignbuch/tree/master/csharp/mystocks

The post "Folien zum Webinar 'Flow Design am Beispiel'" appeared first on Refactoring Legacy Code.

Holger Schwichtenberg: Handling the VAT rates of 5 and 16 percent in the Elster advance VAT return

The tax authorities are now simply doing without the separation by tax rate. Developers of accounting solutions, however, have to be considerably more agile.

Code-Inside Blog: Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?

Within our product we move more and more stuff into .NET Core land. Last week we had a discussion about the required software prerequisites, and in the .NET Framework world this question was always easy to answer:

.NET Framework 4.5 or higher.

With .NET Core the answer is slightly different:

In theory, versions within the same major version are compatible, e.g. if you compiled your app against .NET Core 3.0 and a .NET Core 3.1 runtime is the only installed 3.x runtime on the machine, this runtime is used.

This system is called “Framework-dependent apps roll forward” and sounds good.
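The behavior can also be controlled explicitly, e.g. via the DOTNET_ROLL_FORWARD environment variable that .NET Core 3.0 introduced (myapp.dll is a placeholder for your own app):

# allow rolling forward to the latest installed minor version (e.g. 3.0 -> 3.1)
export DOTNET_ROLL_FORWARD=LatestMinor
dotnet myapp.dll

The same setting is available as the RollForward project property and as the rollForward field in runtimeconfig.json.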

The bad part

Unfortunately this didn’t work for us. Not sure why, but our app refuses to work because a .dll is not found or missing. The reason is currently not clear. Be aware that Microsoft has written a hint that such things might occure:

It’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.

The good part

With .NET Core we can ship the framework with our app, and then it should run fine wherever we deploy it.

Summary

Read the docs about the "app roll forward" approach if you have similar concerns, but test your app with that exact combination.

As a side note: 3.0 is not supported anymore, so it would be good to upgrade to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.

Hope this helps!

Jürgen Gutsch: Exploring Orchard Core - Part 1

I have been planning to try out the Orchard Core Application Framework for a while. Back then I saw an awesome video where Sébastien Ros showed an early version of Orchard Core. If I remember right, it was this ASP.NET Community Standup: ASP.NET Community Standup - November 27, 2018 - Sebastien Ros on Headless CMS with Orchard Core

Why a blog series

Actually this post wasn't planned to be a series, but as usual the posts are getting longer and longer. The more I write, the more comes to mind to write about. Bloggers know this, I guess. So I needed to decide whether I wanted to write one monster blog post or a series of smaller posts. Maybe the latter is easier to read and to write.

What is Orchard Core?

Orchard Core is an open-source modular and multi-tenant application framework built with ASP.NET Core, and a content management system (CMS) built on top of that application framework.

Orchard Core is not a new version of the Orchard CMS. It is a completely new thing, written in ASP.NET Core. The Orchard CMS was designed as a CMS, but Orchard Core was designed to be an application framework that can be used to build a CMS, a blog or whatever else you want to build. I really like the idea of having a framework like this.

I don't want to repeat the stuff that is already on the website. To learn more about it, just visit https://www.orchardcore.net/.

I had a look into the Orchard CMS back when I was evaluating a new blog. It was good, but I didn't really feel confident with it.

The RC2 has been out for a couple of days now, and version 1 should be released in September 2020. The roadmap already defines features for future releases.

Let's have a first glimpse

When I try a CMS or something like it, I follow the quick start guide. I want to start the application to get a first look and feel. As a .NET Core fan-boy, I decided to use the .NET CLI to run the application. But first I have to clone the repository, to have a more detailed look later on and to run the sample application:

git clone https://github.com/OrchardCMS/OrchardCore.git

This clones the current RC2 into a local repository.

Then we need to cd into the repository and into the web sample:

cd OrchardCore\
cd src\OrchardCore.Cms.Web\

Since this is an ASP.NET Core application, it should be possible to run the dotnet run command:

dotnet run

As usual in ASP.NET Core, I get two URLs to call the app: the HTTP version on port 5000 and the HTTPS version on port 5001.

I now should be able to open the CMS in the browser. Et voilà:

Since every CMS has an admin area, I tried /admin for sure.

At the first start it asks you to set initial credentials and the like. I had already done this before, so at every subsequent start I just see the log-in screen:

After the log-in I feel warmly welcomed... kinda :-D

Actually this screenshot is a little small because it hides the administration menu, which is the last item in the menu. You should definitely have a look at the /admin/features page, which has a ton of features to enable – stuff like a GraphQL API, Lucene search indexing, Markdown editing, templating, authentication providers and a lot more.

But I won't go through all the menu items here; you can have a look for yourself. I actually want to explore the application framework.

I want to see some code

This is why I stopped the application and opened it in VS Code – and this is where the fascinating stuff is.

Ok, this is where I thought the fascinating stuff would be. There is almost nothing: a ton of language files, an almost empty wwwroot folder, some configuration files and the common files like the *.csproj, the Startup.cs and the Program.cs. Except for the localization part, it looks completely like an empty ASP.NET Core project.

Where is all the Orchard stuff? I expected a lot more to see.

The Program.cs looks pretty common, except for the usage of NLog, which is provided via the OrchardCore.Logging package:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using OrchardCore.Logging;

namespace OrchardCore.Cms.Web
{
    public class Program
    {
        public static Task Main(string[] args)
            => BuildHost(args).RunAsync();

        public static IHost BuildHost(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureLogging(logging => logging.ClearProviders())
                .ConfigureWebHostDefaults(webBuilder => webBuilder
                    .UseStartup<Startup>()
                    .UseNLogWeb())
                .Build();
    }
}

This clears the default logging providers and adds the NLog web logger. It also uses the common Startup class, which is really clean and doesn't need a lot of configuration.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace OrchardCore.Cms.Web
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddOrchardCms();
        }

        public void Configure(IApplicationBuilder app, IHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseStaticFiles();

            app.UseOrchardCore();
        }
    }
}

It only adds the services for the Orchard CMS in the ConfigureServices method and uses the Orchard Core stuff in the Configure method.

Actually, this Startup configures Orchard Core as a CMS. It seems I would also be able to add Orchard Core to the ServiceCollection by using AddOrchardCore(). I guess this would just add the core functionality to the application. Let's see if I'm right.

Both the AddOrchardCms() and the AddOrchardCore() methods are overloaded and can be configured using an OrchardCoreBuilder. Using these overloads you can add Orchard Core features to your application. I guess the method AddOrchardCms() has a set of features preconfigured to behave like a CMS.

It is a lot of guessing and trying right now, but I haven't read any documentation yet. I just want to play around.

I also wanted to see what is possible with the UseOrchardCore() method, but this one just has one optional parameter to add an action that receives the IApplicationBuilder. I'm not sure why this action is really needed. I mean, I would be able to configure ASP.NET Core features inside this action. I could also nest a lot of UseOrchardCore() calls. But why?

I think it is time to have a look into the docs at https://docs.orchardcore.net/en/dev/. Don't confuse them with the docs on https://docs.orchardcore.net/ – those are the Orchard CMS docs, which might be outdated now.

The docs are pretty clear. Orchard Core comes in two different targets: The Orchard Core Framework and the Orchard Core CMS. The sample I opened here is the Orchard Core CMS sample. To learn how the Framework works, I need to clone the Orchard Core Samples repository: https://github.com/OrchardCMS/OrchardCore.Samples

I will write about this in the next part of this series.

Not a conclusion yet

I will continue exploring the Orchard Core Framework within the next days and continue writing about it in parallel. The stuff I have seen so far is really promising, and I like the fact that it simply works without a lot of configuration. Exploring the new CMS would be another topic and really interesting as well. Maybe I will find some time for that in the future.

Daniel Schädler: Documenting Without Microsoft Word

Prerequisites

I work as a systems engineer for the Swiss federal administration, and we face the daily challenge of being obliged to produce artifacts, for example for the handover to operations according to HERMES. As an example for this blog post, I took the operations manual and its template, which is to be written in Markdown here.

The structure of the document is listed below:

  • System overview
  • Start of operations
    • Prerequisites for the start of operations
      • Procedure for the start of operations
      • Quality assurance after the start of operations
      • Requirements for the acceptance of the system
  • Execution and monitoring of operations
    • Operations monitoring
    • Data backup
    • Data protection checks
    • Statistics, key figures, metrics
    • Procedure in case of errors
    • References to operational processes
  • Interruption or termination of operations
    • Stopping the system
    • Procedure for recommissioning
    • Quality assurance after recommissioning
    • Dismantling of the system, archiving, handover
  • Support organization
    • Support processes
    • Organization with roles
  • Change management
    • Change management process
      • Change management with roles and contact information
  • Security regulations

For this I use the following Visual Studio Code extensions:

A very helpful resource for using the Authoring Pack can be found here.

Procedure

First I create the necessary structure of the operations document on the file system. For me, it looks like this:

  • System overview
  • Start of operations
  • Execution of operations
  • Interruption or termination of operations
  • Support organization
  • Change management
  • Security regulations

As an example, the folder structure for the system overview:

This folder contains all artifacts relevant to this chapter. Unfortunately, it is currently not possible to reference SVG graphics with the Microsoft Authoring Pack.

However, Markdown files can be merged with the INCLUDE directive, so that in the end a complete operations manual can be produced. This looks as follows:

[!INCLUDE [Systemoverview](Link zum Kapitel)]
[!INCLUDE [Betriebsaufnahme](Link zum Kapitel)]
[!INCLUDE [Durchführung des Betriebes](Link zum Kapitel)]
[!INCLUDE [Unterbrechung des Betriebes](Link zum Kapitel)]

Conclusion

The Microsoft Authoring Pack helps a lot with documenting; it would just be desirable if graphics were supported not only as jpg/png but also as SVG, which yUML generates automatically. I am grateful for helpful feedback.

Golo Roden: Introduction to React, Part 4: Webpack

Anyone developing applications with React will sooner or later need a bundler such as webpack. But how do you connect webpack with React?

Holger Schwichtenberg: PowerShell 7: Null conditional operators ?. and ?[]

PowerShell 7.0 includes, as an experimental feature, the null conditional operator ?. for single objects and ?[] for collections.

Stefan Henneken: 10-year Anniversary

Exactly 10 years ago today, I published the first article here on my blog. The idea was born in 2010 during a customer training in Switzerland. The announced extensions of IEC 61131-3 were the subject of lively discussion at dinner. That evening I had promised the participants to show a small example on this topic at the end of the training. At that time, Edition 3 of IEC 61131-3 had not yet been released, but CODESYS had its first beta versions, so the participants could familiarize themselves with the language extensions. So later, in the hotel room, I started to keep my promise and prepared a small example.

Pleased about the interest in the new features of IEC 61131-3, I later sat at the gate in the airport and could think a little about the previous days. I asked myself again and again whether and how I could pass the example on to all the others who are interested. Since I was following certain blogs regularly at that time, and I still do, the idea came up to run a blog as well.

At the same time, Microsoft offered an appropriate platform to run your own blog without having to deal with the technical details yourself. With the Live Writer, Microsoft also provided a free editor with which texts could be created very easily and uploaded directly to the weblog publication system. At the time, I wanted to save myself the effort of administering blogging software on a web host; I preferred to invest the time in the content of the articles.

After a few considerations and a number of discussions, I published 'test articles' on C# and .NET. After these exercises and the experiences from the training, I created and published the first articles on IEC 61131-3. I also noticed that by writing the articles, my knowledge of the respective topic was deepened and consolidated. In addition to IEC 61131-3, I also wanted to deal with topics related to .NET, and therefore I started a series on MEF and the TPL. But I also realized that I had to set priorities.

In the meantime, Microsoft discontinued its blog service but offered a migration to WordPress. There is also the possibility to host the blog for free. The statistics functions are very helpful: they provide information about the number of hits of each article and also list the countries from which the articles are retrieved. Fortunately, I saw the number of hits increase each year:

In 2014, I also made the decision to publish the articles not only in German but also in English. So in the last 10 years, about 70 posts have been published, 20 of which are in English. Most of the hits still come from the German-speaking countries. Here are the top 5 from 2019:

Germany – 44.7 %
Switzerland – 6.5 %
United States – 6.3 %
Netherlands – 4.3 %
Austria – 4.1 %

Asian countries and India are hardly represented so far. Either access to WordPress is limited there, or the local search engines rate my site differently.

After all these years, I decided to switch to a paid plan at WordPress. One reason is the free choice of a custom URL: instead of https://wordpress.StefanHenneken.com, my blog is now accessible via https://StefanHenneken.net. Furthermore, the advertising is turned off, which I didn't always find suitable and on which I had no influence at all. I try to compensate for the costs via affiliate marketing. This means that the book recommendations contain a link to Amazon; if a book is purchased via Amazon, I receive a few euro cents as commission. On this occasion, I also slightly changed the design of the site.

I will continue to publish posts on IEC 61131-3 in German and English. In the medium term, however, new topics may be included.

At this point, I would like to thank all readers. I am always glad to get a comment or to see my page recommended via LinkedIn, Xing, or any other channel. My thanks also go to the people who have helped with the creation of the texts through comments, suggestions for improvement or proofreading.


Johnny Graber: Visualizing Dependencies in Code with NDepend 2020

NDepend is a practical tool for static code analysis. The big new feature in the 2020 version is the completely reworked dependency graph. I have already written about NDepend several times (here, here and here) and no longer want to do without it. The old dependency graph: Until now, the graph at the top level could only be influenced to a limited extent. You could choose … Continue reading Abhängigkeiten im Code aufzeigen mit NDepend 2020

Holger Schwichtenberg: PowerShell 7: Null coalescing assignment operator ??=

Another way of handling the $null case has been added in PowerShell 7.0, in the form of the "null coalescing assignment" operator ??=.

Golo Roden: Discounted remote workshops on DDD, CQRS, TypeScript and cryptography

Continuing education and qualification are perhaps more important right now than ever before. Even though some people can return to the office thanks to eased restrictions, many are still working remotely from home. For them, the native web offers selected workshops at a discounted price.

Daniel Schädler: Documenting with Markdown and Visual Studio Code

Starting Point

Various tools lend themselves to writing code. One of them is Visual Studio Code, which can be extended enormously with its wide variety of plug-ins. Many code documentations are based on Markdown.

Goal

The goal is to create simple documentation, including graphics, with Visual Studio Code and then have it generated. The following case is to be covered:

  • Display a class diagram as an image in a Markdown file.

Of course, further diagram types are possible. The plug-in author provides help (https://marketplace.visualstudio.com/_apis/public/gallery/publishers/ms-vscode-remote/vsextensions/remote-ssh/0.51.0/vspackage) where many more examples can be found.

Preparations

For my installation I used the following components:

With that, we can start documenting.

Execution

First, a yuml file is created.

Creating the yuml file – the created yuml file

It can be seen that a preview is already active on the right side. If you start typing the intended tags in the file, autocompletion for "class" is offered, which can be confirmed with Tab.

Autofill

Afterwards, the "stub code" is already generated.

Stub code
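A minimal yuml source for a class diagram could look like this (the class names are made up; the directives follow the plug-in's samples, and cat is aliased to Get-Content in PowerShell):

$ cat class-diagram.yuml
// {type:class}
// {generate:true}
[Customer|+name;+email]->[Order]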

The already generated image is also worth seeing and can then be embedded in the readme using Markdown syntax.

SVG reference in the Markdown

Markdown reference to the SVG image

Generation of the SVG image is forced with the option

{generate:true}

It can then be embedded as shown below.

SVG reference in the Markdown

Markdown reference to the SVG image

The preview, even if not competitive with Word, is also quite presentable.

Doc Preview

Document preview

To start an export, press CTRL+P and then enter "Export".

Conclusion

With simple means, documentation can be created and exported that is quite presentable. Constructive criticism and suggestions are gladly received.

Golo Roden: Introduction to React, Part 3: React Components

One of the most important aspects of React is its component orientation, because it enables the reusability of UI elements. But how do you write React components?

Holger Schwichtenberg: PowerShell 7: Null coalescing operator ??

The new PowerShell operator ?? returns the value of the preceding expression if it is not $null.

Jürgen Gutsch: [Off-Topic] 2020 will be a year full of challenges

This post really is off-topic.

It seemed that 2020 was starting to be a good year. Life was kind of normal and I was looking forward to the upcoming events – community events as well as family events. I was really looking forward to the MVP Summit 2020, to meeting good friends again and to visiting my favorite city in the US.

But life wouldn't be life if there were no changes and challenges.

February 1st was the end of my old life but the start of a new one. A challenging one and a life full of new chances and opportunities.

What did change?

My wife and I broke up – without any drama or anything like that. It was a kind of spontaneous decision, but a needed one. It was unexpected for our family and friends, but we had kind of known about it for the last three years. It was a shock for the kids for sure but, as I said, there was never any drama, and we are still friends and still talk and laugh together. The shock for the kids was a small and short one. Actually, nothing really changed for them except living in two houses, which for sure is a huge change, but they seem to love it and enjoy it like a kind of adventure. Each house has different things they like and different rules, and it seems they love living in both houses.

This might also be shocking for friends who are reading this right now.

Leaving the wife who was at my side for around 16 years felt strange and odd. But in the end it was a good decision for both of us.

This for sure results in new challenging situations, like moving into a new apartment and stuff like this.

What else did change?

The second challenging situation is the one that happens to all of us, all over the world. The COVID-19 lockdowns are challenging for everyone, especially the kids and their parents. Fortunately, I am able to work from home and, fortunately, child care is divided 50/50 between my wife and me. (She is still called my wife because we are still married.) But working, home-schooling and doing child care in parallel is different, challenging, and might be really hard or almost impossible for some parents.

Since COVID-19 arrived in central Europe, I had been working from home. Actually, today is the first day in weeks that I am commuting to work. It feels strange sitting in the train for more than one hour and wearing a face mask. The train is almost empty. Only a few people are talking, because of those annoying masks.

And, for the first time in months, I have some time to write a blog post. I used to write while commuting, so I took the chance to write some stuff.

Actually, I started this post two weeks ago. Now it is only my second commute to work since COVID-19 :-D

What comes next?

The new situation and the reduced commute time are the reasons why I haven't written a blog post or anything else since January.

The move to the new apartment is done. The kids have known about the new situation for more than a month and are getting used to it. Everything is settling and calming down, and the COVID-19 numbers are getting better and better in central Europe. Let's have a small look into the future:

I'm going to start challenging myself again to do some more stuff for the developer community:

  • Writing a blog post at least every second week
    • Just ask for topics.
  • Rewriting my book to update it to ASP.NET Core 5.0 and really, really, really get it published
  • Writing some more technical articles for my favorite magazine.
  • Trying to do some talks at conferences and user groups
    • The next planned talk is at the DWX in November this year
  • Getting my streaming setup up and running in the new apartment and starting to stream again
    • The bandwidth could be a problem in the new apartment.
    • But just recording and feeding a YouTube channel could be an option, too.

Actually, this is kind of challenging but why not? I really love challenges :-)

Christian Dennig [MS]: WSL2: Making Windows 10 the perfect dev machine!

Disclaimer: I work at Microsoft. And you might think that this makes me a bit biased about the current topic. However, I was an enthusiastic macOS / MacBook user – both privately and professionally. I work as a so-called “Cloud Solution Architect” in the area of Open Source / Cloud Native Application Development – i.e. everything concerning container technologies, Kubernetes etc. This means that almost every tool you have to deal with is Unix-based – or at least, it only works perfectly on that platform. That’s why I moved to the Apple ecosystem early on, because it makes your (dev) life so much easier – although you get some not-so-serious comments on it at work every now and then 🙂

Well, things have changed…

Introduction

In this article I would like to describe how my current setup of tools / the environment looks like on my Windows 10 machine and how to setup the latest version of the Windows Subsystem for Linux 2 (WSL2) optimally – at least for me – when working in the “Cloud Native” domain.

Long story short…let’s start!

Basics

Windows Subsystem for Linux 2 (WSL2)

The whole story begins with the installation of WSL2, which is now available with the current version of Windows (Windows 10, version 2004, build 19041 or higher). The Linux subsystem has been around for quite a while now, but it was never really usable – at least that was the case for version 1 (in terms of performance, compatibility etc.).

The bottom line is that WSL2 gives you the ability to run ELF64 Linux binaries on Windows – with 100% system call compatibility and “near-native” performance! The Linux kernel (optimized in size and performance for WSL2) is built by Microsoft from the latest stable branch based on the sources available on “kernel.org”. Updates of the kernel are provided via Windows Update.

I won’t go into the details of the installation process as you can simply get WSL2 by following this tutorial: https://docs.microsoft.com/en-us/windows/wsl/install-win10

It comes down to:

  • installing the Subsystem for Linux
  • enabling “Virtual Machine Platform”
  • setting WSL2 as the default version
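For reference, and assuming an elevated PowerShell, the steps from the linked tutorial roughly correspond to these commands (a reboot is required before setting the default version):

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --set-default-version 2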

Next step is to install the distribution of your choice…

Install Ubuntu 20.04

I decided to use Ubuntu 20.04 LTS, because I already know the distribution well and have used it for private purposes for some time – there are, of course, others: Debian, openSUSE, Kali Linux etc. No matter which one you choose, the installation itself couldn’t be easier: all you have to do is open the Windows Store app, find the desired distribution and click “Install” (or simply click on this link for Ubuntu: https://www.microsoft.com/store/apps/9n6svws3rx71).

Windows Store Ubuntu 20.04 LTS
Ubuntu Installation

Once it is installed, you have to check whether “version 2” of the Windows Subsystem for Linux is used (we have set “version 2” as the default, but just in case…). To do so, open a PowerShell prompt and execute the following commands:

C:\> wsl --list --verbose
  NAME                   STATE           VERSION
* Ubuntu-20.04           Running         2
  docker-desktop-data    Running         2
  docker-desktop         Running         2
  Ubuntu-18.04           Stopped         2

If you see “Version 1” for Ubuntu-20.04, please run…

C:\> wsl --set-version Ubuntu-20.04 2

This will convert the distribution to be able to run in WSL2 mode (grab yourself a coffee, the conversion takes some time 😉 ).

Windows Terminal

Next, you need a modern, feature-rich and lightweight terminal. Fortunately, Microsoft also delivered on this: the open-source Windows Terminal. It includes many of the features most frequently requested by the Windows command-line community, including support for tabs, rich text, globalization, configurability, theming & styling etc.

The installation is also done via the Windows Store: https://www.microsoft.com/store/productId/9N0DX20HK701

Once it’s on your machine, we can tweak the settings of the terminal to use Ubuntu 20.04 as the default profile. To do so, open Windows Terminal and hit “Ctrl+,” (that opens the settings.json file in your default text editor).

Add the guid of the Ubuntu 20.04 profile to the “defaultProfile” property:

Default Profile
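The relevant part of settings.json then looks roughly like the following sketch — note that the GUID shown here is just a placeholder; use the guid from the Ubuntu 20.04 entry in your own “profiles” list:

{
    // placeholder GUID - copy the value from your Ubuntu 20.04 profile entry
    "defaultProfile": "{00000000-0000-0000-0000-000000000000}",
    "profiles": {
        "list": [
            {
                "guid": "{00000000-0000-0000-0000-000000000000}",
                "name": "Ubuntu-20.04",
                "source": "Windows.Terminal.Wsl"
            }
        ]
    }
}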

Last but not least, we refresh the package lists and upgrade all existing packages to be up to date.

$ sudo apt update && sudo apt upgrade

So, the “basics” are in place now…we have a terminal that’s running Ubuntu Linux in Windows. Next, let’s give it super-powers!

Setup / tweak the shell

The software that is now being installed is an extract of what I need for my daily work. Of course, the selection differs from what you might want (although I think it covers a lot of what someone in the “Cloud Native” space would install). Nevertheless, it was important for me to list almost everything here, because it basically also helps me if I have to set up the environment again in the future 🙂

SSH Keys

Since it’s in the nature of a developer to work with GitHub (and other services, of course :)), I first need an SSH key to authenticate against the service. To do this, I create a new key (or copy an existing one to ~/.ssh/), which I then publish to GitHub (via their website).

At the same time, the key is added to the ssh-agent, so you don’t have to enter the corresponding passphrase every time you use it.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
# start the ssh-agent in the background
$ eval $(ssh-agent -s)
> Agent pid 59566
$ ssh-add ~/.ssh/id_rsa
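Once the public key has been added to your GitHub account, you can quickly verify that the authentication works:

$ ssh -T git@github.com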

Oh My Zsh

Now comes the best part 🙂 To give the Ubuntu shell (which is bash by default) real superpowers, I exchange it for zsh in combination with the awesome project Oh My Zsh (which provides hundreds of plugins, customization options, tweaks etc. for it). zsh is an extended Bourne shell with many improvements and extensions compared to bash. Among other things, the shell can be themed, the command prompt adjusted, auto-completion used etc.

So, let’s install both:

$ sudo apt install git zsh -y
# After the installation has finished, add OhMyZsh...
$ sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

When ready, Oh My Zsh can be customized via the .zshrc file in your home directory (e.g. enable plugins, set the theme). Here are the settings I usually make:

  • Adjust Theme
  • Activate plugins

Let’s do this step by step…

Theme

As my theme, I use powerlevel10k (great stuff!), which you can find here.

Sample: powerlevel10k (Source: https://github.com/romkatv/powerlevel10k)

The installation is very easy: first clone the repo to your local machine, then activate the theme in ~/.zshrc (variable ZSH_THEME, see screenshot below):

$ git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/powerlevel10k

Adjust theme to use
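The corresponding line in ~/.zshrc then looks like this:

ZSH_THEME="powerlevel10k/powerlevel10k"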

The next time you open a new shell, a wizard guides you through all options of the theme and allows you to customize the look&feel of your terminal (if the wizard does not start automatically or you want to restart it, simply run p10k configure from the command prompt).

The wizard offers a lot of options. Just find the right setup for you, play around with it a bit and try a few things out. My setup finally looks like this:

My powerlevel10k setup

Optional, but recommended…install the corresponding fonts (and adjust the settings.json of Windows Terminal to use these, see image below): https://github.com/romkatv/powerlevel10k#meslo-nerd-font-patched-for-powerlevel10k

Windows Terminal settings

Plugins

In terms of OhMyZsh plugins, I use the following ones:

  • git (git shortcuts, e.g. “gp” for “git pull“, “gc” for “git commit -v“)
  • zsh-autosuggestions / zsh-completions (command completion / suggestions)
  • kubectl (kubectl shortcuts / completion, e.g. “kaf” for “kubectl apply -f“, “kgp” for “kubectl get pods“, “kgd” for “kubectl get deployment” etc.)
  • ssh-agent (starts the ssh agent automatically on startup)

You can simply add them by modifying .zshrc in your home directory:

Activate oh-my-zsh plugins in .zshrc
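As a sketch: zsh-autosuggestions and zsh-completions are third-party plugins and have to be cloned into the custom plugins folder first; afterwards, all plugins are listed in the plugins array of .zshrc:

$ git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
$ git clone https://github.com/zsh-users/zsh-completions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-completions

# in ~/.zshrc
plugins=(git zsh-autosuggestions zsh-completions kubectl ssh-agent)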

Additional Tools

Now comes the setup of the tools that I need and use every day. I will not go into detail about all of them here, because most of them are well known or the installation is incredibly easy. The ones that don’t need much explanation are:

Give me more…!

There are a few tools that I would like to discuss in more detail, as they are not necessarily widely used and known. These are mainly tools used when working with Kubernetes/Docker. This is exactly the area where kubectx/kubens and stern come in. Docker for Windows and Visual Studio Code are certainly well known to everyone and familiar from daily work. The reason why I want to talk about the latter two is that they now integrate tightly with WSL2!

kubectx / kubens

Who doesn’t know it? You work with Kubernetes and have to switch between clusters and/or namespaces all the time…forgetting the appropriate commands to set the context correctly and typing yourself “to death”. This is where the tools kubectx and kubens come in and help you to switch between different clusters and namespaces quickly and easily. I never want to work with a system again where these tools are not installed – honestly. To see kubectx/kubens in action, here are the samples from their GitHub repo:

kubectx in action
kubens in action

To install both tools, follow these steps:

$ sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
$ sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
$ sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
$ mkdir -p ~/.oh-my-zsh/completions
$ chmod -R 755 ~/.oh-my-zsh/completions
$ ln -s /opt/kubectx/completion/kubectx.zsh ~/.oh-my-zsh/completions/_kubectx.zsh
$ ln -s /opt/kubectx/completion/kubens.zsh ~/.oh-my-zsh/completions/_kubens.zsh

To be able to use the autocompletion features of these tools, you need to add the following line at the end of your .zshrc:

autoload -U compinit && compinit

Congrats, productivity gain: 100% 🙂

stern

stern allows you to output the logs of multiple pods simultaneously to the local command line. In Kubernetes, it is normal to have many services running at the same time that communicate with each other. It is sometimes difficult to follow a call through the cluster. With stern, this becomes relatively easy, because you can select the pods whose logs you want to follow by e.g. label selectors.

With the command stern -l application=scmcontacts, for example, you can stream the logs of all pods with the label application=scmcontacts to your local shell… which then looks like this (each color represents another pod!):

stern log streams

To install stern, use this script:

$ sudo curl -fsSL -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
$ sudo chmod 755 /usr/local/bin/stern

One more thing

Docker for Windows has been around for a long time and is probably running on your machine right now. What some people may not know is that Docker for Windows integrates seamlessly with WSL2. If you are already running Docker on Windows, a quick visit to the settings is enough to enable the Docker/WSL2 integration:

Activate WSL2 based engine
Choose WSL / distro integration

If you want more details about the integration, please visit this page: https://www.docker.com/blog/new-docker-desktop-wsl2-backend/. For this article, the fact that Docker now runs within WSL2 is sufficient 🙂

Last but not least, one short note. Of course, Visual Studio Code can also be integrated into WSL2. If you install a current version of the editor in Windows, all components to run VS Code with WSL2 are included.

A simple call of code . in the directory containing your source code is sufficient to install the Visual Studio Code Server (https://github.com/cdr/code-server) in Ubuntu. This allows VS Code to connect remotely to your distro and work with source code / frameworks that are located in WSL2.

That’s all 🙂

Wrap-Up

Pretty long blog post, I know… but it contains all the tools that are necessary (take that with a “grain of salt” 😉 ) to make your Windows 10 machine a “wonderful experience” for you as a developer or architect in the “Cloud Native” space. You have a fully compatible Linux “system”, which tightly integrates with Windows. You have .NET Core, Go, NodeJS, tools to work in the “Kubernetes universe”, the best code editor currently out there, git, ssh-agent etc. etc. … and a beautiful terminal which makes working with it simply fun!

For sure, there are things that I missed or just don’t know at the moment. I would love to hear from you if I forgot “the one” tool to mention. Looking forward to reading your comments/suggestions!

Hope this helps someone out there! Take care…

Photo by Maxim Selyuk on Unsplash

Holger Schwichtenberg: PowerShell 7: Zugriff auf den letzten Fehler

Since PowerShell 7.0, the most recent errors can be retrieved with the Get-Error cmdlet.

Stefan Lieser: Folien zum Webinar Softwareentwicklung ohne Abhängigkeiten

On 02.06.2020, I gave the webinar “Softwareentwicklung ohne Abhängigkeiten” (software development without dependencies). You will find the slides below. The recording of the webinar will also be posted here later.

The post Folien zum Webinar Softwareentwicklung ohne Abhängigkeiten first appeared on Refactoring Legacy Code.

Golo Roden: Fachlichen Code schreiben: Excusez-moi, do you sprechen Español?

Programming means writing not only technical code, but above all domain code. But in which natural language?

Code-Inside Blog: SqlBulkCopy for fast bulk inserts

Within our product OneOffixx, we can create a “full export” from the product database. Because of limitations with normal MS SQL backups (e.g. compatibility with older SQL databases etc.), we created our own export mechanism. An export can be up to 1 GB and more. This is nothing too serious and far from “big data”, but still not easy to handle, and we had some issues importing larger “exports”. Our importer was based on an Entity Framework 6 implementation and it was really slow… last month we tried to resolve this and we are quite happy. Here is how we did it:

TL;DR Problem:

Bulk insert with an Entity Framework based implementation is really slow. There is at least one NuGet package which seems to help, but unfortunately we ran into some obscure issues. This Stackoverflow question highlights some numbers and ways of doing it.

SqlBulkCopy to the rescue:

After my failed attempt to tame our EF implementation, I discovered the SqlBulkCopy operation. In .NET (Full Framework and .NET Standard!) the usage is simple via the “SqlBulkCopy” class.

Our importer looks more or less like this:

using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30), TransactionScopeAsyncFlowOption.Enabled))
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(databaseConnectionString))
{
    var dt = new DataTable();
    dt.Columns.Add("DataColumnA");
    dt.Columns.Add("DataColumnB");
    dt.Columns.Add("DataColumnId", typeof(Guid));

    foreach (var dataEntry in data)
    {
        dt.Rows.Add(dataEntry.A, dataEntry.B, dataEntry.Id);
    }

    bulkCopy.DestinationTableName = "Data";
    bulkCopy.AutoMapColumns(dt);
    bulkCopy.WriteToServer(dt);

    scope.Complete();
}

public static class Extensions
{
    public static void AutoMapColumns(this SqlBulkCopy sbc, DataTable dt)
    {
        sbc.ColumnMappings.Clear();

        foreach (DataColumn column in dt.Columns)
        {
            sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);
        }
    }
}

Some notes:

  • The TransactionScope is not required, but still nice.
  • The SqlBulkCopy instance just needs the databaseConnectionString.
  • A DataTable is needed and (I’m not sure why) all non-crazy SQL datatypes are magically supported, but GUIDs need to be typed explicitly.
  • Insert thousands of rows into your DataTable, point the SqlBulkCopy to your destination table, map those columns and write them to the server.
  • You can use the same instance for multiple bulk operations.
  • There is also an async implementation available (see the sketch below).
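Since the async variant is not shown in the post, here is a minimal hedged sketch of how it could look — it assumes the AutoMapColumns extension method and the “Data” destination table from the sample above; WriteToServerAsync is the async counterpart of WriteToServer:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class BulkImporter
{
    // Copies all rows of the given DataTable into the destination table without blocking the calling thread.
    public static async Task ImportAsync(string databaseConnectionString, DataTable dt)
    {
        using (var bulkCopy = new SqlBulkCopy(databaseConnectionString))
        {
            bulkCopy.DestinationTableName = "Data"; // table name taken from the sample above
            bulkCopy.AutoMapColumns(dt);            // extension method from the sample above
            await bulkCopy.WriteToServerAsync(dt);
        }
    }
}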

Only “downside”: SqlBulkCopy is a table-by-table insert. You need to insert your data in the correct order if you have any DB constraints in your schema.

Result:

We reduced the import from several minutes to seconds :)

Hope this helps!

Holger Schwichtenberg: Nachschau auf die Microsoft Build 2020

Der Dotnet-Doktor fasst die Highlights der ersten rein virtuellen Build-Konferenz aus Entwicklersicht zusammen.

Christian Dennig [MS]: Horizontal Autoscaling in Kubernetes #3 – KEDA

Introduction

This is the last article in the series regarding “Horizontal Autoscaling” in Kubernetes. I began with an introduction to the topic and showed why autoscaling is important and how to get started with Kubernetes standard tools. In the second part I talked about how to use custom metrics in combination with the Horizontal Pod Autoscaler to be able to scale your deployments based on “non-standard” metrics (coming from e.g. Prometheus).


This article now concludes the topic. I would like to show you how to use KEDA (Kubernetes Event-Driven Autoscaler) for horizontal scaling and how this can simplify your life dramatically.

KEDA

KEDA, as the official documentation states, is a Kubernetes-based event-driven autoscaler. The project was originally initiated by Microsoft and has been developed under open source from the beginning. Since the first release, it has been widely adopted by the community and many other vendors such as Amazon, Google etc. have contributed source code / scalers to the project.

It is safe to say that the project is “ready for production” and there is no need to fear that it will disappear in the coming years.

KEDA, under the hood, works like many other projects in this area and – first and foremost – acts as a “Custom Metrics API” for exposing metrics to the Horizontal Pod Autoscaler. Additionally, there is a “KEDA agent” that is responsible for managing/activating deployments and scaling your workload between “0” and “n” replicas (scaling to zero pods is currently not possible “out-of-the-box” with the standard Horizontal Pod Autoscaler – but there is an alpha feature for it).

So in a nutshell, KEDA takes away the (sometimes huge) pain of setting up such a solution. As seen in the last article, the use of custom metrics can require a lot of effort upfront. KEDA is much more “lightweight”, and setting it up is a piece of cake.

This is how KEDA looks under the hood:

Source: https://keda.sh

KEDA comes with its own custom resource definition, the ScaledObject, which takes care of managing the “connection” between the source of the metric, access to the metric, your deployment and how scaling should work for it (min/max range of pods, threshold etc.).

The ScaledObject spec looks like this:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
spec:
  scaleTargetRef:
    deploymentName: {deployment-name} # must be in the same namespace as the ScaledObject
    containerName: {container-name}  #Optional. Default: deployment.spec.template.spec.containers[0]
  pollingInterval: 30  # Optional. Default: 30 seconds
  cooldownPeriod:  300 # Optional. Default: 300 seconds
  minReplicaCount: 0   # Optional. Default: 0
  maxReplicaCount: 100 # Optional. Default: 100
  triggers:
  # {list of triggers to activate the deployment}

The only thing missing now is the last component of KEDA: Scalers. Scalers are adapters that provide metrics from various sources. Among others, there are scalers for:

  • Kafka
  • Prometheus
  • AWS SQS Queue
  • AWS Kinesis Stream
  • GCP Pub/Sub
  • Azure Blob Storage
  • Azure EventHubs
  • Azure ServiceBus
  • Azure Monitor
  • NATS
  • MySQL
  • etc.

A complete list of supported adapters can be found here: https://keda.sh/docs/1.4/scalers/.

As we will see later in the article, setting up such a scaler couldn’t be easier. So now, let’s get our hands dirty.

Sample

Install KEDA

To install KEDA on a Kubernetes cluster, we use Helm.

$ helm repo add kedacore https://kedacore.github.io/charts
$ helm repo update
$ kubectl create namespace keda
$ helm install keda kedacore/keda --namespace keda

BTW, if you use Helm 3, you will probably receive errors/warnings regarding a hook called “crd-install”. You can simply ignore them, see https://github.com/kedacore/charts/issues/18.
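To verify that the installation succeeded, you can list the pods in the keda namespace — the KEDA operator and its metrics API server should be up and running:

$ kubectl get pods -n keda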

The Sample Application

To show you how KEDA works, I will simply reuse the sample application from the previous blog post. If you missed it, here’s a brief summary of what it looks like:

  • a simple NodeJS / Express application
  • using Prometheus client to expose the /metrics endpoint and to create/add a custom (gauge) metric called custom_metric_counter_total_by_pod
  • the metric can be set from outside via /api/count endpoint
  • Kubernetes service is of type LoadBalancer to receive a public IP for it

Here’s the source code for the application:

const express = require("express");
const os = require("os");
const app = express();
const apiMetrics = require("prometheus-api-metrics");
app.use(apiMetrics());
app.use(express.json());

const client = require("prom-client");

// Create Prometheus Gauge metric
const gauge = new client.Gauge({
  name: "custom_metric_counter_total_by_pod",
  help: "Custom metric: Count per Pod",
  labelNames: ["pod"],
});

app.post("/api/count", (req, res) => {
  // Set metric to count value...and label to "pod name" (hostname)
  gauge.set({ pod: os.hostname }, req.body.count);
  res.status(200).send("Counter at: " + req.body.count);
});

app.listen(4000, () => {
  console.log("Server is running on port 4000");
  // initialize gauge
  gauge.set({ pod: os.hostname }, 1);
});

And here’s the deployment manifest for the application and the Kubernetes service:

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: promdemo 
  labels:
    application: promtest
    service: api
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: promtest
      service: api
  template:
    metadata:
      labels:
        application: promtest
        service: api
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          image: csaocpger/expressmonitoring:4.3
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: promdemo
  labels:
    application: promtest
spec:
  ports:
  - name: http
    port: 4000
    targetPort: 4000
  selector:
    application: promtest
    service: api
  type: LoadBalancer

Collect Custom Metrics with Prometheus

In order to have a “metrics collector” that we can use in combination with KEDA, we need to install Prometheus. Due to the kube-prometheus project, this is not much of a problem. Go to https://github.com/coreos/kube-prometheus, clone it to your local machine and install Prometheus, Grafana etc. via:

$ kubectl apply -f manifests/setup
$ kubectl apply -f manifests/

After the installation has finished, we also need to tell Prometheus to scrape the /metrics endpoint of our application. Therefore, we need to create a ServiceMonitor which is a custom resource definition from Prometheus, pointing to “a source of metrics”. Apply the following Kubernetes manifest:

kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: promtest
  labels:
    application: promtest
spec:
  selector:
    matchLabels:
      application: promtest
  endpoints: 
  - port: http

So now, let’s check if everything is in place regarding the monitoring of the application. This can easily be done by having a look at the Prometheus targets:

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Open your local browser at http://localhost:9090/targets.

Prometheus targets for our application

This looks good. Now, we can also check Grafana. To do so, also “port-forward” the Grafana service to your local machine (kubectl --namespace monitoring port-forward svc/grafana 3000) and open the browser at http://localhost:3000 (the definition of the dashboard you see here is available in the GitHub repo mentioned at the end of the blog post).

Grafana

As we can see here, the application reports a current value of “1” for the custom metric (custom_metric_counter_total_by_pod).

ScaledObject Definition

So, the last missing piece to be able to scale the application with KEDA is the ScaledObject definition. As mentioned before, this is the connection between the Kubernetes deployment, the metrics source/collector and the Kubernetes HorizontalPodAutoscaler.

For the current sample, the definition looks like this:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    deploymentName: promdemo
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus-k8s.monitoring.svc:9090
      metricName: custom_metric_counter_total_by_pod
      threshold: '3'
      query: sum(custom_metric_counter_total_by_pod{namespace!="",pod!=""})

Let’s just walk through the spec part of the definition…we tell KEDA the target we want to scale, in this case, the deployment (scaleTargetRef) of the application. In the triggers section, we point KEDA to our Prometheus service and specify the metric name, the query to issue to collect the current value of the metric and the threshold – the target value for the Horizontal Pod Autoscaler that will automatically be created and managed by KEDA.
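Assuming the definition above is saved in a file called scaledobject.yaml (the filename is arbitrary), applying it and checking the objects KEDA creates looks like this:

$ kubectl apply -f scaledobject.yaml
$ kubectl get scaledobject,hpa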

And what does that HPA look like? Here is the current version of it, as a result of the ScaledObject definition applied above.

apiVersion: v1
items:
- apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    annotations:
      autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"ReadyForNewScale","message":"recommended
        size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"ValidMetricFound","message":"the
        HPA was able to successfully calculate a replica count from external metric
        custom_metric_counter_total_by_pod(\u0026LabelSelector{MatchLabels:map[string]string{deploymentName:
        promdemo,},MatchExpressions:[]LabelSelectorRequirement{},})"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"DesiredWithinRange","message":"the
        desired count is within the acceptable range"}]'
      autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"External","external":{"metricName":"custom_metric_counter_total_by_pod","metricSelector":{"matchLabels":{"deploymentName":"promdemo"}},"currentValue":"1","currentAverageValue":"1"}}]'
      autoscaling.alpha.kubernetes.io/metrics: '[{"type":"External","external":{"metricName":"custom_metric_counter_total_by_pod","metricSelector":{"matchLabels":{"deploymentName":"promdemo"}},"targetAverageValue":"3"}}]'
    creationTimestamp: "2020-05-20T07:02:05Z"
    labels:
      app.kubernetes.io/managed-by: keda-operator
      app.kubernetes.io/name: keda-hpa-promdemo
      app.kubernetes.io/part-of: prometheus-scaledobject
      app.kubernetes.io/version: 1.4.1
    name: keda-hpa-promdemo
    namespace: default
    ownerReferences:
    - apiVersion: keda.k8s.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: ScaledObject
      name: prometheus-scaledobject
      uid: 3074e89f-3b3b-4c7e-a376-09ad03b5fcb3
    resourceVersion: "821883"
    selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/keda-hpa-promdemo
    uid: 382a20d0-8358-4042-8718-bf2bcf832a31
  spec:
    maxReplicas: 100
    minReplicas: 1
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: promdemo
  status:
    currentReplicas: 1
    desiredReplicas: 1
    lastScaleTime: "2020-05-25T10:26:38Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

You can see that the ScaledObject properties have been added as annotations to the HPA and that it was already able to fetch the metric value from Prometheus.

Scale Application

Now, let’s see KEDA in action…

The application exposes an endpoint (/api/count) with which we can set the value of the metric in Prometheus (again, as a reminder, the value of custom_metric_counter_total_by_pod is currently set to “1”).

Custom metric before adjusting the value

Now let’s set the value to “7”. With the current threshold of “3”, KEDA should scale up to at least three pods.

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 7
}'

Let’s look at the events within Kubernetes:

$ k get events
LAST SEEN   TYPE     REASON              OBJECT                                      MESSAGE
2m25s       Normal   SuccessfulRescale   horizontalpodautoscaler/keda-hpa-promdemo   New size: 3; reason: external metric custom_metric_counter_total_by_pod(&LabelSelector{MatchLabels:map[string]string{deploymentName: promdemo,},MatchExpressions:[]LabelSelectorRequirement{},}) above target
<unknown>   Normal   Scheduled           pod/promdemo-56946cb44-bn75c                Successfully assigned default/promdemo-56946cb44-bn75c to aks-nodepool1-12224416-vmss000001
2m24s       Normal   Pulled              pod/promdemo-56946cb44-bn75c                Container image "csaocpger/expressmonitoring:4.3" already present on machine
2m24s       Normal   Created             pod/promdemo-56946cb44-bn75c                Created container application
2m24s       Normal   Started             pod/promdemo-56946cb44-bn75c                Started container application
<unknown>   Normal   Scheduled           pod/promdemo-56946cb44-mztrk                Successfully assigned default/promdemo-56946cb44-mztrk to aks-nodepool1-12224416-vmss000002
2m24s       Normal   Pulled              pod/promdemo-56946cb44-mztrk                Container image "csaocpger/expressmonitoring:4.3" already present on machine
2m24s       Normal   Created             pod/promdemo-56946cb44-mztrk                Created container application
2m24s       Normal   Started             pod/promdemo-56946cb44-mztrk                Started container application
2m25s       Normal   SuccessfulCreate    replicaset/promdemo-56946cb44               Created pod: promdemo-56946cb44-bn75c
2m25s       Normal   SuccessfulCreate    replicaset/promdemo-56946cb44               Created pod: promdemo-56946cb44-mztrk
2m25s       Normal   ScalingReplicaSet   deployment/promdemo                         Scaled up replica set promdemo-56946cb44 to 3

And this is what the Grafana dashboard looks like…

Grafana dashboard after setting a new value for the metric

As you can see here, KEDA is incredibly fast at querying the metric and triggering the corresponding scaling process (in combination with the HPA) of the target deployment. The deployment has been scaled up to three pods, based on a custom metric – exactly what we wanted to achieve 🙂

Wrap-Up

So, this was the last article about horizontal scaling in Kubernetes. In my opinion, the community has provided a powerful and simple tool with KEDA to trigger scaling processes based on external sources / custom metrics. If you compare this to the sample in the previous article, KEDA is way easier to set up! By the way, it works perfectly in combination with Azure Functions, which can be run in a Kubernetes cluster based on Docker images Microsoft provides. The cream of the crop is then to outsource such workloads to virtual nodes, so that the actual cluster is not stressed. But this is a topic for a possible future article 🙂

I hope I was able to shed some light on how horizontal scaling works and how to automatically scale your own deployments based on metrics coming from external / custom sources. If I missed something, give me a short hint 🙂

As always, you can find the source code for this article on GitHub: https://github.com/cdennig/k8s-keda

So long…

Articles in this series:

Christian Dennig [MS]: Horizontal Autoscaling in Kubernetes #2 – Custom Metrics

In my previous article, I talked about the “why” and “how” of horizontal scaling in Kubernetes, and I gave an insight into how to use the Horizontal Pod Autoscaler based on CPU metrics. As mentioned there, CPU is not always the best choice when it comes to deciding whether the application/service should scale in or out. Therefore, Kubernetes (with the concept of the Metrics Registry and the Custom or External Metrics API) offers the possibility to also scale based on your own, custom metrics.

Introduction

In this post, I will show you how to scale a deployment (a NodeJS / Express app) based on a custom metric which is collected by Prometheus. I chose Prometheus, as it is one of the most popular monitoring tools in the Kubernetes space… but it can easily be exchanged for any other monitoring solution out there – as long as there is a corresponding “adapter” (custom metrics API) for it. But more on that later.

How does it work?

To be able to work at all with custom metrics as a basis for scaling services, various requirements must be met. Of course, you need an application that provides the appropriate metrics (the metrics source). In addition, you need a service – in our case Prometheus – that is able to query the metrics from the application at certain intervals (the metrics collector). Once you have set up these things, you have fulfilled the basic requirements from the application’s point of view. These components are probably already present in every larger solution.

Now, to make the desired metric available to the Horizontal Pod Autoscaler, it must first be added to a Metrics Registry. And last but not least, a custom metrics API must provide access to the desired metric for the Horizontal Pod Autoscaler.

To give you the complete view of what’s possible…here’s the list of available metric types:

  • Resource metrics (predefined metrics like CPU)
  • Custom metrics (associated with a Kubernetes object)
  • External metrics (coming from external sources like e.g. RabbitMQ, Azure Service Bus etc.)
Metrics Pipeline (with Prometheus as metrics collector)

Sample

In order to show you a working sample of how to use a custom metric for scaling, we need to have a few things in place/installed:

  • An application (deployment) that exposes a custom metric
  • Prometheus (incl. Grafana to have some nice charts) and a Prometheus Service Monitor to scrape the metrics endpoint of the application
  • Prometheus Adapter which is able to provide a Prometheus metric for the custom metrics API
  • Horizontal Pod Autoscaler definition that references the custom metric

Prometheus

To install Prometheus, I chose “kube-prometheus” (https://github.com/coreos/kube-prometheus), which installs Prometheus as well as Grafana (and Alertmanager etc.) and is super easy to use! So first, clone the project to your local machine and deploy it to your cluster:

# from within the cloned repo...

$ kubectl apply -f manifests/setup
$ kubectl apply -f manifests/

Wait a few seconds until everything is installed and then check access to the Prometheus Web UI and Grafana (by port-forwarding the services to your local machine):

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
Prometheus / Web UI

Check the same for Grafana (if you are prompted for a username and password, the default is “admin”/”admin” – you need to change that on the first login):

$ kubectl --namespace monitoring port-forward svc/grafana 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Grafana

In terms of “monitoring infrastructure”, we are good to go. Let’s add the sample application that exposes the custom metric.

Metrics Source / Sample Application

To demonstrate how to work with custom metrics, I wrote a very simple NodeJS application that provides a single endpoint (from the application perspective). If requests are sent to this endpoint, a counter (a Prometheus Gauge – more on the other metric types later) is set to the value provided in the body of the request. The application itself uses the Express framework and an additional library (prom-client) that allows it to “interact” with Prometheus – providing metrics via a /metrics endpoint.

This is what it looks like:

const express = require("express");
const os = require("os");
const app = express();
const apiMetrics = require("prometheus-api-metrics");
app.use(apiMetrics());
app.use(express.json());

const client = require("prom-client");

// Create Prometheus Gauge metric
const gauge = new client.Gauge({
  name: "custom_metric_counter_total_by_pod",
  help: "Custom metric: Count per Pod",
  labelNames: ["pod"],
});

app.post("/api/count", (req, res) => {
  // Set metric to count value...and label to "pod name" (hostname)
  gauge.set({ pod: os.hostname }, req.body.count);
  res.status(200).send("Counter at: " + req.body.count);
});

app.listen(4000, () => {
  console.log("Server is running on port 4000");
  // initialize gauge
  gauge.set({ pod: os.hostname }, 1);
});

As you can see in the source code, a “Gauge” metric is created – which is one of the types Prometheus supports. Here’s a list of the metric types offered (descriptions from the official documentation):

  • Counter – a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart
  • Gauge – a gauge is a metric that represents a single numerical value that can arbitrarily go up and down
  • Histogram – a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values.
  • Summary – Similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.

To learn more about the metrics types and when to use what, see the official Prometheus documentation.

Let’s deploy the application (plus a service for it) with the following YAML manifest:

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: promdemo 
  labels:
    application: promtest
    service: api
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: promtest
      service: api
  template:
    metadata:
      labels:
        application: promtest
        service: api
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          image: csaocpger/expressmonitoring:4.3
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: promdemo
  labels:
    application: promtest
spec:
  ports:
  - name: http
    port: 4000
    targetPort: 4000
  selector:
    application: promtest
    service: api
  type: LoadBalancer

Now that we have Prometheus installed and an application that exposes a custom metric, we also need to tell Prometheus to scrape the /metrics endpoint (BTW, this endpoint is automatically created by one of the libraries used in the app). Therefore, we need to create a ServiceMonitor, which is a custom resource definition from Prometheus, pointing to “a source of metrics”.

This is what the ServiceMonitor looks like for the current sample:

kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: promtest
  labels:
    application: promtest
spec:
  selector:
    matchLabels:
      application: promtest
  endpoints: 
  - port: http

What that basically does is tell Prometheus to look for a service called “promtest” and scrape the metrics via the (default) endpoint /metrics on the http port (which is set to port 4000 in the Kubernetes service).

The /metrics endpoint reports values like this:

$ curl --location --request GET 'http://<EXTERNAL_IP_OF_SERVICE>:4000/metrics' | grep custom_metric
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 26337  100 26337    0     0   325k      0 --:--:-- --:--:-- --:--:--  329k
# HELP custom_metric_counter_total_by_pod Custom metric: Count per Pod
# TYPE custom_metric_counter_total_by_pod gauge
custom_metric_counter_total_by_pod{pod="promdemo-56946cb44-d5846"} 1

When the ServiceMonitor has been applied, Prometheus will be able to discover the pods/endpoints behind the service and pull the corresponding metrics. In the web UI, you should be able to see the following “scrape target”:

Prometheus Targets

Time to test the environment! First, let’s see what the current metric looks like. Open Grafana and have a look at the custom dashboard (you can find the JSON for the dashboard in the GitHub repo mentioned at the end of the post). You can see we have one pod running in the cluster, reporting a value of “1” at the moment.

Grafana Showing the Custom Metric

If everything is set up correctly, we should be able to call our service on /api/count and set the custom metric via a POST request with a JSON document that looks like this:

{
    "count": 7
}

So, let’s try this out…

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 7
}'
Grafana Dashboard – After setting the counter to a new value

So, this works as expected. After setting the value to “7” via a POST request, Prometheus receives the updated value by scraping the metrics endpoint and Grafana is able to show the updated chart. To be able to run the full example on a “clean environment” at the end, we set the counter back to “1”.

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 1
}'

From a monitoring point of view, we have finished all the necessary work and are now able to add the Prometheus adapter, which “connects” the Prometheus custom metric with the Kubernetes Horizontal Pod Autoscaler.

Let’s do that now.

Prometheus Adapter

So, now the adapter must be installed. I have decided to use the following implementation for this sample: https://github.com/DirectXMan12/k8s-prometheus-adapter

The installation works quite smoothly, but you have to adapt a few small things for it.

First you should clone the repository and change the URL for the Prometheus server in the file k8s-prometheus-adapter/deploy/manifests/custom-metrics-apiserver-deployment.yaml. In the current case, this is http://prometheus-k8s.monitoring.svc:9090/.
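The Prometheus URL is passed to the adapter container as a command-line argument; the relevant entry in the args section of the deployment manifest should look roughly like this:

- --prometheus-url=http://prometheus-k8s.monitoring.svc:9090/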

Furthermore, an additional rule for our custom metric must be defined in the ConfigMap, which defines the rules for mapping from Prometheus metrics to the Metrics API schema (k8s-prometheus-adapter/deploy/manifests/custom-metrics-config-map.yaml). This rule looks like this:

- seriesQuery: 'custom_metric_counter_total_by_pod{namespace!="",pod!=""}'
      seriesFilters: []
      resources:
        overrides:
          namespace:
            resource: namespace
          pod:
            resource: pod
      name:
        matches: "^(.*)_total_by_pod"
        as: "${1}"
      metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>})

If you want to find out more about how the mapping works, please have a look at the official documentation. In our case, only the query for custom_metric_counter_total_by_pod is executed and the results are mapped to the metrics schema as total/sum values.

To enable the adapter to function as a custom metrics API in the cluster, an SSL certificate must be created that can be used by the Prometheus adapter. All traffic between the Kubernetes control plane components must be secured by SSL. This SSL certificate must be added upfront as a secret in the cluster, so that the custom metrics server can automatically mount it as a volume during deployment.

In the corresponding GitHub repository for this article, you can find a gencerts.sh file that needs to be executed and does all the heavy lifting for you. The result of the script is a file called cm-adapter-serving-certs.yaml containing the certificate. Please add that secret to the cluster before installing the adapter. The whole process looks like this (in the folder of the git clone):

$ ./gencerts.sh
$ kubectl create namespace custom-metrics
$ kubectl apply -f cm-adapter-serving-certs.yaml -n custom-metrics
$ kubectl apply -f manifests/

As soon as the installation is completed, open a terminal and query the custom metrics API for our metric called custom_metric_counter_total_by_pod via kubectl. If everything is set up correctly, we should get a result from the metrics server:

$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/custom_metric_counter_total_by_pod?pod=$(kubectl get po -o name)" | jq

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/custom_metric_counter_total_by_pod"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "promdemo-56946cb44-d5846",
        "apiVersion": "/v1"
      },
      "metricName": "custom_metric_counter_total_by_pod",
      "timestamp": "2020-05-26T12:22:15Z",
      "value": "1",
      "selector": null
    }
  ]
}

Here we go! The Custom Metrics API returns a result for our custom metric – stating that the current value is “1”.

I have to admit it was a lot of work, but we are now finished with the infrastructure and can test our application in combination with the Horizontal Pod Autoscaler and our custom metric.

Horizontal Pod Autoscaler

Now we need to deploy a Horizontal Pod Autoscaler that targets our app deployment and references the custom metric custom_metric_counter_total_by_pod as a metrics source. The manifest file looks like this; apply it to the cluster:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: prometheus-demo-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: promdemo
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: custom_metric_counter_total_by_pod
      targetAverageValue: "3"

Now that we have added the HPA manifest, we can make a new request against our API to increase the value of the counter back to “7”. Please keep in mind that the target value for the HPA was “3”. This means that the Horizotal Pod Autoscaler should scale our deployment to a total of three pods after a short amount of time. Let’s see what happens:

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 7
}'

What does the Grafana dashboard look like after that request?

Grafana after setting the value of the counter to “7”

Also, the Custom Metrics API reports a new value:

$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/custom_metric_counter_total_by_pod?pod=$(kubectl get po -o name)" | jq

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/custom_metric_counter_total_by_pod"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "promdemo-56946cb44-d5846",
        "apiVersion": "/v1"
      },
      "metricName": "custom_metric_counter_total_by_pod",
      "timestamp": "2020-05-26T12:38:58Z",
      "value": "7",
      "selector": null
    }
  ]
}

And last but not least, the Horizontal Pod Autoscaler does its job and scales the deployment to three pods! Hooray…

$ kubectl get events
LAST SEEN   TYPE     REASON              OBJECT                                        MESSAGE

118s        Normal   ScalingReplicaSet   deployment/promdemo                           Scaled up replica set promdemo-56946cb44 to 3
118s        Normal   SuccessfulRescale   horizontalpodautoscaler/prometheus-demo-app   New size: 3; reason: pods metric custom_metric_counter_total_by_pod above target
Grafana showing three pods now!

Wrap-Up

So what did we see in this example? We first installed Prometheus with the appropriate add-ons like Grafana and Alertmanager (which we did not use in this example…). Then we added a custom metric to an application and used Prometheus scraping to retrieve it, so that it was available for evaluation within Prometheus. The next step was to install the Prometheus adapter, which admittedly was a bit more complicated than expected. Finally, we created a Horizontal Pod Autoscaler in the cluster that used the custom metric to scale the pods of our deployment.

All in all, it was quite an effort to scale a Kubernetes deployment based on custom metrics. In the next article, I will therefore talk about KEDA (Kubernetes Event-Driven Autoscaling), which will make our lives – also regarding this example – much easier.

You can find all the Kubernetes manifest files, Grafana dashboard configs and the script to generate the SSL certificate in this GitHub repo: https://github.com/cdennig/k8s-custom-metrics.

Articles in this series:

Christian Dennig [MS]: Horizontal Autoscaling in Kubernetes #1 – An Introduction

Kubernetes is taking the world of software development by storm and every company in the world feels tempted to develop their software on the platform or migrate existing solutions to it. There are a few basic principles to be considered when the application is finally operated productively in Kubernetes.

One of them, is the implementation of a clean autoscaling. Kubernetes offers a number of options, especially when it comes to horizontally scaling your workloads running in the cluster.

I will discuss the different options in a short series of blog posts. This article introduces the topic and presents the “out-of-the-box” options of Kubernetes. In the second article, I will discuss the possibilities of scaling deployments using custom metrics and how this can work in combination with popular monitoring tools like Prometheus. The last article will then be about the use of KEDA (Kubernetes Event-Driven Autoscaling) and looks at the field of “event-driven” scaling.

Why?

A good implementation of horizontal scaling is extremely important for applications running in Kubernetes. Load peaks can basically occur at any time of day, especially if the application is offered worldwide. Ideally, you have implemented measures that automatically respond to these load peaks so that no operator has to perform manual scaling (e.g. increase the number of pods in a deployment) – I mention this because this type of “automatic scaling” is still frequently seen in the field.

Your app should be able to automatically deal with these challenges:

  • When the resource demand increases, your service(s) should be able to automatically scale up
  • When the demand decreases, your service(s) should be able to scale down (and save you some money!)

Options for scaling your workload

As already mentioned, Kubernetes offers several options for horizontal scaling of services/pods. This includes:

  • Load / CPU-based scaling
  • (Custom) Metrics-based scaling
  • Event-Driven scaling

Recap: Kubernetes Objects

Before I start with the actual topic, here is a short recap of the Kubernetes objects that are (basically) involved when services/applications are hosted in a cluster – just to put everyone on the same page.

Recap: Kubernetes Deployments, RS, Pods etc.
  • Pods: the smallest unit of compute/scale in Kubernetes. Hosts your application and n additional/“sidecar” containers.
  • Labels: key/value pairs to identify workloads/Kubernetes objects
  • ReplicaSet: responsible for scaling your pods. Ensures that the desired number of pods is up and running
  • Deployment: defines the desired state of a workload (number of pods, rollout process/update strategy etc.). Manages ReplicaSets.

If you need more of a “refresh”, please head over to the offical Kubernetes documentation.

Horizontal Pod Autoscaler

To avoid manual scaling, Kubernetes offers the concept of the “Horizontal Pod Autoscaler”, which many people have probably heard of before. The Horizontal Pod Autoscaler (HPA) itself is a controller and is configured by HorizontalPodAutoscaler resource objects. It is able to scale pods in a deployment/replicaset (or statefulset) based on observed metrics like e.g. CPU.

How does it work?

The HPA works in combination with the Metrics server, which determines the corresponding metrics from running Pods and makes them available to the HPA via an API (Resource Metrics API). Based on this information, the HPA is able to make scaling decisions and scale up or down according to what you defined in terms of thresholds. Let’s have a look at a sample definition in YAML:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  maxReplicas: 20
  minReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  targetCPUUtilizationPercentage: 50

The sample shown above means: if the average CPU utilization across all pods of the deployment (measured relative to the pods’ requested CPU) constantly surpasses 50%, the autoscaler may scale the number of pods in the deployment between 5 and 20.

The algorithm that determines the number of replicas works as follows:

replicas = ceil[currentReplicas * ( currentValue / desiredValue )]
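
For example: assume the deployment currently runs a single replica, the measured CPU utilization is 29% and the desired value is 5% (the numbers we will see in the sample below). Then:

replicas = ceil[1 * ( 29 / 5 )] = ceil[5.8] = 6

so the HPA would scale the deployment to 6 pods (capped by maxReplicas, of course).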

Here’s the whole process:

Pretty straightforward. So now, let’s have a look at a concrete sample.

Sample

Let’s see the Horizontal Pod Autoscaler in action. First, we need a workload as our scale target:

$ kubectl create ns hpa
namespace/hpa created

$ kubectl config set-context --current --namespace=hpa
Context "my-cluster" modified.

$  kubectl run nginx-worker --image=nginx --requests=cpu=200m --expose --port=80
service/nginx-worker created
deployment.apps/nginx-worker created

As you can see above, we created a new namespace called hpa and added a deployment of nginx pods (currently just one pod – exposing port 80, so that we are able to fire some requests against the pods from within the cluster). Next, create the Horizontal Pod Autoscaler object.

$ kubectl autoscale deployment nginx-worker --cpu-percent=5 --min=1 --max=10

Let’s have a brief look at the YAML for the HPA:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-worker
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-worker
  targetCPUUtilizationPercentage: 5
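
By the way: you can always pull the generated object back out of the cluster (the live object will additionally contain status fields):

$ kubectl get hpa nginx-worker -o yaml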

So now that everything is in place, we need some load on our nginx deployment. BTW: you might wonder why the target value is that low (only 5%!). Well, it seems the creators of NGINX have done a pretty good job – it’s really hard to “make NGINX sweat” 🙂

Let’s simulate load! Create another pod (in this case “busybox”) in the same namespace, “exec into it” and use wget to request the default page of NGINX in an endless loop:

$ kubectl run -i --tty load-generator --image=busybox /bin/sh
$ while true; do wget -q -O- http://nginx-worker.hpa.svc; done

While this is running, open another terminal and watch the Horizontal Pod Autoscaler do its job.
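
One way to do that is to query the HPA object in watch mode:

$ kubectl get hpa nginx-worker --watch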

HPA in Action

As you can see, after a short period of time the HPA recognizes that there is load on our deployment and that the current value of our metric (CPU utilization) is above the target value (29% vs. 5%). It then starts to scale the pods within the deployment to an appropriate number, so that after a few seconds the utilization drops back to a value that is – more or less – within the defined range.

After a certain time the wget loop was aborted, so that basically no more requests were sent to the pods. As you can see here, the autoscaler does not start removing pods immediately after that. It waits for a cool-down period – five minutes in this case, which is also the default (configurable via the kube-controller-manager flag --horizontal-pod-autoscaler-downscale-stabilization) – before pods are killed and the deployment is set back to “replicas: 1”.

Wrap-Up

In this first article, I discussed the basics of horizontal scaling in Kubernetes and how you can leverage CPU utilization – a standard metric in Kubernetes – to automatically scale your pods up and down. In the next article, I will show you how to use custom metrics for scaling decisions – as you might guess, CPU is not always the best metric to decide if your service is under heavy load and needs more replicas to do the job.

Articles in this series:

Golo Roden: Einführung in React, Folge 2: React-Setup

How do you install React and develop a first application? These topics are covered in the second episode of the React video course by the native web, which is now available for free.

Holger Schwichtenberg: PowerShell 7: Verbesserte Fehlerdarstellung

Since PowerShell 7.0, "ConciseView" is the new, clearer default for error output.

Golo Roden: Ich reagiere auf "Wie bleibt man auf dem Laufenden?"

In the current episode of "Götz & Golo", the question was how to stay up to date. Golo's video reaction to Götz's blog post.

Stefan Lieser: Folien zum Webinar Clean Code Development mit Flow Design

On April 30th I held my webinar "Clean Code Development mit Flow Design" at GFU. It revolved around the question of how to get from the requirements to the code with the help of Flow Design. You can find the slides for this webinar below. Unfortunately, the webinar was not recorded. But there will be more webinars ... Read more

The post Folien zum Webinar Clean Code Development mit Flow Design appeared first on Refactoring Legacy Code.

Golo Roden: Einführung in React, Folge 1: Vorstellung und Einführung

The new React video course by the native web launches today. The course is available completely free of charge and takes developers from the first line of code all the way to writing complex React applications.

Holger Schwichtenberg: Entwickler-Update 2020 zu .NET 5.0, C# 9.0 und Blazor am 26. Mai

This year's software developer info day for .NET and web developers takes place on May 26 as an online event via web conferencing software and chat.

Golo Roden: Wie man relevante Nachrichten aus dem Grundrauschen herausfiltert

For developers it is important to stay informed about technological innovations. But where does this information come from? If you follow too few sources, you risk missing important news; if you follow too many, important topics get lost in the noise. How do you find the right balance?

Holger Schwichtenberg: PowerShell 7: Mehrere Zufallszahlen erzeugen

Since PowerShell 7.0, Get-Random accepts -count to request more than one random number. This is a quick way to generate lottery numbers or a random password.

Code-Inside Blog: Blazor for Office Add-ins: First look

Last week I did some research and tried to build a pretty basic Office Add-in (within the “new” web-based add-in model) with Blazor.

Side note: Last year I blogged about how to build Office Add-ins with ASP.NET Core.

Why Blazor?

My daily work home is the C# and .NET land, so it would be great to use Blazor for Office Add-ins, right? An Office Add-in is just a web application with a “communication tunnel” to the hosting Office application - not very different from the real web.

What (might) work: Serverside Blazor

My first try was with a “standard” serverside Blazor application: I just pointed the dummy Office Add-in manifest file to the site and it (obviously) worked:

I assume that serverside Blazor is not very “complicated” from the client’s point of view, so it would probably work.

After my initial tweet Manuel Sidler jumped in and made a simple demo project, which also invokes the Office.js APIs from C#!

Checkout his repository on GitHub for further information.

What won’t work: WebAssembly (unless I’m missing something)

Serverside Blazor is cool, but has some problems (e.g. a server connection is needed and scaling is not that easy) - so what about WebAssembly?

Well… Blazor WebAssembly is still in preview, and I tried the same setup that worked for serverside Blazor.

Result:

The desktop PowerPoint (I tried to build a PowerPoint add-in) keeps crashing after I add the add-in. On Office Online it seems to work, but not for very long:

Possible reasons:

The default Blazor WebAssembly template installs a service worker. I removed that part, but I’m not 100% sure I did it correctly. Service workers are, at least, currently not supported by the Office Add-in Edge WebView. However, my experiment with Office Online and the Blazor add-in failed as well, so I don’t think that service workers are the (only) problem.

I’m not really sure why it’s not working, but it’s quite early for Blazor WebAssembly, so… time will tell.

What does the Office Dev Team think of Blazor?

So far I have found just one comment regarding Blazor on this blog post:

Will Blazor be supported for Office Add-ins?

No, it will be a React Office.js add-in. We don’t have any plans to support Blazor yet. For that, please put a note on our UserVoice channel: https://officespdev.uservoice.com. There are several UserVoice items already on this, so know that we are listening to your feedback and prioritizing based on customer requests. The more requests we get for particular features, the more we will consider moving forward with developing it. 

Well… vote for it! ;)

Golo Roden: Wochenrückblick: GitHub, Apple, SpaceX & Co.

Last week brought several news items, including ones about GitHub, Apple and SpaceX. Golo comments on the most important ones in his review.

Holger Schwichtenberg: PowerShell 7: Parallele Ausführung mit ForEach-Object -parallel

Since PowerShell 7.0, the -parallel parameter lets you parallelize loop iterations in ForEach-Object across different threads (via multithreading).

Golo Roden: Ich reagiere auf "Pfeiler eines guten Arbeitsklimas"

In the previous episode of "Götz & Golo", the question was what the pillars of a good working atmosphere are. Golo's video reaction to Götz's blog post.

Golo Roden: Pfeiler eines guten Arbeitsklimas

Probably everyone wishes for a good working atmosphere, but what actually characterizes one? And what goals are there for improvement?

Marco Scheel: Microsoft Teams Live Events für (Krisen-)Vorträge

Online meetings have become indispensable in today's companies. Microsoft Teams is the meeting solution in the Microsoft 365 service. A Teams meeting allows all participants to take an active part in the conversation; the meeting organizer has very little control. Participants have to bring the discipline to behave "correctly" in a meeting. The current corona crisis makes it obvious that many participants struggle with this more than expected. Especially for newcomers, the various options in the software are unfamiliar, and the ground rules for a good meeting may be unknown. It would be great if the software provided better support here, but the current state ("mute all", …) will not change in the short term.

Today I want to show you an alternative to the classic meeting. It will not fit every situation, but you should get to know it and decide for yourself. A school asked me about the options for teaching students online. That audience is not particularly well trained when it comes to good meeting culture :) Parents and students are often beginners, and mistakes happen automatically. In this situation it can make sense to switch to a Microsoft Teams live event.

Microsoft Teams Live Event

A live event can be scheduled and run from the Teams client. Unlike a normal meeting, there is a clear separation between the meeting organizer and the attendees. The organizer becomes the producer of the event (meeting) and must use the Microsoft Teams client. He can invite additional people as producers, which makes sense if you want to pull off such a meeting professionally. Attendees use the browser (without plugins or the like) or, if available, the Teams client. In the school scenario you can draw very few conclusions about the technical capabilities of the attendees, so it is good that the solution can handle practically all options.

There are two links for joining the meeting: one link for the producers (similar to a normal Teams meeting link), and the attendee link, which is only available through the Teams client. After the producers have joined, the meeting (or rather the live event) has to be started explicitly in the software; otherwise the attendees see nothing.

The meeting is streamed to the attendees with a delay (10-40 seconds). Attendees can only communicate with the producers through the built-in question-and-answer feature (Q&A). Viewers can pause and rewind the meeting at any time. More than 50 languages are available for optional live captions.

Overview of the limits

Preparing a live event

The user needs an Office 365 license (E1 or higher, A3 or higher), and the administrator can define via a live event policy in the admin center who is allowed to create events. By default, external users are not allowed to join. For the school scenario it is therefore important to change this setting.

Teams Admin Center - Meetings - Live event policies - Global (Org-wide default)

Set “Who can join scheduled live events” to “Everyone” instead of “Everyone in your organization”.
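
If you prefer scripting over clicking, the same setting can presumably be applied via the Teams PowerShell cmdlets – a minimal sketch, assuming a connected Skype for Business Online/Teams PowerShell session:

# Sketch: "Everyone" should correspond to the "Who can join scheduled
# live events" option shown above; verify against your module version.
Set-CsTeamsMeetingBroadcastPolicy -Identity Global -BroadcastAttendeeVisibilityMode Everyone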

Scheduling a live event

In the Teams app, switch to the calendar; in the top right corner you can open the dropdown and choose “Live event”:

You can also start a live event from Yammer. Go to the corresponding group/community, and on the right in the “Group Actions” area you will find the option to schedule a live event.

Microsoft Stream can start live events as well, but it requires dedicated software and you cannot use Microsoft Teams as the producer.

We take the Microsoft Teams route and create a live event. You can find Microsoft’s instructions here. As with every meeting, a title and the start and end time must be provided. The optional location field can be left empty or set to “Microsoft Teams” or “Online”. This is also where you invite the additional presenters. These people come from your organization and will later support you in producing the content of the meeting.

On the next page you are asked about permissions. If your administrator has made the right default settings, you can select “Public”. For the school scenario, Public is mandatory. If you set up a live event for your colleagues, you can of course also choose org-wide or individual people. In the latter two cases a company login is required. Important: this is about permissions, not about who is invited. Only those who have received the join link can attend an org-wide meeting!

The live event is already marked as a Teams event and therefore only offers the Teams-relevant settings. Decide whether the recording should be made available to all attendees. Depending on the audience, offering live captions makes sense. More than 50 languages are available, but a single event can only offer 6 at a time; you have to decide which language is spoken and into which languages it can be translated. Configuring the attendee engagement report lets you see after the meeting (as a CSV) who attended, and when and how often they joined. The Q&A option should always be selected so that attendees and producers can interact.

Creating the live event is now complete, and it is displayed.

From this view you can also copy the attendee link. This link must then be forwarded to all external (or internal) attendees. It is best to use the usual communication channel (Teams, email, WhatsApp, …) for the attendee group.

The join link looks roughly like any other meeting link:
https://teams.microsoft.com/l/meetup-join/19%3ameeting_…
but the attendee links end with
IsBroadcastMeeting=true

Producing a live event

All producers should join the meeting well before the actual live event starts and make the necessary preparations. For a producer, joining looks just like any other meeting (the difference is the text above the title: “Join as a producer”). If you want to broadcast your video, you naturally have to activate the webcam. Note: unfortunately there is no background blur in live events, so mind your surroundings!

Via the settings (the gear icon labeled with the name of the current audio device) you can pick the appropriate audio device. It is extremely important to use the highest-quality components here: laptop microphones and speakers usually produce the worst result, while a headset with a proper microphone makes things easier for everyone and reduces background noise. The auditorium mode should not be relevant in times of social distancing, but it may be of interest for future events, so that Teams does not filter out the audience sound. Note: if you are unsure, test your options with a test call.

Once you have joined the live event, you see the producer interface – and at this point, at the latest, it becomes clear that this is no normal meeting and why you should practice beforehand :) There are two content panes. On the left is the content that is not yet being broadcast; this is where you prepare the scene that will go live next. On the right you see what the attendees will see – what is currently “on air”. So joining the live event does not start the meeting yet; first the content has to be arranged.

If you want to broadcast your webcam, switch to the corresponding layout at the bottom left.

Now you can drag your webcam from the bottom area up into the small section of the left pane. The same works for the content you want to share. You can share a single window or the entire desktop. I always recommend sharing the desktop, because it gets complicated once you want to show something else as well – and that moment comes sooner than you think.

Top: sharing a window | Bottom: sharing the desktop

As in any other meeting, the shared element gets a red border, and at the top there are controls to adjust the sharing or cancel it.

In the producer view you have now lined up video and content. When the time has come, you have to send the content live.

The live event is still not visible to the attendees, though. Only another click on “Start” transmits the content (audio and video) to the attendees. On the left side, another producer can, for example, prepare the next layout (speaker, panel of speakers, …) and send it live right away if necessary.

It’s a Microsoft product, so of course you are asked to confirm once more!

In the producer view, the part visible to the attendees is outlined in red.

If the current user stops sharing, their video is switched to full-screen display.

The right-hand bar of the producer view offers several tabs with information about the meeting.

Status and performance

Q&A

Attendees can ask questions here, and producers can answer them privately. If a question/answer is interesting for all attendees, the producer can publish the question (including the answer) and make it accessible to everyone.

Meeting notes

As known from regular meetings, the notes are handled through the wiki feature.

There is rudimentary structure and formatting of the content for the attendees. The notes can only be created by producers and are immediately visible to the attendees.

Meeting chat (producers only)

The chat is for producers only and cannot be viewed by attendees. Attendees can only interact via the Q&A feature; interaction among attendees is not possible.

People

All producers are listed here; attendees are not visible.

Device settings

The audio and webcam setup (for producers) can be changed here. The live captions can also be switched off afterwards.

Meeting details

As in any meeting, the dial-in information is displayed here. Note: this invitation is only suitable for producers, not for attendees!

Attending a live event

You can attend a live event with a browser or the Microsoft Teams client. You can find the system requirements here. The attendee link takes you to a website that closely resembles the normal meeting join page. Via “Watch on the web instead” you can attend in any modern browser without installing extra software.

If you have a Teams account (Azure AD), you can use it – or you join anonymously.

After joining, you see the currently shared content. The view resembles a normal meeting, but attendees can pause at any time or even rewind.

The optional captions can be switched on via the settings in the playback window.

The available choices correspond to the settings made during the setup of the live event.

Here we see the English captions.

Q&A

This is the attendee view of the Q&A feature. Attendees can ask their questions and, where applicable, receive answers from the producer team. An attendee can provide a name for the question or write anonymously.

The questions can be further “refined”.

Here we see the producer view for a submitted question. The question can be dismissed or published (made visible to all attendees).

If a question (including its answer) is published, it is shown in the “Featured” area. There, attendees can “like” a question and thereby give feedback to the group.

Once the live event has ended, it is shown accordingly in the stream.

Summary

Live events are a very special form of meeting in Microsoft 365. They only make sense in a few scenarios, but it is important to know the option exists. Especially with many meeting beginners, it can simplify the situation to set up two meetings: a live event to deliver the content, with absolute control over the presentation and without disruptive interjections, followed by an ad-hoc Q&A session in normal Teams meeting mode, so that everyone can speak and appear on video.

Holger Schwichtenberg: PowerShell 7.0: Funktionsumfang

In terms of command coverage, PowerShell 7.0 has come much closer to PowerShell 5.1, but 95 commands are still missing. And on Linux and macOS, only a fraction of the functionality is available.

Code-Inside Blog: Escape environment variables in MSIEXEC parameters

Problem

Customers can install our product on Windows with a standard MSI package. To automate the installation, administrators can use MSIEXEC and MSI parameters to configure our client.

A simple installation can look like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/OneOffixx/"

The “CACHEFOLDER” parameter will be written to the .exe.config file, and our program will read it and store offline content under the given location.

So far, so good.

For Terminal Server installations or “multi-user” scenarios this will not work, because each cache is bound to a local account. To solve this, we could just insert the “%username%” environment variable, right?

Well… no… at least not with the obvious call, because this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%username%/OneOffixx/"

will result in a call like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/admin/OneOffixx/"

Solution

I needed a few hours and some Google-Fu to find the answer.

To “escape” those variables we need to invoke it like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%%username%%/OneOffixx/"
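
The reason: the command processor expands single-percent variables before msiexec even starts, while a doubled percent sign survives as a literal % – at least inside a batch file; behavior at an interactive prompt differs. A tiny sketch to illustrate (hypothetical values):

REM inside a .bat file:
echo D:/%username%/OneOffixx/
REM prints e.g. D:/admin/OneOffixx/ - already expanded by cmd.exe

echo D:/%%username%%/OneOffixx/
REM prints D:/%username%/OneOffixx/ - the literal variable survives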

Be aware: This stuff is a mess. It depends on your scenario. Check out this Stackoverflow answer to learn more. The double percent did the trick for us, so I guess it is “ok-ish”.

Hope this helps!

Marco Scheel: Is Microsoft Teams a remote support tool?

My buddy Oliver Kieselbach did a blog post about the capabilities of Microsoft Quick Assist (which is part of the current operating system). In his post he raised the question whether Microsoft Teams is not enough for this kind of IT support scenario. Check out his blog to see it live in action and to learn what the biggest shortcoming is: Microsoft Teams is not a good option for anything UAC-related. Even without the so-called secure desktop feature, Microsoft Teams will not allow the support staff to enter admin credentials if needed. I would also suggest (for most customers) to pick a proper IT support tool for these scenarios.

Microsoft Teams is the hub for teamwork, but that doesn’t mean you cannot support your colleagues if they are experiencing non-admin-related issues. But first things first: we should check whether your tenant settings allow remote control during a desktop sharing session. In quite a few customer environments I’ve noticed that remote sharing is restricted or completely disabled.

Desktop sharing and remote control are configured through the tenant meeting policies. Check your Teams admin center:
Meeting policies - Pick a policy (”Global - Org Wide Default”) - Content sharing

Ensure that “Screen sharing mode” is set to “Entire screen” (1), otherwise remote control will be limited to a single window – I’ve seen people struggle with the single-app sharing mode more than once. For a remote support scenario, please enable “Allow a participant to give or request control” (2)! If your users are asking for support from a valuable and skilled colleague, don’t waste their time yelling which button to press next. The last option needs a decision: do you want to grant the same privilege to people outside of your organization? I’m a fan of enabling “Allow an external participant to give or request control” (3), because I’m often the external user trying to help, but please align this with your corporate security requirements. By the way: settings (1) and (2) are configured as shown by default in any tenant!
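
If you have to apply these settings to more than one tenant, scripting them is an option. A minimal sketch using the Teams PowerShell cmdlets (assuming a connected session; the three parameters are meant to mirror options (1)-(3) above):

# Sketch: map the admin center options (1)-(3) to the Global meeting policy.
Set-CsTeamsMeetingPolicy -Identity Global `
    -ScreenSharingMode EntireScreen `
    -AllowParticipantGiveRequestControl $true `
    -AllowExternalParticipantGiveRequestControl $true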

1:1

Now that we have set up the prerequisites, let’s have a look at the user experience. In my scenario, Luke is trying to organize a new funding round to order some spaceships. He is preparing a nice Excel sheet to present at the next procurement meeting, but he is not happy with the visual display of one of his charts. He needs help from an expert, and he is in contact with Leia (she is running the rebellion, so she is awesome in Excel!). Luke starts a chat to make his point:

To start screen sharing in a 1:1 session, you will find the screen sharing icon (1) in the top right corner. Luke needs to start it and share the complete screen (2):

Leia will receive a request to accept the screen sharing session. You should only accept a request if you have talked/chatted with the person! Otherwise you could end up seeing things you don’t want to see.

If you started through chat, the system will ask you whether it is a good idea to add audio to the conversation. Normally it is – especially if you are not willing to give control to the person you request help from.

Because Leia is a busy employee with lots of stuff to do, she will request control: the particular action to execute is hard to describe and may take some poking around in various settings. So Leia requests control over the screen/application that was shared. In the far right part of the call control bar you will find the option to request control. If this option is missing, talk to your Teams admins! They didn’t complete all the prerequisites described above:

Luke will see the request at the top of the shared content and has the option to accept or deny it. Sometimes meeting participants accidentally hit the button, so think twice if this is what you want:

Now comes a really impressive upgrade from previous Skype for Business based screen sharing. Both parties are represented by their Teams profile avatar (in my case the Office 365 profile pictures). You will always see what the other is pointing at or clicking.


In a support case I often prefer that the requesting party does all the clicks while I just advise what’s next and where to find it on screen. With this solution I think it is a great learning opportunity, and I can gently show where to click instead of yelling where not to click :)

Here are the two screens side by side (Luke on the left vs Leia on the right):


Leia fixed the issue by switching to a logarithmic scale, Luke is happy, and the session can be ended. Leia can click “Stop control”:

Or Luke can end the session (”Cancel control”):

Just for completeness, this is the way to give control if the supported person doesn’t find the “Request control” option:

Extra: Meetings

In a normal meeting everything works the same way, but the UX looks a bit different. If you are in a scheduled meeting, the share button is located in the call control bar:

If you select the icon, a new pane will appear from the bottom of the Teams app:

(1) Share the complete screen. If you have more than one screen, you can only share one at a time.
(2) Share a window.
(3) You can also upload a PowerPoint presentation, but this is beyond a remote control/support session.
(4) Open a whiteboard, but this is beyond a remote control/support session as well.
(5) While sharing your screen or an application, also include audio (for example a Microsoft Stream video that you want to trim/edit). Note: this option is only available in planned meetings, not in 1:1 support sessions started from chat.

Conclusion

The capabilities for remote support in Microsoft Teams are available and very useful. Things like the AAD picture next to the mouse cursor are a great addition and help a lot. Is Teams a better remote support tool than Quick Assist?

For your IT staff: No! A proper Remote Assist tool will be a better choice.

For your typical information worker: Yes! Sharing your desktop to get support from a colleague is a quick and proper solution! There is no need to walk to someone’s desk and touch a possibly filthy mouse/keyboard, and you don’t always have the right person in the same building anyway.

I definitely know that Microsoft Quick Assist is not a proper collaboration solution. Try to co-author an Excel document for the next #FreeCookiesFriday campaign ;)

Holger Schwichtenberg: PowerShell 7.0: Technische Basis und Installation

With this post, the Dotnet-Doktor starts a blog series on PowerShell 7.0, the new version of the .NET-based shell for Windows, Linux and macOS.

Golo Roden: Was zeichnet lesbaren Code aus?

While writing code, developers primarily make sure that it works. Readability, however, is what determines the maintainability of code.
