Holger Schwichtenberg: Microsoft is working on a new upgrade assistant from .NET Framework to .NET 5 and .NET 6

With the .NET Upgrade Assistant, Microsoft is making a new attempt at a tool that is meant to support developers in migrating from .NET Framework to .NET 5 and .NET 6.

Golo Roden: Database types compared

Relational databases used to be the measure of all things, but over the past 15 years numerous other types of databases have established themselves. How do they differ, and do relational databases still play a role at all today?

Jürgen Gutsch: Trying the REST Client extension for VSCode

I recently stumbled upon a tweet by Lars Richter who mentioned and linked to a REST client extension for VSCode. I had a more detailed look and was pretty impressed by this extension.

I can now get rid of Fiddler and Postman.

Let's start at the beginning

The REST Client Extension for VSCode was developed by Huachao Mao from China. You will find the extension on the Visual Studio Marketplace or in the extensions explorer in VSCode:

  • https://marketplace.visualstudio.com/items?itemName=humao.rest-client

If you follow this link, you will find a really great documentation about the extension, how it works, and how to use it. This also means this post is pretty useless, unless you just want to read a quick overview ;-)

rest client extension

The source code of the REST Client extension is hosted on GitHub:

  • https://github.com/Huachao/vscode-restclient

This extension is actively maintained, has almost one and a half million installations, and an awesome rating (5.0 out of 5) by more than 250 people.

What does it solve?

Compared to Fiddler and Postman it is absolutely minimalistic. There is no overloaded and full-blown UI. While Fiddler is completely overloaded but full of features, and Postman's UI is nicer, easier, and more intuitive, the REST Client doesn't need a UI at all, except the VSCode shell and a plain text editor.

While Fiddler and Postman cannot easily share the request configurations, the REST Client stores the request configurations in text files using the *.http or *.rest extension that can be committed to the source code repository and shared with the entire team.

Let's see how it works

To test it out in a demo, let's create a new Web API project, change to the project directory, and open VSCode:

dotnet new webapi -n RestClient -o RestClient
cd RestClient
code .

This project already contains a Web API controller. I'm going to use this for the first small test of the REST Client. I will create and use a more complex controller later in the blog post.

To have the *.http files in one place, I created an ApiTests folder and placed a WeatherForecast.http in it. I'm not yet sure if it makes sense to put such files into the project, because these files won't go into production. I think, in a real-world project, I would place the files somewhere outside the actual project folder, but inside the source code repository. Let's keep it there for now:

http file

I already put the following line into that file:

GET https://localhost:5001/WeatherForecast/ HTTP/1.1

This is just a simple line of text in a plain text file with the file extension *.http but the REST Client extension does some cool magic with it while parsing it:

On the top border, you can see that the REST Client extension supports navigation inside the file structure. This is cool. Above the request line, it also adds an actionable CodeLens link to send the configured request.

First, start the project by pressing F5 or by using dotnet run in the shell.

If the project is running you can click the Send Request CodeLens link and see what happens.

result

It opens the response in a new tab group in VSCode and shows you the response headers as well as the response content.

A more complex sample

I created another API controller that handles persons. The PersonController uses GenFu to create fake users. The methods POST, PUT, and DELETE don't really do anything, but the controller is good enough for testing for now.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

using GenFu;

using RestClient.Models;

namespace RestClient.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class PersonController : ControllerBase
    {

        [HttpGet]
        public ActionResult<IEnumerable<Person>> Get()
        {
            return A.ListOf<Person>(15);
        }

        [HttpGet("{id:int}")]
        public ActionResult<Person> Get(int id)
        {
            var person = A.New<Person>(new Person { Id = id });
            return person;
        }

        [HttpPost]
        public ActionResult Post(Person person)
        {
            return Ok(person);
        }

        [HttpPut("{id:int}")]
        public ActionResult Put(int id, Person person)
        {
            return Ok(person);

        }

        [HttpDelete("{id:int}")]
        public ActionResult Delete(int id)
        {
            return Ok(id);
        }
    }
}

The Person model is simple:

namespace RestClient.Models
{
    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
        public string Telephone { get; set; }
        public string Street { get; set; }
        public string Zip { get; set; }
        public string City { get; set; }
    }
}

If you now start the project you will see the new endpoints in the Swagger UI that is already configured in the Web API project. Call the following URL to see the Swagger UI: https://localhost:5001/swagger/index.html

swaggerui

The Swagger UI will help you to configure the REST Client files.

Ok. Let's start. I created a new file called Person.http in the ApiTests folder. You can add more than one REST Client request configuration into one file.

We don't need the Swagger UI for the two GET endpoints and the DELETE endpoint, since they are the easy ones and look the same as in the WeatherForecast.http:

GET https://localhost:5001/Person/ HTTP/1.1

###

GET https://localhost:5001/Person/2 HTTP/1.1

### 

DELETE https://localhost:5001/Person/2 HTTP/1.1

The POST request is just a little more complex.

If you now open the POST /Person section in the Swagger UI and try the request, you'll get all the information you need for the REST Client:

swagger details

In the http file it will look like this:

POST https://localhost:5001/Person/ HTTP/1.1
content-type: application/json

{
  "id": 0,
  "firstName": "Juergen",
  "lastName": "Gutsch",
  "email": "juergen@example.com",
  "telephone": "08150815",
  "street": "Mainstr. 2",
  "zip": "12345",
  "city": "Smallville"
}

You can do the same with the PUT request:

PUT https://localhost:5001/Person/2 HTTP/1.1
content-type: application/json

{
  "id": 2,
  "firstName": "Juergen",
  "lastName": "Gutsch",
  "email": "juergen@example.com",
  "telephone": "08150815",
  "street": "Mainstr. 2",
  "zip": "12345",
  "city": "Smallville"
}

This is how it looks in VSCode if you click the CodeLens link for the GET request:

results

You are now able to test all the API endpoints this way.

Conclusion

Actually, it is not only about REST. You can test any kind of HTTP request this way. You can even send binary data, like images to your endpoint.
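According to the extension's documentation, a request body can also be read from a file by referencing its path. A minimal sketch (the endpoint and the file name are made up for illustration):

POST https://localhost:5001/upload HTTP/1.1
Content-Type: application/octet-stream

< ./demo-image.png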

This is a really great extension for VSCode and I'm sure I will use Fiddler or Postman only in environments where I don't have VSCode installed.

Stefan Henneken: IEC 61131-3: different versions of the same library in a TwinCAT project

Library placeholders make it possible to reference multiple versions of the same library in a PLC project. This can be helpful if a library in an existing project has to be updated because of new features, but the update turns out to change the behavior of an older function block (FB).

This problem can be solved by including different versions of the same library in the project via placeholders. Placeholders for libraries are comparable to references. Instead of adding libraries directly to a project, they are referenced indirectly via placeholders. Each placeholder is linked to a library, either to a specific version or in such a way that the latest library is always used. When libraries are added via the standard dialog, placeholders are always used automatically.

In the following short post, I want to show how multiple versions of the same library can be included in one project. In our example, I will add two different versions of the Tc3_JsonXml library to a project. Three different versions of the library are currently installed on my machine.

In the example, V3.3.7.0 and V3.3.14.0 are to be used in parallel.

Open the dialog for adding a library and then switch to the advanced view.

Switch to the Placeholder section and enter a unique name for the new placeholder.

Select the library the placeholder should reference. You can either pick a specific version or, by using '*', always the latest version.

If you then select the placeholder under References in the project tree and switch to the properties window, the properties of the placeholder are displayed there.

Here, the namespace still needs to be adjusted. The namespace is used later in the PLC program and allows elements of both libraries to be addressed via different identifiers. I introduced the basic concept of namespaces in IEC 61131-3: Namespaces. For the namespaces, I chose the same identifiers as for the placeholders.

After carrying out the same steps for version V3.3.14.0 of the library, both placeholders should exist with a unique name and an adjusted namespace.

The library manager, which is opened by double-clicking References, provides a good overview.

Here you can clearly see how the placeholders are resolved. As a rule, placeholders have the same name as the libraries they refer to. The '*' means that the latest version of the library available on the development machine is always used. The right-hand column shows the version the placeholder points to. For the two placeholders of the Tc3_JsonXml library, the placeholder names were adjusted.

As an example, FB_JsonSaxWriter is to be used in the PLC program. If the FB instance is declared without a namespace,

PROGRAM MAIN
VAR
  fbJsonSaxWriter    : FB_JsonSaxWriter;
END_VAR

the compiler reports an error:

The name FB_JsonSaxWriter cannot be resolved unambiguously because two different versions of the Tc3_JsonXml library (V3.3.7.0 and V3.3.14.0) exist in the project. Thus, FB_JsonSaxWriter is also contained twice in the project.

By using the namespaces, targeted access to the individual elements of the desired library is possible:

PROGRAM MAIN
VAR
  fbJsonSaxWriter_Build7           : Tc3_JsonXml_Build7.FB_JsonSaxWriter;
  fbJsonSaxWriter_Build14          : Tc3_JsonXml_Build14.FB_JsonSaxWriter;
  sVersionBuild7, sVersionBuild14  : STRING;
END_VAR

fbJsonSaxWriter_Build7.AddBool(TRUE);
fbJsonSaxWriter_Build14.AddBool(FALSE);

sVersionBuild7 := Tc3_JsonXml_Build7.stLibVersion_Tc3_JsonXml.sVersion;
sVersionBuild14 := Tc3_JsonXml_Build14.stLibVersion_Tc3_JsonXml.sVersion;

In this short example, the current version number is also read via a global structure that is contained in each library.

Both libraries can now be used in parallel in the same PLC project. However, it must be ensured that both libraries are available on the development machine in exactly the required versions (V3.3.7.0 and V3.3.14.0).

Golo Roden: RTFM #4: Common Lisp

The RTFM series presents, at irregular intervals, timeless and recommendable books for developers. It is primarily about technical books, but occasionally novels are among them as well. Today it is about "Common Lisp: A Gentle Introduction to Symbolic Computation" by David S. Touretzky.

Golo Roden: Algorithms for artificial intelligence

In the field of artificial intelligence (AI), there are numerous algorithms for the most diverse kinds of problems. Which fundamental algorithms should one be able to classify in this context?

Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 01 - Overview

.NET 5 was released just about 3 months ago and Microsoft announced the first preview of .NET 6 last week. This is really fast. Actually, they already started working on .NET 6 before version 5 was released. But it is cool anyway to have a preview available to start playing around with. Also, the ASP.NET team wrote a new blog post. It is about the ASP.NET Core updates in .NET 6.

I will take the chance to have a more detailed look into the updates and the new features. I'm going to start a series about those updates and features. This is also a chance to learn what I need to rewrite, if I need to update my book that recently got published by Packt.

Install .NET 6 preview 1

First, I'm going to download .NET 6 preview 1 from https://dotnet.microsoft.com/download/dotnet/6.0 and install it on my machine.

I chose the x64 installer for Windows and started the installation.

install01.png

After the installation is done the new SDK is available. Type dotnet --info in a terminal:

dotnetinfo.png

Be careful

Since I didn't add a global.json yet, the .NET 6 preview 1 is the default SDK. This means I need to be careful if I want to create a .NET 5 project. I need to add a global.json every time I want to create a .NET 5 project:

dotnet new globaljson --sdk-version 5.0.103

This creates a small JSON file that contains the SDK version number in the current folder.

{
  "sdk": {
    "version": "5.0.103"
  }
}

Now all folders and subfolders will use this SDK version.
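If you want to double-check which SDK is picked up in a folder containing this global.json, you can run the following command there; assuming the 5.0.103 SDK is installed, it should report exactly that version:

dotnet --version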

Series posts

This series will start with the following topics:

  • Update on dotnet watch
  • Support for IAsyncDisposable in MVC
  • DynamicComponent
  • ElementReference
  • Nullable Reference Type Annotations

(I will update this list as soon as I add a new post.)

Christian Dennig [MS]: Getting started with KrakenD on Kubernetes / AKS

If you develop applications in a cloud-native environment and, for example, rely on the “microservices” architecture pattern, you will sooner or later have to deal with the issue of “API gateways”. There is a wide range of offerings available “in the wild”, both as managed versions from various cloud providers, as well as from the open source domain. Many often think of the well-known OSS projects such as “Kong”, “tyk” or “gloo” when it comes to API gateways. The same is true for me. However, when I took a closer look at the projects, I wasn’t always satisfied with the feature set. I was always looking for a product that can be hosted in your Kubernetes cluster, is flexible and easy to configure (“desired state”), and offers good performance. During my work as a cloud solution architect at Microsoft, I became aware of the OSS API gateway “KrakenD” during a project about 1.5 years ago.

KrakenD API Gateway

krakend logo
KrakenD logo

KrakenD is an API gateway implemented in Go that relies on the ultra-fast GIN framework under the hood. It offers an incredible number of features out-of-the-box that can be used to implement just about any gateway requirement:

  • request proxying and aggregation (merge multiple responses)
  • decoding (from JSON, XML…)
  • filtering (allow- and block-lists)
  • request & response transformation
  • caching
  • circuit breaker pattern via configuration, timeouts…
  • protocol translation
  • JWT validation / signing
  • SSL
  • OAuth2
  • Prometheus/OpenCensus integration

As you can see, this is quite an extensive list of features, which is nevertheless far from being “complete”. On their homepage and in the documentation, you can find much more information about what the product offers in its entirety.

The creators also recently published an Azure Marketplace offer, a container image that you can directly push / integrate into your Azure Container Registry… so I thought it’s an appropriate time to publish a blog post about how to get started with KrakenD on Azure Kubernetes Service (AKS).

Getting Started with KrakenD on AKS

Ok, let’s get started then. First, we need a Kubernetes cluster on which we can roll out a sample application that we want to expose via KrakenD. So, as with all Azure deployments, let’s start with a resource group and then add a corresponding AKS service. We will be using the Azure Command Line Interface for this, but you can also create the cluster via the Azure Portal.

# create an Azure resource group

$ az group create --name krakend-aks-rg \
   --location westeurope

{
  "id": "/subscriptions/xxx/resourceGroups/krakend-aks-rg",
  "location": "westeurope",
  "managedBy": null,
  "name": "krakend-aks-rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

# create a Kubernetes cluster

$ az aks create -g krakend-aks-rg \
   -n krakend-aks \
   --enable-managed-identity \
   --generate-ssh-keys

After a few minutes, the cluster has been created and we can download the access credentials to our workstation.

$ az aks get-credentials -g krakend-aks-rg \
   -n krakend-aks 

# in case you don't have kubectl on your 
# machine, there's a handy installer coming with 
# the Azure CLI:

$ az aks install-cli

Let’s check, if we have access to the cluster…

$ kubectl get nodes

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-34625029-vmss000000   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000001   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000002   Ready    agent   24h   v1.18.14

Looks great and we are all set from an infrastructure perspective. Let’s add a service that we can expose via KrakenD.

Add a sample service

We are now going to deploy a very simple service implemented in .NET Core that is capable of creating / storing “contact” objects in a MS SQL Server 2019 (Linux) instance that is running – for convenience reasons – on the same Kubernetes cluster as a single container/pod. After the services have been deployed, the in-cluster situation looks like this:

In-cluster architecture w/o KrakenD

Let’s deploy everything. First, the MS SQL server with its service definition:

# content of sql-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: 'Developer'
            - name: ACCEPT_EULA
              value: 'Y'
            - name: SA_PASSWORD
              value: 'Ch@ngeMe!23'
---
apiVersion: v1
kind: Service
metadata:
  name: mssqlsvr
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: ClusterIP

Create a file called sql-server.yaml and apply it to the cluster.

$ kubectl apply -f sql-server.yaml

deployment.apps/mssql-deployment created
service/mssqlsvr created

Second, the contacts API plus a service definition:

# content of contacts-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ca-deploy
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: scmcontacts
      service: contactsapi
  template:
    metadata:
      labels:
        application: scmcontacts
        service: contactsapi
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
              cpu: '500m'
          image: ghcr.io/azuredevcollege/adc-contacts-api:3.0
          env:
            - name: ConnectionStrings__DefaultConnectionString
              value: "Server=tcp:mssqlsvr,1433;Initial Catalog=scmcontactsdb;Persist Security Info=False;User ID=sa;Password=Ch@ngeMe!23;MultipleActiveResultSets=False;Encrypt=False;TrustServerCertificate=True;Connection Timeout=30;"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contacts
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  type: ClusterIP
  selector:
    application: scmcontacts
    service: contactsapi
  ports:
    - port: 8080
      targetPort: 5000

Create a file called contacts-app.yaml and apply it to the cluster.

$ kubectl apply -f contacts-app.yaml

deployment.apps/ca-deploy created
service/contacts created

To check, if the contact pods can communicate with the MSSQL server, let’s quickly spin up an interactive pod and issue a few requests from within the cluster. As you can see in the YAML manifests, the services have been added as type ClusterIP which means they don’t get an external IP address. Exposing the contacts service to the public will be the responsibility of KrakenD.

$ kubectl run -it --rm --image csaocpger/httpie:1.0 http --restart Never -- /bin/sh
If you don't see a command prompt, try pressing enter.

$ echo '{"firstname": "Satya", "lastname": "Nadella", "email": "satya@microsoft.com", "company": "Microsoft", "avatarLocation": "", "phone": "+1 32 6546 6545", "mobile": "+1 32 6546 6542", "description": "CEO of Microsoft", "street": "Street", "houseNumber": "1", "city": "Redmond", "postalCode": "123456", "country": "USA"}' | http POST http://contacts:8080/api/contacts

HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 10:58:57 GMT
Location: http://contacts:8080/api/contacts/ee176782-a767-45ad-a7df-dbcefef22688
Server: Kestrel
Transfer-Encoding: chunked

{
    "avatarLocation": "",
    "city": "Redmond",
    "company": "Microsoft",
    "country": "USA",
    "description": "CEO of Microsoft",
    "email": "satya@microsoft.com",
    "firstname": "Satya",
    "houseNumber": "1",
    "id": "ee176782-a767-45ad-a7df-dbcefef22688",
    "lastname": "Nadella",
    "mobile": "+1 32 6546 6542",
    "phone": "+1 32 6546 6545",
    "postalCode": "123456",
    "street": "Street"
}

$ http GET http://contacts:8080/api/contacts
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 11:00:07 GMT
Server: Kestrel
Transfer-Encoding: chunked

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]

As you can see, we can create new contacts by POSTing a JSON payload to the endpoint http://contacts:8080/api/contacts (first request) and also retrieve what has been added to the database by GETing data from http://contacts:8080/api/contacts endpoint (second request).

Create a KrakenD Configuration

So far, everything works as expected and we have a working API in the cluster that is storing its data in a MSSQL server. As discussed in the previous section, we did not expose the contacts service to the internet on purpose. We will do this later by adding KrakenD in front of that service giving the API gateway a public IP so that it is externally reachable.

But first, we need to create a KrakenD configuration (a plain JSON file) where we configure the endpoints, backend services, how requests should be routed etc. etc. Fortunately, KrakenD has a very easy-to-use designer that gives you a head-start when creating that configuration file – it’s simply called the KrakenDesigner.

kraken designer
KrakenDesigner – sample service
kraken designer logging config
KrakenDesigner – logging configuration

When creating such a configuration, it comes down to these simple steps:

  1. Adjust “common” configuration for KrakenD like service name, port, CORS, exposed/allowed headers etc.
  2. Add backend services, in our case just the Kubernetes service for our contacts API (http://contacts:8080)
  3. Expose endpoints (/contacts) at the gateway and define which backend to route them to (http://contacts:8080/api/contacts). Here you can also define if a JWT token should be validated, which headers to pass to the backend, etc. A lot of options – which we obviously don’t need in our simple setup.
  4. Add logging configuration – it’s optional, but you should do it. We simply enable stdout logging, but you can also use OpenCensus, for example, and even expose metrics to a Prometheus instance (nice!).

As a last step, you can export the configuration you have created in the UI to a JSON file. For our sample here, this file looks like this:

{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }

We simply expose two endpoints, one that lets us create (POST) contacts and one that retrieves (GET) all contacts from the database – so basically the same sample we did when calling the contacts service from within the cluster.

Save that file above to your local machine (name it krakend.json) as we need to add it later to Kubernetes as a ConfigMap.

Add the KrakenD API Gateway

So, now we are ready to deploy KrakenD to the cluster: we have an API that we want to expose and we have the KrakenD configuration. To dynamically add the configuration (krakend.json) to our running KrakenD instance, we will use a Kubernetes ConfigMap object. This gives us the ability to decouple configuration from our KrakenD application instance/pod – if you are not familiar with the concepts, have a look at the official documentation here.

During the startup of KrakenD we will then use this ConfigMap and mount the content of it (krakend.json file) into the container (folder /etc/krakend) so that the KrakenD process can pick it up and apply the configuration.

In the folder where you saved the config file, issue the following commands:

$ kubectl create configmap krakend-cfg --from-file=./krakend.json

configmap/krakend-cfg created

# check the contents of the configmap

$ kubectl describe configmap krakend-cfg

Name:         krakend-cfg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
krakend.json:
----
{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }

Events:  <none>

That looks great. We are finally ready to spin up KrakenD in the cluster. We therefore apply the following Kubernetes manifest file which creates a deployment and a Kubernetes service of type LoadBalancer – which gives us a public IP address for KrakenD via the Azure load balancer.

# content of api-gateway.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: krakend-deploy
  labels:
    application: apigateway
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: apigateway
  template:
    metadata:
      labels:
        application: apigateway
    spec:
      automountServiceAccountToken: false
      volumes:
        - name: krakend-cfg
          configMap:
            name: krakend-cfg
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '1024Mi'
              cpu: '1000m'
          image: devopsfaith/krakend:1.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          volumeMounts:
          - name: krakend-cfg
            mountPath: /etc/krakend

---
apiVersion: v1
kind: Service
metadata:
  name: apigateway
  labels:
    application: apigateway
spec:
  type: LoadBalancer
  selector:
    application: apigateway
  ports:
    - port: 8080
      targetPort: 8080

Let me highlight the two important parts here that mount the configuration file into our pod. First, we create a volume (named krakend-cfg) in the pod spec, referencing the ConfigMap we created before, and second, we mount that volume via volumeMounts into the container (mountPath /etc/krakend).

Save the manifest file and apply it to the cluster.

$ kubectl apply -f api-gateway.yaml

deployment.apps/krakend-deploy created
service/apigateway created

The resulting architecture within the cluster is now as follows:

Architecture with krakend
Architecture with KrakenD API gateway

As a last step, we just need to retrieve the public IP of our “LoadBalancer” service.

$ kubectl get services

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
apigateway   LoadBalancer   10.0.26.150   104.45.73.37   8080:31552/TCP   4h53m
contacts     ClusterIP      10.0.155.35   <none>         8080/TCP         3h47m
kubernetes   ClusterIP      10.0.0.1      <none>         443/TCP          26h
mssqlsvr     ClusterIP      10.0.192.57   <none>         1433/TCP         3h59m

So, in our case here, we got 104.45.73.37. Let’s issue a few requests (either with a browser or a tool like httpie – which I use all the time) against the resulting URL http://104.45.73.37:8080/contacts.

$ http http://104.45.73.37:8080/contacts

HTTP/1.1 200 OK
Content-Length: 337
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 12:10:20 GMT
Server: Kestrel
Vary: Origin
X-Krakend: Version 1.2.0
X-Krakend-Completed: false

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]

Works like a charm! Also, have a look at the logs of the KrakenD container:

$ kubectl logs krakend-deploy-86c44c787d-qczjh -f=true

Parsing configuration file: /etc/krakend/krakend.json
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ ERROR unable to create the GELF writer: getting the extra config for the krakend-gelf module
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ INFO Listening on port: 8080
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN influxdb: unable to load custom config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN opencensus: no extra config defined for the opencensus module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN building the etcd client: unable to create the etcd client: no config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN bloomFilter: no config for the bloomfilter
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN no config present for the httpsecure module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.747 ▶ INFO registering usage stats for cluster ID '293C0vbu4hqE6jM0BsSNl/HCzaAKsvjhSbHtWo9Hacc='
[GIN] 2021/02/17 - 10:01:44 | 200 |    4.093438ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:46 | 200 |    5.397977ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:56 | 200 |    6.820172ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:57 | 200 |    5.911475ms |      10.244.1.1 | GET      "/contacts"

As mentioned before, KrakenD logs its events to stdout and we can see how the requests are coming in, their destination, and the time each request needed to complete at the gateway level.

Wrap-Up

In this brief article, I showed you how you can deploy KrakenD to an AKS/Kubernetes cluster on Azure and how to set up a first, simple example of exposing an API running in Kubernetes via the KrakenD API gateway. The project has so many useful features that this post only covers the very, very basic stuff. I really encourage you to have a look at the product when you consider hosting an API gateway within your Kubernetes cluster. The folks at KrakenD do a great job and also accept pull requests, if you want to contribute to the project.

As mentioned in the beginning of this article, they recently published a version of their KrakenD container image to the Azure Marketplace. This gives you the ability to directly push their current and future image to your own Azure Container Registry, enabling scenarios like static image scanning, Azure Security Center integration, geo-replication etc. You can find their offering here: KrakenD API Gateway

Hope you enjoyed this brief introduction…happy hacking, friends! 🖖

Golo Roden: Basic concepts of artificial intelligence

Artificial intelligence (AI) is one of the most important topics of recent years. An at least basic understanding is therefore helpful in order to put certain topics into the right perspective. Which basic concepts of artificial intelligence should one know?

Golo Roden: How to estimate effort

Every developer knows the challenge of estimating the effort required to develop code. Very few like doing it. Why is estimating so unpopular, why is it necessary at all, and what should one pay attention to?

Golo Roden: RTFM #3: Game Engine Black Book: Doom

The RTFM series presents, at irregular intervals, timeless and recommendable books for developers. It is primarily about technical books, but occasionally novels are among them as well. Today it is about "Game Engine Black Book: Doom" by Fabien Sanglard.

Golo Roden: Five measures for better code quality

Improving code quality is an important concern for many teams. There are some fundamental measures that can be applied with relatively manageable effort. Which ones are they?

Jürgen Gutsch: Working inside a Docker container using Visual Studio Code

As mentioned in the last post, I want to write about remote working inside a Docker container. But first, we should get an idea of why we would ever want to work remotely inside a Docker container.

Why should I do that?

One of our customers is running an OpenShift/Kubernetes cluster and also likes to have the technology-specific development environments in a container that runs in Kubernetes. We had a NodeJS development container, a Python development container, and so on... All the containers had an SSH server installed, Git, the specific SDKs, and all the stuff that is needed to develop. Using VSCode we connected to the containers via SSH and developed inside the container.

Having the development environment in a container is one reason. Maybe not the most popular reason. But trying stuff inside a container because the local environment isn't the same makes a lot of sense. Debugging an application in a production-like environment also makes absolute sense.

How does it work?

VSCode has a great set of tools to work remotely. I installed Remote WSL (used in the last post), Remote SSH was the one we used with OpenShift (maybe I will write about it, too), and with this post, I'm going to use Remote Containers. All three of them will work inside the Remote Explorer within VSCode. All three add-ins work pretty similarly.

If the remote machine doesn't have the VSCode Server installed, the remote tool will install it and start it. The VSCode Server is like a full VSCode without a user interface. It also needs to have add-ins installed to work with the specific technologies. The local VSCode connects to the remote VSCode Server and mirrors it in the user interface of your locally installed VSCode. It is like a remote session to the other machine but feels local.

Setup the demo

I created a small ASP.NET Core MVC project:

dotnet new mvc -n RemoteDocker -o RemoteDocker
cd RemoteDocker

Then I added a Dockerfile to it:

FROM mcr.microsoft.com/dotnet/sdk:5.0

COPY . /app

WORKDIR /app

EXPOSE 5000 5001

# ENTRYPOINT ["dotnet", "run"] not needed to just work in the container

If you don't have the Docker extension installed, VSCode will ask you to install it as soon as you have the Dockerfile open. If it's installed, you can just right-click the Dockerfile in the VSCode Explorer and select "Build image...".

image-20210203220213602

This will prompt you for an image name. You can use the proposed name which is "remotedocker:latest" in my case. It seems it uses the project name or the folder name which makes sense:

image-20210203220356005

Select the Docker tab in VSCode and you will find your newly built image in the list of images:

image-20210203220705183

You can now right-click the tag latest and choose "Run Interactive". If you just choose "Run", the container stops because we commented out the entry point. We need an interactive session. This will start up the container and it will now appear as a running container in the container list:

image-20210203220954330

You can browse and open the files inside the container from this containers list, but editing will not work. This is not what we want to do. We want to remotely connect VSCode to this Docker container.

Connecting to Docker

This can be done using two different ways:

  1. Just right-click the running container and choose "Attach Visual Studio Code"

image-20210203221956964

  2. Or select the Remote Explorer tab, ensure the Remote Containers add-in is selected in the upper-right dropdown box, and wait for the containers to load. If all the containers are visible, choose the one you want to connect to, right-click it, and choose "Attach to container" or "Attach in New Window". It does the same thing as the previous way.

image-20210203221546976

Now you have a VSCode instance open that is connected to the container. You can now see the files in the project, use the terminal inside the container, and edit the files inside the project.

image-20210203222354886

You can see that this is a different VSCode than your local instance by having a look at the tabs on the left side. Not all the add-ins are installed on that instance. In my case, the database tools are missing as well as the Kubernetes tools and some others.

Working inside the Container

Since we disabled the entry point in the dockerfile we are now able to start debugging by pressing F5.

image-20210204221414743

This also opens the local browser and shows the application that is running inside the container. This is really awesome. It really feels like local development:

image-20210204222047723

Let's change something to see that this is really working. Like in the last demo, I'm going to change the page title. I would like to see the name "Remote Docker demo":

image-20210204222347623

Just save and restart debugging in VSCode:

image-20210204222617177

That's it.

Conclusion

Isn't this cool?

You can easily start Docker containers to test, debug, and develop in a production-like environment. You can configure a production-like environment with all the Docker containers you need using docker-compose on your machine. Then add your development or testing container to the composition and start it all up. Now you can connect to this container and start playing around within this environment. It is all fast, accessible, and on your machine.
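As a rough sketch of what such a composition could look like (the service names, images, and ports here are just placeholders, not part of the original demo):

# docker-compose.yml - hypothetical sketch
version: '3.8'
services:
  webapi:
    # a production-like service you want to test against
    image: mcr.microsoft.com/dotnet/samples:aspnetapp
    ports:
      - "8080:80"
  devbox:
    # the development container you attach VSCode to
    image: mcr.microsoft.com/dotnet/sdk:5.0
    command: sleep infinity   # keep the container running without an entry point
    volumes:
      - ./:/app
    working_dir: /app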

This is cool!

I'd like to see if this also works with containers running on Azure. I will try it within the next weeks and maybe I can put the results in a new blog post.

Golo Roden: Service models in the cloud

Der Begriff "Cloud" ist längst in die Alltagssprache eingezogen, doch die wenigsten können im Detail erklären, was genau damit gemeint ist. Tatsächlich gibt es eine Definition des NIST, die vier Servicemodelle beschreibt. Welche sind das?

Jürgen Gutsch: Finally - My first book got published

I always had the idea to write a book. Twelve or thirteen years ago, Stefan Falz told me not to do it, because it is a lot of effort and takes a lot of your time. Even if my book is just a small one and smaller than Stefan's books for sure, now I know what he meant, I guess :-)

How it started

My journey writing a book started in fall 2018 when I started the "Customizing ASP.NET Core" series. A reader asked me to bundle the series as a book. I took my time to think about it and started to work on it in July 2019. The initial idea to use LeanPub and create a book the open source way was good, but there was no pressure, no timeline, and that project had a lower priority besides life and other stuff. The release of ASP.NET Core 5.0 was a good event to put some more pressure on it. From September last year on, I started to update all the content and samples to ASP.NET Core 5.0. I also updated the text in a way that it reads more like a book than a blog series.

Actually my very first book is a compilation of the old blog series, but updated to ASP.NET Core 5.0 and it includes an additional thirteenth chapter that wasn't part of the original series.

I was almost done by the end of October and ready to publish it around .NET Conf 2020, when .NET 5 and ASP.NET Core 5.0 were announced. Then I decided to try an experiment:

How it went

At that time, I did a technical review of a book about Blazor for Packt and I decided to ask Packt if my book would be worth publishing. They said yes and wanted to publish it. That was awesome. My idea was to improve the quality of the book, to have professional editors and reviewers, and most importantly, to not do the publishing and the marketing by myself.

The downside of this decision: I wasn't able to publish the book around the .NET Conf 2020. Packt started to work on it and it was a really impressive experience:

  • An editor worked on it to make the texts more "booky" than "bloggy", and I had to review and rework some texts
  • A fellow MVP Toi B. Wright did the technical review, and I had a lot more to fix.
  • Another technical reviewer executed all the samples and snippets, and I had to fix some small issues.
  • A copy editor went through all the chapters and had feedback about formatting.
  • In the meanwhile I had to work on the front matter and the preface.

I also never thought about a foreword for my book until I worked on the preface. I didn't want to write the foreword myself and had the right person in mind.

I asked Damien Bowden, the smartest and coolest ASP.NET Core security guru I know. He also is a fellow MVP and a famous blogger. His posts got shared many times and were often mentioned in the ASP.NET Community Standup. It's always a pleasure to talk to him and we had a lot of fun at the MVP summits in Redmond and Bellevue.

Thanks Damien for writing this awesome foreword :-)

How it is right now

Sure, my very first book is just a compilation of the old blog series, but updated to ASP.NET Core 5.0 and it includes an additional thirteenth chapter that wasn't part of the original series:

  1. Customizing Logging
  2. Customizing App Configuration
  3. Customizing Dependency Injection
  4. Configuring and Customizing HTTPS
  5. Using IHostedService and BackgroundService
  6. Writing Custom Middleware
  7. Content negotiation using custom OutputFormatter
  8. Managing inputs with custom ModelBinders
  9. Creating custom ActionFilter
  10. Creating custom TagHelpers
  11. Configuring WebHostBuilder
  12. Using different Hosting models
  13. Working with Endpoint Routing

This book also contains details about ASP.NET Core 3.1. I'm mentioning 3.1 where it differs from 5.0, because ASP.NET Core 3.1 is an LTS version and some companies will definitely stay on LTS.

Packt helped me raise the quality of the content, and it now is a compact cookbook with 13 recipes you should know about ASP.NET Core.

It is definitely a book for ASP.NET Core beginners who already know C# and the main concepts of ASP.NET Core.

Where to get it

Last Saturday, Packt published it on Amazon as a Kindle edition and as a paperback.

Damien, do you see your name below the title? ;-)

I guess it will be available on Packt as well soon, for those of you who have a Packt subscription.

It would be awesome if you would drop a review as soon as you have read it.

Thanks

I would like to say thanks to some people who helped me do this.

  • At first I say thanks to my family, friends, and colleagues who supported me and motivated me to finish the work.

  • I also say thanks to Packt. They did a great job supporting me and they added a lot more value to the book. I also like the cover design.

  • I say thanks again to Damien for that great foreword

  • Also thanks to the developer community and the readers of my blog, since this book is mainly powered by the community.

What's next?

My plan is to keep this book up-to-date. I will update the samples and concepts with every new major version.

For now, I will focus on my blog again. I've written almost nothing in the past six months. In any case, I already have an idea for another book :-)

Code-Inside Blog: Microsoft Graph: Read user profile and group memberships

In our application we have a background service that “syncs” user data and group membership information from the Microsoft Graph to our database.

The permission model:

Programming against the Microsoft Graph is quite easy. There are many SDKS available, but understanding the permission model is hard.

‘Directory.Read.All’ and ‘User.Read.All’:

Initially we only synced the “basic” user data to our database, but then some customers wanted to reuse some other data already stored in the graph. Our app required the ‘Directory.Read.All’ permission, because we thought that this would be the “highest” permission - this is wrong!

If you need “directory” information, e.g. memberships, the Directory.Read.All or Group.Read.All is a good starting point. But if you want to load specific user data, you might need to have the User.Read.All permission as well.
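As an illustration (not the actual sync code of our service), reading users and their group memberships could look roughly like the following sketch, assuming the Microsoft Graph .NET SDK (v4) together with Azure.Identity and an app registration that has the application permissions mentioned above; tenant id, client id, and secret are placeholders:

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Microsoft.Graph;

public static class GraphSyncSketch
{
    public static async Task RunAsync()
    {
        // Placeholders - use the values of your own app registration.
        var credential = new ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>");
        var graphClient = new GraphServiceClient(credential);

        // Read "basic" user data (User.Read.All); this returns the first page only,
        // a real sync would page through all results.
        var users = await graphClient.Users
            .Request()
            .Select("id,displayName,mail")
            .GetAsync();

        foreach (var user in users)
        {
            // Read the direct memberships (Directory.Read.All or Group.Read.All).
            var memberships = await graphClient.Users[user.Id].MemberOf
                .Request()
                .GetAsync();

            Console.WriteLine($"{user.DisplayName}: {memberships.Count} direct memberships");
        }
    }
}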

Hope this helps!

Golo Roden: What you should know about Unicode

Almost every developer knows Unicode, at least by hearsay. But many are not sure what exactly Unicode actually is, what encodings are, and how it all works in detail. What should one know about Unicode?

Marco Scheel: Microsoft Teams Incoming Webhook update required

With the Message Center notification MC234048, Microsoft announced a change to the Microsoft Teams app “Incoming Webhook”. The URL currently used will be deprecated by mid-April 2021. The exact wording is:

We will begin transitioning to the new webhook URLs on Monday January 11, 2021; however, existing webhooks URLs will continue to work for three (3) months to allow for migration time

Source (as of 2021-01-26): https://admin.microsoft.com/Adminportal/Home?#/MessageCenter/:/messages/MC234048

If you created a webhook prior to January 11, 2021, you will need to update your existing connector configuration!

This app is in regular use by most companies, if not disabled by a Teams App permission policy in the tenant. The app is a very easy option to post a message to a team. The URI of a webhook is cryptic and the only security in place. If you send a well-crafted HTTP message to the endpoint, you will create a Teams post in the channel the app is connected to. Here is the Microsoft documentation and a great community article.
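For illustration, posting such a message from PowerShell could look like this (the URI below is just a placeholder for a real webhook URL):

$uri = "https://outlook.office.com/webhook/<your-webhook-id>"   # placeholder, use your own webhook URL
$body = '{ "text": "Hello from the incoming webhook" }'
Invoke-RestMethod -Method Post -Uri $uri -ContentType 'application/json' -Body $body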

Currently Microsoft is using a non-tenant specific URI (outlook.office.com). The new URI will be tenant related (YOURTENANT.webhook.office.com).

This change is communicated for Microsoft Teams, but it is also a Microsoft 365 Group connector feature, so these might also be affected.

image

Check if the app is used

It could be a good idea to check if the app is active in your tenant. As a Teams administrator, you can check the application in your admin center.

image

Even if you checked the app and the Teams app permission policy, you could still have the app installed prior to this configuration. It is easy to check if the application is installed in a Microsoft Team. To query for installed apps, we need to use the preview version of the MicrosoftTeams module (1.1.10-preview as of writing). Using Teams PowerShell, you can get a list of teams the app is installed in.

Get the application ID and more details:

Get-TeamsApp | Where-Object { $_.DisplayName -eq "Incoming Webhook"}

Result:

ExternalId Id                                   DisplayName      DistributionMethod
---------- --                                   -----------      ------------------
           203a1e2c-26cc-47ca-83ae-be98f960b6b2 Incoming Webhook store

With the application id we now can query all teams and check if the app is installed:

Get-Team | ForEach-Object {
    $team = $_;
    $apps = Get-TeamsAppInstallation -TeamId $team.GroupId | Where-Object { $_.TeamsAppId -eq "203a1e2c-26cc-47ca-83ae-be98f960b6b2"};
    if ($apps -ne $null){
        $team;
    }
}

Result for my two teams with the app installed:

GroupId                              DisplayName        Visibility  Archived  MailNickName       Description
-------                              -----------        ----------  --------  ------------       -----------
a6687ed4-c1a6-4c7b-9171-2d625a60b76e GK Malachor MSDN   Public      False     GKMalachorMSDN     Check here for or…
75366f42-6fc6-4857-90d1-3283236789b6 20200906 Demo Acc… Private     False     20200906DemoAcces… 20200906 Demo Acc…

Based on this information we now can contact the owners/members of a team and make them check if they use the app and need to update the URI. Currently I am not aware of a method to get the specific channel the webhook is attached to. The user needs to check all the channels to find the connectors.

How to fix the problem

The user needs to navigate to the team and check for the connector of all channels:

image

image

Open the “x configured” (1) if available and click on the “Manage” (2) button for the specific implementation:

image

This will show you the current configuration of the webhook:

image

You need to click on “Update URL” and you will receive a new URI with the tenant-specific part. The connector page did not refresh automatically. I quit the page and reopened the dialog. Now the page is not complaining about a required update anymore and I could copy the new webhook URI:

image

Now you just need to remember and find the app you integrated the webhook in :)

NOTE: I was not able to update the incoming webhook if the account that created the webhook is not the account updating the webhook. You can see the account that did the setup in the connector list and you will notice the “Save” button is disabled. In this case, an easy option is to delete the webhook and recreate it with the same name.

image

Summary

Check your tenant (admin) or your teams (power users) for incoming webhook configurations. Remember, as soon as you update the URL, the old webhook URL will stop working and no longer accept messages. Updating the URL only solves 50% of the problem. You also need to update your Power Automate flows, Azure Functions, Azure Automation runbooks, or your PowerShell scripts in your on-prem servers' task scheduler.

Bonus

Get the owners of the groups to send an email:

Get-Team | ForEach-Object {
    $team = $_;
    $apps = Get-TeamsAppInstallation -TeamId $team.GroupId | Where-Object { $_.TeamsAppId -eq "203a1e2c-26cc-47ca-83ae-be98f960b6b2"};
    if ($apps -ne $null){
        Get-TeamUser -GroupId $team.GroupId -Role Owner | ForEach-Object {
            $owner = $_;
            $fields = @{
                Team = $team.DisplayName
                OwnerEmail = $owner.User
            }
            New-Object -TypeName PSObject -Property $fields;
        }
    }
}

Result: image

Golo Roden: Encrypting with elliptic curves

Elliptic curves form the basis of modern asymmetric cryptography. Mathematically, they are relatively complex, but how they work can nevertheless be explained in an illustrative way. So how do they work?

Marco Scheel: Create your Azure AD application via script - M365.TeamsBackup

If you are using Azure AD authentication for your scripts, apps, or other scenarios, at some point you will end up creating your own application in your directory. Normally you open the Azure portal and navigate to the “App registrations” part of AAD. This is fine during development, but if you want to share the solution or a customer wants to run the software in their own tenant, things get complicated and error prone. For my Microsoft Teams backup solution this is very real, because you need to hit all required permissions and configure the public client part, otherwise the solution will not run.

This post provides you with all the information needed to create your own script. I’m using my M365 Teams Backup solution as a reference. The key components are:

image

Choose a scripting environment (Azure CLI vs Azure AD PS)

During my day job I created some applications based on Microsoft Graph, and I tried a few approaches to script the Azure AD app creation. It is important to understand that an Azure AD application consists of two parts: the application registration is like a blueprint for your app, and the enterprise application is the implementation of that blueprint.

The application permissions are defined in the “App registration”. Here you select the permissions that your app will request from users in the tenant. Without consent the permissions are not in effect. If you have only registered an app but not received consent, the app will not be able to use the requested permissions. Check the Microsoft documentation for a deeper look at the consent framework.

Most of my applications leverage application permissions or require admin consent for delegated permissions. The “M365.TeamsBackup” solution uses a bunch of Microsoft Graph permissions, and some of them are pretty powerful. If your application has this kind of permission requirements, you need admin consent given by (in the best case) a global administrator.

If your apps are like mine, it might be best to use the Azure CLI, because as far as I know this is the only way to script the admin consent. I am not a CLI guy; I am a PowerShell fan. I struggled in the past with integrating the CLI and its output into my scripting flow, which is why I wanted to show you what can be done and how. If you are OK with opening the portal to give admin consent, or you don’t want to give admin consent during application setup, I also have an Azure AD PowerShell version of the script.

Setup Azure CLI and connect

The Azure CLI is not purely targeted at Azure AD; it is the other way around, because the CLI is used to script everything available in Azure. There are great Microsoft docs on installing the Azure CLI. I’m running on Windows, so I typically go the MSI route:

  • Download the release version of the MSI (that is what I’m running)
  • Install the MSI (bring some extra time, because the installation is slow)
  • After the installation open a new PowerShell (this ensures the path is set and available)
  • You can check whether the installation worked using the ‘az --version’ command

image

As you can see, my version is not up to date. As with most tools, you should keep it at the latest version. The Azure CLI can be updated by installing the newest MSI or by running ‘az upgrade’ from an elevated prompt; the upgrade command downloads the MSI and starts the installation for you.

The installation is finished, and now it is time to log in to your tenant. The CLI is different from your usual “PowerShell Connect-SERVICE” (SharePoint, AD, Teams, …) command: the Azure CLI remembers your last login. If you close and reopen your terminal, you will still be logged in. If you use the Azure CLI just for the one-time setup, please consider a logout after you finish any script. But first let’s log in. I’m a big fan of device code authentication where possible, and the Azure CLI supports this flow, so that is how I roll (a small PowerShell sketch below shows how I wrap the CLI output):

  • Login:
    • az login --use-device-code --allow-no-subscriptions
  • Check current login:
    • az account show
  • Logout:
    • az logout

image
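A small PowerShell sketch of how I wrap these calls so the JSON output becomes usable objects (the property names are taken from the az account show output):

# One-time login for the setup script, then show who is signed in
az login --use-device-code --allow-no-subscriptions
$account = az account show | ConvertFrom-Json
"Signed in to tenant {0} as {1}" -f $account.tenantId, $account.user.name
# Don't forget to log out once the setup is done:
# az logout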

Check my script using the Azure CLI:

Setup Azure AD PowerShell and connect

Azure AD PowerShell versioning is complicated. For my job (M365 modern collaboration) I always use the AzureADPreview module, and this is what I recommend in most cases. The AzureAD module cannot be installed side by side with AzureADPreview, so at some point you will have to move to AzureADPreview. As the Azure CLI is not PowerShell based, I use my Windows Terminal default, which is PowerShell 7. The Azure AD modules are not yet ready for PowerShell 7, so you will need to open your old-school Windows PowerShell 5.

Installing the Azure AD module works like most modern modules and relies on the PowerShell Gallery.

  • Open your PowerShell as an administrator and execute
    • Install-module AzureADPreview
  • Check your version by opening a non-admin session
    • Get-Module AzureADPreview -ListAvailable

image

If you are not on the latest version, you need to upgrade the module like any other module:

  • Open your PowerShell as an administrator and execute
    • Update-Module AzureADPreview

image

To connect to Azure AD you cannot rely on device code authentication; you will need to log in directly when the script executes. If you need to execute multiple scripts, consider taking the login command “Connect-AzureAD” out of the script to prevent multiple logins (incl. MFA); a small sketch after the list below shows one way to skip the prompt when a session already exists.

  • Open your PowerShell and execute
    • Connect-AzureAD
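A minimal sketch of such a guard, assuming Get-AzureADCurrentSessionInfo throws when no session exists yet:

# Only prompt for login if there is no active Azure AD session yet
try {
    $null = Get-AzureADCurrentSessionInfo -ErrorAction Stop
}
catch {
    Connect-AzureAD | Out-Null
}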

Check my script using Azure AD PowerShell:

Script the creation process

Now we are prepared and we can create our application. The easy part is creating the “App registration”. If your app needs permissions, the trouble begins. There are two challenges:

  • Setting the permission in the two scripts
  • Getting the permission definition in the first place

Getting the permissions translated from the nice Azure AD portal UX into a script-ready form is harder to research than expected. I’ve done a blog post (in German) about this in the past. In the next section I will show you how to get the ID of the Microsoft Graph application and the IDs of the required permissions.

My Teams Backup solution requires many permissions from the Microsoft Graph (this time delegated permissions, because some of the application permissions require Microsoft approval), so let’s have a look at an implementation that is not error-prone and is also easy to read and extend.

Azure CLI

For reference: create-aadapp-cli.ps1 Microsoft docs: az command overview

Use the Azure CLI to query the Azure Active Directory for the service principal with the name of the Microsoft Graph.

$servicePrincipalName = "Microsoft Graph";
$servicePrincipalId = az ad sp list --filter "displayname eq '$servicePrincipalName'" --query '[0].appId' | ConvertFrom-Json

Using the query parameter we select the first result (there is only one Microsoft Graph) and “cast” the app ID. ConvertFrom-Json makes it easy to parse the result, and we receive “00000003-0000-0000-c000-000000000000” as the value for the app ID.

Next, we need to get the ID for each required permission. This info is part of the “oauth2Permissions” property from the MS Graph service principal:

$servicePrincipalNameOauth2Permissions = @("Channel.ReadBasic.All", "ChannelMember.Read.All", "ChannelMessage.Read.All", "ChannelSettings.Read.All", "Group.Read.All", "GroupMember.Read.All", "Team.ReadBasic.All", "TeamMember.Read.All", "TeamSettings.Read.All", "TeamsTab.Read.All");
# Assumed initialization of the request object: the Microsoft Graph app id plus an
# (initially empty) permission list, matching the JSON format "az ad app create" expects
$reqGraph = @{
    resourceAppId  = $servicePrincipalId
    resourceAccess = @()
}
(az ad sp show --id $servicePrincipalId --query oauth2Permissions | ConvertFrom-Json) | ? { $_.value -in $servicePrincipalNameOauth2Permissions} | % {
    $permission = $_

    $delPermission = @{
        id = $permission.Id
        type = "Scope"
    }
    $reqGraph.resourceAccess += $delPermission
}

Using the “-in” filter we receive all the entries we need from the array. To use the IDs in the next command, the script builds a hashtable that can be converted into the required JSON (correct: a file). The permissions are added as “Scope”, which represents “Delegated” permissions. The “az ad app create” command requires a file with the permissions.

Set-Content ./required_resource_accesses.json -Value ("[" + ($reqGraph | ConvertTo-Json) + "]")
$newapp = az ad app create --display-name $appName --available-to-other-tenants false --native-app true --required-resource-accesses `@required_resource_accesses.json | ConvertFrom-Json

This creates an app that is only valid in your tenant (“--available-to-other-tenants false”) and allows login as a public client (“--native-app true”). The result is JSON representing the new application.

The benefit of using the Azure CLI is the possibility to grant admin consent for the newly created app:

az ad app permission admin-consent --id $newapp.appId

PowerShell with Azure AD

For reference: create-aadapp.ps1 Microsoft docs: Azure AD Application command overview

To get the ID for the Microsoft Graph Service principal we query the current directory and filter to the display name.

$servicePrincipalName = "Microsoft Graph";
$servicePrincipal = Get-AzureADServicePrincipal -All $true | ? { $_.DisplayName -eq $servicePrincipalName };

Where the Azure CLI requires a file to set up the permissions, the PowerShell version requires a .NET object. The Microsoft Graph service principal’s AppId becomes the ResourceAppId.

$reqGraph = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess";
$reqGraph.ResourceAppId = $servicePrincipal.AppId;

From the returned object we select the “Oauth2Permissions” property and filter it against our array of required permissions. For each permission another .NET object is created and added to the collection named “ResourceAccess”.

$servicePrincipalNameOauth2Permissions = @("Channel.ReadBasic.All", "ChannelMember.Read.All", "ChannelMessage.Read.All", "ChannelSettings.Read.All", "Group.Read.All", "GroupMember.Read.All", "Team.ReadBasic.All", "TeamMember.Read.All", "TeamSettings.Read.All", "TeamsTab.Read.All");
$servicePrincipal.Oauth2Permissions | ? { $_.Value -in $servicePrincipalNameOauth2Permissions} | % {
    $permission = $_
    $delPermission = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList $permission.Id,"Scope" #delegate permission (oauth) are always "Scope"
    $reqGraph.ResourceAccess += $delPermission
}

Now it is time to set up the application (only in this directory and as a public client) and retrieve the ID of the new app:

$newapp = New-AzureADApplication -DisplayName $appName -AvailableToOtherTenants:$false -PublicClient:$true -RequiredResourceAccess $reqGraph;
"ClientId: " + $newapp.AppId;
"TenantId: " + (Get-AzureADTenantDetail).ObjectId;
"Check AAD app: https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationMenuBlade/CallAnAPI/appId/" + $newapp.AppId + "/objectId/" + $newapp.ObjectId + "/isMSAApp/";

The last line creates a link into the Azure AD portal where you can grant admin consent.

Summary

You can check out my linked solution to get the full picture, from the client code using the app to the setup required for authentication. I recommend checking out the Azure CLI, because it is the most complete solution, even though it does not feel natural to me as a PowerShell guy. The example should give you an idea of how to get the needed IDs and how to construct the required objects/file to create the app. Let me know how you set up Azure AD apps and whether there are other options. I’ve ignored the PowerShell Az module because it also does not let you grant admin consent, and chances are higher that you already have the AzureAD PowerShell module installed.

Golo Roden: Which programming languages should you learn?

Personal development is a big topic for many developers, especially at the beginning of a new year. Learning a new programming language is particularly well suited for this. Which languages are worth considering?

Golo Roden: RTFM #2: Computers and Intractability

The RTFM series introduces timeless and recommendable books for developers at irregular intervals. It is primarily about technical books, but occasionally novels are included as well. Today it is about "Computers and Intractability" by Michael R. Garey and David S. Johnson.

Golo Roden: What you should know about complexity theory

Complexity theory is an area of theoretical computer science that concerns the design and analysis of algorithms. The topic is also relevant in practice, though. What should you know about it?

Jürgen Gutsch: Working inside WSL using Visual Studio Code

It has been a long time since my last post... I was kind of busy finalizing a book. Also, COVID-19 and the remote-only working periods stole my commute writing time: two hours on the train that I used to spend writing blog posts and other stuff. My book is finished and will be published soon, and to make 2021 better than 2020, I am forcing myself to write for my blog again.

For a while now, I have had WSL2 (Windows Subsystem for Linux) installed on my computer to play around with Linux and to work with Docker. We did a lot with Docker last year at the YOO, and it is pretty easy using Docker Desktop and the WSL. Recently I had to check a demo building and running on Linux. My first thought was to run a Docker container to work with, but this seemed to be too much effort for a simple check.

So why not do this in the WSL directly?

If you don't have the WSL installed, you should follow this installation guide: https://docs.microsoft.com/en-us/windows/wsl/install-win10

If the WSL is installed, you will have an Ubuntu terminal to work with. It seems this hosts the wsl.exe, which is the actual shell to work in:

bash1

You can also start wsl.exe directly, or host it in the Windows Terminal or in cmder, which is my favorite terminal:

cmder

Installing the .NET 5 SDK

The installation packages for the Linux distributions are a little bit hidden inside the docs. You can follow the links from https://dot.net or just look here for the Ubuntu packages: https://docs.microsoft.com/de-de/dotnet/core/install/linux#ubuntu

As you can see in the first screenshot, my WSL2 is based on Ubuntu 18.04 LTS. So, I should choose the link to the package for this specific version:

ubuntu packages

The link forwards me to the installation guide.

At first, I need to download and add the key to the Microsoft package repository. Otherwise, I won't be able to download and install the package:

wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

After that, I can install the .NET 5 SDK:

sudo apt-get update; \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-5.0

This takes some time to finish. Once it is done, you can verify the installation by typing dotnet --info into the terminal:

dotnet --info

That's it about the installation of the .NET 5 SDK. Now let's create a project.

Creating an ASP.NET Core project inside the WSL

This doesn't really differ from creating a project on Windows, except it is on the Linux file system.

Create a Razor Pages project using the dotnet CLI

dotnet new webapp -o wsldemo -n wsldemo
cd wsldemo

After changing into the project directory, you can start it using the following command:

dotnet run

You can now see the familiar output in your terminal:

dotnet run

The cool thing now is that you can call the running web app with your local browser. The request gets forwarded directly into the WSL:

WSL demo
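You can also check this from a PowerShell 7 prompt on the Windows side; since the ASP.NET Core dev certificate inside the WSL is not trusted by Windows, the quick sketch below simply skips the certificate check:

# Call the Razor Pages app running inside the WSL from Windows (expects HTTP 200)
(Invoke-WebRequest -Uri "https://localhost:5001/" -SkipCertificateCheck).StatusCode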

That's it about creating and running an application inside the WSL. Let's see how you can use your local VSCode to develop inside the WSL

Developing inside WSL using Visual Studio Code

To remotely develop in the WSL using VSCode, you need to have the Remote - WSL extension installed

Remote WSL

This extension will be visible in the Remote Explorer in VS Code. It directly shows you the existing WSL Target on your computer:

Remote Explorer

Right-click the Ubuntu-18.04 item and connect, or click the small connect icon on the right of the WSL item to connect to the WSL. This opens a new instance of VSCode that doesn't have a folder open. If you now open a folder, you can directly select the project folder from inside the WSL:

Open Folder

Click OK or press Enter if you selected the right folder. When you connect the first time, it installs the VSCode Server inside the WSL, which is the actual VSCode instance that does the work. You really work, code, and debug inside the WSL; your local VSCode instance is more or less a remote session into the WSL. IntelliSense, code analysis, and all the good stuff run inside the WSL. This also means you might need to install VSCode extensions again in the WSL, even if you already installed them on your machine. Even the VSCode terminal is connected to the WSL:

VSCode Terminal

The Explorer shows you the current project:

VSCode Explorer

To see that remote coding is working, I open the _Layout.cshtml in the Pages/Shared/ folder and change the app titles to make them a little more readable. I change all occurrences of wsldemo to WSL Demo:

WSL Demo code

There is another occurrence at the end of the file.

What I didn't try until writing this line is pressing F5 in VS Code to start debugging the application. So I do it now and voilà: debugging starts, and a browser opens and shows my changes:

WSL Demo

That's it.

Conclusion

This was really easy and went smoothly. Microsoft did a lot to make remote development as easy as possible. Now I'm also able to test my applications on Linux or to develop for Linux.

Actually, I didn't expect that I could call a web app that runs inside the WSL directly from a browser in Windows. This makes testing and front-end debugging really easy.

To not mess up the WSL, I would avoid doing too many different things in it. Installing a .NET 5 runtime isn't a big deal, but if I also want to test an Nginx integration or other stuff, I would go with Docker containers. Remote development inside a Docker container is also possible, and I will write about it in one of the next posts.

Golo Roden: How do you develop yourself further?

How can you develop personally as a developer, beyond engaging with technologies, concepts, and methods?

Holger Schwichtenberg: Plans for Entity Framework Core 6.0 published

Microsoft has announced a list of features that are planned to ship in version 6.0 of the object-relational mapper in November 2021.

Golo Roden: What you should know about cryptography

Cryptography is a science that deals with encrypting and protecting information. As a subfield of computer science, it has become indispensable in the modern IT world. It is therefore helpful to know the basic terms.

Holger Schwichtenberg: Several roads lead to Rome in PowerShell, and the performance differences

In Microsoft's PowerShell there are often several ways to reach a goal. Sometimes there are considerable differences in speed between them.

Martin Richter: HP + UPS = customer service horror / or you need a lot of patience (part 1)

After a long time, I decided in the summer to retire my long-serving Samsung laptop. I treated myself to an HP Envy 15″ laptop. A really nice one with all the bells and whistles.
Above all, I wanted a touchscreen.

At some point I noticed that the battery level indicator was inaccurate, and the HP assistant also told me I should calibrate my battery. Whatever for.

That didn't work, though. The calibration prompt stayed. So, when I finally had some time and the laptop was not needed that much, I contacted HP. Here is the sequence of events in chronological order:

Tue – 22.12.2020 16:39 – first contact with HP support
A new experience, too: via Facebook chat. After some instructions and tests I was asked to run, there was no improvement.

Wed – 23.12.2020 08:35 – contacted HP again
By now I had uploaded all logs, messages, and screenshots.
The laptop has to be sent in.

Wed – 23.12.2020 10:55 – reply from HP
The laptop has to be sent in, with the following instructions:
– The laptop will be picked up
– I have to make a backup (annoying, but I would never have left my data on the laptop anyway)
– The laptop will be picked up by UPS
– UPS will bring the packaging and the label.
That was moving too fast for me… wipe my data off the laptop, make a backup…

Wed – 23.12.2020 15:15 – I give the OK for the pickup
OK, that went faster than expected after all.
I pass on all contact details in case they are not on file yet.

Thu – 24.12.2020 08:43 – reply from HP
Once again I receive the instructions: “Do not pack it”, “The label comes from UPS”, “The box comes from UPS”

Thu – 24.12.2020 09:38 – email from HP (hp.customer.care@hp.com)
Email confirmation with the case data and all known information once more.
With a pickup date of 28.12. confirmed for UPS.

Mon – 28.12.2020 07:04 – email from UPS (HP.notifications@ups.com)
Email confirmation of the pickup date of 28.12. by UPS.

Mon – 28.12.2020, morning
UPS usually comes to us in the morning or later in the afternoon.
A UPS truck drives past our house. OK. Patience. They run more often during the Christmas season.
But no UPS on Monday.
Yes, our address can be found with Google. Our address has been unchanged for years. We have been receiving deliveries from parcel services for years! (Just in case anyone thinks we live in the middle of nowhere.)

Tue – 29.12.2020, morning
UPS drives past our house again. On the UPS site the tracking shows nothing for my shipment. No information about the pickup. No address. Nothing. Not even that it is supposed to be picked up.

I have no desire to wait for UPS forever and call the hotline.

Tue – 29.12.2020 11:35 – call to UPS (20 cents per call)

I give my UPS tracking number and learn that there is no pickup for this tracking number. I ask for the supervisor.

An unpleasant conversation:
– UPS would never provide boxes.
– I would have to print the label and arrange the pickup myself.
– I supposedly never received a confirmation from UPS by email.
– The agent cannot give me a UPS email address, otherwise I would have forwarded him the email.
I end the call absolutely furious.

Tue – 29.12.2020 11:50 – call to HP

After x minutes, and a sympathetic conversation, I end up with the logistics department.
They say they will take care of it and escalate it with UPS.

Tue – 29.12.2020, afternoon

No UPS.

Tue – 29.12.2020 16:49 – call to UPS (20 cents per call)

Exactly the same course of events.
– Supposedly I have no pickup appointment.
– I must have a label.
I am furious, and the customer agent is pointedly unwilling.
Apparently nobody at UPS has a clue how the process with HP works.

Tue – 29.12.2020 16:59 – call to HP

The logistics department again. Again the assurance: “We will take care of it!”

Tue – 29.12.2020 17:24 – email from HP

The shipment information again and the request that I should contact UPS. Including the familiar 20-cent phone number.
Should I laugh or cry?

Wed – 30.12.2020, morning

As expected, the UPS truck drives past our house.

Wed – 30.12.2020 10:30 – call to HP

Another conversation with a support agent.
He understands my frustration, but it doesn't help.
Another conversation with someone from logistics. Again the promise that something will happen.
Passed on my phone number and mobile number once more.
I am promised that I will receive a text message or a call.

Thu – 31.12.2020 10:30
No UPS. No promised text message. No further email. No call!
I am angry. I have no desire to spend my time waiting or to instruct the other people in the house what to do if UPS shows up (including writing a note and hanging it on the door).

HP is friendly but incapable of getting anything moving.

At UPS nobody has a clue, and they are not even willing to help you. (And you pay 20 cents per call for that.)

Note: At UPS I did not once speak to someone who understood me 100%, or whom I could understand 100%.
I have nothing against an accent, but understanding and expressing yourself should be possible.

By the way: If the calibration had been part of HP's final quality check, the device would never have been shipped. But here, too, everything is simply offloaded onto the customer.

To be continued…




Copyright © 2017 Martin Richter
This feed is intended for personal, non-commercial use only. Using this feed or the posts published here on other websites requires the express permission of the author.

Code-Inside Blog: How to get all distribution lists of a user with a single LDAP query

In 2007 I wrote a blogpost about how easy it is to get all “groups” of a given user via the tokenGroups attribute.

Last month I had the task to check why “distribution list memberships” are not part of the result.

The reason is simple:

A pure distribution list (not security enabled) is not a security group, and only security groups are part of the “tokenGroups” attribute.

After some thoughts and discussions we agreed, that it would be good if we could enhance our function and treat distribution lists like security groups.

How to get all distribution lists of a user?

Getting all groups of a given user might seem trivial, but the problem is that groups can contain other groups. As always, there are a couple of ways to get a “full, flat” list of all group memberships.

A stupid way would be to load all groups in a recursive function - this might work, but will result in a flood of requests.

A clever way would be to write a good LDAP query and let the Active Directory do the heavy lifting for us, right?

1.2.840.113556.1.4.1941

I found some sample code online with a very strange LDAP query, and it turns out: there is a “magic” LDAP matching rule called LDAP_MATCHING_RULE_IN_CHAIN (OID 1.2.840.113556.1.4.1941), and it does everything we are looking for:

// Transitive lookup of all groups (security and distribution) the user is a member of
var getGroupsFilterForDn = $"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={distinguishedName}))";
using (var dirSearch = CreateDirectorySearcher(getGroupsFilterForDn))
{
    using (var results = dirSearch.FindAll())
    {
        foreach (SearchResult result in results)
        {
            // Only take results that carry the properties we need
            if (result.Properties.Contains("name") && result.Properties.Contains("objectSid") && result.Properties.Contains("groupType"))
                groups.Add(new GroupResult() { Name = (string)result.Properties["name"][0], GroupType = (int)result.Properties["groupType"][0], ObjectSid = new SecurityIdentifier((byte[])result.Properties["objectSid"][0], 0).ToString() });
        }
    }
}

With a given distinguishedName of the target user we can load all distribution and security groups (see below…) transitively!
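If you just need the same transitive list ad hoc from PowerShell, a rough equivalent (not the C# approach above) using the ActiveDirectory module looks like this; the account name is a placeholder:

# Transitive group lookup via LDAP_MATCHING_RULE_IN_CHAIN with the ActiveDirectory module
$dn = (Get-ADUser -Identity "jdoe").DistinguishedName   # "jdoe" is a placeholder sAMAccountName
Get-ADGroup -LDAPFilter "(member:1.2.840.113556.1.4.1941:=$dn)" |
    Select-Object Name, GroupCategory, GroupScope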

Combine tokenGroups and this

During our testing we found some minor differences between the LDAP_MATCHING_RULE_IN_CHAIN and the tokenGroups approach: some “system-level” security groups were missing with the LDAP_MATCHING_RULE_IN_CHAIN approach. In our production code we use a combination of the two, and it seems to work.

A full demo code how to get all distribution lists for a user can be found on GitHub.

Hope this helps!

Sebastian Seidel: A review of the year 2020

I don't usually review a year, but 2020 was no ordinary year for any of us. With the onset of the COVID-19 pandemic, we at Cayas Software moved completely to remote projects. Home office was even possible for some of us without giving up the comforts of the office that we are used to.

Holger Schwichtenberg: Windows Forms lives!

Surprisingly, Microsoft is not simply offering the old Windows Forms in the new .NET world, but is actively working on its further development. What does that mean for .NET developers?

Stefan Henneken: IEC 61131-3: Abstract FB vs. Interface

Function blocks, methods and properties can be marked as abstract since TwinCAT V3.1 build 4024. Abstract FBs can only be used as basic FBs for inheritance. Direct instantiation of abstract FBs is not possible. Therefore, abstract FBs have a certain similarity to interfaces. Now, the question is in which case an interface and in which case an abstract FB should be used.

A very good description of the ABSTRACT keyword can be found in the post The ABSTRACT keyword on the blog PLCCoder.com or in the Beckhoff Information System. So the most important points are repeated here only briefly.

abstract methods

METHOD PUBLIC ABSTRACT DoSomething : LREAL
  • consist exclusively of the declaration and do not contain any implementation. The method body is empty.
  • can be public, protected or internal. The access modifier private is not allowed.
  • cannot additionally be declared as final.

abstract properties

PROPERTY PUBLIC ABSTRACT nAnyValue : UINT
  • can contain getters, setters, or both.
  • getter and setter consist only of the declaration and do not contain any implementation.
  • can be public, protected or internal. The access modifier private is not allowed.
  • cannot additionally be declared as final.

abstract function blocks

FUNCTION_BLOCK PUBLIC ABSTRACT FB_Foo
  • As soon as a method or a property is declared as abstract, the function block must also be declared as abstract.
  • No instances can be created from abstract FBs. Abstract FBs can only be used as basic FBs when inherited.
  • All abstract methods and all abstract properties must be overwritten to create a non-abstract FB. An abstract method or an abstract property becomes a non-abstract method or a non-abstract property by overwriting.
  • Abstract function blocks can additionally contain non-abstract methods and/or non-abstract properties.
  • If not all abstract methods or not all abstract properties are overwritten during inheritance, the inheriting FB can again only be an abstract FB (step-by-step concretization).
  • Pointers or references of an abstract FB type are permitted. However, these can refer to non-abstract FBs and thus call their methods or properties (polymorphism).

Differences between an abstract FB and an interface

If a function block consists exclusively of abstract methods and abstract properties, then, it does not contain any implementations and thus has a certain similarity to interfaces. However, there are some special features to consider in detail.

                                                    Interface    abstract FB
supports multiple inheritance                       +            –
can contain local variables                         –            +
can contain non-abstract methods                    –            +
can contain non-abstract properties                 –            +
supports further access modifiers besides public    –            +
applicable with array                               +            only via POINTER

The table can give an impression that interfaces can be almost completely replaced by abstract FBs. However, interfaces offer greater flexibility because they can be used in different inheritance hierarchies. The post IEC 61131-3: Object composition with the help of interfaces shows an example of this.

As a developer, you therefore want to know when an interface and when an abstract FB should be used. The simple answer is preferably both at the same time. This provides a standard implementation in the abstract base FB, which makes it easier to derive. However, every developer has the freedom to implement the interface directly.

Example

Function blocks should be designed for the data management of employees. A distinction is made between permanent employees (FB_FullTimeEmployee) and contract employees (FB_ContractEmployee). Each employee is identified by his first name (sFirstName), last name (sLastName) and the personnel number (nPersonnelNumber). Corresponding properties are provided for this purpose. In addition, a method is required that outputs the full name including personnel number as a formatted string (GetFullName()). The calculation of the monthly income is done by the method GetMonthlySalary().

The two function blocks differ in the calculation of the monthly income. While the permanent employee receives an annual salary (nAnnualSalary), the monthly salary of the contract employee results from the hourly wage (nHourlyPay) and the monthly working hours (nMonthlyHours). The two function blocks therefore have different properties for calculating the monthly income. The GetMonthlySalary() method is contained in both function blocks but differs in its implementation.

1. Solution approach: Abstract FB

Since both FBs have a lot in common, it is obvious to create a basic FB (FB_Employee). This basic FB contains all methods and properties that are contained in both FBs. However, since the method GetMonthlySalary() differs in its implementation, it is marked as abstract in FB_Employee. This means that all FBs that inherit from this base FB must overwrite GetMonthlySalary().

(abstract elements are displayed in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub

The cons

The solution approach looks very solid at first glance. However, as mentioned above, the use of inheritance can also have disadvantages, especially when FB_Employee is part of an inheritance chain. FB_FullTimeEmployee and FB_ContractEmployee also inherit everything that FB_Employee inherits via this chain. If FB_Employee is used in a different context, an extensive inheritance hierarchy can lead to further problems.

There are also restrictions when trying to store all instances in an array as references. The compiler does not allow the following declaration:

aEmployees : ARRAY [1..2] OF REFERENCE TO FB_Employee; // error

Pointers must be used instead of references:

aEmployees : ARRAY [1..2] OF POINTER TO FB_Employee;

However, there are some things to consider when using pointers (e.g., online change). For this reason, I try to avoid pointers as far as possible.

The pros

Although it is not possible to create an instance of an abstract FB directly, the methods and properties of an abstract FB can be accessed by reference.

VAR
  fbFullTimeEmployee :  FB_FullTimeEmployee;
  refEmployee        :  REFERENCE TO FB_Employee;
  sFullName          :  STRING;
END_VAR
refEmployee REF= fbFullTimeEmployee;
sFullName := refEmployee.GetFullName();

It can also be an advantage that the method GetFullName() and the properties sFirstName, sLastName and nPersonnelNumber are already completely implemented in the abstract base FB and are not declared as abstract there. It is no longer necessary to overwrite these elements in the derived FBs. For example, if the formatting of the name needs to be adapted, this only has to be done in one place.

2. Solution approach: Interface

An approach with interfaces is very similar to the previous variant. The interface contains all methods and properties that are the same for both FBs (FB_FullTimeEmployee and FB_ContractEmployee).

Sample 2 (TwinCAT 3.1.4024) on GitHub

The cons

Because the FB_FullTimeEmployee and FB_ContractEmployee implement the I_Employee interface, each FB must also contain all methods and all properties from the interface. This also applies to the method GetFullName(), which performs the same calculation in both cases.

If an interface has been published (e.g., by a library) and used in different projects, changes to this interface are no longer possible. If you add a property, all function blocks that implement this interface must also be adapted. This is not necessary when inheriting FBs. If a basic FB is expanded, no FBs that inherit from it have to be changed unless the new methods or properties are abstract.

Tip: If it happens that you have to adapt an interface later, you can create a new interface. This interface inherits from the original interface and is extended by the necessary elements.

The pros

Function blocks can implement several interfaces. Interfaces can therefore be used more flexibly in many cases.

With a function block, a specific interface can be queried at runtime using __QUERYINTERFACE(). If this has been implemented, access to the FB is possible via this interface. This makes the use of interfaces very flexible.

If the implementation of a certain interface is known, the access via the interface can also be done directly.

VAR
  fbFullTimeEmployee :  FB_FullTimeEmployee;
  ipEmployee         :  I_Employee;
  sFullName          :  STRING;
END_VAR
ipEmployee := fbFullTimeEmployee;
sFullName := ipEmployee.GetFullName();

In addition, interfaces can be used as data type for an array. All FBs that implement the interface I_Employee can be added to the following array.

aEmployees : ARRAY [1..2] OF I_Employee;

3. Solution approach: Combination of abstract FB and interface

Why not combine both approaches and thus benefit from the advantages of both variants?

(abstract elements are displayed in italics)

Sample 3 (TwinCAT 3.1.4024) on GitHub

When combining the two approaches, the interface is provided first. Then the use of the interface is simplified by the abstract function block FB_Employee. Shared implementations of common methods can be provided in the abstract FB, so they do not have to be implemented multiple times. If new FBs are added, they can also implement the I_Employee interface directly.

The effort for the implementation is initially a little higher than with the two previous variants. However, this extra effort can be worth it, especially with libraries that are used by several programmers and further developed over the years.

  • If the user should not create an own instance of the FB (because this does not seem to be useful), then abstract FBs or interfaces are helpful.
  • If one wants to have the possibility to generalize into more than one basic type, then an interface should be used.
  • If an FB can be set up without implementation of methods or properties, then an interface should be preferred over an abstract FB.

Marco Scheel: Microsoft Teams backup your channel messages with Microsoft Graph

I have sat down for four weekends in a row to come up with a solution for two problems I encountered in the past:

  • I wanted to save all images from a beautiful Teams channel message. Teams does not save these inline images to SharePoint; the only way to download them is to click on each image.
  • We are doing a Tenant to Tenant migration at glueckkanja-gab after our merger. Most of the migration tools will support the migration of Teams chat, but not all tools are available, and some implementations are lacking features for a more flexible approach.

I started my career as a developer and my heart still thinks in Visual Basic. Do not be afraid, I migrated my dev skills to C# a long time ago, and my arrays start at 0, not 1.

image

I really wanted to try a few things regarding Microsoft Graph, the Microsoft Graph SDK, and Microsoft Teams. The Microsoft Teamwork part of the API is a solid starting point (if you look at the beta version), and the API is getting a very mature set of capabilities. I'm a huge fan of Azure Functions, and I've done quite a few projects that talk to the Microsoft Graph using application permissions. I checked the documentation, and if I wanted to go this route I would have to request special permissions from Microsoft to access the content without a real user. For now I decided to go with a console application and an Azure AD device code flow.

I have published the source code on GitHub. Maybe this will give your own solution a kickstart. Just a quick disclaimer: a lot of this stuff is first-time code for me (DI in a console app, the Graph auth provider, logging, …). I think at some points I over-engineered the solution and got distracted from my real business problems ;)

https://github.com/marcoscheel/M365.TeamsBackup

In the following sections I will show you how I approached the problem, how the result of the backup looks, how to setup and how to run it for yourself.

Check your migration vendor of choice first!

I have only limited experience with the following tools. But these vendors are the big players, and you can easily get in touch with them to get a demo or further information.

It is ok if you stop here if your migration needs are satisfied by one of these vendors. If you are interested in my approach it might still be worth reading on ;)

My goal

Since the beginning of the first lockdown in April, we at Glück & Kanja have been doing a cooking event as a social online gathering. We are still doing it, and we have created a lot of content since the first meeting. The event is hosted as a Microsoft Teams channel meeting every Monday. Check out our blog post (German). The event created a ton of beautiful pictures, but they were stuck inside a Teams thread without easy access like SharePoint (if you consider SharePoint an easy access method). The backup should save all the attachments to the file system. Sneak preview of the result: image

For our migration to the new tenant, we are running a set of tools. The tooling we have at hand has some limits regarding channel message migration. We wanted to preserve the chat messages for a Team without polluting the new target team. In some cases, we also want to archive the Team to a central location (a single SharePoint site collection) for archiving purposes. For this case we would need to export the chat to a readable and searchable format, preserving most of the content.

My goal is not to provide a ready-to-use backup solution, although for some of your scenarios it might work as-is. I also wanted to provide a code base that can accommodate your more specific requirements. We don't rely on extensive app integration in our teams, so handling the adaptive cards in a chat message is not really part of the HTML generation solution. But the JSON from the Microsoft Graph will have all or some of that information included. If you need to handle adaptive card content, this solution could be a great kickstart, so you don't have to write a lot of boilerplate code to get to the message attachment properties.

The solution will try to preserve as much information as needed from the Microsoft Graph and dump it to disk. From this data you can run multiple HTML conversions to address different needs for the content representation. The HTML generation no longer interacts with the Microsoft Graph and could be re-run months after the original content was deleted (decommissioning of the old tenant).

My current focus is on the following data:

  • Basic team metadata including members
  • Channel metadata including members for private channels
  • Messages and replies with author and dates
  • Message body and inline pictures (hosted content)

My approach

I mentioned it a few times: I split the solution into two parts. The first part talks to the Microsoft Graph and stores the responses as JSON files. For every Team I create a folder with the ID of the group; the response of the team request is stored in that folder as “team.json”. For every channel I create a folder with the channel ID, and the channel response is stored as “channel.json”. For every message (the entry point of a thread, if replies are available) I create a folder with the message ID, and the message response is stored as “message.json”. For every reply to a message I create another file with the pattern “message.{messageid}.json”. Every message is checked for inline content. This is called hosted content, and I treat every item as an inline image represented as a PNG file. Because the ID of a hosted content item is very long, I decided to use an MD5 hash of the ID in the filename. For the root message the file is named “hostedcontent.{hostedcontentidMD5}.png” and for replies “hostedcontent.{messageid}.{hostedcontentidMD5}.png”.

image
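To locate a specific inline image later, you only have to reproduce the hash part of the file name. A rough PowerShell sketch of the idea - the exact string encoding the tool uses for hashing is an assumption here:

# Derive the MD5-based part of a hostedcontent file name from a (placeholder) hosted content id
$hostedContentId = "aWQ9...verylongid"   # placeholder value from the Graph response
$md5   = [System.Security.Cryptography.MD5]::Create()
$bytes = [System.Text.Encoding]::UTF8.GetBytes($hostedContentId)   # assumed encoding
$hash  = -join ($md5.ComputeHash($bytes) | ForEach-Object { $_.ToString("x2") })
"hostedcontent.$hash.png"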

Of course, all this needs to happen with some kind of authentication. As mentioned, I'm a big fan of application permissions because there is no user involved. But the current Graph implementation only offers access to chat messages if you apply for an app with protected API access; read more about this here. I tried to apply, but four weeks later I still don't have feedback for my request (a single-tenant app should be easy), so I implemented the access using delegated permissions based on the device code flow. Based on this approach I made it into a feature: the tool will back up (by default) all teams the account is a member of. This way you can add the account to a Team as an admin, or even the owner of a Team can do this to “request” a chat backup.

The other console application takes care of building the HTML from the JSON files. Because all the heavy lifting is already done in the first part, the generation of HTML is really fast. The application needs the source directory and the HTML template for the output. The HTML template currently has some inline styles to make it easy to move around and to keep the dependencies low. The application creates an HTML file for each channel and optionally an HTML file for every thread. Based on the configuration, the HTML file will contain all images inline or as separate files. If you go with inline images you get a very portable version of the backup, but also a big file if you have many images or a long chat history. The combination of inline images and every thread as an HTML file gives you a great choice out of the box. If you would like to customize the HTML look and feel, have a peek at the template file. With a few tweaks in the CSS styles you can enhance readability and change it to your preferences. I hopefully selected some easy-to-understand selectors.

image
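For reference, inlining simply means the image bytes end up as a base64 data URI in the src attribute; a tiny illustration (the file name is a placeholder):

# Turn a saved hostedcontent PNG into an inline data URI usable as an <img> src value
$pngPath = ".\hostedcontent.example.png"   # placeholder file from the backup folder
$base64  = [Convert]::ToBase64String([System.IO.File]::ReadAllBytes($pngPath))
$dataUri = "data:image/png;base64,$base64"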

The code

Let’s have a quick look at the code and what it takes to get it compiled, if you pull it from GitHub. I am doing all my development in Visual Studio “proper”. The solution was created with Visual Studio 2019 Enterprise (Preview) but the community non preview version should also be fine. I write most of my code in C# and so is this code. As .NET 5 is now available this is my first solution using this version, but I think you can get it running on .NET Core 3.1 if you downgrade the solution and packages. The final solution is hosted on GitHub and you are welcome to open an issue or create a pull request. Just have a little bit of patience as this is just a side hustle for me.

Normally I write Azure Functions, and dependency injection is not yet in my DNA. If you look at the code, this might feel a little awkward. I spent way too much time getting the console app to make use of the DI concepts. For a start it feels OK, and I had a lot of fun learning to code this, but I'm not sure it is 100% correct. Let's put it this way: it works! Also, a thing I love about Azure Functions is the native Microsoft.Extensions.Logging integration, so it was natural for my own code to write logs the same way. In the cloud, logging to a file on disk is not really a thing, and that might be the reason why Microsoft does not have an out-of-the-box solution for that. That's my excuse why all of my logs only go to the console for the moment. I am looking into NLog or Serilog; the biggest benefit is that log levels are very easy to configure. Check out the application.json for a sample. The configuration system is also based on Microsoft standards: the console app loads the application.json settings, and during development an argument can be passed in to select the correct JSON file. A special thanks to David Feldman for his blog post, which got me started on the console DI thing.

As of now we didn’t write any business relevant code (developer love to write non relevant code 😁) so let’s get our hands dirty. Getting the data from Microsoft Teams is done via the Microsoft Graph. I’m using the beta SDK because for the Teams workload some feature are only available in the non-production endpoint. Also, I like to play with fire. The authentication (as mentioned: I had to go with the device code) is provided by the MSAL libraries. For my “daemon apps” (Azure Functions) I rely on the pure MSAL implementation, but for this application I tried something new and used the Microsoft Graph Auth libraries. The NuGet is still in preview, but this was the easiest solution to get the device code up and running in minutes. I plan to use this library in the future for my other projects. One thing was missing and was really annoying: I had to authenticate on every debug run, so I copied some code to persist the token to a file. This code is also standard Microsoft code, and it will put the token in a locally protected file. For the generation of the HTML file I visited an old friend: HTML Agility Pack (HAP). This is an awesome library and working with the HTML DOM is a breeze! The HTML from the Teams chat message can contain images (hosted content) pointing to the Microsoft Graph endpoint. Using the HTML Agility Pack, I search the images and replace the src with as base64 encoded version or a local file reference.

image

The result

Here is a side-by-side comparison. The original channel in Microsoft Teams and the HTML backup generated from the Microsoft Graph API.

image

The HTML file will contain the Team name, creation date and the member count. The Channel name is also part of every file and if it is a private channel also the member count. Every post will contain the creation (+ edit if available) time, the author and the body including images. Links to documents are not modified and the file will not be downloaded. The content of an adaptive card is available in the JSON, but currently the output is not rendered in HTML. The adaptive cards rendering will be added later.

After the complete run you have a set of JSON files representing Microsoft Graph SDK classes. I recommend putting them into a ZIP file (there are a lot of files) and placing it next to the HTML output. Based on your data and configuration, the output can be stored in a SharePoint library or in a file-based archive.
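A small example of that step, assuming the per-team folder layout described above (the folder name is a placeholder group id):

# Zip the raw JSON/PNG dump of one team and keep the archive next to the HTML output
$teamFolder = ".\backup\00000000-0000-0000-0000-000000000000"   # placeholder group id folder
Compress-Archive -Path (Join-Path $teamFolder "*") -DestinationPath "$teamFolder-json.zip" -Force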

Your setup

Currently the setup is not download-and-run. First, I don't want to offer a multi-tenant application in my Azure AD. Therefore you need to talk to your admin to register the application in the first place. Based on the permissions, the app needs admin consent anyway. Please check out the project Readme.md for the needed permissions and put the required details into the application settings file. The binaries can be downloaded from the release page of the GitHub project. You will need to have the .NET 5 runtime installed.

image

Summary

I had a blast writing the solution from start to end. I will tweak the solution in the near future. These are the next ideas I want to code:

  • Save images with the date in the name and add EXIF data with the author information
  • Allow the use of a non-interactive AAD login (confidential client)
  • Integrate features in an Azure Function and maybe trigger from a Teams Message Extension for an ad-hoc export for every user

I created the solution for a very specific use case (our tenant migration), but I hope that by making the source code available you will be able to solve your own problems. If you have feedback, please let me know: create an issue, or hit me up on Twitter or LinkedIn.

Holger Schwichtenberg: Online workshop "Migrating from .NET Framework to .NET 5.0"

A crash course on December 14, 2020 for all developers who want to move from the classic .NET Framework to .NET Core. It covers the migration for WPF, Windows Forms, ASP.NET, Entity Framework, and WCF.

Code-Inside Blog: Update AzureDevOps Server 2019 to AzureDevOps Server 2019 Update 1

We did this update in May 2020, but I forgot to publish the blogpost… so here we are

Last year we updated to Azure DevOps Server 2019 and it went more or less smooth.

In May we decided to update to the “newest” release at that time: Azure DevOps Server 2019 Update 1.1

Setup

Our AzureDevOps Server was running on a “new” Windows Server 2019 and everything was still kind of newish - so we just needed to update the AzureDevOps Server app.

Update process

The actual update was really easy, but we had some issues after the installation.

Steps:

x

x

x

x

x

x

Aftermath

We had some issues with our Build Agents - they couldn’t connect to the AzureDevOps Server:

TF400813: Resource not available for anonymous access

As a first “workaround” (and a nice enhancement) we switched internally from HTTP to HTTPS, but this didn’t solve the problem.

The real reason was that our “Azure DevOps Service User” didn’t have the required write permissions for this folder:

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys
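What fixed it for us was granting the service account write access on that folder; a hedged sketch (the account name is a placeholder, and you may want to scope the rights tighter):

# Grant the Azure DevOps service account modify rights on the MachineKeys folder
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys" /grant "YOURDOMAIN\svc-azuredevops:(OI)(CI)M"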

The connection issue went away, but now we had introduced another problem: our SSL certificate was “self-signed” (issued by our domain controller), so we needed to register the agents like this:

.\config.cmd --gituseschannel --url https://.../tfs/ --auth Integrated --pool Default-VS2019 --replace --work _work

The important parameter is --gituseschannel, which is needed when dealing with “self-signed, but domain-trusted” certificates.

With this setting everything seemed to work as expected.

Only node.js projects or toolings were “problematic”, because node.js itself doesn’t use the Windows certificate store.

To resolve this, the root certificate from our Domain controller must be stored on the agent.

[Environment]::SetEnvironmentVariable("NODE_EXTRA_CA_CERTS", "C:\SSLCert\root-CA.pem", "Machine")

Summary

The update itself was easy, but it took us some hours to configure our build agents. After the initial hiccup it went smoothly from there - no issues, and we are ready for the next update, which has already been released.

Hope this helps!

Daniel Schädler: My first steps with Terraform

This article describes how to set up and tear down a Terraform environment for tests. I used Microsoft Azure for this. It was done in the context of the dotnet user group website, so that the tests with BrowserStack can be run against a virtual environment.

Prerequisites

Goal

The goal is that a test environment consisting of the following components

  • SQL Server
  • App Service

spins up, runs the automated tests, and is deleted afterwards.

Approach

The following steps describe the approach in three steps:

  • Identify the required resources
  • Identify the Terraform resources
  • Bring everything together

Identify the Terraform resources

If you take a look at Azure Automation, the very large JSON file immediately stands out. It can be used as a reference point, but it doesn't have to be. Looking at it that way, a lot of resources are in use there. Terraform provides the current resources for Azure at the following link. The following resources can be used for this:

Bring everything together

Now everything has to be brought together in one Terraform file. To do this, a folder is created, for example dotnet-usergroup-bern-terraform-configuration. In this folder a main.tf file is then created that contains the entire Terraform configuration. The file then looks like this:

# Configure the dotnet-user-group test environment

terraform{
    required_providers{
        azurerm = {
            source = "hashicorp/azurerm"
            version = ">=2.26"
        }
    }
}

provider "azurerm"{
    features{}
}

resources "azurerm_resource_group" "bddtest"{
    name = "bddtest-dotnet-usergroup"
    location = "switzerlandnorth"
}

// SQL configuration
resource "azurerm_storage_account" "bddtest"{
    name = "bdd-test-dotnet-usergroup"
    resource_group_name = azurerm_resource_group.bddtest.name
    location = azurerm_resource_group.bddtest.location
    account_tier = "Free"
    account_replication_type = "GRS"
}

resource "azurerm_sql_server" "bddtest" {
  name                         = "bddtest-sqlserver"
  resource_group_name          = azurerm_resource_group.bddtest.name
  location                     = azurerm_resource_group.bddtest.location
  version                      = "12.0"
  administrator_login          = "4dm1n157r470r"
  administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}

resource "azurerm_mssql_database" "bddtest" {
  name           = "bddtest-dnugberndev"
  server_id      = azurerm_sql_server.bddtest.id
  collation      = "SQL_Latin1_General_CP1_CI_AS"
  license_type   = "LicenseIncluded"
  max_size_gb    = 4
  read_scale     = false
  sku_name       = "Basic"
  zone_redundant = false

  extended_auditing_policy {
    storage_endpoint                        = azurerm_storage_account.bddtest.primary_blob_endpoint
    storage_account_access_key              = azurerm_storage_account.bddtest.primary_access_key
    storage_account_access_key_is_secondary = true
    retention_in_days                       = 6
  }
}

// App Service configuration (Web Apps)
resource "azurerm_app_service_plan" "bddtest"{
    name = "bddtest-appserviceplan"
    location = azurerm_resource_group.bddtest.location
    resource_group_name = azurerm_resource_group.bddtest.name

    sku{
        # the F1 size belongs to the Free tier
        tier = "Free"
        size = "F1"
    }
}
 

Of course, not all final values are filled in here yet; the idea is that this configuration is generated at build time. I will cover that point in the future.

Creating the resources
Now it is time to see whether the configuration that has been put together can also be applied without errors.
I proceeded as follows:
1. Sign in to Azure (I chose this method because the browser, or browsers, were acting up on my machine.)

az login --use-device-code

After signing in, your Azure subscriptions are displayed.

[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "YYYYYYYY",
    "id": "0000000-11111-2222-3333-555555555555",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "ZZZZZZZ",
    "user": {
      "name": "hansmustter@windowslive.com",
      "type": "user"
    }
  },
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "XXXXXXX",
    "id": "0000000-11111-2222-3333-444444444444",
    "isDefault": false,
    "managedByTenants": [],
    "name": "Visual Studio Enterprise mit MSDN",
    "state": "Enabled",
    "tenantId": "XXXXXX",
    "user": {
      "name": "hansmuster@windowslive.com",
      "type": "user"
    }
  }
]

If you want to switch the subscription, this is also done with the Azure CLI tools (the command itself is shown after the listing below). Running the following command afterwards lists the subscriptions again

az account list

[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "YYYYYYYY",
    "id": "0000000-11111-2222-3333-555555555555",
    "isDefault": false,
    "managedByTenants": [],
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "ZZZZZZZ",
    "user": {
      "name": "hansmustter@windowslive.com",
      "type": "user"
    }
  },
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "XXXXXXX",
    "id": "0000000-11111-2222-3333-444444444444",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Visual Studio Enterprise mit MSDN",
    "state": "Enabled",
    "tenantId": "XXXXXX",
    "user": {
      "name": "hansmuster@windowslive.com",
      "type": "user"
    }
  }
]

and the output shows that the default subscription has changed.
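The switch itself is done with az account set; a minimal sketch, using the subscription name from the listing above:

az account set --subscription "Visual Studio Enterprise mit MSDN"
az account list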

  2. Now Terraform has to be initialized:

terraform init

  3. After that,

.\terraform.exe plan -out .\myplan

has to be executed. The -out parameter is optional; with it, Terraform saves the plan as a file in the current directory or at the path given in the -out parameter. If everything went according to plan, Terraform displays the plan. The + signs indicate that these resources will be created. The plan below is shown in abbreviated form for readability; in reality it is quite a bit longer.

------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_app_service.bddtest will be created
  + resource "azurerm_app_service" "bddtest" {
      + app_service_plan_id            = (known after apply)
      + app_settings                   = {
          + "SOME_KEY" = "some-value"
        }
      + client_affinity_enabled        = false
      + client_cert_enabled            = false
      + default_site_hostname          = (known after apply)
      + enabled                        = true
      + https_only                     = false
      + id                             = (known after apply)
      + location                       = "switzerlandnorth"
      + name                           = "bddtest-dnug-bern"
      + outbound_ip_addresses          = (known after apply)
      + possible_outbound_ip_addresses = (known after apply)
      + resource_group_name            = "bddtest-dotnet-usergroup"
      + site_credential                = (known after apply)

      + auth_settings {
          + additional_login_params        = (known after apply)
          + allowed_external_redirect_urls = (known after apply)
          + default_provider               = (known after apply)
          + enabled                        = (known after apply)
          + issuer                         = (known after apply)
          + runtime_version                = (known after apply)
          + token_refresh_extension_hours  = (known after apply)
          + token_store_enabled            = (known after apply)
          + unauthenticated_client_action  = (known after apply)

          + active_directory {
              + allowed_audiences = (known after apply)
              + client_id         = (known after apply)
              + client_secret     = (sensitive value)
            }          
....

Plan: 7 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: .\MyPlan

To perform exactly these actions, run the following command to apply:
    terraform apply ".\\MyPlan"
  4. Now the plan is executed with

terraform apply

  5. After a short processing phase, the plan that is about to be executed is displayed. Confirming with "yes" lets Terraform do its work.

terraform apply

  6. As soon as the confirmation with "yes" has been given, Terraform gets to work. The progress looks like this:

terraform progress indicator

  7. If everything was done correctly, Terraform shows the status of the operation.

terraform status

Here are the created resources in the Azure portal.

azure resources in the portal after applying the terraform plan

Deleting the created resources

Building an environment is one thing, tearing it down is another. At the dotnet user group Bern, this environment is supposed to be spun up for the BDD (behavior-driven) tests and then torn down again. The teardown, described below, is very quick.

  1. With the command

terraform.exe destroy

the resources are torn down.

2. After a short processing time on the executing client, you see an execution plan similar to the one shown above. The difference: each resource is now prefixed with a - sign, indicating that the resource will be destroyed. Here, too, the action has to be confirmed with yes.

3. If all actions completed successfully, Terraform reports that the resources have been destroyed.

terraform destroy

The resources can no longer be found in the Azure portal either.

azure portal after the teardown via terraform destroy
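For the fully automated scenario described at the beginning (spin up, test, tear down), the interactive yes confirmation gets in the way. Terraform's -auto-approve flag skips it; a possible sequence for a build pipeline, as a sketch rather than the user group's actual build definition:

terraform init
terraform apply -auto-approve
# run the automated BrowserStack/BDD tests here
terraform destroy -auto-approve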

Conclusion

With Terraform it is relatively easy to provision infrastructure automatically and thereby speed up the deployment process. If you liked this article, I would appreciate a like, and I am happy to take suggestions for improvement. My journey with Terraform is not over yet, and the simple example shown here can be improved further. I also want to explore the options for using this solution on-premises, since the cloud is not the desired target platform in every environment.

Further links

Holger Schwichtenberg: Info afternoon: A modern user experience (UX) for your software, on November 26, 2020

At this online event, user experience experts present their experience from exciting software applications for insurance companies, energy providers, the German emergency call service, and the military sector.

Holger Schwichtenberg: Over 80 new features in Entity Framework Core 5.0

The version of the OR mapper released on November 10 contains numerous new features.

Holger Schwichtenberg: .NET 5.0 will be released on November 10 as part of the .NET Conf

Tomorrow, as part of the ".NET Conf 2020" online conference, Microsoft will release the final version of .NET 5.0.

Marco Scheel: Custom templates for Microsoft Teams

In May, Microsoft announced that you will shortly be able to use Microsoft-defined templates when creating a team, and that in the future you will also be able to create your own templates in the admin center. In the past you needed a Teams provisioning solution and could not rely on the built-in dialogs. Here is an example of how to start a Microsoft Flow via a site design in order to interact with Teams.

Custom template creation has now finally arrived in my lab tenant. I will show you what the Microsoft templates are about and what you can achieve with your own templates.

image

Microsoft Templates

Microsoft currently offers 13 templates that you can choose from in your tenant. The templates are tagged with an "Industry". Even if your company is not from that industry, the templates are still useful. Here you can find the documentation of what each template contains:

Get started with Teams templates using Microsoft Graph - Teams template capabilities
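As a side note, the linked Graph documentation also allows creating a team from one of these templates in code. A minimal PowerShell sketch (the access token acquisition is assumed to happen elsewhere, and the display name is made up):

# Assumes $token already holds a valid Microsoft Graph access token with Team.Create permission
$body = @{
    "template@odata.bind" = "https://graph.microsoft.com/v1.0/teamsTemplates('standard')"
    displayName           = "Demo Team from Template"
    description           = "Created via Microsoft Graph"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/teams" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" -Body $body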

From the user's point of view, you start with the normal dialog for creating a team. If you have disabled self-service creation, we should have a serious talk. If you did everything "right", the user sees the following:

image

On this page the user can read a short description of each template. Once the decision is made, a click brings up further details about the template:

image

From here on it continues as usual: select classification + privacy and set the name for the team:

image

Now the first differences appear. Without a template the team is created in seconds and I can add more users. When using a template, creation takes noticeably longer. We're not talking hours, but from this point the process is asynchronous. The user sees the following message, which in my test did not change even after several minutes. So closing the dialog is not really optional, as the message suggests.

image

The finished team then looks like this:

image

In my test, the system always appended "undefined" to the chosen name, but as an owner you can rename the team at any time.

In the current implementation none of the Microsoft templates can be hidden. There is also no way to target templates at a specific audience; all templates are always visible to all users. I'm curious to see how this evolves.

Administering templates

As a Microsoft Teams administrator, I can now create my own templates and integrate them into the dialog shown above. Custom templates are always displayed before the Microsoft templates. You can find the Microsoft documentation here:

Get started with Teams templates in the admin center

In the current implementation the feature set is fairly limited. You can predefine channels, attach tabs to the channels, and add apps to the team in general.

In the admin center there is a new navigation item in the "Teams" area:

image

Here you can create a new template. Currently there are three options:

image

Defining a custom template

The first option, "Create a custom team template", takes us through the following dialog. The template name, a description for the end user, and notes for your colleagues on the admin team are mandatory:

image

On the next page I can create a channel and add tabs (apps) to it at the same time:

image

If an app is not added via a tab, I can also add it directly to the team:

image

You can also add apps to the created channels (e.g. the default channel) afterwards:

image

Here you can see the finished template definition:

image

A custom template from an existing template

The second option, "Create a team template from an existing team template", adds a step before the actual template definition in which you select another (Microsoft or custom) template:

image

After that it continues just like defining an empty template.

A custom template from an existing team

The third option, "Create a template from an existing team", lets you select an existing team:

image

After that it continues just like defining an empty template.

With this option, only the channels, tabs, and apps are carried over. Content such as chat messages, tab configurations, or files does not come along.

Summary

It is a welcome option with a lot of potential for the future. If you don't want to build your own Teams provisioning, this is the first way to give your users predefined structures. However, some essential things are missing. I cannot pre-provision files and, for example, integrate them as a tab. Tabs cannot be filled with content. A "Website" tab gets a name, but you cannot set the URL.

It is good to know that these out-of-the-box capabilities exist, but as long as Microsoft does not address the following points, it will remain a niche solution:

  • Modify team properties (moderation, etc.)
  • The creation dialog needs to get faster
  • No user/group targeting for templates -> today all users see all templates
  • Using a template cannot be enforced; the user can still choose "From scratch" and do their own thing
  • Microsoft templates cannot be hidden
  • No content (tabs, files, …)

Holger Schwichtenberg: Forcing a shutdown of a set of Windows systems with PowerShell

This PowerShell script shuts down a number of Windows machines. The computer names can be stored in the script or read from a file or from Active Directory.
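A minimal sketch of what such a script can look like, assuming the names come from a text file (the file name and parameters are my own, not necessarily those of the original script):

# Read the target machines from a text file (one computer name per line)
$computers = Get-Content -Path .\computers.txt

# Force an immediate shutdown on every machine in the list
Stop-Computer -ComputerName $computers -Force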

Marco Scheel: Sharing Microsoft Teams recordings with external users

Microsoft is working on a new version of Microsoft Stream. Until recently, Microsoft Teams used exactly this video backend to store meeting recordings. As of today (November 1, 2020), the rollout begins for all tenants, unless you have set an opt-out for your users via meeting policy. I demonstrated what recording to OneDrive/Teams means in the following blog post.

So the meeting recording now lives in SharePoint (for channel meetings) or in OneDrive (for all other meetings). Does that already end this blog post? Everyone can share on SharePoint, right? Here is the documentation from Microsoft. Of course I can simply share the file via a new sharing link. But if the external user clicks the recording in the meeting chat, they get the following error, which we can fix quite "easily".

image

Granting the right permissions on meeting recordings

Microsoft uses the normal share features to share the recording with internal users. If you simply use the share dialog on the video, the external user can access it via the link from that share (normally sent by e-mail). But if they later try to access it via Teams and the meeting chat, the error appears again, because a different sharing link is stored there:

image

Now I'll show you how to configure the link in the meeting chat for external users as well. Once you have the file open in SharePoint or OneDrive, you can view the current sharing links via "Manage access". There are many ways to reach "Manage access". In the folder view, click "…" on the video file and select "Manage access":

image

You can see the currently configured shares:

image

The share used for viewing the file (the "View" permission) can be opened via the "…" option:

image

Enter the guest's e-mail address and confirm with "Save":

image

If I (as a guest of the meeting) now open the link via the meeting chat, I no longer get an error message when accessing it:

image

Summary

Most users will simply click "Share" in SharePoint. Access for external users is a great feature and I don't want to complain. Should your users complain about this behavior, you can now point them to the right approach and improve the user experience.

Code-Inside Blog: DllRegisterServer 0x80020009 Error

Last week I had a very strange issue and the solution was really “easy”, but took me a while.

Scenario

For our products we build Office COM add-ins with a C++-based "shim" that boots up our .NET code (e.g. something like this). That's the nature of COM: it requires some pretty dumb registry entries to work, and in theory our toolchain should "build" and automatically "register" the output.

Problem

The registration process just failed with an error message like this:

The module xxx.dll was loaded but the call to DllRegisterServer failed with error code 0x80020009

After some research you will only find some very old material or general advice, like in this Stackoverflow.com question, e.g. "run it as administrator".

The solution

Luckily we had another project where we use the same approach, and that one worked without any issues. After comparing the files I noticed some subtle differences: the file encoding was different!

In my failing project some C++ files were encoded as UTF-8 with BOM. I changed everything to UTF-8 without BOM, and after this change it worked.

My reaction:

(╯°□°)╯︵ ┻━┻

I'm not a C++ dev and I'm not even sure why some files had the wrong encoding in the first place. It "worked" - at least Visual Studio 2019 was able to build everything - but registering it with "regsvr32" just failed.

It took me a few hours to figure that out.
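If you want to check a project for the same issue, a few lines of PowerShell can list the files that start with a UTF-8 BOM (a sketch; the file filter is an assumption):

# List all C++ sources/headers that start with the UTF-8 byte order mark (EF BB BF)
Get-ChildItem -Recurse -Include *.cpp, *.h | Where-Object {
    $bytes = [System.IO.File]::ReadAllBytes($_.FullName)
    $bytes.Length -ge 3 -and $bytes[0] -eq 0xEF -and $bytes[1] -eq 0xBB -and $bytes[2] -eq 0xBF
} | Select-Object FullName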

Hope this helps!

Marco Scheel: Microsoft Teams recordings now in SharePoint instead of Microsoft Stream

At Ignite 2020 it was announced that Microsoft Stream will be discontinued - or rather, reinvented. I'm all fired up about the idea, as you can see here:

You can find the Microsoft blog post with all the details here. Today we will look at the impact on meeting recordings in Microsoft Teams. In the "past" we had the following problems with storage in Microsoft Stream:

With the new Microsoft Stream these problems are a thing of the past, and many more features will be added in the near future. At launch, however, we get a very rudimentary implementation with problems of its own. Let's take a look at the implementation as of October 2020. For Ignite, Microsoft created a dedicated session on meeting recordings in which you can find many details.

image

In the meeting

I documented a meeting for you and will show where the differences are. In the Microsoft Teams client everything stays the same during the meeting. Via the extended options (…), any presenter from the inviting company can start the recording with "Start recording": image

In the meeting we see:

  • Luke (meeting organizer) - luke at gkmm.org
  • Leia - leia at gkmm.org
  • Rey - rey at gkmm.org
  • Kylo - kylo at gkmm.org
  • Marco (guest) - marco.scheel at glueckkanja.com

As usual, users are informed about the start of the recording with a banner: image

If the recording is stopped during the meeting, users are informed that it is being saved: image

The recording is linked in the meeting chat. image

Watching the recording

Here comes the first change! The video is available much faster. Microsoft Teams produces the video as an MP4 in the cloud and used to hand it over to Microsoft Stream. Stream then used Azure Media Services to process the video and deliver it via adaptive streaming. Put simply: Stream rendered the video in several resolutions and could switch seamlessly between bitrates. The new Stream simply drops the MP4 into SharePoint (or OneDrive for Business) and serves it through a simple HTML player. It should be clear that without the Azure Media Services integration, some features are missing in this first version:

  • Playback speed. Not everyone talks as fast as I do, and a 60-minute video can be cut down to 45 minutes if you play it at 1.5x.
  • Transcription (speech to text). Quite relevant for a meeting recording, for example to find the moment in a two-hour meeting when product X was discussed.
  • Adaptive streaming depending on bandwidth and performance (switching between 1.1 Mbps and 58 Kbps). Not particularly relevant for a meeting recording.

At the beginning of 2020 our video recordings in Stream still had all bitrates. Currently even the Stream videos (meeting recordings) are only available in the original 1.1 Mbps (1080p) resolution, so there is no adaptive streaming. Blame Corona?

The following screenshot shows Luke's browser playing the recording after he clicked "Open in OneDrive" in the chat. image

This is exactly how every other video in SharePoint and OneDrive is displayed today. In the future, the content type for video in SharePoint will be upgraded, and the presentation, the metadata, and the lifecycle will be improved.

For all meeting participants from the inviting organization, sharing of the file is set up automatically: image

Teams always assigns two permissions on the video file, distinguishing between presenters and attendees. Presenters get edit permissions; attendees of the meeting only get view permissions.

Guests are not taken into account. For me as a guest, trying to watch the recording ends like this: image

In this blog post I cover the right way to handle guests and the recording.

The video file

That leaves the question of where the file actually lives! If you click a video in the chat, Teams opens the video file and plays it. Not a good idea, because every interaction with Teams aborts the playback and you have to start over. So it's best to click "…" right away and choose "Open in OneDrive": image

So the file is in OneDrive, in a folder named "Recordings" (no doubt also "Aufzeichnungen" or "grabación" somewhere, depending on the language). OneDrive is always personal storage; it belongs to one user! Not a nice solution. A "Meet now" or a scheduled meeting therefore ends up in some user's OneDrive. Which user? The meeting organizer, or the user who clicks "Start recording"? It is in fact the OneDrive of whoever clicks fastest. I would have preferred the meeting organizer, since they also own the appointment. Here Stream, with its "neutral" storage platform, had clear advantages. If a user leaves the company and their OneDrive is deleted, all their meeting recordings disappear along with it! Microsoft has announced mechanisms that will automatically delete "old" recordings, in order to reduce storage consumption in SharePoint. Retention policies might be used for this. These policies cannot only delete files, they can also guarantee retention for a defined period. So there may also be a solution for the deleted-OneDrive problem.

Alternatively, meetings can be scheduled in a Teams channel. These channel meetings save their recording in the corresponding folder in the channel. So if I record in the "General" channel, the file ends up at "/sites/YOURTEAMSITE/Shared Documents/General/Recordings". I'm semi-happy with that. Storing it in the team solves the access problem and the question of who owns the file. But if, for example, you sync the General folder to your machine with the OneDrive client and always want everything offline… then larger video files come along too. You can't have everything, and I hope Microsoft keeps optimizing these scenarios.

The label of the button in the meeting chat is always "Open in OneDrive" and does NOT change to "Open in SharePoint" for a channel meeting. There is a chance of improvement here in the future as well.

Let's look directly at the file. Here is a screenshot of the extended MP4 properties: image

The recording is a bit under 5 minutes long and uses about 25 MB. The video has Full HD (1080p) resolution. Looking at the file name makes me really happy. In Stream, the titles of channel meetings in particular were often meaningless, e.g. "Meeting in General". The file name in the new Stream is built like this:

  • Demo Luke and Leia-20201024_152149-Meeting Recording.mp4
    • MeetingTitle-yyyyMMdd_HHmmss-Meeting Recording
    • MeetingTitle = Demo Luke and Leia
    • yyyyMMdd_HHmmss = start of the recording (local time, not UTC), i.e. the moment "Start recording" is clicked
  • Meeting in _Workplace_-20201023_100348-Meeting Recording.mp4
    • MeetingInChannel-yyyyMMdd_HHmmss-Meeting Recording
    • MeetingInChannel = the channel is called Workplace and in this case, unfortunately, the meeting title was not carried over.
    • yyyyMMdd_HHmmss = start of the recording (local time, not UTC), i.e. the moment "Start recording" is clicked

Enabling it in your tenant

Microsoft has up-to-date documentation on when and how things will proceed. Right now a tenant admin can opt in, and within a few hours the tenant is ready and all new meeting recordings land directly in OneDrive/SharePoint. If the admin neither opts in nor opts out, the feature will be enabled for the tenant starting mid-Q4 2020. If you have configured an opt-out for your tenant, it still ends in Q1 2021, and recording to OneDrive/SharePoint will be switched on for your tenant as well. This means that from the start of Q2 2021, all meeting recordings will land only in the new Microsoft Stream (aka SharePoint).

For the opt-in you need the Skype for Business PowerShell module to set the corresponding meeting policy. Since Ignite 2020 the cmdlets are also integrated into the Microsoft Teams PowerShell module.

Import-Module SkypeOnlineConnector
$sfbSession = New-CsOnlineSession
Import-PSSession $sfbSession
Set-CsTeamsMeetingPolicy -Identity Global -RecordingStorageMode "OneDriveForBusiness"

Since it is a meeting policy, you can also enable the feature for individual users first.
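A sketch of what that could look like (the policy name and user are made up; the cmdlets come from the same module used above):

# Create a custom meeting policy that stores recordings in OneDrive/SharePoint
New-CsTeamsMeetingPolicy -Identity "RecordingOnOneDrive" -RecordingStorageMode "OneDriveForBusiness"

# Assign the policy to a single pilot user
Grant-CsTeamsMeetingPolicy -Identity "luke@gkmm.org" -PolicyName "RecordingOnOneDrive"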

For the opt-out, simply set the value to "Stream":

Set-CsTeamsMeetingPolicy -Identity Global -RecordingStorageMode "Stream"

Summary

Microsoft Teams is ushering in a new video era in Microsoft 365. Meeting recordings can now be redirected to SharePoint/OneDrive for all users or for individual users. The loss of features is bearable given the new freedom when sharing recordings. Keep the following points in mind:

  • Recordings in a user's OneDrive can be lost if that OneDrive is deleted (employee leaves the company)
  • No variable playback speed
  • No adaptive bitrate streaming (currently not available in Stream either)
  • No integration into the current Stream mobile app
  • If a file is renamed, moved, or deleted, the link in the meeting chat breaks
  • Properly sharing with externals takes a few clicks (blog post coming soon)
  • No way to see all meeting recordings at a glance; you have to find the meeting chat every time

Holger Schwichtenberg: PowerShell 7: the ternary operator ? :

In PowerShell 7.0, Microsoft also introduced the ternary operator ? : as an alternative to if … else.
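A quick illustration of the syntax (my own example, not taken from the linked article):

# PowerShell 7+: <condition> ? <value if true> : <value if false>
$count = 7
$size = ($count -gt 5) ? "many" : "few"
$size   # -> many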

Marco Scheel: Permanently muting attendees in Microsoft Teams

Who doesn't know it: the meeting starts and suddenly "beeeep-beeep-beep-beep" … the project lead is backing into a parking space. The flexibility of joining a Teams meeting is "nearly" unlimited. These degrees of freedom are new and unfamiliar to many, though, and confident handling of meeting equipment is still a long way off. In the ideal case we would all use only Microsoft Teams certified devices, with hardware and software working in harmony, and if I join a meeting late, then hopefully on mute. Reality still looks different. Especially in large meetings it was a problem that attendees could open their microphone at any time. A Microsoft Teams live event was often not a solution, because the interactivity was missing at a later point. With the introduction of roadmap item 66575 the problem can now be solved:

Prevent attendees from unmuting in Teams Meetings

Gives meeting organizers the ability to mute one to many meeting participants while removing the muted participants' ability to unmute themselves.

Here is the screenshot of the feature:

image

Attendee vs. Presenter

Microsoft Teams has three main roles in a meeting; for our scenario only two of them are interesting. The company can define how strictly the presenter role is handled. By default, every user is assigned this role and can thus support the meeting or seriously disrupt it. In the meeting itself, every presenter can demote other users to attendees. Attendees cannot present and cannot remove users from the conference. For the exact overview, check this link.

You can find all the details about which options the company has for this default here.

  • Everyone
  • Everyone in the organization
  • Only the organizer

Scheduling a meeting

Once you have finished scheduling a meeting, you can open the meeting options afterwards. In Skype for Business you could do this directly while scheduling, but Microsoft took the "easy" route here and simply jumps to these options in the browser. Via the description text of the appointment, next to the join link, the organizer can also reach these options from any other program:

image

Here are the meeting options in the browser:

image

Who can present? The company default is preset here, and you can adjust the value to your liking.

image

If you select the "Specific people" option, in my test I could only pick invited people from my own organization. Depending on who actually runs the meeting, you have to be careful here, for example if you are supported by an external project manager who normally leads the meeting.

Now we come to the actual feature: if you can already make the decision while scheduling, turn off the "Allow attendees to unmute" option here.

image

In the meeting

During the Teams meeting, a presenter can control the mute option for attendees directly in the client. In the participant list, the "…" menu can be used to mute the attendee group.

image

The attendee gets a brief notification in the client that they are currently muted. An "unmute", for example via a certified hardware button, is immediately reverted, and the headset (dongle) continues to show mute (red).

image

For me as an attendee, the unmute icon is grayed out and cannot be used. The rest of the attendees are shown as disabled as well.

image

Incidentally, my test showed that not everything is running smoothly yet. For a short moment my microphone was still open (like the famous double mute), but Teams had already muted me in the client. At that moment the client pointed out that I was still muted, even though I couldn't change it :) It will get there.

A presenter can remove the restriction for everyone at any time or promote individual attendees to presenters.

Wrap-up

So far I only knew this feature from WebEx and had been asked about it more than once. Until now my answer was a combination of a live event followed by a Teams meeting. This new feature makes it much easier for everyone. I would still use the setting sparingly, because the devil is in the details. Currently, for example, only the web and desktop versions can handle it. On a Microsoft Teams Room system or the mobile clients (Android/iOS) the feature is not yet available as of October 2020. So it can happen that a meeting cannot take place because the attendees cannot get a word in. As so often, good user education achieves more than strict regulation.

As it so often does lately, Microsoft has excellent documentation for the feature online. Have a look for yourself.

Marco Scheel: My blog has moved - From Tumblr to Hugo on GitHub Pages

My blog now has a new home. It is no longer hosted on Tumblr.com; it is now hosted on GitHub Pages. The main reason to get off Tumblr is the poor image handling. The overall experience was OK: I liked the editor, and best of all it was completely free, including running on my own domain! Having my own name was a key driver. I was running my blog on my own v-server back in the day, and I tried a lot of platforms (blogger.com, wordpress.com, and prior to Tumblr I ran a "self"-hosted WordPress instance). The only constant was and will be my RSS hosting. Believe it or not, I'm still running FeedBurner from Google - one service that is still not (yet?) killed by the search giant (RIP Google Reader). With all the previous choices there was also one driving factor: I'm cheap, can I get it for free? Yes, and it will stay 100% free for you and me!

Today is the day I switched to a static website! It is 2020, and that is the hipster way to go. So, what does it take to run a blog on a static site generator?

Main benefits:

  • Still free
  • I own my content 100%
  • Better image handling (high-res with zoom)
  • Better inline code handling and highlighting
  • Learning new stuff

HUGO

Hugo is one of the most popular open-source static site generators. With its amazing speed and flexibility, Hugo makes building websites fun again.

image

Why Hugo and not Jekyll? Because there are blogs out there that I’m reading, and I liked the idea of being one of them :) Who?

There is even content on Microsoft Docs on hosting Hugo on Azure Static Websites: https://docs.microsoft.com/en-us/azure/static-web-apps/publish-hugo

It was easy to start. Just follow the steps in the Getting Started guide, using the choco installation if you are a Windows user.

I chose the Fuji theme as a great starting point and integrated it as a git submodule. As mentioned in the docs, I copied the settings into my config.toml and was ready to go.
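Adding the theme as a submodule boils down to something like the following two commands; the repository URL is an assumption, so check the theme's documentation for the canonical one:

git submodule add https://github.com/dsrkafuu/hugo-theme-fuji.git themes/fuji
git submodule update --init --recursive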

hugo new post/hugo/2020/10/my-blog-has-moved.md
hugo server -D

Open localhost:1313 in your browser of choice and check the result.

image

My tweaks

To get the result shown in the picture above I needed some tweaks. Also, some other settings are notable if you are like me :)

The chosen theme is not very colorful and I really wanted a site image. I'm sure it is just my missing knowledge about Hugo and theming, but I ended up messing with the CSS to get a header image. I put a classic CSS file in my "static/css" folder.

header {
    background-image: url(/bg-skate-2020.jpg);
    background-size: cover;
}

body header a {
    color: #ffffff;
    white-space: normal;
}

body header .title-sub{
    color: #ffffff;
    white-space: normal;
}

body .markdown-body strong{
    color: #000000;
}

To integrate this into the theme we use partials. To not mess with my theme (it is a submodule and controlled by the original author), I copied the "head.html" from my theme into "layouts/partials" and added the link to my CSS at the end of the file. While I was in there, I also added the RSS tag pointing to my FeedBurner account.

...
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/disqusjs@1.3/dist/disqusjs.css" />
{{ end }}
{{ partial "analytic-gtag.html" . }}
{{ partial "analytic-cfga.html" . }}

<link rel="stylesheet" href="/css/custom.css">
<link rel="alternate" type="application/rss+xml" href="http://feeds.marcoscheel.de/marcoscheel">

I also modified the Google Analytics integration in the same way. I copied the "analytic-gtag.html" file to my partials folder and added the "anonymize_ip" option to anonymize the IP address.

...
        dataLayer.push(arguments);
    }
    gtag('js', new Date());
    gtag('config', '{{ . }}', {'anonymize_ip': true});
</script>
<script async src="https://www.googletagmanager.com/gtag/js?id={{ . }}"></script>

To get a favicon I followed the instructions in the theme's documentation.

By default, the generated RSS feed includes only a summary (I HATE THAT) and returns all items. I found this post about solving my RSS "problem". This time I had to grab the template from the Hugo website and copy the file into "layouts/_default/rss.xml". I switched from ".Summary" to ".Content" and changed the description of the RSS feed to my site description. I also configured the feed to return only 25 items.

...
<description>{{.Site.Params.subTitle}}</description>
...
<pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
{{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
<guid>{{ .Permalink }}</guid>
<description>{{ .Content | html }}</description>

config.toml

rssLimit = 25

Content migration

I also needed to take care of my old content living on Tumblr and, if possible, on WordPress. It was fairly easy: I checked the migration article on the Hugo docs site.

Tumblr: https://gohugo.io/tools/migrations/#tumblr
All of the solutions require a Tumblr app registration, so I created one. To not mess with my fresh Windows install, I enabled WSL2 and used the Ubuntu distro. This way I was able to clone the tumblr-importr repo and build the application. The important part was to place the Go binary in the right location; otherwise the command was not found. After that I was able to generate the needed files.

git clone https://github.com/carlmjohnson/tumblr-importr
cd tumblr-importr
go build
sudo cp tumblr-importr $GOPATH/bin
tumblr-importr -api-key 'MYAPIKEYHERE' -blog 'marcoscheel.de'

I copied the files into a subfolder named "tmblr" in my "content/post" folder. My main problem was that the content was not Markdown; the files used HTML. I ended up opening all the blog posts on Tumblr in edit mode, switching to markdown mode, and copying the source into the corresponding .md file. I only had 12 posts, so the work was doable and the result is clean. The main benefit of the conversion was that the front-matter attributes were pre-generated, so I did not have to recreate them (title, old URL as alias, tags, date, …).

date = 2019-08-02T19:41:30Z
title = "Manage Microsoft Teams membership with Azure AD Access Review"
slug = "manage-microsoft-teams-membership-with-azure-ad"
id = "186728523052"
aliases = [ "/post/186728523052/manage-microsoft-teams-membership-with-azure-ad" ]
tags = [ "Microsoft 365", "Azure AD", "Microsoft Teams"]

The Tumblr export also generated an image-mapping JSON file. I used that JSON (converted to a CSV) to rewrite my image references to the downloaded (still too small) versions.

"OldURI":"NewURI"
"https://64.media.tumblr.com/023c5bd633c51521feede1808ee7fc20/eb22dd4fa3026290-d8/s540x810/36e4547d82122343bec6a09acf4075bb15eae1c1.png": "tmblr/6b/23/64d506172093d1d548651e196cf7.png"
$images = Import-Csv -Delimiter ":" -Path ".\image-rewrites.csv";

Get-ChildItem -Filter "*.md" -Recurse | ForEach-Object {
    $file = $_;
    $content = get-content -Path $file.FullName -Raw
    foreach ($image in $images) {
        $content = $content -replace $image.OldURI, $image.NewURI
    }
    Set-Content -Value $content -Path ($file.FullName)
}

WordPress: https://gohugo.io/tools/migrations/#wordpress
Once again I used my handy WSL2 instance, to not mess with a language I don't love. So a safe route was to use the WordPress export feature and the exitwp-for-hugo repo. I cloned the repo, and a few "sudo apt-get" later I was ready to run the Python script. I placed my downloaded XML into the "wordpress-xml" folder. I ended up changing the exitwp.py file to ignore all tags and replace them with a single "xArchived" tag.

git clone https://github.com/wooni005/exitwp-for-hugo.git
cd exitwp-for-hugo
./exitwp.py

At the end, my "content/post" folder looks like the following.

image

GitHub

Now the content is available on my local drive and I'm able to generate the static files. It is already a git repo, so where should the primary authority be hosted? The Hugo site with all its config and logic will go to GitHub. There are only two choices for me: GitHub or Azure DevOps. Microsoft owns both services, and private repos are free in both. It looks like Azure DevOps will not get all the love in the future, and that is why my website source code is hosted on GitHub: https://github.com/marcoscheel/marcoscheel-de

image

GitHub Pages

Next up is generating the final HTML and putting it out there on the internet. Generating the content is as easy as running this command.

image

Now we need to decide how to host the content. My first try was to set up a new Azure Pay-As-You-Go subscription with a $200 starting budget for the first month and my personal credit card, starting from here. Based on Andrew Connell's blog I set up a storage account and enabled the static website feature. I could have set up a custom domain for the blob storage, but I created an Azure CDN (Microsoft Standard) to optimize traffic and reduce potential cost. I also checked out the Cloudflare CDN. All options allowed a custom domain and easy HTTPS with built-in certificates. In the end it was my credit card on file, and if something went really wrong (too much traffic due to unpaid internet fame?) I would be paying for a life lesson with real money. I took the easy route instead: GitHub Pages to the rescue.

Websites for you and your projects. Hosted directly from your GitHub repository. Just edit, push, and your changes are live.

Every GitHub account gets one GitHub Pages repository. I created the repository at: https://github.com/marcoscheel/marcoscheel.github.io

Normally the content is served on the github.io domain, but through the settings we can add a CNAME to the site. To achieve this, a file called "CNAME" has to be placed in the root directory. For my Hugo site and the publish process, I placed the file in the "static" folder, so every time the site is generated the file is copied to the root of the site. Once the CNAME is in place, we configure the HTTPS redirect.

image

Custom domain. HTTPS. No credit card. Everything is good.

Publishing

In the future I'm looking forward to enabling GitHub Actions to publish my site. For the moment I rely on my local environment to push content from my Hugo site to the GitHub Pages repository. I integrated the GitHub Pages repo as a submodule, and the publish process puts the files into "public/docs".

publishDir = "public/docs"

A quick “hugo” on the Windows Terminal and a fresh version is ready to be pushed into the right repo.

hugo
cd public
git add -A && git commit -m "Ready for go live"
git push

Holger Schwichtenberg: First book on C# 9.0 published

The book covers the most important new features of the ninth version of the language.

Code-Inside Blog: How to share an Azure subscription in a team

We at Sevitec are moving more and more workloads for ourselves or our customers to Azure.

So the basic question needs an answer:

How can a team share an Azure subscription?

Be aware: This approach works for us. There might be better options. If we do something stupid, just tell me in the comments or via email - help is appreciated.

Step 1: Create a directory

We have a “company directory” with a fully configured Azure Active Directory (incl. User sync between our OnPrem system, Office 365 licenses etc.).

Our rule of thumb is: we create an individual directory for each product team, and all team members are invited into the new directory.

Keep in mind: a directory itself costs you nothing, but it might help you keep things manageable.

Create a new tenant directory

Step 2: Create a group

This step might be optional, but in our company all team members - except the "Administrator" - have the same rights and permissions. To keep things simple, we created a group with all team members.

Put all invited users in a group

Step 3: Create a subscription

Now create a subscription. The typical "Pay-as-you-go" offer will work. Be aware that the user who creates the subscription is initially set up as the Administrator.

Create a subscription

Step 4: “Share” the subscription

This is the most important step:

You need to grant the individual users or the group (from step 2) the "Contributor" role for this subscription via "Access control (IAM)". The hard part is understanding how those role assignments affect the subscription. I'm not even sure if "Contributor" is the best fit, but it works for us.
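For reference, the same role assignment can also be scripted with the Az PowerShell module; a sketch with made-up IDs:

# Object id of the group created in step 2 (placeholder value)
$groupObjectId = "00000000-0000-0000-0000-000000000000"

# Grant the group the Contributor role on the whole subscription
New-AzRoleAssignment -ObjectId $groupObjectId `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>"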

Pick the correct role assignment

Summary

I'm not really sure why such a basic concept is documented so poorly, but you really do need to pick the correct role assignment, and then the other person should be able to use the subscription.

Hope this helps!
