Jürgen Gutsch: ASP.NET Hack Advent Post 10: Wasmtime

WebAssembly is pretty popular among .NET developers these days. With Blazor we have the possibility to run .NET assemblies on WebAssembly inside the browser.

But did you know that you can run WebAssembly outside the web, without a browser? This can be done with the open-source, cross-platform runtime called Wasmtime. With Wasmtime you are able to load and execute WebAssembly code directly from your program. Wasmtime is developed and maintained by the Bytecode Alliance.

The Bytecode Alliance is an open source community dedicated to creating secure new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI).

Website: https://wasmtime.dev/

GitHub: https://github.com/bytecodealliance/wasmtime/

I wouldn't write about it if it weren't somehow related to .NET Core. The Bytecode Alliance just added a preview of an API for .NET Core. That means you can now execute WebAssembly code from your .NET Core application. For more details see this blog post by Peter Huene:

https://hacks.mozilla.org/2019/12/using-webassembly-from-dotnet-with-wasmtime/

He wrote a pretty detailed blog post about Wasmtime and how to use it within a .NET Core application. The Bytecode Alliance also added a .NET Core sample and created a NuGet package:

https://github.com/bytecodealliance/wasmtime-demos/tree/master/dotnet

https://www.nuget.org/packages/Wasmtime
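
To get an idea of how this looks in code, here is a minimal sketch of executing a WebAssembly module from C# with the Wasmtime NuGet package. Note that the API surface changed quite a bit between the early preview described in the linked post and later releases of the package, so treat the type and method names below as assumptions rather than the exact preview API:

using System;
using Wasmtime;

// A tiny WebAssembly module (in text format) that imports a "hello" function
// from the host and exports a "run" function that calls it.
const string wat = @"(module
  (func $hello (import """" ""hello""))
  (func (export ""run"") (call $hello)))";

using var engine = new Engine();
using var module = Module.FromText(engine, "demo", wat);
using var linker = new Linker(engine);
using var store = new Store(engine);

// Expose a .NET callback to the WebAssembly module.
linker.Define("", "hello",
    Function.FromCallback(store, () => Console.WriteLine("Hello from .NET!")));

// Instantiate the module and call its exported "run" function.
var instance = linker.Instantiate(store, module);
var run = instance.GetAction("run");
run?.Invoke();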

So Wasmtime is kind of the opposite of Blazor: instead of running .NET code inside WebAssembly, you are now also able to run WebAssembly inside .NET Core.

Jürgen Gutsch: ASP.NET Hack Advent Post 09: November 2019 .NET/ASP.NET Documentation Update

For the ninth post I found a pretty useful blog post about the .NET Core and ASP.NET Core documentation updates for version 3.0. This post was written by Maxime Rouiller, a former MVP, who now works for Microsoft as a Cloud Developer Advocate.

In this post he shows all the important updates related to version 3.0, structured by topic, including links to the updated documentation. He mentions a lot of material that is definitely worth reading:

https://blog.maximerouiller.com/post/november-2019-net-aspnet-documentation-update/

BTW: I personally met Maxime during the MVP Summit back when he still was an MVP. I first met him during breakfast at one of the summit hotels. He asked the MVPs at the breakfast table to try to pronounce his name, and I was one of those who pronounced it the French way, which was right. This guy is so cool and funny. It was a pleasure to meet him.

Blog: https://blog.maximerouiller.com

Twitter: https://twitter.com/MaximRouiller

GitHub: https://github.com/MaximRouiller

Christian Dennig [MS]: Using Rook / Ceph with PVCs on Azure Kubernetes Service

Introduction

As you all know by now, Kubernetes is a quite popular platform for running cloud-native applications at scale. A common recommendation when doing so is to outsource as much state as possible, because managing state in Kubernetes is not a trivial task. It can be quite hard, especially when you have a lot of attach/detach operations on your workloads. Things can go terribly wrong and – of course – your application and your users will suffer from that. A solution that is becoming more and more popular in that space is Rook in combination with Ceph.

Rook is described on their homepage rook.io as follows:

Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.

Rook is a project of the Cloud Native Computing Foundation, at the time of writing in status “incubating”.

Ceph in turn is a free-software storage platform that implements storage on a cluster, and provides interfaces for object-, block- and file-level storage. It has been around for many years in the open-source space and is a battle-proven distributed storage system. Huge storage systems have been implemented with Ceph.

So in a nutshell, Rook enables Ceph storage systems to run on Kubernetes using Kubernetes primitives. The basic architecture for that inside a Kubernetes cluster looks as follows:

Rook in-cluster architecture

I won’t go into all of the details of Rook / Ceph, because I’d like to focus on simply running and using it on AKS in combination with PVCs. If you want to have a step-by-step introduction, there is a pretty good “Getting Started” video by Tim Serewicz on Vimeo:

First, we need a Cluster!

So, let’s start by creating a Kubernetes cluster on Azure. We will be using different nodepools for running our storage (nodepool: npstorage) and application workloads (nodepool: npstandard).

# Create a resource group

$ az group create --name rooktest-rg --location westeurope

# Create the cluster

$ az aks create \
--resource-group rooktest-rg \
--name myrooktestclstr \
--node-count 3 \
--kubernetes-version 1.14.8 \
--enable-vmss \
--nodepool-name npstandard \
--generate-ssh-keys

Add Storage Nodes

After the cluster has been created, add the npstorage nodepool:

az aks nodepool add --cluster-name myrooktestclstr \
--name npstorage --resource-group rooktest-rg \
--node-count 3 \
--node-taints storage-node=true:NoSchedule

Please be aware that we add a taint to these nodes to make sure that no pods will be scheduled on this nodepool unless they explicitly tolerate the taint. We want to have these nodes exclusively for storage pods!

If you need a refresh regarding the concept of “taints and tolerations”, please see the Kubernetes documentation.

So, now that we have a cluster and a dedicated nodepool for storage, we can download the cluster config.

az aks get-credentials \
--resource-group rooktest-rg \
--name myrooktestclstr

Let’s look at the nodes of our cluster:

$ kubectl get nodes

NAME                                 STATUS   ROLES   AGE    VERSION
aks-npstandard-33852324-vmss000000   Ready    agent   10m    v1.14.8
aks-npstandard-33852324-vmss000001   Ready    agent   10m    v1.14.8
aks-npstandard-33852324-vmss000002   Ready    agent   10m    v1.14.8
aks-npstorage-33852324-vmss000000    Ready    agent   2m3s   v1.14.8
aks-npstorage-33852324-vmss000001    Ready    agent   2m9s   v1.14.8
aks-npstorage-33852324-vmss000002    Ready    agent   119s   v1.14.8

So, we now have three nodes for storage and three nodes for our application workloads. From an infrastructure level, we are now ready to install Rook.

Install Rook

Let’s start installing Rook by cloning the repository from GitHub:

$ git clone https://github.com/rook/rook.git

After we have downloaded the repo to our local machine, there are three steps we need to perform to install Rook:

  1. Add Rook CRDs / namespace / common resources
  2. Add and configure the Rook operator
  3. Add the Rook cluster

So, switch to the /cluster/examples/kubernetes/ceph directory and follow the steps below.

1. Add Common Resources

$ kubectl apply -f common.yaml

The common.yaml contains the namespace rook-ceph, common resources (e.g. clusterroles, bindings, service accounts etc.) and some Custom Resource Definitions from Rook.

2. Add the Rook Operator

The operator is responsible for managing Rook resources and needs to be configured to run on Azure Kubernetes Service. For Flex Volumes, AKS uses a directory that differs from the default one, so we need to tell the operator which directory to use on the cluster nodes.

Furthermore, we need to adjust the settings for the CSI plugin so that the corresponding daemonsets run on the storage nodes (remember, we added taints to those nodes; by default the daemonset pods Rook needs won't be scheduled there, so we have to add tolerations).

So, here’s the full operator.yaml file (the important parts being the FLEXVOLUME_DIR_PATH variable and the CSI tolerations):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph
  labels:
    operator: rook
    storage-backend: ceph
spec:
  selector:
    matchLabels:
      app: rook-ceph-operator
  replicas: 1
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      serviceAccountName: rook-ceph-system
      containers:
      - name: rook-ceph-operator
        image: rook/ceph:master
        args: ["ceph", "operator"]
        volumeMounts:
        - mountPath: /var/lib/rook
          name: rook-config
        - mountPath: /etc/ceph
          name: default-config-dir
        env:
        - name: ROOK_CURRENT_NAMESPACE_ONLY
          value: "false"
        - name: FLEXVOLUME_DIR_PATH
          value: "/etc/kubernetes/volumeplugins"
        - name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS
          value: "false"
        - name: ROOK_LOG_LEVEL
          value: "INFO"
        - name: ROOK_CEPH_STATUS_CHECK_INTERVAL
          value: "60s"
        - name: ROOK_MON_HEALTHCHECK_INTERVAL
          value: "45s"
        - name: ROOK_MON_OUT_TIMEOUT
          value: "600s"
        - name: ROOK_DISCOVER_DEVICES_INTERVAL
          value: "60m"
        - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
          value: "false"
        - name: ROOK_ENABLE_SELINUX_RELABELING
          value: "true"
        - name: ROOK_ENABLE_FSGROUP
          value: "true"
        - name: ROOK_DISABLE_DEVICE_HOTPLUG
          value: "false"
        - name: ROOK_ENABLE_FLEX_DRIVER
          value: "false"
        # Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
        # This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs. --> CHANGED to false
        - name: ROOK_ENABLE_DISCOVERY_DAEMON
          value: "false"
        - name: ROOK_CSI_ENABLE_CEPHFS
          value: "true"
        - name: ROOK_CSI_ENABLE_RBD
          value: "true"
        - name: ROOK_CSI_ENABLE_GRPC_METRICS
          value: "true"
        - name: CSI_ENABLE_SNAPSHOTTER
          value: "true"
        - name: CSI_PROVISIONER_TOLERATIONS
          value: |
            - effect: NoSchedule
              key: storage-node
              operator: Exists
        - name: CSI_PLUGIN_TOLERATIONS
          value: |
            - effect: NoSchedule
              key: storage-node
              operator: Exists
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: rook-config
        emptyDir: {}
      - name: default-config-dir
        emptyDir: {}
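
Assuming you saved these adjustments back to the operator.yaml from the Rook examples folder (adjust the filename to whatever you use), applying it works the same way as the common resources:

$ kubectl apply -f operator.yaml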

3. Create the Cluster

Deploying the Rook cluster is as easy as installing the Rook operator. As we are running our cluster on the Azure Kubernetes Service – a managed service – we don't want to manually add disks to our storage nodes. We also don't want to use a directory on the OS disk (which most of the examples out there will show you), as it will be deleted when the node is upgraded to a new Kubernetes version.

In this sample, we want to leverage Persistent Volumes / Persistent Volume Claims that will be used to request Azure Managed Disks which will in turn be dynamically attached to our storage nodes. Thankfully, when we installed our cluster, a corresponding storage class for using Premium SSDs from Azure was also created.

$ kubectl get storageclass

NAME                PROVISIONER                AGE
default (default)   kubernetes.io/azure-disk   15m
managed-premium     kubernetes.io/azure-disk   15m

Now, let’s create the Rook cluster. Again, we need to adjust the tolerations and add a node affinity so that our OSDs will be scheduled on the storage nodes (the important parts are the tolerations and the node affinity in the placement section):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
    volumeClaimTemplate:
      spec:
        storageClassName: managed-premium
        resources:
          requests:
            storage: 10Gi
  cephVersion:
    image: ceph/ceph:v14.2.4-20190917
    allowUnsupported: false
  dashboard:
    enabled: true
    ssl: true
  network:
    hostNetwork: false
  storage:
    storageClassDeviceSets:
    - name: set1
      # The number of OSDs to create from this device set
      count: 4
      # IMPORTANT: If volumes specified by the storageClassName are not portable across nodes
      # this needs to be set to false. For example, if using the local storage provisioner
      # this should be false.
      portable: true
      # Since the OSDs could end up on any node, an effort needs to be made to spread the OSDs
      # across nodes as much as possible. Unfortunately the pod anti-affinity breaks down
      # as soon as you have more than one OSD per node. If you have more OSDs than nodes, K8s may
      # choose to schedule many of them on the same node. What we need is the Pod Topology
      # Spread Constraints, which is alpha in K8s 1.16. This means that a feature gate must be
      # enabled for this feature, and Rook also still needs to add support for this feature.
      # Another approach for a small number of OSDs is to create a separate device set for each
      # zone (or other set of nodes with a common label) so that the OSDs will end up on different
      # nodes. This would require adding nodeAffinity to the placement here.
      placement:
        tolerations:
        - key: storage-node
          operator: Exists
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - npstorage
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd
                - key: app
                  operator: In
                  values:
                  - rook-ceph-osd-prepare
              topologyKey: kubernetes.io/hostname
      resources:
        limits:
          cpu: "500m"
          memory: "4Gi"
        requests:
          cpu: "500m"
          memory: "2Gi"
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          resources:
            requests:
              storage: 100Gi
          storageClassName: managed-premium
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
  disruptionManagement:
    managePodBudgets: false
    osdMaintenanceTimeout: 30
    manageMachineDisruptionBudgets: false
    machineDisruptionBudgetNamespace: openshift-machine-api
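
Apply the cluster manifest as well; I'm assuming it is saved as cluster.yaml here, so adjust the filename to whatever you use:

$ kubectl apply -f cluster.yaml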

So, after a few minutes, you will see some pods running in the rook-ceph namespace. Make sure that the OSD pods are running before continuing with configuring the storage pool.

$ kubectl get pods -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-4qxsv                                            3/3     Running     0          28m
csi-cephfsplugin-d2klt                                            3/3     Running     0          28m
csi-cephfsplugin-jps5r                                            3/3     Running     0          28m
csi-cephfsplugin-kzgrt                                            3/3     Running     0          28m
csi-cephfsplugin-provisioner-dd9775cd6-nsn8q                      4/4     Running     0          28m
csi-cephfsplugin-provisioner-dd9775cd6-tj826                      4/4     Running     0          28m
csi-cephfsplugin-rt6x2                                            3/3     Running     0          28m
csi-cephfsplugin-tdhg6                                            3/3     Running     0          28m
csi-rbdplugin-6jkx5                                               3/3     Running     0          28m
csi-rbdplugin-clfbj                                               3/3     Running     0          28m
csi-rbdplugin-dxt74                                               3/3     Running     0          28m
csi-rbdplugin-gspqc                                               3/3     Running     0          28m
csi-rbdplugin-pfrm4                                               3/3     Running     0          28m
csi-rbdplugin-provisioner-6dfd6db488-2mrbv                        5/5     Running     0          28m
csi-rbdplugin-provisioner-6dfd6db488-2v76h                        5/5     Running     0          28m
csi-rbdplugin-qfndk                                               3/3     Running     0          28m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000c8gdp   1/1     Running     0          16m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000tfk2s   1/1     Running     0          13m
rook-ceph-crashcollector-aks-npstandard-33852324-vmss00000xfnhx   1/1     Running     0          13m
rook-ceph-crashcollector-aks-npstorage-33852324-vmss000001c6cbd   1/1     Running     0          5m31s
rook-ceph-crashcollector-aks-npstorage-33852324-vmss000002t6sgq   1/1     Running     0          2m48s
rook-ceph-mgr-a-5fb458578-s2lgc                                   1/1     Running     0          15m
rook-ceph-mon-a-7f9fc6f497-mm54j                                  1/1     Running     0          26m
rook-ceph-mon-b-5dc55c8668-mb976                                  1/1     Running     0          24m
rook-ceph-mon-d-b7959cf76-txxdt                                   1/1     Running     0          16m
rook-ceph-operator-5cbdd65df7-htlm7                               1/1     Running     0          31m
rook-ceph-osd-0-dd74f9b46-5z2t6                                   1/1     Running     0          13m
rook-ceph-osd-1-5bcbb6d947-pm5xh                                  1/1     Running     0          13m
rook-ceph-osd-2-9599bd965-hprb5                                   1/1     Running     0          5m31s
rook-ceph-osd-3-557879bf79-8wbjd                                  1/1     Running     0          2m48s
rook-ceph-osd-prepare-set1-0-data-sv78n-v969p                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-1-data-r6d46-t2c4q                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-2-data-fl8zq-rrl4r                     0/1     Completed   0          15m
rook-ceph-osd-prepare-set1-3-data-qrrvf-jjv5b                     0/1     Completed   0          15m

Configuring Storage

Before Rook can provision persistent volumes, either a filesystem or a storage pool should be configured. In our example, a Ceph Block Pool is used:

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3

Next, we also need a storage class that uses the Rook cluster / storage pool. In our example we will not use Flex Volumes (which will be deprecated in future versions of Rook/Ceph); instead we use the Container Storage Interface (CSI).

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
    clusterID: rook-ceph
    pool: replicapool
    imageFormat: "2"
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
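
Both manifests can be applied the usual way; the filenames below (pool.yaml and storageclass.yaml) are just placeholders for wherever you saved the snippets above:

$ kubectl apply -f pool.yaml
$ kubectl apply -f storageclass.yaml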

Test

Now, let’s have a look at the dashboard that was also installed when we created the Rook cluster. To access it, we port-forward the dashboard service to our local machine. The service itself is secured by username/password. The default username is admin and the password is stored in a Kubernetes secret. To get the password, simply run the following commands.

$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
    -o jsonpath="{['data']['password']}" | base64 --decode && echo
# copy the password

$ kubectl port-forward svc/rook-ceph-mgr-dashboard 8443:8443 \
    -n rook-ceph

Now access the dashboard by heading to https://localhost:8443/#/dashboard

Ceph Dashboard

As you can see, everything looks healthy. Now let’s create a pod that’s using a newly created PVC leveraging that Ceph storage class.

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pv-claim
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Pod

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pv-pod
spec:
  volumes:
    - name: ceph-pv-claim
      persistentVolumeClaim:
        claimName: ceph-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: ceph-pv-claim
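
Apply both manifests and check that the claim gets bound by the Ceph CSI provisioner (again, the filenames are placeholders):

$ kubectl apply -f pvc.yaml
$ kubectl apply -f pod.yaml

# the PVC should report the status "Bound" after a short while
$ kubectl get pvc ceph-pv-claim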

As a result, you will now have an NGINX pod running in your Kubernetes cluster with a PV attached and mounted under /usr/share/nginx/html.

Wrap Up

So… what exactly did we achieve with this solution? We have created a Ceph storage cluster on AKS that uses PVCs to manage storage. Okay, so what? Well, using volume mounts in your deployments with Ceph is now super-fast and rock-solid, because we no longer have to attach physical disks to our worker nodes. We just use the ones we created during Rook cluster provisioning (remember those four 100GB disks?)! We minimized the number of “physical attach/detach” operations on our nodes. That’s why you won’t see the popular “WaitForAttach” or “Cannot find LUN for disk” errors anymore.

Hope this helps someone out there! Have fun with it.

Update: Benchmarks

A short update on this. Today I did some benchmarking with dbench (https://github.com/leeliu/dbench/), comparing Rook Ceph and “plain” PVCs with the same Azure Premium SSD disks (default AKS StorageClass managed-premium, VM type: Standard_DS2_v2). Here are the results. As you can see, it depends on your workload, so judge for yourself.

Rook Ceph

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 10.6k/571. BW: 107MiB/s / 21.2MiB/s
Average Latency (usec) Read/Write: 715.53/31.70
Sequential Read/Write: 100MiB/s / 43.2MiB/s
Mixed Random Read/Write IOPS: 1651/547

PVC with Azure Premium SSD

A 100GB disk was used to have a fair comparison

==================
= Dbench Summary =
==================
Random Read/Write IOPS: 8155/505. BW: 63.7MiB/s / 63.9MiB/s
Average Latency (usec) Read/Write: 505.73/
Sequential Read/Write: 63.6MiB/s / 65.3MiB/s
Mixed Random Read/Write IOPS: 1517/505

Jürgen Gutsch: ASP.NET Hack Advent Post 08: Hanselman debugs a .NET Core Linux app in WSL2 with VS on Windows

Scott Hanselman also loves hacking: hacking on small devices, on Windows and on Linux. In the post I want to introduce here, he shows how to debug a .NET Core Linux app running in WSL2 with Visual Studio on Windows:

Remote Debugging a .NET Core Linux app in WSL2 from Visual Studio on Windows

This is one of those posts where he puts things together that might not seem to match, or that didn't match in the past. Even though the fact that Linux runs natively inside Windows was hard to imagine in the past, the fact that we as developers are able to remote debug a .NET Core app on any platform is incredibly awesome. Hacking things together that might not match is the most interesting topic for me as well. Things like getting .NET apps running on Linux-based small devices like the Raspberry Pi, or hosting Mono-based ASP.NET Web Forms apps on an Apache running on SUSE Linux, were things I did in the past and still do whenever I find some time. This is why I really love those posts written by Hanselman.

Blog: https://www.hanselman.com/blog

Twitter: https://twitter.com/shanselman

Jürgen Gutsch: ASP.NET Hack Advent Post 07: Blazorise

Recently I stumbled upon a really cool project that provides frontend components for Blazor. It supports Blazor server-side as well as Blazor WebAssembly on the client side. I found the project while searching for a chart component for a Blazor demo application I'm currently working on.

This project is called Blazorise; it is completely open source and hosted on GitHub. It is built on top of Blazor and CSS frameworks like Bootstrap, Material and Bulma. (Actually, I had never heard of Bulma before.)

Blazorise contains a lot of useful components, including components for charts and data grids. It is actively maintained, well documented and also has demo pages for all three CSS framework implementations.

If you are working with Blazor, you should have a look at it:

Website: https://blazorise.com/

GitHub: https://github.com/stsrki/Blazorise

Jürgen Gutsch: ASP.NET Hack Advent Post 06: Andrew Lock's blog

This sixth post is about a blog that is full of different, but detailed posts about .NET Core and ASP.NET Core. The blog's name, ".NET Escapades", kind of describes it: the author writes about almost everything he experiences related to .NET Core and ASP.NET Core.

This blog is run by Andrew Lock, a full-stack ASP.NET developer living in Devon (UK). Like the other blog authors I introduced in the previous advent posts, he is a Microsoft MVP and very involved in and well known within the .NET developer community.

He also published the book ASP.NET Core in Action in June last year.

Blog: https://andrewlock.net/

GitHub: https://github.com/andrewlock

Twitter: https://twitter.com/andrewlocknet

Jürgen Gutsch: ASP.NET Hack Advent Post 05: .NET Core 3.1 is out

Ok, this is not one of the posts I expected to write for the ASP.NET Hack Advent, but it is an important one anyway and not that off-topic.

On Monday .NET Core 3.1 was released, which is a long term support (LTS) release. This release is full of bugfixes and housekeeping. On the ASP.NET Core side it brings a lot of improvements for Blazor, and it now supports .NET Standard 2.1.

Check out the following links and download the latest version:

BTW: The next version of .NET Core will be called .NET 5 and will be released around November next year.

Jürgen Gutsch: ASP.NET Hack Advent Post 04: Damien Bowden's blog

Do you want to write secure ASP.NET Core applications and learn how to protect your single page application? One of the best places to learn about security is the blog of Damien Bowden.

Damien has been a Microsoft MVP since 2016, lives in Switzerland and is very interested in web development, application security and Azure. He is very engaged and well known in the .NET community.

Damien: "My favourite technologies are ASP.NET Core, OpenID Connect, OAuth, SQL, EF Core, Angular, Typescript."

His blog is full of very detailed deep-dive posts about ASP.NET Core Identity, IdentityServer, OAuth and OpenID Connect. Besides blogging he also speaks at meetups, user groups and conferences. He is involved in many open source projects, including Microsoft's projects on GitHub.

Blog: https://damienbod.com/

GitHub: https://github.com/damienbod

Twitter: https://twitter.com/damien_bod

Golo Roden: Wie viele Programmiersprachen sind zu viel?

Various approaches increasingly make it possible to combine different programming languages within a single project. But not everything that is technically possible is also sensible.

Jürgen Gutsch: ASP.NET Hack Advent Post 03: Fritz and Friends

Twitch isn't just for gamers and folks who want to watch gamers playing games. There are also a lot of developers who do live coding on Twitch. And the guy who has been doing this for a long time (since almost forever) is Jeff Fritz.

Jeff is a Program Manager at Microsoft on the ASP.NET and .NET Community Outreach teams, and he was responsible for ASP.NET Web Forms and WCF in the past.

Several times a week he starts a really entertaining live stream on Twitch that you should definitely watch. During the live stream you are able to interact directly with him and with the rest of the audience. It is definitely a fun show that also contains a lot of things to learn.

Twitch: https://www.twitch.tv/csharpfritz

Blog: https://jeffreyfritz.com/

Twitter: https://twitter.com/csharpfritz

Do you miss the older videos on Twitch? Just have a look at this channel on YouTube:

https://www.youtube.com/user/jfritz828

Jürgen Gutsch: ASP.NET Hack Advent Post 02: Steve Gordon's blog

Do you like to deep-dive into specific ASP.NET Core and .NET Core topics? If yes, you should definitely bookmark Steve Gordon's blog about .NET Core and ASP.NET Core. Steve is a senior .NET developer and Microsoft MVP working in Brighton, East Sussex (UK). He works on various open source projects, is quite an active part of the .NET developer community and runs a .NET user group in Brighton.

Blog: https://www.stevejgordon.co.uk/

Twitter: https://twitter.com/stevejgordon

What I just learned while writing this post is that he also has a channel on YouTube: https://www.youtube.com/channel/UC_rMpypHGP8_J8AAo_bLCmA/videos

Jürgen Gutsch: ASP.NET Hack Advent Post 01: NDC Conference Videos

Maybe you already know the NDC Conferences that are organized around the world (in Oslo, London, Sydney, Minnesota and more). But did you know that there is a YouTube channel with almost all of the recorded sessions? The NDC organizers publish the recordings in the weeks after each conference, so you are able to watch them afterwards. The videos are organized in playlists per conference. You'll find a lot of talks given by top speakers from all over the world.

So, have a look and subscribe to the NDC Conferences channel: https://www.youtube.com/channel/UCTdw38Cw6jcm0atBPA39a0Q/playlists

Code-Inside Blog: Did you know that you can build .NET Core apps with MSBuild.exe?

The problem

We recently updated a bunch of our applications to .NET Core 3.0. Because of the compatibility changes to the “old framework” we try to move more and more projects to .NET Core, but some projects still target .NET Framework 4.7.2; they should work “ok-ish” when used from .NET Core 3.0 applications.

The first tests were quite successful, but unfortunately when we tried to build and publish the updated .NET Core 3.0 app via ‘dotnet publish’ (with a reference to a .NET Framework 4.7.2 project) we faced this error:

C:\Program Files\dotnet\sdk\3.0.100\Microsoft.Common.CurrentVersion.targets(3639,5): error MSB4062: The "Microsoft.Build.Tasks.AL" task could not be loaded from the assembly Microsoft.Build.Tasks.Core, Version=15.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a.  Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. 

The root cause

After some experiments we saw a pattern:

Each .NET Framework 4.7.2 based project with a ‘.resx’ file would result in the above error.

The solution

‘.resx’ files are still a valid thing to use, so we checked whether we could work around the problem, but unfortunately this was not very successful. We moved some resources, but in the end some resources had to stay in the corresponding file.

We had used the ‘dotnet publish’ command to build and publish .NET Core based applications, but then I tried to build the .NET Core application with MSBuild.exe and discovered that this worked.

Lessons learned

If you have a mixed environment with “old” .NET Framework based applications with resources in use and want to use this in combination with .NET Core: Try to use the “old school” MSBuild.exe way.

MSBuild.exe is capable of building .NET Core applications, and the result is more or less the same.
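
For reference, the two approaches look roughly like this on the command line. The project path and output folder are placeholders rather than the exact values from our build, and MSBuild.exe is easiest to reach from a Visual Studio 2019 Developer Command Prompt:

REM the usual .NET Core way
dotnet publish MyApp\MyApp.csproj -c Release -o publish

REM the "old school" way via MSBuild.exe
MSBuild.exe MyApp\MyApp.csproj /restore /t:Publish /p:Configuration=Release /p:PublishDir=publish\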

Be aware

Regarding ASP.NET Core applications: the ‘dotnet publish’ command will create a web.config file; if you use the MSBuild approach, this file will not be created automatically. I’m not sure if there is a hidden switch, but if you just treat .NET Core apps like .NET Framework console applications, the web.config file is not generated. This might lead to some problems when you deploy to IIS.

Hope this helps!

Holger Schwichtenberg: Ist ASP.NET Core Blazor nun fertig oder noch nicht?

The Dotnet-Doktor explains the difference between Blazor Server (which has reached RTM status) and Blazor WebAssembly (which is still in preview).

Jürgen Gutsch: ASP.NET Core 3.0 Weather Application - The gRPC Client

Introduction

I'm going to write an application that reads weather data in, stores it and provides statistical information about that weather. In this case I use downloaded data from a weather station in Kent (WA), and I'm going to simulate a day in two seconds.

I will write a small gRPC service which will be our weather station in Kent. I'm also going to write a worker service that hosts a gRPC client to connect to the weather station and fetch the data every day. This worker service also stores the data in a database. The third application is a Blazor app that fetches the data from the database and displays it in a chart and in a table.

In this post I'm going to continue with the client that fetches the data from the server. I will create a worker service which fetches the weather data from the previously created weather station. The worker service will include a gRPC client to connect to the service, and it will store the data in a MongoDB database.

Setup the app

As already mentioned I would like to use a worker service that fetches the weather data periodically from the weather station.

With your console, change to the directory where the weather stats solution is located. As always, we will use the .NET CLI to create new projects and to work with .NET Core projects. The next two commands create a new worker service application and add the project to the current solution file:

dotnet new worker -n WeatherStats.Worker -o WeatherStats.Worker
dotnet sln add WeatherStats.Worker

This worker service project is basically a console application that executes a background service using the new generic hosting environment:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                services.AddHostedService<Worker>();
            });
}

In the Program.cs an IHostBuilder is created that initializes some cool features like logging, configuration and dependency injection. But it doesn't initialize the web stack that is needed for ASP.NET Core. In the ConfigureServices method a HostedService is added to the dependency injection container. This is the actual background service. Let's rename it to WeatherWorker and have a short glimpse at the default implementation:

public class WeatherWorker : BackgroundService
{
    private readonly ILogger<WeatherWorker> _logger;

    public WeatherWorker(ILogger<WeatherWorker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken);
        }
    }
}

I just realized that there is no unique wording for this kind of service. There is Worker Service, Background Service and Hosted Service. In general, it is all the same thing.

A HostedService is a class that gets added to the dependency injection container to get executed by the generic hosting service once after the application starts. This could be used to initialize a database or something else. The class gets executed asynchronously in the background. If it runs an endless loop to execute stuff periodically, we could call it a service, like a Windows service. Because it also runs asynchronously in the background, it is a Background Service. The implementation of a Background Service is called a worker in these kinds of projects. That's why we also talk about a Worker Service. The entire application could also be called a Worker Service, since it runs workers like a service does.

Now we need to create the gRPC client to fetch the data from the weather station:

The gRPC client

Creating the gRPC client needs some configuration, since there is no gRPC client template project available in the .NET CLI yet. Since the server and the client have to use the same proto file to set up a connection, it makes sense to copy the proto file of the server project into the solution folder and share it between the projects. This is why I created a new Protos folder in the solution folder and moved the weather.proto into it.

This requires us to change the link to the proto file in the project files. The server:

<ItemGroup>
  <Protobuf Include="..\Protos\weather.proto" 
    GrpcServices="Server" 
    Link="Protos\weather.proto" />
</ItemGroup>

The client:

<ItemGroup>
  <Protobuf Include="..\Protos\weather.proto" 
    GrpcServices="Client" 
    Link="Protos\weather.proto" />
</ItemGroup>

You see that the code is pretty much the same except for the value of the GrpcServices attribute. This tells the tooling to generate either the client or the server services.

We also need to add some NuGet Packages to the project file:

<PackageReference Include="Grpc" Version="2.24.0" />
<PackageReference Include="Grpc.Core" Version="2.24.0" />
<PackageReference Include="Google.Protobuf" Version="3.9.2" />
<PackageReference Include="Grpc.Net.Client" Version="2.24.0" />
<PackageReference Include="Grpc.Tools" Version="2.24.0" PrivateAssets="All" />

These packages are needed to generate the client code out of the weather.proto and to access the client in C#.

Until now we didn't add any C# code. So let's open the Worker.cs and add some code to the ExecuteAsync method. But first remove the lines inside this method.

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    _logger.LogInformation("create channel");
    using (var channel = GrpcChannel.ForAddress("https://localhost:5001"))
    {
        _logger.LogInformation("channel created");

        _logger.LogInformation("create client");
        var client = new Weather.WeatherClient(channel);
        _logger.LogInformation("client created");

        // Add your logic here
        // ...
    }
}

I added a lot of logging in this method that writes out to the console. This is for debugging purposes, and to see what is happening in the worker app. At first we create a channel to the server. This will connect to the server at the given address. Then we need to create the actual client using the channel. The client was generated from the proto file, contains all the defined methods and uses the defined types.

var d = new DateTime(2019, 1, 1, 0, 0, 0, DateTimeKind.Utc);
while (!stoppingToken.IsCancellationRequested)
{
    try
    {
        _logger.LogInformation("load weather data");
        var request = new WeatherRequest
            {
                Date = Timestamp.FromDateTime(d)
            };
        var weather = await client.GetWeatherAsync(
            request, null, null, stoppingToken);
        _logger.LogInformation(
            $"Temp: {weather.AvgTemperature}; " +
            $"Precipitaion: {weather.Precipitaion}");
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, ex.Message);
    }
    d = d.AddDays(1); // add one day
    await Task.Delay(1000, stoppingToken);
}

This snippet simulates the daily execution. There is a DateTime defined that represents the first of January 2019. (This is the first day of our weather time series in the database.) On every iteration of the while loop we add one day to fetch the weather data of the next day.

In this snippet I use the client to call the generated method GetWeatherAsync with a new WeatherRequest. The WeatherRequest contains the current DateTime as a Google Protobuf Timestamp. The Timestamp type already has methods to convert .NET UTC DateTimes into this kind of timestamp.

After I retrieved the weather data, I write some of the information out to the console.

Now I am able to run both applications using two console sessions: one for the server and one for the client. The worker service application should be able to connect to the weather station and fetch the data:
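
A sketch of how to start them with the .NET CLI, assuming the project folder names used in this series (run each command in its own terminal):

# terminal 1: the weather station (gRPC server)
dotnet run --project WeatherStats.Kent

# terminal 2: the worker service (gRPC client)
dotnet run --project WeatherStats.Worker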

As you can see in the console output, it works absolutely fine.

I'm now going to add some code to write the data to the database which is also used by the Web UI.

The database

Since the applications will run on Docker, I'm going to use an open source database server to store the data. This time I need to share the database with the UI project: the current app writes the data into the database and the UI project will read and display it. So I need a separate container that hosts the database. In this case I'm going to use MongoDB.

To use MongoDB I need to add the MongoDB driver package first:

<PackageReference Include="MongoDB.Driver" Version="2.9.3" />

First I define a WeatherService that contains the connection to MongoDB:

public interface IWeatherService
{
    Task<List<WeatherData>> Get();
    Task<WeatherData> Get(int id);
    Task<WeatherData> Create(WeatherData weather);
    Task Update(int id, WeatherData weatherIn);
    Task Remove(WeatherData weatherIn);
    Task Remove(int id);
}
public class WeatherService : IWeatherService
{
    private readonly IMongoCollection<WeatherData> _weatherData;

    public WeatherService(IWeatherDatabaseSettings settings)
    {
        var client = new MongoClient(settings.ConnectionString);
        var database = client.GetDatabase(settings.DatabaseName);

        _weatherData = database.GetCollection<WeatherData>(
            settings.WeatherCollectionName);
    }

    public async Task<List<WeatherData>> Get() =>
        (await _weatherData.FindAsync(weather => true)).ToList();

    public async Task<WeatherData> Get(int id) =>
        (await _weatherData.FindAsync<WeatherData>(weather => weather.Id == id)).FirstOrDefault();

    public async Task<WeatherData> Create(WeatherData weather)
    {
        await _weatherData.InsertOneAsync(weather);
        return weather;
    }
    
    public async Task Update(int id, WeatherData weatherIn) =>
        await _weatherData.ReplaceOneAsync(weather => weather.Id == id, weatherIn);

    public async Task Remove(WeatherData weatherIn) =>
        await _weatherData.DeleteOneAsync(weather => weather.Id == weatherIn.Id);

    public async Task Remove(int id) =>
        await _weatherData.DeleteOneAsync(weather => weather.Id == id);
}
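
The post doesn't show the WeatherData document and the settings types the service depends on. Here is a minimal sketch of how they could look; the property names are inferred from the snippets in this post, so treat them as assumptions:

using System;
using MongoDB.Bson.Serialization.Attributes;

// Hypothetical settings types, matching the registration shown below.
public interface IWeatherDatabaseSettings
{
    string WeatherCollectionName { get; set; }
    string ConnectionString { get; set; }
    string DatabaseName { get; set; }
}

public class WeatherDatabaseSettings : IWeatherDatabaseSettings
{
    public string WeatherCollectionName { get; set; }
    public string ConnectionString { get; set; }
    public string DatabaseName { get; set; }
}

// Hypothetical MongoDB document; the properties match the fields written by the worker.
public class WeatherData
{
    [BsonId]
    public int Id { get; set; }
    public string WeatherStation { get; set; }
    public DateTime Date { get; set; }
    public float AvgTemperature { get; set; }
    public float MinTemperature { get; set; }
    public float MaxTemperature { get; set; }
    public float AvgWindSpeed { get; set; }
    public float Precipitaion { get; set; }   // spelling matches the gRPC property used in this series
}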

This WeatherService and the needed Settings need to be registered in the Program.cs:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureServices((hostContext, services) =>
        {                    
            services.Configure<WeatherDatabaseSettings>(
                hostContext.Configuration.GetSection(nameof(WeatherDatabaseSettings)));

            services.AddSingleton<IWeatherDatabaseSettings>(sp =>
                sp.GetRequiredService<IOptions<WeatherDatabaseSettings>>().Value);

            services.AddTransient<IWeatherService, WeatherService>();

            services.AddHostedService<WeatherWorker>();
        });

This registration works the same way as in a regular ASP.NET Core Startup class. The DI container is the same; only the location where the configuration is done differs.

First I register the WeatherDatabaseSettings, which reads the settings from the appsettings.json. The second line registers an instance of the settings together with the settings interface. This is not really needed, but shows how you could register a service like this.

The third registration is the actual WeatherService.

Since the connection string is read from the appsettings.json file, I also need to add the connection string here:

{
  "WeatherDatabaseSettings": {
    "WeatherCollectionName": "WeatherData",
    "ConnectionString": "mongodb+srv://weatherstats:weatherstats@instancename.azure.mongodb.net/test?retryWrites=true&w=majority",
    "DatabaseName": "WeacherDataDb"
  },
  // ...
}

Currently it is a MongoDB instance hosted on Azure, but later on I will use an instance inside a Docker container. It seems more useful to have it all boxed in containers. From my perspective this makes shipping the entire application easier and more flexible.

Once the WeatherService is registered, I'm almost able to use it in the Worker.cs. I need to inject the WeatherService first:

public class WeatherWorker : BackgroundService
{
    private readonly ILogger<WeatherWorker> _logger;
    private readonly IWeatherService _weatherService;

    public WeatherWorker(ILogger<WeatherWorker> logger,
        IWeatherService weatherService)
    {
        _logger = logger;
        _weatherService = weatherService;
    }

Now I can add these lines to save the weather data to the database:

await _weatherService.Create(new WeatherData
{
    Id = i, // i is a running day counter, incremented alongside d in the loop (not shown in the snippet above)
    WeatherStation = "US1WAKG0045",
    AvgTemperature = weather.AvgTemperature,
    AvgWindSpeed = weather.AvgWindSpeed,
    MaxTemperature = weather.MaxTemperature,
    MinTemperature = weather.MinTemperature,
    Precipitaion = weather.Precipitaion,
    Date = weather.Date.ToDateTime()
});

That's it. Now the weather data fetched from the weather station will be saved into a database using a worker service.

Conclusion

This is working quite well. It's actually the first time I've used MongoDB, but it's nice, since it just works and is easy to set up. During development I'm going to use the instance on Azure, and later on I will set up a dockerized instance.

I really like the way gRPC works and how easy it is to set up a gRPC client. But I think it makes sense to have a gRPC client template available in the .NET CLI by default. That way you wouldn't need to hunt for the right packages in various blog posts and documentation, which gets hard and confusing when some of the resources are just a little bit outdated. The way to add a gRPC service as a service reference using Visual Studio 2019 is nice, but doesn't really help developers who use VS Code and/or are working on different platforms.

As mentioned, the worker and the weather station are working pretty well, but there is still a lot to do:

  • I need to create the web client
  • I will add health checks to monitor the entire application
  • I need to dockerize all the stuff
  • I will push it all somewhere

But these are topics for the next blog posts :-)

Jürgen Gutsch: ASP.NET Hack Advent

During this year's Advent, I'd like to post a kind of Advent calendar. Until December 24th I'm going to post a link to a good community resource and a few lines about it. The resource should be about ASP.NET Core, .NET Core, C# or a related topic. It might be a blog post, a useful open source project or a video. It shouldn't be commercial, and it should be as new as possible, ideally not older than two months. None of the linked resources will be created by me.

You got the idea?

I'm going to push the Advent calendar posts every day via Twitter, and I will list them here in this introduction post:

So far I have only a few ideas about which resources to write about, so feel free to propose some nice and useful links in the comments below.

Golo Roden: Tools für Web- und Cloud-Entwicklung

The last episode of "Götz & Golo" dealt with the question of when teams work well together, with a focus on remote versus on-site work. But what about the tools being used?

Holger Schwichtenberg: User-Group-Vortrag und Workshop zu Continuous Delivery mit Azure DevOps

The Dotnet-Doktor gives a talk on November 7 and offers a workshop in Essen from December 2 to 4.

Norbert Eder: MySQL-Queries mitloggen

With Microsoft SQL Server you can log SQL queries quite easily by simply starting the SQL Server Profiler. MySQL doesn't offer such a tool; at least MySQL Workbench can't do it. Nevertheless, you can still record the executed queries.

For example, you can write all queries to a log file like this:

SET global general_log_file='c:/Temp/mysql.log'; 
SET global general_log = on; 
SET global log_output = 'file';

Of course, this can be deactivated again:

SET global general_log = off; 

Further information can be found in the MySQL documentation.

The post MySQL-Queries mitloggen first appeared on Norbert Eder.

Christina Hirth : My Reading List @KDDDConf

(formerly known as KanDDDinsky 😉)

Accelerate - Building and Scaling High Performing Technology Organizations

Accelerate by Nicole Forsgren, Gene Kim, Jez Humble

This book was referenced in a lot of talks, mostly with the same phrase: "hey folks, you have to read this!"


Domain Modeling Made Functional by Scott Wlaschin

The book was called the only real, currently published reference work for DDD with functional programming.

More books and videos can be found on fsharpforfunandprofit


Functional Core, Imperative Shell by Gary Bernhardt – a talk

The comments on this tweet tell me that watching this video is long overdue…


37 Things One Architect Knows About IT Transformation by Gregor Hohpe

The name @ghohpe was also mentioned a few times at @KDDDconf


Domain Storytelling

A Collaborative Modeling Method

by Stefan Hofer and Henning Schwentner


Drive: The surprising truth about what motivates us by Daniel H Pink

There is also a TLDR-Version: a talk on vimeo


Sapiens – A Brief History of Humankind by Yuval Noah Harari

This book was recommended by @weltraumpirat after our short discussion about how broken our industry is. Thank you Tobias! I'm afraid the book will give me no happy ending.

UPDATE:

It is not a take-away from KDDD-Conf but still a must-have book (thank you Thomas): The Phoenix Project

Jürgen Gutsch: ASP.NET Core 3.0 Weather Application - The gRPC Server

Introduction

As mentioned in the last post, the next couple of posts will be a series that describes how to build a kind of microservice application that reads weather data in, stores it and provides statistical information about that weather.

I'm going to use gRPC, Worker Services, SignalR and Blazor, and maybe IdentityServer to secure all the services. If some time is left, I'll put all the stuff into Docker containers.

I will write a small gRPC service which will be our weather station in Kent. I'm also going to write a worker service that hosts a gRPC client to connect to the weather station and fetch the data every day. This worker service also stores the data in a database. The third application is a Blazor app that fetches the data from the database and displays it in a chart and in a table.

In this case I use downloaded weather data of Washington state and I'm going to simulate a day in two seconds.

In this post I will start with the weather station.

Setup the app

In my local git project dump folder I create a new folder called WeatherStats, which will be my project solution folder:

mkdir weatherstats
cd weatherstats
dotnet new sln -n WeatherStats
dotnet new grpc -n WeatherStats.Kent -o WeatherStats.Kent
dotnet sln add WeatherStats.Kent

These lines create the folder and a new solution file (sln) with the name WeatherStats. The fourth line creates the gRPC project and the last line adds the project to the solution file.

The solution file helps MSBuild to build all the projects and to see the dependencies, and it helps users who like to use Visual Studio.

If this is done I open VSCode using the code command in the console:

code .

The database is the SQLite database that I created for my talk about the ASP.NET Core Health Checks. Just copy this database into the folder of the weather station project, WeatherStats.Kent, in your own repository.

In the Startup.cs we only have the services for gRPC registered:

services.AddGrpc();

But we also need to add a DbContext:

services.AddDbContext<ApplicationDbContext>(options =>
{
    options.UseSqlite(
        Configuration["ConnectionStrings:DefaultConnection"]);
});

The configuration points to a SQLite database in the current project:

{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=wa-weather.db"
  },
  // ...
}

In the Configure method the gRPC middleware is mapped to the WeatherService:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<WeatherService>();

        endpoints.MapGet("/", async context =>
        {
            await context.Response.WriteAsync("Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit: https://go.microsoft.com/fwlink/?linkid=2086909");
        });
    });
}

Something special in this project type is the Protos folder with the greet.proto in it. This is a text file that describes the gRPC endpoint. We are going to rename it to weather.proto later on and change it a little bit. If you change the name outside of Visual Studio 2019, you also need to change it in the project file. I never tried it, but the Visual Studio 2019 tooling should also rename the references.

You will also find a GreeterService in the Services folder. This file is the implementation of the service that is defined in the greet.proto.

And last but not least we have the DbContext to create, which isn't really complex in our case:

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<WeatherData>()
            .HasKey(x => x.Id );
        modelBuilder.Entity<WeatherData>()
            .HasOne(p => p.WeatherStation)
                .WithMany(b => b.WeatherData);
        modelBuilder.Entity<WeatherStation>()
            .HasKey(x => x.Id);
    }

    public DbSet<WeatherData> WeatherData { get; set; }
    public DbSet<WeatherStation> WeatherStation { get; set; }
}
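
The WeatherData and WeatherStation entities aren't shown in this post. Here is a minimal sketch of how they could look, inferred from the model configuration above and the WeatherService further down; the property names and key types are assumptions, so adjust them to the actual schema of the SQLite database:

using System;
using System.Collections.Generic;

// Hypothetical entity sketches for the EF Core model.
public class WeatherStation
{
    public string Id { get; set; }                  // e.g. "US1WAKG0045"
    public List<WeatherData> WeatherData { get; set; }
}

public class WeatherData
{
    public int Id { get; set; }
    public string WeatherStationId { get; set; }
    public WeatherStation WeatherStation { get; set; }
    public DateTime Date { get; set; }
    public float AvgTemperature { get; set; }
    public float MinTemperature { get; set; }
    public float MaxTemperature { get; set; }
    public float AvgWindSpeed { get; set; }
    public float Precipitaion { get; set; }         // spelling matches the proto field in this series
}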

The gRPC endpoint

Let's start by changing the gRPC endpoint. Personally, I really love to start coding from the UI perspective; this forces me to not do more than the UI really needs. In our case the gRPC endpoint is the UI, so I use the weather.proto file to design the API:

syntax = "proto3";
import "google/protobuf/timestamp.proto";

option csharp_namespace = "WeatherStats.Kent";

package Weather;

// The weather service definition.
service Weather {
  // Returns the weather for the requested date
  rpc GetWeather (WeatherRequest) returns (WeatherReply);
}

// The request message containing the date.
message WeatherRequest {
  google.protobuf.Timestamp date = 1;
}

// The response message containing the weather.
message WeatherReply {
  google.protobuf.Timestamp date = 1;
  float avgTemperature = 2;
  float minTemperature = 3;
  float maxTemperature = 4;
  float avgWindSpeed = 5;
  float precipitaion = 6;
}

I need to import the timestamp support to work with dates. The namespace was predefined by the tooling. I changed the package name and the service name to Weather. The rpc method is now called GetWeather; it takes a WeatherRequest as an argument and returns a WeatherReply.

After that, the types (messages) are defined. The WeatherRequest only contains the requested date. The WeatherReply contains the date as well as the actual weather data of that specific day.

That's it. When I now build the application, the gRPC tooling generates a lot of C# code for us in the background. This code will be used in the WeatherService, which fetches the data from the database:

public class WeatherService : Weather.WeatherBase
{
    private readonly ILogger<WeatherService> _logger;
    private readonly ApplicationDbContext _dbContext;

    public WeatherService(
        ILogger<WeatherService> logger,
        ApplicationDbContext dbContext)
    {
        _logger = logger;
        _dbContext = dbContext;
    }

    public override Task<WeatherReply> GetWeather(
        WeatherRequest request, 
        ServerCallContext context)
    {
        var weatherData = _dbContext.WeatherData
            .SingleOrDefault(x => x.WeatherStationId == WeatherStations.Kent
                && x.Date == request.Date.ToDateTime());

        return Task.FromResult(new WeatherReply
        {
            // Fall back to the requested date if no data was found,
            // to avoid a NullReferenceException on weatherData.Date
            Date = weatherData != null ? Timestamp.FromDateTime(weatherData.Date) : request.Date,
            AvgTemperature = weatherData?.AvgTemperature ?? float.MinValue,
            MinTemperature = weatherData?.MinTemperature ?? float.MinValue,
            MaxTemperature = weatherData?.MaxTemperature ?? float.MinValue,
            AvgWindSpeed = weatherData?.AvgWindSpeed ?? float.MinValue,
            Precipitaion = weatherData?.Precipitaion ?? float.MinValue
        });
    }
}

This service fetches a specific WeatherData item from the database using the Entity Framework Core DbContext that we created previously. gRPC uses its own date and time implementation, the Timestamp type from the Google.Protobuf.WellKnownTypes namespace (it ships with the Google.Protobuf NuGet package), which also provides functions to convert between the two date and time implementations.
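
As a small illustration of how the conversion between the two implementations works (assuming the mentioned namespace is imported):

using System;
using Google.Protobuf.WellKnownTypes;

var utcNow = DateTime.UtcNow;

// DateTime -> Timestamp (the DateTime has to be of kind UTC)
Timestamp timestamp = Timestamp.FromDateTime(utcNow);

// Timestamp -> DateTime
DateTime dateTime = timestamp.ToDateTime();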

The WeatherService derives from the WeatherBase class, which is auto generated from the weather.proto file. Also the types WeatherRequest and WeatherReply are auto generated as defined in the weather.proto. As you can see the WeatherBase is in the WeatherStats.Kent.Weather namespace, which is a combination of the csharp_namespace and the package name.

That's it. We are able to test the service after the client is done.

Conclusion

This is all the code for the weather station. Not really complex, but enough to demonstrate the gRPC server.

In the next part, I will show how to connect to the gRPC server using a gRPC client and how to store the weather data in a database. The client will run inside a worker service to fetch the data regularly, e.g. once a day.

Code-Inside Blog: IdentityServer & Azure AD Login: Unknown Response Type text/html

The problem

Last week we had some problems with our Microsoft Graph / Azure AD login based system. From a user perspective it was all good until the redirect from the Microsoft Account to our IdentityServer.

As STS and for all auth related stuff we use the excellent IdentityServer4.

We used the following configuration:

services.AddAuthentication()
            .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
            {
                options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                options.ClientSecret = office365Config.MicrosoftAppClientSecret;    // Client-Secret from the AppRegistration 
                options.Authority = office365Config.AuthorizationEndpoint;          // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                options.ResponseType = "code id_token";
                options.GetClaimsFromUserInfoEndpoint = true;
                options.SaveTokens = true;
                options.CallbackPath = "/oidc-signin"; 
                
                foreach (var scope in office365Scopes)
                {
                    options.Scope.Add(scope);
                }
            });

The “office365config” contains the basic OpenId Connect configuration entries like ClientId and ClientSecret and the needed scopes.

Unfortunately, with this configuration we couldn’t log in to our system, because after we successfully signed in to the Microsoft Account this error occurred:

System.Exception: An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
   at IdentityServer4.Hosting.FederatedSignOut.AuthenticationRequestHandlerWrapper.HandleRequestAsync() in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\FederatedSignOut\AuthenticationRequestHandlerWrapper.cs:line 38
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Cors.Infrastructure.CorsMiddleware.InvokeCore(HttpContext context)
   at IdentityServer4.Hosting.BaseUrlMiddleware.Invoke(HttpContext context) in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\BaseUrlMiddleware.cs:line 36
   at Microsoft.AspNetCore.Server.IIS.Core.IISHttpContextOfT`1.ProcessRequestAsync()

Fix

After some code research I found the problematic code:

We just needed to disable “GetClaimsFromUserInfoEndpoint” and everything worked. I’m not sure why the error occurred, because this code was more or less untouched for a couple of months and worked as intended. I’m not even sure what “GetClaimsFromUserInfoEndpoint” really does in combination with a Microsoft Account.

I wasted one or two hours on this behavior and maybe this will help someone in the future. If someone knows why this happened: use the comment section or write me an email :)

Full code:

   services.AddAuthentication()
                .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
                {
                    options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                    options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                    options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                    options.ClientSecret = office365Config.MicrosoftAppClientSecret;  // Client-Secret from the AppRegistration 
                    options.Authority = office365Config.AuthorizationEndpoint;        // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                    options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                    options.ResponseType = "code id_token";
                    // Don't enable the UserInfoEndpoint, otherwise this may happen
                    // An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
                    // at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
                    options.GetClaimsFromUserInfoEndpoint = false; 
                    options.SaveTokens = true;
                    options.CallbackPath = "/oidc-signin"; 
                    
                    foreach (var scope in office365Scopes)
                    {
                        options.Scope.Add(scope);
                    }
                });

Hope this helps!

Martin Richter: Top marks for the support from Schaudin / RC-WinTrans

For years we have been using RC-WinTrans from Schaudin.com for the multilingual support of our software.

A change in VC-2019 16.3.3 means that RC files are no longer saved with ANSI codepage 1252 but always as UTF-8 files. In other words, all RC files that are not already in UTF-8 or UTF-16 are forcibly converted to UTF-8.

Now we had a problem. Our tools from Schaudin (RC-WinTrans) cannot handle UTF-8 in the version we use. First I opened a case with Microsoft, because such a forced encoding is a no-go for me.

A question on Stack Overflow brought no insight, except that the problem is already known from several incidents:
Link1, Link2, Link3

So I contacted Schaudin's support. Newer versions of the tools cannot handle UTF-8 either, but they can handle UTF-16. So we would simply have to buy an update.
After a few emails back and forth, Schaudin offered to give me the next version after mine (which also supports UTF-16) free of charge.

I am a bit speechless! A free upgrade to the next version is not exactly common in our world.

I say thank you and give Schaudin top marks for goodwill and support.



Holger Schwichtenberg: Up-to-date books on C# 8.0 and Entity Framework Core 3.0

The Dotnet-Doktor has updated his books on C# 8.0 and Entity Framework Core 3.0 to the final versions released on September 23, 2019.

Norbert Eder: Cascadia Code: A new font for Visual Studio Code

Microsoft has released a new non-proportional (monospaced) font (for Visual Studio Code, Terminal, etc.): Cascadia Code.

This is a fun, new monospaced font that includes programming ligatures and is designed to enhance the modern look and feel of the Windows Terminal.

I have tested the font and can recommend it. Here is how you can use it:

Installation

Open the Cascadia Code releases page. Click on Cascadia.ttf to download the file to your computer. Then open the font with the Windows font viewer.

In the upper left, the font can now be installed and registered on the system via Install.

Now the font can be used in any application.

Changing the font in Visual Studio Code

Under File > Preferences > Settings > Text Editor > Font the font used in Visual Studio Code can be changed. Simply enter 'Cascadia Code', Consolas, 'Courier New', monospace in the Font Family field. To use ligatures, the corresponding flag has to be enabled:

Screenshot: Configuring Cascadia Code and ligatures in Visual Studio Code
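
As a sketch, the corresponding entries in the settings.json could look like this ("editor.fontLigatures" is the flag for ligatures mentioned above):

{
  "editor.fontFamily": "'Cascadia Code', Consolas, 'Courier New', monospace",
  "editor.fontLigatures": true
}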


Christina Hirth : About silos and hierarchies in software development

Disclaimer: this is NOT a rant about people. In most situations all devs I know want to deliver good work. This is a rant about organisations that impose such structures while calling themselves “an agile company”.

To give you some context: a digital product, sold online as a subscription. The application in my scenario is the usual admin portal to manage customers and get an overview of their payment situation, like balance, etc.
The application is built and maintained by a frontend team. The team is using the GraphQL API built and maintained by a backend team. Every team has a team lead, and above all of them there is at least one other lead. (Of course there is also a lot of other middle management, etc.)

Some time ago somebody must have decided to include in the API a field called “total” containing the balance of the customer so that it can be displayed in the portal. Obviously I cannot know what happened (I’m just a user of this product), but the fact is that this total was implemented as an integer. Do you see the problem? We are talking about money displayed on a website, about a balance which is almost never an integer. This small mistake made the whole feature unusable.

Point 1: Devs implement technical requests instead of improving the product
I don’t know if the developer who implemented this made an error by not thinking about what this total should represent, or simply didn’t have the experience in e-commerce, but that is not my point. My point is that this person was obviously not involved in the discussion about this feature, why it is needed, and what the benefit is. I can see in my mind's eye how this feature was turned into code: the team lead, software lead (xyz lead) decided that this task has to be done. The task didn’t refer to the customer benefit; it stripped everything down to “include a new property called total having as value the sum of some other numbers”. I can see it because I had a lot of meetings like this. I delivered a string to the other team and this string was sometimes a URL and sometimes a name. But I did this in a company which didn’t call itself agile.

Point 2: No chance for feedback, no chance for commitment to the product
Again: I wasn’t there when this feature was requested and built, I can only imagine that this is what happened, but it really doesn’t matter. It is not about a specific company or specific people, but about the ability to deliver features instead of just some lines of code sold as a product. Back to my “total”: this code was reviewed, integrated, deployed to development, then to some in-between stages and finally to production. NOBODY in this whole chain asked themselves if the new field included in a public(!) API was implemented as it should be. And I would bet that nobody from the frontend team was asked to review the API to see if their needs could be fulfilled.

Point 3: Power play and information hiding make teams artificially slow (and kill innovation and the wish to commit to the product they build)
If this structure weren't built on power, position and titles, then the first person observing the error could have talked to the very first developer in the team responsible for the feature to correct it. They could have changed it in a few minutes (this was the first person noticing the error, ergo nobody was using it yet) and everybody would have been happy. But not if you have leads of every kind who must be involved in everything (because this is why they have their position, isn’t it?). Then somebody young and enthusiastic wanting to deliver a good product would create a JIRA ticket. In a week or two this ticket will eventually be discussed (by the leads of course) and analyzed, and it will eventually move forward in the backlog – or not. It doesn’t matter anyway, because the frontend team had a deadline and they had to solve their problem somehow.

Epilogue: the culture of “talk only to the leads” bans cooperation between teams
At this moment I finally understood the reason behind another annoying behavior in the admin panel: the balance is calculated in the frontend and is equal to the sum of the shown items. I needed some time to discover this and was always wondering WTF… Now I can see what happened: the total in the API was not a total (only the integer part of the balance) and the ticket had to be finished, so somebody had the idea to create a total by adding up the values from the displayed items. Unfortunately this was a very short-sighted idea, because it only works if you have fewer than 25 payments, the default number of items per page. Or you can use the calculator app to add up the single totals on every page…

All this is wrong on so many levels! For every person involved it is a lose-lose situation.

What do you think? Is it only me arguing for a better “habitat for devs”, or is it time for this kind of structure to disappear?

Jürgen Gutsch: New in ASP.NET Core 3.0: Worker Services

I mentioned in one of the first posts of this series that we are now able to create ASP.NET Core applications without a web server and without all the HTTP stuff that is needed to provide content via HTTP or HTTPS. At first glance it sounds weird. Why should I create an ASP.NET application that doesn't provide any kind of endpoint over HTTP? Is this really ASP.NET? Well, it is not ASP.NET in the sense of creating web applications. But it is part of ASP.NET Core and uses all the cool features that we got used to in ASP.NET Core:

  • Logging
  • Configuration
  • Dependency Injection
  • etc.

In this kind of application we are able to spin up a worker service which is completely independent of the HTTP stack.

Worker services can run in any kind of .NET Core application, but they don't need the IWebHostBuilder to run.

The worker service project

In Visual Studio or by using the .NET CLI you are able to create a new worker service project.

dotnet new worker -n MyWorkerServiceProject -o MyWorkerServiceProject

This project looks pretty much like a common .NET Core project, but all the web specific stuff is missing. The only two code files here are the Program.cs and a Worker.cs.

The Program.cs looks a little different compared to the other ASP.NET Core projects:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
        	.ConfigureServices((hostContext, services) =>
            {
                services.AddHostedService<Worker>();
            });
}

There is just an IHostBuilder created, but no IWebHostBuilder. There is also no Startup.cs created, which actually isn't needed in general. The Startup.cs should only be used to keep the Program.cs clean and simple. Actually, the DI container is configured in the Program.cs in the ConfigureServices method.

In a regular ASP.NET Core application, the line that registers the Worker in the DI container will actually also work in the Startup.cs.
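
As a small sketch, in a web application that registration would simply move into the ConfigureServices method of the Startup class:

public void ConfigureServices(IServiceCollection services)
{
    // Registers the Worker as a hosted background service,
    // exactly like in the Program.cs of the worker service template
    services.AddHostedService<Worker>();
}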

The worker is just a simple class that derives from BackgroundService:

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken);
        }
    }
}

The BackgroundService base class is still the well-known IHostedService that has existed for a while. It just has some base implementation in it to simplify the API. You would also be able to create a worker service by implementing IHostedService directly.
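
For comparison, a minimal sketch of a worker that implements IHostedService directly instead of deriving from BackgroundService could look like this (MinimalWorker is just a made-up name):

public class MinimalWorker : IHostedService
{
    private readonly ILogger<MinimalWorker> _logger;

    public MinimalWorker(ILogger<MinimalWorker> logger)
    {
        _logger = logger;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // Gets called once when the host starts
        _logger.LogInformation("MinimalWorker started");
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // Gets called once when the host shuts down
        _logger.LogInformation("MinimalWorker stopped");
        return Task.CompletedTask;
    }
}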

The demo Worker from the template just runs an endless loop and writes the current date and time to the logger every second.

What you can do with Worker Services

With this kind of service you are able to create services that do some stuff for you in the background, or you can simply create service applications that can run as a Windows service or as a service inside a Docker container.

Worker services either run once on startup or create an infinite loop to do stuff periodically. They run asynchronously in a separate thread and don't block the main application. With this in mind you are able to execute tasks that aren't really related to the application's domain logic:

  • Fetching data periodically
  • Sending mails periodically
  • Calculating data in the background
  • Startup initialization

In a microservice environment it would make sense to run one or more worker services in console applications inside docker containers. This way it is easy to maintain and deploy them separately from the main application and they can be scaled separately.

Let's create an example

In the next couple of posts I'm going to create an example of how to use worker services.

I'm going to write a weather station that provides a gRPC endpoint to fetch the weather data of a specific date. I'll also write a worker service that fetches the data using a gRPC client and prepares the data for another app that will display it. At the end we will have at least three applications:

  • The weather station: A gRPC service that provides an endpoint to fetch the weather data of a specific date.
  • The weather data loader: A worker service running a gRPC client that fetches the data every day and puts it into a database. A console application.
  • The weather stats app: Loads the data from the database and shows the current weather and a graph of all loaded weather data. A Blazor Server Side app.

I'm going to put those apps and the database into docker containers and put them together using docker-compose.

I'll simulate the days by changing to the next day every second, starting at 1/1/2019. I already have weather data from some weather stations in Washington State and will reuse this data.

The weather station will have a SQLite database inside its Docker container. The separate database in a fourth Docker container is for the worker and the web app to share the data. I'm not yet sure what database I want to use. If you have an idea, just drop me a comment.

I'm going to create a new repository on GitHub for this project and will add the link to the next posts.

Conclusion

I guess worker services will be most useful in microservice environments. But it might also be a good way to handle the aspects mentioned above in common ASP.NET Core applications. Feel free to try it out.

But what I also tried to show here is the possibility to use a different hosting model to run a different kind of (ASP.NET) Core application, which still uses all the useful features of the ASP.NET Core framework. The way Microsoft decoupled ASP.NET from the generic hosting model is awesome.

Golo Roden: Virtually united: when teams work well together remotely and/or on site

There is no blanket answer to the question of working on site or remotely. Teams work well when they pursue common goals of their own accord.

Code-Inside Blog: Enforce administrator mode for built dotnet exe applications

The problem

Let’s say you have a .exe application built with Visual Studio and the application always needs to be run from an administrator account. Windows Vista introduced the “User Account Control” (UAC), and such applications are marked with a special “shield” icon.


TL;DR-version:

To build such an .exe you just need to add an “application manifest” and request the needed permission like this:

<requestedExecutionLevel  level="requireAdministrator" uiAccess="false" />

Step by Step for .NET Framework apps

Create your WPF, WinForms or Console project and add an application manifest file.


The file itself has quite a bunch of comments in it and you just need to replace

<requestedExecutionLevel level="asInvoker" uiAccess="false" />

with

<requestedExecutionLevel  level="requireAdministrator" uiAccess="false" />

… and you are done.
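
For reference, a stripped-down application manifest that contains only the relevant part might look like this (a sketch; the generated file contains a lot more commented-out sections, and the assembly name is just a placeholder):

<?xml version="1.0" encoding="utf-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <assemblyIdentity version="1.0.0.0" name="MyApplication.app"/>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
        <!-- Request elevation: the shield icon appears and UAC prompts on start -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>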

Step by Step for .NET Core apps

The same approach works more or less for .NET Core 3 apps:

Add an “application manifest file”, change the requestedExecutionLevel and it should “work”.

Be aware: For some unknown reason the default name for the application manifest file will be “app1.manifest”. If you rename the file to “app.manifest”, make sure your .csproj is updated as well:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <ApplicationManifest>app.manifest</ApplicationManifest>
  </PropertyGroup>

</Project>

Hope this helps!

View the source code on GitHub.

Holger Schwichtenberg: BASTA! conference follow-up: videos and materials for download

The autumn BASTA! conference last week was particularly exciting in its 22nd year, because .NET Core 3.0, ASP.NET Core 3.0 and Entity Framework Core 3.0 were released on the eve of the main conference.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Blazor Client Side

In the last post we had a quick look into Blazor Server Side, which doesn't really differ on the hosting level. It is a regular ASP.NET Core application that will run on a web server. Blazor Client Side on the other hand definitely differs, because it doesn't need a web server; it runs completely in the browser.

Microsoft compiled the Mono runtime into a WebAssembly. With this, it is possible to execute .NET assemblies natively inside the WebAssembly in the browser. This doesn't need a web server. There is no HTTP traffic between the browser and a server part anymore, except when you are fetching data from a remote service.

Let's have a look at the HostBuilder

This time the Program.cs looks different compared to the default ASP.NET Core projects:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IWebAssemblyHostBuilder CreateHostBuilder(string[] args) =>
        BlazorWebAssemblyHost.CreateDefaultBuilder()
            .UseBlazorStartup<Startup>();
}

Here we create an IWebAssemblyHostBuilder instead of an IHostBuilder. Actually it is a completely different interface and doesn't derive from IHostBuilder at the time of writing. But it looks pretty similar. In this case a default configuration of the IWebAssemblyHostBuilder is also created and, similar to the ASP.NET Core projects, a Startup class is used to configure the application.

The Startup class is pretty empty but has the same structure as all the other ones. You are able to add services to the IoC container and to configure the application:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IComponentsApplicationBuilder app)
    {
        app.AddComponent<App>("app");
    }
}

Usually you won't configure a lot here, except the services. The only other thing you can really do here is to execute code on startup, for example to initialize some kind of database or whatever you need to do on startup.
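
Speaking of services: as a hedged sketch, registering an application service of your own (WeatherClient is just a hypothetical class) would look like in any other ASP.NET Core app, and a component could then consume it via the @inject directive:

public void ConfigureServices(IServiceCollection services)
{
    // WeatherClient is a hypothetical application service,
    // consumed in a component via: @inject WeatherClient Weather
    services.AddSingleton<WeatherClient>();
}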

The important line of code in the Configure method is the one where the root component is added to the application. Actually it is the App.cshtml in the root of the project. In Blazor Server Side the host page calls the root component; here it is configured in the Startup.

All the other UI stuff is pretty much equal in both versions of Blazor.

What you can do with Blazor Client Side

In general you can do the same things in both versions of Blazor. You can also share the same UI logic. Both versions are made to create single page applications with C# and Razor, without having to learn a JavaScript framework like React or Angular. It will be pretty easy for you to build single page applications if you know C# and Razor.

The client side version will live in the WebAssembly only and will work without a connection to a web server, if no remote service is needed. Usually, though, every single page application needs a remote service to fetch or store data.

Blazor Client Side will have a much faster UI, because it is all rendered natively on the client. All the C# and Razor code is running in the WebAssembly, while Blazor Server Side still needs to send the UI from the server to the client.

Conclusion

In this part you learned about a different kind of hosting in ASP.NET Core, and this will lead us back to the generic hosting approach of ASP.NET Core 3.0.

In the next post I will write about a different hosting model to run worker services and background services without the full web server stack.

Jürgen Gutsch: .NET Conf 2019

From September 23 to 25 the .NET Conf 2019, hosted by Microsoft, was running virtually on Twitch. Like last year, the third day was full of talks done by the community. And as last year, I also did a talk this year. I talked about the ASP.NET Core Health Checks and it went much better this time. There were no technical problems, and because of my own live stream on Twitch I'm a little bit more used to speaking to a screen and a camera.

The conference

This year's .NET Conf was full of .NET Core 3.0, which was launched during the first day. Also C# 8 and the latest DevOps, Azure, Xamarin and Visual Studio features were hot topics this year.

If you want to watch the talks on demand, there is a playlist on YouTube with all of the recordings, as well as a list on Channel 9. Since the conference was streamed via Twitch, the videos are also available there.

Some of the talks were pretty funny. While Dan Roth was talking about Blazor, Jeff Fritz interrupted the show and gave him his blazing Blazer.

Because of the time difference, I wasn't able to watch the entire live stream. There are so many recordings just for the first two days that I didn't get the chance to watch them all. However, I'm going to take some time to watch all the other awesome recordings.

My own talk

I was talking about the ASP.NET Core Health Checks, which is a cool and fascinating topic. I did a quick introduction to it and demoed the basic configuration and usage. After that I did a demo about a more advanced scenario with dependent subsystems running in Docker containers that needed to be checked. I also showed the health checks UI which can be used to display the health states on a nice user interface.

The slides and the code of my presentation are available on GitHub.

My talk was at 11AM in central European time, which was 2AM in the morning in Seattle for Jeff Fritz and Jon Galloway, who moderated the conference during that time. But it seems they had a lot of fun and enough caffeine.

I'm going to link the recording of my talk here in this post as soon as it is available.

The community day

The third day was full of thirty-minute presentations done by folks out of the community and some folks from Microsoft. There were a ton of cool presentations and a lot of fun while the community day was moderated from the Channel 9 studio.

I was happy to see the presentations of Maarten Balliauw, Oren Eini, Shawn Wildermuth, Ed Charbeneau, Steve Smith and a lot more...

Conclusion

This was a lot of fun, even if I was pretty excited and super nervous in the hours before I started the presentation. It is all about technology if you do a livestream and are not able to see the audience. More technology means more possible problems, but it all went well.

However, I would be happy to get the chance to do a talk like this next year.

Holger Schwichtenberg: Migration from .NET Framework to .NET Core with a PowerShell script instead of endless clicking

Unfortunately, there is no migration tool from Microsoft yet to convert WPF and Windows Forms projects to .NET Core. This article presents a PowerShell script that takes over some of the manual work of the migration.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Blazor Server Side

To have a look into the generic hosting models, we should also have a look into the different application models we have in ASP.NET Core. In this and the next post I'm going to write about Blazor, which is a new member of the ASP.NET Core family. To be more precise, Blazor is actually two members of the ASP.NET Core family. On the one hand we have Blazor Server Side, which actually is ASP.NET Core running on the server, and on the other hand we have Blazor Client Side, which looks like ASP.NET Core and runs in the browser inside a WebAssembly. Both frameworks share the same view framework, which is Razor Components. Both frameworks may share the same view logic and business logic. Both frameworks are single page application (SPA) frameworks; there is no visible page reload from the server while browsing the application. Both frameworks look pretty similar starting with the Program.cs.

Under the hood, both frameworks are hosted completely differently. While Blazor Client Side runs completely on the client and doesn't need a web server, Blazor Server Side runs on a web server and uses WebSockets and a generic JavaScript client to provide the same SPA behavior as Blazor Client Side.

Hosting and Startup

Within this post I'm trying to compare Blazor Server Side to the already known ASP.NET Core frameworks like MVC and Web API.

First let's create a new Blazor Server Side project using the .NET Core 3 Preview 7 SDK:

dotnet new blazorserverside -n BlazorServerSideDemo -o BlazorServerSideDemo
cd BlazorServerSideDemo
code .

The second and third lines change the current directory to the project directory and open it in Visual Studio Code, if it is installed.

The first thing I usually do is to have a short glimpse into the Program.cs, but in this case this class looks exactly the same as in the other projects. There is absolutely no difference:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

At first a default IHostBuilder is created, and on top of it an IWebHostBuilder is created to spin up a Kestrel web server and to host a default ASP.NET Core application. Nothing spectacular here.

The Startup.cs may be more special.

Actually, it looks like a common ASP.NET Core Startup class, except that different services are registered and different Middlewares are used:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        services.AddServerSideBlazor();
        services.AddSingleton<WeatherForecastService>();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();

        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapBlazorHub();
            endpoints.MapFallbackToPage("/_Host");
        });
    }
}

In the ConfigureServices method, Razor Pages is added to the IoC container. Razor Pages is used to provide the page that is hosting the Blazor application. In this case it is the _Host.cshtml in the Pages directory. Every single page application (SPA) has at least one almost static page which hosts the actual application that is running in the browser. React, Vue, Angular and so on have the same thing: an index.html that loads all the JavaScript files and hosts the JavaScript application. In the case of Blazor there is also a generic JavaScript running on the hosting page. This JavaScript connects to a SignalR WebSocket that is running on the server side.

In addition to Razor Pages, the services needed for Blazor Server Side are added to the IoC container. These services are needed by the Blazor Hub, which actually is the SignalR hub that provides the WebSocket endpoint.

The Configure method also looks similar to the other ASP.NET Core frameworks. The only differences are in the last lines, where the Blazor Hub and the fallback page get added. This fallback page actually is the hosting Razor Page mentioned before. Since the SPA supports deep links and creates URLs for the different views on the client, the application needs to route to a fallback page in case the user directly navigates to a client-side route that doesn't exist on the server. So the server will just provide the hosting page, and the client will load the right views depending on the URL in the browser afterwards.

Blazor

The key feature of Blazor is the Razor-based components, which get interpreted by a runtime that understands C# and Razor and rendered on the client. In Blazor Client Side it is the Mono runtime running inside the WebAssembly, and in the Server Side version it is the .NET Core runtime running on the server. That means the Razor components get interpreted and rendered on the server. After that they get pushed to the client using SignalR and placed in the right place inside the hosting page using the generic JavaScript client which is connected to SignalR.

So we have a server side rendered single page application, without any visible roundtrip to the server.

The Razor components are also placed in the Pages folder, but have the file extension .razor, except for the App.razor which is directly in the project directory. These are the actual view components, which contain the logic of the application.

If you have a more detailed look into the components, you'll see some similarities to React or Angular, in case you know those frameworks. I mentioned the App.razor which is the root component. Angular and React also have this kind of root component. Inside the Shared directory there is a MainLayout.razor, which is the layout component. (This kind of component is also available in React and Angular.) All the other components in the Pages directory use this layout implicitly, because it is set as the default layout in the _Imports.razor. Those components also define a route that is used to navigate to the component. Reusable components without a specific route are placed inside the Shared directory.
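
As a sketch, such a routable component looks roughly like the Counter.razor that the template generates; the @page directive defines the client-side route and the @code block contains the component logic:

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount++;
    }
}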

Conclusion

Even though this is just a small introduction and overview of Blazor Server Side, I only wanted to quickly show the new ASP.NET Core 3.0 frameworks for creating web applications. This is the last kind of normal server application I want to show. In the next part, I'm going to show Blazor Client Side, which uses a completely different hosting model.

Blazor Server Side, by the way, is the new replacement for ASP.NET WebForms for creating stateful web applications using C#. WebForms won't be migrated to ASP.NET Core. It will be supported in the same way the full .NET Framework will be supported in the future, which means there will be no new versions and no new features. With this news in mind, it absolutely makes sense to have a more detailed look into Blazor Server Side.

Holger Schwichtenberg: Word automation in a scheduled task on Windows Server

This is how to solve the problems when starting Word automation objects in a background process.

Golo Roden: A plea for open and tolerant communication in IT corporate culture

Computer scientists are often considered technically competent but socially incompetent. However, this prejudice can be remedied with the right communication culture.

Code-Inside Blog: Check installed version for ASP.NET Core on Windows IIS with Powershell

The problem

Let’s say you have an ASP.NET Core application without the bundled ASP.NET Core runtime (e.g. to keep the download as small as possible) and you want to run your ASP.NET Core application on a Windows Server hosted by IIS.

General approach

The general approach is the following: Install the .NET Core hosting bundle and you are done.

Each .NET Core runtime (and there are quite a bunch of them) is backward compatible (at least the 2.X runtimes), so if you have installed 2.2.6, your app (created while using the .NET Core runtime 2.2.1) still runs.

Why check the minimum version?

Well… in theory the app itself (at least for .NET Core 2.X applications) may run under different runtime versions, but each version might fix something, and to keep things safe it is a good idea to enforce security updates.

Check for minimum requirement

I stumbled upon this Stackoverflow question/answer and enhanced the script, because that version only tells you “ASP.NET Core seems to be installed”. My enhanced version searches for a minimum required version and, if it is not installed, exits the script.

$DotNetCoreMinimumRuntimeVersion = [System.Version]::Parse("2.2.5.0")

$DotNETCoreUpdatesPath = "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Updates\.NET Core"
$DotNetCoreItems = Get-Item -ErrorAction Stop -Path $DotNETCoreUpdatesPath
$MinimumDotNetCoreRuntimeInstalled = $False

$DotNetCoreItems.GetSubKeyNames() | Where { $_ -Match "Microsoft .NET Core.*Windows Server Hosting" } | ForEach-Object {

                $registryKeyPath = Get-Item -Path "$DotNETCoreUpdatesPath\$_"

                $dotNetCoreRuntimeVersion = $registryKeyPath.GetValue("PackageVersion")

                $dotNetCoreRuntimeVersionCompare = [System.Version]::Parse($dotNetCoreRuntimeVersion)

                if($dotNetCoreRuntimeVersionCompare -ge $DotNetCoreMinimumRuntimeVersion) {
                                Write-Host "The host has installed the following .NET Core Runtime: $_ (MinimumVersion requirement: $DotNetCoreMinimumRuntimeVersion)"
                                $MinimumDotNetCoreRuntimeInstalled = $True
                }
}

if ($MinimumDotNetCoreRuntimeInstalled -eq $False) {
                Write-host ".NET Core Runtime (MiniumVersion $DotNetCoreMinimumRuntimeVersion) is required." -foreground Red
                exit
}

The “most” interesting part is the first line, where we set the minimum required version.

If you have installed a version of the .NET Core runtime on Windows, this information will end up in the registry, under the path used in the script above.


Now we just need to compare the installed version with the required version, and we know if we are good to go.

Hope this helps!

Holger Schwichtenberg: Assembly metadata (AssemblyInfo.cs) in .NET Core

In .NET Core projects, the metadata is stored in the project file by default. An AssemblyInfo.cs like in classic .NET is still possible, though.

Golo Roden: New series: Götz & Golo

On September 3, 2019 the time will come: the new series "Götz & Golo" starts on this blog. Here is a short preview of what this series will be about and what the concept behind it will be.

Jürgen Gutsch: ASP.NET Core 3.0: Endpoint Routing

The last two posts were just a quick look into the Program.cs and the Startup.cs. This time I want to have a little deeper look into the new endpoint routing.

Wait!

Sometimes I have an idea about a specific topic to write about and start writing. While writing I remember that I maybe already wrote about it. Then I take a look into the blog archive and there it is:

Implement Middlewares using Endpoint Routing in ASP.NET Core 3.0

Maybe I get old now... ;-)

This is why I just link to the already existing post.

Anyways. The next two posts are a quick glimpse into Blazor Server Side and Blazor Client Side.

Why? Because I also want to focus on the different Hosting models and Blazor Client Side is using a different one.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Taking a quick look into the Startup.cs

In the last post, I took a quick look into the Program.cs of ASP.NET Core 3.0 and quickly explored the Generic Hosting Model. But the Startup class also has something new in it. We will see some small but important changes.

Just one thing I forgot to mention in the last post: ASP.NET Core 2.1 code in the Program.cs and the Startup.cs should just work in ASP.NET Core 3.0, if there is little or no customizing. The IWebHostBuilder is still there and can be used the 2.1 way, and the default 2.1 Startup.cs should also run in ASP.NET Core 3.0. You may only need to make some small changes there.

The next snippet is the Startup class of a newly created empty web project:

public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
    public void ConfigureServices(IServiceCollection services)
    {
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/", async context =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        });
    }
}

The empty web project is an ASP.NET Core project without any ASP.NET Core UI feature. This is why the ConfigureServices method is empty. There are no additional services added to the dependency injection container.

The new stuff is in the Configure method. The first lines look familiar. Depending on the hosting environment, the developer exception page will be shown.

app.UseRouting() is new. This is a middleware that enables the new endpoint routing. The new thing is that routing is decoupled from the specific ASP.NET feature. In the previous version every feature (MVC, Razor Pages, SignalR, etc.) had its own endpoint implementation. Now the endpoint and routing configuration can be done independently. The Middlewares that need to handle a specific endpoint are now mapped to a specific endpoint or route. So the Middlewares don't need to handle the routes anymore.

If you wrote a Middleware in the past which needs to work on a specific endpoint, you added the logic to check the endpoint inside the middleware or you used the MapWhen() extension method on the IApplicationBuilder to add the Middleware to a specific endpoint.

Now you create a new pipeline (using IApplicationBuilder) per endpoint and Map the Middleware to the specific new pipeline.

The MapGet() method above does this implicitly. It creates a new endpoint "/" and maps the delegate middleware to the new pipeline that was created internally.
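
A hedged sketch of doing this explicitly for a Middleware of your own (MyMiddleware is a hypothetical, terminal middleware class) could look like this:

app.UseEndpoints(endpoints =>
{
    // Create a new pipeline for this endpoint and add the Middleware to it
    var pipeline = endpoints.CreateApplicationBuilder()
        .UseMiddleware<MyMiddleware>()
        .Build();

    // Map the pipeline to a specific route
    endpoints.Map("/mymiddleware", pipeline);
});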

That was a simple snippet. Now let's have a look into the Startup.cs of a new full-blown web application using individual authentication, created by using this .NET CLI command:

dotnet new mvc --auth Individual

Overall this also looks pretty familiar if you already know the previous versions:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {

        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlite(
                Configuration.GetConnectionString("DefaultConnection")));
        services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true)
            .AddEntityFrameworkStores<ApplicationDbContext>();

        services.AddControllersWithViews();
        services.AddRazorPages();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseDatabaseErrorPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
            // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();

        app.UseRouting();

        app.UseAuthentication();
        app.UseAuthorization();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllerRoute(
                name: "default",
                pattern: "{controller=Home}/{action=Index}/{id?}");
            endpoints.MapRazorPages();
        });
    }
}

This is an MVC application, but did you see the lines where MVC is added? I'm sure you did. It is no longer called MVC, even if the MVC pattern is used, because that was a little bit confusing with Web API.

To add MVC you now need to add AddControllersWithViews(). If you want to add Web API only, you just need to add AddControllers(). I think this is a small but useful change. This way you can be more specific when adding ASP.NET Core features. In this case Razor Pages were also added to the project. It is absolutely no problem to mix ASP.NET Core features.

AddMvc() still exists and still works in ASP.NET Core.

The Configure method doesn't really change, except for the new endpoint routing part. There are two endpoints configured: one for controller routes (which covers Web API and MVC) and one for Razor Pages.

Conclusion

This is also just a quick look into the Startup.cs with just some small but useful changes.

In the next post I'm going to take a little more detailed look into the new endpoint routing. While working on the GraphQL endpoint for ASP.NET Core, I learned a lot about endpoint routing. This feature makes a lot of sense to me, even if it means rethinking some things when you build and provide a Middleware.

Golo Roden: Functional programming with objects

JavaScript has various methods for functional programming, for example map, reduce and filter. However, they are only available for arrays, not for objects. With ECMAScript 2019 this can be changed in an elegant way.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Generic Hosting Environment

In ASP.NET Core 3.0 the hosting environment changes to become more generic. Hosting is no longer bound to Kestrel and no longer bound to ASP.NET Core. This means you are able to create a host that doesn't start the Kestrel web server and doesn't need to use the ASP.NET Core framework.

This is a small introduction post about the Generic Hosting Environment in ASP.NET Core 3.0. During the next posts I'm going to write more about it and what you can do with it in combination with some more ASP.NET Core 3.0 features.

In the next posts we will see a lot more details about why this makes sense. For now, in short: there are different hosting models. One is the already known web hosting. Another model is running a worker service without a web server and without ASP.NET Core. Blazor also uses a different hosting model inside the WebAssembly.

What does it look like in ASP.NET Core 3.0?

First let's recap how it looks in previous versions. This is an ASP.NET Core 2.2 Program.cs that creates an IWebHostBuilder to start up Kestrel and to bootstrap ASP.NET Core using the Startup class:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

The next snippet shows the Program.cs of a new ASP.NET Core 3.0 web project:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

Now an IHostBuilder is created and configured first. When the default host builder is created, an IWebHostBuilder is created to use the configured Startup class.

The typical .NET Core App features like configuration, logging and dependency injection are configured on the level of the IHostBuilder. All the ASP.NET specific features like authentication, Middlewares, ActionFilters, Formatters, etc. are configured on the level of the IWebHostBuilder.
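
To make this split visible, a hedged sketch of configuring some of the generic features on the IHostBuilder level before the web host is configured could look like this ("mysettings.json" is just a placeholder file name):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // generic .NET Core features are configured on the IHostBuilder level
        .ConfigureAppConfiguration((context, config) =>
        {
            config.AddJsonFile("mysettings.json", optional: true);
        })
        .ConfigureLogging(logging =>
        {
            logging.AddConsole();
        })
        // the web-specific features are configured on the IWebHostBuilder level
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });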

Conclusion

This makes the Hosting environment a lot more generic and flexible.

I'm going to write about specific scenarios during the next posts about the new ASP.NET Core 3.0 features. But first I will have a look into Startup.cs to see what is new in ASP.NET Core 3.0.

Marco Scheel: Mange Microsoft Teams membership with Azure AD Access Review

This post will introduce you to the Azure AD Access Review feature. With the introduction of modern collaboration through Microsoft 365, with Microsoft Teams being the main tool, it is important to manage who is a member of the underlying Office 365 Group (Azure AD Group).

<DE>For greater reach, this post is published in English today. It is about introducing Access Reviews (Azure AD) in combination with Microsoft Teams. Managing the membership of a team is supported by this feature, and it keeps the owners at the center. If there is strong interest in a completely German version, please let me know.</DE>

Microsoft has great resources to get started on a technical level. The feature enables a set of people to review another set of people. Azure AD is leveraging this capability (all under the bigger umbrella called Identity Governance) on two assets: Azure AD Groups and Azure AD Apps. Microsoft Teams as a hub for collaboration is build on top of Office 365 Groups and so we will have a closer look at the Access Review part for Azure AD Groups.

Each Office 365 Group (each Team) is build from a set of owners and members. With the open nature of Office 365, members can be employees, contractors, or people outside of the organization.


In our modern collaboration (Teams, SharePoint, …) implementation we strongly recommend leveraging the full self-service group creation that is already built into the system. With this setup everyone is able to create and manage/own a group. Permanent user education is needed for everyone to understand the concept behind modern groups. Many organizations also have a strong set of internal rules that forces a so-called information owner (which could be equal to the owner of a group) to review who has access to their data. Most organizations rely on people fulfilling their duties as demanded, but let's face it: owners are just human beings who need to do their “real” job. With the introduction of Azure AD Access Review we can support these owner duties and make the process documented and easy to execute.

AAD Access Review can do the following to support an up to date group membership:

  • Setup an Access Review for an Azure AD Group
  • Specify the duration (start date, recurrence, duration, …)
  • Specify who will do the review (owner, self, specific people, …)
  • Specify who will be reviewed (all members, guests, …)
  • Specify what will happen if the review is not executed (remove members, …)

Before we start we need to talk about licensing. It is obvious that M365 E5 is the best SKU to start with ;) but if you are not that lucky, you need at least an Azure AD P2 license. It is not a “very” common license, as it was only part of the EMS E5 SKU, but Microsoft introduced some really attractive license bundles a while ago. Many orgs with strong security requirements will at some point hit a license SKU that includes AAD P2. For your trusty lab tenants, start an EMS E5 trial to test these features today. To be precise, only the accounts reviewing (executing the Access Review) need the license; at least this is my understanding, and as always with licensing, ask your usual licensing people to get the definitive answer.

The setup of an Access Review (if not automated through the MS Graph beta API) is done in the Azure Portal in the Identity Governance blade of AAD. To create our first Access Review we need to onboard to this feature.


Please note we are looking at Access Review in the context of modern collaboration (groups created by Teams, SharePoint, Outlook, …). Access Review can be used to review any AAD group that you use to grant access to a specific resource or keep a list of trusted users for an infrastructure piece of tech in Azure. The following information might not always be valid for your scenario!

This is the first half of the screen we need to fill out for a new Access Review:



Review name: This is a really important piece! The review name will be the “only” visible clue for the reviewer once they get the email about the outstanding review. With self-service setup, and with the nature of how people name their groups, we need to ensure people understand what they are reviewing. We try to automate the creation of the reviews, so we put the review timing, the group name and the group's object ID in the review name. The ID helps during support: if you send out 4,000 Access Reviews and people ask why they got this email, they can provide you with the ID and things get easier. For example: 2019-Q1 GRP New Order (af01a33c-df0b-4a97-a7de-c6954bd569ef)

Frequency: Also very important! You have to understand that an Access Review is somewhat static. You could do a recurring review, but some information will be out of sync. For example, the group could be renamed, but the title will not be updated, and people might get confused by misleading information in the email that is sent out. If you choose to let the owner of a group do the review, the owners will be “copied” to the Access Review config and not updated for future reviews. Technically this could be fixed by Microsoft, but as of now we ran into problems in the context of modern collaboration.

image

Users: “Members of a group” is our choice for collaboration. The other option is “Assigned to an application” and not our focus. For a group we have the option to review guests only or to review everybody who is a member of the group. Based on organizational needs and information such as confidentiality we can make a decision. As a starting point it can be a good option to go with guests only, because guests are not very well controlled in most environments. An employee at least has a contract, so the general trust level should be higher.

Group: Select the group the review should apply to. The latest changes to the Access Review feature allow selecting multiple groups at once. From a collaboration perspective I would avoid this, because at the end of the creation process each group will have its own Access Review instance and the settings are no longer shared. Once again, from a collaboration point of view we need some kind of automation, because it is not feasible to create these reviews as a manual task for the foreseeable future.

Reviewers: The natural choice for an Office 365 Group (Team) is to go with the “Group owners” option, especially if we automate the process and don't have an extra database to look up who the information owner is. For static groups or highly confidential groups the option “Selected users” can make sense. An interesting option is also the last one, called “Members (self)”. This option will "force” each member to decide whether they are still part of the project, team or group. We at Glück & Kanja are currently thinking about doing this for some of our internal client teams. Most of our groups are public and accessible to most employees, but membership documents some kind of current involvement with the client represented by the group. This could also naturally reduce the number of teams that show up in your Microsoft Teams client app. As mentioned earlier, at the moment it seems that the option “Group owners” is resolved once the Access Review starts and the instance of the review is then fixed, so an owner change might not be reflected in future instances of recurring reviews. Hopefully this will be fixed by Microsoft.

Program: This is a logical grouping of Access Reviews. For example, we could add all collaboration-related reviews to one program and keep administration reviews, which follow a more static route, in a separate one.

image

More advanced settings are collapsed, but should definitely be reviewed.

Upon completion settings: Allows the review results to be applied automatically. I would suggest trying this setting, because it will not only document the review but also take the required action on the membership. If group owners are not aware of what these Access Review emails are, we are talking about a potential loss of access for members who were not reviewed, but in the end that is what we want. People need to take this part of identity governance seriously and take care of their data. Any change made by the system is documented (in the audit log of the group) and can be reversed manually. If the system is not executing the results of the review, someone must look up the results regularly and then make sure to remove the users based on the outcome. If you go for Access Review, I strongly recommend automatically applying the results (after your own internal tests).

Let's take a look at the created Access Review.

image


Azure Portal: This is the overview for the admin (non-recurring Access Review).

image


Email: As you can see, the prominent review name is what stands out to the user. The group name (also highlighted in red) is buried within the rest of the text.

image


Click on “Start Review” in the email: The user can now take action based on recommendations (missing in my lab tenant due to the inactivity of my lab users).

image

Take Review: Accept 6 users.

image

Review Summary: This is the summary once the owner has taken all actions.

image

Azure Portal: Audit log information for the group.

After the user completed the review, the system did not change the group right away. Even when the configuration says that actions should be applied automatically, the results are only applied at the end of the review period! Until then the owners can change their mind. Once the review period is over, the system will apply the needed changes.

I really love this feature in the context of modern collaboration. The process of keeping a current list of involved members in a team is a big benefit for productivity and security. The “need to know” principle is supported by a technical implementation “free of cost” (as mentioned, everyone should have AAD P2 through some SKU 😎).

Our GK O365 Lifecycle tool was extended to allow the creation of Access Reviews through the Microsoft Graph based on the group/team classification. Once customers read about or get a demo of this feature and own the license, we immediately start a POC implementation. If our tool is already in place, it is only a matter of some JSON configuration to be up and running.

Code-Inside Blog: SQL Server, Named Instances & the Windows Firewall

The problem

“Cannot connect to sql\instance. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)”

Let’s say we have a system with a running SQL Server (Express or Standard Edition - doesn’t matter) and want to connect to this database from another machine. The chances are high that you will see the above error message.

Be aware: You can customize more or less anything, so this blog post only covers a very “common” installation.

I struggled with this problem last week and learned that it is a pretty “old” issue. To enlighten my dear readers I made the following checklist:

Checklist:

  • Does the SQL Server allow remote connections?
  • Does the SQL Server allow your authentication scheme of choice (Windows or SQL Authentication)?
  • Check the “SQL Server Configuration Manager” if the needed TCP/IP protocol is enabled for your SQL Instance.
  • Check if the “SQL Server Browser”-Service is running
  • Check your Windows Firewall (see details below!)

Windows Firewall settings:

By default SQL Server uses TCP port 1433. Opening this port is the minimum requirement for a setup without any special needs - use this command:

netsh advfirewall firewall add rule name = SQLPort dir = in protocol = tcp action = allow localport = 1433 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

If you use named instances we need (at least) two additional ports. The first rule opens UDP port 1434:

netsh advfirewall firewall add rule name = SQLPortUDP dir = in protocol = udp action = allow localport = 1434 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

This UDP Port 1434 is used to query the real TCP port for the named instance.

Now the most important part: The SQL Server will use a (kind of) random dynamic port for the named instance. To avoid this behavior (which is really a killer for Firewall settings) you can set a fixed port in the SQL Server Configuration Manager.

SQL Server Configuration Manager -> Instance -> TCP/IP Protocol (make sure this is "enabled") -> *Details via double click* -> Under IPAll set a fixed port under "TCP Port", e.g. 1435

After this configuration, allow this port to communicate to the world with this command:

netsh advfirewall firewall add rule name = SQLPortInstance dir = in protocol = tcp action = allow localport = 1435 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

(Thanks Stackoverflow!)

Check the official Microsoft Docs for further information on this topic, but these commands helped me to connect to my SQL Server.

The “dynamic” port was my main problem - after some hours of Googling I found the answer on Stackoverflow and I could establish a connection to my SQL Server with the SQL Server Management Studio.

Hope this helps!

Kazim Bahar: Artificial Intelligence for .NET Applications

With the new ML.NET framework from Microsoft, existing .NET applications can be extended with...

Stefan Henneken: IEC 61131-3: Exception Handling with __TRY/__CATCH

When executing a program, there is always the possibility of an unexpected runtime error occurring. These occur when a program tries to perform an illegal operation. This kind of scenario can be triggered by events such as division by 0 or a pointer which tries to reference an invalid memory address. We can significantly improve the way these exceptions are handled by using the keywords __TRY and __CATCH.

The list of possible causes for runtime errors is endless. What all these errors have in common is that they cause the program to crash. Ideally, there should at least be an error message with details of the runtime error:

Pic01

Because this leaves the program in an undefined state, runtime errors cause the system to halt. This is indicated by the yellow TwinCAT icon:

Pic02

For an operational system, an uncontrolled stop is not always the optimal response. In addition, the error message does not provide enough information about where in the program the error occurred. This makes improving the software a tricky task.

To help track down errors more quickly, you can add check functions to your program.

Pic03 

Check functions are called whenever the relevant operation is executed. The best known is probably CheckBounds(). Each time an array element is accessed, this function is implicitly called beforehand. The parameters passed to this function are the array bounds and the index of the element being accessed. This function can be configured to automatically correct attempts to access elements which are out of bounds. This approach does, however, have some disadvantages.

  1. CheckBounds() is not able to determine which array is being accessed, so error correction has to be the same for all arrays.
  2. Because CheckBounds() is called whenever an array element is accessed, it can significantly slow down program execution.

It’s a similar story with other check functions.

It is not unusual for check functions to be used during development only. Breakpoints are then set within the check functions, which halt the program as soon as a faulty operation is executed. The call stack can then be used to determine where in the program the error occurred.
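To make the CheckBounds() mechanism a little more concrete, here is a minimal sketch of a clamping implementation; the exact template generated by the development environment may differ, so treat the details as an assumption:

FUNCTION CheckBounds : DINT
VAR_INPUT
  index : DINT; // index that is about to be used
  lower : DINT; // lower array bound
  upper : DINT; // upper array bound
END_VAR

// Clamp the index to the array bounds. The same correction applies to every array,
// because CheckBounds() cannot tell which array is being accessed.
IF index < lower THEN
  CheckBounds := lower;
ELSIF index > upper THEN
  CheckBounds := upper;
ELSE
  CheckBounds := index;
END_IF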

The ‘try/catch’ statement

Runtime errors in general are also known as exceptions. IEC 61131-3 includes __TRY, __CATCH and __ENDTRY statements for detecting and handling these exceptions:

__TRY
  // statements
__CATCH (exception type)
  // statements
__ENDTRY
// statements

The TRY block (the statements between __TRY and __CATCH) contains the code with the potential to throw up an exception. Assuming that no exception occurs, all of the statements in the TRY block will be executed as normal. The program will then continue from the line immediately following the __ENDTRY statement. If, however, one of the statements within the TRY block causes an exception, the program will jump straight to the CATCH block (the statements between __CATCH and __ENDTRY). All subsequent statements within the TRY block will be skipped.

The CATCH block is only executed if an exception occurs; it contains the error handling code. After processing the CATCH block, the program continues from the statement immediately following __ENDTRY.

The __CATCH statement takes the form of the keyword __CATCH followed, in brackets, by a variable of type __SYSTEM.ExceptionCode. The __SYSTEM.ExceptionCode data type contains a list of all possible exceptions. If an exception occurs, causing the CATCH block to be called, this variable can be used to query the cause of the exception.

The following example divides two elements of an array by each other. The array is passed to the function using a pointer. If the return value is negative, an error has occurred. The negative return value provides additional information on the cause of the exception:

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR
 
__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__ENDTRY
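A short usage sketch may help to show how a caller evaluates the negative return values of the F_Calc function from the listing above; the program name and the test values are made up for this example:

PROGRAM MAIN
VAR
  aValues  : ARRAY [0..9] OF LREAL;
  lrResult : LREAL;
END_VAR

aValues[0] := 10.0;
aValues[1] := 0.0;

// Element 1 is zero; depending on the runtime this raises a divide-by-zero
// exception and F_Calc is expected to return -2.
lrResult := F_Calc(pData := ADR(aValues), nElementA := 0, nElementB := 1);
IF lrResult < 0 THEN
  // react to the error code here instead of letting the PLC program stop
END_IF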

The ‘finally’ statement

The optional __FINALLY statement can be used to define a block of code that will always be called whether or not an exception has occurred. There’s only one condition: the program must step into the TRY block.

We’re going to extend our example so that a value of one is added to the result of the calculation. We’re going to do this whether or not an error has occurred.

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR
 
__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__FINALLY
  F_Calc := F_Calc + 1;
__ENDTRY

Sample 1 (TwinCAT 3.1.4024 / 32 Bit) on GitHub

The statement in the FINALLY block (F_Calc := F_Calc + 1;) will always be executed whether or not an exception has occurred.

If no exception occurs within the TRY block, the FINALLY block will be called straight after the TRY block.

If an exception does occur, the CATCH block will be executed first, followed by the FINALLY block. Only then will the program exit the function.

__FINALLY therefore enables you to perform various operations irrespective of whether or not an exception has occurred. This generally involves releasing resources, for example closing a file or dropping a network connection.
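As an illustration of this pattern, here is a minimal sketch in which a simple "resource" flag is always released in the FINALLY block; the function name and the bInUse flag are invented for this example and stand in for a real resource such as a file handle:

FUNCTION F_ReadElement : LREAL
VAR_INPUT
  pData  : POINTER TO ARRAY [0..9] OF LREAL;
  nIndex : INT;
END_VAR
VAR_IN_OUT
  bInUse : BOOL; // stand-in for a resource owned by the caller
END_VAR
VAR
  exc : __SYSTEM.ExceptionCode;
END_VAR

bInUse := TRUE;            // 'acquire' the resource
__TRY
  F_ReadElement := pData^[nIndex];
__CATCH (exc)
  F_ReadElement := -1;     // report the error to the caller
__FINALLY
  bInUse := FALSE;         // always released, with or without an exception
__ENDTRY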

Extra care should be taken in implementing the CATCH and FINALLY blocks. If an exception occurs within these blocks, it will give rise to an unexpected runtime error, resulting in an immediate uncontrolled program stop.

The sample program runs under 32-bit TwinCAT 3.1.4024 or higher. 64-bit systems are not currently supported.

Stefan Henneken: IEC 61131-3: Exception Handling with __TRY/__CATCH

When a PLC program is executed, unexpected runtime errors can occur. They arise as soon as the PLC program tries to perform an illegal operation. Such scenarios can be triggered, for example, by a division by 0 or by a pointer that references an invalid memory area. With the keywords __TRY and __CATCH, these exceptions can be handled considerably better than before.

The list of possible causes of runtime errors could be extended endlessly. What all these errors have in common is that they cause the program to crash. At best, a message points out the runtime error:

Pic01

Because the PLC program is subsequently in an undefined state, the system is stopped. This can be recognized by the yellow TwinCAT icon in the Windows taskbar:

Pic02

For systems in operation, an uncontrolled stop is not always the optimal reaction. In addition, the message gives only insufficient information about where exactly in the PLC program the error occurred. This makes optimizing the software difficult.

To track down errors more quickly, check functions can be added to the PLC program.

Pic03

Check functions are called every time the corresponding operation is executed. The best known is probably the function CheckBounds(). Each time an array element is accessed, this function is implicitly called beforehand. As parameters, the function receives the array bounds and the index of the element to be accessed. The function can be adapted so that accesses outside the array bounds are corrected. This approach, however, has some disadvantages:

  1. CheckBounds() cannot determine which array is being accessed, so only the same error correction can be implemented for all arrays.
  2. Because the check function is called on every array access, the runtime of the program can deteriorate considerably.

It is a similar story with the other check functions.

Not infrequently, the check functions are only used during the development phase. Breakpoints are activated within these functions, which halt the PLC program as soon as a faulty operation is executed. The call stack can then be used to determine the corresponding location in the PLC program.

The ‘try/catch’ statement

Runtime errors are generally referred to as exceptions. For detecting and handling exceptions, IEC 61131-3 provides the statements __TRY, __CATCH and __ENDTRY:

__TRY
  // statements
__CATCH (exception type)
  // statements
__ENDTRY
// statements

The TRY block (the statements between __TRY and __CATCH) contains the statements that can potentially cause an exception. If no exception occurs, all statements in the TRY block are executed and the PLC program then continues its work after __ENDTRY. However, if one of the statements within the TRY block causes an exception, the program flow continues immediately in the CATCH block (the statements between __CATCH and __ENDTRY). All remaining statements within the TRY block are skipped.

The CATCH block is only executed in the event of an exception and contains the desired error handling. After the CATCH block has been processed, the PLC program continues with the statements after __ENDTRY.

After the __CATCH statement, a variable of type __SYSTEM.ExceptionCode is specified in round brackets. The data type __SYSTEM.ExceptionCode contains a list of all possible exceptions. If the CATCH block is invoked by an exception, this variable can be used to query the cause of the exception.

In the following example, two elements of an array are divided by each other. The array is passed to the function by a pointer. If the return value of the function is negative, an error occurred during execution. The negative return value provides more detailed information about the cause of the exception:

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR

__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__ENDTRY

The ‘finally’ statement

With __FINALLY, a code block can optionally be defined that is always called, regardless of whether an exception occurred or not. There is only one condition: the PLC program must at least enter the TRY block.

The example is to be extended so that the result of the calculation is additionally increased by one. This should happen regardless of whether an error occurred or not.

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR

__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__FINALLY
  F_Calc := F_Calc + 1;
__ENDTRY

Sample 1 (TwinCAT 3.1.4024 / 32 Bit) on GitHub

The statement in the FINALLY block (F_Calc := F_Calc + 1;) is always executed, regardless of whether an exception is raised or not.

If no exception is raised in the TRY block, the FINALLY block is called directly after the TRY block.

If an exception occurs, the CATCH block is executed first, followed by the FINALLY block. Only then is the function exited.

__FINALLY thus makes it possible to perform various operations regardless of whether an exception occurred or not. As a rule, this involves releasing resources, such as closing a file or terminating a network connection.

The implementation of the CATCH and FINALLY blocks should be done with particular care. If an exception occurs in one of these code blocks, it triggers an unexpected runtime error, with the result that the PLC program is stopped immediately.

At this point I would also like to point to the blog of Matthias Gehring. One of his posts (https://www.codesys-blog.com/tipps/exceptionhandling-in-iec-applikationen-mit-codesys) also covers the topic of exception handling.

The sample program runs on 32-bit systems with TwinCAT 3.1.4024 or higher. 64-bit systems are not yet supported.

Stefan Henneken: IEC 61131-3: Parameter transfer via FB_init

Depending on the task, it may be necessary for function blocks to require parameters that are only used once for initialization tasks. One possible way to pass them elegantly is to use the FB_init() method.

Before TwinCAT 3, initialisation parameters were very often transferred via input variables.

(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
  nDatabits  : BYTE(7..8);
  eParity    : E_Parity;
  nStopbits  : BYTE(1..2);
END_VAR

This had the disadvantage that the function blocks became unnecessarily large in the graphical display modes. It was also not possible to prevent the parameters from being changed at runtime.

The FB_init() method is very helpful here. This method is implicitly executed once before the PLC task is started and can be used to perform initialization tasks.

The dialog for adding methods offers a ready-made template for this purpose.

Pic01

The method contains two input variables that provide information about the conditions under which the method is executed. These variables must not be deleted or changed. However, FB_init() can be supplemented with further input variables.
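For reference, a minimal sketch of the template before any custom parameters are added (matching the two standard input variables used in the declarations later in this post):

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
END_VAR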

Example

An example is a block for communication via a serial interface (FB_SerialCommunication). This block should also initialize the serial interface with the necessary parameters. For this reason, three variables are added to FB_init():

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);        
END_VAR

The serial interface is not initialized directly in FB_init(). Therefore, the parameters must be copied into variables located in the function block.

FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
  nInternalDatabits    : BYTE(7..8);
  eInternalParity      : E_Parity;
  nInternalStopbits    : BYTE(1..2);
END_VAR

During initialization, the values from FB_init() are copied into these three variables.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR
 
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

If an instance of FB_SerialCommunication is created, these three additional parameters must also be specified. The values are specified directly after the name of the function block in round brackets:

fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
                                               eParity := E_Parity.None,
                                               nStopbits := 1);

Even before the PLC task starts, the FB_init() method is implicitly called, so that the internal variables of the function block receive the desired values.

Pic02

With the start of the PLC task and the call of the instance of FB_SerialCommunication, the serial interface can now be initialized.

It is always necessary to specify all parameters. A declaration without a complete list of the parameters is not allowed and generates an error message when compiling:

Pic03
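To illustrate, a declaration like the following (a made-up, deliberately incomplete example) would trigger that compile error, because eParity and nStopbits are missing:

// Error: not all FB_init() parameters are specified
fbSerialCommunication : FB_SerialCommunication(nDatabits := 8);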

Arrays

If FB_init() is used for an array of function blocks, the complete parameter list must be specified for each element (within square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
                 (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                 (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];

If all elements are to have the same initialization values, it is sufficient if the parameters exist once (without square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
                                                             eParity := E_Parity.None,
                                                             nStopbits := 1);

Multidimensional arrays are also possible. All initialization values must also be specified here:

aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
                      (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
                      (nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];

Inheritance

If inheritance is used, the method FB_init() is always inherited. FB_SerialCommunicationRS232 is used here as an example:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication

If an instance of FB_SerialCommunicationRS232 is created, the parameters of FB_init(), which were inherited from FB_SerialCommunication, must also be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1);

It is also possible to override FB_init(). In this case, the same input variables must exist in the same order and be of the same data type as in the base FB (FB_SerialCommunication). However, further input variables can be added so that the derived function block (FB_SerialCommunicationRS232) receives additional parameters:

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
  nBaudrate    : UDINT; 
END_VAR
 
THIS^.nInternalBaudrate := nBaudrate;

If an instance of FB_SerialCommunicationRS232 is created, all parameters, including those of FB_SerialCommunication, must be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1,
                                                         nBaudRate := 19200);

In the FB_init() method of FB_SerialCommunicationRS232, only the new parameter (nBaudrate) needs to be copied. Because FB_SerialCommunicationRS232 inherits from FB_SerialCommunication, FB_init() of FB_SerialCommunication is also executed implicitly before the PLC task is started. Both FB_init() methods, that of FB_SerialCommunication and that of FB_SerialCommunicationRS232, are always called implicitly. With inheritance, FB_init() is always called from ‘bottom’ to ‘top’: first FB_init() of FB_SerialCommunication and then FB_init() of FB_SerialCommunicationRS232.
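The snippet above assigns nBaudrate to a local variable of the derived block; a minimal sketch of that declaration (the variable name simply follows the naming pattern used in this post) could look like this:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication
VAR
  nInternalBaudrate : UDINT; // receives nBaudrate in FB_init() of the derived FB
END_VAR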

Forward parameters

The function block (FB_SerialCommunicationCluster) is used as an example, in which several instances of FB_SerialCommunication are declared:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
  fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  nInternalDatabits       : BYTE(7..8);
  eInternalParity         : E_Parity;
  nInternalStopbits       : BYTE(1..2); 
END_VAR

FB_SerialCommunicationCluster also receives the method FB_init() with the necessary input variables so that the parameters of the instances can be set externally.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR
 
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

However, there are a few things to consider here. The call sequence of FB_init() is not clearly defined in this case. In my test environment the calls are made from ‘inside’ to ‘outside’: first fbSerialCommunication01.FB_init() and fbSerialCommunication02.FB_init() are called, then fbSerialCommunicationCluster.FB_init(). It is therefore not possible to pass the parameters from ‘outside’ to ‘inside’, and the parameters do not reach the two inner instances of FB_SerialCommunication.

The sequence of the calls changes as soon as FB_SerialCommunicationCluster and FB_SerialCommunication are derived from the same base FB. In that case FB_init() is called from ‘outside’ to ‘inside’. This approach cannot always be used, for two reasons:

  1. If FB_SerialCommunication is located in a library, its inheritance cannot simply be changed.
  2. The call sequence of FB_init() for nested function blocks is not formally defined, so it cannot be ruled out that it will change in future versions.

One way to solve the problem is to explicitly call FB_SerialCommunication.FB_init() from FB_SerialCommunicationCluster.FB_init().

fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);

All parameters, including bInitRetains and bInCopyCode, are passed on directly.

Attention: Calling FB_init() always initializes all local variables of the instance. This must be kept in mind as soon as FB_init() is called explicitly from the PLC task instead of implicitly before the PLC task starts.

Access via properties

By passing the parameters via FB_init(), they can neither be read from outside nor changed at runtime. The only exception would be an explicit call of FB_init() from the PLC task. However, this should generally be avoided, since all local variables of the instance are reinitialized in that case.

If, however, access should still be possible, appropriate properties can be created for the parameters:

Pic04

The setter and getter of the respective properties access the corresponding local variables in the function block (nInternalDatabits, eInternalParity and nInternalStopbits). Thus, the parameters can be specified in the declaration as well as at runtime.
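As a minimal sketch (the property declaration and the two accessor bodies are assumed here, following the property names used in the declarations below), the Databits property could look like this:

PROPERTY PUBLIC Databits : BYTE(7..8)

// Get accessor: expose the internal value
Databits := nInternalDatabits;

// Set accessor: allow the parameter to be changed at runtime
nInternalDatabits := Databits;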

By removing the setter, you can prevent the parameters from being changed at runtime. If the setter is available, FB_init() can be omitted. Properties can also be initialized directly when declaring an instance.

fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
                                                   Parity := E_Parity.Odd,
                                                   Stopbits := 1);

The parameters of FB_init() and the properties can also be specified simultaneously:

fbSerialCommunication  : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
                                               (Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);

In this case, the initialization values of the properties take priority. Passing parameters both via FB_init() and via properties has the disadvantage that the declaration of the function block instance becomes unnecessarily long, and implementing both does not seem necessary to me either. If all parameters can also be written via properties, the initialization via FB_init() can be omitted. Conclusion: if parameters must not be changeable at runtime, the use of FB_init() should be considered. If write access is acceptable, properties are a good alternative.

Sample 1 (TwinCAT 3.1.4022) on GitHub

David Tielke: #DWX2019 - Content of my sessions

That's it again, Developer Week 2019 in Nuremberg. After three conference days and of course the traditional workshop day on Thursday, we all arrived back home exhausted but happy. In addition to sessions on CoCo 2.0 and software quality, this year there were also two evening events from me, one of them together with my colleague Christian Giesswein. Now that my colleague Sebastian and I have finished the follow-up work, we are making the content of my sessions and of our joint workshop on Thursday available here.

Software quality


Composite Components 2.0

Since my notebook almost completely refused to work with the stylus during the session, unfortunately I cannot provide my usual drawings here. Instead, here are the repos with the sample implementations of Composite Components 1.0 & 2.0 on GitHub:


Workshop: Architecture 2.0



Here are also the sample projects developed for both versions of the architecture.
