Christian Dennig [MS]: Azure DevOps Terraform Provider

Not too long ago, the first version of the Azure DevOps Terraform Provider was released. In this article, I will show you – with several examples – which features are currently supported for build pipelines and how to use the provider, also in conjunction with Azure. For many people working in the "Infrastructure as Code" space, the provider is the last "building block" needed to create environments (including Git repos, service connections, build + release pipelines etc.) completely automatically.

The provider was released in June 2020 in version 0.0.1, but to be honest: the feature set is quite rich already at this early stage.

The features I would like to discuss with the help of examples are as follows:

  • Creating a DevOps project including a hosted Git repo
  • Creating a build pipeline
  • Using variables and variable groups
  • Creating an Azure service connection and using variables/secrets from an Azure KeyVault

Example 1: Basic Usage

The Azure DevOps provider can be integrated in a script like any other Terraform provider. All that’s required is the URL to the DevOps organisation and a Personal Access Token (PAT) with which the provider can authenticate itself against Azure DevOps.

The PAT can easily be created via the UI of Azure DevOps: User Settings --> Personal Access Token --> New Token. For the sake of simplicity, in this example I grant it "Full Access"… of course, this should be adapted for your own purposes.

Create a personal access token

The documentation of the Terraform Provider contains information about the permissions needed for the respective resource.

Defining relevant scopes

Once the access token has been created, the Azure DevOps provider can be referenced in the terraform script as follows:

provider "azuredevops" {
  version               = ">= 0.0.1"
  org_service_url       = var.orgurl
  personal_access_token = var.pat
}

The two variables orgurl and pat should be exposed as environment variables:

$ export TF_VAR_orgurl="https://dev.azure.com/<ORG_NAME>"
$ export TF_VAR_pat="<PAT_FROM_AZDEVOPS>"

So, this is basically all that is needed to work with Terraform and Azure DevOps. Let’s start by creating a new project and a git repository. Two resources are needed for this, azuredevops_project and azuredevops_git_repository:

resource "azuredevops_project" "project" {
  project_name       = "Terraform DevOps Project"
  description        = "Sample project to demonstrate AzDevOps <-> Terraform integration"
  visibility         = "private"
  version_control    = "Git"
  work_item_template = "Agile"
}

resource "azuredevops_git_repository" "repo" {
  project_id = azuredevops_project.project.id
  name       = "Sample Empty Git Repository"

  initialization {
    init_type = "Clean"
  }
}

Additionally, we also need an initial pipeline that will be triggered on a git push to master.
In a pipeline, you usually work with variables that come from different sources. These can be pipeline variables, values from a variable group or from external sources such as an Azure KeyVault. The first, simple build definition uses pipeline variables (mypipelinevar):

resource "azuredevops_build_definition" "build" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline"

  ci_trigger {
    use_yaml = true
  }

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline.yaml"
  }

  variable {
    name      = "mypipelinevar"
    value     = "Hello From Az DevOps Pipeline!"
    is_secret = false
  }
}

The corresponding pipeline definition looks as follows:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Pipeline is running!
    echo And here is the value of our pipeline variable
    echo $(mypipelinevar)
  displayName: 'Run a multi-line script'

The pipeline just executes some scripts – for demo purposes – and outputs the variable stored in the definition to the console.

Running the Terraform script creates an Azure DevOps project, a git repository, and a build definition.
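In case you want to apply the scripts yourself, the usual Terraform workflow applies (assuming the snippets above are saved in your current working directory):

$ terraform init
$ terraform plan -out azdevops.tfplan
$ terraform apply azdevops.tfplan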

Azure DevOps Project
Git Repository
Pipeline

As soon as the file azure-pipeline.yaml discussed above is pushed into the repo, the corresponding pipeline is triggered and the results can be found in the respective build step:

Running pipeline
Output of build pipeline
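By the way, getting the pipeline file into the repo is a plain git workflow – for example (assuming you have cloned the newly created repository locally):

$ git add azure-pipeline.yaml
$ git commit -m "Add build pipeline definition"
$ git push origin master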

Example 2: Using variable groups

Normally, variables are not directly stored in a pipeline definition, but rather put into Azure DevOps variable groups. This allows you to store individual variables centrally in Azure DevOps and then reference and use them in different pipelines.

Fortunately, variable groups can also be created using Terraform. For this purpose, the resource azuredevops_variable_group is used. In our script this looks like this:

resource "azuredevops_variable_group" "vars" {
  project_id   = azuredevops_project.project.id
  name         = "my-variable-group"
  allow_access = true

  variable {
    name  = "var1"
    value = "value1"
  }

  variable {
    name  = "var2"
    value = "value2"
  }
}

resource "azuredevops_build_definition" "buildwithgroup" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline with VarGroup"

  ci_trigger {
    use_yaml = true
  }

  variable_groups = [
    azuredevops_variable_group.vars.id
  ]

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline-with-vargroup.yaml"
  }
}

The first part of the Terraform script creates the variable group in Azure DevOps (name: my-variable-group) including two variables (var1 and var2). The second part – a build definition – references the variable group, so that the variables can be accessed in the corresponding pipeline file (azure-pipeline-with-vargroup.yaml).

It has the following content:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: my-variable-group

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Var1: $(var1)
    echo Var2: $(var2)
  displayName: 'Run a multi-line script'

If you run the Terraform script, the corresponding Azure DevOps resources will be created: a variable group and a pipeline.

Variable Group

If you push the build YAML file to the repo, the pipeline will be executed and you should see the values of the two variables in the output on the build console.

Output of the variables from the variable group

Example 3: Using an Azure KeyVault and Azure DevOps Service Connections

For security reasons, critical values are neither stored directly in a pipeline definition nor in Azure DevOps variable groups. You would normally use an external vault like Azure KeyVault. Fortunately, with Azure DevOps you have the possibility to access an existing Azure KeyVault directly and access secrets which are then made available as variables within your build pipeline.

Of course, Azure DevOps must be authenticated/authorized against Azure for this. Azure DevOps uses the concept of service connections for this purpose. Service connections are used to access e.g. Bitbucket, GitHub, Jira, Jenkins… or – as in our case – Azure. You define a user – for Azure, this is a service principal – which is used by DevOps pipelines to perform various tasks – in our example, fetching a secret from a KeyVault.

To demonstrate this scenario, various things must first be set up on Azure:

  • Creating an application / service principal in Azure Active Directory, which is used by Azure DevOps for authentication
  • Creating an Azure KeyVault (including a resource group)
  • Authorizing the service principal on the Azure KeyVault to be able to read secrets (no write access!)
  • Creating a secret that will be used in a variable group / pipeline

With the Azure Provider, Terraform offers the possibility to manage Azure services. We will be using it to create the resources mentioned above.

AAD Application + Service Principal

First of all, we need a service principal that can be used by Azure DevOps to authenticate against Azure. The corresponding Terraform script looks like this:

data "azurerm_client_config" "current" {
}

provider "azurerm" {
  version = "~> 2.6.0"
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

## Service Principal for DevOps

resource "azuread_application" "azdevopssp" {
  name = "azdevopsterraform"
}

resource "random_string" "password" {
  length  = 24
}

resource "azuread_service_principal" "azdevopssp" {
  application_id = azuread_application.azdevopssp.application_id
}

resource "azuread_service_principal_password" "azdevopssp" {
  service_principal_id = azuread_service_principal.azdevopssp.id
  value                = random_string.password.result
  end_date             = "2024-12-31T00:00:00Z"
}

resource "azurerm_role_assignment" "contributor" {
  principal_id         = azuread_service_principal.azdevopssp.id
  scope                = "/subscriptions/${data.azurerm_client_config.current.subscription_id}"
  role_definition_name = "Contributor"
}

With the script shown above, both an AAD application and a service principal are generated. Please note that the service principal is assigned the Contributor role – at subscription level, see the scope argument. This should be restricted accordingly in your own projects (e.g. to the respective resource group)!
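As a sketch – assuming the resource group created in the next section – a role assignment scoped to a single resource group instead of the whole subscription could look like this:

resource "azurerm_role_assignment" "contributor_rg" {
  principal_id         = azuread_service_principal.azdevopssp.id
  scope                = azurerm_resource_group.rg.id
  role_definition_name = "Contributor"
}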

Azure KeyVault

The KeyVault is created the same way as the previous resources. It is important to note that the user working against Azure is given full access to the secrets in the KeyVault. Further down in the script, permissions for the Azure DevOps service principal are also granted within the KeyVault – but in that case only read permissions! Last but not least, a corresponding secret called kvmysupersecretsecret is created, which we can use to test the integration.

resource "azurerm_resource_group" "rg" {
  name     = "myazdevops-rg"
  location = "westeurope"
}

resource "azurerm_key_vault" "keyvault" {
  name                        = "myazdevopskv"
  location                    = "westeurope"
  resource_group_name         = azurerm_resource_group.rg.name
  enabled_for_disk_encryption = true
  tenant_id                   = data.azurerm_client_config.current.tenant_id
  soft_delete_enabled         = true
  purge_protection_enabled    = false

  sku_name = "standard"

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    secret_permissions = [
      "backup",
      "get",
      "list",
      "purge",
      "recover",
      "restore",
      "set",
      "delete",
    ]
    certificate_permissions = [
    ]
    key_permissions = [
    ]
  }

}

## Grant DevOps SP permissions

resource "azurerm_key_vault_access_policy" "azdevopssp" {
  key_vault_id = azurerm_key_vault.keyvault.id

  tenant_id = data.azurerm_client_config.current.tenant_id
  object_id = azuread_service_principal.azdevopssp.object_id

  secret_permissions = [
    "get",
    "list",
  ]
}

## Create a secret

resource "azurerm_key_vault_secret" "mysecret" {
  key_vault_id = azurerm_key_vault.keyvault.id
  name         = "kvmysupersecretsecret"
  value        = "KeyVault for the Win!"
}

If you have followed the steps described above, the result in Azure is a newly created KeyVault containing one secret:

Azure KeyVault

Service Connection

Now we need the integration into Azure DevOps, because we finally want to access the newly created secret in a pipeline. Azure DevOps is "by nature" able to access a KeyVault and the secrets it contains. Without Terraform, however, you have to perform some manual steps to enable access to Azure. Fortunately, these can now be automated. The following resources create a service connection to Azure in Azure DevOps and grant our project access to it:

## Service Connection

resource "azuredevops_serviceendpoint_azurerm" "endpointazure" {
  project_id            = azuredevops_project.project.id
  service_endpoint_name = "AzureRMConnection"
  credentials {
    serviceprincipalid  = azuread_service_principal.azdevopssp.application_id
    serviceprincipalkey = random_string.password.result
  }
  azurerm_spn_tenantid      = data.azurerm_client_config.current.tenant_id
  azurerm_subscription_id   = data.azurerm_client_config.current.subscription_id
  azurerm_subscription_name = "<SUBSCRIPTION_NAME>"
}

## Grant permission to use service connection

resource "azuredevops_resource_authorization" "auth" {
  project_id  = azuredevops_project.project.id
  resource_id = azuredevops_serviceendpoint_azurerm.endpointazure.id
  authorized  = true 
}
Service Connection

Creation of an Azure DevOps variable group and pipeline definition

The last step necessary to use the KeyVault in a pipeline is to create a corresponding variable group and “link” the existing secret.

## Pipeline with access to kv secret

resource "azuredevops_variable_group" "kvintegratedvargroup" {
  project_id   = azuredevops_project.project.id
  name         = "kvintegratedvargroup"
  description  = "KeyVault integrated Variable Group"
  allow_access = true

  key_vault {
    name                = azurerm_key_vault.keyvault.name
    service_endpoint_id = azuredevops_serviceendpoint_azurerm.endpointazure.id
  }

  variable {
    name    = "kvmysupersecretsecret"
  }
}
Variable Group with KeyVault integration

Test Pipeline

All prerequisites are now in place, but we still need a pipeline with which we can test the scenario.

Script for the creation of the pipeline:

resource "azuredevops_build_definition" "buildwithkeyvault" {
  project_id = azuredevops_project.project.id
  name       = "Sample Build Pipeline with KeyVault Integration"

  ci_trigger {
    use_yaml = true
  }

  variable_groups = [
    azuredevops_variable_group.kvintegratedvargroup.id
  ]

  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repo.id
    branch_name = azuredevops_git_repository.repo.default_branch
    yml_path    = "azure-pipeline-with-keyvault.yaml"
  }
}

Pipeline definition (azure-pipeline-with-keyvault.yaml):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: kvintegratedvargroup

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo KeyVault secret value: $(kvmysupersecretsecret)
  displayName: 'Run a multi-line script'

If you have run the Terraform script and pushed the pipeline file into the repo, you will get the following result in the next build (the secret is not shown in the console for security reasons, of course!):

Output: KeyVault integrated variable group

Wrap-Up

Setting up new Azure DevOps projects was not always the easiest task, as sometimes manual steps were required. With the release of the first Terraform provider version for Azure DevOps, this has changed almost dramatically 🙂 You can now – as one of the last building blocks for automation in a dev project – create many things via Terraform in Azure DevOps. In the example shown here, access to an Azure KeyVault – including the creation of the corresponding service connection – could be automated. Admittedly, this covers only one scenario – frankly, one for a task that "annoyed" me every now and then, as most of it had to be set up manually before the Terraform provider existed. The provider can also manage branch policies, set up groups and group memberships etc. With this first release you are still "at the beginning of the journey", but in my opinion it is a very good start with which you can achieve a lot.

I am curious what will be supported next!

Sample files can be found here: https://gist.github.com/cdennig/4866a74b341a0079b5a59052fa735dbc

Golo Roden: Standard Solution or Custom Development?

In software development, you often face the choice between using an off-the-shelf standard solution and building something yourself. What is advisable?

Golo Roden: Introduction to React, Part 6: React State, Part 2

One of the most important concepts for applications is accepting and processing input. How does that work in React?

Code-Inside Blog: EWS, Exchange Online and OAuth with a Service Account

This week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.

But here is the full story:

Our goal

We wanted to access contact information via a web service from the organization, just like the traditional "Global Address List" in Exchange/Outlook. We knew that EWS was an option for on-prem Exchange, but what about Exchange Online?

The big problem: authentication is tricky. We wanted to use a "traditional" service account approach (think of username/password). Unfortunately, the "basic auth" way will be blocked in the near future because of security concerns (makes sense TBH). There is an alternative approach available, but at first it seemed not to work the way we wanted.

So… what now?

EWS is… old. Why?

The Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and on-prem Exchange. On the other hand, we could use the Microsoft Graph, but – at least currently – there is no single "contact" API available.

To mimic the GAL we would need to query List Users and List orgContacts, which would be OK, but "orgContacts" has a "flaw": "hidden" contacts ("msexchhidefromaddresslists") are returned from this API, and we thought this might be a no-go for our customers.

Another argument for using EWS was that we could support OnPrem and Online with one code base.

Docs from Microsoft

The good news is that EWS and the auth problem are more or less well documented here.

There are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.

Delegation:

Delegation means that we can write a desktop app and all actions are executed in the name of the signed-in user.

Application:

Application means that the app itself can perform some actions without any user involved.

EWS and the application way

At first we thought that we might need to use the “application” way.

The good news is that this was easy and worked. The bad news is that the application needs the EWS permission "full_access_as_app", which means our application can access all mailboxes of the tenant. This might be OK for certain apps, but it scared us.

Back to the delegation way:

EWS and the delegation way

The documentation from Microsoft is good, but our "service account" use case was not mentioned. In the example from Microsoft, a user needs to log in manually.

Solution / TL;DR

After some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:

  1. Follow the normal “delegate” steps from the Microsoft Docs

  2. Instead of this code, which will trigger the login UI:

...
// The permission scope required for EWS access
var ewsScopes = new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" };

// Make the interactive token request
var authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();
...

Use the “AcquireTokenByUsernamePassword” method:

...
var cred = new NetworkCredential("UserName", "Password");
var authResult = await pca.AcquireTokenByUsernamePassword(new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" }, cred.UserName, cred.SecurePassword).ExecuteAsync();
...

To make this work you need to enable the "Treat application as public client" option under "Authentication" > "Advanced settings" in your AAD application, because this uses the "Resource Owner Password Credentials" flow.

Now you should be able to get the AccessToken and do some EWS magic.
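For completeness, here is a minimal sketch of how the token can then be used with the EWS Managed API – the client setup follows the "delegate" docs mentioned above, and the app/tenant IDs are placeholders:

// Microsoft.Identity.Client (MSAL) + Microsoft.Exchange.WebServices.Data (EWS Managed API)
var pca = PublicClientApplicationBuilder
    .Create("<APP_ID>")
    .WithTenantId("<TENANT_ID>")
    .Build();

var ewsScopes = new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" };
var cred = new NetworkCredential("UserName", "Password");

// Resource Owner Password Credentials flow - no interactive login required
var authResult = await pca.AcquireTokenByUsernamePassword(ewsScopes, cred.UserName, cred.SecurePassword).ExecuteAsync();

// Use the access token with the EWS Managed API
var ewsClient = new ExchangeService();
ewsClient.Url = new Uri("https://outlook.office.com/EWS/Exchange.asmx");
ewsClient.Credentials = new OAuthCredentials(authResult.AccessToken);

// e.g. resolve a name against the address lists (GAL)
var resolved = ewsClient.ResolveName("Smith");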

I posted a shorter version on Stackoverflow.com

Hope this helps!

Daniel Schädler: App Deployment with Minikube / kubectl

Goal

In this post I want to learn how application deployments work in my environment. Furthermore, I want to deploy a .NET application to the cluster with kubectl.

I follow the steps described here and use a simple .NET Core application for the deployment.

Prerequisites

To accomplish this task, the following prerequisites are in place:

  • Minikube is installed
  • A .NET Core app is available that can be deployed to a container
  • Docker for Windows must be installed
  • Depending on the option you selected in the Docker installer, the Linux Subsystem for Windows may need to be installed as well

Procedure

To achieve the goal mentioned above, a Kubernetes deployment has to be created. The kubectl CLI, which in my case was installed to C:\Program Files\kubectl, is used to create a deployment.

  1. Start PowerShell as administrator
  2. Then start Minikube if it is not running yet.

A successful start looks like this:

Successful Minikube start in PowerShell

If errors occur during startup, you should first try to stop everything with minikube stop and then start it again.

The status can then be checked with:

minikube status

You will then get the following message:

Minikube status – PowerShell
  3. Now the commands
kubectl version
kubectl get nodes

are executed.

That looks like this:

kubectl version and nodes in PowerShell

You can see that the client and the server are available. You can also see that there is one available node. For application deployments, Kubernetes uses the available resources provided by the nodes.

  4. Now comes the exciting part, locally in PowerShell. Here the .NET Core application to be deployed must be specified. For this, I created a sample application following the procedure described here (a sketch of such a Dockerfile is shown below).
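A minimal sketch of such a Dockerfile (multi-stage build; the assembly name app-demo.dll is just an assumption and has to match your project):

# build stage
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# runtime stage
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "app-demo.dll"]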

The Docker image was built with the command:

docker build -t app-demo -f Dockerfile .

Afterwards, you can check whether the image is present in the local registry with:

docker images
app-demo in the local registry – PowerShell
  5. Then I created a deployment with kubectl that uses the image from the local Docker registry.
kubectl create deployment app-demo --image=app-demo:latest

kubectl then tells me that the deployment has been created successfully.

Deployment has been created – PowerShell
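One pitfall with locally built images: the image has to be known to the Docker daemon that the cluster uses, and with the :latest tag Kubernetes tries to pull the image from a registry by default. A common workaround is to build the image directly against Minikube's Docker daemon and to set the deployment's imagePullPolicy to Never or IfNotPresent – as a sketch (the exact docker-env invocation may differ depending on your Minikube version):

# point the local Docker CLI at Minikube's Docker daemon (PowerShell)
minikube docker-env | Invoke-Expression

# rebuild the image so it is available inside the Minikube VM
docker build -t app-demo -f Dockerfile .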

Conclusion

The whole thing looks simple, but there are a few pitfalls – for example, when the .NET Core image cannot be downloaded from a central registry but has to be hosted yourself, for security reasons or because of policy requirements. If you liked the article, I would be happy about a like and I am open to feedback.

Holger Schwichtenberg: Book on Blazor WebAssembly and Blazor Server

The Dotnet-Doktor's current book offers an introduction to Microsoft's new web framework.

Jürgen Gutsch: Getting the .editorconfig working with the .NET Framework and MSBuild

I demonstrated the results of my last post about the .editorconfig to the team last week. They were quite happy about the fact that the build fails on a code style error, but there was one question I couldn't really answer:

Does this also work for .NET Framework?

It should, because it is Roslyn that analyzes the code, not the framework.

To try it out, I created three different class libraries that have the same class file linked into them, with the same code style errors:

    using System;

namespace ClassLibraryNetFramework
{
    public class EditorConfigTests
    {
    public int MyProperty { get; } = 1;
    public EditorConfigTests() { 
    if(this.MyProperty == 2){
        Console.WriteLine("Hallo Welt");
        }        }
    }
}

This code file has at least eleven code style errors in it:

I created a .NET Standard library, a .NET Core library, and a .NET Framework library in VS2019 this time. The solution in VS2019 now looks like this:

I also added the MyGet Roslyn NuGet Feed to the NuGet sources and referenced the code style analyzers:

This is the URL and the package name for you to copy:

  • https://dotnet.myget.org/F/roslyn/api/v3/index.json
  • Microsoft.CodeAnalysis.CSharp.CodeStyle Version: 3.8.0-1.20330.5
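In the project file this roughly translates to a PackageReference like the following (a sketch using the version listed above):

<ItemGroup>
  <!-- code style analyzers from the Roslyn MyGet feed -->
  <PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeStyle" Version="3.8.0-1.20330.5" PrivateAssets="all" />
</ItemGroup>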

I also set the global.json to the latest preview of the .NET 5 SDK to be sure to use the latest tools:

{
  "sdk": {
    "version": "5.0.100-preview.6.20318.15"
  }
}

It didn't really work - My fault in the last blog post!

I saw some code style errors in VS2019, but not all eleven errors I expected. I tried a build and the build didn't fail. Because I knew it had worked the last time I tried it using the dotnet CLI, I did the same here: I ran dotnet build and dotnet msbuild, but the build didn't fail.

This is exactly what you don't need as a software developer: doing exactly the same thing twice – one time it works, the other time it fails, and you have no idea why.

I tried a lot of things and compared project files, solution files, and .editorconfig files. Actually, I compared it with the Weather Stats application I used in the last post. In the end, I found one line in the PropertyGroup of the project files of the weather application that shouldn't have been there, but actually was the reason why it worked.

<CodeAnalysisRuleSet>..\editorconfig.ruleset</CodeAnalysisRuleSet>

While trying to get it running for the last post, I had also experimented with a ruleset file. The ruleset file is an XML file that can be used to enable or disable analysis rules in VS2019. I had added a ruleset file to the solution and linked it into the projects, but forgot about that.

So it seemed the failing builds of the last post weren't because of the .editorconfig but because of this ruleset file.

It also seemed the ruleset file was needed to get it working. That shouldn't be the case, so I asked the folks via the GitHub issue about it. The answer came fast:

  • Fact #1: The ruleset file isn't needed

  • Fact #2: The regular .editorconfig entries don't work yet

The solution

The ruleset entries have currently been moved to the .editorconfig. This means you need to add IDE-specific entries to the .editorconfig to get it running, which also means you will have redundant entries until all the code style analyzers are moved to Roslyn and mapped to the .editorconfig:

# IDE0007: Use 'var' instead of explicit type
dotnet_diagnostic.IDE0007.severity = error

# IDE0055 Fix formatting
dotnet_diagnostic.IDE0055.severity = error

# IDE005_gen: Remove unnecessary usings in generated code
dotnet_diagnostic.IDE0005_gen.severity = error

# IDE0065: Using directives must be placed outside of a namespace declaration
dotnet_diagnostic.IDE0065.severity = error

# IDE0059: Unnecessary assignment
dotnet_diagnostic.IDE0059.severity = error

# IDE0003: Name can be simplified
dotnet_diagnostic.IDE0003.severity = error  

As mentioned, these entries are already in the .editorconfig but written differently.
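For example, IDE0007 ("use var") is normally expressed with the regular code style options – entries like these, which the command-line build currently doesn't enforce:

# the "regular" .editorconfig notation for the same rule (IDE0007)
csharp_style_var_for_built_in_types = true:error
csharp_style_var_when_type_is_apparent = true:error
csharp_style_var_elsewhere = true:error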

In the GitHub issue they also suggested adding a specific line in case you don't know all the IDE numbers. This line writes out warnings for all possible code style failures. You'll see the numbers in the warning output and can then configure how each code style failure should be handled:

# C# files
[*.cs]
dotnet_analyzer_diagnostic.category-Style.severity = warning

This solves the problem and it actually works really well.

Conclusion

Even if it solves the problem, I really hope this is only an intermediate solution, because of the redundant entries in the .editorconfig. I would prefer not to have the IDE-specific entries, but I guess this needs some more time and a lot of work on Microsoft's side.

Holger Schwichtenberg: AsNoTrackingWithIdentityResolution() in Entity Framework Core 5.0 as of Preview 7

Microsoft has changed the name of the forced identity resolution when loading in "no-tracking" mode.

Jürgen Gutsch: .NET Interactive in Jupyter Notebooks

For almost a year now, I have been doing a lot of Python projects. Actually, Python isn't that bad. Python and Flask for building web applications work quite similar to NodeJS and ExpressJS. And just like with NodeJS, Python development is really great using Visual Studio Code.

People who are used to Python know Jupyter Notebooks for creating interactive documentation. Interactive documentation means that the code snippets are executable and that you can use Python code to draw charts or to calculate and display data.

If I got it right, Jupyter Notebook was IPython in the past. Now Jupyter Notebook is a standalone project and the IPython project focuses on Python Interactive and Python kernels for Jupyter Notebook.

The so-called kernels extend Jupyter Notebook to execute a specific language. The Python kernel is the default. You are able to install a lot more kernels; there are kernels for NodeJS and more.

Microsoft is working on .NET Interactive and kernels for Jupyter Notebook. You are now able to write interactive documentations in Jupyter Notebook using C#, F# and PowerShell, as well.

In this blog post I'll try to show you how to install and to use it.

Install Jupyter Notebook

You need to have Python3 installed on your machine. The best way to install Python on Windows is to use Chocolatey:

choco install python

I have actually been using Chocolatey as a Windows package manager for many years and never had any problems.

Alternatively, you could download and install Python 3 directly or by using the Anaconda installer.

If Python is installed you can install Jupyter Notebook using the Python package manager PIP:

pip install notebook

You can now use Jupyter by just typing jupyter notebook in the console. This starts the notebook server with the default Python3 kernel. The following command shows the installed kernels:

jupyter kernelspec list

We'll see the python3 kernel in the console output:

Install .NET Interactive

The goal is to have the .NET Interactive kernels running in Jupyter. To get this done you first need to install the latest build of .NET Interactive from MyGet:

dotnet tool install -g --add-source "https://dotnet.myget.org/F/dotnet-try/api/v3/index.json" Microsoft.dotnet-interactive

Since NuGet is not the place to publish continuous integration build artifacts, Microsoft uses MyGet as well to publish previews, nightly builds, and continuous integration build artifacts.

Or install the latest stable version from NuGet:

dotnet tool install -g Microsoft.dotnet-interactive

Once this is installed, you can use dotnet interactive to install the kernels to Jupyter Notebook:

dotnet interactive jupyter install

Let's see whether the kernels are installed:

jupyter kernelspec list

listkernels02

That's it. We now have four different kernels installed.

Run Jupyter Notebook

Let's run Jupyter by calling the next command. Be sure to navigate into a folder where your notebooks are or where you want to save your notebooks:

cd \git\hub\dotnet-notebook
jupyter notebook

startnotebook

It now starts a web server that serves the notebooks from the current location and opens a browser. The current folder will be the working folder for the currently running Jupyter instance. I don't have any files in that folder yet.

Here we have the Python3 and the three new .NET notebook types available:

notebook01

I now want to start playing around with a C# based notebook. So I create a new .NET (C#) notebook:

Try .NET Interactive

Let's add some content and a code snippet. At first I added a Markdown cell.

The so called "cells" are content elements the support specific content types. A Markdown cell is one type as well as a Code cell. The later executes a code snippet and shows the output underneath:

notebook02

That is an easy one. Now let's play with variable usage. I placed two more code cells and some small Markdown cells below:

notebook03

And re-run the entire notebook:

notebook04

As in Python notebooks, the variables are valid in the entire notebook and not only in a single code cell.
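A minimal sketch of what that looks like – a variable defined in one code cell can simply be reused in a later cell:

// first code cell
var counter = 40;

// a later code cell - 'counter' is still in scope
counter += 2;
Console.WriteLine($"The answer is {counter}");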

What else?

Inside a .NET Interactive notebook you can do the same stuff as in a regular code file. You are able to connect to a database, to Azure or just open a file locally on your machine. You can import namespaces as well as reference NuGet packages:

#r "nuget:NodaTime,2.4.8"
#r "nuget:Octokit,0.47.0"

using Octokit;
using NodaTime;
using NodaTime.Extensions;
using XPlot.Plotly;
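After the packages are restored, a subsequent code cell can use them right away – for example, a small sketch using NodaTime:

// use the referenced NodaTime package in a later cell
var now = SystemClock.Instance.GetCurrentInstant();
var berlin = DateTimeZoneProviders.Tzdb["Europe/Berlin"];
Console.WriteLine($"Current time in Berlin: {now.InZone(berlin)}");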

VS Code

VS Code also supports Jupyter Notebooks using Microsoft's Python add-in:

vscode01

Actually, it takes a couple of seconds until the Jupyter server is started. Once it is up and running, it works like a charm in VS Code. I really prefer VS Code over the browser interface to write notebooks.

GitHub

If you use a notepad to open a notebook file, you will see that it is a JSON file that also contains the outputs of the code cells:

vscode01

Because of that, I was really surprised that GitHub supports Jupyter Notebooks as well and displays them in a human-readable format, including the outputs. I had expected to see the source code of the notebook instead of the output:

vscode01

The rendering is limited but good enough to read the document. This means it could make sense to write a notebook instead of a simple Markdown file on GitHub.

Conclusion

I really like the concept of interactive documentation. It is pretty common in the data science, analytics, and statistics universe. Python developers as well as MATLAB developers know the concept.

Personally, I see a great benefit in other areas, too, like learning material, library and API documentation, and any documentation that focuses on code.

I also see a benefit for documentation about production lines, where several machines work together in a chain. Since you are able to execute .NET code, you could connect to machine sensors, read the state of the machines and display it in the documentation. The maintenance staff would then be able to see the state directly in the documentation of the production line.

Marco Scheel: Beware of the Teams Admin center to create new teams (and assign owners)

The Microsoft Teams Admin Center can be used to create a new team. The initial dialog allows you to set multiple owners for the team. This feature was added over time and is a welcome addition to make the life of an administrator easier. But the implementation has a big shortcoming: the owners specified in this dialog will not become members of the underlying Microsoft 365 group in Azure Active Directory. As a result, all Microsoft 365 group services checking for members will not behave as expected. For example: these owners will not be able to access Planner. Other services like Teams and SharePoint work by accident.

image

Let’s start with some basic information. Office Microsoft 365 Groups use a special AAD group type. But it is still a group in Azure Active Directory. A group in AAD is very similar to the old-school AD group in our on-premises directories. The group is made of members. In most on-premises cases these groups are managed by your directory admins. But also in AD you can specify owners of a group that will then be able to manage these groups… if they have the right tool (dsa.msc, …). In the cloud Microsoft (and myself) is pushing towards self-service for group management. This “self-service-first approach” is obvious since the introduction of now Office Microsoft 365 Groups. Teams being one of the most famous M365 Group services is also pushing to the owner/member model where end users are owners of a Team (therefor an AAD group). All end user facing UX from Microsoft is abstracting away how the underlying AAD group is managed. Teams for example will group owners and members in two sections:

image

But if you look at the underlying AAD group, you will find that every owner is also a member:

image

And the Azure AD portal shows a dedicated section to manage owners of the group:

image

The Microsoft Admin portal also has dedicated sections for members and owners:

image

In general it is important that your administrative staff is aware of how group membership (including ownership) works. The problem with the Teams Admin portal mentioned in the beginning is that the initial dialog leaves the group in an inconsistent state without making the admin aware of this misalignment. Let's check the group created in the initial screenshot using the Teams Admin center. We specified 5 admin users and one member in the initial dialog. None of the admin users was added as a member in AAD (Microsoft 365 Admin portal screenshot):

image

But looking at the Teams Admin center won't show this "misconfiguration":

image

If Leia wants to access the associated Planner service for this Team/Group the following error will show:

image

Planner checks against group membership. Only if the user is a member will Planner (and other services) check whether the user is also an owner and then show administrative controls. Teams itself looks OK. SharePoint implemented a hack: the owners of the AAD group are granted Site Collection Admin permissions, so every item is accessible – because that is what Site Collection Admins are for. If your users are reporting problems like this and everything looks OK in the Teams Admin center, go check a "better" portal like the AAD portal.

To fix the problem in the Teams Admin center, the owner has to be demoted to a member and then promoted to being an owner again.
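If you want to check or fix existing groups in bulk, a small sketch using the AzureAD PowerShell module could look like this (the group ID is a placeholder – test it before running against production):

# sketch: make sure every owner of a group is also a member
$groupId = "<GROUP_OBJECT_ID>"
$owners  = Get-AzureADGroupOwner -ObjectId $groupId -All $true
$members = Get-AzureADGroupMember -ObjectId $groupId -All $true

foreach ($owner in $owners) {
    if (-not ($members.ObjectId -contains $owner.ObjectId)) {
        # the owner is missing from the member list - add it
        Add-AzureADGroupMember -ObjectId $groupId -RefObjectId $owner.ObjectId
    }
}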

A quick test with the SharePoint admin center shows that this system manages membership as intended and every owner will also be a member. SharePoint requires one owner (Leia) in the first dialog, and the second dialog allows additional owners and members:

image

The result for members in AAD is correct as all owners are also added as members:

image

Until this design flaw is fixed, we recommend not adding more owners in the initial dialog. The currently logged-in admin user will be added as the only owner of the created group. In the next step, remove your admin, add the requested users as members and promote them to owners. If you rely on self-service, this should not be a problem. If you have automated a custom creation process, hopefully you already add every owner as a member as well – then you should be good most of the time. We at glueckkanja-gab are supporting many customers with our custom lifecycle solution. We are adding an option to report this problem and an optional config switch to fix any detected misalignments.

I’ve created a UserVoice “idea” for this “bug”. So please lets go: vote!

https://microsoftteams.uservoice.com/forums/555103-public/suggestions/40951714-add-owners-also-as-members-aad-group-in-the-init 

Daniel Schädler: Creating a Kubernetes Cluster with Minikube

Goal

My goal in this post is to create a Kubernetes cluster with my setup and to become more and more familiar with containers and Kubernetes. This article describes the procedure with my setup and follows the tutorial here.

Prerequisites

Before I can install a cluster, Minikube must be installed first; it is a lightweight implementation of Kubernetes. It creates a virtual machine (referred to as VM below) on the local computer.

Procedure

In this section I go through the interactive tutorial step by step, locally, following the same steps.

Minikube version and start

Of course, you can also work with the interactive tutorial on kubernetes.io. Since I now have my own setup, I first start PowerShell as administrator and proceed as follows:

  1. Start PowerShell as administrator
  2. Display the Minikube version and start Minikube
Minikube version and start with PowerShell

For comparison, in the interactive terminal this looks like this:

Minikube version and start in the interactive terminal on kubernetes.io

Cluster version

To interact with Kubernetes, the command-line tool kubectl is used. First, let's look at some cluster information. For this purpose we enter the command

kubectl version
kubectl version with PowerShell

The interactive console displays this information a bit more clearly.

kubectl version in the interactive terminal on kubernetes.io

Cluster details

To display the cluster details, we enter the following commands:

kubectl cluster-info
kubectl get nodes

The nodes command shows us all nodes that we can use for our applications.

kubectl cluster and node information in PowerShell

The interactive terminal shows us the information as follows:

kubectl cluster and node information in the interactive terminal on kubernetes.io

Conclusion

I never thought this would be so easy, and I am aware that the challenges are yet to come. Now I am looking forward to the next modules and will work through them with a focus on a .NET application. The journey continues with the next module, using my installation.

Daniel Schädler: My First Steps with Kubernetes / Minikube

Goal

The goal is to install Minikube on my Windows machine in order to work my way into the world of containers and Kubernetes. This is going to be a series of articles based on the following tutorials on Kubernetes.io.

Prerequisites

First, the following prerequisites must be met so that Minikube can be installed:

  • kubectl must be installed

I did this with PowerShell, which has to be run as administrator. The installation can then be initiated with the following command.

Install-Script -Name 'install-kubectl' -Scope CurrentUser -Force
install-kubectl.ps1 -DownloadLocation "C:\Program Files\kubectl"

It may be that the NuGet provider has to be updated, as shown in the following image. Confirm with "Y" and run the command again as described above.

Installation of Minikube with PowerShell

I myself had problems with the BITS file transfer that prevented the installation. To get ahead anyway, I downloaded the binary manually, copied it into the "C:\Program Files\kubectl" folder and added the path.

The version was then displayed with the following command:

kubectl version --client
kubectl client version

Now all prerequisites for the Minikube installation, as described [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/), are in place.

Procedure

For the installation of Minikube, I chose the Windows installer. It has to be run as administrator.

Minikube installation as administrator

Minikube setup

Once the executable has been started as administrator, you walk through the installation steps:

  1. First we choose the language "English"
Choose setup language
  2. Then the license agreement has to be confirmed.

  3. After that, choose the installation location. I left it at the default value as shown below.

Installation destination for Minikube

Minikube installation confirmation

My setup is Windows 10 with Hyper-V, so Minikube has to be started accordingly. A list of the drivers can be found here.

For this purpose, PowerShell has to be run with administrator rights again, executing the following command:

minikube start --driver=hyperv

If everything has been entered correctly, Minikube starts and sets up its environment for use with Hyper-V, as shown in the following image.

Initializing Minikube with Hyper-V

Conclusion

This way I can take a look at Kubernetes locally in combination with Hyper-V and familiarize myself with it, without incurring big costs on my private Azure subscription. In further articles I will work my way through the tutorials referenced here and later build an environment that roughly resembles most real-world deployments.

Golo Roden: Introduction to React, Part 5: React State

Applications do not just display data statically, they also change it from time to time, not least because of user input. How do you handle dynamic data in React?

Christian Dennig [MS]: Release to Kubernetes like a Pro with Flagger

Introduction

When it comes to running applications on Kubernetes in production, you will sooner or later face the challenge of updating your services with a minimum amount of downtime for your users… and – at least as important – of being able to release new versions of your application with confidence… that means you discover unhealthy and "faulty" services very quickly and are able to roll back to previous versions without much effort.

When you search the internet for best practices or Kubernetes addons that help with these challenges, you will stumble upon Flagger from Weaveworks, as I did.

Flagger is basically a controller that will be installed in your Kubernetes cluster. It helps you with canary and A/B releases of your services by handling all the hard stuff like automatically adding services and deployments for your “canaries”, shifting load over time to these and rolling back deployments in case of errors.

As if that wasn’t good enough, Flagger also works in combination with popular Service Meshes like Istio and Linkerd. If you don’t want to use Flagger with such a product, you can also use it on “plain” Kubernetes, e.g. in combination with an NGINX ingress controller. Many choices here…

I like linkerd very much, so I’ll choose that one in combination with Flagger to demonstrate a few of the possibilities you have when releasing new versions of your application/services.

Prerequisites

linkerd

I already set up a plain Kubernetes cluster on Azure for this sample, so I’ll start by adding linkerd to it (you can find a complete guide how to install linkerd and the CLI on https://linkerd.io/2/getting-started/):

$ linkerd install | kubectl apply -f -

After the command has finished, let’s check if everything works as expected:

$ linkerd check && kubectl -n linkerd get deployments
...
...
control-plane-version
---------------------
√ control plane is up-to-date
√ control plane and cli versions match

Status check results are √
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
flagger                  1/1     1            1           3h12m
linkerd-controller       1/1     1            1           3h14m
linkerd-destination      1/1     1            1           3h14m
linkerd-grafana          1/1     1            1           3h14m
linkerd-identity         1/1     1            1           3h14m
linkerd-prometheus       1/1     1            1           3h14m
linkerd-proxy-injector   1/1     1            1           3h14m
linkerd-sp-validator     1/1     1            1           3h14m
linkerd-tap              1/1     1            1           3h14m
linkerd-web              1/1     1            1           3h14m

If you want to open the linkerd dashboard and see the current state of your service mesh, execute:

$ linkerd dashboard

After a few seconds, the dashboard will be shown in your browser.

Microsoft Teams Integration

For alerting and notification, we want to leverage the MS Teams integration of Flagger to get notified each time a new deployment is triggered or a canary release will be “promoted” to be the primary release.

Therefore, we need to set up a webhook in MS Teams (in an MS Teams channel!):

  1. In Teams, choose More options () next to the channel name you want to use and then choose Connectors.
  2. Scroll through the list of Connectors to Incoming Webhook, and choose Add.
  3. Enter a name for the webhook, upload an image and choose Create.
  4. Copy the webhook URL. You’ll need it when adding Flagger in the next section.
  5. Choose Done.

Install Flagger

Time to add Flagger to your cluster. To do that, we will be using Helm (version 3, so no need for a Tiller deployment upfront).

$ helm repo add flagger https://flagger.app

$ kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml

[...]

$ helm upgrade -i flagger flagger/flagger \
--namespace=linkerd \
--set crd.create=false \
--set meshProvider=linkerd \
--set metricsServer=http://linkerd-prometheus:9090 \
--set msteams.url=<YOUR_TEAMS_WEBHOOK_URL>

Check, if everything has been installed correctly:

$ kubectl get pods -n linkerd -l app.kubernetes.io/instance=flagger

NAME                       READY   STATUS    RESTARTS   AGE
flagger-7df95884bc-tpc5b   1/1     Running   0          0h3m

Great, looks good. So, now that Flagger has been installed, let's have a look at where it will help us and what kind of objects will be created for canary analysis and promotion. Remember that we use linkerd in this sample, so all objects and features discussed in the following section are only relevant for linkerd.

How Flagger works

The sample application we will be deploying shortly consists of a VueJS Single Page Application that is able to display quotes from the Star Wars movies – and it is able to request the quotes in a loop (to be able to put some load on the service). When requesting a quote, the web application talks to a service (proxy) within the Kubernetes cluster, which in turn talks to another service (quotesbackend) that is responsible for creating the quote (simulating service-to-service calls in the cluster). The SPA as well as the proxy are accessible through an NGINX ingress controller.

After the application has been successfully deployed, we also add a canary object which takes care of the promotion of a new revision of our backend deployment. The Canary object will look like this:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: quotesbackend
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: quotesbackend
  progressDeadlineSeconds: 60
  service:
    port: 3000
    targetPort: 3000
  analysis:
    interval: 20s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 70
    stepWeight: 10
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s

What this configuration basically does is watch for new revisions of the quotesbackend deployment. When that happens, it starts a canary deployment for it. Every 20s, it will increase the weight of the traffic split by 10% until it reaches 70%. If no errors occur during the promotion, the new revision will be scaled up to 100% and the old version will be scaled down to zero, making the canary the new primary. Flagger will monitor the request success rate and the request duration (linkerd Prometheus metrics). If one of them drops below the threshold set in the Canary object, a rollback to the old version will be started and the new deployment will be scaled back to zero pods.

To achieve all of the above-mentioned analysis, Flagger will create several new objects for us:

  • backend-primary deployment
  • backend-primary service
  • backend-canary service
  • SMI / linkerd traffic split configuration
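As an illustration, the generated traffic split for our backend would look roughly like this (field names per the SMI spec – the exact apiVersion and the weights are managed by Flagger at runtime):

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: quotesbackend
spec:
  service: quotesbackend
  backends:
  - service: quotesbackend-primary
    weight: 90
  - service: quotesbackend-canary
    weight: 10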

The resulting architecture will look like that:

So, enough of theory, let’s see how Flagger works with the sample app mentioned above.

Sample App Deployment

If you want to follow the sample on your machine, you can find all the code snippets, deployment manifests etc. on https://github.com/cdennig/flagger-linkerd-canary

Git Repo

First, we will deploy the application in a basic version. This includes the backend and frontend components as well as an Ingress Controller which we can use to route traffic into the cluster (to the SPA app + backend services). We will be using the NGINX ingress controller for that.

To get started, let’s create a namespace for the application and deploy the ingress controller:

$ kubectl create ns quotes

# Enable linkerd integration with the namespace
$ kubectl annotate ns quotes linkerd.io/inject=enabled

# Deploy ingress controller
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ kubectl create ns ingress
$ helm install my-ingress ingress-nginx/ingress-nginx -n ingress

Please note that we annotate the quotes namespace to automatically get the Linkerd sidecar injected at deployment time. Any pod created within this namespace will be part of the service mesh and controlled via Linkerd.

As soon as the first part is finished, let’s get the public IP of the ingress controller. We need this IP address to configure the endpoint to call for the VueJS app, which in turn is configured in a file called settings.js of the frontend/Single Page Application pod. This file will be referenced when the index.html page gets loaded. The file itself is not present in the Docker image. We mount it during deployment time from a Kubernetes secret to the appropriate location within the running container.

One more thing: to have a proper DNS name to call our service (instead of using the plain IP), I chose to use NIP.io. The service is dead simple: e.g. you can simply use the DNS name 123-456-789-123.nip.io and it will resolve to the host with IP 123.456.789.123. Nothing to configure, no more editing of /etc/hosts…

So first, let’s determine the IP address of the ingress controller…

# get the IP address of the ingress controller...

$ kubectl get svc -n ingress
NAME                                            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
my-ingress-ingress-nginx-controller             LoadBalancer   10.0.93.165   52.143.30.72   80:31347/TCP,443:31399/TCP   4d5h
my-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.157.46   <none>         443/TCP                      4d5h

Please open the file settings_template.js and adjust the endpoint property to point to the cluster (in this case, the IP address is 52.143.30.72, so the DNS name will be 52-143-30-72.nip.io).

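The settings file is essentially a small piece of JavaScript exposing the backend endpoint – as a purely illustrative sketch (check settings_template.js in the repo for the actual structure and property names):

// settings_template.js - illustrative sketch only
var settings = {
  endpoint: "http://52-143-30-72.nip.io/quotes"
};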
Next, we need to add the corresponding Kubernetes secret for the settings file:

$ kubectl create secret generic uisettings --from-file=settings.js=./settings_template.js -n quotes

As mentioned above, this secret will be mounted to a special location in the running container. Here’s the deployment file for the frontend – please see the sections for volumes and volumeMounts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: quotesfrontend
spec:
  selector:
      matchLabels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
  replicas: 1
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: quotesfrontend
        quotesapp: frontend
        version: v1
    spec:
      containers:
      - name: quotesfrontend
        image: csaocpger/quotesfrontend:4
        volumeMounts:
          - mountPath: "/usr/share/nginx/html/settings"
            name: uisettings
            readOnly: true
      volumes:
      - name: uisettings
        secret:
          secretName: uisettings

Last but not least, we also need to adjust the ingress definition to work with the DNS / hostname. Open the file ingress.yaml and adjust the hostnames for the two ingress definitions. In this case, the resulting manifest looks like this:

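As a sketch – the actual ingress.yaml is part of the repository and may differ in details – the two adjusted definitions could look roughly like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quotesfrontend-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: 52-143-30-72.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: quotesfrontend
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quotesproxy-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: 52-143-30-72.nip.io
    http:
      paths:
      - path: /quotes
        backend:
          serviceName: quotesproxy
          servicePort: 80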
Now we are set to deploy the whole application:

$ kubectl apply -f base-backend-infra.yaml -n quotes
$ kubectl apply -f base-backend-app.yaml -n quotes
$ kubectl apply -f base-frontend-app.yaml -n quotes
$ kubectl apply -f ingress.yaml -n quotes

After a few seconds, you should be able to point your browser to the hostname and see the “Quotes App”:

Basic Quotes app

If you click on the “Load new Quote” button, the SPA will call the backend (here: http://52-143-30-72.nip.io/quotes), request a new “Star Wars” quote and show the result of the API Call in the box at the bottom. You can also request quotes in a loop – we will need that later to simulate load.

Flagger Canary Settings

We need to configure Flagger and make it aware of our deployment – remember, we only target the backend API that serves the quotes.

To do so, we deploy the canary configuration (the canary.yaml file discussed before):

$ kubectl apply -f canary.yaml -n quotes

You have to wait a few seconds and check the services, deployments and pods to see if it has been correctly installed:

$ kubectl get svc -n quotes

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
quotesbackend           ClusterIP   10.0.64.206    <none>        3000/TCP   51m
quotesbackend-canary    ClusterIP   10.0.94.94     <none>        3000/TCP   70s
quotesbackend-primary   ClusterIP   10.0.219.233   <none>        3000/TCP   70s
quotesfrontend          ClusterIP   10.0.111.86    <none>        80/TCP     12m
quotesproxy             ClusterIP   10.0.57.46     <none>        80/TCP     51m

$ kubectl get po -n quotes
NAME                                     READY   STATUS    RESTARTS   AGE
quotesbackend-primary-7c6b58d7c9-l8sgc   2/2     Running   0          64s
quotesfrontend-858cd446f5-m6t97          2/2     Running   0          12m
quotesproxy-75fcc6b6c-6wmfr              2/2     Running   0          43m

$ kubectl get deploy -n quotes
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
quotesbackend           0/0     0            0           50m
quotesbackend-primary   1/1     1            1           64s
quotesfrontend          1/1     1            1           12m
quotesproxy             1/1     1            1           43m

That looks good! Flagger has created new services, deployments and pods for us to be able to control how traffic will be directed to existing/new versions of our “quotes” backend. You can also check the canary definition in Kubernetes, if you want:

$ kubectl describe canaries -n quotes

Name:         quotesbackend
Namespace:    quotes
Labels:       <none>
Annotations:  API Version:  flagger.app/v1beta1
Kind:         Canary
Metadata:
  Creation Timestamp:  2020-06-06T13:17:59Z
  Generation:          1
  Managed Fields:
    API Version:  flagger.app/v1beta1
[...]

You will also receive a notification in Teams that a new deployment for Flagger has been detected and initialized:

Kick-Off a new deployment

Now comes the part where Flagger really shines. We want to deploy a new version of the backend quote API – switching from “Star Wars” quotes to “Star Trek” quotes! What will happen is the following:

  • as soon as we deploy a new “quotesbackend”, Flagger will recognize it
  • new versions will be deployed, but no traffic will be directed to them at the beginning
  • after some time, Flagger will start to redirect traffic via Linkerd / TrafficSplit configurations to the new version via the canary service, starting – according to our canary definition – at a rate of 10%. So 90% of the traffic will still hit our “Star Wars” quotes
  • it will monitor the request success rate and advance the rate by 10% every 20 seconds
  • if a 70% traffic split is reached without any significant amount of errors, the deployment will be scaled up to 100% and promoted as the “new primary”

Before we deploy it, let’s request new quotes in a loop (set the frequency e.g. to 300ms via the slider and press “Load in Loop”).

Base deployment: Load quotes in a loop.

Then, deploy the new version:

$ kubectl apply -f st-backend-app.yaml -n quotes

$ kubectl describe canaries quotesbackend -n quotes
[...]
[...]
Events:
  Type     Reason  Age                   From     Message
  ----     ------  ----                  ----     -------
  Warning  Synced  14m                   flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  14m                   flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  4m7s                  flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Starting canary analysis for quotesbackend.quotes
  Normal   Synced  3m47s                 flagger  Advance quotesbackend.quotes canary weight 10
  Warning  Synced  3m7s (x2 over 3m27s)  flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Normal   Synced  2m47s                 flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  2m27s                 flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  2m7s                  flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  107s                  flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  87s                   flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  67s                   flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  7s (x3 over 47s)      flagger  (combined from similar events): Promotion completed! Scaling down quotesbackend.quotes

You will notice in the UI that every now and then a quote from “Star Trek” will appear…and that the frequency increases every 20 seconds as the canary deployment receives more traffic over time. As stated above, when the traffic split reaches 70% and no errors occurred in the meantime, the “canary/new version” will be promoted as the “new primary version” of the quotes backend. At that point, you will only receive quotes from “Star Trek”.

Canary deployment: new quotes backend servicing “Star Trek” quotes.

Because of the Teams integration, we also get a notification of a new version that will be rolled-out and – after the promotion to “primary” – that the rollout has been successfully finished.

Starting a new version rollout with Flagger
Finished rollout with Flagger

What happens when errors occur?

So far, we have been following the “happy path”…but what happens if there are errors during the rollout of a new canary version? Let’s say we have produced a bug in our new service that throws an error when requesting a new quote from the backend. Let’s see how Flagger behaves then…

The version that will be deployed will start throwing errors after a certain amount of time. Because Flagger is monitoring the “request success rate” via Linkerd metrics, it will notice that something is “not the way it is supposed to be”, stop the promotion of the new “error-prone” version, scale it back to zero pods and keep the current primary backend (meaning: “Star Trek” quotes) in place.

$ kubectl apply -f error-backend-app.yaml -n quotes

$ kubectl describe canaries.flagger.app quotesbackend
[...]
Events:
  Type     Reason  Age                    From     Message
  ----     ------  ----                   ----     -------
  Warning  Synced  23m                    flagger  quotesbackend-primary.quotes not ready: waiting for rollout to finish: observed deployment generation less then desired generation
  Normal   Synced  23m                    flagger  Initialization done! quotesbackend.quotes
  Normal   Synced  13m                    flagger  New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 20
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 30
  Normal   Synced  11m                    flagger  Advance quotesbackend.quotes canary weight 40
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 50
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 60
  Normal   Synced  10m                    flagger  Advance quotesbackend.quotes canary weight 70
  Normal   Synced  3m43s (x4 over 9m43s)  flagger  (combined from similar events): New revision detected! Scaling up quotesbackend.quotes
  Normal   Synced  3m23s (x2 over 12m)    flagger  Advance quotesbackend.quotes canary weight 10
  Normal   Synced  3m23s (x2 over 12m)    flagger  Starting canary analysis for quotesbackend.quotes
  Warning  Synced  2m43s (x4 over 12m)    flagger  Halt advancement no values found for linkerd metric request-success-rate probably quotesbackend.quotes is not receiving traffic: running query failed: no values found
  Warning  Synced  2m3s (x2 over 2m23s)   flagger  Halt quotesbackend.quotes advancement success rate 0.00% < 99%
  Warning  Synced  103s                   flagger  Halt quotesbackend.quotes advancement success rate 50.00% < 99%
  Warning  Synced  83s                    flagger  Rolling back quotesbackend.quotes failed checks threshold reached 5
  Warning  Synced  81s                    flagger  Canary failed! Scaling down quotesbackend.quotes

As you can see in the event log, the success rate drops significantly, and Flagger halts the promotion of the new version, scales it down to zero pods and keeps the current version as the “primary” backend.

New backend version throwing errors
Teams notification: service rollout stopped!

Conclusion

With this article, I have certainly only covered the features of Flagger in a very brief way. But this small example shows what a great relief Flagger can be when it comes to the rollout of new Kubernetes deployments. Flagger can do a lot more than shown here and it is definitely worth taking a look at this product from WeaveWorks.

I hope I could give some insight and made you want to do more…and to have fun with Flagger 🙂

As mentioned above, all the sample files, manifests etc. can be found here: https://github.com/cdennig/flagger-linkerd-canary.

Jürgen Gutsch: Getting the .editorconfig working with MSBuild

UPDATE: While trying the .editorconfig and writing this post, I made a fundamental mistake. I added a ruleset file to the projects and this is the reason why it worked. It wasn't really the .editorconfig in this case. I'm really sorry about that. Please see this post to learn how it really works.

In January I wrote a post about setting up VS2019 and VSCode to use the .editorconfig. In this post I'm going to write about how to get the .editorconfig settings checked during build time.

It works like it should work: In the editors. And it works in VS2019 at build-time. But it doesn't work at build time using MSBuild. This means it won't work with the .NET CLI, it won't work with VSCode and it won't work on any build server that uses MSBuild.

Actually this is a huge downside of the .editorconfig. Why should we use the .editorconfig to enforce the coding style if a build in VSCode doesn't fail, but a build in VS2019 does? Why should we use the .editorconfig if the build on a build server doesn't fail? Not all of the developers are using VS2019; sometimes VSCode is the better choice. And we don't want to install VS2019 on a build server and don't want to call vs.exe to build the sources.

The reason why it is like this is as simple as it is bad: the Roslyn analyzers that check the code using the .editorconfig are not yet done.

Actually, Microsoft is working on that and is porting the VS2019 coding style analyzers to Roslyn analyzers that can be downloaded and used via NuGet. Currently, about half of the work is done and some of the analyzers can be used in the project. See here: #33558

With this post I'd like to try it out. We need this for our projects at the YOO, the company I work for, and I'm really curious about how this is going to work in a real project.

Code Analyzers

To try it out, I'm going to use the Weather Stats App I created in previous posts. Feel free to clone it from GitHub and follow the steps I do within this post.

At first you need to add a NuGet package:

Microsoft.CodeAnalysis.CSharp.CodeStyle

This is currently a development version hosted on MyGet, which means you need to follow the installation instructions on MyGet. Currently it is the following .NET CLI command:

dotnet add package Microsoft.CodeAnalysis.CSharp.CodeStyle --version 3.8.0-1.20330.5 --source https://dotnet.myget.org/F/roslyn/api/v3/index.json

The version number might change in the future. Currently I use the version 3.8.0-1.20330.5 which is out since June 30th.

You need to execute this command for every project in your solution.

After executing this command you'll have the following new lines in the project files:

<PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeStyle" Version="3.8.0-1.20330.5">
    <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    <PrivateAssets>all</PrivateAssets>
</PackageReference>

If not, just copy these lines into the project file and run dotnet restore to actually load the package.

This should be enough to get it running.

Adding coding style errors

To try it out I need to add some coding style errors. I simply added a few, like these:
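To give an idea, here is a small, purely illustrative snippet of the kind of style violations I mean. The class and method names are made up, and which rules actually fire depends on your .editorconfig; the unnecessary this. qualification corresponds to the IDE0003 rule mentioned below.

using System.Net.Http;
using System.Threading.Tasks;

public class WeatherLoader
{
    private readonly HttpClient client = new HttpClient();

    public async Task<string> LoadAsync(string url)
    {
        // unnecessary "this." qualification (IDE0003)
        var result = await this.client.GetStringAsync(url);
        return this.Format(result);
    }

    private string Format(string input) => input.Trim();
}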

Roslyn conflicts

Maybe you will get a lot of warnings saying that an instance of the analyzers cannot be created because of a missing Microsoft.CodeAnalysis 3.6.0 assembly, like this:

Could not load file or assembly 'Microsoft.CodeAnalysis, Version=3.6.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

This might seem strange because the code analysis assemblies should already be available whenever Roslyn is used. Actually this error happens if you do a dotnet build while VSCode is running the Roslyn analyzers. Strange, but reproducible. Maybe the Roslyn analyzers can only run once at a time.

To get it running without those warnings, you can simply close VSCode or wait for a few seconds.

Get it running

Actually it didn't work on my machine the first few times. The reason was that I forgot to update the global.json: I was still using a 3.0 runtime to run the analyzers, which doesn't work.

After updating the global.json to a 5.0 runtime (preview 6 in my case) it failed as expected:

Since the migration of the IDE analyzers to Roslyn analyzers is only half done, not all of the errors will fail the build. This is why the IDE0003 rule doesn't appear here. I used the this keyword twice in the code above, which should also fail the build.

Conclusion

Actually I was wondering why Microsoft didn't start earlier to convert the VS2019 analyzers into Roslyn code analyzers. This is really valuable for teams where developers use VSCode, VS2019, VS for Mac or any other tool to write .NET Core applications. It is not only about showing coding style errors in an editor, it should also fail the build in case coding style errors are checked in.

Anyway, it is working well. And hopefully Microsoft will complete the set of analyzers as soon as possible.

Golo Roden: Entdeckungsreise in die selbst genutzte Programmiersprache

No matter which programming language you develop in, there is something new to discover from time to time for almost every developer. But how can a systematic journey of discovery into your own programming language be reconciled with everyday work?

Stefan Lieser: Aufzeichnung zum Webinar “Softwareentwicklung ohne Abhängigkeiten” veröffentlicht

I have just published the recording of the webinar "Softwareentwicklung ohne Abhängigkeiten" from 02.06.2020.

The post "Aufzeichnung zum Webinar 'Softwareentwicklung ohne Abhängigkeiten' veröffentlicht" first appeared on Refactoring Legacy Code.

Stefan Lieser: Folien zum Webinar “Flow Design am Beispiel”

On 30.06.2020 I held the webinar "Flow Design am Beispiel". Below you will find the slides as well as the link to the examples. You can find the two examples shown in the webinar at the following links: CSV Viewer – https://github.com/slieser/flowdesignbuch/tree/master/csharp/csvviewer/csvviewer MyStocks – https://github.com/slieser/flowdesignbuch/tree/master/csharp/mystocks

The post "Folien zum Webinar 'Flow Design am Beispiel'" first appeared on Refactoring Legacy Code.

Holger Schwichtenberg: Behandlung der Umsatzsteuersätze von 5 und 16 Prozent in der Elster-Umsatzsteuervoranmeldung

The tax authorities now simply forgo the separation by tax rates. Developers of accounting solutions, however, have to be considerably more agile.

Code-Inside Blog: Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?

Within our product we are moving more and more stuff into the .NET Core world. Last week we had a discussion around the required software prerequisites, and in the .NET Framework world this question was always easy to answer:

.NET Framework 4.5 or higher.

With .NET Core the answer is slightly different:

In theory major versions are compatible, e.g. if you compiled your app with .NET Core 3.0 and a .NET Core runtime 3.1 is the only installed 3.X runtime on the machine, this runtime is used.

This system is called “Framework-dependent apps roll forward” and sounds good.
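If you want to be explicit about this behaviour, you can state it yourself. The following project file fragment is only a sketch: the RollForward MSBuild property exists since the .NET Core 3.x SDK, and the target framework shown here is just an example, not our actual project.

<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
  <!-- allowed values include: Minor (default), Major, LatestPatch, LatestMinor, LatestMajor, Disable -->
  <RollForward>LatestMinor</RollForward>
</PropertyGroup>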

The bad part

Unfortunately this didn’t work for us. Not sure why, but our app refused to work because a .dll was not found or missing. The reason is currently not clear. Be aware that Microsoft has written a hint that such things might occur:

It’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.

The good part

With .NET Core we could ship the framework with our app and it should run fine wherever we deploy it.

Summary

Read the docs about the “app roll forward” approach if you have similar concerns, but test your app with that combination.

As a side note: 3.0 is not supported anymore, so it would be good to upgrade to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.

Hope this helps!

Jürgen Gutsch: Exploring Orchard Core - Part 1

For a while now I have been planning to try out the Orchard Core Application Framework. Back then I saw an awesome video where Sébastien Ros showed an early version of Orchard Core. If I remember right, it was this ASP.NET Community Standup: ASP.NET Community Standup - November 27, 2018 - Sebastien Ros on Headless CMS with Orchard Core

Why a blog series

Actually this post wasn't planned to be a series, but as usual the posts are getting longer and longer. The more I write, the more comes to mind to write about. Bloggers know this, I guess. So I needed to decide whether I want to write a monster blog post or a series of smaller posts. Maybe the latter is easier to read and to write.

What is Orchard Core?

Orchard Core is an open-source modular and multi-tenant application framework built with ASP.NET Core, and a content management system (CMS) built on top of that application framework.

Orchard Core is not a new version of the Orchard CMS. It is a completely new thing written in ASP.NET Core. The Orchard CMS was designed as a CMS, but Orchard Core was designed to be an application framework that can be used to build a CMS, a blog or whatever you want to build. I really like the idea to have a framework like this.

I don't want to repeat the stuff, that is already on the website. To learn more about it just visit it: https://www.orchardcore.net/

I had a look into the Orchard CMS, back then when I was evaluating a new blog. It was good, but I didn't really feel confident.

Currently RC2 has been out for a couple of days and version 1 should be released in September 2020. The roadmap already defines features for future releases.

Let's have a first glimpse

When I try a CMS or something like this, I try to follow the quick start guide. I want to start the application up to have a first look and feel. As a .NET Core fan-boy I decided to use the .NET CLI to run the application. But first I have to clone the repository, to have a more detailed look later on and to run the sample application:

git clone https://github.com/OrchardCMS/OrchardCore.git

This clones the current RC2 into a local repository.

Then we need to cd into the repository and into the web sample:

cd OrchardCore\
cd src\OrchardCore.Cms.Web\

Since this should be an ASP.NET Core application, it should be possible to run the dotnet run command:

dotnet run

As usual in ASP.NET Core I get two URLs to call the app. The HTTP version on port 5000 and the HTTPS version on port 5001.

I should now be able to call the CMS in the browser. Et voilà:

Since every CMS has an admin area, I tried /admin for sure.

At the first start it asks you to set initial credentials and stuff like this. I had already done this before. On every other start I just see the log-in screen:

After the log-in I feel myself warmly welcomed... kinda :-D

Actually this screenshot is a little small, because it hides the administration menu which is the last item in menu. You should definitely have a look into the /admin/features page that has a ton of features to enable. Stuff like GraphQL API, Lucene search indexing, Markdown editing, templating, authentication providers and a lot more.

But I won't go through all the menu items. You can just have a look by yourself. I actually want to explore the application framework.

I want to see some code

This is why I stopped the application and open it in VS Code and this is where the fascinating stuff is.

Ok. This is where I thought the fascinating stuff is. There is almost nothing. There are a ton of language files, an almost empty wwwroot folder, some configuration files and the common files like a *.csproj, the startup.cs and the program.cs. Except for the localization part, it looks completely like an empty ASP.NET Core project.

Where is all the Orchard stuff? I expected a lot more to see.

The Program.cs looks pretty common, except the usage of NLog which is provided via OrchardCore.Logging package:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using OrchardCore.Logging;

namespace OrchardCore.Cms.Web
{
    public class Program
    {
        public static Task Main(string[] args)
            => BuildHost(args).RunAsync();

        public static IHost BuildHost(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureLogging(logging => logging.ClearProviders())
                .ConfigureWebHostDefaults(webBuilder => webBuilder
                    .UseStartup<Startup>()
                    .UseNLogWeb())
                .Build();
    }
}

This clears the default logging providers and adds the NLog web logger. It also uses the common Startup class, which is really clean and doesn't need a lot of configuration.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace OrchardCore.Cms.Web
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddOrchardCms();
        }

        public void Configure(IApplicationBuilder app, IHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            app.UseStaticFiles();

            app.UseOrchardCore();
        }
    }
}

It only adds the services for the Orchard CMS in the method ConfigureServices and uses Orchard Core stuff in the method Configure.

Actually this Startup configures Orchard Core as a CMS. It seems I would also be able to add Orchard Core to the ServiceCollection by using AddOrchardCore(). I guess this would just add the core functionality to the application. Let's see if I'm right.

Both the AddOrchardCms() and the AddOrchardCore() methods are overloaded and can be configured using an OrchardCoreBuilder. Using these overloads you can add Orchard Core features to your application. I guess the method AddOrchardCms() has a set of features preconfigured to behave like a CMS:
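To make the difference a bit more tangible, here is a small sketch of how the two variants could be used in ConfigureServices. Only AddOrchardCms(), AddOrchardCore() and the OrchardCoreBuilder overloads are taken from the text above; what exactly you would register inside the builder is my assumption.

public void ConfigureServices(IServiceCollection services)
{
    // Variant 1: the full CMS with its preconfigured feature set
    services.AddOrchardCms();

    // Variant 2 (sketch): only the application framework, configured via the
    // OrchardCoreBuilder overload – what gets registered here is up to you
    // services.AddOrchardCore(builder =>
    //     builder.ConfigureServices(s =>
    //     {
    //         // add your own modules and services here
    //     }));
}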

It is a lot of guessing and trying right now. But I haven't read any documentation yet. I just want to play around.

I also wanted to see what is possible with the UseOrchardCore() method, but this one just has one optional parameter to add an action that receives the IApplicationBuilder. I'm not sure why this action is really needed. I mean, I would be able to configure ASP.NET Core features inside this action. I could also nest a lot of UseOrchardCore() calls. But why?

I think it is time to have a look into the docs at https://docs.orchardcore.net/en/dev/. Don't confuse them with the docs on https://docs.orchardcore.net/. Those are the Orchard CMS docs that might be outdated now.

The docs are pretty clear. Orchard Core comes in two different targets: The Orchard Core Framework and the Orchard Core CMS. The sample I opened here is the Orchard Core CMS sample. To learn how the Framework works, I need to clone the Orchard Core Samples repository: https://github.com/OrchardCMS/OrchardCore.Samples

I will write about this in the next part of this series.

Not a conclusion yet

I will continue exploring the Orchard Core Framework within the next days and continue to write about it in parallel. The stuff I saw until now is really promising and I like the fact that it simply works without a lot of configuration. Exploring the new CMS would be another topic and really interesting as well. Maybe I will find some time in the future.

Mario Noack: NDepend 2020.1

When I was recently asked whether I would like to renew my review of the new NDepend version, I gladly agreed. The software describes itself as a "Swiss army knife for .NET and .NET Core development teams". It offers a wide range of options for examining a project for code-related problems and technical debt, for visualizing this over time in graphical form and for determining the concrete changes.

The installation is simple, and the Visual Studio integration is optional, quickly done and real added value for me, since I have fast access to all relevant functions right while developing. Although NDepend then regularly collects analysis results automatically, you don't notice any of this during normal work in Visual Studio. Unfortunately, that is not always the case with Visual Studio extensions from other vendors.

NDepend Dashboard

With a simple click on a large circle you get a small overview, can start an analysis manually or simply switch to the dashboard. There you get an assessment of the project quality. This is based on a very sophisticated set of rules developed by the vendor. I did not change the rule set myself; after all, I picked up the code analysis project again because I see the tool as a neutral and incorruptible auditor of my software projects, one that has considerably more experience in this area than I do. The dashboard provides a customizable overview. It contains diagrams showing how the key metrics develop over time. In addition, especially after selecting a comparison baseline, there is a presentation of the project rating now and in direct comparison. This includes an overview of the number of issues grouped by severity. A click on this number takes you straight to a concrete list of the issues and problems. This is particularly helpful if a metric did not develop quite as you expected and you now want to find out the cause.

NDepend issue list

But this is also where the demanding part begins! You can very easily pick out the individual issues and even get the locations displayed if they were only present in the comparison baseline, i.e. have since been removed or fixed. Furthermore, you see the definition of each issue in a LINQ-like language, with a detailed explanation and further external references. The descriptions are very good, but by their nature they are not easy reading, especially at the beginning. You need a firm will to raise your personal level here, and having no trouble understanding English texts also helps. Then, however, you will find an ideal partner in NDepend.

A showpiece the vendor is particularly proud of is the dependency graph. The name says it all! There don't really seem to be any limits for this module. You can look at the interplay of your own or external classes, namespaces or functions at very high speed and arrange it clearly thanks to many options, often with very sensible defaults. This can be very helpful when reviewing or restructuring areas of code. Its performance is impressively demonstrated by the presentation of the .NET Core 3 classes. In my own current projects, however, I am extremely familiar with the structure, which is why this module only sits in the second row for me; for many other developers this will certainly not be the case.

Finally, let's come to the assessment of the price. It is roughly in the range of the JetBrains tools. Since the customer base will be noticeably smaller, this is more than fair in relation to the range of functions. On the positive side, it must be mentioned that NDepend is under permanent development and you still don't get the impression that the finishing touches only happen at the customer's site. The obligatory subscription model is therefore justified. Compared to my older versions, I find, for example, the integration within Visual Studio much more extensive and polished.

Conclusion: The only thing I missed was combining the analysis with a version control system such as SVN or Git. That is a pity and certainly still represents great potential for the future. My personal highlight of NDepend clearly remains the large number of predefined rules. They are cleanly defined, well grouped, flexibly adjustable and, above all, well linked to the matching help topics.

Daniel Schädler: Dokumentieren ohne Microsoft Word

Prerequisites

I work as a system engineer for the Swiss Confederation, and we face the daily challenge of being obliged to create artifacts, for example for the handover to operations according to HERMES. As an example for this blog post I took the operations manual and its template, which is to be written as Markdown here.

The structure of the document is listed below:

  • System overview
  • Start of operations
    • Prerequisites for the start of operations
      • Procedure for the start of operations
      • Quality assurance after the start of operations
      • Requirements for the acceptance of the system
  • Execution and monitoring of operations
    • Operations monitoring
    • Data backup
    • Data protection checks
    • Statistics, key figures, metrics
    • Procedure in case of errors
    • References to operational processes
  • Interruption or termination of operations
    • Stopping the system
    • Procedure for recommissioning
    • Quality assurance after recommissioning
    • Decommissioning of the system, archiving, handover
  • Support organization
    • Support processes
    • Organization with roles
  • Change management
    • Change management process
      • Change management with roles, contact information
  • Security regulations

For this I use the following Visual Studio Code extensions:

A very helpful resource on how to use the Authoring Pack can be found here.

Execution

For this, I create the necessary structure of the operations document on the file system. For me it looks like this:

  • System overview
  • Start of operations
  • Execution of operations
  • Interruption or termination of operations
  • Support organization
  • Change management
  • Security regulations

As an example, the folder structure for the system overview.

This folder contains all relevant artifacts for this chapter. Unfortunately, it is currently not possible to reference SVG graphics with the Microsoft Authoring Pack.

However, the INCLUDE directive can be used to merge Markdown files, so that a complete operations manual can be produced in the end. This looks as follows:

[!INCLUDE [Systemoverview](Link zum Kapitel)]
[!INCLUDE [Betriebsaufnahme](Link zum Kapitel)]
[!INCLUDE [Durchführung des Betriebes](Link zum Kapitel)]
[!INCLUDE [Unterbrechung des Betriebes](Link zum Kapitel)]

Conclusion

The Microsoft Authoring Pack helps a lot with documentation; it would just be desirable if graphics were supported not only as jpg/png but also as SVG, which is, after all, generated automatically by yUML. I am grateful for any helpful feedback.

Golo Roden: Einführung in React, Folge 4: Webpack

Anyone developing applications with React will sooner or later need a bundler like Webpack. But how do you connect Webpack with React?

Holger Schwichtenberg: PowerShell 7: Null Conditional Operator ?. und ?[]

PowerShell 7.0 includes, as an experimental feature, the null conditional operator ?. for single objects and ?[] for collections.

Stefan Henneken: 10-year Anniversary

Exactly 10 years ago today, I published the first article here on my blog. The idea was born in 2010 during a customer training in Switzerland. The announced extensions of IEC 61131-3 were the subject of lively discussion at dinner. I had promised the participants that evening to show a small example on this topic at the end of the training. At that time, Edition 3 of IEC 61131-3 had not yet been released, but CODESYS had its first beta versions, so that the participants could familiarize themselves with the language extensions. So later in the hotel room I started to keep my promise and prepared a small example.

Pleased about the interest in the new features of IEC 61131-3, I later sat at the gate in the airport and was able to think a little bit about the last days. I asked myself again and again whether and how I could pass on the example to all others who are interested. Since I was following certain blogs regularly at that time, and I still do, the idea had come up to run a blog as well.

At the same time, Microsoft offered an appropriate platform to run your own blog without having to deal with the technical details yourself. With the Live Writer, Microsoft also provided a free editor with which texts could be created very easily and loaded directly onto the weblog publication system. At the time, I wanted to save myself the effort of administering the blogging software on a web host. I preferred to invest the time in the content of the articles.

After a few considerations and a number of discussions, I published ‘test articles’ on C# and .NET. After these exercises and the experiences from the training, I created and published the first articles on IEC 61131-3. I also noticed that by writing the articles my knowledge of the respective topic was deepened and consolidated. In addition to the IEC 61131-3 articles, I also wanted to deal with topics related to .NET and therefore started a series on MEF and the TPL. But I also realized that I had to set priorities.

In the meantime Microsoft stopped its blog service, but offered a migration to WordPress. There is also the possibility to host the blog for free. The statistics functions are very helpful. These provide information about the number of hits of each article. It also lists the country from which the articles are retrieved. Fortunately, I saw the number of hits increase each year:

In 2014, I also made a decision to publish the articles not only in German but also in English. So in the last 10 years, about 70 posts have been published, 20 of which are in English. Most of the hits still come from the German-speaking countries. Here are the top 5 from 2019:

Germany: 44.7 %
Switzerland: 6.5 %
United States: 6.3 %
Netherlands: 4.3 %
Austria: 4.1 %

Asian countries and India are hardly represented so far. Either the access to WordPress is limited or the search engines there rate my site differently.

After all these years, I decided to switch to a paid plan at WordPress. One reason is the free choice of my own URL. Instead of https://wordpress.StefanHenneken.com my blog is now accessible via https://StefanHenneken.net. Furthermore, advertising is turned off, which I didn’t always find suitable and on which I had no influence at all. I try to compensate for the costs with affiliate marketing. This means that the book recommendations contain a link to Amazon. If a book is purchased via Amazon, I receive a few euro cents as commission. On this occasion, I also slightly changed the design of the site.

I will continue to publish posts on IEC 61131-3 in German and English. In the medium term, however, new topics may be included.

At this point, I would like to thank all readers. I am always glad about a comment or if my page is recommended via LinkedIn, Xing, or whatever other means. My thanks also go to the people who have helped with the creation of the texts through comments, suggestions for improvement or proofreading.

Johnny Graber: Abhängigkeiten im Code aufzeigen mit NDepend 2020

NDepend is a handy tool for static code analysis. The big new feature in the 2020 version is the completely reworked dependency graph. I have already written about NDepend several times (here, here and here) and no longer want to do without it. The old dependency graph: so far, the graphic at the top level could only be influenced to a limited extent. You could choose … continue reading "Abhängigkeiten im Code aufzeigen mit NDepend 2020"

Holger Schwichtenberg: PowerShell 7: Null Coalescing Assigment Operator ??=

PowerShell 7.0 adds another way of handling the $null case, in the form of the "null coalescing assignment" operator ??=.

Golo Roden: Vergünstigte Remote-Workshops zu DDD, CQRS, TypeScript und Kryptografie

Further training and qualification are perhaps more important right now than ever before. Even though some people can be back in the office thanks to the easing of restrictions, many are still working remotely from home. For them, the native web offers selected workshops at a discounted price.

Daniel Schädler: Dokumentieren mit Markdown und Visual Studio Code

Initial situation

Various tools are available for writing code. One of them is Visual Studio Code, which can be extended enormously thanks to its wide variety of plug-ins. Many code documentations are based on Markdown.

Goal

The goal is to create a simple documentation with Visual Studio Code, including graphics, and then have it generated. The following case is to be covered:

  • Display a class diagram as an image in a Markdown file.

Of course, further diagrams are possible. Help is available from the plug-in author (https://marketplace.visualstudio.com/_apis/public/gallery/publishers/ms-vscode-remote/vsextensions/remote-ssh/0.51.0/vspackage), where many more examples can be found.

Preparations

For my installation I used the following components:

With that, we can start documenting.

Execution

First, a yuml file is created.

Creating a yuml file – the created yuml file

You can see that a preview is already active on the right-hand side. If you start typing the intended tags in the file, auto-completion is offered for "class", which can be confirmed with Tab.

Autofill

Afterwards, the "stub code" is already generated.

Stub code

The generated image can also be viewed and can then be embedded in the readme using Markdown syntax.

SVG reference in Markdown

Markdown reference to the SVG image

Generation of the SVG image is forced with the option

{generate:true}
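For reference, a minimal sketch of what such a yuml file could look like. The class names are made up; the two comment directives correspond to the ones the extension uses, with {generate:true} being the one mentioned above.

// {type:class}
// {generate:true}
[Customer|name;address|placeOrder()]->[Order]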

It can then be embedded as shown below.

SVG reference in Markdown

Markdown reference to the SVG image

The preview, even if not competitive with Word, is also quite presentable.

Doc Preview

Document preview

To start an export, press CTRL+P and then type Export.

Conclusion

With simple means, a documentation can be created and exported that is quite presentable. Constructive criticism and suggestions are gladly accepted.

Golo Roden: Einführung in React, Folge 3: React Components

One of the most important aspects of React is its component orientation, because it enables the reusability of UI elements. But how do you write React components?

Holger Schwichtenberg: PowerShell 7: Null Coalescing Operator ??

The new PowerShell operator ?? returns the value of the preceding expression if it is not $null.

Jürgen Gutsch: [Off-Topic] 2020 will be a year full of challenges

This post really is off-topic.

It seemed that 2020 was starting to be a good year. Life was kind of normal and I was looking forward to the upcoming events, community events as well as family events. I was really looking forward to the MVP Summit 2020, to meeting good friends again and to visiting my favorite city in the US.

But life wouldn't be life if there were no changes and challenges.

February 1st was the end of my old life but the start of a new one. A challenging one and a life full of new chances and opportunities.

What did change?

My wife and I broke up. Without any drama or anything like that. It was a kind of spontaneous decision, but a needed one. It was unexpected for our family and friends, but we had kind of known about it for the last three years. It was a shock for the kids for sure but, as I said, there was never any drama and we are still friends, talking and laughing together. The shock for the kids was a small one and a short one. Actually, nothing really changed for the kids except living in two houses, which for sure is a huge change, but they seem to love it and enjoy it like a kind of adventure. Each house has different things they like and different rules, and it seems they love living in both houses.

This might also be shocking for friends who are reading this right now.

To leave the wife who was by my side for around 16 years felt strange and odd. But in the end it was a good decision for both of us.

This for sure results in new challenging situations, like moving into a new apartment and stuff like this.

What else did change?

The second challenging situation is the one that happens to all of us all over the world. The COVID-19 lockdowns all over the world are challenging for everyone. Especially the kids and their parents. Fortunately, I am able to work from home and, fortunately, child care is divided 50/50 between me and my wife. (She is still called my wife because we are still married.) But to work and to do home-schooling and child care in parallel is different, challenging and might be really hard and almost impossible for some parents.

Since COVID-19 arrived in central Europe, I have been working from home. Actually, today is the first day in weeks that I am commuting to work. It feels strange sitting on the train for more than an hour wearing a face mask. The train is almost empty. Only a few people are talking, because of those annoying masks.

And, for the first time in months, I have some time to write a blog post. I used to write while commuting, so I took the chance to write some stuff.

Actually, I started this post two weeks ago. Now it is only the second time since COVID-19 that I am commuting to work :-D

What comes next?

The new situation and the reduced commuting time are the reasons why I haven't written a blog post or anything else since January.

The move to the new apartment is done. The kids have known the new situation for more than a month now and are getting used to it. It is all settling and calming down, and the numbers around COVID-19 are getting better and better in central Europe. Let's have a small look into the future:

I'm going to start challenging myself again to do some more stuff for the developer community:

  • Writing a blog post at least every second week
    • Just ask for topics.
  • Rewriting my book to update it to ASP.NET Core 5.0 and really, really, really get it published
  • Writing some more technical articles for my favorite magazine.
  • Trying to do some talks on conferences and user groups
    • The next planned talk is at the DWX in November this year
  • Getting my streaming set up and running in the new apartment and start streaming again
    • The bandwidth could be a problem in the new apartment.
    • But just recording and feed a YouTube channel could be an option, too.

Actually, this is kind of challenging but why not? I really love challenges :-)

Christian Dennig [MS]: WSL2: Making Windows 10 the perfect dev machine!

Disclaimer: I work at Microsoft. And you might think that this makes me a bit biased about the current topic. However, I was an enthusiastic MacOS / MacBook user – both privately and professionally. I work as a so-called “Cloud Solution Architect” in the area of Open Source / Cloud Native Application Development – i.e. everything concerning container technologies, Kubernetes etc. This means that almost every tool you have to deal with is Unix based – or at least, it only works perfectly on that platform. That’s why I moved to the Apple ecosystem early on, because it makes your (dev) life so much easier – although you get some not so serious comments on it at work every now and then 🙂

Well, things have changed…

Introduction

In this article I would like to describe how my current setup of tools / the environment looks like on my Windows 10 machine and how to setup the latest version of the Windows Subsystem for Linux 2 (WSL2) optimally – at least for me – when working in the “Cloud Native” domain.

Long story short…let’s start!

Basics

Windows Subsystem for Linux 2 (WSL2)

The whole story begins with the installation of WSL2, which is now available with the current version of Windows (Windows 10, version 2004, build 19041 or higher). The Linux subsystem has been around for quite a while now, but it has never been really usable – at least this is the case for version 1 (in terms of performance, compatibility etc.).

The bottom line is that WSL2 gives you the ability to run ELF64 Linux binaries on Windows – with 100% system call compatibility and “near-native” performance! The Linux kernel (optimized in size and performance for WSL2) is built by Microsoft from the latest stable branch based on the sources available on “kernel.org”. Updates of the kernel are provided via Windows Update.

I won’t go into the details of the installation process as you can simply get WSL2 by following this tutorial: https://docs.microsoft.com/en-us/windows/wsl/install-win10

It comes down to:

  • installing the Subsystem for Linux
  • enabling “Virtual Machine Platform”
  • setting WSL2 as the default version

Next step is to install the distribution of your choice…

Install Ubuntu 20.04

I decided to use Ubuntu 20.04 LTS, because I already know the distribution well and have used it for private purposes for some time – there are, of course, others: Debian, openSUSE, Kali Linux etc. No matter which one you choose, the installation itself couldn’t be easier: all you have to do is open the Windows Store app, find the desired distribution and click “Install” (or simply click on this link for Ubuntu: https://www.microsoft.com/store/apps/9n6svws3rx71).

Windows Store Ubuntu 20.04 LTS
Ubuntu Installation

Once it is installed, you have to check whether “version 2” of the Subsystem for Linux is used (we have set “version 2” as the default, but just in case…). To do so, open a PowerShell prompt and execute the following commands:

C:\> wsl --list --verbose
  NAME                   STATE           VERSION
* Ubuntu-20.04           Running         2
  docker-desktop-data    Running         2
  docker-desktop         Running         2
  Ubuntu-18.04           Stopped         2

If you see “Version 1” for Ubuntu-20.04, please run…

C:\> wsl --set-version Ubuntu-20.04 2

This will convert the distribution to be able to run in WSL2 mode (grab yourself a coffee, the conversion takes some time 😉 ).

Windows Terminal

Next, you need a modern, feature-rich and lightweight terminal. Fortunately, Microsoft also delivered on this: the Open Source Windows Terminal.  It includes many of the features most frequently requested by the Windows command-line community including support for tabs, rich text, globalization, configurability, theming & styling etc.

The installation is also done via the Windows Store: https://www.microsoft.com/store/productId/9N0DX20HK701

Once it’s on your machine, we can tweak the settings of the terminal to use Ubuntu 20.04 as the default profile. To do so, open Windows Terminal and hit “Ctrl+,” (which opens the settings.json file in your default text editor).

Add the guid of the Ubuntu 20.04 profile to the “defaultProfile” property:

Default Profile
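A trimmed-down sketch of the relevant part of settings.json – the GUID below is only a placeholder, use the value of the generated Ubuntu-20.04 profile from your own file:

{
    "defaultProfile": "{your-ubuntu-20.04-profile-guid}",
    "profiles":
    {
        "list":
        [
            {
                "guid": "{your-ubuntu-20.04-profile-guid}",
                "name": "Ubuntu-20.04",
                "source": "Windows.Terminal.Wsl"
            }
        ]
    }
}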

Last but not least we upgrade all existing packages to be up to date.

$ sudo apt upgrade

So, the “basics” are in place now…we have a terminal that’s running Ubuntu Linux in Windows. Next, let’s give it super-powers!

Setup / tweak the shell

The software that is now being installed is an extract of what I need for my daily work. Of course the selection differs from what you might want (although I think this will cover a lot of what someone in the “Cloud Native” space would install). Nevertheless, it was important for me to list almost everything here, because it basically also helps me if I have to set up the environment again in the future 🙂

SSH Keys

Since it’s in the nature of a developer to work with GitHub (and other services, of course :)), I first need an SSH key to authenticate against the service. To do this, I create a new key (or copy an existing one to ~/.ssh/), which I then publish to GitHub (via their website).

At the same time the key is added to the ssh-agent, so you don’t have to enter the corresponding passphrase all the time when using it.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
# start the ssh-agent in the background
$ eval $(ssh-agent -s)
> Agent pid 59566
$ ssh-add ~/.ssh/id_rsa

Oh My Zsh

Now comes the best part 🙂 To give the Ubuntu shell (which is bash by default) real superpowers, I replace it with zsh in combination with the awesome project Oh My Zsh (which provides hundreds of plugins, customizing options, tweaks etc. for it). zsh is an extended bash shell which has many improvements and extensions compared to bash. Among other things, the shell can be themed, the command prompt adjusted, auto-completion can be used etc.

So, let’s install both:

$ sudo apt install git zsh -y
# After the installation has finished, add OhMyZsh...
$ sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

When ready, OhMyZsh can be customized via the .zshrc file in your home directory (e.g. enable plugins, set the theme). Here are the settings I usually make:

  • Adjust Theme
  • Activate plugins

Let’s do this step by step…

Theme

As theme, I use powerlevel10k (great stuff!), which you can find here.

Sample: powerlevel10k (Source: https://github.com/romkatv/powerlevel10k)

The installation is very easy by first cloning the repo to your local machine and then activating the theme in ~/.zshrc (variable ZSH_THEME, see screenshot below):

$ git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/powerlevel10k

Adjust theme to use
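In .zshrc this is a one-liner (the value below is the one documented by the powerlevel10k project):

ZSH_THEME="powerlevel10k/powerlevel10k"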

The next time you open a new shell, a wizard guides you through all options of the theme and allows you to customize the look&feel of your terminal (if the wizard does not start automatically or you want to restart it, simply run p10k configure from the command prompt).

The wizard offers a lot of options. Just find the right setup for you, play around with it a bit and try out one or the other. My setup finally looks like this:

My powerlevel10k setup

Optional, but recommended…install the corresponding fonts (and adjust the settings.json of Windows Terminal to use these, see image below): https://github.com/romkatv/powerlevel10k#meslo-nerd-font-patched-for-powerlevel10k

Window Terminal settings

Plugins

In terms of OhMyZsh plugins, I use the following ones:

  • git (git shortcuts, e.g. “gp” for “git pull“, “gc” for “git commit -v“)
  • zsh-autosuggestions / zsh-completions (command completion / suggestions)
  • kubectl (kubectl shortcuts / completion, e.g. “kaf” for “kubectl apply -f“, “kgp” for “kubectl get pods“, “kgd” for “kubectl get deployment” etc.)
  • ssh-agent (starts the ssh agent automatically on startup)

You can simply add them by modifying .zshrc in your home directory:

Activate oh-my-zsh plugins in .zshrc
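
For reference, here is a minimal sketch of the relevant part of ~/.zshrc. Note that zsh-autosuggestions and zsh-completions are not bundled with Oh My Zsh and have to be cloned into the custom plugins folder first (default paths assumed):

$ git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
$ git clone https://github.com/zsh-users/zsh-completions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-completions

# then enable them in ~/.zshrc
plugins=(git zsh-autosuggestions zsh-completions kubectl ssh-agent)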

Additional Tools

Now comes the setup of the tools that I need and use every day. I will not go into detail about all of them here, because most of them are well known or the installation is incredibly easy. The ones that don’t need much explanation are:

Give me more…!

There are a few tools that I would like to discuss in more detail, as they are not necessarily widely used or known. These are mainly tools used when working with Kubernetes/Docker; this is exactly the area where kubectx/kubens and stern come in. Docker for Windows and Visual Studio Code, on the other hand, are certainly well known to everyone from daily work. The reason I want to talk about the latter two anyway is that they now tightly integrate with WSL2!

kubectx / kubens

Who doesn’t know it? You work with Kubernetes and have to switch between clusters and/or namespaces all the time…forgetting the appropriate commands to set the context correctly and typing yourself “to death”. This is where the tools kubectx and kubens come in and help you to switch between different clusters and namespaces quickly and easily. I never want to work with a system again where these tools are not installed – honestly. To see kubectx/kubens in action, here are the samples from their GitHub repo:

kubectx in action
kubens in action

To install both tools, follow these steps:

$ sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
$ sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
$ sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
$ mkdir -p ~/.oh-my-zsh/completions
$ chmod -R 755 ~/.oh-my-zsh/completions
$ ln -s /opt/kubectx/completion/kubectx.zsh ~/.oh-my-zsh/completions/_kubectx.zsh
$ ln -s /opt/kubectx/completion/kubens.zsh ~/.oh-my-zsh/completions/_kubens.zsh

To be able to work with the autocompletion features of these tools, you need to add the following line at the end of your .zshrc:

autoload -U compinit && compinit

Congrats, productivity gain: 100% 🙂

stern

stern allows you to output the logs of multiple pods simultaneously to the local command line. In Kubernetes it is normal to have many services running at the same time that communicate with each other, and it is sometimes difficult to follow a call through the cluster. With stern, this becomes relatively easy, because you can select the pods whose logs you want to follow, e.g. via label selectors.

With the command stern -l application=scmcontacts, for example, you can stream the logs of all pods with the label application=scmcontacts to your local shell…which then looks like this (each color represents a different pod!):

stern log streams

To install stern, use this script:

$ sudo curl -fsSL -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
$ sudo chmod 755 /usr/local/bin/stern
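
Once installed, two typical invocations look like this (the label value is just an example; see the stern documentation for all flags):

# follow all pods whose name matches "api" in the current namespace
$ stern api
# follow all pods with a given label in a specific namespace
$ stern -l application=scmcontacts -n default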

One more thing

Docker for Windows has been around for a long time and is probably running on your machine right now. What some people may not know is that Docker for Windows integrates seamlessly with WSL2. If you are already running Docker on Windows, a quick trip to the settings is enough to enable the Docker/WSL2 integration:

Activate WSL2 based engine
Choose WSL / distro integration

If you want more details about the integration, please visit this page: https://www.docker.com/blog/new-docker-desktop-wsl2-backend/. For this article, the fact that Docker now runs within WSL2 is sufficient 🙂

Last but not least, one short note. Of course, Visual Studio Code can also be integrated into WSL2. If you install a current version of the editor in Windows, all components needed to run VS Code with WSL2 are included.

A simple call of code . in the directory containing your source code is sufficient to install the Visual Studio Code Server (https://github.com/cdr/code-server) in Ubuntu. This allows VS Code to connect remotely to your distro and work with source code / frameworks that are located in WSL2.
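
A minimal sketch of that workflow (the project path is of course just an example):

# inside the WSL2 distro
$ cd ~/src/my-project
$ code .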

That’s all 🙂

Wrap-Up

Pretty long blog post now, I know…but it contains all the tools that are necessary (take that with a “grain of salt” 😉 ) to make your Windows 10 machine a “wonderful experience” for you as a developer or architect in the “Cloud Native” space. You have a fully compatible Linux “system” that tightly integrates with Windows. You have .NET Core, Go, NodeJS, tools to work in the “Kubernetes universe”, the best code editor currently out there, git, ssh-agent etc. etc.…and a beautiful terminal which makes working with it simply fun!

For sure, there are things that I missed or simply don’t know about at the moment. I would love to hear from you if I forgot “the one” tool to mention. Looking forward to reading your comments/suggestions!

Hope this helps someone out there! Take care…

Photo by Maxim Selyuk on Unsplash

Holger Schwichtenberg: PowerShell 7: Accessing the Most Recent Errors

Since PowerShell 7.0, the most recently occurred errors can be retrieved with the Get-Error cmdlet.

Stefan Lieser: Slides for the Webinar “Softwareentwicklung ohne Abhängigkeiten”

On June 2nd, 2020, I gave the webinar “Softwareentwicklung ohne Abhängigkeiten” (software development without dependencies). You will find the slides below. A recording of the webinar will be posted here later as well.


Golo Roden: Writing Domain Code: Excusez-moi, do you sprechen Español?

Programming means writing not only technical code but, above all, domain code. But in which natural language?

Code-Inside Blog: SqlBulkCopy for fast bulk inserts

Within our product OneOffixx we can create a “full export” of the product database. Because of limitations with normal MS SQL backups (e.g. compatibility with older SQL databases etc.), we created our own export mechanism. An export can be 1GB and more. That is nothing too serious and far from “big data”, but still not easy to handle, and we had some issues importing larger “exports”. Our importer was based on an Entity Framework 6 implementation and it was really slow… last month we tried to resolve this and we are quite happy. Here is how we did it:

TL;DR Problem:

Bulk inserts with an Entity Framework-based implementation are really slow. There is at least one NuGet package that seems to help, but unfortunately we ran into some obscure issues with it. This Stack Overflow question highlights some numbers and ways of doing it.

SqlBulkCopy to the rescue:

After my failed attempt to tame our EF implementation, I discovered the SqlBulkCopy operation. In .NET (Full Framework and .NET Standard!) the usage is simple via the “SqlBulkCopy” class.

Our importer looks more or less like this:

using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30), TransactionScopeAsyncFlowOption.Enabled))
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(databaseConnectionString))
{
    // Build an in-memory DataTable that mirrors the destination table
    var dt = new DataTable();
    dt.Columns.Add("DataColumnA");
    dt.Columns.Add("DataColumnB");
    dt.Columns.Add("DataColumnId", typeof(Guid));

    foreach (var dataEntry in data)
    {
        dt.Rows.Add(dataEntry.A, dataEntry.B, dataEntry.Id);
    }

    // Point the bulk copy at the destination table, map the columns and write everything in one go
    bulkCopy.DestinationTableName = "Data";
    bulkCopy.AutoMapColumns(dt);
    bulkCopy.WriteToServer(dt);

    scope.Complete();
}

public static class Extensions
{
    public static void AutoMapColumns(this SqlBulkCopy sbc, DataTable dt)
    {
        sbc.ColumnMappings.Clear();

        foreach (DataColumn column in dt.Columns)
        {
            sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);
        }
    }
}

Some notes:

  • The TransactionScope is not required, but still nice.
  • The SqlBulkCopy instance just needs the databaseConnectionString.
  • A DataTable is needed, and (I’m not sure why) most regular SQL data types are mapped automatically, but GUIDs need to be typed explicitly.
  • Insert thousands of rows into your DataTable, point the SqlBulkCopy at your destination table, map the columns and write them to the server.
  • You can use the same instance for multiple bulk operations.
  • There is also an Async implementation available.

Only “downside”: SqlBulkCopy does a table-by-table insert. You need to insert your data in the correct order if you have any DB constraints in your schema.

Result:

We reduced the import from several minutes to seconds :)

Hope this helps!

Holger Schwichtenberg: A Look Back at Microsoft Build 2020

The Dotnet-Doktor summarizes the highlights of the first fully virtual Build conference from a developer’s perspective.

Christian Dennig [MS]: Horizontal Autoscaling in Kubernetes #3 – KEDA

Introduction

This is the last article in the series regarding “Horizontal Autoscaling” in Kubernetes. I began with an introduction to the topic and showed why autoscaling is important and how to get started with Kubernetes standard tools. In the second part I talked about how to use custom metrics in combination with the Horizontal Pod Autoscaler to be able to scale your deployments based on “non-standard” metrics (coming from e.g. Prometheus).


This article now concludes the topic. I would like to show you how to use KEDA (Kubernetes Event-Driven Autoscaler) for horizontal scaling and how this can simplify your life dramatically.

KEDA

KEDA, as the official documentation states, is a Kubernetes-based event-driven autoscaler. The project was originally initiated by Microsoft and has been developed under open source from the beginning. Since the first release, it has been widely adopted by the community and many other vendors such as Amazon, Google etc. have contributed source code / scalers to the project.

It is safe to say that the project is “ready for production” and there is no need to fear that it will disappear in the coming years.

KEDA, under the hood, works like many other projects in this area and – first and foremost – acts as a “Custom Metrics API” for exposing metrics to the Horizontal Pod Autoscaler. Additionally, there is a “KEDA agent” that is responsible for managing/activating deployments and scaling your workload between “0” and “n” replicas (scaling to zero pods is currently not possible “out of the box” with the standard Horizontal Pod Autoscaler – but there is an alpha feature for it).

So in a nutshell, KEDA takes away the (sometimes huge) pain of setting up such a solution. As seen in the last article, the usage of custom metrics can generate a lot of effort upfront. KEDA is much more “lightweight” and setting it up is just a piece of cake.

This is how KEDA looks under the hood:

Source: https://keda.sh

KEDA comes with its own custom resource definition, the ScaledObject which takes care of managing the “connection” between the source of the metric, access to the metric, your deployment and how scaling should work for it (min/max range of pods, threshold etc.).

The ScaledObject spec looks like this:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
spec:
  scaleTargetRef:
    deploymentName: {deployment-name} # must be in the same namespace as the ScaledObject
    containerName: {container-name}  #Optional. Default: deployment.spec.template.spec.containers[0]
  pollingInterval: 30  # Optional. Default: 30 seconds
  cooldownPeriod:  300 # Optional. Default: 300 seconds
  minReplicaCount: 0   # Optional. Default: 0
  maxReplicaCount: 100 # Optional. Default: 100
  triggers:
  # {list of triggers to activate the deployment}

The only thing missing now is the last component of KEDA: Scalers. Scalers are adapters that provide metrics from various sources. Among others, there are scalers for:

  • Kafka
  • Prometheus
  • AWS SQS Queue
  • AWS Kinesis Stream
  • GCP Pub/Sub
  • Azure Blob Storage
  • Azure EventHubs
  • Azure ServiceBus
  • Azure Monitor
  • NATS
  • MySQL
  • etc.

A complete list of supported adapters can be found here: https://keda.sh/docs/1.4/scalers/.

As we will see later in the article, setting up such a scaler couldn’t be easier. So now, let’s get our hands dirty.

Sample

Install KEDA

To install KEDA on a Kubernetes cluster, we use Helm.

$ helm repo add kedacore https://kedacore.github.io/charts
$ helm repo update
$ kubectl create namespace keda
$ helm install keda kedacore/keda --namespace keda

BTW, if you use Helm 3, you will probably receive errors/warnings regarding a hook called “crd-install”. You can simply ignore them, see https://github.com/kedacore/charts/issues/18.
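
To verify that everything came up properly, check the pods in the keda namespace (the exact pod names depend on the chart version):

$ kubectl get pods -n keda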

The Sample Application

To show you how KEDA works, I will simply reuse the sample application from the previous blog post. If you missed it, here’s a brief summary of what it does:

  • a simple NodeJS / Express application
  • using Prometheus client to expose the /metrics endpoint and to create/add a custom (gauge) metric called custom_metric_counter_total_by_pod
  • the metric can be set from outside via /api/count endpoint
  • Kubernetes service is of type LoadBalancer to receive a public IP for it

Here’s the source code for the application:

const express = require("express");
const os = require("os");
const app = express();
const apiMetrics = require("prometheus-api-metrics");
app.use(apiMetrics());
app.use(express.json());

const client = require("prom-client");

// Create Prometheus Gauge metric
const gauge = new client.Gauge({
  name: "custom_metric_counter_total_by_pod",
  help: "Custom metric: Count per Pod",
  labelNames: ["pod"],
});

app.post("/api/count", (req, res) => {
  // Set metric to count value...and label to "pod name" (hostname)
  gauge.set({ pod: os.hostname() }, req.body.count);
  res.status(200).send("Counter at: " + req.body.count);
});

app.listen(4000, () => {
  console.log("Server is running on port 4000");
  // initialize gauge
  gauge.set({ pod: os.hostname() }, 1);
});

And here’s the deployment manifest for the application and the Kubernetes service:

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: promdemo 
  labels:
    application: promtest
    service: api
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: promtest
      service: api
  template:
    metadata:
      labels:
        application: promtest
        service: api
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          image: csaocpger/expressmonitoring:4.3
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: promdemo
  labels:
    application: promtest
spec:
  ports:
  - name: http
    port: 4000
    targetPort: 4000
  selector:
    application: promtest
    service: api
  type: LoadBalancer

Collect Custom Metrics with Prometheus

In order to have a “metrics collector” that we can use in combination with KEDA, we need to install Prometheus. Thanks to the kube-prometheus project, this is not much of a problem. Go to https://github.com/coreos/kube-prometheus, clone it to your local machine and install Prometheus, Grafana etc. via:

$ kubectl apply -f manifests/setup
$ kubectl apply -f manifests/

After the installation has finished, we also need to tell Prometheus to scrape the /metrics endpoint of our application. To do so, we create a ServiceMonitor, a custom resource from the Prometheus Operator that points to “a source of metrics”. Apply the following Kubernetes manifest:

kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: promtest
  labels:
    application: promtest
spec:
  selector:
    matchLabels:
      application: promtest
  endpoints: 
  - port: http

So now, let’s check if everything is in place regarding the monitoring of the application. This can easily be done by having a look at the Prometheus targets:

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Open your local browser at http://localhost:9090/targets.

Prometheus targets for our application

This looks good. Now, we can also check Grafana. To do so, also “port-forward” the Grafana service to your local machine (kubectl --namespace monitoring port-forward svc/grafana 3000) and open the browser at http://localhost:3000 (the definition of the dashboard you see here is available in the GitHub repo mentioned at the end of the blog post).

Grafana

As we can see here, the application reports a current value of “1” for the custom metric (custom_metric_counter_total_by_pod).

ScaledObject Definition

So, the last missing piece to be able to scale the application with KEDA is the ScaledObject definition. As mentioned before, this is the connection between the Kubernetes deployment, the metrics source/collector and the Kubernetes HorizontalPodAutoscaler.

For the current sample, the definition looks like this:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    deploymentName: promdemo
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus-k8s.monitoring.svc:9090
      metricName: custom_metric_counter_total_by_pod
      threshold: '3'
      query: sum(custom_metric_counter_total_by_pod{namespace!="",pod!=""})

Let’s just walk through the spec part of the definition…we tell KEDA the target we want to scale, in this case, the deployment (scaleTargetRef) of the application. In the triggers section, we point KEDA to our Prometheus service and specify the metric name, the query to issue to collect the current value of the metric and the threshold – the target value for the Horizontal Pod Autoscaler that will automatically be created and managed by KEDA.
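
Assuming the definition above is saved as scaledobject.yaml (the file name is arbitrary), applying it and checking what KEDA creates looks like this:

$ kubectl apply -f scaledobject.yaml
$ kubectl get scaledobject prometheus-scaledobject
# KEDA creates and manages an HPA for the target deployment
$ kubectl get hpa keda-hpa-promdemo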

And what does that HPA look like? Here is the current version of it, as a result of the ScaledObject definition applied above.

apiVersion: v1
items:
- apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    annotations:
      autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"ReadyForNewScale","message":"recommended
        size matches current size"},{"type":"ScalingActive","status":"True","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"ValidMetricFound","message":"the
        HPA was able to successfully calculate a replica count from external metric
        custom_metric_counter_total_by_pod(\u0026LabelSelector{MatchLabels:map[string]string{deploymentName:
        promdemo,},MatchExpressions:[]LabelSelectorRequirement{},})"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2020-05-20T07:02:21Z","reason":"DesiredWithinRange","message":"the
        desired count is within the acceptable range"}]'
      autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":"External","external":{"metricName":"custom_metric_counter_total_by_pod","metricSelector":{"matchLabels":{"deploymentName":"promdemo"}},"currentValue":"1","currentAverageValue":"1"}}]'
      autoscaling.alpha.kubernetes.io/metrics: '[{"type":"External","external":{"metricName":"custom_metric_counter_total_by_pod","metricSelector":{"matchLabels":{"deploymentName":"promdemo"}},"targetAverageValue":"3"}}]'
    creationTimestamp: "2020-05-20T07:02:05Z"
    labels:
      app.kubernetes.io/managed-by: keda-operator
      app.kubernetes.io/name: keda-hpa-promdemo
      app.kubernetes.io/part-of: prometheus-scaledobject
      app.kubernetes.io/version: 1.4.1
    name: keda-hpa-promdemo
    namespace: default
    ownerReferences:
    - apiVersion: keda.k8s.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: ScaledObject
      name: prometheus-scaledobject
      uid: 3074e89f-3b3b-4c7e-a376-09ad03b5fcb3
    resourceVersion: "821883"
    selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/keda-hpa-promdemo
    uid: 382a20d0-8358-4042-8718-bf2bcf832a31
  spec:
    maxReplicas: 100
    minReplicas: 1
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: promdemo
  status:
    currentReplicas: 1
    desiredReplicas: 1
    lastScaleTime: "2020-05-25T10:26:38Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

You can see that the ScaledObject properties have been added as annotations to the HPA and that it was already able to fetch the metric value from Prometheus.

Scale Application

Now, let’s see KEDA in action…

The application exposes an endpoint (/api/count) with which we can set the value of the metric in Prometheus (again, as a reminder, the value of custom_metric_counter_total_by_pod is currently set to “1”).

Custom metric before adjusting the value

Now let’s set the value to “7”. With the current threshold of “3”, KEDA should scale up to at least three pods.

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 7
}'

Let’s look at the events within Kubernetes:

$ k get events
LAST SEEN   TYPE     REASON              OBJECT                                      MESSAGE
2m25s       Normal   SuccessfulRescale   horizontalpodautoscaler/keda-hpa-promdemo   New size: 3; reason: external metric custom_metric_counter_total_by_pod(&LabelSelector{MatchLabels:map[string]string{deploymentName: promdemo,},MatchExpressions:[]LabelSelectorRequirement{},}) above target
<unknown>   Normal   Scheduled           pod/promdemo-56946cb44-bn75c                Successfully assigned default/promdemo-56946cb44-bn75c to aks-nodepool1-12224416-vmss000001
2m24s       Normal   Pulled              pod/promdemo-56946cb44-bn75c                Container image "csaocpger/expressmonitoring:4.3" already present on machine
2m24s       Normal   Created             pod/promdemo-56946cb44-bn75c                Created container application
2m24s       Normal   Started             pod/promdemo-56946cb44-bn75c                Started container application
<unknown>   Normal   Scheduled           pod/promdemo-56946cb44-mztrk                Successfully assigned default/promdemo-56946cb44-mztrk to aks-nodepool1-12224416-vmss000002
2m24s       Normal   Pulled              pod/promdemo-56946cb44-mztrk                Container image "csaocpger/expressmonitoring:4.3" already present on machine
2m24s       Normal   Created             pod/promdemo-56946cb44-mztrk                Created container application
2m24s       Normal   Started             pod/promdemo-56946cb44-mztrk                Started container application
2m25s       Normal   SuccessfulCreate    replicaset/promdemo-56946cb44               Created pod: promdemo-56946cb44-bn75c
2m25s       Normal   SuccessfulCreate    replicaset/promdemo-56946cb44               Created pod: promdemo-56946cb44-mztrk
2m25s       Normal   ScalingReplicaSet   deployment/promdemo                         Scaled up replica set promdemo-56946cb44 to 3

And this is what the Grafana dashboard looks like…

Grafana dashboard after setting a new value for the metric

As you can see here, KEDA is incredibly fast at querying the metric and triggering the corresponding scaling process (in combination with the HPA) for the target deployment. The deployment has been scaled up to three pods, based on a custom metric – exactly what we wanted to achieve 🙂

Wrap-Up

So, this was the last article about horizontal scaling in Kubernetes. In my opinion, with KEDA the community has provided a powerful and simple tool to trigger scaling processes based on external sources / custom metrics. If you compare this to the sample in the previous article, KEDA is way easier to set up! By the way, it works perfectly in combination with Azure Functions, which can be run in a Kubernetes cluster based on Docker images Microsoft provides. The cream of the crop is then to offload such workloads to virtual nodes, so that the actual cluster is not stressed. But this is a topic for a possible future article 🙂

I hope I was able to shed some light on how horizontal scaling works and how to automatically scale your own deployments based on metrics coming from external / custom sources. If you miss something, give me a short hint 🙂

As always, you can find the source code for this article on GitHub: https://github.com/cdennig/k8s-keda

So long…

Articles in this series:

Christian Dennig [MS]: Horizontal Autoscaling in Kubernetes #2 – Custom Metrics

In my previous article, I talked about the “why” and “how” of horizontal scaling in Kubernetes and gave an insight into how to use the Horizontal Pod Autoscaler based on CPU metrics. As mentioned there, CPU is not always the best choice when it comes to deciding whether the application/service should scale in or out. Therefore Kubernetes (with the concept of the Metrics Registry and the Custom or External Metrics API) also offers the possibility to scale based on your own, custom metrics.

Introduction

In this post, I will show you how to scale a deployment (NodeJS / Express app) based on a custom metric which is collected by Prometheus. I chose Prometheus, as it is one of the most popular monitoring tools in the Kubernetes space…but it can easily be exchanged for any other monitoring solution out there – as long as there is a corresponding “adapter” (custom metrics API) for it. But more on that later.

How does it work?

To be able to work at all with custom metrics as a basis for scaling services, various requirements must be met. Of course, you need an application that provides the appropriate metrics (the metrics source). In addition, you need a service – in our case Prometheus – that is able to query the metrics from the application at certain intervals (the metrics collector). Once you have set up these things, you have fulfilled the basic requirements from the application’s point of view. These components are probably already present in every larger solution.

Now, to make the desired metric available to the Horizontal Pod Autoscaler, it must first be added to a Metrics Registry. And last but not least, a custom metrics API must provide access to the desired metric for the Horizontal Pod Autoscaler.

To give you the complete view of what’s possible…here’s the list of available metric types:

  • Resource metrics (predefined metrics like CPU)
  • Custom metrics (associated with a Kubernetes object)
  • External metrics (coming from external sources like e.g. RabbitMQ, Azure Service Bus etc.)
Metrics Pipeline (with Prometheus as metrics collector)

Sample

In order to show you a working sample of how to use a custom metric for scaling, we need to have a few things in place/installed:

  • An application (deployment) that exposes a custom metric
  • Prometheus (incl. Grafana to have some nice charts) and a Prometheus Service Monitor to scrape the metrics endpoint of the application
  • Prometheus Adapter which is able to provide a Prometheus metric for the custom metrics API
  • Horizontal Pod Autoscaler definition that references the custom metric

Prometheus

To install Prometheus, I chose “kube-prometheus” (https://github.com/coreos/kube-prometheus) which installs Prometheus as well as Grafana (and Alertmanager etc.) and is super easy to use! So first, clone the project to your local machine and deploy it to your cluster:

# from within the cloned repo...

$ kubectl apply -f manifests/setup
$ kubectl apply -f manifests/

Wait a few seconds until everything is installed and then check access to the Prometheus Web UI and Grafana (by port-forwarding the services to your local machine):

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
Prometheus / Web UI

Check the same for Grafana (if you are prompted for a username and password, the default is “admin”/”admin” – you need to change that on the first login):

$ kubectl --namespace monitoring port-forward svc/grafana 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Grafana

In terms of “monitoring infrastructure”, we are good to go. Let’s add the sample application that exposes the custom metric.

Metrics Source / Sample Application

To demonstrate how to work with custom metrics, I wrote a very simple NodeJS application that provides a single endpoint (from the application perspective). If requests are sent to this endpoint, a counter (a Prometheus Gauge – more on the other metric types later) is set to a value provided in the body of the request. The application itself uses the Express framework and an additional library (prom-client) that allows it to “interact” with Prometheus – i.e. to provide metrics via a /metrics endpoint.

This is what it looks like:

const express = require("express");
const os = require("os");
const app = express();
const apiMetrics = require("prometheus-api-metrics");
app.use(apiMetrics());
app.use(express.json());

const client = require("prom-client");

// Create Prometheus Gauge metric
const gauge = new client.Gauge({
  name: "custom_metric_counter_total_by_pod",
  help: "Custom metric: Count per Pod",
  labelNames: ["pod"],
});

app.post("/api/count", (req, res) => {
  // Set metric to count value...and label to "pod name" (hostname)
  gauge.set({ pod: os.hostname() }, req.body.count);
  res.status(200).send("Counter at: " + req.body.count);
});

app.listen(4000, () => {
  console.log("Server is running on port 4000");
  // initialize gauge
  gauge.set({ pod: os.hostname() }, 1);
});

As you can see in the source code, a “Gauge” metric is created – one of the types Prometheus supports. Here’s a list of the metric types that are offered (descriptions from the official documentation):

  • Counter – a cumulative metric that represents a single monotonically increasing counter whose value can only increase or be reset to zero on restart
  • Gauge – a gauge is a metric that represents a single numerical value that can arbitrarily go up and down
  • Histogram – a histogram samples observations (usually things like request durations or response sizes) and counts them in configurable buckets. It also provides a sum of all observed values.
  • Summary – Similar to a histogram, a summary samples observations (usually things like request durations and response sizes). While it also provides a total count of observations and a sum of all observed values, it calculates configurable quantiles over a sliding time window.

To learn more about the metrics types and when to use what, see the official Prometheus documentation.

Let’s deploy the application (plus a service for it) with the following YAML manifest:

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: promdemo 
  labels:
    application: promtest
    service: api
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: promtest
      service: api
  template:
    metadata:
      labels:
        application: promtest
        service: api
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          image: csaocpger/expressmonitoring:4.3
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  name: promdemo
  labels:
    application: promtest
spec:
  ports:
  - name: http
    port: 4000
    targetPort: 4000
  selector:
    application: promtest
    service: api
  type: LoadBalancer

Now that we have Prometheus installed and an application that exposes a custom metric, we also need to tell Prometheus to scrape the /metrics endpoint (BTW, this endpoint is automatically created by one of the libraries used in the app). To do so, we create a ServiceMonitor, a custom resource from the Prometheus Operator that points to “a source of metrics”.

This is what the ServiceMonitor looks like for the current sample:

kind: ServiceMonitor
apiVersion: monitoring.coreos.com/v1
metadata:
  name: promtest
  labels:
    application: promtest
spec:
  selector:
    matchLabels:
      application: promtest
  endpoints: 
  - port: http

What that basically does is tell Prometheus to look for a service labeled application: promtest and scrape the metrics via the (default) endpoint /metrics on the http port (which is set to port 4000 in the Kubernetes service).

The /metrics endpoint reports values like that:

$ curl --location --request GET 'http://<EXTERNAL_IP_OF_SERVICE>:4000/metrics' | grep custom_metric
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 26337  100 26337    0     0   325k      0 --:--:-- --:--:-- --:--:--  329k
# HELP custom_metric_counter_total_by_pod Custom metric: Count per Pod
# TYPE custom_metric_counter_total_by_pod gauge
custom_metric_counter_total_by_pod{pod="promdemo-56946cb44-d5846"} 1

When the ServiceMonitor has been applied, Prometheus will be able to discover the pods/endpoints behind the service and pull the corresponding metrics. In the web UI, you should be able to see the following “scrape target”:

Prometheus Targets

Time to test the environment! First, let’s see what the current metric looks like. Open Grafana and have a look at the custom dashboard (you can find the JSON for the dashboard in the GitHub repo mentioned at the end of the post). You can see that we have one pod running in the cluster, reporting a value of “1” at the moment.

Grafana Showing the Custom Metric

If everything is set up correctly, we should be able to call our service on /api/count and set the custom metric via a POST request with a JSON document that looks like this:

{
    "count": 7
}

So, let’s try this out…

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 7
}'
Grafana Dashboard – After setting the counter to a new value

So, this works as expected. After setting the value to “7” via a POST request, Prometheus receives the updated value by scraping the metrics endpoint and Grafana is able to show the updated chart. To be able to run the full example on a “clean environment” later, we set the counter back to “1”.

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 1
}'

From a monitoring point of view, we have finished all the necessary work and are now able to add the Prometheus adapter which “connects” the Prometheus custom metric with the Kubernetes Horizontal Pod Autoscaler.

Let’s do that now.

Prometheus Adapter

So, now the adapter must be installed. I have decided to use the following implementation for this sample: https://github.com/DirectXMan12/k8s-prometheus-adapter

The installation works quite smoothly, but you have to adapt a few small things for it.

First you should clone the repository and change the URL for the Prometheus server in the file k8s-prometheus-adapter/deploy/manifests/custom-metrics-apiserver-deployment.yaml. In the current case, this is http://prometheus-k8s.monitoring.svc:9090/.
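
One way to do this is to adjust the --prometheus-url argument of the adapter container, e.g. via sed (flag name as used by the k8s-prometheus-adapter manifests; double-check it against your checkout):

$ sed -i 's|--prometheus-url=.*|--prometheus-url=http://prometheus-k8s.monitoring.svc:9090/|' \
    deploy/manifests/custom-metrics-apiserver-deployment.yaml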

Furthermore, an additional rule for our custom metric must be defined in the ConfigMap, which defines the rules for mapping from Prometheus metrics to the Metrics API schema (k8s-prometheus-adapter/deploy/manifests/custom-metrics-config-map.yaml). This rule looks like this:

- seriesQuery: 'custom_metric_counter_total_by_pod{namespace!="",pod!=""}'
  seriesFilters: []
  resources:
    overrides:
      namespace:
        resource: namespace
      pod:
        resource: pod
  name:
    matches: "^(.*)_total_by_pod"
    as: "${1}"
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>})

If you want to find out more about how the mapping works, please have a look at the official documentation. In our case, only the query for custom_metric_counter_total_by_pod is executed and the results are mapped to the metrics schema as total/sum values.

To enable the adapter to function as a custom metrics API in the cluster, an SSL certificate must be created that the Prometheus adapter can use. All traffic between the Kubernetes control plane components must be secured by SSL. This certificate must be added upfront as a secret in the cluster, so that the custom metrics server can automatically mount it as a volume during deployment.

In the corresponding GitHub repository for this article, you can find a gencerts.sh file that needs to be executed and does all the heavy lifting for you. The result of the script is a file called cm-adapter-serving-certs.yaml containing the certificate. Please add that secret to the cluster before installing the adapter. The whole process looks like this (in the folder of the git clone):

$ ./gencerts.sh
$ kubectl create namespace custom-metrics
$ kubectl apply -f cm-adapter-serving-certs.yaml -n custom-metrics
$ kubectl apply -f manifests/
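
Before querying the metric itself, you can quickly check that the adapter has registered itself with the API server (the APIService name below is the default used by the adapter manifests):

$ kubectl get apiservice v1beta1.custom.metrics.k8s.io
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq '.resources[].name'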

As soon as the installation is completed, open a terminal and let’s query the custom metrics API for our metric called custom_metric_counter_total_by_pod via kubectl. If everything is set up correctly, we should be able to get a result from the metrics server:

$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/custom_metric_counter_total_by_pod?pod=$(kubectl get po -o name)" | jq

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/custom_metric_counter_total_by_pod"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "promdemo-56946cb44-d5846",
        "apiVersion": "/v1"
      },
      "metricName": "custom_metric_counter_total_by_pod",
      "timestamp": "2020-05-26T12:22:15Z",
      "value": "1",
      "selector": null
    }
  ]
}

Here we go! The Custom Metrics API returns a result for our custom metric – stating that the current value is “1”.

I have to admit it was a lot of work, but we are now finished with the infrastructure and can test our application in combination with the Horizontal Pod Autoscaler and our custom metric.

Horizontal Pod Autoscaler

Now we need to deploy a Horizontal Pod Autoscaler that targets our app deployment and references the custom metric custom_metric_counter_total_by_pod as a metrics source. The manifest file for it looks like this, apply it to the cluster:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: prometheus-demo-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: promdemo
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: custom_metric_counter_total_by_pod
      targetAverageValue: "3"

Now that we have added the HPA manifest, we can make a new request against our API to increase the value of the counter back to “7”. Please keep in mind that the target value for the HPA was “3”. This means that the Horizontal Pod Autoscaler should scale our deployment to a total of three pods after a short amount of time. Let’s see what happens:

$ curl --location --request POST 'http://<EXTERNAL_IP_OF_SERVICE>:4000/api/count' \
--header 'Content-Type: application/json' \
--data-raw '{
	"count": 7
}'

What does the Grafana dashboard look like after that request?

Grafana after setting the value of the counter to “7”

Also, the Custom Metrics API reports a new value:

$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/custom_metric_counter_total_by_pod?pod=$(kubectl get po -o name)" | jq

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/custom_metric_counter_total_by_pod"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "promdemo-56946cb44-d5846",
        "apiVersion": "/v1"
      },
      "metricName": "custom_metric_counter_total_by_pod",
      "timestamp": "2020-05-26T12:38:58Z",
      "value": "7",
      "selector": null
    }
  ]
}

And last but not least, the Horizontal Pod Autoscaler does its job and scales the deployment to three pods! Hooray…

$ kubectl get events
LAST SEEN   TYPE     REASON              OBJECT                                        MESSAGE

118s        Normal   ScalingReplicaSet   deployment/promdemo                           Scaled up replica set promdemo-56946cb44 to 3
118s        Normal   SuccessfulRescale   horizontalpodautoscaler/prometheus-demo-app   New size: 3; reason: pods metric custom_metric_counter_total_by_pod above target
Grafana showing three pods now!

Wrap-Up

So what did we see in this example? We first installed Prometheus with the appropriate add-ons like Grafana and Alertmanager (which we did not use in this example…). Then we added a custom metric to an application and used Prometheus scraping to retrieve it, so that it was available for evaluation within Prometheus. The next step was to install the Prometheus adapter, which admittedly was a bit more complicated than expected. Finally, we created a Horizontal Pod Autoscaler in the cluster that used the custom metric to scale the pods of our deployment.

All in all, it was quite an effort to scale a Kubernetes deployment based on custom metrics. In the next article, I will therefore talk about KEDA (Kubernetes Event-Driven Autoscaling), which will make our lives – also regarding this example – much easier.

You can find all the Kubernetes manifest files, Grafana dashboard configs and the script to generate the SSL certificate in this GitHub repo: https://github.com/cdennig/k8s-custom-metrics.

Articles in this series:

Christian Dennig [MS]: Horizontal Autoscaling in Kubernetes #1 – An Introduction

Kubernetes is taking the world of software development by storm and every company in the world feels tempted to develop their software on the platform or migrate existing solutions to it. There are a few basic principles to be considered when the application is finally operated productively in Kubernetes.

One of them, is the implementation of a clean autoscaling. Kubernetes offers a number of options, especially when it comes to horizontally scaling your workloads running in the cluster.

I will discuss the different options in a short series of blog posts. This article introduces the topic and presents the “out-of-the-box” options of Kubernetes. In the second article I will discuss the possibilities of scaling deployments using custom metrics and how this can work in combination with popular monitoring tools like Prometheus. The last article will then be about the use of KEDA (Kubernetes Event-Driven Autoscaling) and look at the field of “event-driven” scaling.

Why?

A good implementation of horizontal scaling is extremely important for applications running in Kubernetes. Load peaks can basically occur at any time of day, especially if the application is offered worldwide. Ideally, you have implemented measures that automatically respond to these load peaks so that no operator has to perform manual scaling (e.g. increase the number of pods in a deployment) – I mention this because this kind of manual “scaling” is still frequently seen in the field.

Your app should be able to automatically deal with these challenges:

  • When the resource demand increases, your service(s) should be able to automatically scale up
  • When the demand decreases, your service(s) should be able to scale down (and save you some money!)

Options for scaling your workload

As already mentioned, Kubernetes offers several options for horizontal scaling of services/pods. This includes:

  • Load / CPU-based scaling
  • (Custom) Metrics-based scaling
  • Event-Driven scaling

Recap: Kubernetes Objects

Before I start with the actual topic, here is a short recap of the Kubernetes objects that are (basically) involved when services/applications are hosted in a cluster – just to put everyone on the same page.

Recap: Kubernetes Deployments, RS, Pods etc.
  • Pods: smallest unit of compute/scale in Kubernetes. Hosts your application and n additional/“sidecar” containers.
  • Labels: key/value pairs to identify workloads/Kubernetes objects
  • ReplicaSet: is responsible for scaling your pods. Ensures that the desired number of pods are up and running
  • Deployment: defines the desired state of a workload (number of pods, rollout process/update strategy etc.). Manages ReplicaSets.

If you need more of a “refresh”, please head over to the official Kubernetes documentation.

Horizontal Pod Autoscaler

To avoid manual scaling, Kubernetes offers the concept of the “Horizontal Pod Autoscaler”, which many people have probably heard of before. The Horizontal Pod Autoscaler (HPA) itself is a controller that is configured by HorizontalPodAutoscaler resource objects. It is able to scale pods in a Deployment/ReplicaSet (or StatefulSet) based on observed metrics like e.g. CPU.

How does it work?

The HPA works in combination with the Metrics Server, which collects the corresponding metrics from running pods and makes them available to the HPA via an API (the Resource Metrics API). Based on this information, the HPA is able to make scaling decisions and scale up or down according to the thresholds you defined. Let’s have a look at a sample definition in YAML:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  maxReplicas: 20
  minReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  targetCPUUtilizationPercentage: 50

The sample shown above means that if the CPU utilization of the deployment (considering all pods!) constantly surpasses 50%, the autoscaler is able to scale the number of pods in the deployment between 5 and 20.

The algorithm that determines the number of replicas works as follows:

replicas = ceil[currentReplicas * ( currentValue / desiredValue )]
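
For example, if a single replica currently runs at 29% CPU utilization against a desired value of 5%, the HPA computes ceil[1 * (29 / 5)] = 6 replicas.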

Here’s the whole process:

Pretty straightforward. So now, let’s have a look at a concrete sample.

Sample

Let’s see the Horizontal Pod Autoscaler in action. First, we need a workload as our scale target:

$ kubectl create ns hpa
namespace/hpa created

$ kubectl config set-context --current --namespace=hpa
Context "my-cluster" modified.

$  kubectl run nginx-worker --image=nginx --requests=cpu=200m --expose --port=80
service/nginx-worker created
deployment.apps/nginx-worker created

As you can see above, we created a new namespace called hpa and added a deployment of nginx pods (only one at the moment), exposing port 80 to be able to fire some requests against the pods from within the cluster. Next, create the Horizontal Pod Autoscaler object.

$ kubectl autoscale deployment nginx-worker --cpu-percent=5 --min=1 --max=10

Let’s have a brief look at the YAML for the HPA:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-worker
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-worker
  targetCPUUtilizationPercentage: 5

So now that everything is in place, we need some load on our nginx deployment. BTW: you might wonder why the target value is that low (only 5%!). Well, it seems like the creators of NGINX have done a pretty good job. It’s really hard to “make NGINX sweat” 🙂

Let’s simulate load! Create another pod (in this case “busybox”) in the same namespace, “exec into it” and use wget to request the default page of NGINX in an endless loop:

$ kubectl run -i --tty load-generator --image=busybox /bin/sh
$ while true; do wget -q -O- http://nginx-worker.hpa.svc; done

While this is running, open another terminal and “watch” the Horizontal Pod Autoscaler do its job.
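
A simple way to do that is to watch the HPA object itself (the -w flag streams updates as they arrive):

$ kubectl get hpa nginx-worker -w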

HPA in Action

As you can see, after a short period of time, the HPA recognizes that there is load on our deployment and that the current value of our metric (CPU utilization) is above the target value (29% vs. 5%). It then starts to scale the pods within the deployment to an appropriate number, so that the utilization drops back after a few seconds to a value that is – more or less – within the defined range.

After a certain time, the wget loop was aborted, so basically no more requests were sent to the pods. As you can see here, the autoscaler does not start removing pods immediately after that. It waits a given time (in this case 5 minutes) before pods are removed and the deployment is set back to “replicas: 1”.

Wrap-Up

In this first article, I discussed the basics of horizontal scaling in Kubernetes and how you can leverage CPU utilization – a standard metric in Kubernetes – to automatically scale your pods up and down. In the next article, I will show you how to use custom metrics for scaling decisions – as you might guess, CPU is not always the best metric to decide if your service is under heavy load and needs more replicas to do the job.

Articles in this series:

Golo Roden: Introduction to React, Part 2: React Setup

How do you install React and develop a first application? These topics are covered in the second episode of the React video course by the native web, which is available free of charge as of now.

Holger Schwichtenberg: PowerShell 7: Improved Error Display

Since PowerShell 7.0, "ConciseView" is the new, clearer default for error output.

Golo Roden: My Reaction to "How Do You Stay Up to Date?"

In the current episode of "Götz & Golo", the question was how to stay up to date. This video is Golo's reaction to Götz's blog post.

Stefan Lieser: Slides for the Webinar “Clean Code Development mit Flow Design”

On April 30th, I gave my webinar “Clean Code Development mit Flow Design” at GFU. It was about the question of how to get from requirements to code with the help of Flow Design. You can find the slides for this webinar below. Unfortunately, the webinar was not recorded, but there will be further webinars ... Read more


Golo Roden: Introduction to React, Part 1: Overview and Introduction

Today the new React video course by the native web launches. The course is completely free of charge and takes developers from their first line of code to writing complex React applications.

Holger Schwichtenberg: Developer Update 2020 on .NET 5.0, C# 9.0 and Blazor on May 26th

This year's software developer info day for .NET and web developers takes place on May 26th as an online event via web conferencing software and chat.

Golo Roden: How to Filter Relevant News Out of the Background Noise

For developers it is important to stay informed about technological innovations. But where does this information come from? If you follow too few sources, you risk missing important news. If you follow too many, important topics drown in the noise. How do you find the right balance?
