Holger Schwichtenberg: Software Developer Update for .NET and Web Developers on June 8, 2021 (Online)

The info day on June 8, 2021 covers .NET 6, C# 10, WinUI 3, cross-platform development with MAUI and Blazor Desktop, as well as Visual Studio 2022.

Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 08 - CSS isolation for MVC Views and Razor Pages

This is the eighth part of the ASP.NET Core on .NET 6 series. In this post, I'd like to have a quick look at the support for CSS isolation for MVC Views and Razor Pages.

Blazor components already support CSS isolation. MVC Views and Razor Pages now do the same. Since the official blog post shows it on Razor Pages, I'd like to try it in an MVC application.

Trying CSS isolation for MVC Views

At first, I'm going to create a new MVC application project using the .NET CLI:

dotnet new mvc -n CssIsolation -o CssIsolation
cd CssIsolation
code .

These commands create the project, change the directory into the project folder, and open VSCode.

After VSCode opens, create an Index.cshtml.css file in the Views/Home folder. In Visual Studio this file will be nested under the Index.cshtml. VSCode doesn't support this kind of nesting yet.

Like in Microsoft's blog post, I just add a small CSS snippet to the new CSS file to change the color of the H1 element:

h1 {
    color: red;
}

This actually doesn't have any effect yet. Unlike Blazor, we need to add a reference to a CSS resource that bundles all the isolated CSS. Open the _Layout.cshtml that is located in the Views/Shared folder and add the following line right after the reference to the site.css:

<link rel="stylesheet" href="CssIsolation.styles.css" />

Ensure the first part of the URL is the name of your application. It is CssIsolation in my case. If you named your application FooBar, the CSS reference would be FooBar.styles.css.

We'll now have a red H1 header:

Isolated CSS: red header

How is this solved?

I had a quick look at the sources to see how the CSS isolation is solved. Every element of the rendered View gets an autogenerated empty attribute that identifies the view:

<div b-zi0vwlqhpg class="text-center">
    <h1 b-zi0vwlqhpg class="display-4">Welcome</h1>
    <p b-zi0vwlqhpg>Learn about <a b-zi0vwlqhpg href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>

By calling the CSS bundle resource in the browser (https://localhost:5001/cssisolation.styles.css), we can see how the CSS is structured:

/* _content/CssIsolation/Views/Home/Index.cshtml.rz.scp.css */
h1[b-zi0vwlqhpg] {
  color: red;
}
/* _content/CssIsolation/Views/Home/Privacy.cshtml.rz.scp.css */
h1[b-tqxfxf7tqz] {
  color: blue;
}

I did the same for the Privacy.cshtml to see how the isolation is done in the CSS resource. This is why you see two different files listed here. The autogenerated attribute is appended to every CSS selector used in those files, which creates unique CSS selectors per view.

I assume this works the same with Razor Pages since both MVC and Razor Pages use the same technique.

This is pretty cool and helpful.

What's next?

In the next part, I'm going to look into the support for inferring component generic types from ancestor components in ASP.NET Core.

Holger Schwichtenberg: End of support for .NET Framework 4.5.2, 4.6, and 4.6.1 as early as April 2022

Microsoft has announced that it will end support for versions 4.5.2, 4.6, and 4.6.1 of the classic .NET Framework ahead of schedule, in just one year.

Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 07 - Support for custom event arguments in Blazor

This is the seventh part of the ASP.NET Core on .NET 6 series. In this post, I want to have a quick look at the support for custom event arguments in Blazor.

In Blazor you can create custom events, and Microsoft now added support for custom event arguments for those custom events as well. Microsoft added a sample to the blog post about preview 2 that I'd like to try in a small Blazor project.

Exploring custom event arguments in Blazor

At first, I'm going to create a new Blazor WebAssembly project using the .NET CLI:

dotnet new blazorwasm -n BlazorCustomEventArgs -o BlazorCustomEventArgs
cd BlazorCustomEventArgs
code .

These commands create the project, change the directory into the project folder, and open VSCode.

After VSCode opens, I create a new folder called CustomEvents and place a new C# file called CustomPasteEventArgs.cs in it. This file contains the first snippet:

using System;
using Microsoft.AspNetCore.Components;

namespace BlazorCustomEventArgs.CustomEvents
{
    [EventHandler("oncustompaste", typeof(CustomPasteEventArgs), enableStopPropagation: true, enablePreventDefault: true)]
    public static class EventHandlers
    {
        // This static class doesn't need to contain any members. It's just a place where we can put
        // [EventHandler] attributes to configure event types on the Razor compiler. This affects the
        // compiler output as well as code completions in the editor.
    }

    public class CustomPasteEventArgs : EventArgs
    {
        // Data for these properties will be supplied by custom JavaScript logic
        public DateTime EventTimestamp { get; set; }
        public string PastedData { get; set; }
    }
}

Additionally, I added a namespace to make the sample complete.

In the Index.razor in the Pages folder we add the next snippet of the blog post:

@page "/"
@using BlazorCustomEventArgs.CustomEvents

<p>Try pasting into the following text box:</p>
<input @oncustompaste="HandleCustomPaste" />
<p>@message</p>

@code {
    string message;

    void HandleCustomPaste(CustomPasteEventArgs eventArgs)
    {
        message = $"At {eventArgs.EventTimestamp.ToShortTimeString()}, you pasted: {eventArgs.PastedData}";
    }
}

I need to add the using statement to match the namespace of the CustomPasteEventArgs. This creates an input element and outputs a message that will be generated by the custompaste event handler.

At the end, we need to add some JavaScript in the index.html that is located in the wwwroot folder. This file hosts the actual WebAssembly application. Place this script directly after the script tag for the blazor.webassembly.js:

<script>
    Blazor.registerCustomEventType('custompaste', {
        browserEventName: 'paste',
        createEventArgs: event => {
            // This example only deals with pasting text, but you could use arbitrary JavaScript APIs
            // to deal with users pasting other types of data, such as images
            return {
                eventTimestamp: new Date(),
                pastedData: event.clipboardData.getData('text')
            };
        }
    });
</script>

This binds the default paste event to the custompaste event and adds the pasted text data, as well as the current date, to the CustomPasteEventArgs. The JavaScript object literal should match the CustomPasteEventArgs to get this working properly, except for the casing of the properties.

Blazor doesn't prevent you from writing some JavaScript ;-)

Let's try it out. I run the application by calling the dotnet run command or the dotnet watch command in the console:

dotnet run

If the browser doesn't start automatically, copy the displayed HTTPS URL into the browser. It should look like this:

custom event args 1

Now I paste some text into the input element. Et voilà:

custom event args 2

Don't be confused about the date. Since it is created via JavaScript using new Date(), it is a UTC date, which means minus two hours within the CET time zone during daylight saving time.
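If you'd rather display local time, the handler could convert the timestamp before formatting it. This is just a small optional tweak and assumes that System.Text.Json deserialized the JavaScript date with DateTimeKind.Utc:

void HandleCustomPaste(CustomPasteEventArgs eventArgs)
{
    // Convert the UTC timestamp coming from JavaScript into the local time zone.
    var localTimestamp = eventArgs.EventTimestamp.ToLocalTime();
    message = $"At {localTimestamp.ToShortTimeString()}, you pasted: {eventArgs.PastedData}";
}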

What's next?

In the next part, I'm going to look into the support for CSS isolation for MVC Views and Razor Pages in ASP.NET Core.

Code-Inside Blog: How to self host Google Fonts

Google Fonts are really nice and widely used. Typically, Google Fonts consist of the actual font files (e.g. woff, ttf, eot, etc.) and some CSS, which points to those font files.

In one of our applications, we used an HTML/CSS/JS Bootstrap-like theme, and the theme linked some Google Fonts. The problem was that we wanted to self-host everything.

After some research we discovered this tool: Google-Web-Fonts-Helper


Pick your font, select your preferred CSS option (e.g. if you need to support older browsers etc.) and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)
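The downloaded package also contains the matching CSS. A typical @font-face rule from it looks roughly like the following sketch; the font name and file paths are placeholders for whatever font you picked and wherever you place the files:

/* self-hosted font, paths relative to this CSS file */
@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  src: url('../fonts/roboto-v27-latin-regular.woff2') format('woff2'),
       url('../fonts/roboto-v27-latin-regular.woff') format('woff');
}

body {
  font-family: 'Roboto', sans-serif;
}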

The project site is on GitHub.

Hope this helps!

Christina Hirth: What Does Continuous Delivery Mean to a Team

Tl;dr: Continuous integration and delivery are not about a pipeline; they are about trust, psychological safety, a common goal, and real teamwork.

What is needed for CI/CD – and how to achieve those?

  • No feature branches but trunk-based development and feature toggles: feature branches mean discontinuous development. CI/CD works with only one temporary branch: the local copy on your machine getting integrated at the moment you want to push. “No feature branches” also means pushing your changes at least once a day.
  • A feeling of safety to commit and push your code: trust in yourself and trust in your environment to help you if you fall – or steady you to not fall at all.
  • Quality gates to keep the customer safe
  • Observing and reducing the outcome of your work (as a team, of course)
  • Resilience: accept that errors will happen and make sure that they are not fatal, that you can live with them. This means also being aware of the risk involved in your changes

What happens in the team, in the team-work:

  • It enables a growing maturity, autonomy due to fast feedback, failing fast and early
  • It makes us real team-workers, “we fail together, we succeed together”
  • It leads to better programmers due to the need for XP practices and the need to know how to deliver backwards compatible software
  • It has an impact on the architecture and the design (see Accelerate)
  • Psychological safety: eliminates the fear of coding, of making decisions, of having code reviews
  • It gives a common goal, valuable for everybody: customers, devs, testers, PO, company
  • It makes everybody involved happy because of much faster feedback from customers instead of only the feedback of the PO => it allows validating the assumption that the new feature is valuable
  • It drives new ideas, new capabilities bc it allows experiments
  • Sets the right priorities: not to jump to code but to think about how to deliver new capabilities, to solve problems (sometimes even by deleting code)

How to start:

  • Agree upon setting CI/CD as a goal for the whole team: focus on how to get there not on the reasons why it cannot work out
  • Consider all requirements (safety net, coding and review practices, creating the pipeline and the quality gates) as necessary steps and work on them, one after another
  • Agree upon team rules making CI/CD as a team responsibility (monitoring errors, fixing them, flickering tests, processes to improve leaks in the safety net, blameless post-mortems)
  • Learn to give and get feedback in a professional manner (“I am not my work”), for example by reading the book Agile Conversations and/or practicing it in the meetup

– – – – –

This bullet-point list was born during this year’s CITCON, a great un-conference on continuous improvement. I am aware that they can trigger questions and needs for explanations – and I would be happy to answer them 🙂

Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 06 - Nullable Reference Type Annotations

This is the sixth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a quick look at the new Nullable Reference Type Annotations in some ASP.NET Core APIs.

Microsoft added Nullable Reference Types in C# 8 and this is why they applied nullability annotations to parts of ASP.NET Core. This provides additional compile-time safety while using reference types and protects against possible null reference exceptions.

This is not only a new thing with preview 1 but an ongoing change for the next releases. Microsoft will add more and more nullability annotations to the ASP.NET Core APIs in the next versions. You can see the progress in this GitHub Issue: https://github.com/aspnet/Announcements/issues/444

Exploring Nullable Reference Type Annotations

I'd quickly like to see whether this change is already visible in a newly created MVC project.

dotnet new mvc -n NullabilityDemo -o NullabilityDemo
cd NullabilityDemo

This creates a new MVC project and changes the directory into it.

Projects that opt in to nullable annotations may see new build-time warnings from ASP.NET Core APIs. To enable nullable reference types, you should add the following property to your project file:

<PropertyGroup>
    <Nullable>enable</Nullable>
</PropertyGroup>

In the following screenshot you'll see the build result before and after enabling nullable annotations:

null warnings on build

Actually, there is no new warning. It just shows a warning for the RequestId property in the ErrorViewModel because it might be null. After changing it to a nullable string, the warning disappears.

public class ErrorViewModel
{
    public string? RequestId { get; set; }

    public bool ShowRequestId => !string.IsNullOrEmpty(RequestId);
}

However, how can I try the changed APIs?

I need to have a look into the already mentioned GitHub Issue to choose an API to try.

I'm going with the Microsoft.AspNetCore.WebUtilities.QueryHelpers.ParseQuery method:

using Microsoft.AspNetCore.WebUtilities;

// ...

private void ParseQuery(string queryString)
{
    QueryHelpers.ParseQuery(queryString);
}

If you now set the queryString variable to null, you'll get yellow squiggles that tell you that the argument may be null:

null hints

You get the same message if you mark the input variable with a nullable annotation:

private void ParseQuery(string? queryString)
{
    QueryHelpers.ParseQuery(queryString);
}

nullable hints

It's working, and it is quite cool to prevent null reference exceptions when calling ASP.NET Core APIs.
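To get rid of the warning without changing the API call, a minimal sketch could simply guard the value before handing it to the annotated method:

private void ParseQuery(string? queryString)
{
    // Only call the annotated API when we actually have a value.
    if (queryString is null)
    {
        return;
    }

    QueryHelpers.ParseQuery(queryString);
}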

What's next?

In the next part, I'm going to look into the support for custom event arguments in Blazor in ASP.NET Core.

Golo Roden: Luca-App versus Open Source

The Luca app contains code from an open source project but violated its license. The corresponding news dominated the headlines in recent weeks. But what exactly is open source, what is the difference to free software, and what can be learned from open source?

Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 05 - Input ElementReference in Blazor

This is the fifth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the input ElementReference in Blazor that is exposed to relevant components.

Microsoft exposes the ElementReference of the Blazor input elements to the underlying input. This affects the following components: InputCheckbox, InputDate, InputFile, InputNumber, InputSelect, InputText, and InputTextArea.

Exploring the ElementReference

To test it, I created a Blazor Server project using the dotnet CLI:

dotnet new blazorserver -n ElementReferenceDemo -o ElementReferenceDemo

CD into the project and call dotnet watch

I will reuse the index.razor to try the form ElementReference:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

At first, add the following code block at the end of the file:

@code{
    Person person = new Person{
      FirstName = "John",
      LastName = "Doe"
    };

    InputText firstNameReference;
    InputText lastNameReference;

    public class Person
    {
        public string FirstName { get; set; }

        public string LastName { get; set; }
    }
}

This creates a Person type and initializes it. We will use it later as a model in the EditForm. There are also two variables added that will reference the actual InputText elements in the form. We will add some more code later on, but let's add the form first:

<EditForm Model=@person>
    <InputText @bind-Value="person.FirstName" @ref="firstNameReference" /><br>
    <InputText @bind-Value="person.LastName" @ref="lastNameReference" /><br>

    <input type="submit" value="Submit" class="btn btn-primary" /><br>
    
    <input type="button" value="Focus FirstName" class="btn btn-secondary" 
        @onclick="HandleFocusFirstName" />
    <input type="button" value="Focus LastName" class="btn btn-secondary" 
        @onclick="HandleFocusLastName" />
</EditForm>

This form has the person object assigned as a model. It contains two InputText elements, the default input button as well as two input buttons that will be used to test the ElementReference.

The reference variables are assigned to the @ref attributes of the InputText elements. We will use these variables later on.

The buttons have @onclick methods assigned that we need to add to the code section:

private async Task HandleFocusFirstName()
{
}

private async Task HandleFocusLastName()
{
}

As described by Microsoft, the input elements now expose their ElementReference. This can be used to set the focus on an element. Add the following lines to focus the InputText elements:

private async Task HandleFocusFirstName()
{
   await firstNameReference.Element.Value.FocusAsync();
}

private async Task HandleFocusLastName()
{
   await lastNameReference.Element.Value.FocusAsync();
}

This might be pretty useful. Instead of playing around with JavaScript Interop, you can use C# completely.

On the other hand, it would be great if Microsoft exposed many more features via the ElementReference instead of just focusing an element.
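Until then, one possible workaround is to pass the exposed ElementReference to JavaScript interop and do the rest there. The following sketch extends the page above; selectText is a hypothetical JavaScript function (e.g. el => el.select()) that you would have to register yourself:

@inject IJSRuntime JSRuntime

@code {
    private async Task HandleSelectFirstName()
    {
        if (firstNameReference.Element is not null)
        {
            // Hands the underlying input element over to a custom JavaScript function.
            await JSRuntime.InvokeVoidAsync("selectText", firstNameReference.Element.Value);
        }
    }
}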

What's next?

In the next part, I'm going to look into the support for Nullable Reference Type Annotations in ASP.NET Core.

Norbert Eder: Learning Python #2: Installation / Tools

The second part of this series is about the installation. You can find the required setup at https://www.python.org/downloads/. All common operating systems are supported. I'm installing Python 3.9.2 for Windows.

During the installation I choose the standard setup. Before that, I recommend letting the installer add the installation path to the PATH environment variable:

There are no further intermediate steps.

The setup completed successfully. Now let's start the console and run a first test to see whether everything actually worked:

D:\>python
Python 3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 23:44:55) [MSC v.1928 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

That looks good, and we are already inside the Python shell.

Now a quick, simple test with the print command, which lets us write information to standard output:

>>> print('visit norberteder.com')
visit norberteder.com

Of course, you can also work with files. Next, we write that line of code into a Python file:

D:\>echo "print('visit norberteder.com') > hello.py
"print('visit norberteder.com') > hello.py

And then run it:

D:\>python hello.py
visit norberteder.com

Tools

IDLE is installed together with Python. This gives you a simple tool to run the Python shell as well as to write/edit and debug Python files.

IDLE

For the first steps, IDLE is certainly sufficient; for larger projects I will use a different editor.

For private use and for developing open source software, JetBrains offers a free PyCharm Community Edition. Since I am already familiar with other language-specific editors from JetBrains, I will switch to PyCharm in the upcoming parts of this series.

No other tools will be used for now. That may change over the course of the series – we will see :)

With that, the basics are covered and we have everything we need for now. The next part will be about the fundamentals of the language: which naming conventions exist, and how to define variables and functions.

In Learning Python #1: Getting Started you will find a list of all available articles in my Python series. I am happy about your feedback so that I can keep improving this series.


Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 04 - DynamicComponent in Blazor

This is the fourth part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the DynamicComponent in Blazor.

What does Microsoft say about it?

DynamicComponent is a new built-in Blazor component that can be used to dynamically render a component specified by type.

That sounds nice. It is a component that dynamically renders any other component. Unfortunately, there is no documentation available yet, except a comment in the blog post. So let's create a small demo:

Trying the DynamicComponent

To test it, I created a Blazor Server project using the dotnet CLI

dotnet new blazorserver -n BlazorServerDemo -o BlazorServerDemo

CD into the project and call dotnet watch

Now let's try the DynamicComponent on the index.razor:

@page "/"

<h1>Hello, world!</h1>

Welcome to your new app.

<SurveyPrompt Title="How is Blazor working for you?" />

My idea is to render the SurveyPrompt component dynamically with a different title:

@code{
    Type someType = typeof(SurveyPrompt);
    Dictionary<string, object> myDictionaryOfParameters = new Dictionary<string, object>
    {
        { "Title", "Foo Bar" }
    };
}

<DynamicComponent Type="@someType" Parameters="@myDictionaryOfParameters" />

At first, I needed to define the type of the component I want to render. Second, I needed to define the parameters I want to pass to that component. In this case, it is just the Title property.

DynamicComponent

Why could this be useful?

This is great in case you want to render components dynamically based on data inputs or whatever.

Think about a timeline of news, a newsfeed, or something like that on a web page that can render different kinds of content like text, videos, or pictures. You can now just loop through the news list, render the DynamicComponent, and pass the type of the actual component to it, as well as the attribute values the component needs, as sketched below.
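Here is a minimal sketch of such a loop. The NewsItem class is a hypothetical model made up for this example; only DynamicComponent with its Type and Parameters attributes comes from the actual framework, and SurveyPrompt is reused simply because it already exists in the project template:

@foreach (var item in newsItems)
{
    <DynamicComponent Type="@item.ComponentType" Parameters="@item.Parameters" />
}

@code {
    // Hypothetical model: each entry knows which component renders it and which parameters that component expects.
    class NewsItem
    {
        public Type ComponentType { get; set; }
        public Dictionary<string, object> Parameters { get; set; }
    }

    List<NewsItem> newsItems = new List<NewsItem>
    {
        new NewsItem
        {
            ComponentType = typeof(SurveyPrompt),
            Parameters = new Dictionary<string, object> { { "Title", "Rendered dynamically" } }
        }
    };
}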

What's next?

In the next part, I'm going to look into the ElementReference support in Blazor.

Holger Schwichtenberg: Upcoming developer events: on-site and/or online

A list of upcoming developer events in the German-speaking region through May 2022.

Stefan Henneken: IEC 61131-3: Different versions of the same library in a TwinCAT project

Library placeholders allow referencing multiple versions of the same library in a PLC project. This can be helpful if a library in an existing project has to be updated because of new functions, but the update turns out to change the behavior of an older FB.

The mentioned problem can be solved by including different versions of the same library in the project using placeholders. Placeholders for libraries are comparable to references. Instead of adding libraries directly to a project, they are referenced indirectly via placeholders. Each placeholder is linked to a library, either to a specific version or in such a way that the latest library is always used. If libraries are added via the standard dialog, placeholders are always used automatically.

In the following short post, I want to show how to add several versions of the same library to a project. In our example, I will add two different versions of the Tc3_JsonXml library to a project. There are currently three different versions of the library on my computer.

V3.3.7.0 and V3.3.14.0 will be used in parallel in the example.

Open the dialog for adding a library. Then switch to the Advanced view.

Switch to the Placeholder tab and enter a unique name for the new placeholder.

Select the library that will be referenced by the placeholder. Either a specific version can be selected, or the '*' can be used so that the latest version is always referenced.

If you then select the placeholder in the project tree under References and switch to the properties window, the properties of the placeholder will be displayed there.

The namespace still has to be adjusted here. The namespace is used later in the PLC program and serves to address elements of both libraries via different names. I presented the basic concept of namespaces in IEC 61131-3: Namespaces. I chose the same identifiers for the namespaces as for the placeholders.

After performing the same steps for the V3.3.14.0 version of the library, both placeholders should be available with a unique name and customized namespace.

The Library Manager, which is opened by double-clicking on References, provides a good overview.

Here you can clearly see how the placeholders are resolved. Usually, the placeholders have the same name as the libraries that are referenced. The '*' means that the newest version of the library available on the development computer is always used. The right column shows the version referenced by the placeholder. The names of the two placeholders for the Tc3_JsonXml library have been adapted.

FB_JsonSaxWriter will be used as an example in the PLC program. If the FB is specified without a namespace when the instance is declared,

PROGRAM MAIN
VAR
  fbJsonSaxWriter    : FB_JsonSaxWriter;
END_VAR

the compiler will output an error message:

The name FB_JsonSaxWriter cannot be uniquely resolved because two different versions of the Tc3_JsonXml library (V3.3.7.0 and V3.3.14.0) are available in the project. Thus, FB_JsonSaxWriter is also contained twice in the project.

By using the namespaces, targeted access to the individual elements of the desired library is possible:

PROGRAM MAIN
VAR
  fbJsonSaxWriter_Build7           : Tc3_JsonXml_Build7.FB_JsonSaxWriter;
  fbJsonSaxWriter_Build14          : Tc3_JsonXml_Build14.FB_JsonSaxWriter;
  sVersionBuild7, sVersionBuild14  : STRING;
END_VAR
 
fbJsonSaxWriter_Build7.AddBool(TRUE);
fbJsonSaxWriter_Build14.AddBool(FALSE);
 
sVersionBuild7 := Tc3_JsonXml_Build7.stLibVersion_Tc3_JsonXml.sVersion;
sVersionBuild14 := Tc3_JsonXml_Build14.stLibVersion_Tc3_JsonXml.sVersion;

In this short example, the current version number is also read out via the global structure that is contained in every library:

Both libraries can now be used in parallel in the same PLC project. However, it must be ensured that both libraries are available in exactly the required versions (V3.3.7.0 and V3.3.14.0) on the development computer.

Norbert Eder: Learning Python #1: Getting Started

You want to learn Python, just like me? Then walk this path together with me. In this article series, I will share my questions, answers, and insights with you.

We start with a small list of information sources that I picked out in advance and that I will use going forward. I am also happy to receive pointers to other interesting websites, books, and videos.

At the end of this article, you will find a list of all articles in the series, which will be extended continuously and for now only contains this one – but will hopefully keep growing.

Motivation

Lately, I have increasingly come into contact with topics that practically scream for Python. Primarily, this is about machine learning.

In addition, my kids also want to get into software development. Python is an excellent fit for tinkering with the Raspberry Pi and the like. It is also said to be easy to learn, which is an advantage especially for beginners.

Books

The book Einstieg in Python serves as my foundation and guide.

You can find more books with good reviews here:

Even if this may make me seem a bit old-fashioned, I still like to rely on books for structured learning of a new language. Of course, everyone can handle that however they like.

Links

Over time, many helpful links will surely accumulate. For now, however, I am sticking to the official websites for the necessary downloads and information:

At https://www.python.org/ you will also find plenty of learning material, videos, and of course a large community.

At https://github.com/python you will find all the official repositories. Of course, you can also take a look at what else is available on GitHub around Python.

Videos

If you like watching videos, you will find plenty on the topic, especially on YouTube. This 29-part series on Python 3 looks quite good:

Python series

  1. Learning Python #1: Getting Started [this article]
  2. Learning Python #2: Installation and Setup
  3. Learning Python #3: Fundamentals of the language (naming conventions, declaring variables, data types, functions, etc.)
  4. Learning Python #4: Modules and structuring projects
  5. Learning Python #5: A first example

More articles will follow shortly; I will keep extending this list.

I am happy about any feedback, support, wishes, questions, and the like. Please leave me a comment here or get in touch via the contact form.


Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 03 - Support for IAsyncDisposable in MVC

This is the third part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look at the Support for IAsyncDisposable in MVC.

IAsyncDisposable has been a thing since .NET Core 3.0. If I'm right, we got it together with async streams to release those kinds of streams asynchronously. Now MVC supports this interface as well, and you can use it anywhere in your code, on controllers, classes, etc., to release async resources.

When should I use IAsyncDisposable?

When you work with asynchronous enumerators, like in async streams, and when you work with instances of unmanaged resources which need resource-intensive I/O operations to be released.

When implementing this interface, you can use the DisposeAsync method to release those kinds of resources.

Let's try it

Let's assume we have a controller that creates and uses a Utf8JsonWriter, which is an IAsyncDisposable resource as well:

public class HomeController : Controller, IAsyncDisposable
{
    private Utf8JsonWriter _jsonWriter;

    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
        _jsonWriter = new Utf8JsonWriter(new MemoryStream());
    }

The interface needs us to implement the DisposeAsync method. This should be done like this:

public async ValueTask DisposeAsync()
{
    // Perform async cleanup.
    await DisposeAsyncCore();
    // Dispose of unmanaged resources.
    Dispose(false);
    // Suppress finalization; the cleanup has already been done.
    GC.SuppressFinalize(this);
}

This is a higher-level method that calls a DisposeAsyncCore method that actually does the async cleanup. It also calls the regular Dispose method to release other unmanaged resources, and it tells the garbage collector not to call the finalizer, since the cleanup has already been done at this point.

This needs us to add another method called DisposeAsyncCore():

protected async virtual ValueTask DisposeAsyncCore()
{
    if (_jsonWriter is not null)
    {
        await _jsonWriter.DisposeAsync();
    }

    _jsonWriter = null;
}

This will actually dispose the async resource.
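Outside of MVC, where the framework calls DisposeAsync for you, the same kind of resource can also be released with an await using block. This is just a small consumption sketch, assuming using directives for System.Text.Json and System.IO:

// 'await using' calls DisposeAsync automatically at the end of the scope.
await using (var jsonWriter = new Utf8JsonWriter(new MemoryStream()))
{
    jsonWriter.WriteStartObject();
    jsonWriter.WriteString("disposed", "asynchronously, right after this block");
    jsonWriter.WriteEndObject();
    await jsonWriter.FlushAsync();
}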

Further reading

Microsoft has some really detailed docs about this pattern in the .NET documentation.

What's next?

In the next part, I'm going to look into the DynamicComponent in Blazor.

Holger Schwichtenberg: Console window and Windows window in a .NET 5.0 app

To use Windows Forms or Windows Presentation Foundation (WPF) in a console application, a special setting is required.

Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 02 - Update on dotnet watch

This is the second part of the ASP.NET Core on .NET 6 series. In this post, I want to have a look into the updates on dotnet watch. The announcement post from February 17th mentioned that dotnet watch now runs the run command by default.

Actually, this doesn't work in preview 1 because this feature didn't make it to this release by accident: https://github.com/dotnet/aspnetcore/issues/30470

BTW: This feature isn't mentioned anymore. The team changed the post and didn't add it to the preview 2 announcement either.

The idea is to just use dotnet watch without specifying the run command that should be executed after a file has changed. run is now the default command:

dotnetwatch.png

This is just a small thing but might save some time.

What's next?

In the next part, I'm going to look into the support for IAsyncDisposable in MVC.

Golo Roden: Writing tests – the basics

Tests are an essential building block of high-quality and sustainable software development. But why exactly are tests so important, what are their advantages, and what are the reasons for using them? Couldn't you alternatively test by hand?

Jürgen Gutsch: How to suppress dotnet watch run from opening a browser

An interesting question on Twitter led me to write this small post. The question was how to suppress opening a browser when you run dotnet watch run.

The thing is that you might not want to open a browser if you run dotnet watch run on a Web API project. Since Web API projects have Swagger enabled by default, opening a browser might make sense, but often you just want to run your backend project while your frontend project is open in a browser, or whatever frontend you have.

Using an environment variable

There are two options to change that behavior. You can set an environment variable, which sets the behavior globally or in a console session:

SET DOTNET_WATCH_SUPPRESS_LAUNCH_BROWSER=1

This will override the default behavior.
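The SET syntax above is for the classic Windows command prompt. In PowerShell or in a Linux/macOS shell, the same variable is set with that shell's usual syntax, for example:

# PowerShell
$env:DOTNET_WATCH_SUPPRESS_LAUNCH_BROWSER = "1"

# bash / zsh
export DOTNET_WATCH_SUPPRESS_LAUNCH_BROWSER=1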

Using the launchSettings.json

The better option is to change it per project for all the projects where you want to suppress it. This can be done in the launchSettings.json that you will find in the Properties folder of each project. The launchSettings.json contains iisSettings and two or more profiles that configure how the application will be launched:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:32265",
      "sslPort": 44369
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "launchUrl": "swagger",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "MyProject": {
      "commandName": "Project",
      "dotnetRunMessages": "true",
      "launchBrowser": true, 
      "launchUrl": "swagger",
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

The launchBrowser property in the profiles defines whether the browser should be opened or not. Set it to false in case you want to suppress it.
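For the MyProject profile shown above, the change boils down to this single property:

"MyProject": {
  "commandName": "Project",
  "dotnetRunMessages": "true",
  "launchBrowser": false,
  "launchUrl": "swagger",
  "applicationUrl": "https://localhost:5001;http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}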

In case you set the environment variable, it will override the setting of the launchsettings.json.

Conclusion

In many cases you might want to see the Swagger UI in your browser to test your API, but there are cases as well where you just want to spin up your backend and work on your frontend using the running API.

Albert Weinert: AUA: Gregor and Albert live on air on March 19, 2021


Welcome to the big Ask Us Anything session on Gregor Biswanger's Twitch channel. On March 19, 2021, starting at 8:30 PM, Gregor and I will put our time, our knowledge, and our cluelessness at your disposal. We will solve your programming challenges.

Everything around .NET, ASP.NET Core, JavaScript, databases, weather, architecture, ideas, tooling, testing, security, shoe sizes, Git, or whatever else. Ask whatever you want, stay friendly, and whether we tackle a problem live we will decide spontaneously. We guarantee nothing, not even a solution. We are hoping for a nice and entertaining evening with lots of participation.

Gregor Biswanger
Gregor is a current Microsoft MVP who prefers to develop on Windows with Visual Studio and Visual Studio Code. He works with Angular, .NET, MongoDB, and much more.
Albert Weinert
Albert is a former Microsoft MVP who prefers to develop on the Mac with Rider and WebStorm. He works with Vue, .NET, Identity Server, and much more.

Cliché alert

So come in large numbers. When in doubt, we know nothing. The evening is free of charge, but certainly not for nothing. If you are not there, you will miss it. But you can watch it later on YouTube, without being able to ask your own questions.

Holger Schwichtenberg: Blazor WebAssembly tutorial now complete

The five-part tutorial on web programming with Blazor is now fully available online, including the accompanying source code.

Golo Roden: RTFM #5: Structure and Interpretation of Computer Programs (SICP)

The RTFM series presents timeless and recommendable books for developers at irregular intervals. These are primarily technical books, but occasionally a novel is among them. Today it is about "Structure and Interpretation of Computer Programs" by Hal Abelson, Gerald Jay Sussman, and Julie Sussman.

Golo Roden: What are interfaces?

Interfaces are one of the most important constructs in programming for structuring code cleanly, which is why they also serve as the foundation for many design patterns. But what exactly are interfaces, and why are they so relevant?

Holger Schwichtenberg: Microsoft is working on a new upgrade assistant from .NET Framework to .NET 5 and .NET 6

With the .NET Upgrade Assistant, Microsoft is making a new attempt at a tool that is supposed to support developers in migrating from .NET Framework to .NET 5 and .NET 6.

Golo Roden: Database types compared

Relational databases used to be the measure of all things, but over the past 15 years numerous other database types have established themselves. How do they differ, and do relational databases still play a role at all today?

Jürgen Gutsch: Trying the REST Client extension for VSCode

I recently stumbled upon a tweet by Lars Richter who mentioned and linked to a rest client extension in VSCode. I had a more detailed look and was pretty impressed by this extension.

I can now get rid of Fiddler and Postman.

Let's start at the beginning

The REST Client Extension for VSCode was developed by Huachao Mao from China. You will find the extension on the visual studio marketplace or in the extensions explorer in VS Code:

  • https://marketplace.visualstudio.com/items?itemName=humao.rest-client

If you follow this link, you will find really great documentation about the extension, how it works, and how to use it. This also means this post is pretty useless, except if you just want to read a quick overview ;-)

rest client extension

The source code of the REST Client extension is hosted on GitHub:

  • https://github.com/Huachao/vscode-restclient

This extension is actively maintained, has almost one and a half million installations, and an awesome rating (5.0 out of 5) from more than 250 people.

What does it solve?

Compared to Fiddler and Postman it is absolutely minimalistic. There is no overloaded and full-blown UI. While Fiddler is completely overloaded but full of features, Postman's UI is nicer, easier, and more intuitive; the REST Client extension, however, doesn't need a UI at all, except the VSCode shell and a plain text editor.

While Fiddler and Postman cannot easily share the request configurations, the REST Client stores the request configurations in text files using the *.http or *.rest extension that can be committed to the source code repository and shared with the entire team.

Let's see how it works

To test it out in a demo, let's create a new Web API project, change to the project directory, and open VSCode:

dotnet new webapi -n RestClient -o RestClient
cd RestClient
code .

This project already contains a Web API controller. I'm going to use this for the first small test of the REST Client. I will create and use a more complex controller later in this blog post.

To have the *.http files in one place, I created an ApiTest folder and placed a WeatherForecast.http in it. I'm not yet sure if it makes sense to put such files into the project, because these files won't go into production. I think, in a real-world project, I would place the files somewhere outside the actual project folder, but inside the source code repository. Let's keep it there for now:

http file

I already put the following line into that file:

GET https://localhost:5001/WeatherForecast/ HTTP/1.1

This is just a simple line of text in a plain text file with the file extension *.http but the REST Client extension does some cool magic with it while parsing it:

On the top border, you can see that the REST Client extension supports the navigation inside the file structure. This is cool. Above the line, it also adds a CodeLens actionable link to the configured request to send the request.

At first, start the project by pressing F5 or by using dotnet run in the shell.

If the project is running you can click the Send Request CodeLens link and see what happens.

result

It opens the response in a new tab group in VSCode and shows you the response headers as well as the response content.

A more complex sample

I created another API controller that handles persons. The PersonController uses GenFu to create fake users. The methods POST, PUT, and DELETE don't really do anything, but the controller is good enough to test now.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

using GenFu;

using RestClient.Models;

namespace RestClient.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class PersonController : ControllerBase
    {

        [HttpGet]
        public ActionResult<IEnumerable<Person>> Get()
        {
            return A.ListOf<Person>(15);
        }

        [HttpGet("{id:int}")]
        public ActionResult<Person> Get(int id)
        {
            var person = A.New<Person>(new Person { Id = id });
            return person;
        }

        [HttpPost]
        public ActionResult Post(Person person)
        {
            return Ok(person);
        }

        [HttpPut("{id:int}")]
        public ActionResult Put(int id, Person person)
        {
            return Ok(person);

        }

        [HttpDelete("{id:int}")]
        public ActionResult Delete(int id)
        {
            return Ok(id);
        }
    }
}

The Person model is simple:

namespace RestClient.Models
{
    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
        public string Telephone { get; set; }
        public string Street { get; set; }
        public string Zip { get; set; }
        public string City { get; set; }
    }
}

If you now start the project you will see the new endpoints in the Swagger UI that is already configured in the Web API project. Call the following URL to see the Swagger UI: https://localhost:5001/swagger/index.html

swaggerui

The Swagger UI will help you to configure the REST Client files.

Ok. Let's start. I created a new file called Person.http in the ApiTests folder. You can add more than one REST Client request configuration into one file.

We don't need the Swagger UI for the two GET endpoints and the DELETE endpoint, since they are the easy ones and look the same as in the WeatherForecast.http:

GET https://localhost:5001/Person/ HTTP/1.1

###

GET https://localhost:5001/Person/2 HTTP/1.1

### 

DELETE https://localhost:5001/Person/2 HTTP/1.1

The POST request is just a little more complex.

If you now open the POST /Person section in the Swagger UI and try the request, you'll get all the information you need for the REST Client:

swagger details

In the http file it will look like this:

POST https://localhost:5001/Person/ HTTP/1.1
content-type: application/json

{
  "id": 0,
  "firstName": "Juergen",
  "lastName": "Gutsch",
  "email": "juergen@example.com",
  "telephone": "08150815",
  "street": "Mainstr. 2",
  "zip": "12345",
  "city": "Smallville"
}

You can do the same with the PUT request:

PUT https://localhost:5001/Person/2 HTTP/1.1
content-type: application/json

{
  "id": 2,
  "firstName": "Juergen",
  "lastName": "Gutsch",
  "email": "juergen@example.com",
  "telephone": "08150815",
  "street": "Mainstr. 2",
  "zip": "12345",
  "city": "Smallville"
}

This is how it looks in VSCode if you click the CodeLens link for the GET request:

results

You are now able to test all the API endpoints this way.

Conclusion

Actually, it is not only about REST. You can test any kind of HTTP request this way. You can even send binary data, like images to your endpoint.
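For example, the REST Client extension can read the request body from a local file using the < file syntax. The upload endpoint below is hypothetical and not part of the demo project; it just illustrates the idea:

POST https://localhost:5001/api/upload HTTP/1.1
Content-Type: image/png

< ./my-image.png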

This is a really great extension for VSCode, and I'm sure I will use Fiddler or Postman only in environments where I don't have VS Code installed.

Stefan Henneken: IEC 61131-3: Different versions of the same library in a TwinCAT project

Library placeholders make it possible to reference multiple versions of the same library in a PLC project. This can be helpful if a library in an existing project is to be updated because of new functions, but the update turns out to change the behavior of an older FB.

The problem described can be solved by including different versions of the same library in the project via placeholders. Placeholders for libraries are comparable to references. Instead of adding libraries directly to a project, they are referenced indirectly via placeholders. Each placeholder is linked to a library, either to a specific version or in such a way that the latest library is always used. If libraries are added via the standard dialog, placeholders are always used automatically.

In the following short post, I want to show how several versions of the same library can be included in a project. In our example, I will add two different versions of the Tc3_JsonXml library to a project. There are currently three different versions of the library on my computer.

V3.3.7.0 and V3.3.14.0 will be used in parallel in the example.

Open the dialog for adding a library. Then switch to the advanced view.

Switch to the Placeholder section and enter a unique name for the new placeholder.

Select the library that the placeholder should reference. Either a specific version can be selected, or the '*' can be used so that the latest version is always chosen.

If you then select the placeholder in the project tree under References and switch to the properties window, the properties of the placeholder are displayed there.

The namespace still has to be adjusted here. The namespace is used later in the PLC program and serves to address elements of both libraries via different identifiers. I presented the basic concept of namespaces in IEC 61131-3: Namespaces. I chose the same identifiers for the namespaces as for the placeholders.

After the same steps have also been performed for version V3.3.14.0 of the library, both placeholders should be available with a unique name and an adjusted namespace.

The Library Manager, which is opened by double-clicking on References, provides a good overview.

Here you can clearly see how the placeholders are resolved. As a rule, the placeholders have the same name as the libraries they refer to. The '*' means that the latest version of the library available on the development computer is always used. The right column shows the version the placeholder refers to. For the two placeholders of the Tc3_JsonXml library, the names of the placeholders were adjusted.

As an example, FB_JsonSaxWriter will be used in the PLC program. If the FB is specified without a namespace when the instance is declared,

PROGRAM MAIN
VAR
  fbJsonSaxWriter    : FB_JsonSaxWriter;
END_VAR

the compiler outputs an error message:

The name FB_JsonSaxWriter cannot be resolved unambiguously because two different versions of the Tc3_JsonXml library (V3.3.7.0 and V3.3.14.0) are present in the project. Thus, FB_JsonSaxWriter is also contained twice in the project.

By using the namespaces, targeted access to the individual elements of the desired library is possible:

PROGRAM MAIN
VAR
  fbJsonSaxWriter_Build7           : Tc3_JsonXml_Build7.FB_JsonSaxWriter;
  fbJsonSaxWriter_Build14          : Tc3_JsonXml_Build14.FB_JsonSaxWriter;
  sVersionBuild7, sVersionBuild14  : STRING;
END_VAR

fbJsonSaxWriter_Build7.AddBool(TRUE);
fbJsonSaxWriter_Build14.AddBool(FALSE);

sVersionBuild7 := Tc3_JsonXml_Build7.stLibVersion_Tc3_JsonXml.sVersion;
sVersionBuild14 := Tc3_JsonXml_Build14.stLibVersion_Tc3_JsonXml.sVersion;

In this short example, the current version number is also read out via a global structure that is contained in every library:

Both libraries can now be used in parallel in the same PLC project. However, it must be ensured that both libraries are available on the development computer in exactly the required versions (V3.3.7.0 and V3.3.14.0).

Golo Roden: RTFM #4: Common Lisp

The RTFM series presents timeless and recommendable books for developers at irregular intervals. These are primarily technical books, but occasionally a novel is among them. Today it is about "Common Lisp: A Gentle Introduction to Symbolic Computation" by David S. Touretzky.

Golo Roden: Algorithms for artificial intelligence

In the field of artificial intelligence (AI), there are numerous algorithms for a wide variety of problem types. Which fundamental algorithms should you be able to place in this context?

Jürgen Gutsch: ASP.NET Core in .NET 6 - Part 01 - Overview

.NET 5 was released just about 3 months ago, and Microsoft announced the first preview of .NET 6 last week. This is really fast. Actually, they had already started working on .NET 6 before version 5 was released. But it is cool anyway to have a preview available to start playing around with. Also, the ASP.NET team wrote a new blog post. It is about the ASP.NET Core updates in .NET 6.

I will take the chance to have a more detailed look into the updates and the new features. I'm going to start a series about those updates and features. This is also a chance to learn what I need to rewrite, in case I need to update my book that recently got published by Packt.

Install .NET 6 preview

At first, I'm going to download the .NET 6 preview from https://dotnet.microsoft.com/download/dotnet/6.0 and install it on my machine.

download.png

I chose the x64 installer for Windows and started the installation.

install01.png

After the installation is done the new SDK is available. Type dotnet --info in a terminal:

dotnetinfo.png

Be careful

Since I didn't add a global.json yet, the .NET 6 preview is the default SDK. This means I need to be careful if I want to create a .NET 5 project. I need to add a global.json every time I want to create a .NET 5 project:

dotnet new globaljson --sdk-version 5.0.103

This creates a small JSON file that contains the SDK version number in the current folder.

{
  "sdk": {
    "version": "5.0.103"
  }
}

Now all folders and subfolders will use this SDK version.

Series posts

This series will start with the following topics:

Preview 1

ASP.NET Core Updates in .NET 6 preview 1

Preview 2

ASP.NET Core Updates In .NET 6 preview 2

Preview 3

ASP.NET Core updates in .NET 6 Preview 3

  • Smaller SignalR, Blazor Server, and MessagePack scripts
  • Enable Redis profiling sessions
  • HTTP/3 endpoint TLS configuration
  • Initial .NET Hot Reload support
  • Razor compiler no longer produces a separate Views assembly
  • Shadow-copying in IIS
  • Vcpkg port for SignalR C++ client
  • Reduced memory footprint for idle TLS connections
  • Remove slabs from the SlabMemoryPool
  • BlazorWebView controls for WPF & Windows Forms

(I will update this list as soon as I add a new post or as soon as Microsoft adds a new release.)

Christian Dennig [MS]: Getting started with KrakenD on Kubernetes / AKS

If you develop applications in a cloud-native environment and, for example, rely on the “microservices” architecture pattern, you will sooner or later have to deal with the issue of “API gateways”. There is a wide range of offerings available “in the wild”, both as managed versions from various cloud providers as well as from the open source domain. Many often think of the well-known OSS projects such as “Kong”, “tyk” or “gloo” when it comes to API gateways. The same is true for me. However, when I took a closer look at the projects, I wasn’t always satisfied with the feature set. I was always looking for a product that can be hosted in your Kubernetes cluster, is flexible and easy to configure (“desired state”), and offers good performance. During my work as a cloud solution architect at Microsoft, I became aware of the OSS API gateway “KrakenD” during a project about 1.5 years ago.

KrakenD API Gateway

krakend logo
KrakenD logo

KrakenD is an API gateway implemented in Go that relies on the ultra-fast GIN framework under the hood. It offers an incredible number of features out-of-the-box that can be used to implement about any gateway requirement:

  • request proxying and aggregation (merge multiple responses)
  • decoding (from JSON, XML…)
  • filtering (allow- and block-lists)
  • request & response transformation
  • caching
  • circuit breaker pattern via configuration, timeouts…
  • protocol translation
  • JWT validation / signing
  • SSL
  • OAuth2
  • Prometheus/OpenCensus integration

As you can see, this is quite an extensive list of features, which is nevertheless far from complete. On their homepage and in the documentation, you can find much more information about what the product offers in its entirety.
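To give a first impression of what such a configuration looks like, here is a minimal, hypothetical krakend.json that proxies one endpoint to one backend service; the host and paths are made up for this sketch:

{
  "version": 2,
  "port": 8080,
  "endpoints": [
    {
      "endpoint": "/contacts",
      "method": "GET",
      "backend": [
        {
          "host": ["http://contacts:8080"],
          "url_pattern": "/api/contacts",
          "is_collection": true
        }
      ]
    }
  ]
}

The is_collection flag tells KrakenD that this backend returns a JSON array rather than an object, so the response is not flattened.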

The creators also recently published an Azure Marketplace offer, a container image that you can directly push / integrate into your Azure Container Registry… so I thought it's an appropriate time to publish a blog post about how to get started with KrakenD on Azure Kubernetes Service (AKS).

Getting Started with KrakenD on AKS

Ok, let’s get started then. First, we need a Kubernetes cluster on which we can roll out a sample application that we want to expose via KrakenD. So, as with all Azure deployments, let’s start with a resource group and then add a corresponding AKS service. We will be using the Azure Command Line Interface for this, but you can also create the cluster via the Azure Portal.

# create an Azure resource group

$ az group create --name krakend-aks-rg \
   --location westeurope

{
  "id": "/subscriptions/xxx/resourceGroups/krakend-aks-rg",
  "location": "westeurope",
  "managedBy": null,
  "name": "krakend-aks-rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

# create a Kubernetes cluster

$ az aks create -g krakend-aks-rg \
   -n krakend-aks \
   --enable-managed-identity \
   --generate-ssh-keys

After a few minutes, the cluster has been created and we can download the access credentials to our workstation.

$ az aks get-credentials -g krakend-aks-rg \
   -n krakend-aks 

# in case you don't have kubectl on your 
# machine, there's a handy installer coming with 
# the Azure CLI:

$ az aks install-cli

Let’s check, if we have access to the cluster…

$ kubectl get nodes

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-34625029-vmss000000   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000001   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000002   Ready    agent   24h   v1.18.14

Looks great and we are all set from an infrastructure perspective. Let’s add a service that we can expose via KrakenD.

Add a sample service

We are now going to deploy a very simple service implemented in dotnet core that is capable of creating/storing “contact” objects in an MS SQL Server 2019 (Linux) instance that is running – for convenience reasons – on the same Kubernetes cluster as a single container/pod. After the services have been deployed, the in-cluster situation looks like this:

In-cluster architecture w/o KrakenD

Let’s deploy everything. First, the MS SQL server with its service definition:

# content of sql-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: 'Developer'
            - name: ACCEPT_EULA
              value: 'Y'
            - name: SA_PASSWORD
              value: 'Ch@ngeMe!23'
---
apiVersion: v1
kind: Service
metadata:
  name: mssqlsvr
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: ClusterIP

Create a file called sql-server.yaml and apply it to the cluster.

$ kubectl apply -f sql-server.yaml

deployment.apps/mssql-deployment created
service/mssqlsvr created

Second, the contacts API plus a service definition:

# content of contacts-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ca-deploy
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: scmcontacts
      service: contactsapi
  template:
    metadata:
      labels:
        application: scmcontacts
        service: contactsapi
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
              cpu: '500m'
          image: ghcr.io/azuredevcollege/adc-contacts-api:3.0
          env:
            - name: ConnectionStrings__DefaultConnectionString
              value: "Server=tcp:mssqlsvr,1433;Initial Catalog=scmcontactsdb;Persist Security Info=False;User ID=sa;Password=Ch@ngeMe!23;MultipleActiveResultSets=False;Encrypt=False;TrustServerCertificate=True;Connection Timeout=30;"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contacts
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  type: ClusterIP
  selector:
    application: scmcontacts
    service: contactsapi
  ports:
    - port: 8080
      targetPort: 5000

Create a file called contacts-app.yaml and apply it to the cluster.

$ kubectl apply -f contacts-app.yaml

deployment.apps/ca-deploy created
service/contacts created

To check if the contact pods can communicate with the MSSQL server, let’s quickly spin up an interactive pod and issue a few requests from within the cluster. As you can see in the YAML manifests, the services have been added as type ClusterIP, which means they don’t get an external IP address. Exposing the contacts service to the public will be the responsibility of KrakenD.

$ kubectl run -it --rm --image csaocpger/httpie:1.0 http --restart Never -- /bin/sh
If you don't see a command prompt, try pressing enter.

$ echo '{"firstname": "Satya", "lastname": "Nadella", "email": "satya@microsoft.com", "company": "Microsoft", "avatarLocation": "", "phone": "+1 32 6546 6545", "mobile": "+1 32 6546 6542", "description": "CEO of Microsoft", "street": "Street", "houseNumber": "1", "city": "Redmond", "postalCode": "123456", "country": "USA"}' | http POST http://contacts:8080/api/contacts

HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 10:58:57 GMT
Location: http://contacts:8080/api/contacts/ee176782-a767-45ad-a7df-dbcefef22688
Server: Kestrel
Transfer-Encoding: chunked

{
    "avatarLocation": "",
    "city": "Redmond",
    "company": "Microsoft",
    "country": "USA",
    "description": "CEO of Microsoft",
    "email": "satya@microsoft.com",
    "firstname": "Satya",
    "houseNumber": "1",
    "id": "ee176782-a767-45ad-a7df-dbcefef22688",
    "lastname": "Nadella",
    "mobile": "+1 32 6546 6542",
    "phone": "+1 32 6546 6545",
    "postalCode": "123456",
    "street": "Street"
}

$ http GET http://contacts:8080/api/contacts
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 11:00:07 GMT
Server: Kestrel
Transfer-Encoding: chunked

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]

As you can see, we can create new contacts by POSTing a JSON payload to the endpoint http://contacts:8080/api/contacts (first request) and also retrieve what has been added to the database by GETting data from the http://contacts:8080/api/contacts endpoint (second request).

Create a KrakenD Configuration

So far, everything works as expected and we have a working API in the cluster that is storing its data in a MSSQL server. As discussed in the previous section, we did not expose the contacts service to the internet on purpose. We will do this later by adding KrakenD in front of that service giving the API gateway a public IP so that it is externally reachable.

But first, we need to create a KrakenD configuration (a plain JSON file) where we configure the endpoints, backend services, how requests should be routed etc. etc. Fortunately, KrakenD has a very easy-to-use designer that gives you a head-start when creating that configuration file – it’s simply called the KrakenDesigner.

kraken designer
KrakenDesigner – sample service
kraken designer logging config
KrakenDesigner – logging configuration

When creating such a configuration, it comes down to these simple steps:

  1. Adjust “common” configuration for KrakenD like service name, port, CORS, exposed/allowed headers etc.
  2. Add backend services, in our case just the Kubernetes service for our contacts API (http://contacts:8080)
  3. Expose endpoints (/contacts) at the gateway and define which backend to route them to (http://contacts:8080/api/contacts). Here you can also define if a JWT token should be validated, which headers to pass to the backend etc. A lot of options – which we obviously don’t need in our simple setup.
  4. Add logging configuration – it’s optional, but you should do it. We simply enable stdout logging, but you can also use e.g. OpenCensus and even expose metrics to a Prometheus instance (nice!).

As a last step, you can export the configuration you have created in the UI to a JSON file. For our sample here, this file looks like this:

{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }

We simply expose two endpoints, one that lets us create (POST) contacts and one that retrieves (GET) all contacts from the database – so basically the same sample we did when calling the contacts service from within the cluster.

Save that file above to your local machine (name it krakend.json) as we need to add it later to Kubernetes as a ConfigMap.
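Before turning the file into a ConfigMap, you can optionally validate it locally. The following is just a sketch that assumes you have Docker available on your workstation; it mounts the current folder and runs the check command that ships with the KrakenD image:

# optional: validate krakend.json with the KrakenD image before deploying it
$ docker run --rm -v "$PWD:/etc/krakend" devopsfaith/krakend:1.2 check --config /etc/krakend/krakend.json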

Add the KrakenD API Gateway

So, now we are ready to deploy KrakenD to the cluster: we have an API that we want to expose and we have the KrakenD configuration. To dynamically add the configuration (krakend.json) to our running KrakenD instance, we will use a Kubernetes ConfigMap object. This gives us the ability to decouple configuration from our KrakenD application instance/pod – if you are not familiar with the concepts, have a look at the official documentation here.

During the startup of KrakenD we will then use this ConfigMap and mount the content of it (krakend.json file) into the container (folder /etc/krakend) so that the KrakenD process can pick it up and apply the configuration.

In the folder where you saved the config file, issue the following commands:

$ kubectl create configmap krakend-cfg --from-file=./krakend.json

configmap/krakend-cfg created

# check the contents of the configmap

$ kubectl describe configmap krakend-cfg

Name:         krakend-cfg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
krakend.json:
----
{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }

Events:  <none>

That looks great. We are finally ready to spin up KrakenD in the cluster. We therefore apply the following Kubernetes manifest file, which creates a deployment and a Kubernetes service of type LoadBalancer – which gives us a public IP address for KrakenD via the Azure load balancer.

# content of api-gateway.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: krakend-deploy
  labels:
    application: apigateway
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: apigateway
  template:
    metadata:
      labels:
        application: apigateway
    spec:
      automountServiceAccountToken: false
      volumes:
        - name: krakend-cfg
          configMap:
            name: krakend-cfg
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '1024Mi'
              cpu: '1000m'
          image: devopsfaith/krakend:1.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          volumeMounts:
          - name: krakend-cfg
            mountPath: /etc/krakend

---
apiVersion: v1
kind: Service
metadata:
  name: apigateway
  labels:
    application: apigateway
spec:
  type: LoadBalancer
  selector:
    application: apigateway
  ports:
    - port: 8080
      targetPort: 8080

Let me highlight the two important parts here that mount the configuration file into our pod: first, in the pod spec we create a volume named krakend-cfg referencing the ConfigMap we created before, and second, we mount that volume into the container via volumeMounts (mountPath /etc/krakend).

Save the manifest file and apply it to the cluster.

$ kubectl apply -f api-gateway.yaml

deployment.apps/krakend-deploy created
service/apigateway created
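If you want to double-check that the configuration really ended up inside the container, a quick way is to cat the mounted file in the running pod (the label selector matches the manifest above):

$ kubectl exec $(kubectl get pods -l application=apigateway -o jsonpath='{.items[0].metadata.name}') \
   -- cat /etc/krakend/krakend.json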

The resulting architecture within the cluster is now as follows:

Architecture with krakend
Architecture with KrakenD API gateway

As a last step, we just need to retrieve the public IP of our “LoadBalancer” service.

$ kubectl get services

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
apigateway   LoadBalancer   10.0.26.150   104.45.73.37   8080:31552/TCP   4h53m
contacts     ClusterIP      10.0.155.35   <none>         8080/TCP         3h47m
kubernetes   ClusterIP      10.0.0.1      <none>         443/TCP          26h
mssqlsvr     ClusterIP      10.0.192.57   <none>         1433/TCP         3h59m

So, in our case here, we got 104.45.73.37. Let’s issue a few requests (either with a browser or a tool like httpie – which I use all the time) against the resulting URL http://104.45.73.37:8080/contacts.

$ http http://104.45.73.37:8080/contacts

HTTP/1.1 200 OK
Content-Length: 337
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 12:10:20 GMT
Server: Kestrel
Vary: Origin
X-Krakend: Version 1.2.0
X-Krakend-Completed: false

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]

Works like a charm! Also, have a look at the logs of the KrakenD container:

$ kubectl logs krakend-deploy-86c44c787d-qczjh -f=true

Parsing configuration file: /etc/krakend/krakend.json
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ ERROR unable to create the GELF writer: getting the extra config for the krakend-gelf module
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ INFO Listening on port: 8080
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN influxdb: unable to load custom config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN opencensus: no extra config defined for the opencensus module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN building the etcd client: unable to create the etcd client: no config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN bloomFilter: no config for the bloomfilter
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN no config present for the httpsecure module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.747 ▶ INFO registering usage stats for cluster ID '293C0vbu4hqE6jM0BsSNl/HCzaAKsvjhSbHtWo9Hacc='
[GIN] 2021/02/17 - 10:01:44 | 200 |    4.093438ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:46 | 200 |    5.397977ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:56 | 200 |    6.820172ms |      10.244.1.1 | GET      "/contacts"
[GIN] 2021/02/17 - 10:01:57 | 200 |    5.911475ms |      10.244.1.1 | GET      "/contacts"

As mentioned before, KrakenD logs its events to stdout and we can see how the requests come in, their destination and the time each request needed to complete at the gateway level.

Wrap-Up

In this brief article, I showed you how you can deploy KrakenD to an AKS/Kubernetes cluster on Azure and how to set up a first, simple example of exposing an API running in Kubernetes via the KrakenD API gateway. The project has so many useful features that this post only covers the very, very basic stuff. I really encourage you to have a look at the product when you consider hosting an API gateway within your Kubernetes cluster. The folks at KrakenD do a great job and are also open to pull requests, if you want to contribute to the project.

As mentioned at the beginning of this article, they recently published a version of their KrakenD container image to the Azure Marketplace. This gives you the ability to directly push their current and future images to your own Azure Container Registry, enabling scenarios like static image scanning, Azure Security Center integration, geo-replication etc. You can find their offering here: KrakenD API Gateway
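If you just want the public KrakenD image from Docker Hub in your own registry – note that this is the generic route, not the Marketplace offer itself – one option is az acr import; the registry name below is a placeholder:

# example only: import the public Docker Hub image into your own ACR
$ az acr import --name myregistry \
   --source docker.io/devopsfaith/krakend:1.2 \
   --image krakend:1.2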

Hope you enjoyed this brief introduction…happy hacking, friends! 🖖

Golo Roden: Basic Terms of Artificial Intelligence

Artificial intelligence (AI) has been one of the most important topics of recent years. At least a basic understanding is therefore helpful to put certain topics into the right perspective. Which basic terms of artificial intelligence should you know?

Golo Roden: How to Estimate Effort

Every developer knows the challenge of estimating the effort of developing code. Very few like doing it. Why is estimating so unpopular, why is it necessary at all, and what should you pay attention to?

Golo Roden: RTFM #3: Game Engine Black Book: Doom

The RTFM series presents timeless and recommendable books for developers at irregular intervals. It is primarily about technical books, but occasionally novels are included as well. Today it is about "Game Engine Black Book: Doom" by Fabien Sanglard.

Golo Roden: Five Measures for More Code Quality

Improving code quality is an important concern for many teams. There are a few basic measures that can be applied with relatively manageable effort. Which ones are they?

Jürgen Gutsch: Working inside a Docker container using Visual Studio Code

As mentioned in the last post, I want to write about remote working inside a Docker container. But first, we should get an idea of why we would ever want to work remotely inside a Docker container.

Why should I do that?

One of our customers is running an OpenShift/Kubernetes cluster and also likes to have the technology-specific development environments in a container that runs in Kubernetes. We had a NodeJS development container, a Python development container, and so on... All the containers had an SSH server installed, Git, the specific SDKs, and all the stuff that is needed to develop. Using VSCode we connected to the containers via SSH and developed inside the container.

Having the development environment in a container is one reason. Maybe not the most popular reason. But trying stuff inside a container because the local environment isn't the same makes a lot of sense. And debugging an application in a production-like environment makes absolute sense.

How does it work?

VSCode has a great set of tools to work remotely. I installed Remote - WSL (used in the last post), Remote - SSH was the one we used with OpenShift (maybe I will write about it, too), and with this post, I'm gonna use Remote - Containers. All three of them will work inside the Remote Explorer within VS Code. All three add-ins work pretty similarly.

If the remote machine doesn't have the VSCode Server installed, the remote tool will install and start it. The VSCode Server is like a full VSCode without a user interface. It also needs to have add-ins installed to work with the specific technologies. The local VSCode will connect to the remote VSCode Server and mirror it in the user interface of your locally installed VSCode. It is like a remote session to the other machine but feels local.

Setup the demo

I created a small ASP.NET Core MVC project:

dotnet new mvc -n RemoteDocker -o RemoteDocker
cd RemoteDocker

Then I added a Dockerfile to it:

FROM mcr.microsoft.com/dotnet/sdk:5.0

COPY . /app

WORKDIR /app

EXPOSE 5000 5001

# ENTRYPOINT ["dotnet", "run"] not needed to just work in the container

If you don't have the Docker extension installed, VSCode will ask you to install it as soon as you have the Dockerfile open. If it's installed, you can just right-click the Dockerfile in the VSCode Explorer and select "Build image...".

image-20210203220213602

This will prompt you for an image name. You can use the proposed name, which is "remotedocker:latest" in my case. It seems it uses the project name or the folder name, which makes sense:

image-20210203220356005
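If you prefer the terminal over the context menu, the same image can be built with the Docker CLI from the project folder; the tag simply mirrors the name VSCode proposes:

docker build -t remotedocker:latest .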

Select the Docker tab in VSCode and you will find your newly built image in the list of images:

image-20210203220705183

You can now right-click the latest tag and choose "Run Interactive". If you just choose "Run", the container stops, because we commented out the entry point. We need an interactive session. This will start up the container and it will now appear as a running container in the container list:

image-20210203220954330
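The same interactive start can also be done from the terminal. This is just an equivalent sketch; the container name and the port mapping are assumptions, not something VSCode requires:

docker run -it --rm --name remotedocker -p 5000:5000 remotedocker:latest /bin/bash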

You can browse and open the files inside the container from this containers list, but editing will not work. This is not what we want to do. We want to remotely connect VSCode to this Docker container.

Connecting to Docker

This can be done in two different ways:

  1. Just right-click the running container and choose "Attach Visual Studio Code"

image-20210203221956964

  2. Or select the Remote Explorer tab, ensure the Remote Containers add-in is selected in the upper-right dropdown box, and wait for the containers to load. Once the containers are visible, choose the one you want to connect to, right-click it and choose "Attach to container" or "Attach in New Window". It does the same thing as the previous way.

image-20210203221546976

Now you have a VSCode instance open that is connected to the container. You can now see the files in the project, you can use the terminal inside the container, and you can edit the files inside the project.

image-20210203222354886

You can see that this is a different VSCode than your local instance by having a look at the tabs on the left side. Not all the add-ins are installed on that instance. In my case, the database tools are missing as well as the Kubernetes tools and some others.

Working inside the Container

Since we disabled the entry point in the dockerfile we are now able to start debugging by pressing F5.

image-20210204221414743

This also opens the local browser and shows the application that is running inside the container. This is really awesome. It feels like really local development:

image-20210204222047723

Let's change something to see that this is really working. Like in the last demo, I'm going to change the page title. I would like to see the name "Remote Docker demo":

image-20210204222347623

Just save and restart debugging in VSCode:

image-20210204222617177

That's it.

Conclusion

Isn't this cool?

You can easily start docker containers to test, debug and develop in a production-like environment. You can configure a production-like environment with all Docker containers you need using docker-compose on your machine. Then add your development, or your testing container to the composition and start it all up. Now you can connect to this container and start playing around within this environment. It is all fast, accessible, and on your machine.
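A minimal sketch of that workflow, assuming you already have a docker-compose.yml describing your production-like environment plus your development container:

# start the whole composition in the background
docker-compose up -d

# list the running containers to find the one you want to work in
docker-compose ps

# then attach VSCode to that container via the Remote - Containers extension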

This is cool!

I'd like to see if this also works when the containers are running on Azure. I will try it within the next weeks and maybe I can put the results in a new blog post.

Golo Roden: Service Models in the Cloud

The term "cloud" has long since entered everyday language, but very few people can explain in detail what exactly it means. In fact, there is a definition by NIST that describes four service models. Which ones are they?

Jürgen Gutsch: Finally - My first book got published

I always had the idea to write a book. Twelve or thirteen years ago, Stefan Falz told me not to do it, because it is a lot of effort and takes a lot of your time. Even if my book is just a small one and smaller than Stefan's books for sure, now I know what he meant, I guess :-)

How it started

My journey of writing a book started in fall 2018 when I started the "Customizing ASP.NET Core" series. A reader asked me to bundle the series as a book. I took my time to think about it and started to work on it in July 2019. The initial idea to use LeanPub and create a book the open source way was good, but there was no pressure, no timeline, and that project had lower priority besides life and other stuff. The release of ASP.NET Core 5.0 was a good event to put some more pressure on it. From September last year on, I started to update all the contents and samples to ASP.NET Core 5.0. I also updated the text in a way that it matches a book more than a blog series.

Actually my very first book is a compilation of the old blog series, but updated to ASP.NET Core 5.0 and it includes an additional thirteenth chapter that wasn't part of the original series.

I was almost done by the end of October and ready to publish it around .NET Conf 2020, when .NET 5 and ASP.NET Core 5.0 were announced. Then I decided to try an experiment:

How it went

At that time, I was doing a technical review of a book about Blazor for Packt, and I decided to ask Packt whether my book was worth being published by them. They said yes and wanted to publish it. That was awesome. My idea was to improve the quality of the book, to have professional editors and reviewers, and most importantly, to not do the publishing and the marketing by myself.

The downside of this decision: I wasn't able to publish the book around the .NET Conf 2020. Packt started to work on it and it was a really impressive experience:

  • An editor worked on it to make the texts more "booky" than "bloggy", and I had to review and rework some texts
  • A fellow MVP Toi B. Wright did the technical review, and I had a lot more to fix.
  • Another technical reviewer executed all the samples and snippets, and I had to fix some small issues.
  • A copy editor went through all the chapters and had feedback about formatting.
  • In the meanwhile I had to work on the front matter and the preface.

I also never thought about a foreword of my book until I worked on the preface. I didn't want to write the foreword by myself and had the right person in mind.

I asked Damien Bowden, the smartest and coolest ASP.NET Core security guru I know. He is also a fellow MVP and a famous blogger. His posts get shared many times and are often mentioned in the ASP.NET Community Standup. It's always a pleasure to talk to him and we had a lot of fun at the MVP summits in Redmond and Bellevue.

Thanks Damien for writing this awesome foreword :-)

How it is right now

Sure, my very first book is just a compilation of the old blog series, but updated to ASP.NET Core 5.0 and it includes an additional thirteenth chapter that wasn't part of the original series:

  1. Customizing Logging
  2. Customizing App Configuration
  3. Customizing Dependency Injection
  4. Configuring and Customizing HTTPS
  5. Using IHostedService and BackgroundService
  6. Writing Custom Middleware
  7. Content negotiation using custom OutputFormatter
  8. Managing inputs with custom ModelBinders
  9. Creating custom ActionFilter
  10. Creating custom TagHelpers
  11. Configuring WebHostBuilder
  12. Using different Hosting models
  13. Working with Endpoint Routing

This book also contains details about ASP.NET Core 3.1. I mention 3.1 where it differs from 5.0, because ASP.NET Core 3.1 is an LTS version and some companies will definitely stay on LTS.

Packt helped me raise the quality of the content and it is now a compact cookbook with 13 recipes you should know about ASP.NET Core.

It is definitely a book for ASP.NET Core beginners who already know C# and the main concepts of ASP.NET Core.

Where to get it

Last Saturday, Packt published it on Amazon as a Kindle edition and as a paperback.

Damien, do you see your name below the title? ;-)

I guess it will be available on Packt as well soon, for those of you who have a Packt subscription.

It would be awesome if you would drop a review as soon as you have read it.

Thanks

I would like to say thanks to some people who helped me do this.

  • At first I say thanks to my family, friends, and colleagues who supported me and motivated me to finish the work.

  • I also say thanks to Packt. They did a great job supporting me and they added a lot more value to the book. I also like the cover design.

  • I say thanks again to Damien for that great foreword

  • Also thanks to the developer community and the readers of my blog, since this book is mainly powered by the community.

What's next?

My plan is to keep this book up-to-date. I will update the samples and concepts with every new major version.

For now, I will focus on my blog again. I've written almost nothing in the past six months. In any case, I already have an idea for another book :-)

Code-Inside Blog: Microsoft Graph: Read user profile and group memberships

In our application we have a background service, that “syncs” user data and group membership information to our database from the Microsoft Graph.

The permission model:

Programming against the Microsoft Graph is quite easy. There are many SDKs available, but understanding the permission model is hard.

‘Directory.Read.All’ and ‘User.Read.All’:

Initially we only synced the “basic” user data to our database, but then some customers wanted to reuse some other data already stored in the graph. Our app required the ‘Directory.Read.All’ permission, because we thought that this would be the “highest” permission - this is wrong!

If you need “directory” information, e.g. memberships, the Directory.Read.All or Group.Read.All is a good starting point. But if you want to load specific user data, you might need to have the User.Read.All permission as well.
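For illustration, the raw Graph calls behind this look roughly like the following; the user id and the bearer token are placeholders, and the permission mapping follows the reasoning above (User.Read.All for the profile, Directory.Read.All or a group read permission for the memberships):

# read the basic profile of a user
curl -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/users/{user-id}"

# read the directory/group memberships of that user
curl -H "Authorization: Bearer $TOKEN" \
  "https://graph.microsoft.com/v1.0/users/{user-id}/memberOf"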

Hope this helps!

Golo Roden: Was man über Unicode wissen sollte

Nahezu jede Entwicklerin und jeder Entwickler kennt Unicode, zumindest vom Hörensagen. Doch vielen ist nicht klar, was genau Unicode eigentlich ist, was Encodings sind und wie das alles im Detail funktioniert. Was sollte man über Unicode wissen?

Marco Scheel: Microsoft Teams Incoming Webhook update required

With the Message Center notification MC234048 Microsoft announced a change to the Microsoft Teams app “Incoming Webhook”. The URL currently used will be deprecated by mid-April 2021. The exact wording is:

We will begin transitioning to the new webhook URLs on Monday January 11, 2021; however, existing webhooks URLs will continue to work for three (3) months to allow for migration time

Source (as of 2021-01-26): https://admin.microsoft.com/Adminportal/Home?#/MessageCenter/:/messages/MC234048

If you created a webhook prior to January 11, 2021, you will need to update your existing connector configuration!

This app is in regular use by most companies, if not disabled by a Teams App permission policy in the tenant. The app is a very easy option to post a message to a team. The URI of a webhook is cryptic and the only security in place. If you send a well-crafted HTTP message to the endpoint, you will create a Teams post in the channel the app is connected to. Here is the Microsoft documentation and a great community article.

Currently Microsoft is using a non-tenant specific URI (outlook.office.com). The new URI will be tenant related (YOURTENANT.webhook.office.com).
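To give you an idea of how simple such a “well-crafted HTTP message” is, here is a minimal sketch; the webhook URL is a placeholder in the new tenant-specific format and the payload is the simplest possible text message:

curl -H "Content-Type: application/json" \
  -d '{"text": "Hello from the incoming webhook"}' \
  "https://YOURTENANT.webhook.office.com/..."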

This feature is communicated for Microsoft Teams, but it is also a Microsoft 365 Group connector feature, so these might also be affected.

image

Check if the app is used

It could be a good idea to check if the app is active in your tenant. As a Teams administrator you can check the application in your admin center.

image

Even if you checked the app and the Teams app permission policy, you could still have the app installed prior to this configuration. It is easy to check if the application is installed in a Microsoft Team. To query for installed apps we will need to use the preview version of the MicrosoftTeams module (1.1.10-preview as of writing). Using the Teams PowerShell module you can get a list of teams the app is installed in.

Get the application ID and more details:

Get-TeamsApp | Where-Object { $_.DisplayName -eq "Incoming Webhook"}

Result:

ExternalId Id                                   DisplayName      DistributionMethod
---------- --                                   -----------      ------------------
           203a1e2c-26cc-47ca-83ae-be98f960b6b2 Incoming Webhook store

With the application id we now can query all teams and check if the app is installed:

Get-Team | ForEach-Object {
    $team = $_;
    $apps = Get-TeamsAppInstallation -TeamId $team.GroupId | Where-Object { $_.TeamsAppId -eq "203a1e2c-26cc-47ca-83ae-be98f960b6b2"};
    if ($apps -ne $null){
        $team;
    }
}

Result for my two teams with the app installed:

GroupId                              DisplayName        Visibility  Archived  MailNickName       Description
-------                              -----------        ----------  --------  ------------       -----------
a6687ed4-c1a6-4c7b-9171-2d625a60b76e GK Malachor MSDN   Public      False     GKMalachorMSDN     Check here for or…
75366f42-6fc6-4857-90d1-3283236789b6 20200906 Demo Acc… Private     False     20200906DemoAcces… 20200906 Demo Acc…

Based on this information we can now contact the owners/members of a team and ask them to check if they use the app and need to update the URI. Currently I am not aware of a method to get the specific channel the webhook is attached to. The user needs to check all the channels to find the connectors.

How to fix the problem

The user needs to navigate to the team and check for the connector of all channels:

image

image

Open the “x configured” (1) if available and click on the “Manage” (2) button for the specific implementation:

image

This will show you the current configuration of the webhook:

image

You need to click on “Update URL” and you will receive a new URI with the tenant-specific part. The connector page did not refresh automatically; I quit the page and reopened the dialog. Now the page no longer complains about a required update and I could copy the new webhook URI:

image

Now you just need to remember and find the app you integrated the webhook in :)

NOTE: I was not able to update the incoming webhook if the account that created the webhook is not the account updating the webhook. You can see the account that did the setup in the connector list and you will notice the “Save” button is disabled. In this case an easy option is to delete the webhook and recreate it with the same name.

image

Summary

Check your tenant (admin) or teams (power users) for configured incoming webhooks. Remember: as soon as you update the URL, the old webhook URL will stop working and no longer accept messages. Updating the URL is only solving 50% of the problem. You also need to update your Power Automate flows, Azure Functions, Azure Automation Runbooks or your PowerShell scripts in your on-prem servers’ task scheduler.

Bonus

Get the owners of the groups to send an email:

Get-Team | ForEach-Object {
    $team = $_;
    $apps = Get-TeamsAppInstallation -TeamId $team.GroupId | Where-Object { $_.TeamsAppId -eq "203a1e2c-26cc-47ca-83ae-be98f960b6b2"};
    if ($apps -ne $null){
        Get-TeamUser -GroupId $team.GroupId -Role Owner | ForEach-Object {
            $owner = $_;
            $fields = @{
                Team = $team.DisplayName
                OwnerEmail = $owner.User
            }
            New-Object -TypeName PSObject -Property $fields;
        }
    }
}

Result: image

Golo Roden: Encryption with Elliptic Curves

Elliptic curves form the basis of modern asymmetric cryptography. Mathematically they are relatively complex, but how they work can nevertheless be explained in an accessible way. So how do they work?

Marco Scheel: Create your Azure AD application via script - M365.TeamsBackup

If you are using Azure AD authentication for your scripts, apps, or other scenarios, at some point you will end up creating your own application in your directory. Normally you open the Azure portal and navigate to the “App registrations” part of AAD. This is fine during development, but if you want to share the solution or a customer wants to run the software in their own tenant, things get complicated and error prone. For my Microsoft Teams backup solution this is very real, because you need to hit all required permissions and configure the public client part, otherwise the solution will not run.

This post provides you with all the needed information to create your own script. I’m using my M365 Teams Backup solution as a reference. The key components are:

image

Choose a scripting environment (Azure CLI vs Azure AD PS)

During my day job I created some applications based on Microsoft Graph and I tried a few approaches to script the Azure AD app creation. It is important to understand that an Azure AD application consists of two parts. The application registration is like a blueprint for your app. The enterprise application is the implementation of your blueprint.

The application permissions are defined in the “App registration”. Here you select the permissions that your app will request from users in the tenant. Without a consent, the permissions are not in effect. If you only have an app registered but have not received consent, the app will not be able to use the requested permissions. Check the Microsoft documentation for a deeper look at the consent framework.

Most of my applications leverage application permissions or require admin consent for delegated permissions. The “M365.TeamsBackup” solution is using a bunch of Microsoft Graph permissions and some of them are pretty powerful. If you have an application with this kind of permission requirements, it is needed to have admin consent given by a (best case) global administrator.

If your apps are like mine, it might be best to use the Azure CLI, because to the best of my knowledge this is the only way to script the admin consent. I am not a CLI guy. I am a PowerShell fan. I struggled in the past integrating the CLI and its output into my scripting flow. That is why I wanted to show you what and how it can be done. If you are OK with opening the portal to give admin consent, or you don’t want to give admin consent during application setup, I also have an Azure AD PowerShell version of the script.

Setup Azure CLI and connect

The Azure CLI is not purely targeted at Azure AD. It is the other way around, because the CLI is used to script all the Azure things available. There are great Microsoft docs on installing the Azure CLI. I’m running on Windows, so I typically go the MSI route:

  • Download the release version of the MSI (that is what I’m running)
  • Install the MSI (bring some extra time because the installation is slow)
  • After the download open a new PowerShell (this ensured the path is set and available)
  • You can check if the installation worked using the ‘az --version’ command

image

As you can see, my version is not up to date. As with most tools, you need to keep the CLI at the latest version. The Azure CLI can be updated by installing the newest MSI or by running ‘az upgrade’ in an admin terminal. The upgrade command will download the MSI and start the installation for you.

Installation is finished and now it is time to log in to your tenant. The CLI is different from your normal “PowerShell Connect-SERVICE” (SharePoint, AD, Teams, …) command. The Azure CLI will remember your last login. If you close and open your terminal you will still be logged in. If you use the Azure CLI just for the one-time setup, please consider a logout after you finish any script. But first, let’s log in. I’m a big fan of device code authentication where possible. Azure CLI is supporting this flow, so that is how I roll:

  • Login:
    • az login --use-device-code --allow-no-subscriptions
  • Check current login:
    • az account show
  • Logout:
    • az logout

image

Check my script using the Azure CLI:

Setup Azure AD PowerShell and connect

Azure AD PowerShell versioning is complicated. For my job (M365 Modern Collaboration) I am always using the AzureADPreview module and this is what I recommend in most cases. The AzureAD module cannot be installed side by side with the AzureADPreview module, so at some point you will have to move to the AzureADPreview. As the Azure CLI is not PowerShell based, I am using my Windows Terminal default, which is PowerShell 7. The Azure AD modules are not yet ready for PowerShell 7, so you will need to open your old-school PowerShell 5.

Installing the Azure AD module is like most modern modules and relies on the PowerShell Gallery.

  • Open your PowerShell as an administrator and execute
    • Install-module AzureADPreview
  • Check your version opening a non admin session
    • Get-Module AzureADPreview -ListAvailable

image

If you are not on the latest version, you need to upgrade the module like any other module:

  • Open your PowerShell as an administrator and execute
    • Update-Module AzureADPreview

image

To connect to Azure AD you cannot rely on device authentication and you will need to log in directly at script execution. If you need to execute multiple scripts, check if you want to disable the login command “Connect-AzureAD” in the script to prevent multiple logins (incl. MFA).

  • Open your PowerShell and execute
    • Connect-AzureAD

Check my script using Azure AD PowerShell:

Script the creation process

Now we are prepared, and we can create our application. The easy part is to create an “App registration”. If your app needs permissions, the trouble begins. There are two challenges:

  • Setting the permission in the two scripts
  • Getting the permission definition in the first place

Getting the permissions translated from the nice Azure AD portal UX to a script-ready solution is harder to research than expected. I’ve done a blog post (in German) about this in the past. In the next section I will show you how to get the ID of the Microsoft Graph application and the IDs of the required permissions.

My Teams Backup solution requires many permissions from the Microsoft Graph (this time delegated permissions, because some of the application permissions require Microsoft approval), so let’s have a look at an implementation that is not error prone and is also easy to read and extend.

Azure CLI

For reference: create-aadapp-cli.ps1
Microsoft docs: az command overview

Use the Azure CLI to query the Azure Active Directory for the service principal with the name of the Microsoft Graph.

$servicePrincipalName = "Microsoft Graph";
$servicePrincipalId = az ad sp list --filter "displayname eq '$servicePrincipalName'" --query '[0].appId' | ConvertFrom-Json

Using the query parameter we select the first result (there is only one Microsoft Graph) and “cast” the app ID. Using the ConvertFrom-JSON makes it easy to parse the result and we receive “00000003-0000-0000-c000-000000000000” as the value for the app id.

Next, we need to get the ID for each required permission. This info is part of the “oauth2Permissions” property from the MS Graph service principal:

$servicePrincipalNameOauth2Permissions = @("Channel.ReadBasic.All", "ChannelMember.Read.All", "ChannelMessage.Read.All", "ChannelSettings.Read.All", "Group.Read.All", "GroupMember.Read.All", "Team.ReadBasic.All", "TeamMember.Read.All", "TeamSettings.Read.All", "TeamsTab.Read.All");

# hashtable for the requiredResourceAccess block; initialized here so the snippet
# is self-contained (assuming the structure expected by 'az ad app create')
$reqGraph = @{
    resourceAppId  = $servicePrincipalId
    resourceAccess = @()
}

(az ad sp show --id $servicePrincipalId --query oauth2Permissions | ConvertFrom-Json) | ? { $_.value -in $servicePrincipalNameOauth2Permissions} | % {
    $permission = $_

    $delPermission = @{
        id = $permission.Id
        type = "Scope"
    }
    $reqGraph.resourceAccess += $delPermission
}

Using the “-in” filter we receive all specified entries of the array we need. To use the IDs in the next command, the script creates a hashtable that can be converted into the needed JSON file (yes, a file). The permissions are added as “Scope”, representing “Delegated” permissions. The “az ad app create” command will require a file with the permissions.

Set-Content ./required_resource_accesses.json -Value ("[" + ($reqGraph | ConvertTo-Json) + "]")
$newapp = az ad app create --display-name $appName --available-to-other-tenants false --native-app true --required-resource-accesses `@required_resource_accesses.json | ConvertFrom-Json

This creates an app that is only valid in your tenant “--available-to-other-tenants false” and allows the login as a public client “--native-app true”. The result is a JSON representing the new application.

The benefit of using the Azure CLI is the possibility to grant admin consent for the newly created app

az ad app permission admin-consent --id $newapp.appId

PowerShell with Azure AD

For reference: create-aadapp.ps1
Microsoft docs: Azure AD Application command overview

To get the ID for the Microsoft Graph Service principal we query the current directory and filter to the display name.

$servicePrincipalName = "Microsoft Graph";
$servicePrincipal = Get-AzureADServicePrincipal -All $true | ? { $_.DisplayName -eq $servicePrincipalName };

Where the Azure CLI requires a file to set up permissions, the PowerShell version requires a .NET object. The Microsoft Graph service principal ID is the ResourceAppId.

$reqGraph = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess";
$reqGraph.ResourceAppId = $servicePrincipal.AppId;

From the returned object we can select the “Oauth2Permissions” property to filter on our array with the required permissions. For each permission another .NET object is created and added to the collection named “ResourceAccess”.

$servicePrincipalNameOauth2Permissions = @("Channel.ReadBasic.All", "ChannelMember.Read.All", "ChannelMessage.Read.All", "ChannelSettings.Read.All", "Group.Read.All", "GroupMember.Read.All", "Team.ReadBasic.All", "TeamMember.Read.All", "TeamSettings.Read.All", "TeamsTab.Read.All");
$servicePrincipal.Oauth2Permissions | ? { $_.Value -in $servicePrincipalNameOauth2Permissions} | % {
    $permission = $_
    $delPermission = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList $permission.Id,"Scope" #delegate permission (oauth) are always "Scope"
    $reqGraph.ResourceAccess += $delPermission
}

Now it is time to set up the application (only in this directory and as a public client) and retrieve the ID of the new app:

$newapp = New-AzureADApplication -DisplayName $appName -AvailableToOtherTenants:$false -PublicClient:$true -RequiredResourceAccess $reqGraph;
"ClientId: " + $newapp.AppId;
"TenantId: " + (Get-AzureADTenantDetail).ObjectId;
"Check AAD app: https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationMenuBlade/CallAnAPI/appId/" + $newapp.AppId + "/objectId/" + $newapp.ObjectId + "/isMSAApp/";

The last line creates a link to the Azure AD to grant admin consent.

Summary

You can check out my linked solution to get the full picture, from the client code using the app to the setup required for authentication. I would recommend checking out the Azure CLI, because it is the most complete solution, even though it does not feel natural to me as a PowerShell guy. The example should give you an idea of how to get the needed IDs and how to construct the required objects/file to create the app. Let me know how you set up Azure AD apps and whether there are other options. I’ve ignored the PowerShell Az module because you are not able to grant admin consent with it either, and chances are higher you may have AzureAD PowerShell installed already.

Golo Roden: Which Programming Languages Should You Learn?

Personal development is a big topic for many developers, especially at the beginning of a new year. Learning a new programming language is particularly well suited for this. Which languages are worth considering?

Christina Hirth : Project vs. Product Development – a Comparison

Last week I realized that I had a blind spot: I thought that every developer is aware that selling software and delivering a product are not quite the same. As it turned out, I was wrong, so I created this list to explain what I mean.

Goals And Interests In Project Development (aka Feature Factory):

  • The main stakeholder is the company paying for the features (called client further on), not the customer who is using them.
  • The responsibility for maintaining and evolving of the platform is not my job.
  • The requirements are defined by the client: I have no way to validate them because I have no contact with the users of the features. Feedback-based decisions are not possible.
  • Fast development but slow delivery.
  • Features are defined as a whole and delivered as a whole, not iteratively. Visual requirements (mock-ups) are non-negotiable because they are ordered as-is, even if the end user might not see it that way.
  • Perfection instead of usability.
  • Innovation is limited by restricted access to the infrastructure or other 3rd party services used by the client.
  • No involvement in long- and medium-term planning, as the goals of the client are not my goals. Very limited possibility to plan the architecture aligned with the strategy of the client.
  • The product my company sells is time and/or LoC. (Disclaimer: this would not be the case when working with Extreme Contracts)
  • The most important metrics are:
    • hours per week,
    • features per unit of time,
    • LoC

Goals And Interests In Product Development:

  • The main stakeholders are the end customers and the company itself (me and my team included).
  • The main goal is to identify users’ problems, develop solutions for them and solve them in the correct order. The job is no longer spending time with work or moving tasks on a Jira board, but to provide solutions.
  • Nowadays, with a large number of competitors who could appear every day, time-to-market (i.e. time) is decisive, but not at the expense of quality.
  • We own the maintenance and the evolution of the platform. It is our interest to produce high quality and robust software.
  • Through the cooperation of business analysts, UX experts, software developers and cloud experts, we are able to deliver features (capabilities) step by step, measure their benefits and decide on the next measures.
  • I can use all my skills and my company can benefit from them.
  • The user stories are written in a business-oriented manner, they can be taken literally. They document the proposed solution, can be cut into meaningful slices to be implemented quickly and reliably and to be delivered fast.
  • “Fail Fast” and “Inspect and Adapt” are the most important principles.
  • Usability, not perfection.
  • The most important metrics are:
    • customer satisfaction (measured with business metrics and the usage of delivered features),
    • lead time (time between idea and in use),
    • time to recovery,
    • change failure rate (Accelerate)

Jürgen Gutsch: Working inside WSL using Visual Studio Code

It has been a long time since I wrote the last post... I was kind of busy finalizing a book. Also, COVID-19 and the remote-only working periods stole my commuting writing time: two hours on the train that I used to use to write blog posts and stuff. My book is finished and will be published soon, and to make 2021 better than 2020, I am forcing myself to write for my blog again.

For a while now, I have had WSL2 (Windows Subsystem for Linux) installed on my computer to play around with Linux and to work with Docker. We did a lot with Docker last year at the YOO and it is pretty easy using Docker Desktop and the WSL. Recently I had to check a demo building and running on Linux. My first thought was to use a Docker container to work with, but this seemed to be too much effort for a simple check.

So why not do this in the WSL directly?

If you don't have the WSL installed, you should follow this installation guide: https://docs.microsoft.com/en-us/windows/wsl/install-win10

If the WSL is installed, you will have an Ubuntu terminal to work with. It seems this hosts the wsl.exe, which is the actual bash to work with:

bash1

You can also start the wsl.exe directly, or host it in the Windows Terminal or in cmder, which is my favorite terminal:

cmder

Installing the .NET 5 SDK

The installation packages for the Linux distributions are a little bit hidden inside the docs. You can follow the links from https://dot.net or just look here for the Ubuntu packages: https://docs.microsoft.com/de-de/dotnet/core/install/linux#ubuntu

As you can see in the first screenshot, my WSL2 is based on Ubuntu 18.04 LTS. So, I should choose the link to the package for this specific version:

ubuntu packages
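If you are not sure which Ubuntu release your WSL distribution runs, you can check it from inside the WSL terminal before picking a package source:

lsb_release -a
# or
cat /etc/os-release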

The link forwards me to the installation guide.

At first, I need to download and register the Microsoft package repository and its signing key. Otherwise, I won't be able to download and install the package:

wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

After that, I can install the .NET 5 SDK:

sudo apt-get update; \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-5.0

This needs some time to finish. Once it is done, you can verify the installation by typing dotnet --info into the terminal:

dotnet --info

That's it about the installation of the .NET 5 SDK. Now let's create a project

Creating a ASP.NET Core project inside the WSL

This doesn't really differ from creating a project on Windows, except it is on the Linux file system.

Create a Razor Pages project using the dotnet CLI

dotnet new webapp -o wsldemo -n wsldemo
cd wsldemo

After changing into the project directory you can start it using the following command

dotnet run

You can now see the familiar output in your terminal:

dotnet run

The cool thing now is that you can call the running web app with your local browser. The request gets directly forwarded into the WSL:

WSL demo
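You can also verify the forwarding from the Windows side, for example from a command prompt (curl.exe ships with current Windows 10 builds). The ports are the Kestrel defaults of the webapp template, so adjust them if your launch settings differ:

curl -k https://localhost:5001
# or the plain HTTP endpoint
curl http://localhost:5000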

That's it about creating and running an application inside the WSL. Let's see how you can use your local VSCode to develop inside the WSL

Developing inside WSL using Visual Studio Code

To remotely develop in the WSL using VSCode, you need to have the Remote - WSL extension installed

Remote WSL

This extension will be visible in the Remote Explorer in VS Code. It directly shows you the existing WSL Target on your computer:

Remote Explorer

Right-click the Ubuntu-18.04 item and connect, or click the small connect icon on the right of the WSL item to connect to the WSL. This opens a new instance of VSCode that doesn't have a folder open. If you now open a folder, you can directly select the project folder from inside the WSL:

Open Folder

Click OK or press Enter if you selected the right folder. When you connect for the first time, it installs the VSCode Server inside the WSL, which is the actual VSCode instance that does the actual work. You really work, code, and debug inside the WSL. Your local VSCode instance is a terminal session into the WSL. IntelliSense, code analysis, and all the good stuff act inside the WSL. This also means you might need to install VSCode extensions again in the WSL, even if you already installed them on your machine. Even the VSCode terminal is connected to the WSL:

VSCode Terminal

The Explorer shows you the current project:

VSCode Explorer

To see that remote coding is working, I open the _Layout.cshtml in the Pages/Shared/ folder and change the app title to make it a little more readable. I change all occurrences of wsldemo to WSL Demo:

WSL Demo code

There is another occurrence at the end of the file.
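Just to illustrate the change (a sketch based on the default Razor Pages template; the generated markup in your project may differ slightly), the affected places in _Layout.cshtml look roughly like this afterwards:

<!-- page title in the <head> section -->
<title>@ViewData["Title"] - WSL Demo</title>

<!-- navbar brand in the header -->
<a class="navbar-brand" asp-area="" asp-page="/Index">WSL Demo</a>

<!-- the occurrence in the footer at the end of the file -->
&copy; 2021 - WSL Demo - <a asp-area="" asp-page="/Privacy">Privacy</a>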

What I hadn't tried until writing these lines was pressing F5 in VSCode to start debugging the application. So I do it now, and voilà: debugging starts, a browser opens, and it shows my changes:

WSL Demo

That's it.

Conclusion

This was really easy and went smoothly. Microsoft did a lot to make remote development as easy as possible. Now I'm able to test my applications on Linux and to develop for Linux.

Actually, I didn't expect that I could call a web application running inside the WSL directly from a browser on Windows. This makes testing and front-end debugging really easy.

To not mess up the WSL, I would avoid doing too many different things on it. Installing a .NET 5 runtime isn't a big deal, but if I also wanted to test an Nginx integration or other stuff, I would go with Docker containers. Remote development inside a Docker container is also possible, and I will write about it in one of the next posts.

Holger Schwichtenberg: Plans for Entity Framework Core 6.0 published

Microsoft has announced a list of features that are planned to ship in version 6.0 of the object-relational mapper in November 2021.

Holger Schwichtenberg: Many roads lead to Rome in PowerShell, and the performance differences between them

In Microsoft's PowerShell there are often several ways to reach a goal, and sometimes there are considerable differences in speed between them.

Martin Richter: HP + UPS = the service from hell / or: you need a lot of patience (part 1)

After a long time, I decided last summer to retire my long-serving Samsung laptop. I treated myself to an HP Envy 15″ laptop, a really nice one with all the bells and whistles.
Above all, I wanted a touchscreen.

At some point I noticed that the battery level indicator was inaccurate, and the HP Assistant also told me I should calibrate my battery. Whatever that is good for.

That didn't work, though. The prompt to calibrate remained. So, when I finally had some time and the laptop wasn't needed that much, I contacted HP. Here is the sequence of events in chronological order:

Tue – 22.12.2020, 16:39 – first contact with HP support
A new experience as well: via Facebook chat. After a number of instructions and tests I was asked to run through, there was no improvement.

Wed – 23.12.2020, 08:35 – contacted HP again
By now I had uploaded all logs, messages, and screenshots.
The laptop has to be sent in.

Wed – 23.12.2020, 10:55 – reply from HP
The laptop has to be sent in, along with the following instructions:
– The laptop will be picked up
– I have to make a backup (annoying, but I would never have left my data on the laptop anyway)
– The laptop will be picked up by UPS
– UPS will bring the packaging and the shipping label
That was moving too fast for me… wipe my data off the laptop, make a backup…

Wed – 23.12.2020, 15:15 – I give the OK for the pickup
OK, that went faster than expected after all.
I pass on all my contact details, in case they are not already on file.

Thu – 24.12.2020, 08:43 – reply from HP
Once again I receive the instructions: "Do not pack it up", "the label comes from UPS", "the box comes from UPS".

Thu – 24.12.2020, 09:38 – email from HP (hp.customer.care@hp.com)
Email confirmation with the case data and all the known information once again.
With a confirmed pickup date of 28.12. by UPS.

Mon – 28.12.2020, 07:04 – email from UPS (HP.notifications@ups.com)
Email confirmation with the confirmed pickup date of 28.12. by UPS.

Mon – 28.12.2020, morning
UPS usually comes to us in the morning or later in the afternoon.
A UPS truck drives past our house. OK, patience. They drive by more often during the Christmas season anyway.
But no UPS on Monday.
And yes, our address can be found on Google. Our address has been unchanged for years. We have been receiving deliveries from parcel services for years! (Just in case anyone thinks we live in the middle of nowhere.)

Tue – 29.12.2020, morning
UPS drives past our house again. On the UPS website, I can't see anything about my shipment in the tracking. No information about the pickup. No address. Nothing. Not even that anything is supposed to be picked up.

I have no desire to wait for UPS forever, so I call the hotline.

Tue – 29.12.2020, 11:35 – call to UPS (20 cents per call)

I give my UPS tracking number and am told that there is no pickup registered for this tracking number. I ask for a supervisor.

An unpleasant conversation:
– UPS would never provide boxes.
– I would have to print the label and arrange the pickup myself.
– I could never have received a confirmation from UPS by email.
– The agent cannot give me an email address at UPS, otherwise I would have sent him the email.
I end the call absolutely furious.

Tue – 29.12.2020, 11:50 – call to HP

After x minutes and a sympathetic conversation, I end up with the logistics department.
They say they will take care of it and escalate it with UPS.

Tue – 29.12.2020, afternoon

No UPS.

Tue – 29.12.2020, 16:49 – call to UPS (20 cents per call)

Exactly the same course of events.
– Supposedly I have no pickup appointment.
– I am supposed to have a label.
I am fuming and the customer service agent is pointedly unwilling.
Apparently nobody at UPS has any idea how the process with HP is supposed to work.

Tue – 29.12.2020, 16:59 – call to HP

The logistics department again. Again the assurance: "We will take care of it!"

Tue – 29.12.2020, 17:24 – email from HP

The shipment information once again, plus the request that I should contact UPS. Including the familiar 20-cent phone number.
Should I laugh or cry?

Wed – 30.12.2020, morning

As expected, the UPS truck drives past our house.

Wed – 30.12.2020, 10:30 – call to HP

Another conversation with a support agent.
He does understand my frustration, but it doesn't help.
Another conversation with someone from the logistics department. Again the promise that something will happen.
Once more I pass on my phone number and mobile number.
I am promised that I will receive a text message or a call.

Thu – 31.12.2020, 10:30
No UPS. No promised text message. No further email. No call!
I am furious. I have no desire to spend my time waiting, or to instruct the other people in the house what to do when UPS shows up (including writing a note and hanging it on the door).

HP is friendly but incapable of getting anything moving.

At UPS nobody has a clue, and they aren't even willing to help you. (And on top of that you pay 20 cents per call.)

Note: At UPS I never once spoke to anyone who understood me 100%, or whom I could understand 100%.
I have nothing against an accent, but one should be able to understand and express oneself.

BTW: If the calibration check had already been part of HP's final quality control, the device would never have been shipped. But here, too, everything is simply offloaded onto the customer.

To be continued…





Code-Inside Blog: How to get all distribution lists of a user with a single LDAP query

In 2007 I wrote a blogpost about how easy it is to get all “groups” of a given user via the tokenGroups attribute.

Last month I was given the task of checking why “distribution list memberships” were not part of that result.

The reason is simple:

A pure distribution list (one that is not security-enabled) is not a security group, and only security groups are part of the “tokenGroups” attribute.

After some thought and discussion we agreed that it would be good to enhance our function and treat distribution lists like security groups.

How to get all distribution lists of a user?

Getting all groups of a given user might seem trivial, but the problem is that groups can contain other groups. As always, there are a couple of ways to get a “full flat” list of all group memberships.

A stupid way would be to load all groups in a recursive function - this might work, but it will result in a flood of requests.

A clever way would be to write a good LDAP query and let the Active Directory do the heavy lifting for us, right?

1.2.840.113556.1.4.1941

I found some sample code online with a very strange-looking LDAP query, and it turns out there is a “magic” LDAP matching rule called “LDAP_MATCHING_RULE_IN_CHAIN” that does everything we are looking for:

// "groups" is the result list (List<GroupResult>) that is filled with every
// security group and distribution list the user is a member of - transitively.
var getGroupsFilterForDn = $"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:= {distinguishedName}))";

using (var dirSearch = CreateDirectorySearcher(getGroupsFilterForDn))
{
    using (var results = dirSearch.FindAll())
    {
        foreach (SearchResult result in results)
        {
            if (result.Properties.Contains("name") && result.Properties.Contains("objectSid") && result.Properties.Contains("groupType"))
            {
                groups.Add(new GroupResult()
                {
                    Name = (string)result.Properties["name"][0],
                    GroupType = (int)result.Properties["groupType"][0],
                    ObjectSid = new SecurityIdentifier((byte[])result.Properties["objectSid"][0], 0).ToString()
                });
            }
        }
    }
}

With the distinguishedName of the target user, we can load all distribution and security groups (see below…) transitively!
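The CreateDirectorySearcher helper used above isn't part of the snippet. Here is a minimal sketch of what it could look like, assuming a plain System.DirectoryServices searcher rooted at the current domain (the search root, page size, and loaded properties are assumptions, not the author's original code):

using System.DirectoryServices;

internal static class AdSearchHelpers
{
    internal static DirectorySearcher CreateDirectorySearcher(string filter)
    {
        // Resolve the default naming context of the current domain, e.g. "DC=example,DC=local".
        string defaultNamingContext;
        using (var rootDse = new DirectoryEntry("LDAP://RootDSE"))
        {
            defaultNamingContext = (string)rootDse.Properties["defaultNamingContext"].Value;
        }

        var searcher = new DirectorySearcher(new DirectoryEntry($"LDAP://{defaultNamingContext}"))
        {
            Filter = filter,
            PageSize = 1000 // enable paging so large result sets are not truncated by the server
        };

        // Only load the attributes the calling code actually reads.
        searcher.PropertiesToLoad.AddRange(new[] { "name", "objectSid", "groupType" });
        return searcher;
    }
}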

Combining tokenGroups with this approach

During our testing we found some minor differences between the LDAP_MATCHING_RULE_IN_CHAIN and the tokenGroups approaches. Some “system-level” security groups were missing with the LDAP_MATCHING_RULE_IN_CHAIN approach. In our production code we use a combination of the two approaches, and it seems to work.
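For completeness, a hedged sketch of the tokenGroups side (the helper name and surrounding class are made up for this example; the attribute itself is the standard constructed tokenGroups attribute, which contains the SIDs of all transitive security groups):

using System.Collections.Generic;
using System.DirectoryServices;
using System.Security.Principal;

internal static class TokenGroupHelpers
{
    internal static IEnumerable<string> GetTokenGroupSids(DirectoryEntry user)
    {
        // tokenGroups is a constructed attribute and has to be requested explicitly.
        user.RefreshCache(new[] { "tokenGroups" });

        foreach (byte[] sidBytes in user.Properties["tokenGroups"])
        {
            // Each value is the binary SID of a (transitive) security group.
            yield return new SecurityIdentifier(sidBytes, 0).ToString();
        }
    }
}

Merging the two result sets by SID then gives the combined list described above.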

Full demo code showing how to get all distribution lists for a user can be found on GitHub.

Hope this helps!
