Golo Roden: Tools for Web and Cloud Development

The previous episode of "Götz & Golo" dealt with the question of when teams work well together. The focus was on working remotely versus on site. But what about the tools being used?

Holger Schwichtenberg: User Group Talk and Workshop on Continuous Delivery with Azure DevOps

The Dotnet-Doktor is giving a talk on November 7 and offering a workshop in Essen from December 2 to 4.

Norbert Eder: Logging MySQL Queries

With Microsoft SQL Server you can log SQL queries quite easily by simply starting the SQL Server Profiler. MySQL doesn't offer such a tool, or at least the MySQL Workbench can't do it. Nevertheless, the issued queries can be recorded.

For example, you can write all queries to a log file:

SET global general_log_file='c:/Temp/mysql.log'; 
SET global general_log = on; 
SET global log_output = 'file';

This can of course be deactivated again:

SET global general_log = off; 
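
The log can also be written to a database table instead of a file. This variant is not from the original post, but it is standard MySQL functionality (assuming MySQL 5.1 or later, where the mysql.general_log table exists):

SET global log_output = 'table';
SET global general_log = on;

-- the recorded queries can then be inspected with plain SQL
SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC;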

Further information can be found in the MySQL documentation.


Christina Hirth: My Reading List @KDDDConf

(formerly known as KanDDDinsky 😉)

Accelerate - Building and Scaling High Performing Technology Organizations

Accelerate by Nicole Forsgren, Gene Kim, Jez Humble

This book was referenced in a lot of talks, mostly with the same phrase: "hey folks, you have to read this!"


Domain Modeling Made Functional by Scott Wlaschin

The book was called the only real currently published reference work on DDD for functional programming.

More books and videos can be found on fsharpforfunandprofit.


Functional Core, Imperative Shell by Gary Bernhardt – a talk

The comments on this tweet tell me that watching this video is long overdue …


37 Things One Architect Knows About IT Transformation by Gregor Hohpe

The name @ghohpe was also mentioned a few times at @KDDDconf


Domain Storytelling

A Collaborative Modeling Method

by Stefan Hofer and Henning Schwentner


Drive: The Surprising Truth About What Motivates Us by Daniel H. Pink

There is also a TL;DR version: a talk on Vimeo


Sapiens – A Brief History of Humankind by Yuval Noah Harari

This book was recommended by @weltraumpirat after our short discussion about how broken our industry is. Thank you Tobias! I'm afraid the book will give me no happy ending.

UPDATE:

It is not a take-away from KDDD-Conf but still a must-have book (thank you Thomas): The Phoenix Project

Jürgen Gutsch: ASP.NET Core 3.0 Weather Application - The gRPC Server

Introduction

As mentioned in the last post, the next couple of posts will form a series that describes how to build a kind of microservice application that reads in weather data, stores it, and provides statistical information about that weather.

I'm going to use gRPC, Worker Services, SignalR and Blazor, and maybe IdentityServer to secure all the services. If some time is left, I'll put all the stuff into Docker containers.

I will write a small gRPC service which will be our weather station in Kent. I'm also going to write a worker service that hosts a gRPC client to connect to the weather station and fetch the data every day. This worker service also stores the data in a database. The third application is a Blazor app that fetches the data from the database and displays it in a chart and in a table.

In this case I use downloaded weather data of Washington state and I'm going to simulate a day in two seconds.

In this post I will start with the weather station.

Setup the app

In my local git project dump folder I create a new folder called WeatherStats, which will be my project solution folder:

mkdir weatherstats
cd weatherstats
dotnet new sln -n WeatherStats
dotnet new grpc -n WeatherStats.Kent -o WeatherStats.Kent
dotnet sln add WeatherStats.Kent

The first two lines create the folder and change into it, and the third line creates a new solution file (sln) with the name WeatherStats. The fourth line creates the gRPC project and the last line adds the project to the solution file.

The solution file helps MSBuild to build all the projects, to see the dependencies and so on. And it helps users who like to use Visual Studio.

If this is done I open VSCode using the code command in the console:

code .

The database is the SQLite database that I created for my talk about the ASP.NET Core Health Checks. Just copy this database to your own repository into the folder of the weather station WeatherStats.Kent.

In the Startup.cs we only have the services for gRPC registered:

services.AddGrpc();

But we also need to add a DbContext:

services.AddDbContext<ApplicationDbContext>(options =>
{
    options.UseSqlite(
        Configuration["ConnectionStrings:DefaultConnection"]);
});
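
Note (not from the original post): the gRPC template doesn't reference the EF Core SQLite provider, so — assuming it isn't already referenced — the package needs to be added first:

dotnet add package Microsoft.EntityFrameworkCore.Sqlite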

The configuration points to a SQLite database in the current project:

{
  "ConnectionStrings": {
    "DefaultConnection": "Data Source=wa-weather.db"
  },

In the Configure method the gRPC middleware is mapped to the WeatherService:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseRouting();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<WeatherService>();

        endpoints.MapGet("/", async context =>
        {
            await context.Response.WriteAsync("Communication with gRPC endpoints must be made through a gRPC client. To learn how to create a client, visit: https://go.microsoft.com/fwlink/?linkid=2086909");
        });
    });
}

Special to this project type is the proto folder with the greet.proto in it. This is a text file that describes the gRPC endpoint. We are going to rename it to weather.proto later on and change it a little bit. If you change the name outside of Visual Studio 2019, you also need to change it in the project file. I never tried it, but the Visual Studio 2019 tooling should also rename the references.

You will also find a GreeterService in the Services folder. This file is the implementation of the service that is defined in the greet.proto.

And last but not least we have the DbContext to create, which isn't really complex in our case:

public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<WeatherData>()
            .HasKey(x => x.Id );
        modelBuilder.Entity<WeatherData>()
            .HasOne(p => p.WeatherStation)
                .WithMany(b => b.WeatherData);
        modelBuilder.Entity<WeatherStation>()
            .HasKey(x => x.Id);
    }

    public DbSet<WeatherData> WeatherData { get; set; }
    public DbSet<WeatherStation> WeatherStation { get; set; }
}
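
The entity classes themselves aren't shown in the post. Derived from the DbContext above and the service code further down, they could look roughly like this sketch (the key types, the Name property and the spelling of Precipitaion are assumptions based on how they are used):

public class WeatherData
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public float AvgTemperature { get; set; }
    public float MinTemperature { get; set; }
    public float MaxTemperature { get; set; }
    public float AvgWindSpeed { get; set; }
    public float Precipitaion { get; set; } // spelling matches the proto field below
    public int WeatherStationId { get; set; }
    public WeatherStation WeatherStation { get; set; }
}

public class WeatherStation
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<WeatherData> WeatherData { get; set; }
}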

The gRPC endpoint

Let's start changing the gRPC endpoint. Personally, I really love starting to code from the UI perspective; this forces me to not do more than the UI really needs. In our case the gRPC endpoint is the UI. So I use the weather.proto file to design the API:

syntax = "proto3";
import "google/protobuf/timestamp.proto";

option csharp_namespace = "WeatherStats.Kent";

package Weather;

// The weather service definition.
service Weather {
  // Sends a greeting
  rpc GetWeather (WeatherRequest) returns (WeatherReply);
}

// The request message containing the date.
message WeatherRequest {
  google.protobuf.Timestamp date = 1;
}

// The response message containing the weather.
message WeatherReply {
  google.protobuf.Timestamp date = 1;
  float avgTemperature = 2;
  float minTemperature = 3;
  float maxTemperature = 4;
  float avgWindSpeed = 5;
  float precipitaion = 6;
}

I need to import the timestamp support to work with dates. The namespace was predefined by the tooling. I changed the package name and the service name to Weather. The rpc method is now called GetWeather; it takes a WeatherRequest as an argument and returns a WeatherReply.

After that the types (messages) are defined. The WeatherRequest only has the date in it, which is the requested date. The WeatherReply also contains the date as well as the actual weather data of that specific day.

That's it. When I now build the application, the gRPC tooling generates a lot of C# code in the background for us. This code is used in the WeatherService, which fetches the data from the database:

public class WeatherService : Weather.WeatherBase
{
    private readonly ILogger<WeatherService> _logger;
    private readonly ApplicationDbContext _dbContext;

    public WeatherService(
        ILogger<WeatherService> logger,
        ApplicationDbContext dbContext)
    {
        _logger = logger;
        _dbContext = dbContext;
    }

    public override Task<WeatherReply> GetWeather(
        WeatherRequest request, 
        ServerCallContext context)
    {
        var weatherData = _dbContext.WeatherData
            .SingleOrDefault(x => x.WeatherStationId == WeatherStations.Kent
                && x.Date == request.Date.ToDateTime());

        return Task.FromResult(new WeatherReply
        {
            // fall back to the requested date if no matching record was found
            Date = Timestamp.FromDateTime(weatherData?.Date ?? request.Date.ToDateTime()),
            AvgTemperature = weatherData?.AvgTemperature ?? float.MinValue,
            MinTemperature = weatherData?.MinTemperature ?? float.MinValue,
            MaxTemperature = weatherData?.MaxTemperature ?? float.MinValue,
            AvgWindSpeed = weatherData?.AvgWindSpeed ?? float.MinValue,
            Precipitaion = weatherData?.Precipitaion ?? float.MinValue
        });
    }
}

This service fetches a specific WeatherData item from the database using the Entity Framework Core DbContext that we created previously. gRPC has its own date and time implementation, which lives in the Google.Protobuf.WellKnownTypes namespace of the Google.Protobuf NuGet package. That namespace also provides functions to convert between the two date and time implementations.
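
As a quick illustration of that conversion (a minimal sketch, not from the original post): Timestamp.FromDateTime only accepts DateTime values whose Kind is Utc, so dates loaded from the database may need their kind set explicitly:

// DateTime -> protobuf Timestamp; FromDateTime throws if the Kind isn't Utc
var timestamp = Timestamp.FromDateTime(DateTime.SpecifyKind(weatherData.Date, DateTimeKind.Utc));

// protobuf Timestamp -> DateTime; the resulting Kind is always Utc
DateTime date = request.Date.ToDateTime();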

The WeatherService derives from the WeatherBase class, which is auto-generated from the weather.proto file. The types WeatherRequest and WeatherReply are also auto-generated as defined in the weather.proto. As you can see, the WeatherBase lives in the WeatherStats.Kent.Weather namespace, which is a combination of the csharp_namespace and the package name.

That's it. We are able to test the service after the client is done.

Conclusion

This is all the code for the weather station. Not really complex, but enough to demonstrate the gRPC server.

In the next part, I will show how to connect to the gRPC server using a gRPC client and how to store the weather data in a database. The client will run inside a worker service to fetch the data regularly, e.g. once a day.

Code-Inside Blog: IdentityServer & Azure AD Login: Unknown Response Type text/html

The problem

Last week we had some problems with our Microsoft Graph / Azure AD login based system. From a user perspective it was all good until the redirect from the Microsoft Account to our IdentityServer.

As STS and for all auth related stuff we use the excellent IdentityServer4.

We used the following configuration:

services.AddAuthentication()
            .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
            {
                options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                options.ClientSecret = office365Config.MicrosoftAppClientSecret;    // Client-Secret from the AppRegistration 
                options.Authority = office365Config.AuthorizationEndpoint;          // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                options.ResponseType = "code id_token";
                options.GetClaimsFromUserInfoEndpoint = true;
                options.SaveTokens = true;
                options.CallbackPath = "/oidc-signin"; 
                
                foreach (var scope in office365Scopes)
                {
                    options.Scope.Add(scope);
                }
            });

The “office365config” contains the basic OpenId Connect configuration entries like ClientId and ClientSecret and the needed scopes.

Unfortunately, with this configuration we couldn't log in to our system, because after we successfully signed in to the Microsoft Account this error occurred:

System.Exception: An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
   at IdentityServer4.Hosting.FederatedSignOut.AuthenticationRequestHandlerWrapper.HandleRequestAsync() in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\FederatedSignOut\AuthenticationRequestHandlerWrapper.cs:line 38
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Cors.Infrastructure.CorsMiddleware.InvokeCore(HttpContext context)
   at IdentityServer4.Hosting.BaseUrlMiddleware.Invoke(HttpContext context) in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\BaseUrlMiddleware.cs:line 36
   at Microsoft.AspNetCore.Server.IIS.Core.IISHttpContextOfT`1.ProcessRequestAsync()

Fix

After some code research I found the problematic setting: we just needed to disable "GetClaimsFromUserInfoEndpoint" and everything worked. I'm not sure why the error occurred, because this code had been more or less untouched for a couple of months and worked as intended. I'm not even sure what "GetClaimsFromUserInfoEndpoint" really does in combination with a Microsoft Account.

I wasted one or two hours with this behavior, and maybe this will help someone in the future. If someone knows why this happened: use the comment section or write me an email :)

Full code:

   services.AddAuthentication()
                .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
                {
                    options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                    options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                    options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                    options.ClientSecret = office365Config.MicrosoftAppClientSecret;  // Client-Secret from the AppRegistration 
                    options.Authority = office365Config.AuthorizationEndpoint;        // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                    options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                    options.ResponseType = "code id_token";
                    // Don't enable the UserInfoEndpoint, otherwise this may happen
                    // An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
                    // at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
                    options.GetClaimsFromUserInfoEndpoint = false; 
                    options.SaveTokens = true;
                    options.CallbackPath = "/oidc-signin"; 
                    
                    foreach (var scope in office365Scopes)
                    {
                        options.Scope.Add(scope);
                    }
                });

Hope this helps!

Martin Richter: Top Marks for the Support from Schaudin / RC-WinTrans

For years we have been using RC-WinTrans from Schaudin.com for the multilingual support of our software.

Due to a change in VC 2019 16.3.3, RC files are no longer saved with ANSI codepage 1252 but always as UTF-8 files. That is, all RC files that are not in UTF-8 or UTF-16 are forcibly converted to UTF-8.

Now we had a problem: our Schaudin tools (RC-WinTrans) cannot handle UTF-8 in the version we use. First I opened a case with Microsoft, because forced encoding like this is a no-go for me.

A question on Stack Overflow brought no insight, except that the problem is already known under several incidents:
Link1, Link2, Link3

So I turned to Schaudin's support. Newer versions of the tools cannot process UTF-8, but they can handle UTF-16, so we would simply have to buy an update.
After a few emails back and forth, Schaudin offered me the next version after mine (which also supports UTF-16) free of charge.

I'm a little speechless! A free upgrade to the next version is not exactly common in our world.

I say thank you and give Schaudin top marks for goodwill and support.



Holger Schwichtenberg: Updated Books on C# 8.0 and Entity Framework Core 3.0

The Dotnet-Doktor has brought his books on C# 8.0 and Entity Framework Core 3.0 up to date with the final versions released on September 23, 2019.

Norbert Eder: Cascadia Code: A New Font for Visual Studio Code

Microsoft has released a new monospaced font (for Visual Studio Code, Terminal, etc.): Cascadia Code.

This is a fun, new monospaced font that includes programming ligatures and is designed to enhance the modern look and feel of the Windows Terminal.

I have tested the font and can recommend it. Here is how you can use it too:

Installation

Open the Cascadia Code releases. Click Cascadia.ttf to download the file to your computer. Then open the font with the Windows font viewer.

At the top left, the font can now be installed and registered on the system via Install.

The font can now be used in any application.

Changing the font in Visual Studio Code

Under File > Preferences > Settings > Text Editor > Font, the font used in Visual Studio Code can be changed. Simply enter 'Cascadia Code', Consolas, 'Courier New', monospace in the Font Family field. To use ligatures, the corresponding flag has to be enabled:

(Screenshot: configuring Cascadia Code and ligatures in Visual Studio Code)
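
If you prefer editing the settings.json directly, this is the equivalent configuration (not part of the original post; the two keys are standard Visual Studio Code settings):

"editor.fontFamily": "'Cascadia Code', Consolas, 'Courier New', monospace",
"editor.fontLigatures": true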


Christina Hirth: About silos and hierarchies in software development

Disclaimer: this is NOT a rant about people. In most situations, all the devs I know want to deliver good work. This is a rant about organisations imposing such structures while calling themselves "an agile company".

To give you some context: a digital product, sold online as a subscription. The application in my scenario is the usual admin portal used to manage customers and get an overview of their payment situation, balance, etc. The application is built and maintained by a frontend team. The team is using the GraphQL API built and maintained by a backend team. Every team has a team lead, and above all of them is at least one other lead. (Of course there is also a lot of other middle management, etc.)

Some time ago somebody must have decided to include in the API a field called "total" containing the balance of the customer so that it can be displayed in the portal. Obviously I cannot know what happened (I'm just a user of this product), but the fact is, this total was implemented as an integer. Do you see the problem? We are talking about money displayed on the website, about a balance which is almost never an integer. This small mistake made the whole feature unusable.

Point 1: Devs implement technical requests instead of improving the product
I don't know whether the developer who implemented this made an error by not thinking about what this total should represent, or simply didn't have any experience in e-commerce, but that is not my point. My point is that this person was obviously not involved in the discussion about this feature: why it is needed, what the benefit is. I can see in my mind's eye how this feature was turned into code: the team lead, software lead (xyz lead) decided that this task had to be done. The task didn't refer to the customer benefit; it stripped everything down to "include a new property called total having as value the sum of some other numbers". I can see it because I have had a lot of meetings like this. I once delivered a string to the other team, and this string was sometimes a URL and sometimes a name. But I did this in a company which didn't call itself agile.

Point 2: No chance for feedback, no chance to commit to the product
Again: I wasn't there when this feature was requested and built, I can only imagine that this is what happened, but it really doesn't matter. It is not about a specific company or specific people, but about the ability to deliver features instead of just some lines of code sold as a product. Back to my "total": this code was reviewed, integrated, deployed to development, then to some in-between stages and finally to production. NOBODY in this whole chain asked whether the new field included in a public(!) API was implemented as it should be. And I would bet that nobody from the frontend team was asked to review the API to see if it fulfilled their needs.

Point 3: Power play and information hiding make teams artificially slow (and kill innovation and the wish to commit to the product they build)
If this structure weren't built on power, position and titles, then the first person observing the error could have talked to the developer in the team responsible for the feature to get it corrected. It could have been changed in a few minutes (this was the first person noticing the error, ergo nobody was using the field yet) and everybody would have been happy. But not if you have leads of every kind who must be involved in everything (because this is why they have their position, isn't it?). Then somebody young and enthusiastic who wants to deliver a good product creates a JIRA ticket. In a week or two this ticket will eventually be discussed (by the leads, of course) and analyzed, and it will eventually be moved forward in the backlog, or not. It doesn't matter anyway, because the frontend team had a deadline and had to solve their problem somehow.

Epilogue: the culture of "talk only to the leads" bans cooperation between teams
At this moment I finally understood the reason behind another annoying behavior in the admin panel: the balance is calculated in the frontend and is equal to the sum of the items shown. I needed some time to discover this and was always wondering WTF… Now I can see what happened: the total in the API was not a total (only the integer part of the balance), and the ticket had to be finished, so somebody had the idea to create a total by adding up the values from the displayed items. Unfortunately this was a very short-sighted idea, because it only works if you have fewer than 25 payments, the default number of items per page. Otherwise you can use the calculator app to add up the single totals on every page…

All this is wrong on so many levels! For every person involved it is a lose-lose situation.

What do you think? Is this only me arguing for a better "habitat for devs", or is it time for these kinds of structures to disappear?

Jürgen Gutsch: New in ASP.NET Core 3.0: Worker Services

I mentioned in one of the first posts of this series that we are now able to create ASP.NET Core applications without a web server and without all the HTTP stuff that is needed to provide content via HTTP or HTTPS. At first glance this sounds weird. Why should I create an ASP.NET application that doesn't provide any kind of endpoint over HTTP? Is this really ASP.NET? Well, it is not ASP.NET in the sense of creating web applications. But it is part of ASP.NET Core and uses all the cool features that we got used to in ASP.NET Core:

  • Logging
  • Configuration
  • Dependency Injection
  • etc.

In this kind of application we are able to spin up a worker service which is completely independent of the HTTP stack.

Worker services can run in any kind of .NET Core application; they don't need the IWebHostBuilder to run.

The worker service project

In Visual Studio or by using the .NET CLI you are able to create a new worker service project.

dotnet new worker -n MyWorkerServiceProject -o MyWorkerServiceProject

This project looks pretty much like a common .NET Core project, but all the web specific stuff is missing. The only two code files here are the Program.cs and a Worker.cs.

The Program.cs looks a little different compared to the other ASP.NET Core projects:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((hostContext, services) =>
            {
                services.AddHostedService<Worker>();
            });
}

There is just an IHostBuilder created, but no IWebHostBuilder. There is also no Startup.cs created, which actually isn't needed in general. The Startup.cs should only be used to keep the Program.cs clean and simple. Here the DI container is configured in the Program.cs in the ConfigureServices method.

In a regular ASP.NET Core application, the line that registers the Worker in the DI container would also work in the Startup.cs.

The worker is just a simple class that derives from BackgroundService:

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken);
        }
    }
}

The BackgroundService base class implements the well-known IHostedService interface that has existed for a while. It just has some base implementation in it to simplify the API. You would also be able to create a worker service by implementing the IHostedService directly.
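
A minimal sketch of that alternative, with a hypothetical TimedHostedService (the interval and log message are just placeholders):

public class TimedHostedService : IHostedService, IDisposable
{
    private readonly ILogger<TimedHostedService> _logger;
    private Timer _timer;

    public TimedHostedService(ILogger<TimedHostedService> logger)
    {
        _logger = logger;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        // do the periodic work on a timer instead of an ExecuteAsync loop
        _timer = new Timer(
            _ => _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now),
            null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        // stop firing the timer callback
        _timer?.Change(Timeout.Infinite, 0);
        return Task.CompletedTask;
    }

    public void Dispose() => _timer?.Dispose();
}

It would be registered the same way, with services.AddHostedService<TimedHostedService>().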

The demo Worker above just runs an endless loop and writes the current date and time to the logger every second.

What you can do with Worker Services

With this kind of services you are able to create services that do some stuff for you in the background or you can simply create service applications that can run as a windows service or as a service inside a docker container.

Worker services either run once on startup or run an infinite loop to do stuff periodically. They run asynchronously in a separate thread and don't block the main application. With this in mind, you are able to execute tasks that aren't really related to the application's domain logic:

  • Fetching data periodically
  • Sending mails periodically
  • Calculating data in the background
  • Startup initialization

In a microservice environment it would make sense to run one or more worker services in console applications inside docker containers. This way it is easy to maintain and deploy them separately from the main application and they can be scaled separately.

Let's create an example

With the next couple of posts I'm going to create an example of how to use worker services.

I'm going to write a weather station that provides a gRPC endpoint to fetch the weather data of a specific date. I'll also write a worker service that fetches the data using a gRPC client and prepares it for another app that will display it. At the end we will have at least three applications:

  • The weather station: a gRPC service that provides an endpoint to fetch the weather data of a specific date.
  • The weather data loader: a worker service running a gRPC client that fetches the data every day and puts it into a database. A console application.
  • The weather stats app: loads the data from the database and shows the current weather and a graph of all loaded weather data. A Blazor Server Side application.

I'm going to put those apps and the database into docker containers and put them together using docker-compose.

I'll simulate the days by changing to the next day every second, starting at 1/1/2019. I already have weather data from some weather stations in Washington State and will reuse this data.

The weather station will have a SQLite database inside its Docker container. The separate database in a fourth Docker container is there for the worker and the web app to share the data. I'm not yet sure which database I want to use. If you have an idea, just drop me a comment.

I'm going to create a new repository on GitHub for this project and will add the link to the next posts.

Conclusion

I guess worker services will be most useful in microservice environments. But they might also be a good way to handle the mentioned aspects in common ASP.NET Core applications. Feel free to try them out.

But what I also tried to show here is the possibility to use a different hosting model to run a different kind of (ASP.NET) Core application, which still uses all the useful features of the ASP.NET Core framework. The way Microsoft decoupled ASP.NET from the generic hosting model is awesome.

Golo Roden: Virtually United: When Teams Work Well Together Remotely and/or On Site

There is no blanket answer to the question of working on site versus remotely. Teams work well when they pursue shared goals of their own accord.

Code-Inside Blog: Enforce Administrator mode for built dotnet exe applications

The problem

Let’s say you have a .exe application builded from Visual Studio and the application always needs to be run from an administrator account. Windows Vista introduced the “User Account Control” (UAC) and such applications are marked with a special “shield” icon like this:

(Screenshot: application icon with the UAC shield overlay)

TL;DR-version:

To build such an .exe you just need to add an "application manifest" file and request the needed permission like this:

<requestedExecutionLevel  level="requireAdministrator" uiAccess="false" />

Step by Step for .NET Framework apps

Create your WPF, WinForms or Console project and add an application manifest file:

(Screenshot: adding an application manifest file in Visual Studio)

The file itself has quite a bunch of comments in it and you just need to replace

<requestedExecutionLevel level="asInvoker" uiAccess="false" />

with

<requestedExecutionLevel  level="requireAdministrator" uiAccess="false" />

… and you are done.
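
For reference, the relevant part of the finished manifest looks roughly like this (shortened to the trustInfo section; the Visual Studio template contains more comments and options):

<?xml version="1.0" encoding="utf-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
        <!-- runs the application elevated; UAC prompts on startup -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>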

Step by Step for .NET Core apps

The same approach works more or less for .NET Core 3 apps:

Add an “application manifest file”, change the requestedExecutionLevel, and it should “work”.

Be aware: for some unknown reason the default name for the application manifest file will be “app1.manifest”. If you rename the file to “app.manifest”, make sure your .csproj is updated as well:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <ApplicationManifest>app.manifest</ApplicationManifest>
  </PropertyGroup>

</Project>

Hope this helps!

View the source code on GitHub.

Holger Schwichtenberg: BASTA! Conference Recap: Videos and Materials for Download

Last week's fall BASTA! conference was especially exciting in its 22nd year, because .NET Core 3.0, ASP.NET Core 3.0 and Entity Framework Core 3.0 were released on the eve of the main conference.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Blazor Client Side

In the last post we had a quick look into Blazor Server Side, which doesn't really differ on the hosting level: it is a regular ASP.NET Core application that runs on a web server. Blazor Client Side, on the other hand, definitely differs, because it doesn't need a web server; it runs completely in the browser.

Microsoft compiled the Mono runtime into a WebAssembly. With this, it is possible to execute .NET assemblies natively inside the WebAssembly in the browser. This doesn't need a web server. There is no HTTP traffic between the browser and a server part anymore, except when you are fetching data from a remote service.

Let's have a look at the HostBuilder

This time the Program.cs looks different compared to the default ASP.NET Core projects:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IWebAssemblyHostBuilder CreateHostBuilder(string[] args) =>
        BlazorWebAssemblyHost.CreateDefaultBuilder()
            .UseBlazorStartup<Startup>();
}

Here we create an IWebAssemblyHostBuilder instead of an IHostBuilder. It is actually a completely different interface and, at the time of writing, doesn't derive from IHostBuilder. But it looks pretty similar. In this case a default configuration of the IWebAssemblyHostBuilder is also created and, similar to the ASP.NET Core projects, a Startup class is used to configure the application.

The Startup class is pretty empty but has the same structure as all the other ones. You are able to add services to the IoC container and to configure the application:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IComponentsApplicationBuilder app)
    {
        app.AddComponent<App>("app");
    }
}

Usually you won't configure a lot here, except the services. The only other thing you can really do here is execute code on startup, maybe to initialize some kind of database or whatever you need to do on startup.

The important line of code here is the one where the root component is added to the application. Actually this is the App.cshtml in the root of the project. In Blazor Server Side the host page calls the root component; here it is configured in the Startup.

All the other UI stuff is pretty much equal in both versions of Blazor.

What you can do with Blazor Client Side

In general you can do the same things in both versions of Blazor. You can also share the same UI logic. Both versions are made to create single page applications with C# and Razor, without having to learn a JavaScript framework like React or Angular. It will be pretty easy for you to build single page applications if you know C# and Razor.

The client side version lives in the WebAssembly only and works without a connection to a web server, if no remote service is needed. Usually, though, every single page application needs a remote service to fetch or store data.

Blazor Client Side will have a much faster UI, because it is all rendered natively on the client. All the C# and Razor code runs in the WebAssembly, while Blazor Server Side still needs to send UI from the server to the client.

Conclusion

In this part you learned about a different kind of hosting in ASP.NET Core, which leads us back to the generic hosting approach of ASP.NET Core 3.0.

In the next post I will write about a different hosting model to run worker and background services without the full web server stack.

Jürgen Gutsch: .NET Conf 2019

From September 23 to 25, the .NET Conf 2019, hosted by Microsoft, ran virtually on Twitch. Like last year, the third day was full of talks done by the community, and as last year, I did a talk this year as well. I talked about the ASP.NET Core Health Checks and it went much better this time. There were no technical problems, and because of my own live stream on Twitch I'm a little more used to speaking to a screen and a camera.

The conference

This year's .NET Conf was full of .NET Core 3.0, which was launched during the first day. C# 8 and the latest DevOps, Azure, Xamarin and Visual Studio features were also hot topics this year.

If you want to watch the talks on demand, there is a playlist on YouTube with all of the recordings, as well as a list on Channel 9. Since the conference was streamed via Twitch, the videos are also available there.

Some of the talks were pretty funny. While Dan Roth was talking about Blazor, Jeff Fritz interrupted the show and gave him his blazing Blazer.

Because of the time difference, I wasn't able to watch the entire live stream. There are so many recordings, just for the first two days, that I didn't get the chance to watch them all. However, I'm going to take some time to watch all the other awesome recordings.

My own talk

I was talking about the ASP.NET Core Health Checks, which is a cool and fascinating topic. I did a quick introduction and demoed the basic configuration and usage. After that I did a demo of a more advanced scenario with dependent subsystems running in Docker containers that needed to be checked. I also showed the Health Checks UI, which can be used to display the health states on a nice user interface.

The slides and code of my presentation are available on GitHub.

My talk was at 11 AM Central European Time, which was 2 AM in Seattle for Jeff Fritz and Jon Galloway, who moderated the conference during that time. But it seems they had a lot of fun and enough caffeine.

I'm going to link the recording of my talk here in this post as soon as it is available.

The community day

The third day was full of thirty-minute presentations done by folks from the community and some folks from Microsoft. There were a ton of cool presentations and a lot of fun while moderating the community day in the Channel 9 studio.

I was happy to see the presentations of Maarten Balliauw, Oren Eini, Shawn Wildermuth, Ed Charbeneau, Steve Smith and a lot more...

Conclusion

This was a lot of fun, even if I was pretty excited and super nervous in the hours before I started the presentation. It is all about technique when you do a livestream and are not able to see the audience. More technique means more possible problems, but it all went well.

However, I would be happy to get the chance to do a talk like this next year.

Holger Schwichtenberg: Migrating from .NET Framework to .NET Core with a PowerShell Script Instead of a Clicking Marathon

Unfortunately, there is no migration tool from Microsoft yet for converting WPF and Windows Forms projects to .NET Core. This article presents a PowerShell script that takes some of the manual work off your hands during the migration.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Blazor Server Side

To have a look into the generic hosting models, we should also have a look into the different application models we have in ASP.NET Core. In this and the next post I'm going to write about Blazor, which is a new member of the ASP.NET Core family. To be more precise, Blazor is actually two members of the ASP.NET Core family. On the one hand we have Blazor Server Side, which actually is ASP.NET Core running on the server, and on the other hand we have Blazor Client Side, which looks like ASP.NET Core and runs in the browser inside a WebAssembly. Both frameworks share the same view framework, which is Razor Components. Both frameworks may share the same view logic and business logic. Both frameworks are single page application (SPA) frameworks; there is no page reload from the server visible while browsing the application. And both frameworks look pretty similar, starting from the Program.cs.

Under the hood, both frameworks are hosted completely differently. While Blazor Client Side runs completely on the client and needs no web server, Blazor Server Side runs on a web server and uses WebSockets and a generic JavaScript client to simulate the same SPA behavior as Blazor Client Side.

Hosting and Startup

Within this post I'm trying to compare Blazor Server Side to the already known ASP.NET Core frameworks like MVC and Web API.

First let's create a new Blazor Server Side project using the .NET Core 3 Preview 7 SDK:

dotnet new blazorserverside -n BlazorServerSideDemo -o BlazorServerSideDemo
cd BlazorServerSideDemo
code .

The second and third lines change the current directory to the project directory and open it in Visual Studio Code, if it is installed.

The first thing I usually do is have a short glimpse into the Program.cs, but in this case the class looks exactly like it does in the other projects. There is absolutely no difference:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

First a default IHostBuilder is created, and upon this an IWebHostBuilder is created to spin up a Kestrel web server and to host a default ASP.NET Core application. Nothing spectacular here.

The Startup.cs is a little more special.

Actually it looks like a common ASP.NET Core Startup class, except that different services are registered and different middlewares are used:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddRazorPages();
        services.AddServerSideBlazor();
        services.AddSingleton<WeatherForecastService>();
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();

        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapBlazorHub();
            endpoints.MapFallbackToPage("/_Host");
        });
    }
}

In the ConfigureServices method, Razor Pages is added to the IoC container. Razor Pages is used to provide the page that hosts the Blazor application, in this case the _Host.cshtml in the Pages directory. Every single page application (SPA) has at least one almost static page that hosts the actual application running in the browser. React, Vue, Angular and so on have the same thing: an index.html that loads all the JavaScript and hosts the JavaScript application. In the case of Blazor, there is also a generic JavaScript running on the hosting page. This JavaScript connects to a SignalR WebSocket that is running on the server side.

In addition to Razor Pages, the services needed for Blazor Server Side are added to the IoC container. These services are needed by the Blazor Hub, which actually is the SignalR hub that provides the WebSocket endpoint.

The Configure method also looks similar to the other ASP.NET Core frameworks. The only differences are in the last lines, where the Blazor Hub and the fallback page get added. This fallback page actually is the hosting Razor Page mentioned before. Since the SPA supports deep links and creates URLs for the different views on the client, the application needs to route to a fallback page in case the user directly navigates to a client-side route that doesn't exist on the server. The server will then just provide the hosting page, and the client will load the right views depending on the URL in the browser afterwards.

Blazor

The key feature of Blazor is the Razor-based components, which get interpreted by a runtime that understands C# and Razor and then get rendered on the client. With Blazor Client Side it is the Mono runtime running inside the WebAssembly; in the Server Side version it is the .NET Core runtime running on the server. That means the Razor components get interpreted and rendered on the server. After that they get pushed to the client using SignalR and placed in the right spot inside the hosting page by the generic JavaScript connected to SignalR.

So we have a server side rendered single page application, without any visible roundtrip to the server.

The Razor components are also placed in the Pages folder, but have the file extension .razor. The exception is the App.razor, which sits directly in the project directory. These are the actual view components, which contain the logic of the application.

If you have a more detailed look into the components, you'll see some similarities to React or Angular, in case you know those frameworks. I mentioned the App.razor, which is the root component. Angular and React also have this kind of root component. Inside the Shared directory there is a MainLayout.razor, which is the layout component. (This kind of component is also available in React and Angular.) All the other components in the Pages directory use this layout implicitly, because it is set as the default layout in the _Imports.razor. Those components also define a route that is used to navigate to the component. Reusable components without a specific route are placed inside the Shared directory. A minimal component from the default template is sketched below.
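
This is roughly the Counter component from the default template (quoted from memory, so details may differ slightly): the @page directive defines the route, the markup is plain Razor, and the @code block holds the component logic:

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    // called on every click; the changed state is re-rendered automatically
    private void IncrementCount()
    {
        currentCount++;
    }
}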

Conclusion

Even though this is just a small introduction and overview of Blazor Server Side, I only want to quickly show the new ASP.NET Core 3.0 frameworks for creating web applications. This is the last kind of normal server application I want to show. In the next part, I'm going to show Blazor Client Side, which uses a completely different hosting model.

Blazor Server Side, by the way, is the new replacement for ASP.NET WebForms for creating stateful web applications using C#. WebForms won't be migrated to ASP.NET Core. It will be supported in the same way as the full .NET Framework: there will be no new versions and no new features in the future. With this in mind, it absolutely makes sense to have a more detailed look into Blazor Server Side.

Holger Schwichtenberg: Word Automation in a Scheduled Task on Windows Server

How to solve the problems when starting the Word automation objects in a background process.

Golo Roden: A Plea for Open and Tolerant Communication in IT Corporate Culture

IT professionals are often considered technically competent but socially incompetent. This prejudice can be overcome with the right communication culture.

Code-Inside Blog: Check installed version for ASP.NET Core on Windows IIS with Powershell

The problem

Let’s say you have a ASP.NET Core application without the bundled ASP.NET Core runtime (e.g. to keep the download as small as possible) and you want to run your ASP.NET Core application on a Windows Server hosted by IIS.

General approach

The general approach is the following: Install the .NET Core hosting bundle and you are done.

Each .NET Core runtime (and there are quite a bunch of them) is backward compatible (at least the 2.X runtimes), so if you have installed 2.2.6, your app (created while using the .NET Core runtime 2.2.1) still runs.

Why check the minimum version?

Well… in theory the app itself (at least for .NET Core 2.X applications) may run under newer runtime versions, but each version might fix something, and to keep things safe it is a good idea to enforce security updates.

Check for minimum requirement

I stumbled upon this Stack Overflow question/answer and enhanced the script, because that version only tells you "ASP.NET Core seems to be installed". My enhanced version searches for a minimum required version and, if it is not installed, exits the script.

$DotNetCoreMinimumRuntimeVersion = [System.Version]::Parse("2.2.5.0")

$DotNETCoreUpdatesPath = "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Updates\.NET Core"
$DotNetCoreItems = Get-Item -ErrorAction Stop -Path $DotNETCoreUpdatesPath
$MinimumDotNetCoreRuntimeInstalled = $False

$DotNetCoreItems.GetSubKeyNames() | Where { $_ -Match "Microsoft .NET Core.*Windows Server Hosting" } | ForEach-Object {

                $registryKeyPath = Get-Item -Path "$DotNETCoreUpdatesPath\$_"

                $dotNetCoreRuntimeVersion = $registryKeyPath.GetValue("PackageVersion")

                $dotNetCoreRuntimeVersionCompare = [System.Version]::Parse($dotNetCoreRuntimeVersion)

                if($dotNetCoreRuntimeVersionCompare -ge $DotNetCoreMinimumRuntimeVersion) {
                                Write-Host "The host has installed the following .NET Core Runtime: $_ (MinimumVersion requirement: $DotNetCoreMinimumRuntimeVersion)"
                                $MinimumDotNetCoreRuntimeInstalled = $True
                }
}

if ($MinimumDotNetCoreRuntimeInstalled -eq $False) {
                Write-host ".NET Core Runtime (MiniumVersion $DotNetCoreMinimumRuntimeVersion) is required." -foreground Red
                exit
}

The “most” interesting part is the first line, where we set the minimum required version.

If you have installed a version of the .NET Core runtime on Windows, this information will end up in the registry like this:

(Screenshot: installed .NET Core runtime entries in the registry)

Now we just need to compare the installed version with the required version and we know if we are good to go.

Hope this helps!

Holger Schwichtenberg: Assembly Metadata (AssemblyInfo.cs) in .NET Core

In .NET Core projects, the metadata is stored in the project file by default. An AssemblyInfo.cs as in classic .NET is still possible, though.

Golo Roden: New Series: Götz & Golo

On September 3, 2019 the time will come: the new series "Götz & Golo" launches on this blog. A short preview of what this series will be about and the concept behind it.

Jürgen Gutsch: ASP.NET Core 3.0: Endpoint Routing

The last two posts were just a quick look into the Program.cs and the Startup.cs. This time I want to have a little deeper look into the new endpoint routing.

Wait!

Sometimes I have an idea about a specific topic to write about and start writing. While writing I remember that I may have already written about it. Then I take a look into the blog archive and there it is:

Implement Middlewares using Endpoint Routing in ASP.NET Core 3.0

Maybe I'm getting old now... ;-)

This is why I just link to the already existing post.

Anyway, the next two posts are a quick glimpse into Blazor Server Side and Blazor Client Side.

Why? Because I also want to focus on the different Hosting models and Blazor Client Side is using a different one.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Taking a quick look into the Startup.cs

In the last post, I took a quick look into the Program.cs of ASP.NET Core 3.0 and quickly explored the Generic Hosting Model. But the Startup class also has something new in it. We will see some small but important changes.

Just one thing I forgot to mention in the last post: ASP.NET Core 2.1 code in the Program.cs and the Startup.cs should just work in ASP.NET Core 3.0, if there is little or no customizing. The IWebHostBuilder is still there and can be used the 2.1 way, and the default 2.1 Startup.cs should also run in ASP.NET Core 3.0. It may be that you only need to make some small changes there.

The next snippet is the Startup class of a newly created empty web project:

public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
    public void ConfigureServices(IServiceCollection services)
    {
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/", async context =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        });
    }
}

The empty web project is an ASP.NET Core project without any ASP.NET Core UI feature. This is why the ConfigureServices method is empty: there is no additional service added to the dependency injection container.

The new stuff is in the Configure method. The first lines look familiar: depending on the hosting environment, the developer exception page will be shown.

app.UseRouting() is new. This is a middleware that enables the new endpoint routing. The new thing is that routing is decoupled from the specific ASP.NET feature. In previous versions, every feature (MVC, Razor Pages, SignalR, etc.) had its own endpoint implementation. Now the endpoint and routing configuration can be done independently. The middlewares that need to handle a specific endpoint are now mapped to that endpoint or route, so the middlewares don't need to handle the routes anymore.

If you wrote a Middleware in the past which needs to work on a specific endpoint, you added the logic to check the endpoint inside the middleware or you used the MapWhen() extension method on the IApplicationBuilder to add the Middleware to a specific endpoint.

Now you create a new pipeline (using an IApplicationBuilder) per endpoint and map the middleware to the specific new pipeline, as sketched below.
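
A minimal sketch of that pattern (MyMiddleware and the route are made up for illustration):

app.UseEndpoints(endpoints =>
{
    // build a separate pipeline that only runs for this endpoint
    var pipeline = endpoints.CreateApplicationBuilder()
        .UseMiddleware<MyMiddleware>()
        .Build();

    endpoints.Map("/my-endpoint", pipeline);
});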

The MapGet() method in the first snippet of this post does this implicitly. It creates a new endpoint "/" and maps the delegate middleware to the new pipeline that was created internally.

That was a simple snippet. Now let's have a look into the Startup.cs of a full-blown web application using individual authentication, created by using this .NET CLI command:

dotnet new mvc --auth Individual

Overall this also looks pretty familiar if you already know the previous versions:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {

        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlite(
                Configuration.GetConnectionString("DefaultConnection")));
        services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true)
            .AddEntityFrameworkStores<ApplicationDbContext>();

        services.AddControllersWithViews();
        services.AddRazorPages();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseDatabaseErrorPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
            // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();

        app.UseRouting();

        app.UseAuthentication();
        app.UseAuthorization();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllerRoute(
                name: "default",
                pattern: "{controller=Home}/{action=Index}/{id?}");
            endpoints.MapRazorPages();
        });
    }
}

This is an MVC application, but did you see the lines where MVC is added? I'm sure you did. It is no longer called MVC, even though the MVC pattern is used, because the old name was a little bit confusing in combination with Web API.

To add MVC you now need to add AddControllersWithViews(). If you want to add Web API only, you just need to add AddControllers(). I think this is a small but useful change. This way you can be more specific when adding ASP.NET Core features. In this case Razor Pages was also added to the project. It is absolutely no problem to mix ASP.NET Core features.

AddMvc() still exists and still works in ASP.NET Core 3.0.
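As a quick reference, a sketch of the three registrations side by side (you would normally pick only the ones your app needs):

// Web API only: controllers without view support
services.AddControllers();

// MVC: controllers plus Razor view support
services.AddControllersWithViews();

// Razor Pages
services.AddRazorPages();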

The Configure method doesn't really change, except for the new endpoint routing part. Two endpoints are configured: one for controller routes (which covers Web API and MVC) and one for Razor Pages.

Conclusion

This was just a quick look into the Startup.cs, which contains some small but useful changes.

In the next post I'm going to take a more detailed look into the new endpoint routing. While working on the GraphQL endpoint for ASP.NET Core, I learned a lot about endpoint routing. This feature makes a lot of sense to me, even if it means rethinking some things when you build and provide a middleware.

Golo Roden: Functional Programming with Objects

JavaScript offers various methods for functional programming, for example map, reduce and filter. However, they are only available for arrays, not for objects. With ECMAScript 2019, this can be changed in an elegant way.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Generic Hosting Environment

In ASP.NET Core 3.0 the hosting environment has become more generic. Hosting is no longer bound to Kestrel and no longer bound to ASP.NET Core. This means you are able to create a host that doesn't start the Kestrel web server and doesn't need to use the ASP.NET Core framework.

This is a small introduction post about the Generic Hosting Environment in ASP.NET Core 3.0. During the next posts I'm going to write more about it and what you can do with it in combination with some more ASP.NET Core 3.0 features.

In the next posts we will see a lot more details about why this makes sense. In short: there are different hosting models. One is the already known web hosting; another is running a worker service without a web server and without ASP.NET Core. Blazor also uses a different hosting model inside WebAssembly.
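To illustrate the worker model mentioned above, here is a minimal sketch of a host that never starts Kestrel; the Worker class is a hypothetical BackgroundService implementation:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                // hypothetical BackgroundService doing the actual work
                services.AddHostedService<Worker>();
            })
            .Build()
            .Run();
}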

What does it look like in ASP.NET Core 3.0?

First let's recap how it looked in previous versions. This is an ASP.NET Core 2.2 Program.cs that creates an IWebHostBuilder to start up Kestrel and to bootstrap ASP.NET Core using the Startup class:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

The next snippet shows the Program.cs of a new ASP.NET Core 3.0 web project:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

Now an IHostBuilder is created and configured first. When the default host builder is created, an IWebHostBuilder is created that uses the configured Startup class.

The typical .NET Core app features like configuration, logging and dependency injection are configured on the level of the IHostBuilder. All the ASP.NET Core specific features like authentication, middlewares, action filters, formatters, etc. are configured on the level of the IWebHostBuilder.
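A small sketch of that split (the JSON file name is a placeholder; the console logger and JSON configuration source are part of the default builders anyway):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // host level: configuration and logging
        .ConfigureAppConfiguration(config =>
            config.AddJsonFile("mysettings.json", optional: true)) // hypothetical file
        .ConfigureLogging(logging => logging.AddConsole())
        // web level: the ASP.NET Core specific parts
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });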

Conclusion

This makes the hosting environment a lot more generic and flexible.

I'm going to write about specific scenarios during the next posts about the new ASP.NET Core 3.0 features. But first I will have a look into Startup.cs to see what is new in ASP.NET Core 3.0.

Marco Scheel: Manage Microsoft Teams membership with Azure AD Access Review

This post will introduce you to the Azure AD Access Review feature. With the introduction of modern collaboration through Microsoft 365, with Microsoft Teams being the main tool, it is important to manage who is a member of the underlying Office 365 Group (Azure AD group).

<DE>For greater reach, today's post is published in English. It is about introducing Access Reviews (Azure AD) in combination with Microsoft Teams. Managing the membership of a team is supported by this feature, and it keeps the owners at the center. If there is strong interest in a completely German version, please let me know.</DE>

Microsoft has great resources to get started on a technical level. The feature enables a set of people to review another set of people. Azure AD offers this capability (all under the bigger umbrella called Identity Governance) for two assets: Azure AD groups and Azure AD apps. Microsoft Teams as a hub for collaboration is built on top of Office 365 Groups, so we will have a closer look at the Access Review part for Azure AD groups.

Each Office 365 Group (each team) is built from a set of owners and members. With the open nature of Office 365, members can be employees, contractors, or people outside of the organization.

image

In our modern collaboration (Teams, SharePoint, …) implementations we strongly recommend leveraging the full self-service group creation that is already built into the system. With this setup, everyone is able to create and manage/own a group. Permanent user education is needed so that everyone understands the concept behind modern groups. Many organizations also have a strong set of internal rules that force a so-called information owner (which could be the owner of a group) to review who has access to their data. Most organizations rely on people fulfilling their duties as demanded, but let's face it: owners are just human beings who need to do their “real” job. With the introduction of Azure AD Access Review we can support these owner duties and make the process documented and easy to execute.

AAD Access Review can do the following to support an up to date group membership:

  • Set up an Access Review for an Azure AD group
  • Specify the duration (start date, recurrence, duration, …)
  • Specify who will do the review (owner, self, specific people, …)
  • Specify who will be reviewed (all members, guests, …)
  • Specify what will happen if the review is not executed (remove members, …)

Before we start we need to talk about licensing. Obviously M365 E5 is the best SKU to start with ;) but if you are not that lucky, you need at least an Azure AD P2 license. It is not a very common license, as it was only part of the EMS E5 SKU, but Microsoft started offering really attractive license bundles some time ago. Many orgs with strong security requirements will at some point hit a license SKU that includes AAD P2. For your trusty lab tenants, start an EMS E5 trial to test these features today. To be precise, only the accounts reviewing (executing the Access Review) need the license; at least this is my understanding, and as always with licensing, ask your usual licensing people to get the definitive answer.

An Access Review (if not automated through the MS Graph beta) is set up in the Azure Portal in the Identity Governance blade of AAD. To create our first Access Review we need to onboard to this feature.

image

Please note that we are looking at Access Review in the context of modern collaboration (groups created by Teams, SharePoint, Outlook, …). Access Review can be used to review any AAD group that you use to grant access to a specific resource or to keep a list of trusted users for an infrastructure component in Azure. The following information might not always be valid for your scenario!

This is the first half of the screen we need to fill-out for a new Access Review:

image


Review name: This is a really important piece! The review name will be the “only” visible clue for the reviewers once they get the email about the outstanding review. With self-service setup, and with the nature of how people name their groups, we need to ensure people understand what they are reviewing. We try to automate the creation of the reviews, so we put the review timing, the group name and the group's object ID in the review name. The ID helps during support: if you send out 4000 Access Reviews and people ask why they got this email, they can provide you with the ID and things get easier. For example: 2019-Q1 GRP New Order (af01a33c-df0b-4a97-a7de-c6954bd569ef)

Frequency: Also very important! You have to understand that an Access Review is somewhat static. You can do a recurring review, but some information will get out of sync. For example, the group could be renamed, but the title will not be updated, and people might get confused by misleading information in the email that is sent out. If you choose to let the owners of a group do the review, the owners will be “copied” into the Access Review config and not updated for future reviews. Technically this could be fixed by Microsoft, but as of now we ran into problems in the context of modern collaboration.

image

Users: “Members of a group” is our choice for collaboration. The other option is “Assigned to an application” and not our focus. For a group we have the option to review guests only or to review everybody who is a member of the group. Based on organizational needs and information like confidentiality, we can make a decision. As a starting point it could be a good option to go with guests only, because guests are not very well controlled in most environments. An employee at least has a contract, and the general trust level should be higher.

Group: Select a group the review should apply to. The latest changes to the Access Review feature allow selecting multiple groups at once. From a collaboration perspective I would avoid this, because at the end of the creation process each group will have its own Access Review instance, and the settings are no longer shared. Once again, from a collab point of view we need some kind of automation, because it is not feasible to create these reviews as a manual task in the foreseeable future.

Reviewers: The natural choice for an Office 365 Group (team) is to go with the “Group owners” option, especially if we automate the process and don't have an extra database to look up who the information owner is. For static groups or highly confidential groups, the “Selected users” option could make sense. An interesting option is also the last one, called “Members (self)”. This option will “force” each member to decide whether they are still part of this project, team or group. We at Glück & Kanja are currently thinking about doing this for some of our internal client teams. Most of our groups are public and accessible by most of the employees, but membership documents some kind of current involvement with the client represented by the group. This could also naturally reduce the number of teams that show up in your Microsoft Teams client app. As mentioned earlier, at the moment it seems that the “Group owners” option is resolved once the Access Review starts, and the instance of the review is then fixed. So an owner change might not be reflected in future instances of recurring reviews. Hopefully this will be fixed by Microsoft.

Program: This is a logical grouping of access reviews. For example, we could add all collaboration-related reviews to one program and administration reviews, which follow a more static route, to another.

image

More advanced settings are collapsed, but they should definitely be reviewed.

Upon completion settings: Allows the review results to be applied automatically. I would suggest trying this setting, because it will not only document the review but also take the required action on the membership. If group owners are not aware what these Access Review emails are, we are talking about potential loss of access for members who were not reviewed, but in the end that is what we want. People need to take this part of identity governance seriously and take care of their data. Any change by the system is documented (in the audit log of the group) and can be reversed manually. If the system does not apply the results of the review, someone must look up the results regularly and then make sure the users are removed based on the outcome. If you go for Access Review, I strongly recommend applying the results automatically (after your own internal tests).

Let's take a look at the created Access Review.

image


Azure Portal: This is an overview for the admin (non recurring access review).

image


Email: As you can see, the prominent review name is what stands out to the user. The group name (also highlighted in red) is buried within all the other text.

image


Click on “Start Review” in the email: The user can now take action based on recommendations (missing in my lab tenant due to the inactivity of my lab users).

image

Take Review: Accept 6 users.

image

Review Summary: This is the summary after the owner has taken all actions.

image

Azure Portal: Audit log information for the group.

After the user completed the review, the system didn't immediately change the group. Even if the configuration says that actions should be applied automatically, the results are applied at the end of the review period! Until then, the owners can change their minds. Once the review period is over, the system will apply the needed changes.

I really love this feature in the context of modern collaboration. The process of keeping a current list of involved members in a team is a big benefit for productivity and security. The “need to know” principle is supported by a technical implementation “free of cost” (as mentioned, everyone should have AAD P2 through some SKU 😎).

Our GK O365 Lifecycle tool was extended to allow the creation of Access Reviews through the Microsoft Graph based on the group/team classification. Once customers have read about or seen a demo of this feature and own the license, we immediately start a POC implementation. If our tool is already in place, it is only a matter of some JSON configuration to be up and running.

Code-Inside Blog: SQL Server, Named Instances & the Windows Firewall

The problem

“Cannot connect to sql\instance. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)”

Let’s say we have a system with a running SQL Server (Express or Standard Edition - doesn’t matter) and want to connect to this database from another machine. The chances are high that you will see the above error message.

Be aware: You can customize more or less anything, so this blog post only covers a very “common” installation.

I struggled with this problem last week and learned that it is a pretty “old” issue. To enlighten my dear readers I made the following checklist:

Checklist:

  • Does the SQL Server allow remote connections?
  • Does the SQL Server allow your authentication schema of choice (Windows or SQL Authentication)?
  • Check in the “SQL Server Configuration Manager” whether the needed TCP/IP protocol is enabled for your SQL instance.
  • Check if the “SQL Server Browser” service is running
  • Check your Windows Firewall (see details below!)

Windows Firewall settings:

By default, SQL Server uses TCP port 1433, which is the minimum requirement without any special needs - use this command:

netsh advfirewall firewall add rule name = SQLPort dir = in protocol = tcp action = allow localport = 1433 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

If you use named instances, we need (at least) two additional ports:

netsh advfirewall firewall add rule name = SQLPortUDP dir = in protocol = udp action = allow localport = 1434 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

This UDP Port 1434 is used to query the real TCP port for the named instance.

Now the most important part: SQL Server will use a (kind of) random dynamic port for the named instance. To avoid this behavior (which is really a killer for firewall settings) you can set a fixed port in the SQL Server Configuration Manager.

SQL Server Configuration Manager -> Instance -> TCP/IP Protocol (make sure this is "enabled") -> *Details via double click* -> Under IPAll set a fixed port under "TCP Port", e.g. 1435

After this configuration, allow this port to communicate to the world with this command:

netsh advfirewall firewall add rule name = SQLPortInstance dir = in protocol = tcp action = allow localport = 1435 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

(Thanks Stackoverflow!)

Check the official Microsoft Docs for further information on this topic, but these commands helped me to connect to my SQL Server.

The “dynamic” port was my main problem - after some hours of Googling I found the answer on Stackoverflow and I could establish a connection to my SQL Server with the SQL Server Management Studio.
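As a side note: with a fixed port you can even bypass the SQL Server Browser lookup entirely by putting the port into the connection string. A minimal sketch in C# (server name and port are placeholders):

using System.Data.SqlClient;

// "server\instance" needs the SQL Server Browser (UDP 1434) to resolve the real TCP port;
// "server,port" connects straight to the fixed port and skips that lookup.
var connectionString = "Server=myserver,1435;Database=master;Integrated Security=true;";
using (var connection = new SqlConnection(connectionString))
{
    connection.Open(); // fails here if the firewall or instance configuration is wrong
}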

Hope this helps!

Kazim Bahar: Artificial Intelligence for .NET Applications

With the new ML.NET framework from Microsoft, existing .NET applications can be...

Stefan Henneken: IEC 61131-3: Exception Handling with __TRY/__CATCH

When executing a program, there is always the possibility of an unexpected runtime error occurring. These occur when a program tries to perform an illegal operation. This kind of scenario can be triggered by events such as division by 0 or a pointer which tries to reference an invalid memory address. We can significantly improve the way these exceptions are handled by using the keywords __TRY and __CATCH.

The list of possible causes for runtime errors is endless. What all these errors have in common is that they cause the program to crash. Ideally, there should at least be an error message with details of the runtime error:

Pic01

Because this leaves the program in an undefined state, runtime errors cause the system to halt. This is indicated by the yellow TwinCAT icon:

Pic02

For an operational system, an uncontrolled stop is not always the optimal response. In addition, the error message does not provide enough information about where in the program the error occurred. This makes improving the software a tricky task.

To help track down errors more quickly, you can add check functions to your program.

Pic03 

Check functions are called whenever the relevant operation is executed. The best known is probably CheckBounds(). Each time an array element is accessed, this function is implicitly called beforehand. The parameters passed to this function are the array bounds and the index of the element being accessed. This function can be configured to automatically correct attempts to access elements which are out of bounds. This approach does, however, have some disadvantages.

  1. CheckBounds() is not able to determine which array is being accessed, so error correction has to be the same for all arrays.
  2. Because CheckBounds() is called whenever an array element is accessed, it can significantly slow down program execution.

It’s a similar story with other check functions.

It is not unusual for check functions to be used during development only. Breakpoints are set inside the check functions to stop the program when an operation throws up an error. The call stack can then be used to determine where in the program the error has occurred.

The ‘try/catch’ statement

Runtime errors in general are also known as exceptions. IEC 61131-3 includes __TRY, __CATCH and __ENDTRY statements for detecting and handling these exceptions:

__TRY
  // statements
__CATCH (exception type)
  // statements
__ENDTRY
// statements

The TRY block (the statements between __TRY and __CATCH) contains the code with the potential to throw up an exception. Assuming that no exception occurs, all of the statements in the TRY block will be executed as normal. The program will then continue from the line immediately following the __ENDTRY statement. If, however, one of the statements within the TRY block causes an exception, the program will jump straight to the CATCH block (the statements between __CATCH and __ENDTRY). All subsequent statements within the TRY block will be skipped.

The CATCH block is only executed if an exception occurs; it contains the error handling code. After processing the CATCH block, the program continues from the statement immediately following __ENDTRY.

The __CATCH statement takes the form of the keyword __CATCH followed, in brackets, by a variable of type __SYSTEM.ExceptionCode. The __SYSTEM.ExceptionCode data type contains a list of all possible exceptions. If an exception occurs, causing the CATCH block to be called, this variable can be used to query the cause of the exception.

The following example divides two elements of an array by each other. The array is passed to the function using a pointer. If the return value is negative, an error has occurred. The negative return value provides additional information on the cause of the exception:

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR
 
__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__ENDTRY

The ‘finally’ statement

The optional __FINALLY statement can be used to define a block of code that will always be called whether or not an exception has occurred. There’s only one condition: the program must step into the TRY block.

We’re going to extend our example so that a value of one is added to the result of the calculation. We’re going to do this whether or not an error has occurred.

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR
 
__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__FINALLY
  F_Calc := F_Calc + 1;
__ENDTRY

Sample 1 (TwinCAT 3.1.4024 / 32 Bit) on GitHub

The statement in the FINALLY block (line 24) will always be executed whether or not an exception has occurred.

If no exception occurs within the TRY block, the FINALLY block will be called straight after the TRY block.

If an exception does occur, the CATCH block will be executed first, followed by the FINALLY block. Only then will the program exit the function.

__FINALLY therefore enables you to perform various operations irrespective of whether or not an exception has occurred. This generally involves releasing resources, for example closing a file or dropping a network connection.

Extra care should be taken in implementing the CATCH and FINALLY blocks. If an exception occurs within these blocks, it will give rise to an unexpected runtime error, resulting in an immediate uncontrolled program stop.

The sample program runs under 32-bit TwinCAT 3.1.4024 or higher. 64-bit systems are not currently supported.

Stefan Henneken: IEC 61131-3: Exception Handling with __TRY/__CATCH

When a PLC program is executed, unexpected runtime errors can occur. They arise as soon as the PLC program tries to perform an illegal operation. Such scenarios can be triggered, for example, by a division by 0 or by a pointer referencing an invalid memory area. With the keywords __TRY and __CATCH, these exceptions can be handled much better than before.

The list of possible causes of runtime errors could be extended endlessly. What all these errors have in common is that they cause the program to crash. At best, a message points out the runtime error:

Pic01

Because the PLC program is subsequently in an undefined state, the system is stopped. This is indicated by the yellow TwinCAT icon in the Windows taskbar:

Pic02

For plants in operation, an uncontrolled stop is not always the optimal reaction. In addition, the message gives insufficient information about where exactly in the PLC program the error occurred. This makes optimizing the software difficult.

To track down errors more quickly, check functions can be added to the PLC program.

Pic03

Check functions are called every time the corresponding operation is executed. The best known is probably CheckBounds(). Each time an array element is accessed, this function is implicitly called beforehand. As parameters, the function receives the array bounds and the index of the element to be accessed. The function can be adapted so that accesses outside the array bounds are corrected. However, this approach has some disadvantages:

  1. CheckBounds() cannot determine which array is being accessed, so only the same error correction can be implemented for all arrays.
  2. Since the check function is called on every array access, the runtime of the program can deteriorate considerably.

The situation is similar for the other check functions.

Not infrequently, the check functions are only used during the development phase. Breakpoints are activated in the functions which halt the PLC program as soon as a faulty operation is executed. The call stack can then be used to determine the corresponding location in the PLC program.

The ‘try/catch’ statement

In general, runtime errors are referred to as exceptions. IEC 61131-3 provides the statements __TRY, __CATCH and __ENDTRY for detecting and handling exceptions:

__TRY
  // statements
__CATCH (exception type)
  // statements
__ENDTRY
// statements

The TRY block (the statements between __TRY and __CATCH) contains the statements that can potentially cause an exception. If no exception occurs, all statements in the TRY block are executed, and the PLC program then continues its work after __ENDTRY. If, however, one of the statements within the TRY block causes an exception, the program flow continues immediately in the CATCH block (the statements between __CATCH and __ENDTRY). All remaining statements within the TRY block are skipped.

The CATCH block is only executed in the case of an exception and contains the desired error handling. After the CATCH block has been processed, the PLC program continues with the statements after __ENDTRY.

After the __CATCH statement, a variable of type __SYSTEM.ExceptionCode is specified in round brackets. The data type __SYSTEM.ExceptionCode contains a list of all possible exceptions. If the CATCH block is invoked by an exception, the cause of the exception can be queried via this variable.

In the following example, two elements of an array are divided. The array is passed to the function via a pointer. If the return value of the function is negative, an error occurred during execution. The negative return value provides more detailed information about the cause of the exception:

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR

__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__ENDTRY

The ‘finally’ statement

With __FINALLY, a code block can optionally be defined that is always called, regardless of whether an exception occurred or not. There is only one condition: the PLC program must at least enter the TRY block.

The example is to be extended so that the result of the calculation is additionally incremented by one. This is to happen regardless of whether an error occurred or not.

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR

__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__FINALLY
  F_Calc := F_Calc + 1;
__ENDTRY

Sample 1 (TwinCAT 3.1.4024 / 32 Bit) on GitHub

The statement in the FINALLY block (line 24) is always called, regardless of whether an exception is raised or not.

If no exception is raised in the TRY block, the FINALLY block is called directly after the TRY block.

If an exception occurs, the CATCH block is executed first, followed by the FINALLY block. Only then is the function exited.

__FINALLY thus makes it possible to perform various operations regardless of whether an exception occurred or not. This usually involves releasing resources, such as closing a file or terminating a network connection.

Special care should be taken when implementing the CATCH and FINALLY blocks. If an exception occurs in one of these code blocks, it triggers an unexpected runtime error, with the result that the PLC program is stopped immediately.

At this point I would also like to point to the blog of Matthias Gehring. One of his posts (https://www.codesys-blog.com/tipps/exceptionhandling-in-iec-applikationen-mit-codesys) also covers the topic of exception handling.

The sample program runs on 32-bit systems under TwinCAT 3.1.4024 or higher. 64-bit systems are not yet supported.

Stefan Henneken: IEC 61131-3: Parameter transfer via FB_init

Depending on the task, it may be necessary for function blocks to require parameters that are only used once for initialization tasks. One possible way to pass them elegantly is to use the FB_init() method.

Before TwinCAT 3, initialization parameters were very often transferred via input variables.

(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
  nDatabits  : BYTE(7..8);
  eParity    : E_Parity;
  nStopbits  : BYTE(1..2);
END_VAR

This had the disadvantage that the function blocks became unnecessarily large in the graphic display modes. It was also not possible to prevent changing the parameters at runtime.

The FB_init() method is very helpful here. This method is implicitly executed once, before the PLC task is started, and can be used to perform initialization tasks.

The dialog for adding methods offers a ready-made template for this purpose.

Pic01

The method contains two input variables that provide information about the conditions under which the method is executed. These variables must not be deleted or changed. However, FB_init() can be supplemented with further input variables.

Example

An example is a block for communication via a serial interface (FB_SerialCommunication). This block should also initialize the serial interface with the necessary parameters. For this reason, three variables are added to FB_init():

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);        
END_VAR

The serial interface is not initialized directly in FB_init(). Therefore, the parameters must be copied into variables located in the function block.

FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
  nInternalDatabits    : BYTE(7..8);
  eInternalParity      : E_Parity;
  nInternalStopbits    : BYTE(1..2);
END_VAR

During initialization, the values from FB_init() are copied into these three variables.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR
 
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

If an instance of FB_SerialCommunication is created, these three additional parameters must also be specified. The values are specified directly after the name of the function block in round brackets:

fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
                                               eParity := E_Parity.None,
                                               nStopbits := 1);

Even before the PLC task starts, the FB_init() method is implicitly called, so that the internal variables of the function block receive the desired values.

Pic02

With the start of the PLC task and the call of the instance of FB_SerialCommunication, the serial interface can now be initialized.

It is always necessary to specify all parameters. A declaration without a complete list of the parameters is not allowed and generates an error message when compiling:

Pic03

Arrays

If FB_init() is used for arrays of function blocks, the complete set of parameters must be specified for each element (in square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
                 (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                 (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];

If all elements are to have the same initialization values, it is sufficient if the parameters exist once (without square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
                                                             eParity := E_Parity.None,
                                                             nStopbits := 1);

Multidimensional arrays are also possible. All initialization values must also be specified here:

aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
                      (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
                      (nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];

Inheritance

If inheritance is used, the method FB_init() is always inherited. FB_SerialCommunicationRS232 is used here as an example:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication

If an instance of FB_SerialCommunicationRS232 is created, the parameters of FB_init(), which were inherited from FB_SerialCommunication, must also be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1);

It is also possible to override FB_init(). In this case, the same input variables must exist in the same order and be of the same data type as in the base FB (FB_SerialCommunication). However, further input variables can be added so that the derived function block (FB_SerialCommunicationRS232) receives additional parameters:

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
  nBaudrate    : UDINT; 
END_VAR
 
THIS^.nInternalBaudrate := nBaudrate;

If an instance of FB_SerialCommunicationRS232 is created, all parameters, including those of FB_SerialCommunication, must be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1,
                                                         nBaudRate := 19200);

In the method FB_init() of FB_SerialCommunicationRS232, only the copying of the new parameter (nBaudrate) is necessary. Because FB_SerialCommunicationRS232 inherits from FB_SerialCommunication, FB_init() of FB_SerialCommunication is also executed implicitly before the PLC task is started. Both FB_init() methods of FB_SerialCommunication and of FB_SerialCommunicationRS232 are always called implicitly. When inherited, FB_init() is always called from ‘bottom’ to ‘top’, first from FB_SerialCommunication and then from FB_SerialCommunicationRS232.

Forward parameters

The function block (FB_SerialCommunicationCluster) is used as an example, in which several instances of FB_SerialCommunication are declared:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
  fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  nInternalDatabits       : BYTE(7..8);
  eInternalParity         : E_Parity;
  nInternalStopbits       : BYTE(1..2); 
END_VAR

FB_SerialCommunicationCluster also receives the method FB_init() with the necessary input variables so that the parameters of the instances can be set externally.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR
 
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

However, there are some things to take into consideration here. The call sequence of FB_init() is not clearly defined in this case. In my test environment the calls are made from ‘inside’ to ‘outside’: first fbSerialCommunication01.FB_init() and fbSerialCommunication02.FB_init() are called, then fbSerialCommunicationCluster.FB_init(). It is not possible to pass the parameters from ‘outside’ to ‘inside’; the parameters are therefore not available in the two inner instances of FB_SerialCommunication.

The sequence of the calls changes as soon as FB_SerialCommunication and FB_SerialCommunicationRS232 are derived from the same base FB. In this case FB_init() is called from ‘outside’ to ‘inside’. This approach cannot always be implemented, for two reasons:

  1. If FB_SerialCommunication is located in a library, the inheritance cannot be changed just offhand.
  2. The call sequence of FB_init() is not further defined with nesting. So it cannot be excluded that this can change in future versions.

One way to solve the problem is to explicitly call FB_SerialCommunication.FB_init() from FB_SerialCommunicationCluster.FB_init().

fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);

All parameters, including bInitRetains and bInCopyCode, are passed on directly.

Attention: Calling FB_init() always initializes all local variables of the instance. This must be considered as soon as FB_init() is explicitly called from the PLC task instead of implicitly before the PLC task.

Access via properties

By passing the parameters via FB_init(), they can neither be read from outside nor changed at runtime. The only exception would be an explicit call of FB_init() from the PLC task. However, this should generally be avoided, since all local variables of the instance will be reinitialized in this case.

If, however, access should still be possible, appropriate properties can be created for the parameters:

Pic04

The setter and getter of the respective properties access the corresponding local variables in the function block (nInternalDatabits, eInternalParity and nInternalStopbits). Thus, the parameters can be specified in the declaration as well as at runtime.

By removing the setter, you can prevent the parameters from being changed at runtime. If the setter is available, FB_init() can be omitted. Properties can also be initialized directly when declaring an instance.

fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
                                                   Parity := E_Parity.Odd,
                                                   Stopbits := 1);

The parameters of FB_init() and the properties can also be specified simultaneously:

fbSerialCommunication  : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
                                               (Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);

In this case, the initialization values of the properties have priority. Passing values both via properties and via FB_init() has the disadvantage that the declaration of the function block becomes unnecessarily long. Implementing both doesn't seem necessary to me either. If all parameters can also be written via properties, the initialization via FB_init() can be omitted. Conclusion: if parameters must not be changeable at runtime, consider using FB_init(); if write access is acceptable, properties are an alternative.

Sample 1 (TwinCAT 3.1.4022) on GitHub

David Tielke: #DWX2019 - Content of my sessions

That's it again: Developer Week 2019 in Nuremberg. After three conference days and, of course, the traditional workshop day on Thursday, we all arrived back home exhausted but happy. Besides sessions on CoCo 2.0 and software quality, this year I also hosted two evening events, one of them with my colleague Christian Giesswein. Now that my employee Sebastian and I have finished the follow-up work, we are publishing the content of my sessions and of our joint workshop on Thursday here.

Software Quality


Composite Components 2.0

Since my notebook almost completely refused to work with a stylus during the session, unfortunately I cannot provide my usual drawings here. Instead, here are the repos with the sample implementations of Composite Components 1.0 & 2.0 on GitHub:


Workshop: Architecture 2.0



Here are also the sample projects developed for both versions of the architecture.

Holger Schwichtenberg: The VSTS CLI is dead – long live the Azure DevOps CLI

The "Azure DevOps CLI", the successor to the "VSTS CLI", has had "General Availability" status since July 8, 2019 – but it is by no means finished.

Jürgen Gutsch: MVP four times in a row

Another year later, again it was July 1st, and I got the email from the Global MVP Administrator I had been waiting for :-)

Yes, this is kind of a yearly series of posts. But I'm really excited that I got re-awarded as an MVP for the fifth year in a row. This is absolutely amazing and makes me really proud.

Even though some folks reduce the MVP award to just a marketing instrument of Microsoft and say MVPs are just selling Microsoft to the rest of the world, it tells me that the work I do in my spare time is important for some people out there. And these folks are right anyway. Sure, I'm selling Microsoft to the rest of the world, but this is my hobby. I don't sell it explicitly; I'm just telling other people about the stuff I work with, the stuff I use to get things done and to earn money in the end. It is about .NET and ASP.NET as well as about software development and the developer community. It is also about stuff I just learned while looking into new technology.

Selling Microsoft is just a side effect with no additional effort, and it doesn't feel wrong.

I'm not sure whether I have put a lot more effort into my hobby since I became an MVP or not. I think it was a bit more, because being an MVP makes me proud, makes me feel successful, and tells me that my work is important for some folks. Who cares :-)

As long as some folks are reading my blog, attending the user group meetings, or watching my live streams, I will continue doing that kind of work.

As already written, I'm proud of it and proud to get the fifth ring on my MVP award trophy, which will be blue this time.

And I feel lucky that I'm able to attend the Global MVP Summit for the fifth time next year in March and to see all the MVP friends again. I'm really looking forward to that event and to being in the nice and always sunny Seattle area. (Yes, it is always sunny in Seattle when I'm there.)

I'm also happy to see that almost all MVP friends got re-awarded.

Congratulations to all awarded and re-awarded MVPs!

Many thanks to the developer community for letting me be a part of it. And many thanks for the amazing feedback I get as a result of my work. It is a lot of fun to help and to contribute to that awesome community :-)

Marco Scheel: Setting up app permissions for Microsoft Graph calls automatically

For our Glück & Kanja lifecycle tool, I rely mainly on Microsoft Graph calls. For a clean setup I now have a script. It uses the Az PowerShell module and the Azure CLI. Especially when creating an Azure AD app (more precisely: granting permissions and consent), the Azure CLI is still a bit better and more comprehensive than the Az PowerShell module.

The lifecycle app works with AD settings and groups. Advanced functions build on the Access Reviews feature from the AAD P2 license set. I set these Graph permissions directly via a CLI script:

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 19dbc75e-c2e2-444c-a770-ec69d8559fc7=Role #msgraph Directory.ReadWrite.All

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 62a82d76-70ea-41e2-9197-370581804d09=Role #msgraph Group.ReadWrite.All

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions ef5f7d5c-338f-44b0-86c3-351f46c8bb5f=Role #msgraph AccessReview.ReadWrite.All

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 60a901ed-09f7-4aa5-a16e-7dd3d6f9de36=Role #msgraph ProgramControl.ReadWrite.All

The Azure CLI can then also take care of the admin consent right away (as long as you are not running in the Azure Cloud Shell!):

az ad app permission admin-consent --id $adapp.ApplicationId

Here is an example of what the result looks like in the Azure AD portal:

image

If you are looking for the GUID of a specific permission, you can simply access the constantly growing set of app permissions with this command (Azure Active Directory PowerShell 2.0):

(Get-AzureADServicePrincipal -filter "DisplayName eq 'Microsoft Graph'").AppRoles | Select Id, Value | Sort Value

Id                                   Value
--                                   -----
d07a8cc0-3d51-4b77-b3b0-32704d1f69fa AccessReview.Read.All
ef5f7d5c-338f-44b0-86c3-351f46c8bb5f AccessReview.ReadWrite.All
18228521-a591-40f1-b215-5fad4488c117 AccessReview.ReadWrite.Membership
134fd756-38ce-4afd-ba33-e9623dbe66c2 AdministrativeUnit.Read.All
5eb59dd3-1da2-4329-8733-9dabdc435916 AdministrativeUnit.ReadWrite.All
1bfefb4e-e0b5-418b-a88f-73c46d2cc8e9 Application.ReadWrite.All
18a4783c-866b-4cc7-a460-3d5e5662c884 Application.ReadWrite.OwnedBy
b0afded3-3588-46d8-8b3d-9842eff778da AuditLog.Read.All
798ee544-9d2d-430c-a058-570e29e34338 Calendars.Read
ef54d2bf-783f-4e0f-bca1-3210c0444d99 Calendars.ReadWrite
a7a681dc-756e-4909-b988-f160edc6655f Calls.AccessMedia.All
284383ee-7f6e-4e40-a2a8-e85dcb029101 Calls.Initiate.All
4c277553-8a09-487b-8023-29ee378d8324 Calls.InitiateGroupCall.All
f6b49018-60ab-4f81-83bd-22caeabfed2d Calls.JoinGroupCall.All
fd7ccf6b-3d28-418b-9701-cd10f5cd2fd4 Calls.JoinGroupCallAsGuest.All
7b2449af-6ccd-4f4d-9f78-e550c193f0d1 ChannelMessage.Read.All
4d02b0cc-d90b-441f-8d82-4fb55c34d6bb ChannelMessage.UpdatePolicyViolation.All
6b7d71aa-70aa-4810-a8d9-5d9fb2830017 Chat.Read.All
294ce7c9-31ba-490a-ad7d-97a7d075e4ed Chat.ReadWrite.All
7e847308-e030-4183-9899-5235d7270f58 Chat.UpdatePolicyViolation.All
089fe4d0-434a-44c5-8827-41ba8a0b17f5 Contacts.Read
6918b873-d17a-4dc1-b314-35f528134491 Contacts.ReadWrite
1138cb37-bd11-4084-a2b7-9f71582aeddb Device.ReadWrite.All
7a6ee1e7-141e-4cec-ae74-d9db155731ff DeviceManagementApps.Read.All
dc377aa6-52d8-4e23-b271-2a7ae04cedf3 DeviceManagementConfiguration.Read.All
2f51be20-0bb4-4fed-bf7b-db946066c75e DeviceManagementManagedDevices.Read.All
58ca0d9a-1575-47e1-a3cb-007ef2e4583b DeviceManagementRBAC.Read.All
06a5fe6d-c49d-46a7-b082-56b1b14103c7 DeviceManagementServiceConfig.Read.All
7ab1d382-f21e-4acd-a863-ba3e13f7da61 Directory.Read.All
19dbc75e-c2e2-444c-a770-ec69d8559fc7 Directory.ReadWrite.All
7e05723c-0bb0-42da-be95-ae9f08a6e53c Domain.ReadWrite.All
7c9db06a-ec2d-4e7b-a592-5a1e30992566 EduAdministration.Read.All
9bc431c3-b8bc-4a8d-a219-40f10f92eff6 EduAdministration.ReadWrite.All
4c37e1b6-35a1-43bf-926a-6f30f2cdf585 EduAssignments.Read.All
6e0a958b-b7fc-4348-b7c4-a6ab9fd3dd0e EduAssignments.ReadBasic.All
0d22204b-6cad-4dd0-8362-3e3f2ae699d9 EduAssignments.ReadWrite.All
f431cc63-a2de-48c4-8054-a34bc093af84 EduAssignments.ReadWriteBasic.All
e0ac9e1b-cb65-4fc5-87c5-1a8bc181f648 EduRoster.Read.All
0d412a8c-a06c-439f-b3ec-8abcf54d2f96 EduRoster.ReadBasic.All
d1808e82-ce13-47af-ae0d-f9b254e6d58a EduRoster.ReadWrite.All
38c3d6ee-69ee-422f-b954-e17819665354 ExternalItem.ReadWrite.All
01d4889c-1287-42c6-ac1f-5d1e02578ef6 Files.Read.All
75359482-378d-4052-8f01-80520e7db3cd Files.ReadWrite.All
5b567255-7703-4780-807c-7be8301ae99b Group.Read.All
62a82d76-70ea-41e2-9197-370581804d09 Group.ReadWrite.All
e321f0bb-e7f7-481e-bb28-e3b0b32d4bd0 IdentityProvider.Read.All
90db2b9a-d928-4d33-a4dd-8442ae3d41e4 IdentityProvider.ReadWrite.All
6e472fd1-ad78-48da-a0f0-97ab2c6b769e IdentityRiskEvent.Read.All
db06fb33-1953-4b7b-a2ac-f1e2c854f7ae IdentityRiskEvent.ReadWrite.All
dc5007c0-2d7d-4c42-879c-2dab87571379 IdentityRiskyUser.Read.All
656f6061-f9fe-4807-9708-6a2e0934df76 IdentityRiskyUser.ReadWrite.All
19da66cb-0fb0-4390-b071-ebc76a349482 InformationProtectionPolicy.Read.All
810c84a8-4a9e-49e6-bf7d-12d183f40d01 Mail.Read
e2a3a72e-5f79-4c64-b1b1-878b674786c9 Mail.ReadWrite
b633e1c5-b582-4048-a93e-9f11b44c7e96 Mail.Send
40f97065-369a-49f4-947c-6a255697ae91 MailboxSettings.Read
6931bccd-447a-43d1-b442-00a195474933 MailboxSettings.ReadWrite
658aa5d8-239f-45c4-aa12-864f4fc7e490 Member.Read.Hidden
3aeca27b-ee3a-4c2b-8ded-80376e2134a4 Notes.Read.All
0c458cef-11f3-48c2-a568-c66751c238c0 Notes.ReadWrite.All
c1684f21-1984-47fa-9d61-2dc8c296bb70 OnlineMeetings.Read.All
b8bb2037-6e08-44ac-a4ea-4674e010e2a4 OnlineMeetings.ReadWrite.All
0b57845e-aa49-4e6f-8109-ce654fffa618 OnPremisesPublishingProfiles.ReadWrite.All
b528084d-ad10-4598-8b93-929746b4d7d6 People.Read.All
246dd0d5-5bd0-4def-940b-0421030a5b68 Policy.Read.All
79a677f7-b79d-40d0-a36a-3e6f8688dd7a Policy.ReadWrite.TrustFramework
eedb7fdd-7539-4345-a38b-4839e4a84cbd ProgramControl.Read.All
60a901ed-09f7-4aa5-a16e-7dd3d6f9de36 ProgramControl.ReadWrite.All
230c1aed-a721-4c5d-9cb4-a90514e508ef Reports.Read.All
5e0edab9-c148-49d0-b423-ac253e121825 SecurityActions.Read.All
f2bf083f-0179-402a-bedb-b2784de8a49b SecurityActions.ReadWrite.All
bf394140-e372-4bf9-a898-299cfc7564e5 SecurityEvents.Read.All
d903a879-88e0-4c09-b0c9-82f6a1333f84 SecurityEvents.ReadWrite.All
a82116e5-55eb-4c41-a434-62fe8a61c773 Sites.FullControl.All
0c0bf378-bf22-4481-8f81-9e89a9b4960a Sites.Manage.All
332a536c-c7ef-4017-ab91-336970924f0d Sites.Read.All
9492366f-7969-46a4-8d15-ed1a20078fff Sites.ReadWrite.All
21792b6c-c986-4ffc-85de-df9da54b52fa ThreatIndicators.ReadWrite.OwnedBy
fff194f1-7dce-4428-8301-1badb5518201 TrustFrameworkKeySet.Read.All
4a771c9a-1cf2-4609-b88e-3d3e02d539cd TrustFrameworkKeySet.ReadWrite.All
405a51b5-8d8d-430b-9842-8be4b0e9f324 User.Export.All
09850681-111b-4a89-9bed-3f2cae46d706 User.Invite.All
df021288-bdef-4463-88db-98f22de89214 User.Read.All
741f803b-c850-494e-b5df-cde7c675a1ca User.ReadWrite.All

Jürgen Gutsch: Self-publishing a book

While writing the Customizing ASP.NET Core series, a reader asked me to bundle all the posts into a book. I was thinking about it for a while, also because I had tried to write a book in the past together with a colleague at the YOO. But publishing a book with a publisher behind it turned out to be stressful. Since we have families with small kids and a job where we work on different projects, the book never had priority one. The publisher didn't see that fact. Fortunately, the publisher quit the contract because we weren't able to deliver a chapter per week.

This is the planned cover for the bundled series:

(I took that photo at the Tschentenalp above Adelboden in Switzerland. It is the view of the Lohner mountains.)

Leanpub

In the past I already had a look into different self-publishing platforms like Leanpub, which looks pretty easy and modern. But it also has a downside:

  • Leanpub gives me 80% of the revenue, but I need to do the publishing and the marketing to sell the book myself.
  • A publisher only gives me 20%, but does professional publishing and marketing and will sell a lot more books.

In the end you cannot get rich by publishing a book like this, but it is nice anyway to get some money out of your effort. Amazon also provides a way to publish a book by yourself, which looks nice for self-publishers. I'm going to try this as well.

In the past, Leanpub also provided print on demand. This seems to have been stopped; I couldn't find any information about it anymore. Anyway, it is good enough to publish in various eBook formats.

So I decided to go with Leanpub to try the self-publishing way.

Writing

Even if most of the content was already written for the blog, I decided to go over all the parts to update everything to ASP.NET Core 3.0. I also decided to keep the ASP.NET Core 2.2 information, because it will remain valid for a while. So the chapters will cover both 3.0 and 2.2.

Writing for Leanpub also works with GitHub and Markdown files, which reduces the effort further. I'm able to bind a GitHub repository to Leanpub and push Markdown files into it. I need to structure and order the different files in a book.txt file. Every Markdown file is a chapter in that book.
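For illustration, such a book.txt is just a plain list of the Markdown files in the order they should appear in the book (the file names here are hypothetical):

preface.md
about-me.md
chapter-01-logging.md
chapter-02-configuration.md
postface.md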

Currently I have 13 chapters, a preface, an "about me" chapter, a chapter describing the technical requirements for this book, and a small postface. All in all, about 80 pages.

Rewriting

Sometimes it was hard to rewrite the demos and content for ASP.NET Core 3.0. If you write about customizing things that go deeply into the APIs, you will definitely face some significant changes. It wasn't that easy to get a custom DI container running in ASP.NET Core 3.0. Adding middlewares using a custom route also changed from 2.2 to 3.0 Preview 3 and changed again from Preview 3 to Preview 6. Even though I already had some experience with 3.0, there were some changes between the different previews.

But luckily I also have some chapters without any differences between 2.2 and 3.0.

Updating the blog posts

I'm not yet sure whether I need to update the blog posts or not. My current idea is to create new posts and to mention the new posts in the old ones.

There is definitely enough material for a lot of new posts about ASP.NET Core. One example is the new framework reference, which was a pain in the ass during a live stream where I tried to update a Preview 3 solution to Preview 6.

Publishing

Currently I'm not sure when I'll be able to publish this book. At the moment it is being reviewed by two people doing the non-technical review and one person doing the technical review.

I think I'm going to publish this book during the summer.

Contributing

If you want to help make this book better, feel free to go to the repositories, fork them, and create PRs.

It would also be helpful to propose a price you would pay for such a book. So far I have received some proposals, but they seem pretty high from my perspective; it seems some folks are really willing to pay around 25 EUR. https://leanpub.com/customizing-aspnetcore/. What do you think?

Marco Scheel: Microsoft Graph, Postman, and how do I get an app-only token?

The Microsoft Graph is the "Swiss army knife" for everyone in the Microsoft 365 space. One API for "all" services and, even better, always the same authentication model. In episode 18 of the Hairless in the Cloud podcast I already shared my impressions of the Microsoft Graph. The Graph Explorer on the website is a good way to get to know the Graph. For my part, however, I mostly work with the Graph without user interaction, so in my applications I use application permissions. Most APIs (see Teams), however, initially ship without app permissions. The disappointment is huge when you have done your research in the Graph Explorer and then find out that the calls fail with application permissions.

A few months ago, Jeremy Thake from the Microsoft Graph team started publishing the samples (and more) from the Graph Explorer as a collection for Postman. This collection simplifies testing your own calls and provides inspiration for new scenarios.

In the past I "stole" the token from my Azure Function and pasted it into Postman directly as a bearer token:

image

But there is a much more elegant way. The MS Graph Postman collection works with environments and variables. However, a method that essentially matches the code in your own app (in my case an Azure Function) is also on board. Postman offers native OAuth integration. You simply select OAuth 2.0 and can then enter the following information from your own app:

image


Note: I have already deleted my app. It is no longer usable, so the secret shown in the code is no longer a secret.

Via "Request Token" I can then fetch a token and use it for all further requests. To check the token (did the scope work?), you can simply go to jwt.io or to the Microsoft service jwt.ms.
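
For comparison, here is a minimal C# sketch of what Postman does under the hood with these settings: the client credentials flow against the Azure AD token endpoint. The tenant ID, client ID, and client secret are placeholders for your own app registration.

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task ShowAppOnlyTokenAsync()
{
    using (var client = new HttpClient())
    {
        // Client credentials flow: no user interaction, app-only permissions.
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = "<client-id>",
            ["client_secret"] = "<client-secret>",
            ["scope"] = "https://graph.microsoft.com/.default"
        });

        var response = await client.PostAsync(
            "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token", body);

        // The JSON response contains the access_token that is used as the bearer token.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}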

Note: Token decoders like these are a great thing, but please keep in mind that if you do this with production tokens, you have to trust the service, because at that moment it holds your permissions! In my case, both websites could take the token and use it against my tenant! I'm using my lab tenant here, and I believe I know what I'm doing :) So all good!

image

With the token you can then, for example in my case, view the Azure AD access reviews.

image

My debugging has become much easier, since I can now simply test my app permissions.

Code-Inside Blog: Jint: Invoke Javascript from .NET

If you have ever dreamed of using Javascript in your .NET application, there is a simple way: use Jint.

Jint implements the ECMA 5.1 spec and can be used from any .NET implementation (Xamarin, .NET Framework, .NET Core). Just install the NuGet package; it has no dependencies on other stuff - it's a single .dll and you are done!

Why should I integrate Javascript into my application?

In our product "OneOffixx" we use Javascript as a scripting language, together with some "OneOffixx"-specific objects.

The pro arguments for Javascript:

  • It's a well-known language (even with all the brainfuck in it)
  • You can sandbox it quite simply (see the sketch after this list)
  • With a library like Jint it is super simple to integrate
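
For example, resource limits can be configured when creating the engine. Here is a small sketch using Jint's engine options; the concrete limits are just examples:

var engine = new Jint.Engine(cfg => cfg
    .TimeoutInterval(TimeSpan.FromSeconds(1)) // abort scripts that run too long
    .MaxStatements(1000)                      // cap the number of executed statements
    .LimitRecursion(10));                     // guard against runaway recursion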

I highly recommend checking out the GitHub page, but here are some simple examples, which should show how to use it:

Example 1: Simple start

After installing the NuGet package you can use the following code to see one of the most basic implementations:

public static void SimpleStart()
{
    var engine = new Jint.Engine();
    Console.WriteLine(engine.Execute("1 + 2 + 3 + 4").GetCompletionValue());
}

We create a new "Engine", execute some simple Javascript, and print the completion value - easy as that!

Example 2: Use C# function from Javascript

Let's say we want to provide a scripting environment in which the script can access some C#-based functions. This "bridge" is created via the "Engine" object: we register a value which points to our C# implementation.

public static void DefinedDotNetApi()
{
    var engine = new Jint.Engine();

    engine.SetValue("demoJSApi", new DemoJavascriptApi());

    var result = engine.Execute("demoJSApi.helloWorldFromDotNet('TestTest')").GetCompletionValue();

    Console.WriteLine(result);
}

public class DemoJavascriptApi
{
    public string helloWorldFromDotNet(string name)
    {
        return $"Hello {name} - this is executed in {typeof(Program).FullName}";
    }
}

Example 3: Use Javascript from C#

Of course, we can also do it the other way around:

public static void InvokeFunctionFromDotNet()
{
    var engine = new Engine();

    var fromValue = engine.Execute("function jsAdd(a, b) { return a + b; }").GetValue("jsAdd");

    Console.WriteLine(fromValue.Invoke(5, 5));

    Console.WriteLine(engine.Invoke("jsAdd", 3, 3));
}

Example 4: Use a common Javascript library

Jint allows you to inject any Javascript code (be aware: There is no DOM, so only “libraries” can be used).

In this example we use handlebars.js:

public static void Handlebars()
{
    var engine = new Jint.Engine();

    engine.Execute(File.ReadAllText("handlebars-v4.0.11.js"));

    engine.SetValue("context", new
    {
        cats = new[]
        {
            new {name = "Feivel"},
            new {name = "Lilly"}
        }
    });

    engine.SetValue("source", "  says meow!!!\n");

    engine.Execute("var template = Handlebars.compile(source);");

    var result = engine.Execute("template(context)").GetCompletionValue();

    Console.WriteLine(result);
}

Example 5: REPL

If you are crazy enough, you can build a simple REPL like this (not sure if this would be a good idea for production, but it works!):

public static void Repl()
{
    var engine = new Jint.Engine();

    while (true)
    {
        Console.Write("> ");
        var statement = Console.ReadLine();
        var result = engine.Execute(statement).GetCompletionValue();
        Console.WriteLine(result);
    }
}

Jint: Javascript integration done right!

As you can see, Jint is quite powerful. If you feel the need to integrate Javascript into your application, check out Jint!

The sample code can be found here.

Hope this helps!

Norbert Eder: Scratch – Kids Learn Programming

Nothing works without computers these days. That makes it all the more important to understand how computers, and the software running on them, work. To foster this crucial understanding, children should come into contact with programming early on.

There are all kinds of tools for this. One that I can highly recommend - from experience - is Scratch.

Scratch is a great tool for beginners, especially children and teenagers. Programs consist of interactive components that can be assembled and brought to "life". Using different building blocks, the components can be moved around; it is possible to react to events, play sounds, and much more.

The building-block system prevents syntax errors. Instead of frustration there are quick successes, which encourage further "experiments". Within a very short time, small games can be developed this way.

Children playfully learn some basic concepts of programming this way and can then, in a short time, move on to more complex languages and develop further.

The requirements for Scratch are low: a computer and a browser are all you need. Development takes place entirely in the browser. Programs can be saved and loaded and are thus immediately available. You can also develop offline; for that, Scratch Desktop is available for Windows 10 and macOS 10.13+.

Scratch – learning to program

So that you don't have to start all alone, there is also a large community and plenty of help for getting started. Maybe there is a CoderDojo near you. Here in Austria there are the CoderDojo Linz and the CoderDojo Graz. There you can get support if, as a parent, you are not quite so well versed in these things.

Particularly helpful is the CoderDojo Linz list of exercise examples for Scratch and HTML.

With that in mind, I wish you happy coding and interesting, instructive hours with your offspring.

The post Scratch – Kids Learn Programming first appeared on Norbert Eder.

Holger Schwichtenberg: Detecting .NET Framework 4.8

As with its predecessors, an installed .NET Framework 4.8 is detected via a registry entry.
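
The usual approach, sketched here in C#, is to read the Release value under HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full; 528040 is the documented minimum value for .NET Framework 4.8 (the exact value differs slightly per Windows version):

using System;
using Microsoft.Win32;

using (var key = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32)
                            .OpenSubKey(@"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
{
    // The Release DWORD encodes the installed 4.x version.
    var release = (int?)key?.GetValue("Release") ?? 0;
    Console.WriteLine(release >= 528040
        ? ".NET Framework 4.8 (or later) is installed"
        : ".NET Framework 4.8 is not installed");
}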

Stefan Henneken: IEC 61131-3: Parameter transfer via FB_init

Depending on the task, function blocks may need parameters that are used only once, for initialization. One elegant way of passing them is the FB_init() method.

Before TwinCAT 3, initialization parameters were very often passed via input variables.

(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
  nDatabits   : BYTE(7..8);
  eParity     : E_Parity;
  nStopbits   : BYTE(1..2);	
END_VAR

This had the disadvantage that the function blocks became unnecessarily large in the graphical editors. It was also not possible to prevent the parameters from being changed at runtime.

The FB_init() method is very helpful here. It is implicitly executed once before the PLC task starts and can be used to perform initialization tasks.

The dialog for adding methods offers a ready-made template for this.

Pic01

The method contains two input variables that indicate under which conditions the method is executed. These variables must be neither deleted nor changed. However, FB_init() can be extended with further input variables.

Example

As an example, we will use a block for communication via a serial port (FB_SerialCommunication). This block is also supposed to initialize the serial port with the necessary parameters. For this reason, three variables are added to FB_init():

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);		
END_VAR

The serial port is not initialized directly in FB_init(). Therefore, the parameters have to be copied into variables located in the function block.

FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
  nInternalDatabits    : BYTE(7..8);
  eInternalParity      : E_Parity;
  nInternalStopbits    : BYTE(1..2);
END_VAR

The values from FB_init() are copied into these three variables during initialization.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR

THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

When an instance of FB_SerialCommunication is declared, these three additional parameters must be specified. The values are given in parentheses directly after the name of the function block:

fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
                                               eParity := E_Parity.None,
                                               nStopbits := 1);

Even before the PLC task starts, the FB_init() method is implicitly called, so the internal variables of the function block receive the desired values.

Pic02

When the PLC task starts and the instance of FB_SerialCommunication is called, the serial port can now be initialized.

It is always necessary to specify all parameters. A declaration without a complete list of parameters is not allowed and produces a compile error:

Pic03

Arrays

If FB_init() is used with arrays, the complete parameters must be specified for each element (with square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
                 (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                 (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];

If all elements are to receive the same initialization values, it is sufficient to specify the parameters once (without square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
                                                             eParity := E_Parity.None,
                                                             nStopbits := 1);

Multidimensional arrays are also possible. Here, too, all initialization values must be specified:

aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
                      (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
                      (nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];

Inheritance

When inheritance is used, the FB_init() method is always inherited as well. FB_SerialCommunicationRS232 will serve as an example here:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication

When an instance of FB_SerialCommunicationRS232 is declared, the FB_init() parameters inherited from FB_SerialCommunication must also be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1);

It is also possible to override FB_init(). In this case, the same input variables must be present, in the same order and with the same data types, as in the base FB (FB_SerialCommunication). However, further input variables can be added, so that the derived function block (FB_SerialCommunicationRS232) receives additional parameters:

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
  nBaudrate    : UDINT;	
END_VAR

THIS^.nInternalBaudrate := nBaudrate;

When an instance of FB_SerialCommunicationRS232 is declared, all parameters must be specified, including those of FB_SerialCommunication:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1,
                                                         nBaudRate := 19200);

In the FB_init() method of FB_SerialCommunicationRS232, only the new parameter (nBaudrate) needs to be copied. Because FB_SerialCommunicationRS232 inherits from FB_SerialCommunication, the FB_init() of FB_SerialCommunication is also implicitly executed before the PLC task starts. Both FB_init() methods are always called implicitly, that of FB_SerialCommunication as well as that of FB_SerialCommunicationRS232. With inheritance, FB_init() is always called from 'bottom' to 'top': first that of FB_SerialCommunication, then that of FB_SerialCommunicationRS232.

Forwarding parameters

As an example, consider the function block FB_SerialCommunicationCluster, in which several instances of FB_SerialCommunication are declared:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
  fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  nInternalDatabits       : BYTE(7..8);
  eInternalParity         : E_Parity;
  nInternalStopbits       : BYTE(1..2);	
END_VAR

So that the parameters of the inner instances can be set from the outside, FB_SerialCommunicationCluster also gets an FB_init() method with the necessary input variables.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR

THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

There are a few things to watch out for here, though. The call order of FB_init() is not clearly defined in this case. In my test environment, the calls happen from the 'inside' out: first fbSerialCommunication01.FB_init() and fbSerialCommunication02.FB_init() are called, and only then fbSerialCommunicationCluster.FB_init(). It is not possible to pass the parameters from the 'outside' in, so the parameters are not available in the two inner instances of FB_SerialCommunication.

The call order changes as soon as FB_SerialCommunication and FB_SerialCommunicationCluster are derived from the same base FB. In that case, FB_init() is called from the 'outside' in. However, this approach is not always feasible, for two reasons:

  1. If FB_SerialCommunication resides in a library, the inheritance cannot easily be changed.
  2. The call order of FB_init() for nested function blocks is not formally defined, so it cannot be ruled out that it will change in future versions.

One way to solve the problem is to call FB_SerialCommunication.FB_init() explicitly from FB_SerialCommunicationCluster.FB_init():

fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);

All parameters, including bInitRetains and bInCopyCode, are passed on directly.

Caution: calling FB_init() always causes all local variables of the instance to be initialized. This must be kept in mind whenever FB_init() is called explicitly from the PLC task instead of implicitly before the PLC task starts.

Access via properties

Because the parameters are passed via FB_init(), they can neither be read nor changed from the outside at runtime. The only exception would be an explicit call of FB_init() from the PLC task, but this should generally be avoided, since it reinitializes all local variables of the instance.

If access should nevertheless be possible, corresponding properties can be created for the parameters:

Pic04

The setters and getters of the respective properties access the corresponding local variables in the function block (nInternalDatabits, eInternalParity and nInternalStopbits). This way, the parameters can be set both at declaration time and at runtime.

By removing the setters, changing the parameters at runtime can be prevented. If the setters are present, however, FB_init() can be dropped altogether: properties can also be initialized directly when an instance is declared.

fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
                                                   Parity := E_Parity.Odd,
                                                   Stopbits := 1);

The FB_init() parameters and the properties can also be specified at the same time:

fbSerialCommunication  : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
                                               (Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);

In this case, the initialization values of the properties take precedence. Passing values via both properties and FB_init() has the disadvantage that the declaration of the function block becomes unnecessarily long, and implementing both seems unnecessary to me anyway. If all parameters are writable via properties, initialization via FB_init() can be omitted. In short: if parameters must not be changeable at runtime, consider using FB_init(); if write access should be possible, properties are the better fit.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Code-Inside Blog: Build Windows Server 2016 Docker Images under Windows Server 2019

Since the uprising of Docker on Windows, we have invested some time into it and now package our OneOffixx server-side stack in a Docker image.

Windows Server 2016 situation:

We rely on Windows Docker images, because we still have some "legacy" parts that require the full .NET Framework; that's why we are using this base image:

FROM microsoft/aspnet:4.7.2-windowsservercore-ltsc2016

As you can already guess: this is based on Windows Server 2016, and besides the "legacy" parts of our application, we need to support Windows Server 2016, because Windows Server 2019 is currently not available on our customers' systems.

In our build pipeline we could easily invoke Docker and build our images based on the LTSC 2016 base image and everything was “fine”.

Problem: Move to Windows Server 2019

Some weeks ago my colleague updated our Azure DevOps build servers from Windows Server 2016 to Windows Server 2019, and our builds began to fail.

Solution: Hyper-V isolation!

After some internet research, this page popped up: Windows container version compatibility.

Microsoft made some great enhancements to Docker in Windows Server 2019, but if you need to "support" older versions, you need to take care of it yourself, which means:

If you have a Windows Server 2019, but want to use Windows Server 2016 base images, you need to activate Hyper-V isolation.

Example from our own cake build script:

var exitCode = StartProcess("Docker", new ProcessSettings { Arguments = "build -t " + dockerImageName + " . --isolation=hyperv", WorkingDirectory = packageDir});
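
If you are not using Cake, the equivalent plain Docker CLI call looks like this (the image name is just a placeholder):

docker build -t oneoffixx-server . --isolation=hyperv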

Hope this helps!

Holger Schwichtenberg: Many breaking changes in Entity Framework Core 3.0

There is now a fourth preview version of Entity Framework Core 3.0, but it does not yet contain any of the new features mentioned below. Instead, Microsoft has built in a considerable number of breaking changes. The question is: why?

Johnny Graber: Book review of "Java by Comparison"

"Java by Comparison" by Simon Harrer, Jörg Lenhard and Linus Dietz was published in 2018 by The Pragmatic Programmers. This book takes on a big challenge: how can expert knowledge acquired over many years be made accessible to programming beginners in a simple form? The authors use 70 examples in which a working first draft of a maintainable and well-thought-out … Continue reading Book review of "Java by Comparison"

Holger Schwichtenberg: How to get Entity Framework Core to use the class names instead of the DbSet names as table names

Microsoft's object-relational mapper Entity Framework Core has an unpleasant default: database tables are not named after the entity classes, but after the property names used in the context class when declaring the DbSet.
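
A common way to do this, sketched here for EF Core 2.x and not necessarily the exact code from the article, is to rename the tables in OnModelCreating of your DbContext:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Use the CLR class name instead of the DbSet property name as the table name.
    foreach (var entityType in modelBuilder.Model.GetEntityTypes())
    {
        entityType.Relational().TableName = entityType.ClrType.Name;
    }
}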

Code-Inside Blog: Update OnPrem TFS 2018 to AzureDevOps Server 2019

We recently updated our OnPrem TFS 2018 installation to the newest release: Azure DevOps Server.

The product has the same core features as TFS 2018, but with a new UI and other improvements. For a full list you should read the Release Notes.

Be aware: This is the OnPrem solution, even with the slightly misleading name "Azure DevOps Server". If you are looking for the cloud solution, you should read the Migration Guide.

“Updating” a TFS 2018 installation

Our setup is quite simple: One server for the “Application Tier” and another SQL database server for the “Data Tier”. The “Data Tier” was already running with SQL Server 2016 (or above), so we only needed to touch the “Application Tier”.

Application Tier Update

In our TFS 2018 world, the "Application Tier" was running on Windows Server 2016, but we decided to create a new (clean) server with Windows Server 2019 and do a "clean" Azure DevOps Server install, pointing to the existing "Data Tier".

In theory it is quite possible to update the actual TFS 2018 installation, but because “new is always better”, we also switched the underlying OS.

Update process

The actual update was really easy. We did a "test run" with a copy of the database, and everything worked as expected, so we reinstalled the Azure DevOps Server and ran the update on the production data.

Steps:

(screenshots of the individual update wizard steps)

Summary

If you are running a TFS installation, don’t be afraid to do an update. The update itself was done in 10-15 minutes on our 30GB-ish database.

Just download the setup from the Azure DevOps Server site (“Free trial…”) and you should be ready to go!

Hope this helps!

Jürgen Gutsch: Customizing ASP.NET Core Part 12: Hosting

In this 12th part of the series, I'm going to write about how to customize hosting in ASP.NET Core. We will look into the hosting options, the different kinds of hosting, and take a quick look at hosting on IIS. While writing, this post again turned out to be a long one.

This will change in ASP.NET Core 3.0. I decided to write this post about ASP.NET Core 2.2 anyway, because it will still take some time until ASP.NET Core 3.0 is released.

This post is just an overview of the different kinds of application hosting. It is certainly possible to go into a lot more detail for each topic, but that would increase the size of this post a lot, and I need some more topics for future blog posts ;-)

This series' topics

Quick setup

For this series, we just need to set up a small empty web application:

dotnet new web -n ExploreHosting -o ExploreHosting

That's it. Open it with Visual Studio Code:

cd ExploreHosting
code .

And voila, we get a simple project open in VS Code:

WebHostBuilder

Like in the last post, we will focus on the Program.cs. The WebHostBuilder is our friend. This is where we configure and create the web host. The next snippet shows the default configuration of every new ASP.NET Core web application created using File => New => Project in Visual Studio or dotnet new with the .NET CLI:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}

As we already know from the previous posts, the default builder has all the needed stuff pre-configured. Everything you need to run an application successfully on Azure or on an on-premise IIS is configured for you.

But you are able to override almost all of these default configurations, including the hosting configuration.

Kestrel

After the WebHostBuilder is created, we can use various methods to configure it. Here we already see one of them, UseStartup, which specifies the Startup class to be used. In the last post we saw the UseKestrel method to configure the Kestrel options:

.UseKestrel((host, options) =>
{
    // ...
})

Reminder: Kestrel is one way to host your application. Kestrel is a web server built in .NET, based on .NET socket implementations. Previously it was built on top of libuv, the same I/O library that is used by NodeJS. Microsoft removed the dependency on libuv and created its own web server implementation based on .NET sockets.

Docs: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel

The first argument is a WebHostBuilderContext that gives access to the already configured hosting settings or to the configuration itself. The second argument is an object to configure Kestrel. This snippet shows what we did in the last post to configure the socket endpoints the host needs to listen on:

.UseKestrel((host, options) =>
{
    var filename = host.Configuration.GetValue("AppSettings:certfilename", "");
    var password = host.Configuration.GetValue("AppSettings:certpassword", "");
    
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps(filename, password);
    });
})

This will override the default configuration, where you are able to pass in URLs, e.g. using the applicationUrl property of the launchSettings.json or an environment variable.
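
For example, the URLs can be set from outside via the ASPNETCORE_URLS environment variable (shown here for the Windows command line):

set ASPNETCORE_URLS=http://localhost:5000;https://localhost:5001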

HTTP.sys

Did you know that there is another hosting option, a different web server implementation? It is HTTP.sys. This is a pretty mature library deep within Windows that can be used to host your ASP.NET Core application.

.UseHttpSys(options =>
{
    // ...
})

HTTP.sys is different from Kestrel. It cannot be used in IIS because it is not compatible with the ASP.NET Core Module for IIS.

The main reason to use HTTP.sys instead of Kestrel is Windows Authentication, which Kestrel doesn't support. Another reason is if you need to expose the application to the internet without IIS.
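
As a small sketch of such a configuration (the options live in the Microsoft.AspNetCore.Server.HttpSys namespace), Windows Authentication could be enabled like this:

.UseHttpSys(options =>
{
    // Enable Windows Authentication and reject anonymous requests.
    options.Authentication.Schemes =
        AuthenticationSchemes.NTLM | AuthenticationSchemes.Negotiate;
    options.Authentication.AllowAnonymous = false;
})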

IIS itself has also been running on top of HTTP.sys for years, which means UseHttpSys() and IIS are using the same web server implementation. To learn more about HTTP.sys, please read the docs.

Hosting on IIS

An ASP.NET Core application shouldn't be directly exposed to the internet, even though both Kestrel and HTTP.sys support it. It would be best to have something like a reverse proxy in between, or at least a service that watches the hosting process. For ASP.NET Core, IIS isn't only a reverse proxy: it also takes care of the hosting process in case it breaks because of an error, and restarts the process in that case. On Linux, Nginx may be used as a reverse proxy that also takes care of the hosting process.

To host an ASP.NET Core application on IIS or on Azure, you need to publish it first. Publishing doesn't only compile the project; it also prepares it to be hosted on IIS, on Azure, or on a web server on Linux like Nginx.

dotnet publish -o ..\published -r win-x64

This produces an output that can be mapped in the IIS. It also creates a web.config to add settings for IIS or Azure, and it contains the compiled web application as a DLL.
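
The generated web.config looks roughly like this; a sketch for a framework-dependent ASP.NET Core 2.2 deployment (module name and hosting model may differ depending on the project settings):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\ExploreHosting.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="InProcess" />
  </system.webServer>
</configuration>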

If you publish a self-contained application, the output also contains the runtime itself. A self-contained application brings its own .NET Core runtime, but the size of the delivery increases a lot.
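
A self-contained publish could look like this (a sketch):

dotnet publish -c Release -r win-x64 --self-contained true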

And on the IIS? Just create a new website and map it to the folder where you placed the published output:

It gets a little more complicated if you need to change the security settings, if you have some database connections, and so on. That would be a topic for a separate blog post. But in this small sample it simply works:

This is the output of the small middleware in the Startup.cs of the demo project:

app.Run(async (context) =>
{
    await context.Response.WriteAsync("Hello World!");
});

Nginx

Unfortunately I cannot write about Nginx, because I currently don't have a running Linux system to play around with. This is one of my many future projects. So far I have only got ASP.NET Core running on Linux using the Kestrel web server.

Conclusion

ASP.NET Core and the .NET CLI already contain all the tools to get it running on various platforms and to set it up for Azure and IIS, as well as Nginx. This is super easy and well described in the docs.

BTW: What do you think about the new docs experience compared to the old MSDN documentation?

I'll definitely go deeper into some of the topics, and ASP.NET Core has some pretty cool hosting features that make it a lot more flexible to host your application:

Currently we have the WebHostBuilder that creates the hosting environment of the applications. In 3.0 we get the HostBuilder, which is able to create a hosting environment that is completely independent of any web context. I'm going to write about the HostBuilder in one of the next blog posts.

Holger Schwichtenberg: Magdeburger Developer Days from May 20 to 22, 2019

The developer community conference "Magdeburger Developer Days" is entering its fourth round.
