Jürgen Gutsch: ASP.NET Core 3.0: Endpoint Routing

The last two posts were just a quick look into the Program.cs and the Startup.cs. This time I want to take a somewhat deeper look into the new endpoint routing.

Wait!

Sometimes I have an idea about a specific topic to write about and start writing. While writing, I remember that I may have already written about it. Then I take a look into the blog archive and there it is:

Implement Middlewares using Endpoint Routing in ASP.NET Core 3.0

Maybe I get old now... ;-)

This is why I just link to the already existing post.

Anyway, the next two posts are a quick glimpse into Blazor Server Side and Blazor Client Side.

Why? Because I also want to focus on the different hosting models, and Blazor Client Side uses a different one.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Taking a quick look into the Startup.cs

In the last post, I took a quick look into the Program.cs of ASP.NET Core 3.0 and quickly explored the Generic Hosting Model. But the Startup class also has something new in it. We will see some small but important changes.

Just one thing I forgot to mention in the last post: ASP.NET Core 2.1 code in the Program.cs and the Startup.cs should just work in ASP.NET Core 3.0, as long as there is little or no customization. The IWebHostBuilder is still there and can be used the 2.1 way, and the default 2.1 Startup.cs should also run in ASP.NET Core 3.0. You may only need to make some small changes.

The next snippet is the Startup class of a newly created empty web project:

public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
    public void ConfigureServices(IServiceCollection services)
    {
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseRouting();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapGet("/", async context =>
            {
                await context.Response.WriteAsync("Hello World!");
            });
        });
    }
}

The empty web project is an ASP.NET Core project without any ASP.NET Core UI features. This is why the ConfigureServices method is empty: no additional services are added to the dependency injection container.

The new stuff is in the Configure method. The first lines look familiar: depending on the hosting environment, the developer exception page will be shown.

app.UseRouting() is new. This is a middleware that enables the new endpoint routing. The new thing is that routing is decoupled from the specific ASP.NET Core feature. In previous versions every feature (MVC, Razor Pages, SignalR, etc.) had its own endpoint implementation. Now the endpoint and routing configuration can be done independently. Middlewares that need to handle a specific endpoint are now mapped to a specific endpoint or route, so the middlewares don't need to handle the routes themselves anymore.

In the past, if you wrote a middleware that needed to work on a specific endpoint, you either added the logic to check the path inside the middleware or used the MapWhen() extension method on the IApplicationBuilder to add the middleware for a specific route.

Now you create a new pipeline (using an IApplicationBuilder) per endpoint and map the middleware to that new pipeline.

The MapGet() method above does this implicitly: it creates a new endpoint for "/" and maps the delegate middleware to a pipeline that is created internally.
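
To make this more concrete, here is a minimal sketch of how a custom middleware could get its own endpoint pipeline. MyCustomMiddleware is a hypothetical middleware class used for illustration only:

app.UseEndpoints(endpoints =>
{
    // Build a separate pipeline (an IApplicationBuilder) just for this endpoint
    var statusPipeline = endpoints.CreateApplicationBuilder()
        .UseMiddleware<MyCustomMiddleware>() // hypothetical middleware
        .Build();

    // Map the pipeline to a route instead of checking the path inside the middleware
    endpoints.Map("/status", statusPipeline);
});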

That was a simple snippet. Now let's have a look into the Startup.cs of a new full-blown web application using individual authentication, created with this .NET CLI command:

dotnet new mvc --auth Individual

Overall this also looks pretty familiar if you already know the previous versions:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {

        services.AddDbContext<ApplicationDbContext>(options =>
            options.UseSqlite(
                Configuration.GetConnectionString("DefaultConnection")));
        services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = true)
            .AddEntityFrameworkStores<ApplicationDbContext>();

        services.AddControllersWithViews();
        services.AddRazorPages();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseDatabaseErrorPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
            // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
            app.UseHsts();
        }

        app.UseHttpsRedirection();
        app.UseStaticFiles();

        app.UseRouting();

        app.UseAuthentication();
        app.UseAuthorization();

        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllerRoute(
                name: "default",
                pattern: "{controller=Home}/{action=Index}/{id?}");
            endpoints.MapRazorPages();
        });
    }
}

This is an MVC application, but did you see the lines where MVC is added? I'm sure you did. It is no longer called MVC, even though the MVC pattern is used, because the old naming was a little confusing alongside Web API.

To add MVC you now need to call AddControllersWithViews(). If you want to add Web API only, you just call AddControllers(). I think this is a small but useful change, because you can be more specific about which ASP.NET Core features you add. In this case Razor Pages were also added to the project. It is absolutely no problem to mix ASP.NET Core features.

AddMvc() still exists and still works in ASP.NET Core 3.0.
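
As a rough sketch, these are the registration options in ConfigureServices (the comments are mine, not from the template):

// Web API only: controllers without views and without Razor Pages
services.AddControllers();

// MVC: controllers plus views
services.AddControllersWithViews();

// Razor Pages
services.AddRazorPages();

// AddMvc() still works and registers roughly the combination of the calls above
services.AddMvc();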

The Configure method doesn't really change, except for the new endpoint routing part. Two endpoints are configured: one for controller routes (which covers Web API and MVC) and one for Razor Pages.

Conclusion

This was also just a quick look into the Startup.cs, which contains some small but useful changes.

In the next post I'm going to take a more detailed look into the new endpoint routing. While working on the GraphQL endpoint for ASP.NET Core, I learned a lot about endpoint routing. This feature makes a lot of sense to me, even if it means rethinking some things when you build and provide a middleware.

Golo Roden: Functional Programming with Objects

JavaScript offers various methods for functional programming, for example map, reduce and filter. However, they are only available for arrays, not for objects. With ECMAScript 2019 this can be changed in an elegant way.

Jürgen Gutsch: New in ASP.NET Core 3.0 - Generic Hosting Environment

In ASP.NET Core 3.0 the hosting environment becomes more generic. Hosting is no longer bound to Kestrel and no longer bound to ASP.NET Core. This means you are able to create a host that doesn't start the Kestrel web server and doesn't need to use the ASP.NET Core framework.

This is a small introduction post about the Generic Hosting Environment in ASP.NET Core 3.0. In the next posts I'm going to write more about it and what you can do with it in combination with some more ASP.NET Core 3.0 features.

In the next posts we will see a lot more details about why this makes sense. For now, the short version: there are different hosting models. One is the already known web hosting. Another model runs a worker service without a web server and without ASP.NET Core. Blazor also uses a different hosting model inside WebAssembly.

What does it look like in ASP.NET Core 3.0?

First let's recap how it looks in previous versions. This is an ASP.NET Core 2.2 Program.cs that creates an IWebHostBuilder to start up Kestrel and to bootstrap ASP.NET Core using the Startup class:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

The next snippet shows the Program.cs of a new ASP.NET Core 3.0 web project:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

Now an IHostBuilder is created and configured first. Inside ConfigureWebHostDefaults, an IWebHostBuilder is created that uses the configured Startup class.

The typical .NET Core app features like configuration, logging and dependency injection are configured at the level of the IHostBuilder. All the ASP.NET Core-specific features like authentication, middlewares, action filters, formatters, etc. are configured at the level of the IWebHostBuilder.
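
To make the separation more concrete, here is a minimal sketch of a generic host that uses neither Kestrel nor ASP.NET Core, assuming a hypothetical Worker class that implements IHostedService:

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    // No ConfigureWebHostDefaults, no Kestrel, no Startup class
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices((context, services) =>
            {
                // Worker is a hypothetical IHostedService implementation
                services.AddHostedService<Worker>();
            });
}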

Conclusion

This makes the hosting environment a lot more generic and flexible.

I'm going to write about specific scenarios during the next posts about the new ASP.NET Core 3.0 features. But first I will have a look into Startup.cs to see what is new in ASP.NET Core 3.0.

Marco Scheel: Manage Microsoft Teams membership with Azure AD Access Review

This post will introduce you to the Azure AD Access Review feature. With the introduction of modern collaboration through Microsoft 365, with Microsoft Teams as the main tool, it is important to manage who is a member of the underlying Office 365 Group (Azure AD group).

<DE>For greater reach, today's post appears in English. It is about the introduction of Access Reviews (Azure AD) in combination with Microsoft Teams. Managing the membership of a team is supported by this feature and keeps the owners at the center. If there is strong interest in a completely German version, please let me know.</DE>

Microsoft has great resources to get started on a technical level. The feature enables a set of people to review another set of people. Azure AD leverages this capability (all under the bigger umbrella called Identity Governance) for two assets: Azure AD groups and Azure AD apps. Microsoft Teams as a hub for collaboration is built on top of Office 365 Groups, so we will have a closer look at the Access Review part for Azure AD groups.

Each Office 365 Group (each team) is built from a set of owners and members. With the open nature of Office 365, members can be employees, contractors, or people outside of the organization.

image

In our modern collaboration (Teams, SharePoint, …) implementations we strongly recommend leveraging the full self-service group creation that is already built into the system. With this setup everyone is able to create and manage/own a group. Ongoing user education is needed so that everyone understands the concept behind modern groups. Many organizations also have a strong set of internal rules that force a so-called information owner (which could be the owner of a group) to review who has access to their data. Most organizations rely on people fulfilling their duties as demanded, but let's face it: owners are just human beings who need to do their “real” job. With the introduction of Azure AD Access Review we can support these owner duties and make the process documented and easy to execute.

AAD Access Review can do the following to support an up-to-date group membership:

  • Set up an Access Review for an Azure AD Group
  • Specify the duration (start date, recurrence, duration, …)
  • Specify who will do the review (owner, self, specific people, …)
  • Specify who will be reviewed (all members, guests, …)
  • Specify what will happen if the review is not executed (remove members, …)

Before we start we need to talk about licensing. It is obvious that M365 E5 is the best SKU to start with ;) but if you are not that lucky, you need at least an Azure AD P2 license. It is not a “very” common license, as it was only part of the EMS E5 SKU, but some time ago Microsoft started offering really attractive license bundles. Many orgs with strong security requirements will at some point hit a license SKU that includes AAD P2. For your trusty lab tenants, start an EMS E5 trial to test these features today. To be precise, only the accounts doing the review (executing the Access Review) need the license; at least that is my understanding, and as always with licensing, ask your usual licensing people to get the definitive answer.

An Access Review (if not automated through the MS Graph beta) is set up in the Azure Portal in the Identity Governance blade of AAD. To create our first Access Review we need to onboard to this feature.

image

Please note we are looking at Access Review in the context of modern collaboration (groups created by Teams, SharePoint, Outlook, …). Access Review can be used to review any AAD group that you use to grant access to a specific resource or to keep a list of trusted users for a piece of infrastructure in Azure. The following information might not always be valid for your scenario!

This is the first half of the screen we need to fill-out for a new Access Review:

image


Review name: This is a really important piece! The review name will be the “only” visible clue for the reviewers once they get the email about the outstanding review. With self-service setup and the way people name their groups, we need to ensure people understand what they are reviewing. We try to automate the creation of the reviews, so we put the review timing, the group name and the group's object ID in the review name. The ID helps during support: if you send out 4000 Access Reviews and people ask why they got this email, they can provide you with the ID and things get easier. For example: 2019-Q1 GRP New Order (af01a33c-df0b-4a97-a7de-c6954bd569ef)

Frequency: Also very important! You have to understand that an Access Review is somewhat static. You could do a recurring review, but some information will be out of sync. For example, the group could be renamed, but the title will not be updated and people might get confused by misleading information in the email that is sent out. If you choose to let the owners of a group do the review, the owners will be “copied” to the Access Review config and not updated for future reviews. Technically this could be fixed by Microsoft, but as of now we have run into problems in the context of modern collaboration.

image

Users: “Members of a group” is our choice for collaboration. The other option is “Assigned to an application” and not our focus. For a group we have the option to review guests only or to review everybody who is a member of the group. Based on organizational needs and information like confidentiality, we can make a decision. As a starting point it could be a good option to go with guests only, because guests are not very well controlled in most environments. An employee at least has a contract, and the general trust level should be higher.

Group: Select a group the review should apply to. The latest changes to the Access Review feature allow you to select multiple groups at once. From a collaboration perspective I would avoid it, because at the end of the creation process each group will have its own Access Review instance and the settings are no longer shared. Once again, from a collab point of view we need some kind of automation, because it is not feasible to create these reviews as a manual task for the foreseeable future.

Reviewers: The natural choice for an Office 365 Group (Team) is to go with the “Group owners” option, especially if we automate the process and don't have an extra database to look up who the information owner is. For static groups or highly confidential groups the option “Selected users” could make sense. An interesting option is also the last one, called “Members (self)”. This option will “force” each member to decide whether they are still part of this project, team or group. We at Glück & Kanja are currently thinking about doing this for some of our internal client teams. Most of our groups are public and accessible by most employees, but membership documents some kind of current involvement with the client represented by the group. This could also naturally reduce the number of teams that show up in your Microsoft Teams client app. As mentioned earlier, at the moment it seems that the option “Group owners” is resolved once the Access Review starts and the instance of the review is then fixed, so an owner change might not be reflected in future instances of recurring reviews. Hopefully this will be fixed by Microsoft.

Program: This is a logical grouping of Access Reviews. For example, we could add all collaboration-related reviews to one program, versus administration reviews that follow a more static route.

image

More advanced settings are collapsed, but they should definitely be reviewed.

Upon completion settings: Allows the review results to be applied automatically. I would suggest trying this setting, because it will not only document the review but also take the required action on the membership. If group owners are not aware of what these Access Review emails are, then we are talking about potential loss of access for members who were not reviewed, but in the end that is what we want. People need to take this part of identity governance seriously and take care of their data. Any change made by the system is documented (audit log of the group) and can be reversed manually. If the system does not apply the results of the review, someone must look up the results regularly and then make sure to remove users based on the outcome. If you go for Access Review, I strongly recommend automatically applying the results (after your own internal tests).

Let's take a look at the created Access Review.

image


Azure Portal: This is an overview for the admin (non-recurring Access Review).

image


Email: As you can see, the prominent review name is what stands out to the user. The group name (also highlighted in red) is buried within the rest of the text.

image


Click on “Start Review” from the email: The user can now take action based on recommendations (missing in my lab tenant due to inactivity of my lab users).

image

Take Review: Accept 6 users.

image

Review Summary: This is the summary after the owner has taken all actions.

image

Azure Portal: Audit log information for the group.

After the user completed the review, the system didn't make a change to the group. Even with the configuration set to apply actions automatically, the results are applied at the end of the review period! Until then the owners can change their minds. Once the review period is over, the system will apply the needed changes.

I really love this feature in the context of modern collaboration. The process of keeping a current list of involved members in a team is a big benefit for productivity and security. The “need to know” principle is supported by a technical implementation “free of cost” (as mentioned, everyone should have AAD P2 through some SKU 😎).

Our GK O365 Lifecycle tool was extended to allow the creation of Access Reviews through the Microsoft Graph based on the group/team classification. Once customers read about or get a demo of this feature and own the license, we immediately start a POC implementation. If our tool is already in place, it is only a matter of some JSON configuration to get up and running.

Code-Inside Blog: SQL Server, Named Instances & the Windows Firewall

The problem

“Cannot connect to sql\instance. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)”

Let’s say we have a system with a running SQL Server (Express or Standard Edition - doesn’t matter) and want to connect to this database from another machine. The chances are high that you will see the above error message.

Be aware: You can customize more or less anything, so this blog post only covers a very “common” installation.

I struggled with this problem last week and learned that this is a pretty “old” issue. To enlighten my dear readers I made the following checklist:

Checklist:

  • Does the SQL Server allow remote connections?
  • Does the SQL Server allow your authentication schema of choice (Windows or SQL Authentication)?
  • Check in the “SQL Server Configuration Manager” whether the needed TCP/IP protocol is enabled for your SQL instance.
  • Check your Windows Firewall (see details below!)

Windows Firewall settings:

By default, SQL Server uses TCP port 1433. This is the minimum requirement without any special needs - use this command:

netsh advfirewall firewall add rule name = SQLPort dir = in protocol = tcp action = allow localport = 1433 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

If you use named instances, we need (at least) two additional ports:

netsh advfirewall firewall add rule name = SQLPortUDP dir = in protocol = udp action = allow localport = 1434 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

This UDP Port 1434 is used to query the real TCP port for the named instance.

Now the most important part: SQL Server will use a (kind of) random dynamic port for the named instance. To avoid this behavior (which is really a killer for firewall settings), you can set a fixed port in the SQL Server Configuration Manager.

SQL Server Configuration Manager -> Instance -> TCP/IP Protocol (make sure this is "enabled") -> *Details via double click* -> Under IPAll set a fixed port under "TCP Port", e.g. 1435

After this configuration, allow this port to communicate to the world with this command:

netsh advfirewall firewall add rule name = SQLPortInstance dir = in protocol = tcp action = allow localport = 1435 remoteip = localsubnet profile = DOMAIN,PRIVATE,PUBLIC

(Thanks Stackoverflow!)

Check the official Microsoft Docs for further information on this topic, but these commands helped me to connect to my SQL Server.

The “dynamic” port was my main problem - after some hours of googling I found the answer on Stack Overflow and could establish a connection to my SQL Server with SQL Server Management Studio.
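
To double-check the fixed port from another machine, a minimal C# sketch could look like this (server name, instance name and port are placeholders; it assumes the System.Data.SqlClient package and Windows authentication):

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // "sql" is the host, "instance" the named instance, 1435 the fixed TCP port configured above
        var connectionString = @"Server=sql\instance,1435;Database=master;Integrated Security=true;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // fails if the firewall or SQL Server configuration still blocks the connection
            Console.WriteLine("Connected: " + connection.ServerVersion);
        }
    }
}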

Hope this helps!

Kazim Bahar: Artificial Intelligence for .NET Applications

With the new ML.NET framework from Microsoft, existing .NET applications can be...

Stefan Henneken: IEC 61131-3: Exception Handling with __TRY/__CATCH

When executing a program, there is always the possibility of an unexpected runtime error occurring. These occur when a program tries to perform an illegal operation. This kind of scenario can be triggered by events such as division by 0 or a pointer which tries to reference an invalid memory address. We can significantly improve the way these exceptions are handled by using the keywords __TRY and __CATCH.

The list of possible causes for runtime errors is endless. What all these errors have in common is that they cause the program to crash. Ideally, there should at least be an error message with details of the runtime error:

Pic01

Because this leaves the program in an undefined state, runtime errors cause the system to halt. This is indicated by the yellow TwinCAT icon:

Pic02

For an operational system, an uncontrolled stop is not always the optimal response. In addition, the error message does not provide enough information about where in the program the error occurred. This makes improving the software a tricky task.

To help track down errors more quickly, you can add check functions to your program.

Pic03 

Check functions are called whenever the relevant operation is executed. The best known is probably CheckBounds(). Each time an array element is accessed, this function is implicitly called beforehand. The parameters passed to this function are the array bounds and the index of the element being accessed. This function can be configured to automatically correct attempts to access elements which are out of bounds. This approach does, however, have some disadvantages.

  1. CheckBounds() is not able to determine which array is being accessed, so error correction has to be the same for all arrays.
  2. Because CheckBounds() is called whenever an array element is accessed, it can significantly slow down program execution.

It’s a similar story with other check functions.

It is not unusual for check functions to be used during development only. Check functions include breakpoints, which stop the program when an operation throws up an error. The call stack can then be used to determine where in the program the error has occurred.

The ‘try/catch’ statement

Runtime errors in general are also known as exceptions. IEC 61131-3 includes __TRY, __CATCH and __ENDTRY statements for detecting and handling these exceptions:

__TRY
  // statements
__CATCH (exception type)
  // statements
__ENDTRY
// statements

The TRY block (the statements between __TRY and __CATCH) contains the code with the potential to throw up an exception. Assuming that no exception occurs, all of the statements in the TRY block will be executed as normal. The program will then continue from the line immediately following the __ENDTRY statement. If, however, one of the statements within the TRY block causes an exception, the program will jump straight to the CATCH block (the statements between __CATCH and __ENDTRY). All subsequent statements within the TRY block will be skipped.

The CATCH block is only executed if an exception occurs; it contains the error handling code. After processing the CATCH block, the program continues from the statement immediately following __ENDTRY.

The __CATCH statement takes the form of the keyword __CATCH followed, in brackets, by a variable of type __SYSTEM.ExceptionCode. The __SYSTEM.ExceptionCode data type contains a list of all possible exceptions. If an exception occurs, causing the CATCH block to be called, this variable can be used to query the cause of the exception.

The following example divides two elements of an array by each other. The array is passed to the function using a pointer. If the return value is negative, an error has occurred. The negative return value provides additional information on the cause of the exception:

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR
 
__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__ENDTRY

The ‘finally’ statement

The optional __FINALLY statement can be used to define a block of code that will always be called whether or not an exception has occurred. There’s only one condition: the program must step into the TRY block.

We’re going to extend our example so that a value of one is added to the result of the calculation. We’re going to do this whether or not an error has occurred.

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR
 
__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__FINALLY
  F_Calc := F_Calc + 1;
__ENDTRY

Sample 1 (TwinCAT 3.1.4024 / 32 Bit) on GitHub

The statement in the FINALLY block (F_Calc := F_Calc + 1;) will always be executed whether or not an exception has occurred.

If no exception occurs within the TRY block, the FINALLY block will be called straight after the TRY block.

If an exception does occur, the CATCH block will be executed first, followed by the FINALLY block. Only then will the program exit the function.

__FINALLY therefore enables you to perform various operations irrespective of whether or not an exception has occurred. This generally involves releasing resources, for example closing a file or dropping a network connection.

Extra care should be taken in implementing the CATCH and FINALLY blocks. If an exception occurs within these blocks, it will give rise to an unexpected runtime error, resulting in an immediate uncontrolled program stop.

The sample program runs under 32-bit TwinCAT 3.1.4024 or higher. 64-bit systems are not currently supported.

Stefan Henneken: IEC 61131-3: Exception Handling with __TRY/__CATCH

When executing a PLC program, unexpected runtime errors can occur. They occur as soon as the PLC program tries to perform an illegal operation. Such scenarios can be triggered, for example, by a division by 0 or by a pointer that references an invalid memory area. With the keywords __TRY and __CATCH, these exceptions can be handled much better than before.

The list of possible causes for runtime errors can be extended endlessly. What all these errors have in common is that they cause the program to crash. At best, a message points out the runtime error:

Pic01

Because the PLC program is then in an undefined state, the system is halted. This is indicated by the yellow TwinCAT icon in the Windows taskbar:

Pic02

For systems in operation, an uncontrolled stop is not always the optimal response. In addition, the message gives insufficient information about where exactly in the PLC program the error occurred. This makes improving the software difficult.

To track down errors more quickly, check functions can be added to the PLC program.

Pic03

Check functions are called every time the relevant operation is executed. The best known is probably CheckBounds(). Each time an array element is accessed, this function is implicitly called beforehand. The function receives the array bounds and the index of the element being accessed as parameters. The function can be adapted so that attempts to access elements outside the array bounds are corrected. However, this approach has some disadvantages:

  1. CheckBounds() cannot determine which array is being accessed, so only the same error correction can be implemented for all arrays.
  2. Because the check function is called on every array access, the runtime of the program can deteriorate considerably.

It is a similar story with the other check functions.

It is not unusual for check functions to be used only during the development phase. Breakpoints are activated in these functions, which halt the PLC program as soon as a faulty operation is executed. The call stack can then be used to determine the relevant location in the PLC program.

The ‘try/catch’ statement

In general, runtime errors are referred to as exceptions. IEC 61131-3 provides the statements __TRY, __CATCH and __ENDTRY for detecting and handling exceptions:

__TRY
  // statements
__CATCH (exception type)
  // statements
__ENDTRY
// statements

The TRY block (the statements between __TRY and __CATCH) contains the statements that could potentially cause an exception. If no exception occurs, all statements in the TRY block are executed, and the PLC program then continues after __ENDTRY. If, however, one of the statements within the TRY block causes an exception, program execution continues immediately in the CATCH block (the statements between __CATCH and __ENDTRY). All remaining statements within the TRY block are skipped.

The CATCH block is only executed in the event of an exception and contains the desired error handling. After the CATCH block has been processed, the PLC program continues with the statements after __ENDTRY.

After the __CATCH statement, a variable of type __SYSTEM.ExceptionCode is specified in round brackets. The __SYSTEM.ExceptionCode data type contains a list of all possible exceptions. If the CATCH block is called due to an exception, this variable can be used to query the cause of the exception.

In the following example, two elements of an array are divided. The array is passed to the function via a pointer. If the return value of the function is negative, an error occurred during execution. The negative return value provides more detailed information about the cause of the exception:

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR

__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__ENDTRY

The ‘finally’ statement

The optional __FINALLY statement can be used to define a block of code that is always called, whether or not an exception has occurred. There is only one condition: the PLC program must at least step into the TRY block.

The example is extended so that one is additionally added to the result of the calculation. This happens whether or not an error has occurred.

FUNCTION F_Calc : LREAL
VAR_INPUT
  pData     : POINTER TO ARRAY [0..9] OF LREAL;
  nElementA : INT;
  nElementB : INT;
END_VAR
VAR
  exc       : __SYSTEM.ExceptionCode;
END_VAR

__TRY
  F_Calc := pData^[nElementA] / pData^[nElementB];
__CATCH (exc)
  IF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ARRAYBOUNDS) THEN
    F_Calc := -1;
  ELSIF ((exc = __SYSTEM.ExceptionCode.RTSEXCPT_FPU_DIVIDEBYZERO) OR
         (exc = __SYSTEM.ExceptionCode.RTSEXCPT_DIVIDEBYZERO)) THEN
    F_Calc := -2;
  ELSIF (exc = __SYSTEM.ExceptionCode.RTSEXCPT_ACCESS_VIOLATION) THEN
    F_Calc := -3;
  ELSE
    F_Calc := -4;
  END_IF
__FINALLY
  F_Calc := F_Calc + 1;
__ENDTRY

Sample 1 (TwinCAT 3.1.4024 / 32 Bit) on GitHub

The statement in the FINALLY block (F_Calc := F_Calc + 1;) is always executed, whether or not an exception is thrown.

If no exception is thrown in the TRY block, the FINALLY block is called straight after the TRY block.

If an exception occurs, the CATCH block is executed first, followed by the FINALLY block. Only then is the function exited.

__FINALLY therefore makes it possible to perform various operations regardless of whether an exception has occurred or not. This usually involves releasing resources, for example closing a file or terminating a network connection.

Special care should be taken when implementing the CATCH and FINALLY blocks. If an exception occurs in one of these code blocks, it triggers an unexpected runtime error, with the result that the PLC program is stopped immediately.

At this point I would also like to point out the blog of Matthias Gehring. One of his posts (https://www.codesys-blog.com/tipps/exceptionhandling-in-iec-applikationen-mit-codesys) also covers the topic of exception handling.

The sample program runs on 32-bit systems under TwinCAT 3.1.4024 or higher. 64-bit systems are not currently supported.

Stefan Henneken: IEC 61131-3: Parameter transfer via FB_init

Depending on the task, it may be necessary for function blocks to require parameters that are only used once for initialization tasks. One possible way to pass them elegantly is to use the FB_init() method.

Before TwinCAT 3, initialisation parameters were very often transferred via input variables.

(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
  nDatabits  : BYTE(7..8);
  eParity    : E_Parity;
  nStopbits  : BYTE(1..2);
END_VAR

This had the disadvantage that the function blocks became unnecessarily large in the graphic display modes. It was also not possible to prevent changing the parameters at runtime.

The FB_init() method is very helpful here. It is implicitly executed once before the PLC task is started and can be used to perform initialization tasks.

The dialog for adding methods offers a ready-made template for this purpose.

Pic01

The method contains two input variables that provide information about the conditions under which the method is executed. The variables may not be deleted or changed. However, FB_init() can be supplemented with further input variables.

Example

An example is a block for communication via a serial interface (FB_SerialCommunication). This block should also initialize the serial interface with the necessary parameters. For this reason, three variables are added to FB_init():

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);        
END_VAR

The serial interface is not initialized directly in FB_init(). Therefore, the parameters must be copied into variables located in the function block.

FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
  nInternalDatabits    : BYTE(7..8);
  eInternalParity      : E_Parity;
  nInternalStopbits    : BYTE(1..2);
END_VAR

During initialization, the values from FB_init() are copied in these three variables.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR
 
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

If an instance of FB_SerialCommunication is created, these three additional parameters must also be specified. The values are specified directly after the name of the function block in round brackets:

fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
                                               eParity := E_Parity.None,
                                               nStopbits := 1);

Even before the PLC task starts, the FB_init() method is implicitly called, so that the internal variables of the function block receive the desired values.

Pic02

With the start of the PLC task and the call of the instance of FB_SerialCommunication, the serial interface can now be initialized.

It is always necessary to specify all parameters. A declaration without a complete list of the parameters is not allowed and generates an error message when compiling:

Pic03

Arrays

If FB_init() is used for arrays, the complete parameters must be specified for each element (with square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
                 (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                 (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];

If all elements are to have the same initialization values, it is sufficient if the parameters exist once (without square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
                                                             eParity := E_Parity.None,
                                                             nStopbits := 1);

Multidimensional arrays are also possible. All initialization values must also be specified here:

aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
                      (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
                      (nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];

Inheritance

If inheritance is used, the method FB_init() is always inherited. FB_SerialCommunicationRS232 is used here as an example:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication

If an instance of FB_SerialCommunicationRS232 is created, the parameters of FB_init(), which were inherited from FB_SerialCommunication, must also be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1);

It is also possible to overwrite FB_init(). In this case, the same input variables must exist in the same order and be of the same data type as in the basic FB (FB_SerialCommunication). However, further input variables can be added so that the derived function block (FB_SerialCommunicationRS232) receives additional parameters:

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
  nBaudrate    : UDINT; 
END_VAR
 
THIS^.nInternalBaudrate := nBaudrate;

If an instance of FB_SerialCommunicationRS232 is created, all parameters, including those of FB_SerialCommunication, must be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1,
                                                         nBaudRate := 19200);

In the method FB_init() of FB_SerialCommunicationRS232, only the copying of the new parameter (nBaudrate) is necessary. Because FB_SerialCommunicationRS232 inherits from FB_SerialCommunication, FB_init() of FB_SerialCommunication is also executed implicitly before the PLC task is started. Both FB_init() methods of FB_SerialCommunication and of FB_SerialCommunicationRS232 are always called implicitly. When inherited, FB_init() is always called from ‘bottom’ to ‘top’, first from FB_SerialCommunication and then from FB_SerialCommunicationRS232.

Forwarding parameters

As an example, the function block FB_SerialCommunicationCluster is used, in which several instances of FB_SerialCommunication are declared:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
  fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  nInternalDatabits       : BYTE(7..8);
  eInternalParity         : E_Parity;
  nInternalStopbits       : BYTE(1..2); 
END_VAR

FB_SerialCommunicationCluster also receives the method FB_init() with the necessary input variables so that the parameters of the instances can be set externally.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR
 
THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

However, there are some things to be taken into consideration here. The call sequence of FB_init() is not clearly defined in this case. In my test environment the calls are made from ‘inside’ to ‘outside’. First fbSerialCommunication01.FB_init() and fbSerialCommunication02.FB_init() are called, then fbSerialCommunicationCluster.FB_init(). It is not possible to pass the parameters from ‘outside’ to ‘inside’. The parameters are therefore not available in the two inner instances of FB_SerialCommunication.

The sequence of the calls changes as soon as FB_SerialCommunication and FB_SerialCommunicationCluster are derived from the same base FB. In this case FB_init() is called from ‘outside’ to ‘inside’. This approach cannot always be implemented for two reasons:

  1. If FB_SerialCommunication is located in a library, the inheritance cannot be changed just offhand.
  2. The call sequence of FB_init() is not further defined with nesting. So it cannot be excluded that this can change in future versions.

One way to solve the problem is to explicitly call FB_SerialCommunication.FB_init() from FB_SerialCommunicationCluster.FB_init().

fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);

All parameters, including bInitRetains and bInCopyCode, are passed on directly.

Attention: Calling FB_init() always initializes all local variables of the instance. This must be considered as soon as FB_init() is explicitly called from the PLC task instead of implicitly before the PLC task.

Access via properties

If the parameters are passed via FB_init(), they can neither be read from outside nor changed at runtime. The only exception would be an explicit call of FB_init() from the PLC task. However, this should generally be avoided, since all local variables of the instance are reinitialized in this case.

If, however, access should still be possible, appropriate properties can be created for the parameters:

Pic04

The setter and getter of the respective properties access the corresponding local variables in the function block (nInternalDatabits, eInternalParity and nInternalStopbits). Thus, the parameters can be specified in the declaration as well as at runtime.

By removing the setter, you can prevent the parameters from being changed at runtime. If the setter is available, FB_init() can be omitted. Properties can also be initialized directly when declaring an instance.

fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
                                                   Parity := E_Parity.Odd,
                                                   Stopbits := 1);

The parameters of FB_init() and the properties can also be specified simultaneously:

fbSerialCommunication  : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
                                               (Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);

In this case, the initialization values of the properties take priority. Passing values both via properties and via FB_init() has the disadvantage that the declaration of the function block becomes unnecessarily long, and implementing both does not seem necessary to me either. If all parameters can also be written via properties, the initialization via FB_init() can be omitted. Conclusion: If parameters must not be changeable at runtime, consider using FB_init(). If write access is acceptable, properties are another option.

Sample 1 (TwinCAT 3.1.4022) on GitHub

David Tielke: #DWX2019 - Contents of my sessions

That was it again: Developer Week 2019 in Nuremberg. After three conference days and, of course, the traditional workshop day on Thursday, we all arrived back home exhausted but happy. Besides sessions on CoCo 2.0 and software quality, this year there were also two evening events from me, one of them together with my colleague Christian Giesswein. Now that my employee Sebastian and I have finished the follow-up work, we are publishing here the contents of my sessions and of our joint workshop on Thursday.

Software quality


Composite Components 2.0

Since my notebook almost completely refused to work with the stylus during the session, unfortunately I cannot provide my usual drawings here. Instead, here are the repos with the sample implementations of Composite Components 1.0 & 2.0 on GitHub:


Workshop: Architecture 2.0



Here are also the sample projects developed for both versions of the architecture.

Holger Schwichtenberg: The VSTS CLI is dead - long live the Azure DevOps CLI

The "Azure DevOps CLI", the successor to the "VSTS CLI", has had "General Availability" status since July 8, 2019 - but it is by no means finished.

Jürgen Gutsch: MVP for four times in a row

Another year later, it was July 1st again and I got the email from the Global MVP Administrator that I had been waiting for :-)

Yes, this is kind of a yearly series of posts. But I'm really excited that I got re-awarded to be an MVP for the fifth year in a row. This is absolutely amazing and makes me really proud.

Even though some folks reduce the MVP award to just a marketing instrument of Microsoft and say MVPs are just selling Microsoft to the rest of the world, the award tells me that the work I do in my spare time is important for some people out there. Those folks are right anyway. Sure, I'm selling Microsoft to the rest of the world, but this is my hobby. I don't sell it explicitly; I'm just telling other people about the stuff I work with, the stuff I use to get things done and to earn money in the end. It is about .NET and ASP.NET as well as about software development and the developer community. It is also about stuff I just learned while looking into new technology.

Selling Microsoft is just a side effect with no additional effort and it doesn't feel wrong.

I'm not sure whether I have put a lot more effort into my hobby since I became an MVP or not. I think it was a bit more, because being an MVP makes me proud, makes me feel successful and tells me that my work is important for some folks. Who cares :-)

As long as some folks are reading my blog, attending the user group meetings or watching my live streams, I will continue doing that kind of work.

As already written, I'm proud of it and proud to add the fifth ring to my MVP award trophy, which will be blue this time.

And I feel lucky that I'm able to attend the Global MVP Summit for the fifth time next year in March and to see all my MVP friends again. I'm really looking forward to that event and to being in the nice and always sunny Seattle area. (Yes, it is always sunny in Seattle when I'm there.)

I'm also happy to see that almost all MVP friends got re-awarded.

Congratulations to all awarded and re-awarded MVPs!

Many thanks to the developer community for letting me be a part of it. And many thanks for the amazing feedback I get as a result of my work. It is a lot of fun to help and to contribute to that awesome community :-)

Marco Scheel: Setting up app permissions for Microsoft Graph calls automatically

For our Glück & Kanja Lifecycle tool I mainly rely on Microsoft Graph calls. For a clean setup I now have a script. It uses the Az PowerShell module and the Azure CLI. Especially when creating an Azure AD app (more precisely: assigning and granting permissions), the Azure CLI is still a bit better and more comprehensive than the Az PowerShell module.

The Lifecycle app works with AD settings and groups. Advanced functions rely on the Access Reviews feature from the AAD P2 license set. I set these Graph permissions directly via the CLI script:

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 19dbc75e-c2e2-444c-a770-ec69d8559fc7=Role #msgraph Directory.ReadWrite.All

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 62a82d76-70ea-41e2-9197-370581804d09=Role #msgraph Group.ReadWrite.All

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions ef5f7d5c-338f-44b0-86c3-351f46c8bb5f=Role #msgraph AccessReview.ReadWrite.All

az ad app permission add --id $adapp.ApplicationId --api 00000003-0000-0000-c000-000000000000 --api-permissions 60a901ed-09f7-4aa5-a16e-7dd3d6f9de36=Role #msgraph ProgramControl.ReadWrite.All

The Azure CLI can then also take care of the admin consent grant right away (as long as you are not running in the Azure Cloud Shell!):

az ad app permission admin-consent --id $adapp.ApplicationId

Here is an example of what the result looks like in the Azure AD portal:

image

If you are looking for the GUID of a permission, you can easily query the constantly growing set of app permissions with this command (Azure Active Directory PowerShell 2.0):

(Get-AzureADServicePrincipal -Filter "DisplayName eq 'Microsoft Graph'").AppRoles | Select Id, Value | Sort Value

Id                                   Value
--                                   -----
d07a8cc0-3d51-4b77-b3b0-32704d1f69fa AccessReview.Read.All
ef5f7d5c-338f-44b0-86c3-351f46c8bb5f AccessReview.ReadWrite.All
18228521-a591-40f1-b215-5fad4488c117 AccessReview.ReadWrite.Membership
134fd756-38ce-4afd-ba33-e9623dbe66c2 AdministrativeUnit.Read.All
5eb59dd3-1da2-4329-8733-9dabdc435916 AdministrativeUnit.ReadWrite.All
1bfefb4e-e0b5-418b-a88f-73c46d2cc8e9 Application.ReadWrite.All
18a4783c-866b-4cc7-a460-3d5e5662c884 Application.ReadWrite.OwnedBy
b0afded3-3588-46d8-8b3d-9842eff778da AuditLog.Read.All
798ee544-9d2d-430c-a058-570e29e34338 Calendars.Read
ef54d2bf-783f-4e0f-bca1-3210c0444d99 Calendars.ReadWrite
a7a681dc-756e-4909-b988-f160edc6655f Calls.AccessMedia.All
284383ee-7f6e-4e40-a2a8-e85dcb029101 Calls.Initiate.All
4c277553-8a09-487b-8023-29ee378d8324 Calls.InitiateGroupCall.All
f6b49018-60ab-4f81-83bd-22caeabfed2d Calls.JoinGroupCall.All
fd7ccf6b-3d28-418b-9701-cd10f5cd2fd4 Calls.JoinGroupCallAsGuest.All
7b2449af-6ccd-4f4d-9f78-e550c193f0d1 ChannelMessage.Read.All
4d02b0cc-d90b-441f-8d82-4fb55c34d6bb ChannelMessage.UpdatePolicyViolation.All
6b7d71aa-70aa-4810-a8d9-5d9fb2830017 Chat.Read.All
294ce7c9-31ba-490a-ad7d-97a7d075e4ed Chat.ReadWrite.All
7e847308-e030-4183-9899-5235d7270f58 Chat.UpdatePolicyViolation.All
089fe4d0-434a-44c5-8827-41ba8a0b17f5 Contacts.Read
6918b873-d17a-4dc1-b314-35f528134491 Contacts.ReadWrite
1138cb37-bd11-4084-a2b7-9f71582aeddb Device.ReadWrite.All
7a6ee1e7-141e-4cec-ae74-d9db155731ff DeviceManagementApps.Read.All
dc377aa6-52d8-4e23-b271-2a7ae04cedf3 DeviceManagementConfiguration.Read.All
2f51be20-0bb4-4fed-bf7b-db946066c75e DeviceManagementManagedDevices.Read.All
58ca0d9a-1575-47e1-a3cb-007ef2e4583b DeviceManagementRBAC.Read.All
06a5fe6d-c49d-46a7-b082-56b1b14103c7 DeviceManagementServiceConfig.Read.All
7ab1d382-f21e-4acd-a863-ba3e13f7da61 Directory.Read.All
19dbc75e-c2e2-444c-a770-ec69d8559fc7 Directory.ReadWrite.All
7e05723c-0bb0-42da-be95-ae9f08a6e53c Domain.ReadWrite.All
7c9db06a-ec2d-4e7b-a592-5a1e30992566 EduAdministration.Read.All
9bc431c3-b8bc-4a8d-a219-40f10f92eff6 EduAdministration.ReadWrite.All
4c37e1b6-35a1-43bf-926a-6f30f2cdf585 EduAssignments.Read.All
6e0a958b-b7fc-4348-b7c4-a6ab9fd3dd0e EduAssignments.ReadBasic.All
0d22204b-6cad-4dd0-8362-3e3f2ae699d9 EduAssignments.ReadWrite.All
f431cc63-a2de-48c4-8054-a34bc093af84 EduAssignments.ReadWriteBasic.All
e0ac9e1b-cb65-4fc5-87c5-1a8bc181f648 EduRoster.Read.All
0d412a8c-a06c-439f-b3ec-8abcf54d2f96 EduRoster.ReadBasic.All
d1808e82-ce13-47af-ae0d-f9b254e6d58a EduRoster.ReadWrite.All
38c3d6ee-69ee-422f-b954-e17819665354 ExternalItem.ReadWrite.All
01d4889c-1287-42c6-ac1f-5d1e02578ef6 Files.Read.All
75359482-378d-4052-8f01-80520e7db3cd Files.ReadWrite.All
5b567255-7703-4780-807c-7be8301ae99b Group.Read.All
62a82d76-70ea-41e2-9197-370581804d09 Group.ReadWrite.All
e321f0bb-e7f7-481e-bb28-e3b0b32d4bd0 IdentityProvider.Read.All
90db2b9a-d928-4d33-a4dd-8442ae3d41e4 IdentityProvider.ReadWrite.All
6e472fd1-ad78-48da-a0f0-97ab2c6b769e IdentityRiskEvent.Read.All
db06fb33-1953-4b7b-a2ac-f1e2c854f7ae IdentityRiskEvent.ReadWrite.All
dc5007c0-2d7d-4c42-879c-2dab87571379 IdentityRiskyUser.Read.All
656f6061-f9fe-4807-9708-6a2e0934df76 IdentityRiskyUser.ReadWrite.All
19da66cb-0fb0-4390-b071-ebc76a349482 InformationProtectionPolicy.Read.All
810c84a8-4a9e-49e6-bf7d-12d183f40d01 Mail.Read
e2a3a72e-5f79-4c64-b1b1-878b674786c9 Mail.ReadWrite
b633e1c5-b582-4048-a93e-9f11b44c7e96 Mail.Send
40f97065-369a-49f4-947c-6a255697ae91 MailboxSettings.Read
6931bccd-447a-43d1-b442-00a195474933 MailboxSettings.ReadWrite
658aa5d8-239f-45c4-aa12-864f4fc7e490 Member.Read.Hidden
3aeca27b-ee3a-4c2b-8ded-80376e2134a4 Notes.Read.All
0c458cef-11f3-48c2-a568-c66751c238c0 Notes.ReadWrite.All
c1684f21-1984-47fa-9d61-2dc8c296bb70 OnlineMeetings.Read.All
b8bb2037-6e08-44ac-a4ea-4674e010e2a4 OnlineMeetings.ReadWrite.All
0b57845e-aa49-4e6f-8109-ce654fffa618 OnPremisesPublishingProfiles.ReadWrite.All
b528084d-ad10-4598-8b93-929746b4d7d6 People.Read.All
246dd0d5-5bd0-4def-940b-0421030a5b68 Policy.Read.All
79a677f7-b79d-40d0-a36a-3e6f8688dd7a Policy.ReadWrite.TrustFramework
eedb7fdd-7539-4345-a38b-4839e4a84cbd ProgramControl.Read.All
60a901ed-09f7-4aa5-a16e-7dd3d6f9de36 ProgramControl.ReadWrite.All
230c1aed-a721-4c5d-9cb4-a90514e508ef Reports.Read.All
5e0edab9-c148-49d0-b423-ac253e121825 SecurityActions.Read.All
f2bf083f-0179-402a-bedb-b2784de8a49b SecurityActions.ReadWrite.All
bf394140-e372-4bf9-a898-299cfc7564e5 SecurityEvents.Read.All
d903a879-88e0-4c09-b0c9-82f6a1333f84 SecurityEvents.ReadWrite.All
a82116e5-55eb-4c41-a434-62fe8a61c773 Sites.FullControl.All
0c0bf378-bf22-4481-8f81-9e89a9b4960a Sites.Manage.All
332a536c-c7ef-4017-ab91-336970924f0d Sites.Read.All
9492366f-7969-46a4-8d15-ed1a20078fff Sites.ReadWrite.All
21792b6c-c986-4ffc-85de-df9da54b52fa ThreatIndicators.ReadWrite.OwnedBy
fff194f1-7dce-4428-8301-1badb5518201 TrustFrameworkKeySet.Read.All
4a771c9a-1cf2-4609-b88e-3d3e02d539cd TrustFrameworkKeySet.ReadWrite.All
405a51b5-8d8d-430b-9842-8be4b0e9f324 User.Export.All
09850681-111b-4a89-9bed-3f2cae46d706 User.Invite.All
df021288-bdef-4463-88db-98f22de89214 User.Read.All
741f803b-c850-494e-b5df-cde7c675a1ca User.ReadWrite.All
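If you would rather resolve these GUIDs from code, the same information can be read from the Microsoft Graph service principal via the Graph REST API. The following is a minimal C# sketch, not part of the original script, assuming you already have a valid app-only access token (provided here via a hypothetical GRAPH_TOKEN environment variable) that is allowed to read service principals:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class GraphAppRoleLookup
{
    static async Task Main()
    {
        // assumption: a valid app-only token with permission to read service principals
        var accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var url = "https://graph.microsoft.com/v1.0/servicePrincipals" +
                      "?$filter=" + Uri.EscapeDataString("displayName eq 'Microsoft Graph'") +
                      "&$select=appRoles";
            var json = await client.GetStringAsync(url);

            using (var doc = JsonDocument.Parse(json))
            {
                // print id/value pairs of all application permissions (app roles)
                var appRoles = doc.RootElement.GetProperty("value")[0].GetProperty("appRoles");
                foreach (var role in appRoles.EnumerateArray())
                {
                    Console.WriteLine($"{role.GetProperty("id").GetString()} {role.GetProperty("value").GetString()}");
                }
            }
        }
    }
}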

Jürgen Gutsch: Self-publishing a book

While writing on the Customizing ASP.NET Core series, a reader asked me to bundle all the posts into a book. I was thinking about it for a while, also because I had tried to write a book in the past together with a colleague at the YOO. But publishing a book with a publisher behind it turned out to be stressful. Since we both have families with small kids and jobs where we work on different projects, the book never had priority one. The publisher didn't accept that fact. Fortunately the publisher quit the contract because we weren't able to deliver a chapter per week.

This is the planned cover for the bundled series:

(I took that photo at the Tschentenalp above Adelboden in Switzerland. It is the view of the Lohner mountains.)

Leanpub

In the past I had already looked into different self-publishing platforms like Leanpub, which looks pretty easy and modern. But there is a trade-off:

  • Leanpub gives me 80% of the royalties, but I need to do the publishing and the marketing to sell the book myself
  • A publisher only gives me 20%, but does professional publishing and marketing and will sell a lot more books

In the end you cannot get rich by publishing a book like this. But it is still nice to get some money out of your effort. Amazon also provides a way to publish a book by yourself, which looks nice for self-publishers. I'm going to try this as well.

In the past Leanpub also provided print on demand, but this seems to have been stopped; I couldn't find any information about it anymore. Anyway, it is good enough to publish in various eBook formats.

So I decided to go with Leanpub to try the self-publishing way.

Writing

Even if most of the content is already written for the blog, I decided to go over all the parts to update everything to ASP.NET Core 3.0. I also decided to keep the ASP.NET Core 2.2 information, because it will stay valid for a while. So the chapters will cover 3.0 and 2.2.

Writing for Leanpub also works with GitHub and Markdown files, which also reduces the effort. I'm able to bind a GitHub repository to Leanpub and push Markdown files into it. I need to structure and order the different files in a book.txt file. Every markdown file is a chapter in that book.

Currently I have 13 chapters, a preface, an about-me chapter, a chapter describing the technical requirements for this book, and a small postface. All in all about 80 pages.

Rewriting

Sometimes it was hard to rewrite the demos and contents for ASP.NET Core 3.0. If you are writing about customizing that goes deeply into the APIs, you will definitely face some significant changes. So it wasn't that easy to get a custom DI container running in ASP.NET Core 3.0. Also, adding the Middlewares using a custom route changed from 2.2 to 3.0 Preview 3 and changed again from Preview 3 to Preview 6. Even though I already had some experience with 3.0, there were some changes between the different previews.

But luckily I also have some chapters without any differences between 2.2 and 3.0.

Updating the blog posts

I'm not yet sure whether I need to update the existing blog posts or not. My current idea is to create new posts and to mention the new posts in the old ones.

There is definitely enough stuff for a lot of new posts about ASP.NET Core. One example is the new framework reference, which was a pain in the ass during a live stream where I tried to update a Preview 3 solution to Preview 6.

Publishing

Currently I'm not sure when I'll be able to publish this book. At the moment it is being reviewed by two people doing the non-technical review and one guy doing the technical review.

I think I'm going to publish this book during the summer.

Contributing

If you want to help making this book better, feel free to go to the repositories, fork them and to create PRs.

It would also be helpful to propose a price you would pay for such a book. So far I got some proposals, but they seem pretty high from my perspective. It seems some folks are really willing to pay around 25 EUR. https://leanpub.com/customizing-aspnetcore/. What do you think?

Marco Scheel: Microsoft Graph, Postman, and how do I get an app-only token?

The Microsoft Graph is the "Swiss Army knife" for everyone in the Microsoft 365 world: one API for "all" services and, even better, always the same authentication model. In episode 18 of the Hairless in the Cloud podcast I already shared my impressions of the Microsoft Graph. The Graph Explorer on the website is a good way to get to know the Graph. For my part, however, I mostly work with the Graph without user interaction and therefore use application permissions in my applications. Most APIs (see Teams), however, initially ship without app permissions. The disappointment is big when you have done your research with the Graph Explorer and then realize that the calls fail with application permissions.

Jeremy Thake from the Microsoft Graph team started a few months ago to publish the samples (and more) from the Graph Explorer as a collection for Postman. This collection makes it easier to test your own calls and provides inspiration for new scenarios.

In the past I "stole" the token from my Azure Function and then put it directly into Postman as a bearer token:

(screenshot)

But there is a much more elegant way. The MS Graph Postman collection works with the environment and variables. A method that actually corresponds to the code in your own app (in my case an Azure Function) is also on board: Postman offers a native OAuth integration. You simply select OAuth 2.0 and can then enter the following information from your own app registration:

(screenshot)

Note: I already deleted my app again. It can no longer be used, so the secret shown here is no longer a secret either.

Via "Request Token" I can then fetch a token and use it for all further requests. To inspect the token (did the scope work?) you can simply go to jwt.io or to the Microsoft service jwt.ms.
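The token request that Postman performs here is a plain OAuth 2.0 client credentials call against the Azure AD v2.0 token endpoint. As a rough sketch of what the equivalent looks like in your own code (tenant id, client id and secret are placeholders, not values from the post):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class AppOnlyTokenDemo
{
    static async Task Main()
    {
        var tenantId = "<tenant-id>";       // placeholder
        var clientId = "<app-client-id>";   // placeholder
        var clientSecret = "<app-secret>";  // placeholder

        using (var client = new HttpClient())
        {
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret,
                ["scope"] = "https://graph.microsoft.com/.default",
                ["grant_type"] = "client_credentials"
            });

            var response = await client.PostAsync(
                $"https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token", body);
            response.EnsureSuccessStatusCode();

            // the JSON response contains the access_token to use as Bearer token for Graph calls
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}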

Note: Such token decoders are a great thing, but please keep in mind that if you do this with production tokens, you have to trust the service, because at that moment it holds your permissions! In my case the two websites could take the token and use it against my tenant! I am using my lab tenant here and I believe I know what I am doing :) So all good!

(screenshot)

With the token you can then, in my case for example, view the Azure AD Access Reviews.

(screenshot)

My debugging has become much simpler, because this way I can easily test my app permissions.

Code-Inside Blog: Jint: Invoke Javascript from .NET

If you ever dreamed of using Javascript in your .NET application, there is a simple way: use Jint.

Jint implements the ECMA 5.1 spec and can be used from any .NET implementation (Xamarin, .NET Framework, .NET Core). Just use the NuGet package - it has no dependencies on other stuff, it's a single .dll and you are done!

Why should I integrate Javascript in my application?

In our product “OneOffixx” we use Javascript as a scripting language with some “OneOffixx” specific objects.

The pro arguments for Javascript:

  • It’s a well known language (even with all the brainfuck in it)
  • You can sandbox it quite simple
  • With a library like Jint it is super simple to interate
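Regarding the sandboxing: Jint exposes engine options that constrain what a script may do. A minimal sketch, assuming the option names TimeoutInterval, MaxStatements and LimitRecursion from the Jint Options API:

public static void SandboxedEngine()
{
    // cap runtime, statement count and recursion depth so a script cannot block the host
    var engine = new Jint.Engine(options => options
        .TimeoutInterval(TimeSpan.FromSeconds(1))
        .MaxStatements(10000)
        .LimitRecursion(64));

    Console.WriteLine(engine.Execute("1 + 2").GetCompletionValue());
}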

I highly recommend checking out the GitHub page, but here are some simple examples which should show how to use it:

Example 1: Simple start

After the NuGet action you can use the following code to see one of the most basic implementations:

public static void SimpleStart()
{
    var engine = new Jint.Engine();
    Console.WriteLine(engine.Execute("1 + 2 + 3 + 4").GetCompletionValue());
}

We create a new "Engine", execute some simple Javascript and return the completion value - easy as that!

Example 2: Use C# function from Javascript

Let’s say we want to provide a scripting environment and the script can access some C# based functions. This “bridge” is created via the “Engine” object. We create a value, which points to our C# implementation.

public static void DefinedDotNetApi()
{
    var engine = new Jint.Engine();

    engine.SetValue("demoJSApi", new DemoJavascriptApi());

    var result = engine.Execute("demoJSApi.helloWorldFromDotNet('TestTest')").GetCompletionValue();

    Console.WriteLine(result);
}

public class DemoJavascriptApi
{
    public string helloWorldFromDotNet(string name)

    {
        return $"Hello {name} - this is executed in {typeof(Program).FullName}";
    }
}

Example 3: Use Javascript from C#

Of course we also can do the other way around:

public static void InvokeFunctionFromDotNet()
{
    var engine = new Engine();

    var fromValue = engine.Execute("function jsAdd(a, b) { return a + b; }").GetValue("jsAdd");

    Console.WriteLine(fromValue.Invoke(5, 5));

    Console.WriteLine(engine.Invoke("jsAdd", 3, 3));
}

Example 4: Use a common Javascript library

Jint allows you to inject any Javascript code (be aware: There is no DOM, so only “libraries” can be used).

In this example we use handlebars.js:

public static void Handlebars()
{
    var engine = new Jint.Engine();

    engine.Execute(File.ReadAllText("handlebars-v4.0.11.js"));

    engine.SetValue("context", new
    {
        cats = new[]
        {
            new {name = "Feivel"},
            new {name = "Lilly"}
        }
    });

    engine.SetValue("source", "  says meow!!!\n");

    engine.Execute("var template = Handlebars.compile(source);");

    var result = engine.Execute("template(context)").GetCompletionValue();

    Console.WriteLine(result);
}

Example 5: REPL

If you are crazy enough, you can build a simple REPL like this (not sure if this would be a good idea for production, but it works!)

public static void Repl()
{
    var engine = new Jint.Engine();

    while (true)
    {
        Console.Write("> ");
        var statement = Console.ReadLine();
        var result = engine.Execute(statement).GetCompletionValue();
        Console.WriteLine(result);
    }
}

Jint: Javascript integration done right!

As you can see: Jint is quite powerful, and if you feel the need to integrate Javascript in your application, check out Jint!

The sample code can be found here.

Hope this helps!

Norbert Eder: Scratch – children learn to program

Nothing works without computers anymore. That makes it all the more important to understand how computers, as well as the software running on them, work. To foster this important understanding, children should get in touch with programming early on.

There are all kinds of tools for this. One that I can highly recommend – from experience – is Scratch.

Scratch is a great tool for newcomers, but especially for children and teenagers. Programs consist of interactive components that can be assembled and brought to "life". Using different building blocks, the components can be moved; it is possible to react to events, play sounds, and much more.

The building-block system avoids syntax errors. Instead of frustration there are quick successes that encourage further "tinkering". Within a very short time, small games can be developed this way, for example.

Children playfully get to know some basic programming concepts and can then switch to more complex languages and develop further within a short time.

The requirements for Scratch are low: a computer and a browser are all you need. Development takes place entirely in the browser. Programs can be saved or loaded and are thus immediately available. You can also develop offline; for that, Scratch Desktop is available for Windows 10 and macOS 10.13+.

Scratch – learning to program

So that you don't have to start all alone, there is also a large community and plenty of help for getting started. Maybe there is a CoderDojo near you. Here in Austria there are the CoderDojo Linz and the CoderDojo Graz. There you can get support if, as a parent, you are not quite so well versed in these things.

Particularly helpful is the CoderDojo Linz list of exercises for Scratch and HTML.

With that in mind, I wish you happy coding and interesting, instructive hours with your kids.

The post Scratch – Kinder lernen programmieren first appeared on Norbert Eder.

Holger Schwichtenberg: Detecting .NET Framework 4.8

As with its predecessors, an existing .NET Framework 4.8 installation is detected via a registry entry.
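For reference, a minimal C# sketch of such a registry check (not taken from the article itself); the Release value of 528040 is the documented lower bound for .NET Framework 4.8, with the exact value differing slightly per Windows version:

using System;
using Microsoft.Win32;

class Net48Check
{
    static void Main()
    {
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (var baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64))
        using (var key = baseKey.OpenSubKey(subkey))
        {
            // the .NET Framework setup writes a "Release" DWORD; 528040 and above indicates 4.8 or later
            var release = (int?)key?.GetValue("Release") ?? 0;
            Console.WriteLine(release >= 528040
                ? $".NET Framework 4.8 (or later) detected, Release = {release}"
                : $".NET Framework 4.8 not detected, Release = {release}");
        }
    }
}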

Stefan Henneken: IEC 61131-3: Passing parameters via FB_init

Depending on the task, function blocks may require parameters that are only used once for initialization. One elegant way to pass them is the FB_init() method.

Before TwinCAT 3, initialization parameters were very often passed via input variables.

(* TwinCAT 2 *)
FUNCTION_BLOCK FB_SerialCommunication
VAR_INPUT
  nDatabits   : BYTE(7..8);
  eParity     : E_Parity;
  nStopbits   : BYTE(1..2);	
END_VAR

This had the disadvantage that the function blocks became unnecessarily large in the graphical representations. It was also not possible to prevent the parameters from being changed at runtime.

The FB_init() method is very helpful here. This method is executed implicitly once before the PLC task starts and can be used to perform initialization tasks.

The dialog for adding methods offers a ready-made template for this.

Pic01

The method contains two input variables that indicate under which conditions the method is executed. These variables must be neither deleted nor changed. However, FB_init() can be extended with additional input variables.

Example

As an example, a function block for communication via a serial port (FB_SerialCommunication) will be used. This block should also initialize the serial port with the necessary parameters. For this reason, three variables are added to FB_init():

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);		
END_VAR

The serial port is not initialized directly in FB_init(). Therefore, the parameters must be copied into variables that reside in the function block.

FUNCTION_BLOCK PUBLIC FB_SerialCommunication
VAR
  nInternalDatabits    : BYTE(7..8);
  eInternalParity      : E_Parity;
  nInternalStopbits    : BYTE(1..2);
END_VAR

The values from FB_init() are copied into these three variables during initialization.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR

THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

When an instance of FB_SerialCommunication is declared, these three additional parameters must be specified. The values are given in parentheses directly after the name of the function block:

fbSerialCommunication : FB_SerialCommunication(nDatabits := 8,
                                               eParity := E_Parity.None,
                                               nStopbits := 1);

Even before the PLC task starts, the FB_init() method is called implicitly, so the internal variables of the function block receive the desired values.

Pic02

With the start of the PLC task and the call of the FB_SerialCommunication instance, the serial port can now be initialized.

It is always necessary to specify all parameters. A declaration without a complete list of the parameters is not allowed and produces an error message when compiling:

Pic03

Arrays

If FB_init() is used with arrays, the complete parameters must be specified for each element (with square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication[
                 (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                 (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1)];

If all elements should receive the same initialization values, it is sufficient to specify the parameters once (without square brackets):

aSerialCommunication : ARRAY[1..2] OF FB_SerialCommunication(nDatabits := 8,
                                                             eParity := E_Parity.None,
                                                             nStopbits := 1);

Multi-dimensional arrays are also possible. Here, too, all initialization values must be specified:

aSerialCommunication : ARRAY[1..2, 5..6] OF FB_SerialCommunication[
                      (nDatabits := 8, eParity := E_Parity.None, nStopbits := 1),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 1),
                      (nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 2),
                      (nDatabits := 7, eParity := E_Parity.Even, nStopbits := 2)];

Inheritance

If inheritance is used, the FB_init() method is always inherited as well. FB_SerialCommunicationRS232 serves as an example here:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationRS232 EXTENDS FB_SerialCommunication

When an instance of FB_SerialCommunicationRS232 is declared, the FB_init() parameters that were inherited from FB_SerialCommunication must also be specified:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1);

It is also possible to override FB_init(). In this case, the same input variables must be present in the same order and with the same data types as in the base FB (FB_SerialCommunication). However, additional input variables can be added so that the derived function block (FB_SerialCommunicationRS232) receives extra parameters:

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
  nBaudrate    : UDINT;	
END_VAR

THIS^.nInternalBaudrate := nBaudrate;

When an instance of FB_SerialCommunicationRS232 is declared, all parameters must be specified, including those of FB_SerialCommunication:

fbSerialCommunicationRS232 : FB_SerialCommunicationRS232(nDatabits := 8,
                                                         eParity := E_Parity.Odd,
                                                         nStopbits := 1,
                                                         nBaudRate := 19200);

In the FB_init() method of FB_SerialCommunicationRS232, only the new parameter (nBaudrate) needs to be copied. Because FB_SerialCommunicationRS232 inherits from FB_SerialCommunication, FB_init() of FB_SerialCommunication is also executed implicitly before the PLC task starts. Both FB_init() methods are always called implicitly, the one of FB_SerialCommunication as well as the one of FB_SerialCommunicationRS232. With inheritance, FB_init() is always called from 'bottom' to 'top': first for FB_SerialCommunication, then for FB_SerialCommunicationRS232.

Forwarding parameters

As an example, take the function block FB_SerialCommunicationCluster, in which several instances of FB_SerialCommunication are declared:

FUNCTION_BLOCK PUBLIC FB_SerialCommunicationCluster
VAR
  fbSerialCommunication01 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  fbSerialCommunication02 : FB_SerialCommunication(nDatabits := nInternalDatabits, eParity := eInternalParity, nStopbits := nInternalStopbits);
  nInternalDatabits       : BYTE(7..8);
  eInternalParity         : E_Parity;
  nInternalStopbits       : BYTE(1..2);	
END_VAR

So that the parameters of the instances can be set from the outside, FB_SerialCommunicationCluster also gets the FB_init() method with the necessary input variables.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode  : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  nDatabits    : BYTE(7..8);
  eParity      : E_Parity;
  nStopbits    : BYTE(1..2);
END_VAR

THIS^.nInternalDatabits := nDatabits;
THIS^.eInternalParity := eParity;
THIS^.nInternalStopbits := nStopbits;

There are, however, a few things to watch out for. The call order of FB_init() is not clearly defined in this case. In my test environment, the calls happen from the 'inside' to the 'outside': first fbSerialCommunication01.FB_init() and fbSerialCommunication02.FB_init() are called, and only then fbSerialCommunicationCluster.FB_init(). It is not possible to pass the parameters through from the 'outside' to the 'inside'. The parameters are therefore not available in the two inner instances of FB_SerialCommunication.

The order of the calls changes as soon as FB_SerialCommunication and FB_SerialCommunicationCluster are derived from the same base FB. In this case FB_init() is called from the 'outside' to the 'inside'. This approach cannot always be used, for two reasons:

  1. If FB_SerialCommunication resides in a library, the inheritance cannot easily be changed.
  2. The call order of FB_init() with nesting is not further defined, so it cannot be ruled out that it may change in future versions.

One way to solve the problem is to call FB_SerialCommunication.FB_init() explicitly from FB_SerialCommunicationCluster.FB_init():

fbSerialCommunication01.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 7, eParity := E_Parity.Even, nStopbits := nStopbits);
fbSerialCommunication02.FB_init(bInitRetains := bInitRetains, bInCopyCode := bInCopyCode, nDatabits := 8, eParity := E_Parity.Even, nStopbits := nStopbits);

All parameters, including bInitRetains and bInCopyCode, are passed on directly.

Caution: Calling FB_init() always causes all local variables of the instance to be initialized. This must be kept in mind as soon as FB_init() is called explicitly from the PLC task instead of implicitly before the PLC task.

Access via properties

Because the parameters are passed via FB_init(), they can neither be read nor changed from the outside at runtime. The only exception would be an explicit call of FB_init() from the PLC task. However, this should generally be avoided, because it re-initializes all local variables of the instance.

If access should nevertheless be possible, corresponding properties can be created for the parameters:

Pic04

The setters and getters of the respective properties access the corresponding local variables in the function block (nInternalDatabits, eInternalParity and nInternalStopbits). This way the parameters can be set both in the declaration and at runtime.

By removing the setters, changing the parameters at runtime can be prevented. If the setters are present, however, FB_init() can also be omitted, because properties can likewise be initialized directly in the declaration of an instance.

fbSerialCommunication : FB_SerialCommunication := (Databits := 8,
                                                   Parity := E_Parity.Odd,
                                                   Stopbits := 1);

The FB_init() parameters and the properties can also be specified at the same time:

fbSerialCommunication  : FB_SerialCommunication(nDatabits := 8, eParity := E_Parity.Odd, nStopbits := 1) :=
                                               (Databits := 8, Parity := E_Parity.Odd, Stopbits := 1);

In this case the initialization values of the properties take precedence. Passing values via both properties and FB_init() has the disadvantage that the declaration of the function block becomes unnecessarily long, and implementing both doesn't seem necessary to me. If all parameters are also writable via properties, the initialization via FB_init() can be omitted. As a conclusion: if parameters must not be changeable at runtime, the use of FB_init() should be considered; if write access should be possible, properties are the way to go.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Code-Inside Blog: Build Windows Server 2016 Docker Images under Windows Server 2019

Since the rise of Docker on Windows we have also invested some time into it and packaged our OneOffixx server-side stack in a Docker image.

Windows Server 2016 situation:

We rely on Windows Docker images, because we still have some "legacy" parts that require the full .NET Framework; that's why we are using this base image:

FROM microsoft/aspnet:4.7.2-windowsservercore-ltsc2016

As you can already guess: This is based on a Windows Server 2016 and besides the “legacy” parts of our application, we need to support Windows Server 2016, because Windows Server 2019 is currently not available on our customer systems.

In our build pipeline we could easily invoke Docker and build our images based on the LTSC 2016 base image and everything was “fine”.

Problem: Move to Windows Server 2019

Some weeks ago my colleague updated our Azure DevOps build servers from Windows Server 2016 to Windows Server 2019, and our builds began to fail.

Solution: Hyper-V isolation!

After some internet research this site popped up: Windows container version compatibility

Microsoft made some great enhancements to Docker in Windows Server 2019, but if you need to “support” older versions, you need to take care of it, which means:

If you have a Windows Server 2019, but want to use Windows Server 2016 base images, you need to activate Hyper-V isolation.

Example from our own cake build script:

var exitCode = StartProcess("Docker", new ProcessSettings { Arguments = "build -t " + dockerImageName + " . --isolation=hyperv", WorkingDirectory = packageDir});

Hope this helps!

Holger Schwichtenberg: Many breaking changes in Entity Framework Core 3.0

There is now a fourth preview version of Entity Framework Core 3.0, but it does not yet contain any of the new features mentioned below. Instead, Microsoft has built in a considerable number of breaking changes. The question is: why?

Johnny Graber: Book review of "Java by Comparison"

"Java by Comparison" by Simon Harrer, Jörg Lenhard and Linus Dietz was published in 2018 by The Pragmatic Programmers. This book takes on a big challenge: how can expert knowledge acquired over years be made accessible to programming beginners in a simple form? To do this, the authors use 70 examples in which a working first draft of a maintainable and well-thought-out … Continue reading "Buch-Rezension zu „Java by Comparison“"

Holger Schwichtenberg: How to get Entity Framework Core to use the class names instead of the DbSet names as table names

Microsoft's object-relational mapper Entity Framework Core has an inconvenient default: the database tables are not named after the entity classes, but after the property names used in the context class when declaring the DbSet.
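One way to get the class names as table names is to map each entity explicitly in OnModelCreating. This is a sketch with a hypothetical Person entity and set name; the article itself may use a different (e.g. convention-based) approach:

using Microsoft.EntityFrameworkCore;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Person> AllThePersons { get; set; }   // hypothetical set name

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // without this, EF Core would create a table named "AllThePersons"
        modelBuilder.Entity<Person>().ToTable(nameof(Person));
    }
}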

Code-Inside Blog: Update OnPrem TFS 2018 to AzureDevOps Server 2019

We recently updated our OnPrem TFS 2018 installation to the newest release: Azure DevOps Server

The product has the same core features as TFS 2018, but with a new UI and other improvements. For a full list you should read the Release Notes.

*Be aware: This is the OnPrem solution, even with the slightly misleading name "Azure DevOps Server". If you are looking for the cloud solution, you should read the Migration Guide.

“Updating” a TFS 2018 installation

Our setup is quite simple: One server for the “Application Tier” and another SQL database server for the “Data Tier”. The “Data Tier” was already running with SQL Server 2016 (or above), so we only needed to touch the “Application Tier”.

Application Tier Update

In our TFS 2018 world the "Application Tier" was running on a Windows Server 2016, but we decided to create a new (clean) server with Windows Server 2019 and do a "clean" Azure DevOps Server install, pointing to the existing "Data Tier".

In theory it is quite possible to update the actual TFS 2018 installation, but because “new is always better”, we also switched the underlying OS.

Update process

The actual update was really easy. We did a "test run" with a copy of the database and everything worked as expected, so we reinstalled the Azure DevOps Server and ran the update on the production data.

Steps:

(The original post documents the individual steps of the update wizard with a series of screenshots at this point.)

Summary

If you are running a TFS installation, don’t be afraid to do an update. The update itself was done in 10-15 minutes on our 30GB-ish database.

Just download the setup from the Azure DevOps Server site (“Free trial…”) and you should be ready to go!

Hope this helps!

Jürgen Gutsch: Customizing ASP.NET Core Part 12: Hosting

In this 12th part of this series, I'm going to write about how to customize hosting in ASP.NET Core. We will look into the hosting options, different kinds of hosting and take a quick look at hosting on IIS. And while writing this post, it again seems to become a long one.

This will change in ASP.NET Core 3.0. I decided to do this post about ASP.NET Core 2.2 anyway, because it will still take some time until ASP.NET Core 3.0 is released.

This post is just an overview about the different kinds of application hosting. It is surely possible to go a lot more into the details for each topic, but this would increase the size of this post a lot and I need some more topics for future blog posts ;-)

This series topics

Quick setup

For this series we just need to setup a small empty web application.

dotnet new web -n ExploreHosting -o ExploreHosting

That's it. Open it with Visual Studio Code:

cd ExploreHosting
code .

And voila, we get a simple project open in VS Code:

WebHostBuilder

Like in the last post, we will focus on the Program.cs. The WebHostBuilder is our friend. This is where we configure and create the web host. The next snippet is the default configuration of every new ASP.NET Core web we create using File => New => Project in Visual Studio or dotnet new with the .NET CLI:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
        	.UseStartup<Startup>();
}

As we already know from the previous posts, the default builder has all the needed stuff pre-configured. Everything you need to run an application successfully on Azure or on an on-premise IIS is configured for you.

But you are able to override almost all of these default configurations, including the hosting configuration.

Kestrel

After the WebHostBuilder is created we can use various functions to configure the builder. Here we already see one of them, which specifies the Startup class that should be used. In the last post we saw the UseKestrel method to configure the Kestrel options:

.UseKestrel((host, options) =>
{
    // ...
})

Reminder: Kestrel is one possibility to host your application. Kestrel is a web server built in .NET and based on .NET socket implementations. Previously it was built on top of libuv, which is the same library that NodeJS uses. Microsoft removed the dependency on libuv and created its own web server implementation based on .NET sockets.

Docs: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel

The first argument is a WebHostBuilderContext to access already configured hosting settings or the configuration itself. The second argument is an object to configure Kestrel. This snippet shows what we did in the last post to configure the socket endpoints where the host needs to listen:

.UseKestrel((host, options) =>
{
    var filename = host.Configuration.GetValue("AppSettings:certfilename", "");
    var password = host.Configuration.GetValue("AppSettings:certpassword", "");
    
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps(filename, password);
    });
})

This will override the default configuration where you are able to pass in URLs, e.g. using the applicationUrl property of the launchSettings.json or an environment variable.
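As a side note: if you don't configure explicit Listen() endpoints, the URLs can also be set on the builder itself, e.g. with UseUrls(); explicitly configured Kestrel endpoints take precedence over these URLs. A minimal sketch:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        // same effect as setting ASPNETCORE_URLS or the applicationUrl in launchSettings.json
        .UseUrls("http://localhost:5000", "https://localhost:5001")
        .UseStartup<Startup>();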

HTTP.sys

Do you know that there is another hosting option? A different web server implementation? It is HTTP.sys. This is a pretty mature library deep within Windows that can be used to host your ASP.NET Core application.

.UseHttpSys(options =>
{
    // ...
})

HTTP.sys is different from Kestrel. It cannot be used with IIS because it is not compatible with the ASP.NET Core Module for IIS.

The main reason to use HTTP.sys instead of Kestrel is Windows Authentication, which is not available with Kestrel alone. Another reason is if you need to expose the application to the internet without IIS.

IIS has also been running on top of HTTP.sys for years, which means UseHttpSys() and IIS are using the same web server implementation. To learn more about HTTP.sys, please read the docs.
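As a rough sketch of what such a configuration could look like (the option values here are just examples, not recommendations, and the code assumes the Microsoft.AspNetCore.Server.HttpSys namespace):

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseHttpSys(options =>
        {
            // turn on Windows Authentication via the built-in authentication manager
            options.Authentication.Schemes =
                AuthenticationSchemes.Negotiate | AuthenticationSchemes.NTLM;
            options.Authentication.AllowAnonymous = true;

            // a few of the other knobs HTTP.sys exposes
            options.MaxConnections = 100;
            options.UrlPrefixes.Add("http://localhost:5005");
        })
        .UseStartup<Startup>();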

Hosting on IIS

An ASP.NET Core application shouldn't be exposed directly to the internet, even though both Kestrel and HTTP.sys support it. It would be best to have something like a reverse proxy in between, or at least a service that watches the hosting process. For ASP.NET Core, IIS isn't only a reverse proxy: it also takes care of the hosting process in case it breaks because of an error or whatever, and restarts the process in that case. On Linux, Nginx may be used as a reverse proxy that also takes care of the hosting process.

To host an ASP.NET Core web application on IIS or on Azure, you need to publish it first. Publishing doesn't only compile the project, it also prepares it to be hosted on IIS, on Azure or behind a web server on Linux like Nginx.

dotnet publish -o ..\published -r win-x64

This produces an output that can be mapped in the IIS. It also creates a web.config to add settings for the IIS or Azure. It contains the compiled web application as a DLL.

If you publish a self-contained application, it also contains the runtime itself. A self-contained application brings its own .NET Core runtime, but the size of the delivery increases a lot.

And on the IIS? Just create a new web and map it to the folder where you placed the published output:

It gets a little more complicated if you need to change the security settings, if you have some database connections and so on. That would be a topic for a separate blog post. But in this small sample it simply works:

This is the output of the small Middleware in the startup.cs of the demo project:

app.Run(async (context) =>
{
    await context.Response.WriteAsync("Hello World!");
});

Nginx

Unfortunately I cannot write about Nginx, because I currently don't have a running Linux to play around with. This is one of the many future projects I have. I just got ASP.NET Core running on Linux using the Kestrel web server.

Conclusion

ASP.NET Core and the .NET CLI already contain all the tools to get it running on various platforms and to set it up to get it ready for Azure and the IIS, as well as Nginx. This is super easy and well described in the docs.

BTW: What do you think about the new docs experience compared to the old MSDN documentation?

I'll definitely go deeper into some of the topics and in ASP.NET Core there are some pretty cool hosting features that make it a lot more flexible to host your application:

Currently we have the WebHostBuilder that creates the hosting environment of the applications. In 3.0 we get the HostBuilder that is able to create a hosting environment that is completely independent from any web context. I'm going to write about the HostBuilder in one of the next blog posts.

Holger Schwichtenberg: Magdeburger Developer Days from May 20 to 22, 2019

The developer community conference "Magdeburger Developer Days" is entering its fourth round.

Jürgen Gutsch: Sharpcms.Core - Migrating an old ASP.NET CMS to ASP.​NET Core on Twitch

On my Twitch stream I planned to show how to migrate a legacy ASP.NET application to ASP.NET Core, to start a completely new ASP.NET Core project, and to share some news about the .NET developer community. When I did the first stream and introduced the plans to the audience, it somehow turned into the direction of migrating the legacy application. So I chose the old Sharpcms project to show the migration, which is maybe not the best choice because this CMS doesn't use the common ASP.NET patterns.

About the sharpcms

Initially the Sharpcms was built by a Danish developer. Back when he stopped maintaining it, my friend Thomas Huber and I asked him if we could take over the project and continue maintaining it. He said yes, and since then we have been the main contributors and coordinators of this project.

This is where my Twitter handle was born. Initially I planned to use this Twitter account to promote the sharpcms, but I used it off-topic: I promoted blog posts and community events with this account and also had some interesting discussions on Twitter. I used it so much, it got linked everywhere and it didn't make sense to change it anymore. Anyway, the priorities changed. The sharpcms wasn't the main hobby project anymore, but I still used this Twitter handle. It still kind of makes sense to me, because I work with CSharp and I'm a kind of CMS expert. (I developed on two different ones for years and used a lot more.)

We had huge plans with this project, but as always plans and priorities change with new family members and new jobs. We haven't done anything on that CMS for years. Actually I'm not sure whether this CMS is still used or not.

Anyway. This is one of the best CMS systems from my perspective: easy to set up, lightweight and fast to run, and easy to use for users without a technical background. Creating templates for this CMS needs a good knowledge of XML and XSLT, because XML is the base of this CMS and XSLT is used for the templates. This was super fast with the .NET Framework. Caching wasn't really needed for the sharpcms.

Juergen.IO.Stream

In the first show on Twitch I introduced the two plans: to migrate the sharpcms, or to start a plain new ASP.NET Core project. It turned out that the audience wanted to see the migration project. I introduced the sharpcms, showed the original sources and started to create .NET Standard libraries to show the difficulties.

I wasn't as pessimistic as the audience, because I still knew that CMS. There were not too many dependencies on the classic ASP.NET and System.Web stuff. And as expected it wasn't that hard.

The rendering of the output in the sharpcms is completely based on XML and XSLT. The sharpcms creates an XML structure that gets interpreted and rendered using XSLT templates.

XSLT is an XML-based programming language that navigates through XML data and creates any kind of output. It actually is a programming language: you are able to create decision statements, loops, functions and variables. It is limited, but like Razor, ASP or PHP you mix the programming language with the output you want to create, which makes it easy and intuitive to use.

This means there is no rendering logic in the C# code. All the C# code does is work on the request and create the XML containing the data to show. At the end it transforms the XML using the XSLT templates.
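For illustration, this is roughly what such an XML-to-HTML transformation step looks like with the built-in XslCompiledTransform; the file names are hypothetical and this is not the actual Sharpcms code:

using System.Xml.Xsl;

public class XsltRenderer
{
    // loads an XSLT template and transforms the page XML into HTML
    public void Render()
    {
        var transform = new XslCompiledTransform();
        transform.Load("template.xslt");                     // hypothetical template file
        transform.Transform("pagedata.xml", "output.html");  // hypothetical input/output files
    }
}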

The main work I needed to do to get the Sharpcms running was to wrap the ASP.NET Core request context into a request context that looks similar to the System.Web version used inside the Sharpcms, because it heavily uses the ASP.NET WebForms page object and its properties.

The migration strategy was to get it running, even if it is kind of hacky, and to clean it up later on. Now we are in this state: the old Sharpcms sources are working on ASP.NET Core using .NET Standard libraries.

The Sharpcms.Core running on Azure: https://sharpcms.azurewebsites.net

Performance

Albert Weinert (a community guy, former MVP and a Twitch streamer as well) told me during the first stream, that XSLT isn't that fast in .NET Core. Unfortunately he was right. The transformation speed and the speed of reading the XML data isn't that fast.

We'll need to have a look into the performance and find a way to speed it up. Maybe we'll create an alternative view engine to replace the XML and XSLT based one at some point. It would also be possible to have multiple view engines: Razor, Handlebars or Liquid would be options. All of these already have .NET implementations that could be used here.

Next steps

Even though the CMS is now running on ASP.NET Core, there's still a lot to do. Here are the next issues I need to work on:

  • Build on Azure DevOps #8

  • Performance:

    • Get rid of the physical XML data and move the data to a database #4
    • Speed up the XSL transformation #3
    • Find another way to render the UI, maybe using razor, handlebars or liquid #2
    • Add caching #1
  • Cleanup the codes #9

  • User password encryption #5

  • Provide NuGet packages to easily use the sharpcms #6

    • Provide a package for the frontend as well #7
  • Map the Middleware as a routed one, like it should work in ASP.NET Core 3.0

Join me

If you would like to join me in the stream to work together on the Sharpcms.Core, feel free to tell me. I would be super happy to do a pair programming session to work on a specific problem. It would be great to have experts on these topics in the stream:

  • Razor or Handlebars to create an alternative view engine
  • Security and Encryption to make this CMS more secure
  • DevOps to create a build and release pipeline

Summary

Migrating the old Sharpcms to ASP.NET Core was fun, but it's not yet done. There is a lot more to do. I'll continue working on it on my stream, but will also do some other stuff in the streams.

If you would like to work on the Sharpcms, to help me solve some issues or to start creating modern documentation, feel free. It would help a lot.

David Tielke: [Webcast] Software quality part 2 - process quality

It's webcast time again. After looking at the basics of software quality in the first part, the second part is dedicated to process quality. So what should a good software development process look like, and what should it not look like? What do you have to pay attention to, and what should you do or rather keep your hands off? All these questions occupy us in the second part on software quality. Have fun with it!

Christina Hirth : Continuous Delivery Is a Journey – Part 3

In the first part I described why I think that continuous delivery is important for an adequate developer experience, and in the second part I drew a rough picture of how we implemented it in a product development effort with five teams. Now it is time to discuss the big impact – and the biggest benefits – regarding the development of the product itself.

Why do more and more companies, technical and non-technical people, want to change towards an agile organisation? Maybe because the decision makers have understood that waterfall is rarely purposeful? There are a lot of motives – aside from the rather dumb one "because everybody else does this" – and I think there are two intertwined reasons for this: the speed at which the digital world changes and the ever increasing complexity of the businesses we try to automate.

Companies and people have finally started to accept that they don't know what their customers need. They have started to feel that the customer – also the market – has become more and more demanding regarding the quality of the solutions they get. This means that until Skynet is born (sorry, I couldn't resist 😁) we software developers, product owners, UX designers, etc. have to decide which solution would be the best to solve the problems in that specific business, and we have to decide fast.

We have to deliver fast, get feedback fast, learn and adapt the consequences even faster. We have to do all this without down times, without breaking the existing features and – for most of us very important: without getting a heart attack every time we deploy to production.

IMHO these are the most important reasons why every product development team should invest in CI/CD.

The last missing piece of the jigsaw which allows us to deliver the features fast (respectively continuously), without disturbing anybody and without losing control over how and when features are released, is called a feature toggle.

A feature toggle[1] (also feature switch, feature flag, feature flipper, conditional feature, etc.) is a technique in software development that attempts to provide an alternative to maintaining multiple source-code branches (known as feature branches), such that a feature can be tested even before it is completed and ready for release. Feature toggle is used to hide, enable or disable the feature during run time. For example, during the development process, a developer can enable the feature for testing and disable it for other users.[2]

Wikipedia

The concept is really simple: a feature should be hidden until somebody or something decides that it is allowed to be used.

function useNewFeature(featureId) {
  const e = document.getElementById(featureId);
  const feat = config.getFeature(featureId);
  if(!feat.isEnabled)
    e.style.display = 'none';
  else
    e.style.display = 'block';
}

As you see, implementing feature toggles is really that simple. To adopt this concept will need some effort though:

  • Strive for only one toggle (one if) per feature. At the beginning it will be hard or even impossible to achieve this, but it is very important to define this as a mid-term goal. Having only one toggle per feature means the code is highly decoupled and very well structured.
  • Place this (main) toggle at the entry point (a button, a new form, a new API endpoint), the first interaction point with the user (person or machine); in the disabled state it should hide this entry point.
  • The enabled state of the toggle should lead to new services (in the microservice world), new arguments or new functions, all of them implementing the behavior for feature.enabled == true (see the C# sketch after this list). This will lead to code duplication: yes, this is totally ok. I look at it as a very careful refactoring without changing the initial code. Implementing a new feature should not break or eliminate existing features. The tests too (all kinds of them) should be organized similarly: in different files, duplicated versions, implemented for each state.
the different states of the toggle lead to clearly separated paths
  • Through the toggle you gain real freedom to make mistakes or just build the wrong feature. At the same time you can always enable the feature and show it to the product owner or the stakeholders. This means the feedback loop is reduced to a minimum.
  • This freedom has a price of course: after the feature is implemented, the feedback is collected and the decision to enable the feature is made, the source code must be cleaned up: all code for feature.enabled == false must be removed. This is why it is so important to create the different paths so that the risk of introducing a bug is virtually zero. We want to reduce workload, not increase it.
  • Toggles don't have to be temporary; business toggles (e.g. some premium features or a "maintenance mode") can stay forever. It is important to define beforehand what kind of toggle will be needed, because the business toggles will always be part of your source code. The default value for this kind of toggle should be false.
  • The default value for temporary toggles should be true; they should be deactivated in production and activated during development.
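To make the list above a bit more concrete, here is a minimal C# sketch of one toggle at the entry point routing to either the old or the new implementation; the IFeatureToggles interface and the service names are hypothetical and stand in for whatever backing store you choose:

// a hypothetical toggle store (could be a ConfigMap, a database table or a JSON file)
public interface IFeatureToggles
{
    bool IsEnabled(string featureId);
}

public class LegacyExportService { public string Export() => "legacy export"; }
public class NewExportService    { public string Export() => "new export"; }

public class ExportEndpoint
{
    private readonly IFeatureToggles _toggles;
    private readonly LegacyExportService _legacy = new LegacyExportService();
    private readonly NewExportService _new = new NewExportService();

    public ExportEndpoint(IFeatureToggles toggles) => _toggles = toggles;

    // the single toggle sits at the entry point and routes to one of two separate paths
    public string Handle() =>
        _toggles.IsEnabled("new-export")
            ? _new.Export()      // feature.enabled == true: the new implementation
            : _legacy.Export();  // feature.enabled == false: existing behaviour stays untouched
}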

One piece of advice regarding the tooling: start small; a config map in Kubernetes, a database table or a JSON file somewhere will suffice. Later on new requirements will appear, like notifying the client UI when a toggle changes or allowing the product owner to decide when a feature will be enabled. That will be the moment to think about the next steps, but for now it is more important to adopt this workflow, adopt this mindset of discipline to keep the source code clean, learn the techniques for organizing the code base and ENJOY HAVING CONTROL over the impact of deployments, feature decisions, stress!

That's it, I have shared all of my thoughts regarding this subject: your journey of delivering continuously can start (or continue 😉) now.

p.s. It is time for the one sentence about feature branches:
Feature toggles will never work with feature branches. Period. This means you have to decide: move to trunk-based development or forget continuous delivery.

p.p.s. For most languages there are feature toggle libraries, frameworks, even platforms; it is not necessary to write a new one. There are libraries for different levels of complexity in how the state can be calculated (like account state, persons, roles, time settings), just pick one.

Update:

As pointed out by Gergely on Twitter, there is a very good article on Martin Fowler's blog describing the different feature toggles and the power of this technique extensively: Feature Toggles (aka Feature Flags)

David Tielke: [Webcast] Software quality part 1 - introduction

It's webcast time again - after quite a few requests in recent days, I set up my studio equipment again today and recorded the .NET Day Franken talk as a webcast. Since the allotted 70 minutes were already very tight at the conference, I split the whole thing into several episodes, which will be released over the next days and weeks. Have fun with it!

David Tielke: .NET Day Franken 2019 - content of my session "Software Quality"

Earlier than in previous years, the conference season started for me in April this time, and right away with a new conference, the .NET Day Franken 2019. The community conference with almost 200 attendees was organized for the tenth time in Nuremberg by the community and offered, besides a great program and a super organization, above all a sensational location. I was allowed to contribute a 70-minute talk on "software quality", which focused not only on the basics but above all on the various problems and their supposed solutions. At this point I would like to thank all attendees and of course the organizers once again for a first-class event. It was a lot of fun, and I hope we'll see each other again next year. Here I am now providing the slides of my talk.


Jürgen Gutsch: Implement Middlewares using Endpoint Routing in ASP.​NET Core 3.0

If you have a Middleware that needs to work on a specific path, you should implement it by mapping it to a route in ASP.NET Core 3.0, instead of just checking the path names. This post doesn't handle regular Middlewares, which need to work on all requests, or on all requests inside a Map or MapWhen branch.

At the Global MVP Summit 2019 in Redmond I attended the hackathon where I worked on my GraphQL Middlewares for ASP.NET Core. I asked Glen Condron for a review of the API and the way the Middleware gets configured. He told me that we did it all right. We followed the proposed way to provide and configure an ASP.NET Core Middleware. But he also told me that there is a new way in ASP.NET Core 3.0 to use this kind of Middlewares.

Glen asked James Newton King, who works on the new Endpoint Routing, to show me how this needs to be done in ASP.NET Core 3.0. James pointed me to the ASP.NET Core Health Checks and explained the new way to go.

BTW: That's kinda closing the loop: Four summits ago Damien Bowden and I where working on the initial drafts of the ASP.NET Core Health Checks together with Glen Condron. Awesome that this is now in production ;-)

The new ASP.NET Core 3.0 implementation of the GraphQL Middlewares is in the aspnetcore30 branch of the repository: https://github.com/JuergenGutsch/graphql-aspnetcore

About Endpoint Routing

Fellow MVP Steve Gordon had an early look into Endpoint Routing. His great post may help you to understand Endpoint Routing.

How it worked before:

Until now you used MapWhen() to map the Middleware to a specific condition defined in a predicate:

Func<HttpContext, bool> predicate = context =>
{
    return context.Request.Path.StartsWithSegments(path, out var remaining) &&
                            string.IsNullOrEmpty(remaining);
};

return builder.MapWhen(predicate, b => b.UseMiddleware<GraphQlMiddleware>(schemaProvider, options));

(ApplicationBuilderExtensions.cs)

In this case the path is checked, but it is pretty common to not only map based on paths. This approach allows you to map on any other kind of criteria based on the HttpContext.
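For example, a predicate could just as well branch on a request header instead of the path. This is only a hedged sketch to illustrate the idea; the header name is made up, and GraphQlMiddleware, schemaProvider and options are the placeholders already used in this post:

// Hypothetical example: branch the pipeline for requests that carry a custom header.
Func<HttpContext, bool> headerPredicate = context =>
    context.Request.Headers.ContainsKey("X-GraphQL-Client");

return builder.MapWhen(headerPredicate, b => b.UseMiddleware<GraphQlMiddleware>(schemaProvider, options));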

Also the much simpler Map() was a way to go:

builder.Map(path, branch => branch.UseMiddleware<GraphQlMiddleware>(schemaProvider, options));

How this should be done now

In ASP.NET Core 3.0 these kinds of mappings, where you listen on a specific endpoint, should be done using the IEndpointRouteBuilder. If you create a new ASP.NET Core 3.0 web application, MVC is now added to the Startup.cs a little differently than before:

app.UseRouting(routes =>
{
    routes.MapControllerRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
    routes.MapRazorPages();
});

The method MapControllerRoute() adds the controller-based MVC and Web API. The new ASP.NET Core Health Checks, which also provide their own endpoint, are added like this as well. This means we now have Map() methods as extension methods on the IEndpointRouteBuilder instead of Use() methods on the IApplicationBuilder. It is still possible to use the Use() methods.
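Just to illustrate that, here is a small sketch of how the Health Checks endpoint could be mapped next to MVC. This assumes the preview API used in this post (UseRouting() taking the route builder) and that services.AddHealthChecks() was called in ConfigureServices():

app.UseRouting(routes =>
{
    // Health Checks register themselves on the IEndpointRouteBuilder, too
    routes.MapHealthChecks("/health");

    routes.MapControllerRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});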

In case of the GraphQL Middleware it looks like this:

var pipeline = routes.CreateApplicationBuilder()
    .UseMiddleware<GraphQlMiddleware>(schemaProvider, options)
    .Build();

return routes.Map(pattern, pipeline)
    .WithDisplayName(_defaultDisplayName);

(EndpointRouteBuilderExtensions.cs)

Based on the current IEndpointRouteBuilder a new IApplicationBuilder is created, where we Use the GraphQL Middleware as before. We pass the ISchemaProvider and the GraphQlMiddlewareOptions as arguments to the Middleware. The result is a RequestDelegate in the pipeline variable.

The configured endpoint pattern and the pipeline then get mapped to the IEndpointRouteBuilder. The small extension method WithDisplayName() sets the configured display name on the endpoint.

I needed to copy this extension method from the ASP.NET Core repository to my code base, because the current development build of ASP.NET Core didn't contain this method two weeks ago. I need to check the latest version ASAP.

In ASP.NET Core 3.0 the GraphQl and the GraphiQl Middlewares can now be added like this:

app.UseRouting(routes =>
{
    if (env.IsDevelopment())
    {
        routes.MapGraphiQl("/graphiql");
    }
    
    routes.MapGraphQl("/graphql");
    
    routes.MapControllerRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
    routes.MapRazorPages();
});

Conclusion

The new ASP.NET Core 3.0 implementation of the GraphQL Middlewares is on the aspnetcore30 branch of the repository: https://github.com/JuergenGutsch/graphql-aspnetcore

This approach feels a bit different. In my opinion it messes up the Startup.cs a little bit. Previously we added one Middleware after another, line by line, to the IApplicationBuilder. With this approach some Middlewares are still registered on the IApplicationBuilder and some others on the IEndpointRouteBuilder, inside a lambda expression on a new IApplicationBuilder.

The other thing is that the order isn't really clear anymore. When will the Middlewares inside UseRouting() be executed, and in which direction? I will dig deeper into this in the next months.

Jürgen Gutsch: #MVPSummit2019 - Impressions...

Also this year I was invited to attend the yearly Global MVP Summit in Redmond and Bellevue. It ran from Sunday until Thursday last week. As last year, I added two days before and after the summit to get some time to explore Seattle. This is a small summary of the 8 days in the Seattle area.

Just two weeks before the summit started there was the so-called #snowmageddon2019 in the northwest of the US: cold and a lot of snow, from the US perspective. But I was sure that when I arrived in Seattle it would be sunny and warm. And it was. I never had a rainy day in Seattle. In Bellevue and Redmond I had, but never in Seattle. Also last year I stayed two nights before and two nights after the summit in downtown Seattle and it was sunny then, but rainy while staying in Bellevue. Anyway, Seattle is always sunny, and people are happy and friendly because of that.

Pre-Summit days in Seattle

As well as last year, I stayed the first two nights in the Green Tortoise Hostel in downtown Seattle near Pike Place. This is a cheap hostel where you need to share the room with six to eight other people, but it is impressive anyway. The weekend when I arrived it was ComiCon in Seattle and Saint Patrick's Day, so the hostel was full of ComiCon attendees, people wearing green things, backpackers, and some MVPs.

I again met the South Korean Azure MVP in this hostel, like last year, who gave me the sticker of his Korean Azure user group. I also met him the two nights after in the same hostel, as well as during the summit.

Even if the hostel is cheap compared with the hotels in Seattle, the location is absolutely awesome. If you leave the hostel, you will stumble into the only Starbucks restaurant that serves the Pike Place Special Reserve outside the Pike Place. Leaving the restaurant, you will stumble into the public market of Pike Place, where you can grab some pastries for breakfast. Then you leave Pike Place to have breakfast in the sun in Victor Steinbrueck Park.

I arrived on Friday and took the Light Rail to downtown Seattle, checked in to the Green Tortoise, went for a walk through Pike Place and had the first awesome burger at Lowell's Restaurant while enjoying the nice view of the Puget Sound. Saturday started slowly with the breakfast described in the last paragraph. Later on I joined some MVPs for the guided Market Experience tour, where I learned a lot about the market.

Did you know that the first Starbucks isn't really the first one, but the oldest one? Did you know that you need to have founded your business at Pike Place to get a spot to sell your stuff? Everything you want to sell on the market needs to be produced by yourself (except meat, sausage and fish, I think).

Later I joined some MVP friends for lunch and for a walk to the Space Needle. We had lunch at the Pike Place Brewery before, where I found sausages, sauerkraut and mashed potatoes on the menu: beer-braised sausages with fine apple sauerkraut. Seattle meets Bavaria. I needed to try it and it was really yummy.

In the evening we had free beer at the hostel. With free beer and my laptop I started to merge almost all of the pull requests to the ASP.NET Core GraphQL Middlewares, answered almost all open issues and updated the dependencies of the project.

The Summit days in Bellevue and Redmond

The Sunday also started slowly, before I took the express bus to Redmond, where the Summit hotels are located. I checked in to the Marriott Bellevue, where I shared the room with the famous Alex Witkowski. This room was awesome, with a great view of the Space Needle and a super modern, stylish sliding door to the bathroom that cannot be locked and never really closed. It felt strange while sitting on the toilet, but that must be super modern for a $599 room ;-)

Sunday is the day when most of the MVPs register for the summit at the biggest summit hotel. Some soft-skill talks were held there too. The first parties organized by MVPs or tool vendors were on Saturday, so we joined them and met the first Microsofties and other famous MVPs. It got late, and Monday got hard. Anyway, the actual Summit started on Monday with a lot of technical sessions.

From Monday to Wednesday there were a lot of interesting technical sessions. Many of them really had a lot of value. Some others didn't contain new information for me, because most of the stuff in my area was openly discussed on GitHub, but they anyway clarified some rumors.

I really got into Razor Components, which is not about Blazor as I initially thought. Scott Hanselman also did a clarification post about it. [link] Razor Components is component-based development using Razor. It looks similar to React, and it may be rendered on the server side as well as on the client side using Blazor. Awesome stuff.

The Thursday also was a highlight for me. Thursday is hackathon day. I joined Jeff Fritz, who showed us his mobile streaming setup. I got a chance to talk to Jeff and to other Twitch streamers, like Emanuele Bartolesi. Besides that, I worked on the ASP.NET Core GraphQL Middlewares and had a chance to get a review by Glen Condron. He also told me that the way a Middleware is created changed in 3.0 for Middlewares that handle a specific path. I'll write about it in one of the next posts. Glen and James Newton King, who works on the new ASP.NET Core routing, supported me in getting it running for ASP.NET Core 3.0.

Post-Summit days in Seattle

On Thursday after the hackathon I moved back to Seattle into the Green Tortoise and again met the South Korean Azure MVP at the check-in. I used the night to work on the ASP.NET Core GraphQL Middleware to finish the GraphQL Middleware registration using the route mapping.

Friday was shopping day. My wife always needs some pants from her favorite store in Seattle and I needed to buy some souvenirs for the kids (usually some t-shirts). After this was done I decided to explore the International District and Chinatown, where I also had a quick lunch in one of the Asian restaurants. Chinatown was less colorful than expected but nice anyway. An awesome detail: you know you are in Chinatown if the street names are printed in two languages.

I left Chinatown and unexpectedly stumbled into the old part of Seattle. Pioneer Square was surprisingly nice: old houses, small shops and pubs. One of the pubs sells a German stout beer, "Köstritzer", as well as "Biers" and "Brats".

I also found the "Berliner" döner and kebab restaurant, which is (as far as I know) the very first and the only real döner restaurant in the US.

In the evening I decided to go to the Hard Rock Cafe across the street to have dinner. I was there for the first time, and I don't get why this is such a popular place: pretty loud, uncomfortable, and the food is good but not really special. Anyway, I continued to get the GraphiQL Middleware (the GraphQL UI) running using the new route mapping and cleaned up all the changes. Free beer at the Green Tortoise and coding match pretty well.

Saturday was the day to fly back home. The morning started with the annual JustCommunity Summit at Lowell's Restaurant in the Public Market area of Pike Place. Kostia and I had breakfast and talked about the plans for INETA Germany and JustCommunity. Our goal: to have a strategy for JustCommunity by the end of the year. We also need to line up the INETA tasks with the community support of the .NET Foundation.

Leaving Seattle

This was my fifth time in Seattle, which is one of the most impressive cities: pretty diverse, fascinating and pretty different from any other city in the US I've been to (not that many, unfortunately).

Leaving Seattle is a little bit like leaving home. In the last years I didn't know why. Now I'm pretty sure it is because I always meet friends, community members and many other nice people at the summit. The Summit is a little bit like an annual family meetup.

But one week without the family is hard as well and it is time to go home to my lovely wife and the three boys :-)

Holger Schwichtenberg: Visual Studio 2019 erscheint heute

Microsoft will release version 2019 of its IDE tonight at 6 p.m.

Jürgen Gutsch: Git Flow - About, installing and using

The people who know me also know that I'm a huge fan of consoles and CLIs. I run the dotnet CLI as well as the Angular CLI and the create-react CLI. Yeoman is also a tool I like. I own a Mac, but cannot really work with the Mac UI; I really prefer the terminal on the Mac. Git is also used in the console most of the time. The only situation where I don't use Git in the console is while resolving merge conflicts. I configured KDiff3 as the merge tool. I don't really need a graphical user interface for all the other tasks when working with Git.

The same goes for using the Git Flow process.

About Git Flow

In general Git Flow is a branching concept over Git. It is pretty clear and intuitive, but following this concept manually in Git is a bit hard and needs some time. Git Flow is now implemented in many graphical user interfaces like SourceTree. This reduces the overhead.

Git Flow is mainly about merging and branching. It defines two main branches, which are "master" as the production/release branch and "develop" as the working branch. The actual work is done in different types of feature branches:

  • "feature" a branch created based on "develop" to implement new features
    • will be merged back to "develop"
    • branch name pattern: feature/<name|ticket|#123-my-feature>
  • "release" a branch created based on "develop" to create a new release
    • the branch name gets the tag name
    • will create a tag
    • will be merged to "master" and "develop"
    • branch name pattern: release/<tag|version|1.2.0>
  • "hotfix" a branch created based on "master"
    • the branch name gets the tag name
    • will create a tag
    • will merge to "master" and "develop"
    • branch name pattern: hotfix/<tag|version|1.2.3>
  • "bugfix" less popular. We use "feature" to create bug fixes
    • not available in all tools
    • behaves like "feature"
  • "support" much less popular. We don't use it
    • not available in all tools
    • almost behaves like hotfixes

I propose to have a look into the Git Flow cheat sheet documentation to see how the branching concept works: http://danielkummer.github.io/git-flow-cheatsheet/

Git Flow is also available as a Git extension. This reduces branching, merging, releasing and tagging to just one single command and does all the needed tasks in the background for you. This CLI makes it super easy to follow Git Flow.

Install Git Flow as Git Extension

The installation is a bit annoying, because it needs some additional tools and quite a few steps for just a small Git extension.

To install it you need cygwin, which is a console that gives you Linux-like tools on Windows. The easiest way to install cygwin is to use Chocolatey, which is a package manager for Windows (like apt-get for Windows). You can also install it manually by running the installer, but then you need to ensure that cyg-get, wget and util-linux are also installed, which is much easier using Chocolatey.

To install Chocolatey follow the instructions on https://chocolatey.org.

Open a console and type the following commands

choco install cygwin
choco install cyg-get

If this is done you can use cyg-get to install the needed extensions for the cygwin console

Open the console and type the following commands:

cyg-get install wget
cyg-get install util-linux

Now cygwin is ready to install Git Flow. Type:

cygwin

This will open the cygwin bash inside the current console.

Now you are able to run the installation of Git Flow. Copy the following command to the cygwin bash and press enter:

wget -q -O - --no-check-certificate https://raw.github.com/petervanderdoes/gitflow-avh/develop/contrib/gitflow-installer.sh install stable | bash

If this is done, exit the bash by typing exit and close the console by typing exit again. Closing the console and opening it again ensures that all the needed environment variables are available.

Open a new console and type git flow. You should now see the Git Flow CLI help like this:

Every time you checkout or create a new repository you need to run git flow init to enable Git Flow.

Using this command you will set up Git Flow on an existing repository by configuring the different branch prefixes and specifying the two main branches. I would propose to choose the default prefixes and names:
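If you are fine with the defaults, the gitflow-avh CLI can also skip the interactive questions; a small sketch:

# initialize Git Flow with the default branch names and prefixes
git flow init -d

The -d switch accepts all default branch names and prefixes, which matches the proposal above.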

Working with Git Flow

Working with Git Flow is pretty easy using this CLI. Let's assume we need to start working on a feature called "Implement validation". We could now write a command like this:

git flow feature start implement-validation

This will work as expected:

Since most of us are using a planning tool like Jira or TFS, it would make more sense to use the ticket number as the feature name here. In case you use TFS, I would propose to add the work item type to the number:

  • Jira: PROJ-101
  • TFS: Task-34212

This helps to keep the branch names clean, and you don't start messing around with long or wrong branch names. Git Flow usually deletes the feature branch after merging it back, so the list of branches will never get too long. But anyway, I learned in the past few years that it is much easier to follow ticket numbers than weirdly named branches, because we talk about the current tickets every day in the daily scrum meeting.

All the commands that are not related to branches can be done using the regular Git CLI. That means commands to commit, to push and so on.

Git Flow will merge the branches when you finish them. It doesn't work with rebase or other approaches. This means it'll take over the entire history of the feature branch. Because of this I would also propose to add the ticket number to the commit messages, like this: "PROJ-101: adds validation to the form". This makes it easy to follow the history in case it is needed.
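For example, a commit inside such a feature branch could look like this (the ticket number and message are of course just illustrative):

git commit -m "PROJ-101: adds validation to the form"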

To finish a feature you should first merge the latest changes of the development branch in:

git fetch --all
git merge develop
git flow feature finish

If you don't add the feature name to the git flow feature finish command, Git Flow will try to close the current feature branch and will write out a message in case the current branch is not a feature branch.

I would propose to always merge the latest changes of develop into the current feature branch, to solve possible conflicts within the feature branch instead of in the develop branch. This way the merge to develop will almost never have a conflict.

I showed how to work with Git Flow using a feature branch, but it works the same way with the other branch types, except with the release and the hotfix branches, where you need to pass the tag name instead of a feature name. This should be the version number of the release or of the hotfix.
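As a rough sketch (the commands come from the gitflow-avh CLI, the version numbers are just examples), a release and a hotfix could look like this:

# start a release branch based on develop
git flow release start 1.2.0

# ... bump version numbers, update the changelog, commit ...

# finish: merges to master and develop and creates the tag 1.2.0
git flow release finish 1.2.0

# a hotfix works the same way, but is based on master
git flow hotfix start 1.2.1
git flow hotfix finish 1.2.1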

While finishing these two branch types Git Flow will ask you for a tag message. After finishing, you need to push both the master and the develop branch, as well as the tags:

git push --all
git push --tags

For more information about the Git Flow commands please follow the documentation on Daniel Kummer's Git Flow cheat sheet: http://danielkummer.github.io/git-flow-cheatsheet/ (which is BTW the best Git Flow documentation ever).

Conclusion

I really love the CLI help of this tool. It is not only descriptive but also explanatory, the same way the Git CLI explains things. It also provides proposals in case a command is misspelled.

Git Flow helps me to speed up the branching and merging flows and to follow the Git Flow process. I proposed to use Git Flow in the company and it works pretty well there. And I learned a lot about how this process works in production.

As written some time in the past, it also helps me to write my blog. I really use Git Flow to organize the posts I'm working on. I create a feature per post and a hotfix in case I need to fix a post or something else on the blog. I use SemVer to version my releases and hotfixes: every post increases the feature number and a hotfix increases the patch number. The feature number therefore also is the number of posts in my blog, and the number of open features is the number of posts I'm working on. This way I can work on many posts separately and I'm able to release the posts separately.

Code-Inside Blog: Load hierarchical data from MSSQL with recursive common table expressions

Scenario

We have a pretty simple scenario: We have a table with a simple Id + ParentId schema and some demo data in it. I have seen this design quite a lot in the past and in the relational database world this is the obvious choice.

(Screenshot: the Demo table with Id, ParentId and Name columns and some demo rows)
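To play with the queries below, a minimal sketch of such a table could look like this. The real schema and demo data from the post are not shown here, so the column types and rows are assumptions, chosen to match the Ids used later (2 and 7) and the "alternative root" mentioned at the end:

CREATE TABLE Demo
(
    Id       INT           NOT NULL PRIMARY KEY,
    ParentId INT           NULL,          -- NULL marks a root entry
    [Name]   NVARCHAR(100) NOT NULL
);

INSERT INTO Demo (Id, ParentId, [Name]) VALUES
(1,  NULL, N'Root'),
(2,  1,    N'Child A'),
(3,  1,    N'Child B'),
(4,  2,    N'Grandchild A1'),
(7,  4,    N'Great-grandchild'),
(10, NULL, N'Alternative root');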

Problem

Each data entry is really simple to load or manipulate: just load the target element and change the ParentId for a move action, etc. A more complex problem is how to load a whole “data tree”. Let’s say I want to load all children or parents of a given Id. You could load everything, but if your dataset is large enough, this operation will perform poorly and might kill your database.

Another naive way would be to query this with code from a client application, but if your “tree” is big enough, it will consume lots of resources, because for each “level” you open a new connection etc.

Recursive Common Table Expressions!

Our goal is to load the data in one go, as effectively as possible - without using Stored Procedures(!). In the Microsoft SQL Server world we have this handy feature called “common table expressions (CTE)”. A common table expression can be seen as a function inside a SQL statement. This function can invoke itself, and then we call it a “recursive common table expression”.

The syntax itself is a bit odd, but works well and you can enhance it with JOINs from other tables.

Scenario A: From child to parent

Let’s say you want to go the tree upwards from a given Id:

WITH RCTE AS
    (
    SELECT anchor.Id as ItemId, anchor.ParentId as ItemParentId, 1 AS Lvl, anchor.[Name]
    FROM Demo anchor WHERE anchor.[Id] = 7
    
    UNION ALL
    
    SELECT nextDepth.Id  as ItemId, nextDepth.ParentId as ItemParentId, Lvl+1 AS Lvl, nextDepth.[Name]
    FROM Demo nextDepth
    INNER JOIN RCTE recursive ON nextDepth.Id = recursive.ItemParentId
    )
                                    
SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE as hierarchie

The anchor.[Id] = 7 is our starting point and should be given as a SQL parameter. The WITH statement starts our function description, which we called “RCTE”. In the first SELECT we just load everything from the target element. Note that we add a “Lvl” column, which starts at 1. The UNION ALL is needed (at least we were not 100% sure if there are other options). In the next part we are doing a join based on the Id = ParentId schema and we increase the “Lvl” column for each level. The last line inside the common table expression uses the “recursive” feature.

Now we are done and can use the CTE like a normal table in our final statement.

Result:

(Screenshot: the result set for scenario A)

We now only load the “path” from the child entry up to the root entry.

If you ask why we introduce the “Lvl” column: with this column it is really easy to see each “step”, and it might come in handy in your client application.

Scenario B: From parent to all descendants

With a small change we can do the other way around. Loading all descendants from a given id.

The logic itself is more or less identical, we changed only the INNER JOIN RCTE ON …

WITH RCTE AS
    (
    SELECT anchor.Id as ItemId, anchor.ParentId as ItemParentId, 1 AS Lvl, anchor.[Name]
    FROM Demo anchor WHERE anchor.[Id] = 2
    
    UNION ALL
    
    SELECT nextDepth.Id  as ItemId, nextDepth.ParentId as ItemParentId, Lvl+1 AS Lvl, nextDepth.[Name]
    FROM Demo nextDepth
    INNER JOIN RCTE recursive ON nextDepth.ParentId = recursive.ItemId
    )
                                    
SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE as hierarchie

Result:

(Screenshot: the result set for scenario B)

In this example we only load all children from a given id. If you point this to the “root”, you will get everything except the “alternative root” entry.
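One thing that might be worth remembering for deep trees: SQL Server stops a recursive CTE after 100 recursion levels by default. A hedged sketch of how the final SELECT could raise that limit:

SELECT ItemId, ItemParentId, Lvl, [Name]
FROM RCTE as hierarchie
OPTION (MAXRECURSION 500);  -- default is 100, 0 means unlimited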

Conclusion

Working with trees in a relational database might not “feel” as good as in a document database, but that doesn’t mean that such scenarios need to perform badly. We use this code at work for some bigger datasets and it works really well for us.

Thanks to my colleague Alex - he discovered this wild T-SQL magic.

Hope this helps!

Albert Weinert: Kostenloser Live ASP.NET Core Authentication und Authorization Deep Dive am 31.03.2019

Free of charge, but not in vain!

On my Twitch live coding channel I announced a follower goal: at one hundred followers I would do a live ASP.NET Core authentication and authorization deep dive. This goal has now been reached, and now I have to follow up with action.

The action starts on Sunday, March 31, 2019 at 11 a.m., when the deep dive goes live on air together with Jürgen Gutsch, who kindly makes himself available as moderator, question asker and connection to the chat.

What can you expect?

2-3 hours of options, dos and don'ts around the topic; you will also be able to raise questions, problems and wishes, in advance or live in the chat. From cookie authentication to OpenID Connect, from protection against common attacks from the net to the building blocks that are available. Many hints about what you can do wrong, why, and how to do it right instead.

It won't be a pure lecture, but a relaxed dialogue between Jürgen, the chat and me, and of course I will also show and write a lot of code.

Do you have questions about the topic?

Then the best way is to leave them at the matching GitHub issue, or via Twitter with the hashtag #deepdivealbert. Alternatively, post them here as a comment. Of course you can also contribute during the stream; for that you need a Twitch account and have to be logged in.

Do you have to sign up for Twitch?

No, you can also watch the stream without signing up, but then you can't take part in the chat.

The recording is now online.

Uli Armbruster: Freigrenze vs. Freibetrag

Today my tax advisor made me aware of the important distinction between "Freigrenze" (exemption threshold) and "Freibetrag" (tax-free allowance), two terms I had used synonymously until now.

Here is an example:

  • €40 is the respective limit
  • The purchase exceeds the limit by €1, i.e. the total cost amounts to €41

Freigrenze (exemption threshold)

In the case of a Freigrenze, if the limit is exceeded the full amount has to be taxed, i.e. the full €41 are taxable.

Freibetrag (tax-free allowance)

In the case of a Freibetrag, only the amount exceeding the limit has to be taxed, i.e. €1.

Promotional giveaways (up to €10 net), gifts in kind to business partners (€35 per year and person) and benefits in kind for employees (€44 per employee and month | not transferable to the following month) are Freigrenzen. Unfortunately, most limits in a business context are Freigrenzen.

A Freibetrag would be, for example, the so-called Rabattfreibetrag (employee discount allowance), where the employer grants its employees discounts on its own goods or services.

Michael Schwarz: Nach längerer Pause - jetzt zu Apple Themen auf Twitter

After a longer break I have now switched to Apple topics on Twitter. You can follow me at https://twitter.com/DieApfelFamilie.


Christina Hirth : Continuous Delivery Is a Journey – Part 2

After describing the context a little bit in part one, it is time to look at the single steps the source code must pass in order to be delivered to the customers. (I’m sorry, but this is quite a long part 🙄)

The very first step starts with pushing all the current commits to master (if you work with feature branches you will probably encounter a new level of self-made complexity which I don’t intend to discuss here).

This action triggers the first checks and quality gates like licence validation and unit tests. If all checks are “green” the new version of the software will be saved to the repository manager and will be tagged as “latest”.

Successful push leads to a new version of my service/pkg/docker image

At this moment the continuous integration is done, but the features are far from being used by any customer. I have a first piece of feedback that I didn’t break any tests or other basic constraints, but that’s all, because nobody can use the features; they are not deployed anywhere yet.

Well let Jenkins execute the next step: deployment to the Kubernetes environment called integration (a.k.a. development)

Continuous delivery to the first environment including the execution of first acceptance tests

At this moment all my changes are tested to see whether they work together with the currently integrated features developed by my colleagues and whether the new features are evolving in the right direction (or are done and ready for acceptance).

This is not bad, but what if I want to be sure that I didn’t break the “platform”? What if I don’t want to disturb everybody else working on the same product because I made some mistakes, while still wanting to be a human and therefore be allowed to make mistakes 😉? This means that the behavioral and structural changes introduced by my commits should be tested before they land on integration.

These obviously must be a different set of tests. They should test whether the whole system (composed of a few microservices, each having its own data persistence, and one or more UI apps) is working as expected, is resilient, is secure, etc.

At this point the power of Kubernetes (k8s) and ksonnet came as a huge help. Having k8s in place (and having the infrastructure as code), it is almost a no-brainer to set up a new environment to wire up the single systems in isolation and execute the system tests against it. This needs not only the k8s part as code but also the resources deployed and running on it. With ksonnet, every service, deployment, ingress configuration (which manages external access to the services in a cluster) or config map can be defined and configured as code. ksonnet not only supports deploying to different environments but also offers the possibility to compare them. There are a lot of tools offering these possibilities; it is not only ksonnet. It is important to choose a fitting tool, and it is even more important to invest the time and effort to configure everything as code. This is a must-have in order to achieve real automation and continuous deployment!

Good developer experience also means simplified continuous deployment

I will not include any ksonnet examples here; they have a great documentation. What is important to realize is the opportunity offered by such an approach: if everything is code, then every change can be checked in. Everything checked in can be observed/monitored, can trigger pipelines and/or events, can be reverted, can be commented, and, the feature that helped us in our solution, can be tagged.

What happens in a continuous delivery? Some change in the VCS triggers a pipeline, the fitting version of the source code is loaded (either as source code like ksonnet files or as a package or Docker image), the configured quality gate checks are verified (the runtime environment is wired up, the specs with the referenced version are executed), and in case of success the artifact gets tagged as “thumbs up” and promoted to the next environment. We started doing this manually to gather enough experience to automate the process.

Deploy manually the latest resources from integration to the review stage

If you have all this working, you have finished the part with the biggest effort. Now it is time to automate and generalize the single steps. After the continuous integration the only changes will occur in the ksonnet repo (all other source code changes are done before), which is here called the deployment repo.

Roll out, test and eventually roll back the system ready for review

I think this post is already too long. In the next part (I think it will be the last one) I would like to write about the last essential method: how to deploy to production without annoying anybody (no secret here, this is what feature toggles were invented for 😉), and about some open questions and decisions we encountered on our journey.

Every graphic was created with plantuml, thank you very much!

to be continued …

Golo Roden: Einführung in Node.js, Folge 26: Let's code (comparejs)

JavaScript, like other programming languages, has operators for comparing values. Unfortunately, the way they work often runs counter to intuition. So why not rewrite the comparison operators as a module and pay attention to predictable behavior while doing so?

Christina Hirth : Continuous Delivery Is a Journey – Part 1

Last year my colleagues and I had the pleasure to spend 2 days with @hamvocke and @diegopeleteiro from @thoughtworks reviewing the platform we created. One essential part of our discussions was about CI/CD described like this: “think about continuous delivery as a journey. Imagine every git push lands on production. This is your target, this is what your CD should enable.”

Even if (or maybe because) this thought scared the hell out of us, it became our vision for the next few months, because we saw great opportunities we would gain if we were able to work this way.

Let me describe the context we were working in:

  • Four business teams, 100% self-organized, owning 1…n Self-contained Systems, creating microservices running as Docker containers orchestrated with Kubernetes, hosted on AWS.
  • Boundaries (as in Domain Driven Design) defined based on the business we were in.
  • Each team having full ownership and full accountability for their part of business (represented by the SCS).
  • Basic heuristics regarding source code organisation: “share nothing” about business logic, “share everything” about utility functions (in OSS manner), about experiences you made, about the lessons you learned, about the errors you made.
  • Ensuring the code quality and the software quality is 100% team responsibility.
  • You build it, you run it.
  • One Platform-as-a-Service team to enable these business teams to deliver features fast.
  • GitLab as VCS, Jenkins as build server, Nexus as package repository
  • Trunk-based development, no cherry picking, “roll fast forward” over roll back.
Teams
4 Business Teams + 1 Platform-as-a-Service Team = One Product

The architecture we have chosen was meant to support our organisation: independent teams able to work and deliver features fast and independently. They should decide themselves when and what they deploy. In order to achieve this we defined a few rules regarding inter-system communication. The most important ones are:

  • Event-driven architecture: no synchronous communication, only asynchronous communication via the Domain Event Bus
  • Non-blocking systems: every SCS must remain (reduced) functional even if all the other systems are down

We had only a couple of exceptions to these rules. As an example: authentication doesn’t really make sense in an asynchronous manner.

Working in self-organized, independent teams is a really cool thing. But

with great power there must also come great responsibility

Uncle Ben to his nephew

Even though we set some guardrails regarding the overall architecture, the teams still had the ownership of the internal architecture decisions. As we didn't have continuous delivery in place at the beginning, every team alone was responsible for deploying its systems. Due to the missing automation we were not only predestined to make human errors, we were also blind to the couplings between our services. (And of course we spent a lot of time doing stuff manually instead of letting Jenkins or GitLab or some other tool do this stuff for us 🤔)

One example: every one of our systems had at least one React app and a GraphQL API as the main communication (read/write/subscribe) channel. One of the best things about GraphQL is the possibility to include the GraphQL schema in the React app and this way have the API interface definition included in the client application.

Is this not cool? It can be. Or it can lead to some very smelly behavior, to really tight coupling and to the inability to deploy the app and the API independently. And just like my friend @etiennedi says: “If two services cannot be deployed independently they aren’t two services!”

This was the first lesson we have learned on this journey: If you don’t have a CD pipeline you will most probably hide the flaws of your design.

One can surely ask “what is the problem with manual deployment?” - nothing, if you have only a few services to handle, if everyone in your team knows about these couplings and dependencies and is able to execute the very precise deployment steps to minimize the downtime. But otherwise? This method doesn’t scale, this method is not very professional, and the biggest problem: this method ignores the possibilities offered by Kubernetes to safely roll out, take down, or scale everything you have built.

Having an automated, standardized CD pipeline as described at the beginning, with the goal that every commit lands on production in a few seconds, forces everyone to think about the consequences of his/her commit, to write backwards-compatible code, and to become a more considerate developer.

to be continued …

Stefan Henneken: MEF Part 3 – Life cycle management and monitoring

Part 1 took a detailed look at binding of composable parts. In an application, however, we sometimes need to selectively break such bindings without deleting the entire container. We will look at interfaces which tell parts whether binding has taken place or whether a part has been deleted completely.

The IPartImportsSatisfiedNotification interface

For parts, it can be helpful to know when binding has taken place. To achieve this, we implement an interface called IPartImportsSatisfiedNotification. This interface can be implemented in both imports and exports.

[Export(typeof(ICarContract))]
public class BMW : ICarContract, IPartImportsSatisfiedNotification
{
    // ...
    public void OnImportsSatisfied()
    {
        Console.WriteLine("BMW import is satisfied.");
    }
}
class Program : IPartImportsSatisfiedNotification
{
    [ImportMany(typeof(ICarContract))]
    private IEnumerable<Lazy<ICarContract>> CarParts { get; set; }
 
    static void Main(string[] args)
    {
        new Program().Run();
    }
    void Run()
    {
        var catalog = new DirectoryCatalog(".");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
        foreach (Lazy<ICarContract> car in CarParts)
            Console.WriteLine(car.Value.StartEngine("Sebastian"));
        container.Dispose();
    }
    public void OnImportsSatisfied()
    {
        Console.WriteLine("CarHost imports are satisfied.");
    }
}

Sample 1 (Visual Studio 2010) on GitHub

When the above program is run, the OnImportsSatisfied() method of the host will be executed after container.ComposeParts(this) has run. When an export is accessed for the first time, the export will first run its constructor, then its OnImportsSatisfied() method, and finally its StartEngine() method.

If we don’t use the Lazy<T> class, the sequence in which the methods are called is somewhat different. In this case, after executing the container.ComposeParts() method, the constructor, and then the OnImportsSatisfied() method will first be executed for all exports. Only then the OnImportsSatisfied() method of the host will be called, and finally the StartEngine() method for all exports.

Using IDisposable

As usual in .NET, the IDisposable interface should also be implemented by exports. Because the Managed Extensibility Framework manages the parts, only the container containing the parts should call Dispose(). If the container calls Dispose(), it also calls the Dispose() method of all of the parts. It is therefore important to call the container’s Dispose() method once the container is no longer required.

Releasing exports

If the creation policy is defined as NonShared, multiple instances of the same export will be created. These instances will then only be released when the entire container is destroyed by using the Dispose() method. With long-lived applications in particular, this can lead to problems. Consequently, the CompositionContainer class possesses the methods ReleaseExports() and ReleaseExport(). ReleaseExports() destroys all parts, whilst ReleaseExport() releases parts individually. If an export has implemented the IDisposable interface, its Dispose() method is called when you release the export. This allows selected exports to be removed from the container, without having to destroy the entire container. The ReleaseExports() and ReleaseExport() methods can only be used on exports for which the creation policy is set to NonShared.

In the following example, the IDisposable interface has been implemented in each export.

using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarBMW
{
    [Export(typeof(ICarContract))]
    public class BMW : ICarContract, IDisposable
    {
        private BMW()
        {
            Console.WriteLine("BMW constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the BMW.", name);
        }
        public void Dispose()
        {
            Console.WriteLine("Disposing BMW.");
        }
    }
}

The host first binds all exports to the import. After calling the StartEngine() method, we use the ReleaseExports() method to release all of the exports. After re-binding the exports to the import, this time we remove the exports one by one. Finally, we use the Dispose() method to destroy the container.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.NonShared)]
        private IEnumerable<Lazy<ICarContract>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
 
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
 
            Console.WriteLine("");
            Console.WriteLine("ReleaseExports.");
            container.ReleaseExports<ICarContract>(CarParts);
            Console.WriteLine("");
 
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
 
            Console.WriteLine("");
            Console.WriteLine("ReleaseExports.");
            foreach (Lazy<ICarContract> car in CarParts)
                container.ReleaseExport<ICarContract>(car);
 
            Console.WriteLine("");
            Console.WriteLine("Dispose Container.");
            container.Dispose();
        }
    }
}

The program output therefore looks like this:

(Screenshot: console output of sample 2)

Sample 2 (Visual Studio 2010) on GitHub

Golo Roden: Einführung in Node.js, Folge 25: Let's code (is-subset-of)

If you want to know in JavaScript whether an array or an object is a subset of another array or object, there is no simple way to find out, especially not when a recursive analysis is desired. So why not develop a module for that purpose?

André Krämer: Verstärkung bei der Quality Bytes GmbH in Sinzig gesucht (Softwareentwickler .NET, Softwareentwickler Angular, Xamarin, ASP.NET Core)

This could be your new desk. In the summer of 2018 I founded the Quality Bytes GmbH in Sinzig am Rhein, located between Bonn and Koblenz, together with a partner. Since then we have been developing exciting solutions in the web and mobile space with a team of four developers. We rely on modern technologies and tools such as ASP.NET Core, Angular, Xamarin, Azure DevOps, git, TypeScript and C#. We currently have several positions to fill.

Code-Inside Blog: Check Scheduled Tasks with Powershell

Task Scheduler via Powershell

Let’s say we want to know the latest result of the “GoogleUpdateTaskMachineCore” task and the corresponding actions.

(Screenshot: the GoogleUpdateTaskMachineCore task in the Windows Task Scheduler)

All you have to do is this (in a Run-As-Administrator Powershell console) :

Get-ScheduledTask | where TaskName -EQ 'GoogleUpdateTaskMachineCore' | Get-ScheduledTaskInfo

The result should look like this:

LastRunTime        : 2/26/2019 6:41:41 AM
LastTaskResult     : 0
NextRunTime        : 2/27/2019 1:02:02 AM
NumberOfMissedRuns : 0
TaskName           : GoogleUpdateTaskMachineCore
TaskPath           : \
PSComputerName     :

Be aware that the “LastTaskResult” might be displayed as an integer. The full “result code list” documentation only lists the hex value, so you need to convert the number to hex.
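A small sketch of how that conversion could look directly in PowerShell (the task name is the one from above; a LastTaskResult of 0 simply means success):

# Read the last result of the task and format it as a hex result code
$info = Get-ScheduledTask -TaskName 'GoogleUpdateTaskMachineCore' | Get-ScheduledTaskInfo
'LastTaskResult: 0x{0:X8}' -f $info.LastTaskResult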

Now, if you want to access the corresponding actions you need to work with the “actual” task like this:

PS C:\WINDOWS\system32> $task = Get-ScheduledTask | where TaskName -EQ 'GoogleUpdateTaskMachineCore'
PS C:\WINDOWS\system32> $task.Actions


Id               :
Arguments        : /c
Execute          : C:\Program Files (x86)\Google\Update\GoogleUpdate.exe
WorkingDirectory :
PSComputerName   :

If you want to dig deeper, just checkout all the properties:

PS C:\WINDOWS\system32> $task | Select *


State                 : Ready
Actions               : {MSFT_TaskExecAction}
Author                :
Date                  :
Description           : Keeps your Google software up to date. If this task is disabled or stopped, your Google
                        software will not be kept up to date, meaning security vulnerabilities that may arise cannot
                        be fixed and features may not work. This task uninstalls itself when there is no Google
                        software using it.
Documentation         :
Principal             : MSFT_TaskPrincipal2
SecurityDescriptor    :
Settings              : MSFT_TaskSettings3
Source                :
TaskName              : GoogleUpdateTaskMachineCore
TaskPath              : \
Triggers              : {MSFT_TaskLogonTrigger, MSFT_TaskDailyTrigger}
URI                   : \GoogleUpdateTaskMachineCore
Version               : 1.3.33.23
PSComputerName        :
CimClass              : Root/Microsoft/Windows/TaskScheduler:MSFT_ScheduledTask
CimInstanceProperties : {Actions, Author, Date, Description...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties

If you have worked with Powershell in the past this blogpost should be “easy”, but it took me a while to find the result code and to check whether the action was correct or not.

Hope this helps!

Golo Roden: Einführung in Node.js, Folge 24: Let's code (typedescriptor)

JavaScript's typeof operator has some weaknesses: for example, it cannot distinguish between objects and arrays, and it incorrectly identifies null as an object. A custom module that reliably identifies and describes types provides a remedy.

Jürgen Gutsch: Problems using a custom Authentication Cookie in classic ASP.​NET

A customer of mine created their own authentication service that combines various login mechanisms in their on-premise application environment. This central service combines authentication via Active Directory, classic ASP.NET Forms Authentication, and a custom login via the number of an access card:

  • Active Directory (For employees only)
  • Forms Authentication (against a user store in the database for extranet users, and against the AD for employees via the extranet)
  • Access badges (for employees, this authentication results in lower access rights)

This worked pretty nicely in their environment until I created a new application which needs to authenticate against this service and was built using ASP.NET 4.7.2.

BTW: Unfortunately I couldn't use ASP.NET Core here because I needed to reuse specific MVC components that are shared between all the applications.

I also wrote "classic ASP.NET", which feels a bit weird. I have worked with ASP.NET for a long time (since .NET 1.0) and still work with ASP.NET for specific customers. But it really is kind of classic, since ASP.NET Core is out and since I have worked a lot with ASP.NET Core as well.

How the customer solution works

I cannot go into the deep details, because this is the customer's code; you only need to get the idea.

The reason why it didn't work with the new ASP.NET Framework is that they use a custom authentication cookie that is based on ASP.NET Forms Authentication. I'm pretty sure that when the authentication service was created they didn't know about ASP.NET Identity, or it didn't exist yet. They created a custom Identity that stores all the user information as properties. They build an authentication ticket out of it and use Forms Authentication to encrypt and store that cookie. The cookie name is customized in the web.config, which is not an issue. All the apps share the same encryption information.

The client applications that use the central authentication service read that cookie, decrypt the information using Forms Authentication, and de-serialize the data into that custom authentication ticket that contains the user information. The user then gets created, stored into the User property of the current HttpContext, and is authenticated in the application.

This sounds pretty straightforward and was working well, except in newer ASP.NET versions.

How it should work

The best way to use the authentication cookie would be to use the ASP.NET Identity mechanisms to create that cookie. After the authentication has happened on the central service, the needed user information should be stored as claims inside the identity object, instead of as properties in a custom Identity object. The authentication cookie should be stored using the Forms Authentication mechanism only, without a custom authentication ticket. Forms Authentication is able to create that ticket including all the claims.

On the client applications, Forms Authentication would have read the cookie and created a new Identity, including all the claims that are defined in the central authentication service. The Forms Authentication module would have stored the user in the current HttpContext as well.

Less code, much easier. IMHO.
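Just to illustrate the idea, here is a rough sketch (not the customer's code, and simplified to plain Forms Authentication without claims): the central service issues a standard ticket via the built-in APIs, and the client applications need no custom code at all, because the FormsAuthenticationModule decrypts the cookie and populates HttpContext.Current.User on its own. The variable names are hypothetical.

// Central authentication service, after the credentials were validated.
// Anything that is needed later (e.g. the language) can go into the UserData of the standard ticket.
var ticket = new FormsAuthenticationTicket(
    1,                              // version
    userName,                       // authenticated user name (hypothetical variable)
    DateTime.Now,                   // issue date
    DateTime.Now.AddMinutes(30),    // expiration
    false,                          // not persistent
    "de-DE");                       // user data, e.g. the user's language

var cookie = new HttpCookie(
    FormsAuthentication.FormsCookieName,
    FormsAuthentication.Encrypt(ticket));

Response.Cookies.Add(cookie);

// On the client applications nothing special is needed: the FormsAuthenticationModule
// reads and decrypts the cookie and sets HttpContext.Current.User automatically.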

What is the actual problem?

The actual problem is that the client applications read the authentication cookie from the CookieCollection in Application_PostAuthenticateRequest:

// removed logging and other overhead

protected void Application_PostAuthenticateRequest(Object sender, EventArgs e)
{
    var serMod = default(CustomUserSerializeModel);

    // read and decrypt the custom forms authentication cookie
    var authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (authCookie != null || Request.IsLocal)
    {
        var ticket = FormsAuthentication.Decrypt(authCookie.Value);
        var serializer = new JavaScriptSerializer();
        serMod = serializer.Deserialize<CustomUserSerializeModel>(ticket.UserData);
    }

    // some fallback code ...

    if (serMod != null)
    {
        var user = new CustomUser(serMod);
        var cultureInfo = CultureInfo.GetCultureInfo(user.Language);

        HttpContext.Current.User = user;
        Thread.CurrentThread.CurrentCulture = cultureInfo;
        Thread.CurrentThread.CurrentUICulture = cultureInfo;
    }

    // some more code ...
}

In newer ASP.NET Frameworks the authentication cookie gets removed from the cookie collection after the user was authenticated.

Actually I have no idea since what version the cookie gets removed, but this is a good thing for security reasons anyway. However, there is no information about it in the release notes since ASP.NET 4.0.

Anyway the cookie collection doesn't contain the authentication cookie anymore and the cookie variable is null if I try to read it out of the collection.

BTW: The cookie is still in the request headers and could be read manually, but because of the encryption it would be difficult to read it that way.

I tried to solve this problem by reading the cookie in Application_AuthenticateRequest. This is also not working, because the FormsAuthenticationModule already reads the cookie before that handler runs.

The next try was to read it in Application_BeginRequest. This generally works: I get the cookie and I can read it. But because the cookie is configured as the authentication cookie, the FormsAuthenticationModule tries to read it afterwards and fails. It sets the User to null, because there is an authentication cookie available which doesn't contain valid information, which also kind of makes sense.

So this is not the right solution either.

I worked on that problem for almost four months. (Not the complete four months, but for many hours within those four months.) I compared applications and other solutions. Because there was no hint about the removal of the authentication cookie, and because it was working in the old applications, I was pretty confused about the behavior.

I studied the source code of ASP.NET to get the solution. And there is one.

And finally the solution

The solution is to read the cookie in FormsAuthentication_OnAuthenticate in the global.asax and not to store the user in the current context, but in the User property of the event arguments. The user then gets stored in the context by the FormsAuthenticationModule, which also executes this event handler.

// removed logging and other overhead

protected void FormsAuthentication_OnAuthenticate(Object sender, FormsAuthenticationEventArgs args)
{
    AuthenticateUser(args);
}

public void AuthenticateUser(FormsAuthenticationEventArgs args)
{
    var serMod = default(CustomUserSerializeModel);

    // read and decrypt the custom forms authentication cookie
    var authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (authCookie != null || Request.IsLocal)
    {
        var ticket = FormsAuthentication.Decrypt(authCookie.Value);
        var serializer = new JavaScriptSerializer();
        serMod = serializer.Deserialize<CustomUserSerializeModel>(ticket.UserData);
    }

    // some fallback code ...

    if (serMod != null)
    {
        var user = new CustomUser(serMod);
        var cultureInfo = CultureInfo.GetCultureInfo(user.Language);

        args.User = user; // <<== this does the thing!
        Thread.CurrentThread.CurrentCulture = cultureInfo;
        Thread.CurrentThread.CurrentUICulture = cultureInfo;
    }

    // some more code ...
}

That's it.

Conclusion

Please don't create custom authentication cookies; try the Forms Authentication and ASP.NET Identity mechanisms first. This is much simpler and won't break because of future changes like this.

Also, please don't write a custom authentication service, because there is already a good one out there that is almost the standard. Have a look at IdentityServer, which also provides the option to handle different authentication mechanisms using common standards and technologies.

If you really need to create a custom solution, be carefully and know what you are doing.
