Golo Roden: Enumerating objects in JavaScript

JavaScript has no forEach loop for objects. However, modern language features such as the Object.entries function and the for-of loop make it easy and elegant to build such a loop yourself.
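
A minimal sketch of the idea (the object and its properties are made up for illustration):

const user = { name: 'Jane', role: 'admin' };

// Enumerate key/value pairs much like a forEach over the object.
for (const [key, value] of Object.entries(user)) {
  console.log(`${key}: ${value}`);
}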

Manfred Steyer: Angular Elements, Part I: A Dynamic Dashboard in Four Steps with Web Components

Beginning with version 6, we can expose Angular Components as Web Components, or to be more precise: as Custom Elements, which is one of the standards behind the umbrella term Web Components. They can be reused with any framework and even with VanillaJS. In addition, we can very easily create them at runtime because they are rendered by the browser. Dynamically adding new Web Components to a page is just a matter of creating DOM nodes.

Here, I'm using this idea to build a dynamic dashboard.

Dynamic Dashboard

The source code can be found in my GitHub repo.

Step 1: Installing Angular Elements and Polyfills

It is no surprise that Angular Elements can be installed via npm. In addition, I also install the @webcomponents/custom-elements package, which polyfills Custom Elements back to Internet Explorer 11.

npm i @angular/elements --save
npm i @webcomponents/custom-elements --save

After this, reference the polyfill at the end of your polyfills.ts:

import '@webcomponents/custom-elements/custom-elements.min';

Another file of this package needs to be referenced in your angular.json:

"scripts": [
  "node_modules/@webcomponents/custom-elements/src/native-shim.js"
]

It is needed for browsers that natively support Web Components when we downlevel our source code to ECMAScript 5, because Custom Elements are defined for ECMAScript 2015 and above.

As an alternative, you could also install @angular/elements with the new ng add command:

ng add @angular/elements

This command also downloads a polyfill and references it in your angular.json. It is slimmer than the one I'm using here, but it does not support Internet Explorer 11.

Step 2: Create your Angular Components

The dashboard tile I want to expose as a Web Component looks like this:

@Component({
  // selector: 'app-dashboard-tile',
  templateUrl: './dashboard-tile.component.html',
  styleUrls: ['./dashboard-tile.component.css']
})
export class DashboardTileComponent {
  @Input() a: number;
  @Input() b: number;
  @Input() c: number;
}

I'm not using a selector because the Custom Element gets one assigned when it is registered. This way, I'm preventing naming conflicts.

Step 3: Register your Angular Component as a Custom Element

For exposing an Angular Component as a Custom Element, we need to declare it and put it into the entryComponents section of a module. This is necessary because Angular Elements creates it dynamically at runtime:

@NgModule({
  […],
  declarations: [
    […]
    DashboardTileComponent
  ],
  entryComponents: [
    DashboardTileComponent
  ]
})
export class DashboardModule {
  constructor(private injector: Injector) {
    const tileCE = createCustomElement(DashboardTileComponent, { injector: this.injector });
    customElements.define('dashboard-tile', tileCE);
  }
}

The method createCustomElement wraps the DashboardTileComponent so that it looks like a Web Component. Using customElements.define we can register it with the browser.

Step 4: Use the Custom Element

Now, we can use the Custom Element like any other built-in HTML tag:

<dashboard-tile a="100" b="50" c="25"></dashboard-tile>

As the browser renders it, Angular is not aware of the element name dashboard-tile. To prevent Angular from throwing an error, we have to use the CUSTOM_ELEMENTS_SCHEMA:

@NgModule({
  […]
  schemas: [
    CUSTOM_ELEMENTS_SCHEMA
  ]
})
export class AppModule { }

We can even dynamically create a DOM node with it, which is one key to dynamic UIs:

const tile = document.createElement('dashboard-tile');
tile.setAttribute('class', 'col-lg-4 col-md-3 col-sm-2');
tile.setAttribute('a', '100');
tile.setAttribute('b', '50');
tile.setAttribute('c', '25');

const content = document.getElementById('content');
content.appendChild(tile);

If you want to make sure that your application also supports other environments -- e.g. server-side rendering or hybrid apps -- you should use the Renderer2 service, which abstracts DOM manipulations.
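
A minimal sketch of that approach, assuming the tile is created from inside an Angular component (the surrounding component is invented for illustration; only the dashboard-tile element comes from the example above):

import { Component, ElementRef, Renderer2 } from '@angular/core';

@Component({
  selector: 'app-dashboard',
  template: '<div class="content"></div>'
})
export class DashboardComponent {
  constructor(private renderer: Renderer2, private host: ElementRef) { }

  addTile(): void {
    // Create and attach the Custom Element through the abstraction instead of using the DOM API directly.
    const tile = this.renderer.createElement('dashboard-tile');
    this.renderer.setAttribute(tile, 'a', '100');
    this.renderer.setAttribute(tile, 'b', '50');
    this.renderer.setAttribute(tile, 'c', '25');
    this.renderer.appendChild(this.host.nativeElement, tile);
  }
}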

Uli Armbruster: Microservices don't like thinking in classic entities

I recently had a very special aha moment in a workshop with Udi Dahan, CEO of Particular. His example was about the classic customer entity.

Implementing microservices means having to cut business concerns cleanly and pack them into independent silos (or pillars). Each silo must have sovereignty over its own data, on which it maps the associated business processes. So far, so good. But how can this be accomplished in the case of a customer, which is classically modeled as shown in the screenshot? Different properties are needed or changed by different microservices.

If the same entity is used in all silos, there has to be corresponding synchronization between the microservices. This has considerable effects on scalability and performance. In an application with frequent parallel changes to an entity, business processes will fail more and more often, or in the worst case this will lead to inconsistencies.

Classic customer entity

Udi suggests the following modeling:

New modeling of a customer

The customer is modeled by independent entities

To identify which data belongs together, Udi suggests an interesting approach:

Ask the business department whether changing one property has an effect on another property.

Would changing the last name have an influence on the price calculation? Or on the type of marketing?
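
To make the idea more concrete, here is a hedged sketch of such a split (the type and property names are invented, they are not taken from Udi's example):

// Each service owns only the slice of "customer" it needs, with its own identifier.
interface CustomerMaster {            // customer master data service
  customerMasterId: string;
  firstName: string;
  lastName: string;
}

interface PriceCalculationProfile {   // price calculation service
  priceCalculationId: string;
  discountRate: number;
}

interface MarketingProfile {          // marketing service
  marketingId: string;
  preferredChannel: 'email' | 'letter';
}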

Now the problem of aggregation remains to be solved, i.e. when I want to display different data from different microservices in my view. Classically, there would now be a table containing the columns

 

ID_Kunde ID_Kundenstamm ID_Bestandskundenmarketing ID_Preiskalkulation

 

This, however, leads to two problems:

  1. The table must be extended every time a new microservice is added.
  2. If a microservice covers the same functionality in the form of different data, several columns would have to be added per microservice and NULL values would have to be allowed.

An example of point 2 would be a microservice covering payment methods. In the beginning there were, for example, only credit card and direct debit. Then PayPal followed, and a short time later Bitcoin. The microservice would have several tables for this, holding the individual data for each payment method. In the aggregation table shown above, however, a column would have to be filled for each payment method the customer uses. If he does not use it, NULL would be written. You can already tell: that smells.

A different approach is much better suited here. Which one that is and how it can be realized technically can be found in the GitHub repository of Particular.

 

Golo Roden: An asynchronous 'map' for JavaScript

The 'map' function in JavaScript always works synchronously and has no asynchronous counterpart. However, since 'async' functions are transformed by the compiler into synchronous functions that return promises, 'map' can be combined with 'Promise.all' to achieve the desired effect.
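
A minimal sketch of the combination (the loadAll helper and the URLs are invented for illustration):

async function loadAll(urls: string[]): Promise<string[]> {
  // map starts all requests immediately; Promise.all waits until every promise has resolved.
  return Promise.all(urls.map(async url => {
    const response = await fetch(url);
    return response.text();
  }));
}

// loadAll(['https://example.com/a', 'https://example.com/b']).then(texts => console.log(texts));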

Jürgen Gutsch: Configuring HTTPS in ASP.NET Core 2.1

Finally, HTTPS makes it into ASP.NET Core. It was there before, back in 1.1, but was kinda tricky to configure. It was available in 2.0, but not configured by default. Now it is part of the default configuration and pretty visible and present to developers who create a new ASP.NET Core 2.1 project.

So the title of this blog post is pretty misleading, because you don't need to configure HTTPS: it already is configured. So let's have a look at how it is configured and how it can be customized. First, create a new ASP.NET Core 2.1 web application.

Did you already install the latest .NET Core SDK? If not, go to https://dot.net/ to download and install the latest version for your platform.

Open a console and CD to your favorite location to play around with new projects. It is C:\git\aspnet\ in my case.

mkdir HttpSecureWeb && cd HttpSecureWeb
dotnet new mvc -n HttpSecureWeb -o HttpSecureWeb
dotnet run

These commands create and run a new application called HttpSecureWeb. And you will see HTTPS for the first time in the console output of a newly created ASP.NET Core 2.1 application:

There are two different URLs Kestrel is listening on: https://localhost:5001 and http://localhost:5000

If you go to the Configure method in Startup.cs, you will find some new middlewares used to prepare this web application for HTTPS:

In the Production and Staging environment mode there is this middleware:

app.UseHsts();

This enables HSTS (HTTP Strict Transport Security), a web security policy that helps to avoid man-in-the-middle attacks. It tells the browser to access the specific host only via HTTPS for a specific time range, so later plain-HTTP requests to that host are not sent over an insecure connection. (More about HSTS)

The next new middleware redirects all requests without HTTPS to use the HTTPS version:

app.UseHttpsRedirection();

If you call http://localhost:5000, you get redirected immediately to https://localhost:5001. This makes sense if you want to enforce HTTPS.

So from the ASP.NET Core perspective, everything is done to run the web application using HTTPS. Unfortunately, the certificate is missing. For production mode you need to buy a valid, trusted certificate and install it in the Windows certificate store. For development mode, you are able to create a development certificate using Visual Studio 2017 or the .NET CLI. VS 2017 creates a certificate for you automatically.

Using the .NET CLI tool "dev-certs" you are able to manage your development certificates: exporting them, cleaning all development certificates, trusting the current one and so on. Just type the following command to get more detailed information:

dotnet dev-certs https --help

On my machine I trusted the development certificate so I don't get the ugly error screen in the browser about an untrusted certificate and an insecure connection every time I want to debug an ASP.NET Core application. This works quite well:

dotnet dev-certs https --trust

This command trusts the development certificate, by adding it to the certificate store or to the keychain on Mac.

On Windows you should use the certificate store to register HTTPS certificates. This is the most secure way on Windows machines. But I also like the idea of storing the password-protected certificate directly in the web folder or somewhere on the web server. This makes it pretty easy to deploy the application to different platforms, because Linux and Mac use different ways to store certificates. Fortunately, there is a way in ASP.NET Core to create an HTTPS connection using a certificate file stored on the hard drive. ASP.NET Core is completely customizable: if you want to replace the default certificate handling, feel free to do it.

To change the default handling, open the Program.cs and take a quick look at the code, especially to the method CreateWebHostBuilder:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>();

This method creates the default WebHostBuilder, which has a lot of stuff preconfigured that works great in most scenarios. But it is possible to override all of the default settings here and replace them with custom configurations. We need to tell the Kestrel web server which host and port it needs to listen on, and we are able to configure the ListenOptions for specific ports. In these ListenOptions we can use HTTPS and pass in the certificate file and a password for that file:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps("certificate.pfx", "topsecret");
            });
        })
        .UseStartup<Startup>();

Usually we would read these values from a configuration file or from environment variables instead of hardcoding them.

Be sure the certificate is password protected using a long password or, even better, a passphrase. Also be sure not to store the password or the passphrase in a configuration file. In development mode you should use the user secrets to store such secret data, and in production mode Azure Key Vault could be an option.

Conclusion

I hope this gives you a rough overview of the usage of HTTPS in ASP.NET Core. This is not really a deep dive, but it tries to explain what the new middlewares are good for and how to configure HTTPS for different platforms.

BTW: I just saw in the blog post about HTTPS improvements and HSTS in ASP.NET Core that there is a way to store the HTTPS configuration in the launchSettings.json. This is an easy way to pass environment variables to the application on startup. The samples also show how to add the certificate password to this settings file. Please never ever do this! A file is easily shared to a source code repository or in other ways, so the password inside is shared as well. Please use different mechanisms to set passwords in an application, like the already mentioned user secrets or the Azure Key Vault.

Manfred Steyer: Angular Elements without zone.js

Since version 6, we can very easily create Web Components with Angular. To be more precise, we should speak about Custom Elements -- a standard behind the umbrella term Web Components that allows for creating custom HTML elements.

However, Angular depends on zone.js for change detection, and in most cases we don't want to force the consumers of our widgets into using it.

In this short article, I explain why excluding zone.js is a good idea and how to deal with the consequences. The sample I use for this can be found in my GitHub repo. Make sure to use the branch noop-zone.

Why zone.js might be a bad idea for Custom Elements

In general, we want our Custom Elements to be as small as possible in terms of bundle size. The upcoming Ivy view engine will help a lot with this goal, as it produces more tree-shakable code and hence allows Angular to mostly shake itself away during compilation.

Another approach to shrink bundles is reusing Angular packages across several Angular Elements and the host application. After an enlightening discussion with Angular's Rob Wormald, I've created ngx-build-plus -- a simple CLI extension that helps to implement this idea.

However, in both cases we cannot get rid of zone.js, which Angular has used since its first days for change detection. This library monkey-patches a lot of browser objects to get informed about all events, after which Angular needs to check the displayed components for changes.

While this provides convenience in Angular applications, having such a dependency for a custom element is not desirable, especially when the hosting application is not Angular-based: not every consumer wants to monkey-patch browser objects, and in many cases zone.js is bigger than the custom element itself.

Getting rid of zone.js

Getting rid of zone.js is the easiest part. Just configure the noop zone (no-operation zone) when bootstrapping the Angular application:

platformBrowserDynamic()
  .bootstrapModule(AppModule, { ngZone: 'noop' })
  .catch(err => console.log(err));

However, dealing with the consequences of removing zone.js isn't that easy, as the consequence is that we have to trigger change detection manually.

Triggering Change Detection manually

For my demonstrations, I use a simple Angular component that displays three numeric values:

@Component({ [...] })
export class ExternalDashboardTileComponent {
  @Input() a: number;
  @Input() b: number;
  @Input() c: number;

  more(): void {
    this.a = Math.round(Math.random() * 100);
    this.b = Math.round(Math.random() * 100);
    this.c = Math.round(Math.random() * 100);
  }
}

It also provides a more method that updates those values. For the sake of simplicity, I use random numbers here.

The values are displayed in a table, and the method is bound to the click event of a button:

<table class="table table-condensed">
  <tr>
    <td>A</td>
    <td>{{a}}</td>
  </tr>
  <tr>
    <td>B</td>
    <td>{{b}}</td>
  </tr>
  <tr>
    <td>C</td>
    <td>{{c}}</td>
  </tr>
</table>
<button class="btn btn-default btn-sm" (click)="more()">More</button>

When using zone.js, Angular automatically performs change detection after the click event and hence updates the bound values. But without zone.js, Angular is not aware of the click event. This means we have to trigger change detection by hand.

This can be accomplished by calling the markForCheck method of the current ChangeDetectorRef:

@Component({ [...] })
export class ExternalDashboardTileComponent {
  @Input() a: number;
  @Input() b: number;
  @Input() c: number;

  constructor(private cd: ChangeDetectorRef) { }

  more(): void {
    this.a = Math.round(Math.random() * 100);
    this.b = Math.round(Math.random() * 100);
    this.c = Math.round(Math.random() * 100);
    this.cd.markForCheck();
  }
}

As this is a very explicit approach, one can easily forget to call the method at the right moment. Therefore, I present an alternative in the next section.

Push-Pipe

A more declarative way of triggering change detection is using Observables. Every time one provides a new value, a pipe can tell Angular to check for changes. While Angular comes with the async pipe for such cases, it also depends on zone.js.

What we need is a tuned async pipe. A prototypical (!) one comes from Fabian Wiles who is an active community member. He calls it push pipe.

To use it, we need to introduce an Observable. In my simple example, I put it directly into the component. In a more advanced case, it should be provided by a service instead. To be able to notify it directly, I'm also using a BehaviorSubject:

@Component({ [...] })
export class ExternalDashboardTileComponent implements OnInit {
  @Input() a: number;
  @Input() b: number;
  @Input() c: number;

  private statsSubject = new BehaviorSubject<Stats>(null);
  public stats$ = this.statsSubject.asObservable();

  [...]
}

To get along with just one Observable for all three values, I group them with a class Stats:

class Stats {
  constructor(
    readonly a: number,
    readonly b: number,
    readonly c: number
  ) { }
}

After Angular has created the component, we have to publish the three numeric values for the first time:

ngOnInit(): void {
  this.statsSubject.next(new Stats(this.a, this.b, this.c));
}

After each modification, we have to do the same:

more(): void {
  this.a = Math.round(Math.random() * 100);
  this.b = Math.round(Math.random() * 100);
  this.c = Math.round(Math.random() * 100);
  this.statsSubject.next(new Stats(this.a, this.b, this.c));
}

In the template, we can subscribe to the Observable with the new push pipe. In the next listing I'm using an ngIf for this. The as clause writes the received object into the stats template variable.

<div class="content" *ngIf="stats$ | push as stats">
  <div style="height:200px;">
    <br>
    <table class="table table-condensed">
      <tr>
        <td>A</td>
        <td>{{stats.a}}</td>
      </tr>
      <tr>
        <td>B</td>
        <td>{{stats.b}}</td>
      </tr>
      <tr>
        <td>C</td>
        <td>{{stats.c}}</td>
      </tr>
    </table>
    <button class="btn btn-default btn-sm" (click)="more()">More</button>
  </div>
</div>

Also, we can switch to OnPush now, as we are just relying on Observables and Immutables:

@Component({
  [...],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class ExternalDashboardTileComponent implements OnInit {
  [...]
}

Uli Armbruster: Sources for Defensive Design and Separation of Concerns are now online

The source code for my talks at the Karlsruher Entwicklertage and the DWX is now online:

  • The Super Mario kata with a focus on defensive design can be found here.
  • The checksum kata with a focus on Separation of Concerns can be found here.

On both pages you will also find the links to the PowerPoint slides. In July 2018, I will additionally publish the code of the checksum kata in the form of iterations and release both talks as YouTube videos.


Defensive Design talk at the DWX

If you want to use parts of this in your own talks, would like a training on the topic, or want me to give a talk in your community, contact me via the channels listed on GitHub.

Stefan Henneken: IEC 61131-3: The generic data type T_Arg

In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be effectively used. In the example described, a function compares two variables to determine whether the data type, data length and content are exactly the same. Instead of implementing a separate function for each data type, the same requirements can be implemented much more elegantly with only one function using data type ANY.

Some time ago, I had a similar task: a method had to be developed that accepts any number of parameters, where both the data types and the number of parameters were arbitrary.

During my first attempt to find a solution, I tried to use a variable-length array of type ARRAY [*] OF ANY. However, variable-length arrays can only be used as VAR_IN_OUT, and the data type ANY only as VAR_INPUT (see also IEC 61131-3: Arrays with variable length). This approach was therefore ruled out.

As an alternative to the data type ANY, the structure T_Arg is also available. T_Arg is declared in the TwinCAT library Tc2_Utilities and, in contrast to ANY, is also available in TwinCAT 2. The structure of T_Arg is similar to the structure used for the data type ANY (see also The wonders of ANY).

TYPE T_Arg :
STRUCT
  eType   : E_ArgType   := ARGTYPE_UNKNOWN; (* Argument data type *)
  cbLen   : UDINT       := 0;               (* Argument data byte length *)
  pData   : UDINT       := 0;               (* Pointer to argument data *)
END_STRUCT
END_TYPE

T_Arg can be used at any place, including in the VAR_IN_OUT section.

The following function adds any number of values, whose data types can also be arbitrary. The result is returned as LREAL.

FUNCTION F_AddMulti : LREAL
VAR_IN_OUT
  aArgs : ARRAY [*] OF T_Arg;
END_VAR
VAR
  nIndex : DINT;
  aUSINT : USINT;
  aUINT  : UINT;
  aINT   : INT;
  aDINT  : DINT;
  aREAL  : REAL;
  aLREAL : LREAL;
END_VAR

F_AddMulti := 0.0;
FOR nIndex := LOWER_BOUND(aArgs, 1) TO UPPER_BOUND(aArgs, 1) DO
  CASE (aArgs[nIndex].eType) OF
    E_ArgType.ARGTYPE_USINT:
      MEMCPY(ADR(aUSINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUSINT;
    E_ArgType.ARGTYPE_UINT:
      MEMCPY(ADR(aUINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUINT;
    E_ArgType.ARGTYPE_INT:
      MEMCPY(ADR(aINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aINT;
    E_ArgType.ARGTYPE_DINT:
      MEMCPY(ADR(aDINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aDINT;
    E_ArgType.ARGTYPE_REAL:
      MEMCPY(ADR(aREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aREAL;
    E_ArgType.ARGTYPE_LREAL:
      MEMCPY(ADR(aLREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aLREAL;
  END_CASE
END_FOR

However, calling the function is somewhat more complicated than with the data type ANY.

PROGRAM MAIN
VAR
  sum    : LREAL;
  args   : ARRAY [1..4] OF T_Arg;
  a      : INT := 4567;
  b      : REAL := 3.1415;
  c      : DINT := 7032345;
  d      : USINT := 13;
END_VAR

args[1] := F_INT(a);
args[2] := F_REAL(b);
args[3] := F_DINT(c);
args[4] := F_USINT(d);
sum := F_AddMulti(args);

The array passed to the function must be initialized first. The library Tc2_Utilities contains helper functions that convert a variable into a structure of type T_Arg (F_INT(), F_REAL(), F_DINT(), …). The function for adding the values has only one input variable of type ARRAY [*] OF T_Arg.

The data type T_Arg is used, for example, in the function block FB_FormatString() or in the function F_FormatArgToStr() of TwinCAT. The function block FB_FormatString() can replace up to 10 placeholders in a string with values of PLC variables of type T_Arg (similar to fprintf in C).

An advantage of ANY is the fact that the data type is defined by the IEC 61131-3 standard.

Even if the generic data types ANY and T_Arg do not correspond to the generics in C# or the templates in C++, they still support the development of generic functions in IEC 61131-3. These can now be designed in such a way that the same function can be used for different data types and data structures.

David Tielke: #DWX2018 - Contents of my sessions, workshops, and TV and radio interviews

From June 25 to June 28, the Developer Week 2018 took place again in Nuremberg. For four days, the NCC Ost of the Nuremberg exhibition center opened its doors to welcome thousands of knowledge-hungry developers, who could educate themselves on all things software development in sessions and workshops. As in every year, I was again responsible as track chair for the content of the two tracks Software Quality and Software Architectures. Besides shaping the program, I was also allowed to get active myself and pass on my knowledge to the participants in a total of four sessions, one evening event and one workshop.


Here is an overview of my contributions:

  • Session: Effective architectures with workflows
  • Session: Architecture for practice 2.0 (substitute talk)
  • Session: Metrics - how good is your software?
  • Session: Testing Everything
  • Evening event: SmartHome - the house of the future!
  • Workshop: Architecture for practice 2.0

TV interview with BR / ARD:


In connection with my talk on the topic of "Smarthome" on Monday evening, I was interviewed in advance by various media outlets such as BR, ARD, Nürnberger Nachrichten and Radio Gong. The BR's report is still available in their media library.

Materials for my workshops and sessions

As discussed with the participants in my sessions, I am now providing all relevant materials from my sessions here. These contents include my code samples from Visual Studio, my notes from OneNote as PDF, and above all my articles from my dotnetpro column "Davids Deep Dive" on this topic:

The password for both areas was announced at the conference and can alternatively be requested from me by email.

See you at Developer Week 2019!

Next year, the Developer Week will open its doors again, and I am already looking forward to it! I would like to thank all participants of my sessions once again for the great atmosphere and the interesting discussions; it was great fun again and, as always, an honor. A huge thank you also goes to the organizer, Developer Media, who once again put together an even better event than in the previous year. See you next year!

Golo Roden: Shorthand syntax for the console

The shorthand syntax of ES2015 allows objects to be defined in a simplified way when their values correspond to variables of the same name. This syntax can be put to use for console output to get more readable and traceable output.
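
A minimal sketch of the trick (the variable names are invented for illustration):

const host = 'localhost';
const port = 3000;

// The shorthand expands to { host: host, port: port },
// so the console output is labeled with the variable names.
console.log({ host, port });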

Jürgen Gutsch: Four times in a row

One year later, it is July 1st and I got the email from the Global MVP Administrator: I got the MVP award for the fourth time in a row :)

I'm pretty proud of and honored by that, and I'm really happy to be part of the great MVP community for one more year. I'm also looking forward to the Global MVP Summit next year to meet all the other MVPs from around the world.

Still not really a fan-boy...!?

I'm also proud of being an MVP because I never called myself a Microsoft fan-boy. And sometimes I also criticize some tools and platforms built by Microsoft (I feel like a bad boy). But I like most of the development tools built by Microsoft, I like to use the tools and frameworks, and I really like the new and open Microsoft: the way Microsoft now supports more than its own technologies and platforms. I like using VSCode, TypeScript and Webpack to create NodeJS applications. I like VSCode and .NET Core on Linux to build applications on a different platform than Windows. I also like to play around with UWP apps on Windows for IoT on a Raspberry Pi.

There are many more possibilities, many more platforms and many more customers to reach using the current Microsoft development stack. And it is really fun to play with it, to use it in real projects, to write about it in .NET magazines and in this blog, and to talk about it in user groups and at conferences.

In the last year of being an MVP, I also learned that it is kind of fun to contribute to Microsoft's open source projects, to be a part of those projects and to see my own work in them. If you like open source as well, contribute to the open source projects. Make the projects better, make the documentation better.

I also need to say Thanks

But I wouldn't get honored again without such a great development community. I wouldn't continue to contribute to the community without that positive feedback and without those great people. This is why the biggest "Thank You" goes to the development community :)

And like last year, I also need to say "Thank You" to my great family (my lovely wife and my three kids) who support me in spending so much time contributing to the community. I also need to say thanks to YooApplications AG, my colleagues and my boss for supporting me and allowing me to use part of my working time to contribute to the community.

Golo Roden: How to pad numbers in JavaScript

Since ES2017, JavaScript has the two functions padStart and padEnd for padding strings from the left and from the right, respectively. To make them work with numbers, the numbers must first be converted into strings with toString.
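
A minimal sketch, assuming a two-digit, zero-padded output is wanted:

const minutes = 7;

// Numbers have no padStart of their own, so convert to a string first.
const padded = minutes.toString().padStart(2, '0');

console.log(padded); // "07"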

Holger Schwichtenberg: Microsoft announces plans for ASP.NET Core and Entity Framework Core 2.2

Yesterday, Microsoft announced on GitHub both the schedule and the planned content for version 2.2 of ASP.NET Core and Entity Framework Core.

Golo Roden: How to parse Content-Types

The npm module content-type provides a parse function that can be used to analyze and decompose Content-Type headers in an RFC-compliant way.
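
A minimal usage sketch, assuming the module's documented parse result with a type and a parameters object:

const contentType = require('content-type');

// parse splits the media type from its parameters.
const parsed = contentType.parse('text/html; charset=utf-8');

console.log(parsed.type);               // 'text/html'
console.log(parsed.parameters.charset); // 'utf-8'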

Uli Armbruster: Causal chains: reasons why software projects fail

On this GitHub page, I have started to analyze in more detail the problems in software projects that we encounter every day, in order to be able to avoid or fix them.

In conversations with participants of my workshops, I am regularly told about symptoms where I believe that, as so often, the cause lies deeper.

I have titled this "causal chains" and follow this scheme:

  • What is the perceived problem, i.e. the symptom?
  • How did it come about, i.e. the course of events?
  • Why did it come about, i.e. what is the cause?

In addition, I am looking for comprehensible examples to back up the theory.

I see the page as an incentive to think. All "theses" are meant as a starting point for a lively discussion.

Feel free to give me feedback in the form of pull requests.

Uli Armbruster: Date for nossued 2019

This year, the #nossued took place before the start of the summer holidays in most federal states. The feedback from the last few years was that some people could not come because of the school holidays.

Besides taking into account the staggered holidays in the individual federal states, established events such as dotnet Cologne naturally also have to be considered.

Some people struggle with June/July because of the football tournaments taking place every two years, or because they prefer to use the warm summer time for leisure activities. Others, in turn, appreciate exactly that: the bright sun and the possibility of using the roof terrace.

Therefore, we as the organizers would like to know what you think of the possible dates for 2019. Earlier dates would also be conceivable, such as the periods from March 8 to April 4 and from April 27 to May 31. What does the community think?

We would appreciate your feedback, e.g. by naming good periods or less suitable months.

Christian Dennig [MS]: Open Service Broker for Azure (OSBA) with Azure Kubernetes Service (AKS)

In case you missed it, the Azure Managed Kubernetes Service (AKS) has been released today (June 13th 2018, hoooooray 🙂 see the official announcement from Brendan Burns here) and it is now possible to run production workloads on a fully Microsoft-managed Kubernetes cluster in the Azure cloud. “Fully managed” means that the K8s control plane and the worker nodes (infrastructure) are managed by Microsoft (API server, Docker runtime, scheduler, etcd server…), security patches are applied to the underlying OS on a daily basis, you get Azure Active Directory integration (currently in preview) etc. And what’s really nice: you only pay for the worker nodes, the control plane is completely free!

The integration of Kubernetes into the Azure infrastructure is really impressive, but when it comes to service integration and provisioning on the cloud platform, there is still room for improvement… but it’s on its way! The Open Service Broker for Azure (update: version 1.0 reached) closes the gap between Kubernetes workloads that require certain Azure services and the provisioning of these services, as it makes it possible, e.g., to create a SQL Server instance in Azure on the fly via a Kubernetes YAML file during the deployment of other Kubernetes objects. Sounds good? Let’s see how this works.

Creating a Kubernetes Demo Cluster

First of all, we need a Kubernetes cluster to be able to test the Open Service Broker for Azure. We are going to use the Azure CLI, so please make sure you have installed the latest version of it.

Okay, so let’s create an Azure resource group where we can deploy the AKS cluster to afterwards:

# resource group
az group create --name osba-demo-rg --location westeurope

# AKS cluster - version must be above 1.9 (!)
az aks create `
        --resource-group osba-demo-rg `
        --name osba-k8sdemo-cluster `
        --generate-ssh-keys `
        --kubernetes-version 1.9.6

When the deployment of the cluster has finished, download the corresponding kubeconfig file:

az aks get-credentials `
        --resource-group osba-demo-rg `
        --name osba-k8sdemo-cluster

Now we are ready to use kubectl to work with the newly created cluster. Test the connection by querying the available worker nodes of the cluster:

kubectl get nodes

You should see something like this:

nodes

Before we can install the Open Service Broker, we also need a service principal in Azure that is able to interact with the Azure Resource Manager and create resources on our behalf (think of it as a “service account” in Linux / Windows).

az ad sp create-for-rbac --name osba-demo-principal -o table

Important: Remember “Tenant”, “Application ID” and “Password”, as you will need these values when installing OSBA.

Installing OSBA

Cluster Installation

We are using Helm to install OSBA on our cluster, so we first need to prepare the local machine for Helm (FYI: your AKS cluster is ready to use Helm by default, so there's no need to install anything on it; however, you need to install the Helm client on your workstation):

helm init

Next, we need to deploy the Service Catalog on the cluster:

helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

helm install svc-cat/catalog --name catalog --namespace catalog \
   --set rbacEnable=false \
   --set apiserver.storage.etcd.persistence.enabled=true

Now we are ready to deploy OSBA to the cluster:

# add the Azure charts repository
helm repo add azure https://kubernetescharts.blob.core.windows.net/azure

# finally, add the service broker for Azure

helm install azure/open-service-broker-azure --name osba --namespace osba `
  --set azure.subscriptionId=<Your Subscription ID>  `
  --set azure.tenantId=<Tenant> `
  --set azure.clientId=<Service Principal Application ID> `
  --set azure.clientSecret=<Service Principal Password>

Info: In case you don’t know your Azure subscription Id, run…

az account show

…and use the value of property “id”.

You can check the status of the deployments (catalog & service broker) by querying the running pods in the namespaces catalog and osba.

pods

Service Catalog Client Tools

Service Catalog comes with its own command line interface. So you need to install it on your machine (installation instructions).

Using the OSBA for Service Provisioning

Now, we are prepared to provision / create so-called “ServiceInstances” (Azure resources) and “bind” them via “ServiceBindings” in order to be able to use them as resources/endpoints/services etc. in our pods.

In the current example, we want to provision an Azure SQL DB. So first of all, we need to create a service instance of the database. Therefore, use the following YAML definition:

As you can see, there are some values you have to provide to OSBA:

  • clusterServiceClassExternalName – in our case, we want to create an Azure SQL DB. You can query the available service classes by using the following command: svcat get classes. We will be using azure-sql-12-0.
  • clusterServicePlanExternalName – the service plan name which represents the service tier in Azure. Use svcat describe classes azure-sql-12-0 to show the available service plans for class azure-sql-12-0. We will be using standard-s1.
  • resourceGroup – the Azure resource group for the server and database

 

 

classes

Show available service classes via “svcat get classes”

azure-sql-12-0

Show available service plans for class “azure-sql-12-0” via “svcat describe classes azure-sql-12-0”

Now, create the service via kubectl:

kubectl create -f .\service-instance.yaml

Query the service instances by using the Service Catalog CLI:

svcat get instances

The result should be (after a short amount of time) something like that:

instances

In the Azure portal, you should also see these newly created resources:

portal_resources

Now that we have created the service instance, let’s bind the instance, in order to be able to use it. Here’s the YAML file for it:

kubectl create -f service-binding.yaml

As seen with the service instance, the service binding also needs some parameters in order to work. Of course, the binding needs a reference to the service instance it wants to use (instanceRef). The more interesting property is secretName. While creating the binding, the service broker also creates a secret in the current namespace, to which important values (like passwords, server name, database name, URIs etc.) are added. You can reference the secret values afterwards in your K8s deployments and add them e.g. as environment variables to your pods.

Now let’s see if the binding has been created, via svcat:

bindings

That looks good. Over to the Kubernetes dashboard to see if the secret has been created in the default namespace.

secret

Kubernetes secret

It seems like everything was “bound” for usage as expected and we are now ready to use the Azure SQL DB in our containers/pods!

Wrap Up

As you have seen in this example, with the Open Service Broker for Azure it is very easy to create Azure resources via Kubernetes object definitions. You simply need to install OSBA on your cluster with Helm! Afterwards, you can create and bind Azure services like Azure SQL DB. If you are curious which resource providers are supported: there are currently three services that are available:

…and some experimental services:

  • Azure CosmosDB
  • Azure KeyVault
  • Azure Redis Cache
  • Azure Event Hubs
  • Azure Service Bus
  • Azure Storage
  • Azure Container Instances
  • Azure Search

The up-to-date list can always be found here: https://github.com/Azure/open-service-broker-azure/tree/master/docs/modules

Have fun with it 🙂

Holger Schwichtenberg: ASP.NET Blazor 0.4 released

The fourth preview version of Microsoft's .NET-based framework for WebAssembly programming offers several improvements.

Jürgen Gutsch: Creating a signature pad using Canvas and ASP.​NET Core Razor Pages

In one of our projects, we needed to add the possibility to add signatures to PDF documents. A technician fills out a checklist online, and afterwards a responsible person and the technician need to sign the checklist. The signatures then get embedded into a generated PDF document together with the results of the checklist. The signatures must be created on a web UI running on an iPad Pro.

It was pretty clear that we needed to use the HTML5 canvas element and capture the pointer movements. Fortunately, we stumbled upon a pretty cool library on GitHub, created by Szymon Nowak from Poland: the super awesome Signature Pad, written in TypeScript and available as an NPM and Yarn package. It is also possible to use a CDN to include Signature Pad.

Use Signature Pad

Using Signature Pad is really easy and works well without any configuration. Let me quickly show you how it works:

To play around with it, I created a new ASP.NET Core Razor Pages web using the dotnet CLI:

dotnet new razor -n SignaturePad -o SignaturePad

I added a new razor page called Signature and added it to the menu in the _Layout.cshtml. I created a simple form and placed some elements in it:

<form method="POST">
    <p>
        <canvas width="500" height="400" id="signature" 
                style="border:1px solid black"></canvas><br>
        <button type="button" id="accept" 
                class="btn btn-primary">Accept signature</button>
        <button type="submit" id="save" 
                class="btn btn-primary">Save</button><br>
        <img width="500" height="400" id="savetarget" 
             style="border:1px solid black"><br>
        <input type="text" asp-for="@Model.SignatureDataUrl"> 
    </p>
</form>

The form posts its content to the current URL, which is the same Razor page, but handled by a different HTTP method handler. We will have a look at that later on.

The canvas is the most important thing. This is the area where the signature gets drawn. I added a border to make the pad boundaries visible on the screen. I added a button to accept the signature, which means we lock the canvas and write the image data to the input field added as the last element. I also added a second button to submit the form. The image is just to validate the signature and is not really needed, but I was curious how it would look in an image tag.

This is not the nicest HTML code but works for a quick test.

Right after the form, I added a script section to render the JavaScript at the end of the page. To get it running quickly, I use jQuery to access the HTML elements. I also copied the signature_pad.min.js into the project instead of using the CDN version:

@section Scripts{
    <script src="~/js/signature_pad.min.js"></script>
    <script>
        $(function () {

            var canvas = document.querySelector('#signature');
            var pad = new SignaturePad(canvas);

            $('#accept').click(function(){

                var data = pad.toDataURL();

                $('#savetarget').attr('src', data);
                $('#SignatureDataUrl').val(data);
                pad.off();
            
            });
                    
        });
    </script>
}

As you can see, creating the Signature Pad is simply done by creating a new instance of SignaturePad and passing in the canvas as an argument. On click of the accept button, I start working with the pad. The function toDataURL() generates an image data URL that can be directly used as an image source, like I do in the next line. After that, I store the result as the value of the input field to send it to the server. In production this should be a hidden field. At the end, I switch the Signature Pad off to lock the canvas so the user cannot manipulate the signature anymore.

Handling the image data URL with C#

The image data URL looks like this:

data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAAGQCAYA...

So after the comma, the image is a Base64-encoded string. The data before the comma describes the image type and the encoding. I now send the complete data URL to the server, where we need to decode the string.

public void OnPost()
{
    if (String.IsNullOrWhiteSpace(SignatureDataUrl)) return;

    var base64Signature = SignatureDataUrl.Split(",")[1];            
    var binarySignature = Convert.FromBase64String(base64Signature);

    System.IO.File.WriteAllBytes("Signature.png", binarySignature);
}

On the page model we need to create a new method OnPost() to handle the HTTP POST method. Inside, we first check whether the bound property has a value or not. Then we split the string by the comma and convert the Base64 string to a byte array.

With this byte array we can do whatever we need to do. In the current project I store the image directly in the PDF, and in this demo I just store the data in an image file on the hard drive.

Conclusion

As mentioned this is just a quick demo with some ugly code. But the rough idea could be used to make it better in Angular or React. To learn more about the Signature Pad visit the repository: https://github.com/szimek/signature_pad

This example also shows what is possible with HTML5 these days. I really like the possibilities of HTML5 and the HTML5 APIs used with JavaScript.

Hope this helps :-)

Code-Inside Blog: DbProviderFactories & ODP.NET: When even Oracle can be tamed

Oracle and .NET: Tales from the dark ages

Each time I tried to load data from an Oracle database, it was a pretty terrible experience.

I remember that I struggled to find the right Oracle driver, and even when everything was installed, the strange TNS ora config file popped up and nothing worked.

It can be simple…

Two weeks ago I had the pleasure of loading some data from an Oracle database and discovered something beautiful: it can actually be pretty simple today.

The way to success:

1. Just ignore the System.Data.OracleClient-Namespace

The implementation is pretty old, and if you go this route you will end up in the terrible “Oracle driver/tns.ora” chaos mentioned above.

2. Use the Oracle.ManagedDataAccess:

Just install the official NuGet package and you are done. The single .dll contains all the bits to connect to an Oracle database. No driver installation or additional software is needed. Yay!

The NuGet package will add some config entries in your web.config or app.config. I will cover this in the section below.

3. Use sane ConnectionStrings:

Instead of the wild Oracle TNS config stuff, just use a (more or less) sane ConnectionString.

You can either just use the same configuration you would normally do in the TNS file, like this:

Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=MyHost)(PORT=MyPort)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MyOracleSID)));User Id=myUsername;Password=myPassword;

Or use the even simpler “easy connect name schema” like this:

Data Source=username/password@myserver//instancename;

DbProviderFactories & ODP.NET

As I mentioned earlier after the installation your web or app.config might look different.

The most interesting addition is the registration in the DbProviderFactories-section:

...
<system.data>
    <DbProviderFactories>
      <remove invariant="Oracle.ManagedDataAccess.Client"/>
      <add name="ODP.NET, Managed Driver" invariant="Oracle.ManagedDataAccess.Client" description="Oracle Data Provider for .NET, Managed Driver"
          type="Oracle.ManagedDataAccess.Client.OracleClientFactory, Oracle.ManagedDataAccess, Version=4.122.1.0, Culture=neutral, PublicKeyToken=89b483f429c47342"/>
    </DbProviderFactories>
  </system.data>
...

I covered this topic a while ago in an older blogpost, but to keep it simple: It also works for Oracle!

		private static void OracleTest()
        {
            string constr = "Data Source=localhost;User Id=...;Password=...;";

            DbProviderFactory factory = DbProviderFactories.GetFactory("Oracle.ManagedDataAccess.Client");

            using (DbConnection conn = factory.CreateConnection())
            {
                try
                {
                    conn.ConnectionString = constr;
                    conn.Open();

                    using (DbCommand dbcmd = conn.CreateCommand())
                    {
                        dbcmd.CommandType = CommandType.Text;
                        dbcmd.CommandText = "select name, address from contacts WHERE UPPER(name) Like UPPER('%' || :name || '%') ";

                        var dbParam = dbcmd.CreateParameter();
                        // prefix with : possible, but @ will be result in an error
                        dbParam.ParameterName = "name";
                        dbParam.Value = "foobar";

                        dbcmd.Parameters.Add(dbParam);

                        using (DbDataReader dbrdr = dbcmd.ExecuteReader())
                        {
                            while (dbrdr.Read())
                            {
                                Console.WriteLine(dbrdr[0]);
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                    Console.WriteLine(ex.StackTrace);
                }
            }
        }

MSSQL, MySql and Oracle - via DbProviderFactories

The above code is a snippet from my larger sample demo covering MSSQL, MySQL and Oracle. If you are interested just check this demo on GitHub.

Each SQL dialect treats parameters a bit differently, so make sure you use the correct syntax for your target database.

Bottom line

Accessing an Oracle database from .NET doesn’t need to be a pain nowadays.

Be aware that the ODP.NET provider might surface higher-level APIs to work with Oracle databases. The DbProviderFactory approach helped us in our simple “just load some data” scenario.

Hope this helps.

MSDN Team Blog AT [MS]: Get started with Artificial Intelligence and Machine Learning

  • Are you interested in Cognitive Services and custom machine learning?
  • Do you want to work with neural networks and frameworks like TensorFlow and CNTK?
  • Do you want to dive deep into a technical discussion with machine learning specialists?
  • Do you want to have your planned project examined in advance, or are you looking for tips for its implementation?

In a three-day event, we bring you together with other software engineers to sharpen your machine learning skills through a series of structured challenges and then to solve problems in the computer vision area together with you.

At the end of June, we are offering a free OpenHack on Artificial Intelligence and Machine Learning as a readiness measure.

At the OpenHacks, we give the participants, i.e. you, tasks that you have to solve yourselves. This results in an enormously high learning efficiency and a remarkable transfer of knowledge. For all developers who work on these topics, it is pretty much a must.

So if you are actively working on Artificial Intelligence & Machine Learning, or want to, come yourself or send other developers. Together with the software engineers of the CSE, you can really dig into the topic.

Oh, and very important: bring your own laptop and "come prepared to hack!!". No marketing, no sales. Pure hacking!!

The prerequisites:

  • Bring your own device for development
  • At least basic knowledge of Python and data structures would be good.
    A small tip: work through "Intro to Python for Data Science" as an introduction or refresher.
  • Not essential, but certainly helpful: first experience with machine learning
  • Target audience: everyone who can develop: developers, architects, data scientists, …

As CSE (Commercial Software Engineering), we can also support you in implementing challenging cloud projects afterwards. The goal is, of course, to motivate you to think about or start your own AI & ML projects. As a small inspiration, here are a few projects that were created in cooperation with the CSE: https://www.microsoft.com/developerblog/category/machine-learning/

Register at: https://aka.ms/openhackberlin

We are looking forward to seeing you!

Code-Inside Blog: CultureInfo.GetCultureInfo() vs. new CultureInfo() - what's the difference?

The problem

The problem started with a simple code:

double.TryParse("1'000", NumberStyles.Any, culture, out _)

Be aware that the given culture was “de-CH” and the Swiss use the ' as the group separator for numbers.

Unfortunately, the Swiss authorities have abandoned the ' for currencies, but it is widely used in industry, and such numbers need to be parsed or displayed.

Now Microsoft steps in, and they use a very similar char in the “de-CH” region setting:

  • The baked-in char to separate numbers: ’ (CharCode: 8217)
  • The obvious choice would be: ' (CharCode: 39)

The result of this configuration hell:

If you don’t change the region settings in Windows you can’t parse doubles with this fancy group separator.

Stranger things:

My work machine is running the EN-US version of Windows and my tests were failing because of this madness, but it was even stranger: some other tests (quite similar to what I did) were OK on our company DE-CH machines.

But… why?

After some crazy time I discovered that our company DE-CH machines (and the machines from our customer) were using the “sane” group separator, but my code still didn’t work as expected.

Root cause

The root problem (besides the stupid char choice) was this: I used the “wrong” method to get the “DE-CH” culture in my code.

Let’s try out this demo code:

class Program
    {
        static void Main(string[] args)
        {
            var culture = new CultureInfo("de-CH");

            Console.WriteLine("de-CH Group Separator");
            Console.WriteLine(
                $"{culture.NumberFormat.CurrencyGroupSeparator} - CharCode: {(int) char.Parse(culture.NumberFormat.CurrencyGroupSeparator)}");
            Console.WriteLine(
                $"{culture.NumberFormat.NumberGroupSeparator} - CharCode: {(int) char.Parse(culture.NumberFormat.NumberGroupSeparator)}");

            var cultureFromFramework = CultureInfo.GetCultureInfo("de-CH");

            Console.WriteLine("de-CH Group Separator from Framework");
            Console.WriteLine(
                $"{cultureFromFramework.NumberFormat.CurrencyGroupSeparator} - CharCode: {(int)char.Parse(cultureFromFramework.NumberFormat.CurrencyGroupSeparator)}");
            Console.WriteLine(
                $"{cultureFromFramework.NumberFormat.NumberGroupSeparator} - CharCode: {(int)char.Parse(cultureFromFramework.NumberFormat.NumberGroupSeparator)}");
        }
    }

The result should be something like this:

de-CH Group Separator
’ - CharCode: 8217
’ - CharCode: 8217
de-CH Group Separator from Framework
’ - CharCode: 8217
’ - CharCode: 8217

Now change the region setting for de-CH and see what happens:


de-CH Group Separator
’ - CharCode: 8217
X - CharCode: 88
de-CH Group Separator from Framework
’ - CharCode: 8217
’ - CharCode: 8217

Only the first CultureInfo instance picked up the change!

Modified vs. read-only

The problem can be summarized with: RTFM!

From the MSDN for GetCultureInfo: Retrieves a cached, read-only instance of a culture.

The “new CultureInfo” constructor will pick up the changed settings from Windows.

TL;DR:

  • CultureInfo.GetCultureInfo will return a “baked-in” culture, which might be very fast, but doesn’t respect user changes.
  • If you need to use the modified values from Windows: use the normal CultureInfo constructor.

Hope this helps!

Stefan Henneken: IEC 61131-3: The ‘Observer’ Pattern

The Observer Pattern is suitable for applications that require one or more function blocks to be notified when the state of a particular function block changes. The assignment of the communication participants can be changed at runtime of the program.

In almost every IEC 61131-3 program, function blocks exchange states with each other. In the simplest case, one input of one FB is assigned the output of another FB.

Pic01

This makes it very easy to exchange states between function blocks. But this simplicity has its price:

Inflexibility. The assignment between fbSensor and the three instances of FB_Actuator is hard-coded in the program. Dynamic assignment between the FBs during runtime is not possible.

Fixed dependencies. The data type of the output variable of FB_Sensor must be compatible with the input variable of FB_Actuator. If there is a new sensor component whose output variable is incompatible with the previous data type, this necessarily results in an adjustment of the data type of the actuators.

Problem Definition

The following example shows how, with the help of the observer pattern, the fixed assignment between the communication participants can be dispensed with. The sensor reads a measured value (e.g. a temperature) from a data source, while the actuator performs actions depending on a measured value (e.g. temperature control). The communication between the participants should be changeable. If these disadvantages are to be eliminated, two basic OO design principles are helpful:

  • Identify those areas that remain constant and separate them from those that change.
  • Never program directly to implementations, but always to interfaces. The assignment between input and output variables must therefore no longer be permanently implemented.

    This can be realized elegantly with the help of interfaces that define the communication between the FBs. There is no longer a fixed assignment of input and output variables. This results in a loose coupling between the participants. Software design based on loose coupling makes it possible to build flexible software systems that cope better with changes, since dependencies between the participants are minimized.

    Definition of Observer Pattern

    The observer pattern provides an efficient communication mechanism between several participants, whereby one or more participants depend on the state of one particular participant. The participant providing the state is called the Subject (FB_Sensor). The participants that depend on this state are called Observers (FB_Actuator).

    The Observer pattern is often compared to a newspaper subscription service. The publisher is the subject, while the subscribers are the observers. The subscriber must register with the publisher. When registering, you may also specify which information you would like to receive. The publisher maintains a list in which all subscribers are stored. As soon as a new publication is available, the publisher sends the desired information to all subscribers in the list.

    This is expressed more formally in the book “Design Patterns: Elements of Reusable Object-Oriented Software” by Gamma, Helm, Johnson and Vlissides:

    The Observer pattern defines a 1-to-n dependency between objects, so that changing the state of an object causes all dependent objects to be notified and automatically updated.

    Implementation

    How the subject receives the data and how the observer processes it is not discussed here in more detail.

    Observer

    The subject notifies the observer via the method Update() when the value changes. Since this behaviour is the same for all observers, the interface I_Observer is defined, which is implemented by all observers.

    The function block FB_Observer also defines a property that returns the current actual value.

    Pic02 Pic03

    Since the data is exchanged by method, no further inputs or outputs are required.

    FUNCTION_BLOCK PUBLIC FB_Observer IMPLEMENTS I_Observer
    VAR
      fValue : LREAL;
    END_VAR
    

    Here is the implementation of the method Update():

    METHOD PUBLIC Update
    VAR_INPUT
      fValue : LREAL;
    END_VAR
    THIS^.fValue := fValue;
    

    and the property fActualValue:

    PROPERTY PUBLIC fActualValue : LREAL
    fActualValue := THIS^.fValue;
    

    Subject

    The subject manages a list of observers. Using the methods Attach() and Detach(), the individual observers can register and unregister.

    Pic04 Pic05

    Since all observers implement the interface I_Observer, the list is of type ARRAY[1..Param.cMaxObservers] OF I_Observer. The exact implementation of the observers does not have to be known at this point. Further variants of observers can be created; as long as they implement the interface I_Observer, the subject can communicate with them.

    The method Attach() takes the interface pointer to the observer as a parameter. Before it is stored in the list, the method checks whether it is valid and not already contained in the list.

    METHOD PUBLIC Attach : BOOL
    VAR_INPUT
      ipObserver            : I_Observer;
    END_VAR
    VAR
      nIndex                : INT := 0;
    END_VAR
    
    Attach := FALSE;
    IF (ipObserver = 0) THEN
      RETURN;
    END_IF
    // is the observer already registered?
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] = ipObserver) THEN
        RETURN;
      END_IF
    END_FOR
    
    // save the observer object into the array of observers and send the actual value
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] = 0) THEN
        THIS^.aObservers[nIndex] := ipObserver;
        THIS^.aObservers[nIndex].Update(THIS^.fValue);
        Attach := TRUE;
        EXIT;
      END_IF
    END_FOR
    

    The method Detach() also takes the interface pointer to the observer as a parameter. If the interface pointer is valid, the observer is searched for in the list and the corresponding entry is cleared.

    METHOD PUBLIC Detach : BOOL
    VAR_INPUT
      ipObserver             : I_Observer;
    END_VAR
    VAR
      nIndex                 : INT := 0;
    END_VAR
    
    Detach := FALSE;
    IF (ipObserver = 0) THEN
      RETURN;
    END_IF
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] = ipObserver) THEN
        THIS^.aObservers[nIndex] := 0;
        Detach := TRUE;
      END_IF
    END_FOR
    

    If there is a state change in the subject, the method Update() is called on all valid interface pointers in the list. This functionality is found in the private method Notify().

    METHOD PRIVATE Notify
    VAR
      nIndex : INT := 0;
    END_VAR
    
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] <> 0) THEN
        THIS^.aObservers[nIndex].Update(THIS^.fActualValue);
      END_IF
    END_FOR
    

    In this example, the subject generates a random value every second and then notifies the observers using the Notify() method.

    FUNCTION_BLOCK PUBLIC FB_Subject IMPLEMENTS I_Subject
    VAR
      fbDelay : TON;
      fbDrand : DRAND;
      fValue : LREAL;
      aObservers : ARRAY [1..Param.cMaxObservers] OF I_Observer;
    END_VAR
    
    // creates every sec a random value and invoke the update method
    fbDelay(IN := TRUE, PT := T#1S);
    IF (fbDelay.Q) THEN
      fbDelay(IN := FALSE);
      fbDrand(SEED := 0);
      fValue := fbDrand.Num * 1234.5;
      Notify();
    END_IF
    

    There is no statement in the subject to access FB_Observer directly. Access always takes place indirectly via the interface I_Observer. An application can be extended with any observer. As long as it implements the interface I_Observer, no adjustments to the subject are necessary.

    Pic06 

    Application

    The following module should help to test the example program. A subject and two observers are created in it. By setting appropriate auxiliary variables, the two observers can be both connected to the subject and disconnected again at runtime.

    PROGRAM MAIN
    VAR
      fbSubject         : FB_Subject;
      fbObserver1       : FB_Observer;
      fbObserver2       : FB_Observer;
      bAttachObserver1  : BOOL;
      bAttachObserver2  : BOOL;
      bDetachObserver1  : BOOL;
      bDetachObserver2  : BOOL;
    END_VAR
    
    fbSubject();
    
    IF (bAttachObserver1) THEN
      fbSubject.Attach(fbObserver1);
      bAttachObserver1 := FALSE;
    END_IF
    IF (bAttachObserver2) THEN
      fbSubject.Attach(fbObserver2);
      bAttachObserver2 := FALSE;
    END_IF
    IF (bDetachObserver1) THEN
      fbSubject.Detach(fbObserver1);
      bDetachObserver1 := FALSE;
    END_IF
    IF (bDetachObserver2) THEN
      fbSubject.Detach(fbObserver2);
      bDetachObserver2 := FALSE;
    END_IF
    

    Sample 1 (TwinCAT 3.1.4022) on GitHub

    Improvements

    Subject: Interface or base class?

    The necessity of the interface I_Observer is obvious in this implementation. Access to an observer is decoupled from implementation by the interface.

    However, the interface I_Subject does not appear necessary here. And in fact, the interface I_Subject could be omitted. However, I have included it anyway, because it keeps the option open to create special variants of FB_Subject. For example, there might be a function block that does not organize the observer list in an array. The methods for registering and deregistering the different observers could then be accessed generically via the interface I_Subject.

    The disadvantage of the interface, however, is that the code for registering and deregistering observers must be implemented each time, even if the application does not require anything special. Instead, a base class (FB_SubjectBase) seems more useful for the subject. The management code for the methods Attach() and Detach() could be moved to this base class. If it is necessary to create a special subject (FB_SubjectNew), it can inherit from this base class (FB_SubjectBase).

    But what if this special function block (FB_SubjectNew) already inherits from another base class (FB_Base)? Multiple inheritance is not possible (however, several interfaces can be implemented).

    Here, it makes sense to embed the base class in the new function block, i.e. to create a local instance of FB_SubjectBase.

    FUNCTION_BLOCK PUBLIC FB_SubjectNew EXTENDS FB_Base IMPLEMENTS I_Subject
    VAR
      fValue               : LREAL;
      fbSubjectBase        : FB_SubjectBase;
    END_VAR
    

    The methods Attach() and Detach() can then access this local instance.

    Method Attach():

    METHOD PUBLIC Attach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR
    
    Attach := FALSE;
    IF (THIS^.fbSubjectBase.Attach(ipObserver)) THEN
      ipObserver.Update(THIS^.fValue);
      Attach := TRUE;
    END_IF
    

    Method Detach():

    METHOD PUBLIC Detach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR
    Detach := THIS^.fbSubjectBase.Detach(ipObserver);
    

    Method Notify():

    METHOD PRIVATE Notify
    VAR
      nIndex : INT := 0;
    END_VAR
    
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.fbSubjectBase.aObservers[nIndex] <> 0) THEN
        THIS^.fbSubjectBase.aObservers[nIndex].Update(THIS^.fActualValue);
      END_IF
    END_FOR
    

    Thus, the new subject implements the interface I_Subject, inherits from the function block FB_Base and can access the functionalities of FB_SubjectBase via the embedded instance.

    Pic07

    Sample 2 (TwinCAT 3.1.4022) on GitHub

    Update: Push or pull method?

    There are two ways in which the observer receives the desired information from the subject:

    With the push method, all information is passed to the observer via the update method. Only one method call is required for the entire information exchange. In the example, only a single variable of data type LREAL is ever passed by the subject. But depending on the application, it can be considerably more data. However, not every observer always needs all the information that is passed to it. Furthermore, extensions are made more difficult: what if the method Update() is extended by further data? All observers must be adapted. This can be remedied by using a special function block as a parameter. This function block encapsulates all necessary information in properties. If additional properties are added, it is not necessary to adjust the update method.

    If the pull method is implemented, the observer receives only a minimal notification. It then retrieves all the information it needs from the subject itself. However, two conditions must be met. First, the subject should make all data available as properties. Second, the observer must be given a reference to the subject so that it can access these properties. One solution may be that the update method contains a reference to the subject (i.e. to itself) as a parameter.

    Both variants can certainly be combined with each other. The subject provides all relevant data as properties. At the same time, the update method can provide a reference to the subject and pass the most important information as a function block. This approach is the classic one used by numerous GUI libraries.

    Tip: If the subject knows little about its observers, the pull method is preferable. If the subject knows its observers (since there are only a few different types of observers), the push method should be used.
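
    To make the difference concrete, here is a minimal, language-agnostic sketch of both notification styles, written in TypeScript purely for illustration (it is not part of the IEC 61131-3 sample; the names are made up):

    // Push style: the subject hands the data directly to the observer.
    interface PushObserver {
      update(value: number): void;
    }

    // Pull style: the observer only receives a reference to the subject
    // and reads the properties it is actually interested in.
    interface ValueSubject {
      readonly actualValue: number;
    }

    interface PullObserver {
      update(subject: ValueSubject): void;
    }

    class LoggingObserver implements PullObserver {
      update(subject: ValueSubject): void {
        // The observer decides which data it pulls from the subject.
        console.log(`actual value: ${subject.actualValue}`);
      }
    }

    The combined variant mentioned above would simply pass the subject reference and the most important values together in the update call.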

    Holger Schwichtenberg: GroupBy finally works in Entity Framework Core 2.1 Release Candidate 1

    Tests with Entity Framework Core 2.1 Release Candidate 1 show that the translation of the LINQ GroupBy operator into SQL now actually works. Finally!

    Holger Schwichtenberg: The Highlights of Build 2018

    The Dotnet-Doktor summarizes the essential news from Microsoft's Build 2018 conference.

    Holger Schwichtenberg: Microsoft Build 2018: What can we expect?

    Microsoft's developer conference "Build 2018" starts on Monday, May 7, 2018, at 5 p.m. German time with the first keynote. Microsoft will likely once again announce news about .NET, .NET Core, Visual Studio, Azure and Windows.

    Manfred Steyer: The new Treeshakable Providers API in Angular: Why, How and Cycles

    Source code: https://github.com/manfredsteyer/treeshakable-providers-demo

    Big thanks to Alex Rickabaugh from the Angular team for discussing this topic with me and for giving me some valuable hints.


    Treeshakable providers come with a new optional API that helps tools like webpack or rollup to get rid of unused services during the build process. "Optional" means that you can still go with the existing API you are used to. Besides smaller bundles, this innovation also allows a more direct and easier way for declaring services. Also, it might be a first foretaste of a future where modules are optional.

    In this post, I'm showing several options for using this new API and also pointing out some pitfalls one might run into. The source code I'm using here can be found in my GitHub repository. Please note that each branch represents one of the scenarios mentioned below.

    Why and (a first) How?

    First of all, let me explain why we need treeshakable providers. For this, let's have a look at the following example that uses the traditional API:

    @NgModule({ [...] providers: [ { provide: FlightService, useClass: FlightService } // Alternative: FlightService ] [...] }) export class FlightBookingModule { }

    Let's assume our AppModule imports the displayed FlightBookingModule. In this case, we have the following dependencies:

    Traditional API

    Here you can see that the AppModule always indirectly references our service, regardless of whether it uses it or not. Hence, tree shaking tools decide against removing it from the bundle, even if it is not used at all.

    To mitigate this issue, the core team found a solution that follows a simple idea: Turning around one of the arrows:

    Traditional API

    In this case, the AppModule only has a dependency on the service when it uses it (directly or indirectly).

    To express this in your code, just make use of the providedIn property within the Injectable decorator:

    @Injectable({ providedIn: 'root' }) export class FlightService { constructor(private http: HttpClient) {} [...] }

    This property points to a module and the service will be put into this module's injection scope. The value 'root' is just a shortcut for the root injector's scope. Please note that this scope is used by all eagerly loaded (= not lazy-loaded) modules. Only lazy-loaded modules as well as components get their own scope, which inherits from the root scope. For this reason, you will very likely use 'root' in most cases.

    One nice thing about this API is that we don't have to modify the module anymore for registering the service. This means that we can inject the service immediately after writing it.
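
    For example, a component can simply request the service in its constructor; no providers entry in any module is required. A minimal sketch (the import path of the FlightService is an assumption):

    import { Component } from '@angular/core';
    import { FlightService } from './flight.service'; // assumed file location

    @Component({
      selector: 'app-flight-search',
      template: '...'
    })
    export class FlightSearchComponent {
      // Works without touching any NgModule, thanks to providedIn: 'root'.
      constructor(private flightService: FlightService) { }
    }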

    Why Providers and not Components?

    Now you might wonder why the very same situation doesn't prevent tree shaking for components or other declarations. The answer is: it does. That's why the Angular team wrote the build optimizer, which is used by the CLI when creating a production build. One of its tasks is removing the component decorator with its metadata, as it is not needed after AOT compilation and prevents tree shaking as shown.

    However, providers are a bit special: they are registered with a specific injection scope and provide a mapping between a token and a service. All this metadata is needed at runtime. Hence, the Angular team needed to go one step further, and this led to the API for treeshakable providers we are looking at here.

    Indirections

    The reason we are using dependency injection is that it allows for configuring indirections between a requested token and a provided service.

    For this, you can use known properties like useClass within the Injectable decorator to point to the service to inject:

    @Injectable({ providedIn: 'root', useClass: AdvancedFlightService, deps: [HttpClient] }) export class FlightService { constructor(private http: HttpClient) {} [...] }

    This means that every component and service requesting a FlightService gets an AdvancedFlightService.

    When I wrote this using version 6.0.0, I noticed that we have to mention the dependencies of the service useClass points to in the deps array. Otherwise, Angular uses the tokens from the current constructor. In the displayed example both expect an HttpClient, hence the deps array would not be needed. I think that future versions will solve this issue so that we don't need the deps array for useClass.

    In addition to useClass, you can also use the other known options: useValue, useFactory and useExisting. Multi providers do not seem to be supported by treeshakable providers, which makes sense because with this variety the token should not know the individual services in advance.

    This means we have to use the traditional API for this. As an alternative, we could build our own multi provider implementation by leveraging factories. I've included such an implementation in my examples; you can look it up here.
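
    Just to give an idea, the following is a simplified sketch of such a factory-based workaround (my own illustration with made-up plugin services, not necessarily the implementation from the linked examples). The obvious drawback is visible right away: the token has to know the individual services.

    import { InjectionToken, inject } from '@angular/core';
    import { LoggerPlugin } from './logger.plugin';   // hypothetical service
    import { MetricsPlugin } from './metrics.plugin'; // hypothetical service

    export interface AppPlugin { run(): void; }

    // The factory assembles the array that a real multi provider would collect.
    export const APP_PLUGINS = new InjectionToken<AppPlugin[]>('APP_PLUGINS', {
      providedIn: 'root',
      factory: () => [inject(LoggerPlugin), inject(MetricsPlugin)]
    });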

    Abstract Classes as Tokens

    In the last example, we needed to make sure that the AdvancedFlightService can replace the FlightService. A super type like an abstract class or an interface would at least ensure compatible method signatures.

    If we go with an abstract class, we can also use it as a token. This is a common practice for dependency injection: We are requesting an abstraction and get one of the possible implementations.

    Please note that we cannot use an interface as a token, even though this is usual in lots of other environments. The reason for this is that TypeScript removes interfaces during compilation, as JavaScript doesn't have such a concept. However, we need tokens at runtime to request a service, so we cannot go with interfaces.

    For this solution, we just need to move our Injectable decorator containing the DI configuration to our abstract class:

    @Injectable({ providedIn: 'root', useClass: AdvancedFlightService, deps: [HttpClient] }) export abstract class AbstractFlightService { [...] }

    Then, the services can implement this abstract class:

    @Injectable() export class AdvancedFlightService implements AbstractFlightService { [...] }

    Now, the consumers are capable of requesting the abstraction to get the configured implementation:

    @Component({ [...] }) export class FlightSearchComponent implements OnInit { constructor(private flightService: AbstractFlightService) { } [...] }

    This looks easy, but here is a pitfall. If you closely look at this example, you will notice a cycle:

    Cycle caused by abstract class that points to service that is implementing it

    However, in this very case we are lucky, because we are implementing and not extending the abstract class. This lesser-known feature allows us to treat the abstract class like an interface: TypeScript just uses it to check the methods and their signatures. After that, it removes the reference to it, and this resolves the cycle.

    But if we used extends here, the cycle would stay, and this would result in a chicken-and-egg problem causing issues at runtime. To make a long story short: always use implements in such cases.

    Registering Services with Lazy Modules

    In very rare cases, you want to register a service with the scope of a lazy module. This gives the lazy module its own service instance (its "own singleton") which can override a service of a parent's scope.

    For this, provideIn can point to the module in question:

    @Injectable({ providedIn: FlightBookingModule, useClass: AdvancedFlightService, deps: [HttpClient] }) export abstract class AbstractFlightService { }

    This seems to be easy but it also causes a cycle:

    Cycle caused by pointing to a module with providedIn

    In a good discussion with Alex Rickabaugh from the Angular team, I found out that we can resolve this cycle by putting the services into a service module of their own. I've called this module, which just contains services for the feature in question, FlightApiModule:

    Resolving cycle by introducing service module

    This means we just have to change providedIn to point to the new FlightApiModule:

    @Injectable({ providedIn: FlightApiModule, useClass: AdvancedFlightService, deps: [HttpClient] }) export abstract class AbstractFlightService { }

    In addition, the lazy module also needs to import the new service module:

    @NgModule({ imports: [ [...] FlightApiModule ], [...] }) export class FlightBookingModule { }

    Using InjectionTokens

    In Angular, we can also use InjectionToken objects to represent tokens. This allows us to create tokens for situations a class is not suitable for. To make this variety treeshakable too, the InjectionToken now takes a provider configuration:

    export const FLIGHT_SERVICE = new InjectionToken<FlightService>('FLIGHT_SERVICE', { providedIn: FlightApiModule, factory: () => new FlightService(inject(HttpClient)) } );

    For technical reasons, we have to specify a factory here. As there is no way to infer tokens from a function's signature, we have to use the shown inject method to get services by providing a token. Those services can be passed to the service the factory creates.

    Unfortunately, we cannot use inject with tokens represented by abstract classes. Even though Angular supports this, inject's signature does not currently (version 6.0.0) allow for it. The reason might be that TypeScript doesn't have a nice way to express types that point to abstract classes. Hopefully this will be resolved in the future. For instance, Angular could use a workaround or just allow any for tokens. For the time being, we can cast the abstract class to any, as it is compatible with every type.

    With this trick, we can create an injection token pointing to a service that uses our AbstractFlightService as a token.

    export const BOOKING_SERVICE = new InjectionToken<BookingService>('BOOKING_SERVICE', { providedIn: FlightApiModule, factory: () => new BookingService(inject(<any>AbstractFlightService)) } );

    Using Modules to Configure a Module

    Even though treeshakable providers come with a nicer API and help us to shrink our bundles, in some situations we have to go with the traditional API. One such situation was already outlined above: multi providers. Another case where we stick with the traditional API is providing services to configure a module. An example of this is the RouterModule with its static forRoot and forChild methods that take a router configuration.

    For this scenario we still need such static methods returning a ModuleWithProviders instance:

    @NgModule({ imports: [ CommonModule ], declarations: [ DemoComponent ], providers: [ /* no services */ ], exports: [ DemoComponent ] }) export class DemoModule { static forRoot(config: ConfigService): ModuleWithProviders { return { ngModule: DemoModule, providers: [ { provide: ConfigService, useValue: config } ] } } }
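
    A consumer then passes its configuration when importing the module. A minimal usage sketch (ConfigService, its baseUrl property and the file paths are assumptions):

    import { NgModule } from '@angular/core';
    import { BrowserModule } from '@angular/platform-browser';
    import { AppComponent } from './app.component';
    import { ConfigService } from './config.service';
    import { DemoModule } from './demo.module';

    const config = new ConfigService();
    config.baseUrl = 'https://example.org/api'; // made-up property

    @NgModule({
      imports: [ BrowserModule, DemoModule.forRoot(config) ],
      declarations: [ AppComponent ],
      bootstrap: [ AppComponent ]
    })
    export class AppModule { }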

    Martin Richter: Getting notified when the Symantec Endpoint Protection Manager (SEPM) does not update its virus definitions

    We have been using Symantec Endpoint Protection in my company for years, currently version 14.0.1.
    Basically the thing does what it is supposed to do. But… there is one case for which Symantec's toolbox offers no tool to solve the problem.

    What happens?

    Actually, you don't want to hear or see anything from an antivirus system. It should just work, and that's it.
    Especially in a company with 5, 10, 20 or more clients.
    The SEPM (Symantec Endpoint Protection Manager) dutifully notifies me when stations still have old virus definitions after several days, or when a certain number of PCs with old virus definitions has been reached. For us, these are often machines that are on the road or have not been switched on for a long time.

    But there is one case in which the SEPM fails completely: when the SEPM itself does not receive new virus definitions. For whatever reason!

    In recent years I have repeatedly had cases in which the SEPM did not download new virus definitions from Symantec. The reasons varied. Sometimes the SEPM had no internet access due to a configuration error, sometimes the SEPM did not even start after a Windows update.
    But in most cases the SEPM was simply unable to load the new signatures, even though it indicated that some were available for download.

    The last case is particularly annoying. I have already had two support cases open on this topic, but the honestly committed supporters still could not figure anything out.
    After restarting the service or the server, it almost always worked again. So apparently something had just got "stuck" internally!

    This case, however, is dangerous. You don't notice any of it until, after a few days, a certain number of PCs have old virus definitions. In our configuration that is 10% of the machines after 4 days. You can lower these thresholds, but then the warnings are mostly just annoying without a good reason.
    And in this case you cannot even simply restart the SEPM as a test.
    Actually, I don't want a notification at all; the system should first try to solve detected problems itself.
    Above all, I don't feel like tasking somebody with starting this stupid console once a day to check what's going on. I get emails about everything else, after all.

    I simply find such a long latency, during which nobody notices that the AV signatures are outdated, dangerous.
    But there are no built-in means to warn about it!
    Moreover, this case kept recurring roughly every 6 to 9 weeks.
    And that is annoying.

    So I went looking and wrote two small jobs for the SQL Server that holds our data, which are described below.
    These jobs have now been running for a few months and have already fixed this problem "on their own" several times…

    Job 1: Symantec Virus Signature Check

    This job runs once every hour.
    The code simply does the following:

    • If there has been a change in the signatures within the last 32 hours (see the value for @delta), everything is OK.
    • If there has been no signature update, two steps are initiated:
    • A warning is sent to the admin via the internal SQL mail service.
    • Afterwards, another job named Symantec SEPM Restart is started.

    The 32 hours are a value derived from experience. In 98% of all cases, the signatures are updated within 24 hours. But there are a few exceptions.

    If the email shows up more than twice, I probably have to take action and check things manually.

    DECLARE @delta INT
    -- number of hours
    SET @delta = 32 
    DECLARE @d DATETIME 
    DECLARE @t VARCHAR(MAX)
    IF NOT EXISTS(SELECT * FROM PATTERN WHERE INSERTDATETIME>DATEADD(hh,-@delta,GETDATE()) AND PATTERN_TYPE='VIRUS_DEFS')
    BEGIN
          SET @d = (SELECT TOP 1 INSERTDATETIME FROM PATTERN WHERE PATTERN_TYPE='VIRUS_DEFS' ORDER BY INSERTDATETIME DESC)
          SET @t = 'Hallo Admin!
    
    Die letzten Antivirus-Signaturen wurden am ' + CONVERT(VARCHAR, @d, 120)+' aktualisiert!
    Es wird versucht den SEPM Dienst neu zu starten!
    
    Liebe Grüße Ihr
    SQLServerAgent'
          EXEC msdb.dbo.sp_send_dbmail @profile_name='Administrator',
    				   @recipients='administrator@mydomain.de',
    				   @subject='Symantec Virus Definitionen sind nicht aktuell',
    				   @body=@t
          PRINT 'Virus Signaturen sind veraltet! Letztes Update: ' + CONVERT(VARCHAR, @d, 120)
          EXEC msdb.dbo.sp_start_job @job_name='Symantec Restart'
          PRINT 'Restart SEPM server!!!'
        END
      ELSE
        BEGIN 
          SET @d = (SELECT TOP 1 INSERTDATETIME FROM PATTERN WHERE PATTERN_TYPE='VIRUS_DEFS' ORDER BY INSERTDATETIME DESC)
          PRINT 'Virus Signaturen sind OK! Letztes Update: ' + CONVERT(VARCHAR, @d, 120)
        END
    

    Job 2: Symantec Restart

    This job is only started by job 1 and is extremely trivial.
    It simply executes two commands that stop the SEPM and then start it again.

    NET STOP SEMSRV
    NET START SEMSRV
    

    PS: The sad part was that support was of no help either after I proposed such a solution. They did not want to give me any information about the table structures. In the end, the search engines were kind enough to deliver all the necessary information, because I was not the only one with this problem.



    David Tielke: dotnet Cologne 2018 - Slides from my talk on service-oriented architectures

    Today I finally started my conference year with dotnet Cologne 2018. At this conference, organized annually by the .NET Usergroup Köln/Bonn e.V., I was able to take part as a speaker for the first time on behalf of my long-standing partner Developer Media. While many new and hip topics were on the agenda, I deliberately focused on the tried and tested and cleared up numerous prejudices and criticisms concerning one of the most misunderstood architecture patterns of all - service-oriented architectures. Besides the theory of architectures and how SOA works, the 60-minute level 300 session was above all about one thing: what can we learn from this brilliant architectural style for other architectures? How can a monolithic system architecture be transformed into a flexible and maintainable architecture using aspects of SOA? After the very well attended talk I received a lot of feedback from the participants, especially from those who could no longer get a seat. That is why I recorded the topic again as a webcast and put it online on my YouTube channel. In addition, as always, the slides are available here as a PDF. Once again I would like to thank all participants, of course the organizer, and my partner Developer Media for this great conference day. See you next year!

    Webcast


    Slides

    Links
    Slides
    YouTube channel

    Martin Richter: Well, well: SetFilePointer and SetFilePointerEx are actually superfluous if you use ReadFile and WriteFile…

    You never stop learning; or rather, you have probably never read the documentation completely and correctly.

    If you don't read a file sequentially, it is normal to use Seek followed by Read/Write. Or SetFilePointer followed by ReadFile/WriteFile, respectively.

    In a StackOverflow answer I stumbled across this statement:

    you not need use SetFilePointerEx – this is extra call. use explicit offset in WriteFile / ReadFile instead

    (Spelling not corrected.)

    But the content was new to me. Well, well: even if you don't use FILE_FLAG_OVERLAPPED, you can use the OVERLAPPED structure and the offsets contained in it.
    These are even kindly updated after the data has been read/written.

    Quote from MSDN (the text is identical for WriteFile):

    Considerations for working with synchronous file handles:

    • If lpOverlapped is NULL, the read operation starts at the current file position and ReadFile does not return until the operation is complete, and the system updates the file pointer before ReadFile returns.
    • If lpOverlapped is not NULL, the read operation starts at the offset that is specified in the OVERLAPPED structure and ReadFile does not return until the read operation is complete. The system updates the OVERLAPPED offset before ReadFile returns.


    Manfred Steyer: Micro Apps with Web Components using Angular Elements

    Update on 2018-05-04: Updated for @angular/elements in Angular 6
    Source code: https://github.com/manfredsteyer/angular-microapp

    In one of my last blog posts I've compared several approaches for using Single Page Applications, esp. Angular-based ones, in a microservice-based environment. Some people are calling such SPAs micro frontends; other call them Micro Apps.

    As you can read in the mentioned post, there is not the one and only perfect approach but several feasible concepts with different advantages and disadvantages.

    In this post I'm looking at one of those approaches in more detail: using Web Components. For this, I'm leveraging the new Angular Elements library (@angular/elements) which is available beginning with Angular 6. The source code for the case study described can be found in my GitHub repo.

    Case Study

    The case study presented here is as simple as possible. It contains a shell app that dynamically loads and activates micro apps. It also takes care of routing between the apps (meta-routing) and allows them to communicate with each other using message passing. They are just called Client A and Client B. In addition, Client B also contains a widget from Client A.

    Client A is activated

    Client B with widget from Client A

    Project structure

    Following the ideas of micro services, each part of the overall solution would be a separate project. This allows different teams to work individually on their parts without the need for much coordination.

    To make this case study a bit easier, I've decided to use one CLI project with a sub project for each part. This is something the CLI supports beginning with version 6.

    You can create a sub project using ng generate application my-sub-project within an existing one.

    Using this approach, I've created the following structure:

      + projects
        +--- client-a
             +--- src
        +--- client-b
             +--- src
      + src 
    

    The outer src folder at the end is the folder for the shell application.

    Micro Apps as Web Components with Angular Elements

    To allow loading the micro apps on demand into the shell, they are exposed as Web Components using Angular Elements. In addition to that, I'm providing further Web Components for stuff I want to share with other Micro Apps.

    Using the API of Angular Elements isn't difficult. After npm installing @angular/elements, you just need to declare your Angular component in a module and also put it into the entryComponents array. Using entryComponents is necessary because Angular Elements are created dynamically at runtime. Otherwise the compiler would not know about them.

    Then you have to create a wrapper for your component using createCustomElement and register it as a custom element with the browser using its customElements.define method:

    import { createCustomElement } from '@angular/elements'; [...] @NgModule({ [...] bootstrap: [], entryComponents: [ AppComponent, ClientAWidgetComponent ] }) export class AppModule { constructor(private injector: Injector) { } ngDoBootstrap() { const appElement = createCustomElement(AppComponent, { injector: this.injector}) customElements.define('client-a', appElement); const widgetElement = createCustomElement(ClientAWidgetComponent, { injector: this.injector}) customElements.define('client-a-widget', widgetElement); } }

    The AppModule above only offers two custom elements. The first one is the root component of the micro app and the second one is a component it shares with other micro apps. Please note that it does not bootstrap a traditional Angular component. Hence, the bootstrap array is empty and we need to introduce an ngDoBootstrap method intended for manual bootstrapping.

    If we had traditional Angular components, services, modules, etc., we could also place this code inside of them.

    After this, we can use our Angular Components like ordinary HTML elements:

    <client-a [state]="someState" (message)="handleMessage($event)"></client-a>

    While the last example uses Angular to call the Web Component, this also works with other frameworks and VanillaJS. In this case, we have to use the respective syntax of the hosting solution when calling the component.

    When we load web components into another Angular application, we need to register the CUSTOM_ELEMENTS_SCHEMA:

    import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core'; [...] @NgModule({ declarations: [AppComponent ], imports: [BrowserModule], schemas: [CUSTOM_ELEMENTS_SCHEMA], providers: [], bootstrap: [AppComponent] }) export class AppModule { }

    This is necessary to tell the Angular compiler that there will be components it is not aware of. Those components are the web components that are directly executed by the browser.

    We also need a polyfill for browsers that don't support Web Components. Hence, I've npm installed @webcomponents/custom-elements and referenced it at the end of the polyfills.ts file:

    import '@webcomponents/custom-elements/custom-elements.min';

    This polyfill even works with IE 11.

    Routing across Micro Apps

    One thing that is rather unusual here is that whole clients are implemented as Web Components and hence they use routing:

    @NgModule({ imports: [ ReactiveFormsModule, BrowserModule, RouterModule.forRoot([ { path: 'client-a/page1', component: Page1Component }, { path: 'client-a/page2', component: Page2Component }, { path: '**', component: EmptyComponent} ], { useHash: true }) ], [...] }) export class AppModule { [...] }

    An interesting thing about this simple routing configuration is that it uses the prefix client-a for all but one route. The last route is a catch-all route displaying an empty component. This makes the application disappear when the current path does not start with its prefix. Using this simple trick, the shell can switch between apps very easily.

    Please note that I'm using hash-based routing because after the hash changes, all routers in our micro apps will update their routes. Unfortunately, this isn't the case with the default location strategy, which leverages the pushState API.
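
    Since every micro app watches the hash, the shell can switch between the apps by simply changing it. A minimal sketch (the client-b path is an assumption; only client-a's routes are shown above):

    export class ShellNavigation {
      showClientA(): void {
        // client-a activates Page1Component, client-b falls back to its catch-all route
        location.hash = '/client-a/page1';
      }
      showClientB(): void {
        // client-b takes over, client-a displays its EmptyComponent
        location.hash = '/client-b/page1';
      }
    }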

    When bootstrapping such components as Web Components we have to initialize the router manually:

    @Component([...]) export class ClientAComponent { constructor(private router: Router) { router.initialNavigation(); // Manually triggering initial navigation } }

    Build Process

    For building the web components, I'm using a modified version of the webpack configuration from Vincent Ogloblinsky's blog post.

    const AotPlugin = require('@ngtools/webpack').AngularCompilerPlugin; const path = require('path'); const PurifyPlugin = require('@angular-devkit/build-optimizer').PurifyPlugin; const webpack = require('webpack'); const clientA = { entry: './projects/client-a/src/main.ts', resolve: { mainFields: ['browser', 'module', 'main'] }, module: { rules: [ { test: /\.ts$/, loaders: ['@ngtools/webpack'] }, { test: /\.html$/, loader: 'html-loader', options: { minimize: true } }, { test: /\.js$/, loader: '@angular-devkit/build-optimizer/webpack-loader', options: { sourceMap: false } } ] }, plugins: [ new AotPlugin({ skipCodeGeneration: false, tsConfigPath: './projects/client-a/tsconfig.app.json', hostReplacementPaths: { "./src/environments/environment.ts": "./src/environments/environment.prod.ts" }, entryModule: path.resolve(__dirname, './projects/client-a/src/app/app.module#AppModule' ) }), new PurifyPlugin() ], output: { path: __dirname + '/dist/shell/client-a', filename: 'main.bundle.js' }, mode: 'production' }; const clientB = { [...] }; module.exports = [clientA, clientB];

    In addition to that, I'm using some npm scripts to trigger both the build of the shell and the build of the micro apps. As part of this, I'm copying the bundles for the micro apps over to the shell's dist folder. This makes testing a bit easier:

    "scripts": { "start": "live-server dist/shell", "build": "npm run build:shell && npm run build:clients ", "build:clients": "webpack", "build:shell": "ng build --project shell", [...] }

    Loading bundles

    After creating the bundles, we can load them into a shell application. A first simple approach could look like this:

    <client-a></client-a> <client-b></client-b> <script src="client-a/main.bundle.js"></script> <script src="client-b/main.bundle.js"></script>

    This example shows one more time that a web component works just like an ordinary HTML element.

    We can also dynamically load the bundles on demand with some lines of simple DOM code. I will present a solution for this a bit later.

    Communication between Micro Apps

    Even though micro apps should be as isolated as possible, sometimes we need to share some information. The good news here is that we can leverage attributes and events for this.

    To implement this idea, our micro apps get a property state that the shell can use to send down some application-wide state. They also get a message event to notify the shell:

    @Component({ ... }) export class AppComponent implements OnInit { @Input('state') set state(state: string) { console.debug('client-a received state', state); } @Output() message = new EventEmitter<any>(); [...] }

    The shell can now bind to these to communicate with the Micro App:

    <client-a [state]="appState" (message)="handleMessage($event)"></client-a> <client-b [state]="appState" (message)="handleMessage($event)"></client-b>

    Using this approach one can easily broadcast messages down by updating the appState. And if handleMessage also updates the appState, the micro apps can communicate with each other.

    One thing I want to point out is that this kind of message passing allows inter-app communication without coupling the apps in a strong way.
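
    For instance, the shell could simply re-publish every received payload as the new application state, which is then pushed down to both micro apps again. A sketch of one possible shell component (not the exact code of the sample):

    import { Component } from '@angular/core';

    @Component({
      selector: 'app-shell',
      template: `
        <client-a [state]="appState" (message)="handleMessage($event)"></client-a>
        <client-b [state]="appState" (message)="handleMessage($event)"></client-b>
      `
    })
    export class ShellComponent {
      appState = 'init';

      handleMessage(msg: CustomEvent): void {
        // Re-publishing the payload updates both micro apps via their state binding.
        this.appState = msg.detail;
      }
    }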

    Dynamically Loading Micro Apps

    As web components work like traditional HTML elements, we can dynamically load them into our app using the DOM. For this task, I've created a simple configuration object pointing to all the data we need:

    config = { "client-a": { path: 'client-a/main.bundle.js', element: 'client-a' }, "client-b": { path: 'client-b/main.bundle.js', element: 'client-b' } };

    To load one of those clients, we just need to create a script tag pointing to its bundle and an element representing the micro app:

    load(name: string): void { const configItem = this.config[name]; const content = document.getElementById('content'); const script = document.createElement('script'); script.src = configItem.path; script.onerror = () => console.error(`error loading ${configItem.path}`); content.appendChild(script); const element: HTMLElement = document.createElement(configItem.element); element.addEventListener('message', msg => this.handleMessage(msg)); content.appendChild(element); element.setAttribute('state', 'init'); } handleMessage(msg): void { console.debug('shell received message: ', msg.detail); }

    By hooking up an event listener for the message event, the shell can receive information from the micro apps. To send some data down, this example uses setAttribute.

    We can even decide when to call the load function for our application. This means we can implement eager loading or lazy loading. For the sake of simplicity, I've opted for the first option:

    ngOnInit() { this.load('client-a'); this.load('client-b'); }

    Using Widgets from other Micro Apps

    Using widgets from other micro apps is also a piece of cake: just create an HTML element. Hence, all Client B has to do to use Client A's widget is this:

    <client-a-widget></client-a-widget>

    Evaluation

    Advantages

    • Styling is isolated from other Microservice Clients due to Shadow DOM or the Shadow DOM Emulation provided by Angular out of the box.
    • Allows for separate development and separate deployment
    • Mixing widgets from different Microservice Clients is possible
    • The shell can be a Single Page Application too
    • We can use different SPA frameworks in different versions for our Microservice Clients

    Disadvantages

    • Microservice clients are not completely isolated, as would be the case when using hyperlinks or iframes instead. This means that they could influence each other in an unplanned way. This also means that there can be conflicts when using different frameworks in different versions.
    • We need polyfills for some browsers
    • We cannot leverage the CLI for generating a self-contained package for every client. Hence, I used webpack.

    Tradeoff

    • We have to decide whether we want to import all the libraries once or once for each client. This is more or less a matter of bundling. The first option allows optimizing for bundle sizes; the latter provides more isolation and hence separate development and deployment. These properties are considered valuable architectural goals in the world of microservices.

    Holger Schwichtenberg: C# 8.0 detects more programming errors

    Reference types will no longer automatically be "nullable"; developers will have to explicitly declare the possibility of assigning the value null.

    Code-Inside Blog: .editorconfig: Sharing a common coding style in a team

    Sharing Coding Styles & Conventions

    In a team it is really important to set coding conventions and to use a specific coding style, because it helps to maintain the code - a lot. Of course, each developer has his or her own “style”, but some rules should be set, otherwise it will end in a mess.

    Typical examples of such rules are “Should I use var or not?” or “Are _ prefixes still OK for private fields?”. Those questions shouldn’t be answered in a wiki - they should be part of daily developer life and should show up in your IDE!

    Be aware that coding conventions are highly debated. In our team it was important to set a common ruleset, even if not everyone is 100% happy with each setting.

    Embrace & enforce the conventions

    In the past this was the most “difficult” aspect: How do we enforce these rules?

    Rules in a Wiki are not really helpful, because if you are in your favorite IDE you might not notice rule violations.

    StyleCop was once a thing in the Visual Studio world, but I’m not sure if it is still alive.

    Resharper, a pretty useful Visual Studio plugin, comes with its own code convention sharing file, but you will need Resharper to enforce and embrace the conventions.

    Introducing: .editorconfig

    Last year, Microsoft decided to support the EditorConfig file format in Visual Studio.

    The .editorconfig file defines a set of common coding styles (think of tabs or spaces) in a very simple format. Different text editors and IDEs support this file, which makes it a good choice if you are using multiple IDEs or working with different setups.

    Additionally, Microsoft added a couple of C#-related options for the .editorconfig file to support the C# language features.

    Each rule can be marked as “Information”, “Warning” or “Error” - which will light up in your IDE.

    Sample

    This was a tough choice, but I ended up with the .editorconfig of the CoreCLR. It is more or less the “normal” .NET style guide. I’m not sure if I love the “var” setting and the static private field naming (like s_foobar), but I can live with them, and it was a good starting point for us (and still is).

    The .editorconfig file can be saved at the same level as the .sln file, but you could also use multiple .editorconfig files based on the folder structure. Visual Studio should detect the file and pick up the rules.

    Benefits

    When everything is ready, Visual Studio should show the results and display those nice light bulbs:


    Be aware that I have Resharper installed and Resharper has its own ruleset, which might conflict with the .editorconfig settings. You need to adjust those settings in Resharper. I’m still not 100% sure how good the .editorconfig support is; sometimes I need to override the built-in Resharper settings and sometimes it just works. Maybe this page gives a hint.

    Getting started?

    Just search for a .editorconfig file (or use one from the Microsoft GitHub repositories) and play with the settings. The setup is easy and it’s just a small text file right next to your code. Read more about the customization here.

    Related topic

    If you are looking for a more powerful option to embrace coding standards, you might want to take a look at Roslyn Analysers:

    With live, project-based code analyzers in Visual Studio, API authors can ship domain-specific code analysis as part of their NuGet packages. Because these analyzers are powered by the .NET Compiler Platform (code-named “Roslyn”), they can produce warnings in your code as you type even before you’ve finished the line (no more waiting to build your code to discover issues). Analyzers can also surface an automatic code fix through the Visual Studio light bulb prompt to let you clean up your code immediately

    MSDN Team Blog AT [MS]: OpenHack IoT & Data – May 28-30

    As CSE (Commercial Software Engineering) we can support customers in implementing challenging cloud projects. At the end of May we are offering a three-day OpenHack on IoT & Data as a readiness measure. For all developers dealing with these topics it is really a must.

    May 28-30, OpenHack IoT & Data: At the OpenHacks we give the participants, i.e. you, tasks that you have to solve yourselves. This results in enormously high learning efficiency and a remarkable transfer of knowledge.

    Furthermore, in this case you don't even need a project of your own to keep working on. Accordingly, you don't have to give us any project information of yours either. That can be an important point for some bosses. Smile

    Target participants: everyone who can develop: developers, architects, data scientists, …

    So if you are actively dealing with the topics IoT & Data, or want to, come yourself or send other developers. Together with the software engineers of the CSE you can then really dig into the subject.

    Oh yes, very important: bring your own laptop and "come prepared to hack!!". No marketing, no sales. Pure hacking!!

    The goal, of course, is to motivate you to think about your own IoT & Data projects or to start them. Just as a little inspiration, here are a few projects that were created in cooperation with the CSE: https://www.microsoft.com/developerblog/tag/IoT

    Register at: http://www.aka.ms/zurichopenhack

    I am looking forward to seeing you there!!

    MSDN Team Blog AT [MS]: Build 2018 Public Viewing with BBQ & Beer

     

    The Microsoft Build conference is THE event for all software developers working with Microsoft technologies. The keynote always gives a great overview of the latest developments in .NET, Azure, Windows, Visual Studio, AI, IoT, Big Data and more.

    Build 2018 Public Viewing with BBQ & Beer: Some are lucky enough to be there live on site. A few, however, have to stay at home.

    Is that a reason to be sad? It depends. Winking smile
    At least in Graz the community is meeting for the Build 2018 Public Viewing with BBQ & Beer.

    To get in the mood for the keynote, the Microsoft Developer User Group Graz will start a bit earlier this year with BBQ and beer.

    BBQ and beer start at 16:00,
    followed by the keynote at 17:30 and a relaxed get-together afterwards.

    The Microsoft Developer User Group Graz is looking forward to your visit, good food and an exciting keynote!

     

    Stefan Henneken: IEC 61131-3: The Generic Data Type T_Arg

    In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be put to good use. In the example described there, a function compares two variables to determine whether the data type, the data length and the content are exactly the same. Instead of implementing a separate function for each data type, the same requirement can be implemented much more elegantly with a single function using the data type ANY.

    Some time ago I had a similar task: a method was to be developed that accepts an arbitrary number of parameters. Both the data type and the number of parameters were arbitrary.

    In my first approach, I tried to use an array of variable length of type ARRAY [*] OF ANY. However, arrays of variable length can only be used as VAR_IN_OUT, and the data type ANY only as VAR_INPUT (see also IEC 61131-3: Arrays with variable length). So this approach was ruled out.

    As an alternative to the data type ANY, the structure T_Arg is available. T_Arg is declared in the TwinCAT library Tc2_Utilities and, in contrast to ANY, is also available under TwinCAT 2. The layout of T_Arg is comparable to the structure used for the data type ANY (see also The wonders of ANY).

    TYPE T_Arg :
    STRUCT
      eType   : E_ArgType    := ARGTYPE_UNKNOWN;     (* Argument data type *)
      cbLen   : UDINT        := 0;                   (* Argument data byte length *)
      pData   : UDINT        := 0;                   (* Pointer to argument data *)
    END_STRUCT
    END_TYPE
    

    T_Arg can be used anywhere, including in the VAR_IN_OUT section.

    The following function adds an arbitrary number of values, whose data types can also be arbitrary. The result is returned as LREAL.

    FUNCTION F_AddMulti : LREAL
    VAR_IN_OUT
      aArgs        : ARRAY [*] OF T_Arg;
    END_VAR
    VAR
      nIndex	: DINT;
      aUSINT	: USINT;
      aUINT		: UINT;
      aINT		: INT;
      aDINT		: DINT;
      aREAL		: REAL;   
      aLREAL	: LREAL;
    END_VAR
    
    F_AddMulti := 0.0;
    FOR nIndex := LOWER_BOUND(aArgs, 1) TO UPPER_BOUND(aArgs, 1) DO
      CASE (aArgs[nIndex].eType) OF
        E_ArgType.ARGTYPE_USINT:
          MEMCPY(ADR(aUSINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aUSINT;
        E_ArgType.ARGTYPE_UINT:
          MEMCPY(ADR(aUINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aUINT;
        E_ArgType.ARGTYPE_INT:
          MEMCPY(ADR(aINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aINT;
        E_ArgType.ARGTYPE_DINT:
          MEMCPY(ADR(aDINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aDINT;
        E_ArgType.ARGTYPE_REAL:
          MEMCPY(ADR(aREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aREAL;
        E_ArgType.ARGTYPE_LREAL:
          MEMCPY(ADR(aLREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aLREAL;
      END_CASE
    END_FOR
    

    Calling the function, however, is somewhat more cumbersome than with the ANY data type.

    PROGRAM MAIN
    VAR
      sum          : LREAL;
      args         : ARRAY [1..4] OF T_Arg;
      a            : INT := 4567;
      b            : REAL := 3.1415;
      c            : DINT := 7032345;
      d            : USINT := 13;
    END_VAR
    
    args[1] := F_INT(a);
    args[2] := F_REAL(b);
    args[3] := F_DINT(c);
    args[4] := F_USINT(d);
    sum := F_AddMulti(args);
    

    The array that is passed to the function has to be initialized beforehand. The Tc2_Utilities library provides helper functions that convert a variable into a structure of type T_Arg (F_INT(), F_REAL(), F_DINT(), …). The function for adding the values then needs only a single input variable of type ARRAY [*] OF T_Arg.

    The T_Arg data type is used, for example, by the function block FB_FormatString() and the function F_FormatArgToStr() in TwinCAT. With FB_FormatString(), up to 10 placeholders in a string can be replaced by the values of PLC variables of type T_Arg (similar to fprintf in C).

    One advantage of ANY is the fact that this data type is defined by the IEC 61131-3 standard.

    Even though the generic data types ANY and T_Arg do not match the feature set of generics in C# or templates in C++, they still support the development of generic functions in IEC 61131-3. Such functions can now be designed so that the same function can be used for different data types and data structures.

    Manfred Steyer: Seamlessly Updating your Angular Libraries with the CLI, Schematics and ng update


    Table of Contents

    This blog post is part of an article series.


    Thanks a lot to Hans Larsen from the Angular CLI team for reviewing this article.

    Updating libraries within your npm/yarn-based project can be a nightmare. Once you've dealt with all the peer dependencies, you have to make sure your source code doesn't run into breaking changes.

    The new command ng update provides a remedy: it goes through all updated dependencies -- including the transitive ones -- and calls schematics to update the current project for them. Together with ng add, described in my blog article here, it is the foundation of an ecosystem that allows a more frictionless package management.

    In this post, I'm showing how to make use of ng update within an existing library by extending the simple logger used in my article about ng add.

    If you want to look at the completed example, you can find it in my GitHub repo.

    Schematics is currently an Angular Labs project. Its public API is experimental and can change in the future.

    Angular Labs

    Introducing a Breaking Change

    To showcase ng update, I'm going to modify my logger library here. For this, I'm renaming the LoggerModule's forRoot method to configure:

    // logger.module.ts
    [...]
    @NgModule({
      [...]
    })
    export class LoggerModule {
    
      // Old:
      // static forRoot(config: LoggerConfig): ModuleWithProviders {
    
      // New:
      static configure(config: LoggerConfig): ModuleWithProviders {
        [...]
      }
    }

    As this is just an example, please regard this change as a proxy for all the other breaking changes one might introduce with a new version.

    Creating the Migration Schematic

    To adapt existing projects to my breaking change, I'm going to create a schematic for it. It will be placed into a new update folder within the library's schematics folder:

    Folder update for new schematic

    This new folder gets an index.ts with a rule factory:

    import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';
    
    export function update(options: any): Rule {
      return (tree: Tree, _context: SchematicContext) => {
    
        _context.logger.info('Running update schematic ...');
    
        // Hardcoded path for the sake of simplicity
        const appModule = './src/app/app.module.ts';
    
        const buffer = tree.read(appModule);
        if (!buffer) return tree;
        const content = buffer.toString('utf-8');
    
        // One more time, this is for the sake of simplicity
        const newContent = content.replace('LoggerModule.forRoot(', 'LoggerModule.configure(');
    
        tree.overwrite(appModule, newContent);
    
        return tree;
      };
    }

    For the sake of simplicity, I'm taking two shortcuts here. First, the rule assumes that the AppModule is located in the file ./src/app/app.module.ts. While this might be the case in a traditional Angular CLI project, one could also use a completely different folder structure. One example is a monorepo workspace containing several applications and libraries. I will present a solution for this in another post, but for now, let's stick with this simple approach.

    To simplify things further, I'm directly modifying this file using a string replacement. A safer way to change existing code is to use the TypeScript Compiler API. If you're interested in this, you'll find an example in my blog post here.

    Configuring the Migration Schematic

    To configure migration schematics, let's follow the advice from the underlying design document and create a collection of its own. This collection is described by a migration-collection.json file:

    Collection for migration schematics

    For each migration, it gets a schematic. The name of this schematic doesn't matter, but the version property does:

    { "schematics": { "migration-01": { "version": "4", "factory": "./update/index#update", "description": "updates to v4" } } }

    This collection tells the CLI to execute this schematic when migrating to version 4. Let's assume we had such a schematic for version 5 too. If we migrated directly from version 3 to 5, the CLI would execute both.

    Instead of just pointing to a major version, we could also point to a minor or a patch version using version numbers like 4.1 or 4.1.1.

    We also need to tell the CLI that this very file describes the migration schematics. For this, let's add an ng-update entry point to our package.json. As the package.json located in the project root is used by the library build in our example, we have to modify this one. In other project setups, the library could have a package.json of its own:

    [...] "version": "4.0.0", "schematics": "./schematics/collection.json", "ng-update": { "migrations": "./schematics/migration-collection.json" }, [...]

    While the known schematics field is pointing to the traditional collection, ng-update shows which collection to use for migration.

    We also need to increase the version within the package.json. As my schematic is intended for version 4, I've set the version field to this very version above.

    Test, Publish, and Update

    To test the migration schematic, we need a demo Angular application that uses the old version of the logger-lib. Some information about this can be found in my last blog post. That post also describes how to set up a simple npm registry that provides the logger-lib and how to use it in your demo project.

    Make sure to use the latest versions of @angular/cli and its dependency @angular-devkit/schematics. When I wrote this, I used version 6.0.0-rc.4 of the CLI and version 0.5.6 of the schematics package. However, this came with some issues, especially on Windows. Nevertheless, I expect those issues to vanish once version 6 is final.

    To make sure I had the latest versions, I installed the latest CLI and created a new application with it.

    During testing, it can be useful to install a former or a specific version of the library. You can just use npm install for this:

    npm install @my/logger-lib@^0 --save
    

    When everything is in place, we can build and publish the new version of our logger-lib. For this, let's use the following commands in the library's root directory:

    npm run build:lib
    cd dist
    cd lib
    npm publish --registry http://localhost:4873
    

    As in the previous article, I'm using the npm registry verdaccio, which is available at port 4873 by default.

    Updating the Library

    To update the logger-lib within our demo application, we can use the following command in its root directory:

    ng update @my/logger-lib --registry http://localhost:4873 --force
    

    The --force switch makes ng update proceed even if there are unresolved peer dependencies.

    This command npm-installs the newest version of the logger-lib and executes the registered migration schematic. After this, you should see the modifications within your app.module.ts file.

    As an alternative, you could also npm install it by hand:

    npm i @my/logger-lib@^4 --save
    

    After this, you could run all the necessary migration schematics using ng update with the migrate-only switch:

    ng update @my/logger-lib --registry http://localhost:4873 --migrate-only --from=0.0.0 --force
    

    This will execute all migration schematics to get from version 0.0.0 to the currently installed one. To just execute the migration schematics for a specific (former) version, you could make use of the --to switch:

    ng update @my/logger-lib --registry http://localhost:4873 --migrate-only --from=0.0.0 --to=4.0.0 --force
    

    Jürgen Gutsch: A generic logger factory facade for classic ASP.NET

    ASP.NET Core already has this feature: there is an ILoggerFactory to create a logger. You can inject the ILoggerFactory into your component (controller, service, etc.) and create a named logger from it. During testing, you can replace this factory with a mock, so that you don't test the logger as well and don't have an additional dependency to set up.
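
    For reference, a minimal sketch of how this looks in ASP.NET Core (the HomeController shown here is just an illustration; CreateLogger<T>() is part of the Microsoft.Extensions.Logging API):

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Logging;
    
    public class HomeController : Controller
    {
        private readonly ILogger _logger;
    
        // ASP.NET Core injects the ILoggerFactory; CreateLogger<T>() returns a named logger
        public HomeController(ILoggerFactory loggerFactory)
        {
            _logger = loggerFactory.CreateLogger<HomeController>();
        }
    
        public IActionResult Index()
        {
            _logger.LogInformation("Index was called");
            return View();
        }
    }
    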

    Recently we had the same requirement in a classic ASP.NET project, where we use Ninject to enable dependency injection and log4net to log all the stuff we do and all exceptions. One important requirement is a named logger per component.

    Creating named loggers

    Usually the log4net logger gets created inside the component as a private static instance:

    private static readonly ILog _logger = LogManager.GetLogger(typeof(HomeController));
    

    There already is a static factory method to create a named logger. Unfortunately this isn't really testable anymore and we need a different solution.

    We could create a bunch of named loggers in advance and register them with Ninject, which obviously is not the right solution. We needed a more generic approach and figured out two different solutions:

    // would work well
    public MyComponent(ILoggerFactory loggerFactory)
    {
        _loggerA = loggerFactory.GetLogger(typeof(MyComponent));
        _loggerB = loggerFactory.GetLogger("MyComponent");
        _loggerC = loggerFactory.GetLogger<MyComponent>();
    }
    // even more elegant
    public MyComponent(
        ILoggerFactory<MyComponent> loggerFactoryA,
        ILoggerFactory<MyComponent> loggerFactoryB)
    {
        _loggerA = loggerFactoryA.GetLogger();
        _loggerB = loggerFactoryB.GetLogger();
    }
    

    We decided to go with the second approach, which is the simpler solution. It needs a dependency injection container that supports open generics, like Ninject, Autofac, or LightCore.

    Implementing the LoggerFactory

    Using Ninject the binding of open generics looks like this:

    Bind(typeof(ILoggerFactory<>)).To(typeof(LoggerFactory<>)).InSingletonScope();
    

    This binding creates an instance of LoggerFactory<T> using the requested generic argument. If I request an ILoggerFactory<HomeController>, Ninject creates an instance of LoggerFactory<HomeController>.

    We register this as a singleton to reuse the ILog instances, just as the usual approach of creating the ILog instance in a private static variable does.

    The implementation of the LoggerFactory is pretty easy. We use the generic argument to create the log4net ILog instance:

    public interface ILoggerFactory<T>
    {
        ILog GetLogger();
    }
    
    public class LoggerFactory<T> : ILoggerFactory<T>
    {
        private ILog _logger;
    
        public ILog GetLogger()
        {
            if (_logger == null)
            {
                _logger = LogManager.GetLogger(typeof(T));
            }
            return _logger;
        }
    }
    

    We only create the logger if it doesn't already exist. Because Ninject creates one instance of the LoggerFactory per generic argument, the LoggerFactory doesn't need to care about different loggers. It just stores a single, specific logger.

    Conclusion

    Now we are able to create one or more named loggers per component.

    What we cannot do with this approach is create individually named loggers using a specific string as the name. A type is needed that gets passed as the generic argument. So every time we need an individually named logger, we need to create a specific type. In our case this is not a big problem.

    If you don't want to create types just to get individually named loggers, feel free to implement a non-generic LoggerFactory with a generic GetLogger method as well as a GetLogger method that accepts strings as logger names.
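
    Such a non-generic variant could roughly look like this (a minimal sketch; the names ILogFactory and LogFactory are made up here, it caches the ILog instances with System.Collections.Concurrent, and it would be registered as a singleton just like the generic one):

    public interface ILogFactory
    {
        ILog GetLogger<T>();
        ILog GetLogger(string name);
    }
    
    public class LogFactory : ILogFactory
    {
        // caches the ILog instances per name, comparable to the singleton binding above
        private readonly ConcurrentDictionary<string, ILog> _loggers =
            new ConcurrentDictionary<string, ILog>();
    
        public ILog GetLogger<T>()
        {
            return GetLogger(typeof(T).FullName);
        }
    
        public ILog GetLogger(string name)
        {
            return _loggers.GetOrAdd(name, LogManager.GetLogger);
        }
    }
    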

    Jürgen Gutsch: Creating Dummy Data Using GenFu

    Two years ago I already wrote about playing around with GenFu, and I still use it now, as mentioned in that post. When I do a demo, or when I write blog posts and articles, I often need dummy data, and I use GenFu to create it. But every time I use it in a talk or a demo, somebody asks me a question about it.

    Actually, I had completely forgotten about that blog post and decided to write about it again this morning because of the questions I got. Almost accidentally, I stumbled upon this "old" post.

    I won't create a new one. No worries ;-) Because of the questions, I just want to push this topic to the top again:

    Playing around with GenFu

    GenFu on GitHub

    PM> Install-Package GenFu
    

    Read about it, grab it and use it!

    It is one of the most time saving tools ever :)
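
    In case you have never used it, this is roughly what it looks like (a minimal sketch; Person is just a made-up demo class whose properties GenFu fills based on their names):

    using GenFu;
    
    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
    
    // e.g. inside a test or a demo controller action:
    var person = A.New<Person>();        // one object with realistic looking dummy data
    var people = A.ListOf<Person>(25);   // or a whole list of 25 persons
    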

    Holger Schwichtenberg: The Windows Update endless loop and Microsoft support

    Windows 10 update 1709 won't install, and Microsoft support has no solution either, or rather doesn't put much effort into finding one.

    Christina Hirth: Colleague bashing – surprise, it doesn't help!

    At every conference my colleagues and I attend, the topic of team culture pops up sooner or later as the cause of many or all problems. When we describe how we work, we inevitably end up with the statement that "a self-organized, cross-functional organization without a boss who has the final say is naive and not realistic". "You surely have a boss somewhere, you just don't know it!" was one of the wildest replies we have heard recently, simply because the other person was unable to process this picture: five self-organized teams, without bosses, without a CTO, without project managers, without any rules and requirements imposed from outside, without deadlines we would submit to without question. But with self-imposed deadlines, with budgets, with freedom and responsibility in equal measure.

    I'm not talking about the legal side or what's on paper here: of course we have a CTO, a Head of Development, and a CFO in the company; they just don't decide when, what, and how we do things. They define the framework within which the management invests in the product or undertaking, but the rest is done by us: POs, Scrum Masters, and developers, together.

    We have been working in this constellation for more than a year, and we can add another six months of lead time before we were able to start this project on the basis of Conway's Law.

    "Organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations." [Wikipedia]

    Conversely (and freely translated) this means: "the way your organization is structured is the way your product and your code will be structured." So we worked on our organization. The goal was to build a responsible team that is free to dream, in order to build a great new product without imposed shackles.

    We now have this team, we are now living this dream – which of course has its shadows too; after all, life is no bed of roses :). The difference is: they are our problems, and we don't shy away from them, we solve them together.

    Before you say "that's a lucky break, it normally doesn't happen", I would disagree. It didn't just happen to us either; we worked on it (for roughly six months) and keep working on it continuously. The clue, the key to this organization, is an open feedback culture.

    What does that mean, and how did we achieve it?

    • We learned to give and receive feedback – and yes, that is not so easy. These are the rules:
      • All statements are subjective: "Yesterday, while doing a review, I saw this and that. I don't consider that good enough/I consider it risky for the following reasons. I could imagine that doing it this way or that way could get us to the goal faster." You'll notice: never say YOU; everything in the first person, without preconceived opinions or assumptions.
      • All statements come with concrete examples. Statements like "I believe, I have the feeling, etc." are opinions, not facts. You have to find an example, otherwise the feedback is not "admissible".
      • Feedback is always phrased constructively. It doesn't help to say what is bad; it is much more important to say what someone should work on, e.g. "I know from my own experience that pair programming is very helpful in such cases."
      • The person receiving feedback has to listen to it without justifying themselves. They decide for themselves what to do with the feedback. Anyone who wants to improve will try to take this feedback to heart and work on it. You don't have to mandate that!
    • One-on-ones: feedback rounds between two people on a team, at first with the Scrum Master until people got used to the phrasing (at the beginning we laughed the whole idea off), and later just the pairs. Each time in one direction only (only one person receives feedback) and, for example, a week later in the other direction. The result is that by now we don't schedule these meetings anymore; we do it automatically, every time there is something to give feedback on.
    • Team feedback: the last stage, following the same rules. It is held not only between teams but also between groups/guilds, such as POs or architecture owners.

    That's it. For more than a year I haven't heard sentences like "it was the idiots from the other team who messed everything up" or "they won't get it done anyway" or "why should I care, they checked in the bug". And this working atmosphere gives you wings! (sorry for the copyright violation 😉 )

    Code-Inside Blog: Did you know that you can run ASP.NET Core 2 under the full framework?

    This post might be obvious for some, but I really struggled with this a couple of months ago, and I'm not sure if a Visual Studio update fixed the problem for me or if I was just blind…

    The default way: Running .NET Core

    AFAIK the framework dropdown in the normal Visual Studio project template selector (the first window) is not important and doesn’t matter anyway for .NET Core related projects.

    When you create a new ASP.NET Core application you will see something like this:

    (Screenshot: the New Project dialog for an ASP.NET Core application)

    The important part for the framework selection can be found in the upper left corner: .NET Core is currently selected.

    When you continue, your .csproj file should show something like this:

    <Project Sdk="Microsoft.NET.Sdk.Web">
    
      <PropertyGroup>
        <TargetFramework>netcoreapp2.0</TargetFramework>
      </PropertyGroup>
    
      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.5" />
      </ItemGroup>
    
      <ItemGroup>
        <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.2" />
      </ItemGroup>
    
    </Project>
    

    Running the full framework:

    I had some trouble finding the option, but it's really obvious. You just have to adjust the selected framework in the second window:

    (Screenshot: the framework selection in the second dialog, set to the full .NET Framework)

    After that your .csproj has the needed configuration.

    <Project Sdk="Microsoft.NET.Sdk.Web">
      <PropertyGroup>
        <TargetFramework>net461</TargetFramework>
      </PropertyGroup>
      
      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore" Version="2.0.1" />
        <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.0.2" />
        <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.ViewCompilation" Version="2.0.2" PrivateAssets="All" />
        <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="2.0.1" />
        <PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="2.0.1" />
      </ItemGroup>
      
      <ItemGroup>
        <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.2" />
      </ItemGroup>
    </Project>
    

    The biggest change: when you run under the full .NET Framework, you can't use the "All" meta-package, because with version 2.0 that package is still .NET Core only, and you need to reference each package manually.

    Easy, right?

    Be aware: with ASP.NET Core 2.1 the meta-package story for the full framework might get easier.

    I’m still not sure why I struggled to find this option… Hope this helps!

    Jürgen Gutsch: Running and Coding

    I wasn't really sporty until two years ago, but I was active anyway. I was also forced to be active with three little kids and a sporty and lovely wife. But a job where I mostly sit in a comfortable chair, great food, and good southern German beers did their work as well. When I first met my wife, I weighed around 80 kg, which is fine for my height of 178 cm. But my weight went up to 105 kg by Christmas 2015. That was way too much, I thought. Until then I had always tried to reduce it with some more cycling, more hiking, and some gym sessions, but it never really worked out.

    Anyway, there is no more effective way to lose weight than running. It is, by the way, three times more effective than cycling. I tried it a lot in the past, but it hurt quite a bit in the lower legs, and I stopped more than once.

    Running the agile way

    I tried it again at Easter 2016 in a slightly different way, and it worked. I tried to do it the same way as in a perfect software project:

    I did it in an agile way, using pretty small goals to get as much success as possible.

    I also bought a fitness watch to count steps, calories, and floors and to measure the heart rate while running, to have some more challenges. At the same time, I changed my diet a lot.

    It sounds weird and funny, but it worked really well. I have lost 20 kg since then!

    I think it was important not to set goals that were too big. I just wanted to lose 20 kg. I didn't set a time limit or anything like that.

    I knew running hurts in the lower legs. I started to learn a lot about running and the different styles of running. I chose easy running, which worked pretty well with natural running shoes and barefoot shoes.

    Finding time to run

    Finding time was the hardest part. In the past I always thought I was too busy to run. I discussed it a lot with the family, and we figured out that the best time to run was lunch time, because I need to walk the dog anyway, so running with the dog was an option. This was also a good thing for our huge dog.

    Running at lunch time had another advantage: my head gets cleared a little after four to five hours of work. (Yes, I usually start between 7 and 8 in the morning.) Running is great when you are working on software projects with a high level of complexity. Unfortunately, when I'm working in Basel I cannot go for a run, because there is no shower available. But I'm still able to run three to four times a week.

    Starting to run

    The first runs were a real pain. I chose a small lap of 2.5 km, because learning to run was the first step. Also, because of the pain in the lower legs, I chose to run shorter stretches uphill. Why uphill? Because it is more exhausting than running on level ground. So I had short uphill running phases and longer brisk walking phases. Just a few runs later, the running phases started to get longer and longer.

    That was the first success, just a few runs in. It was even greater when I finished my first full kilometer after one and a half months of running every second day. That was amazing.

    Every run brought a success, and that really pushed me. I not only made progress with running, I also started to lose weight, which pushed me even more. So the pain wasn't too bad, and I continued running.

    Some weeks later I ran the entire lap of 2.5 km, not really fast, but without a walking break. More motivation.

    I kept running just this 2.5 km lap for a few more weeks to collect some personal records on it.

    Low carb

    I mentioned the change in food. I switched to a low-carb diet, which in general is a way to reduce the consumption of sugar: every kind of sugar, which also means bread, potatoes, pasta, rice, and corn. In the first phase of three months I almost completely stopped eating carbs. After that phase, I started to eat a little of them again. I also had one cheat day per week when I could eat the normal way.

    After six months of eating fewer carbs and running, I had lost around 10 kg, which was amazing, and I was absolutely happy with this progress.

    Cycling as a compensation

    As already mentioned, I run every second day. On the days in between, I used my new mountain bike to climb the hills around the city where I live. It really was a kind of compensation, because cycling uses other parts of the legs (except when I run uphill).

    Using my smartwatch, I was able to measure that running burns on average three times more calories per hour than cycling does in the same time. This measurement was done on me only and cannot be transferred to any other person, but it makes sense to me.

    Unfortunately, cycling during the winter was a different kind of pain. It hurt the face, the feet, and the hands. It was too cold, so I stopped whenever the temperature dropped below 5 degrees.

    Extending the lap

    After a few weeks of running the entire 2.5 km, I increased the length to 4.5 km. This was more exhausting than expected. Two more kilometers need a completely new kind of training. I had to force myself not to run too fast at the beginning and to start managing my energy. Again, I started slowly and used some walking breaks to get the whole lap done. Over the next months, the walking breaks decreased more and more until I didn't need any on this lap.

    The first official run

    Nine months later, I wanted to challenge myself a little and attended my first public run, a New Year's Eve run. It was pretty cold, but unexpectedly a lot of fun. I ran with my brother, which was a good idea. The atmosphere before and during the run was pretty special, and I still like it a lot. I got three challenges done during this run: I reached the finish (1), I wasn't the last one to cross the finish line (2), and I got a new personal record over 5 km (3).

    That was one year and three months ago. I did exactly the same run again last New Year's Eve and got a new personal record, was faster than my brother, and reached the finish. Amazing. More success to push myself.

    The first 10km

    During the last year, I increased the number of kilometers and attended some more public runs. In September 2015 I finished my first public 10 km run. Even more success to push me forward.

    I didn't increase the number of kilometers quickly, just one kilometer at a time. I trained on a distance for one to three months and then added some more kilometers. Last spring I started doing a longer run on the weekends, just because I had the time. On workdays it doesn't make sense to run more than 7 km, because that would also lengthen the lunch break. I try to use just one hour for the lunch run, including the shower and changing clothes.

    Got it done

    Last November I got it done: I had actually lost 20 kg since I started to run. That was really great. It was a great feeling to see a weight below 85 kg.

    Conclusion

    How did running change my life? It changed it a lot. I cannot really go without running for more than two days; I get really nervous then.

    Do I feel better since I started running? Because of the sport I am more tired than before, I have sore muscles, and I also had two sports injuries. But I'm much more relaxed, I think. Physically it often feels bad, but in a weirdly positive way, because I feel I've done something.

    Even some annoying work gets done more easily. I'm really looking forward to the next lunch break to run the six or seven kilometers with the dog, or to ride the bike up and down the hills and clear my head.

    I run in almost any weather, except when it is too slippery because of ice or snow. Fresh snow is fine, mud is fun, rain I don't even feel anymore, sunshine is even better, and heat is challenging. Only the dog doesn't love warm weather.

    Crazy? Yes, but I love it.

    Do you want to follow me on Strava?

    Jürgen Gutsch: Why I use paket now

    I never really had any major problems using the NuGet client. Reading the Twitter timeline, it seems I am the only one without problems. But depending on which dev process you like to use, there can be a problem. This is not really NuGet's fault, but this process makes using NuGet a little more complex than it should be.

    As mentioned in previous posts, I really like to use Git Flow and the clear branching structure. I always have a production branch, which is the master. It contains the sources of the version which is currently in production.

    In my projects I don't need to care about multiple versions installed on multiple customer machines. Usually, as a web developer, you only have one production version installed somewhere on a web server.

    I also have a next-version branch, which is the develop branch. It contains the version we are currently working on. Besides this, we can have feature branches, hotfix branches, release branches, and so on. Read more about Git Flow in this pretty nice cheat sheet.

    The master branch gets compiled in release mode and uses a semantic version like this: (breaking).(feature).(patch). The develop branch gets compiled in debug mode and has a version number that tells NuGet it is a preview version: (breaking).(feature).(patch)-preview(build), where build is the build number generated by the build server.

    The actual problem

    We use this versioning, build, and release process for web projects and shared libraries. And with those shared libraries, using NuGet starts to get complicated.

    Some of the shared libraries are used in multiple solutions and shared via a private NuGet feed, which is a common way, I think.

    Within the next version of a web project we also use the next versions of the shared libraries to test them. In the current versions of the web projects we use the current versions of the shared libraries. Makes kinda sense, right? If we do a new production release of a web project, we need to switch back to the production version of the shared libraries.

    In the solution's packages folder, NuGet creates package sub-folders containing the version number, and the projects reference the binaries from those folders. Changing the library versions means using the UI or changing the packages.config AND the project files, because the reference path contains the version information.

    Maybe switching the versions back and forth doesn't really make sense in most cases, but this is also the way I try new versions of the libraries. In this special case, we have to maintain multiple ASP.NET applications which use multiple shared libraries, which in turn depend on different versions of external data sources. So a preview release of an application also goes to a preview environment with a preview version of a database, and therefore it needs to use the preview versions of the needed libraries. While releasing new features or hotfixes, it might happen that we need to do a release without updating the production environments and the production databases. So we need to switch the dependencies back to the latest production versions of the libraries.

    Paket solves it

    Paket, in contrast, only supports one package version per solution, which makes a lot more sense. This means Paket doesn't store the packages in a sub-folder with a version number in its name. Changing the package versions is easily done in the paket.dependencies file. The reference paths in the project files don't change, and the projects immediately use the other versions after I change the version and restore the packages.

    Paket is an alternative NuGet client, developed by the amazing F# community.

    Paket works well

    Fortunately Paket works well with MSBuild and CAKE. Paket provides MSBuild targets to automatically restore packages before the build starts. Also in CAKE there is an add-in to restore Paket dependencies. Because I don't commit Paket to the repository I use the command line interface of Paket directly in CAKE:

    Task("CleanDirectory")
    	.Does(() =>
    	{
    		CleanDirectory("./Published/");
    		CleanDirectory("./packages/");
    	});
    
    Task("LoadPaket")
    	.IsDependentOn("CleanDirectory")
    	.Does(() => {
    		var exitCode = StartProcess(".paket/paket.bootstrapper.exe");
    		Information("LoadPaket: Exit code: {0}", exitCode);
    	});
    
    Task("AssemblyInfo")
    	.IsDependentOn("LoadPaket")
    	.Does(() =>
    	{
    		var file = "./SolutionInfo.cs";		
    		var settings = new AssemblyInfoSettings {
    			Company = " YooApplications AG",
    			Copyright = string.Format("Copyright (c) YooApplications AG {0}", DateTime.Now.Year),
    			ComVisible = false,
    			Version = version,
    			FileVersion = version,
    			InformationalVersion = version + build
    		};
    		CreateAssemblyInfo(file, settings);
    	});
    
    Task("PaketRestore")
    	.IsDependentOn("AssemblyInfo")
    	.Does(() => 
    	{	
    		var exitCode = StartProcess(".paket/paket.exe", "install");
    		Information("PaketRestore: Exit code: {0}", exitCode);
    	});
    
    // ... and so on
    

    Conclusion

    No process is 100% perfect, not even this one. But it works pretty well in this case. We are able to do releases and hotfixes very fast. Setting up a new project using this process is fast and easy as well.

    The whole process of releasing a new version, from the command git flow release start ... to the deployed application on the web server, doesn't take more than 15 minutes, depending on the size of the application and the number of tests run on the build server.

    I just realized this post is not about .NET Core or ASP.NET Core. The problem I described only happens with classic projects and solutions that store the NuGet packages in the solution's packages folder.

    Any questions about that? Do you want to learn more about Git Flow, CAKE, and Continuous Deployment? Just drop me a comment.

    Jürgen Gutsch: Recap the MVP Global Summit 2018

    Being an MVP has a lot of benefits. Free tools, software, and Azure credits are just a few of them. The direct connection to the product groups is worth a lot more than all the software. Even more valuable is the fact of being part of an expert community with more than 3,700 MVPs from around the world.

    In fact, there are a lot more experts outside the MVP community who also contribute to the communities around Microsoft-related technologies and tools. Being an MVP also means finding those experts and nominating them for the MVP award as well.

    The biggest benefit of being an MVP is the yearly MVP Global Summit in Redmond. This year Microsoft again invited the MVPs to attend the MVP Global Summit. More than 2,000 MVPs and Regional Directors were registered to attend.

    I also attended the summit this year. It was my third summit and the third chance to directly interact with the product groups and with other MVPs from all over the world.

    The first days in Seattle

    My journey to the summit started at Frankfurt Airport, where a lot of German, Austrian, and Swiss MVPs start their journey and where many more MVPs from Europe change planes. The LH490 and LH491 flights around the summit are called the "MVP planes" because of this. It always feels like a huge yearly school trip.

    The flight was great, sunny most of the time, and I had an impressive view of Greenland and Canada:

    Greenland

    After we arrived at SeaTac, some German MVP friends and I took the train to downtown Seattle. We checked in at the hotels and went for a beer and a burger. This year I decided to arrive one day earlier than in previous years and to stay in downtown Seattle for the first two nights and the last two nights. This was a great decision.

    Pike Place Seattle

    I spent the nights just a few steps away from Pike Place. I really love the special atmosphere of this place and the area: there are a lot of small stores, small restaurants, the farmers market, and the breweries. The very first Starbucks store is also located here. It's really a special place. Staying there also allowed me to use public transportation, which works great in Seattle.

    There is a direct train from the airport to downtown Seattle and an express bus from downtown Seattle to the center of Bellevue, where the conference hotels are located. For those of you who don't want to spend 40 USD or more on Uber, a taxi, or a shuttle, the train to Seattle costs 3 USD and the express bus 2.70 USD. Both take around 30 minutes, plus maybe a few minutes of waiting in the underground station in Seattle.

    The Summit days

    After checking in at my conference hotel on Sunday morning, I went to the registration, but it seemed I was pretty early:

    Summit Registration

    But that wasn't really true. Most of the MVPs were in the queue to register for the conference and to get their swag.

    Like in previous years, the summit days were amazing, even if we didn't really learn a lot of brand-new things in my contribution area. Most of the stuff in my MVP category is open source and openly discussed on GitHub, on Twitter, and in the blog posts written by Microsoft. Anyway, we learned about some cool ideas, which I unfortunately cannot write down here, because almost all of it is NDA content.

    So the most amazing things during the summit are the events and parties around the conference and meeting all the famous MVPs and Microsoft employees. I'm not really a selfie guy, but this time I really had to take a picture with the amazing Phil "Mister ASP.NET MVC" Haack.

    Phil Haack

    I was also glad to meet Steve Gordon, Andrew Lock, David Pine, Damien Bowden, Jon Galloway, Damian Edwards, David Fowler, Immo Landwerth, Glen Condron, and many, many more. And of course the German-speaking MVP family from Germany (D), Austria (A), and Switzerland (CH) (aka DACH).

    Special Thanks to Alice, who manages all the MVPs in the DACH area.

    I was also pretty glad to meet the owner of millions of hats, Mr. Jeff Fritz, in person, who asked me to do a lightning talk in front of many program managers during the summit. Five MVPs were supposed to tell the developer division program managers stories about the worst or best things about the development tools. I was quite nervous, but it worked out well, mostly because Jeff was super cool. I told a horror story about the usage of Visual Studio 2015 and TFS by a customer with a huge number of solutions and a lot more VS projects in them. It was pretty weird to also tell Julia Liuson (Corporate Vice President of Visual Studio) about those problems. But she was really nice and asked the right questions.

    BTW: The power bank (battery pack) we got from Jeff after the lightning talk is the best power bank I ever had. Thanks, Jeff.

    On Thursday, the last summit day for the VS and dev tools MVPs, there was a hackathon. They provided different topics to work on. There was a table for working with Blazor, another one for some IoT things, F#, C#, and even VB.NET, which still seems to be a thing ;-)

    My idea was to play around with Blazor, but I wanted to finalize a contribution to the ASP.NET documentation first. Unfortunately, this took longer than expected, which is why I left that table and took a seat at another one. I fixed an over-localization issue in the German ASP.NET documentation and took care of an issue in LightCore. In LightCore we currently have an open issue regarding some special registrations done by ASP.NET Core. We thought it was caused by registrations made after the IServiceProvider was created, but David Fowler told me the provider is immutable and pointed me to the registrations of open generics. LightCore already supports open generics, but implemented the resolution in a wrong way: in case a registration for a list of generics is not found, LightCore should return an empty list instead of null.

    It was amazing how fast David Fowler pointed me to the right problem. Those guys are crazy smart. Just a few seconds after I showed him the missing registration, I got the right answer. Right after that, Glen Condron told me how to isolate this issue and test it. Problem found, and I just need to fix it.

    Thanks guys :-)

    The last days in Seattle

    I also spent the last two nights at the same location near Pike Place. Right after the hackathon, I grabbed my luggage at the conference hotel and took the express bus to Seattle again. I had a nice dinner together with André Krämer at the Pike Brewery. The next morning I had an amazingly yummy breakfast in a small restaurant at the Pike Place Market, with a pretty cool morning view of the waterfront. Together with Kostja Klein, I had a nice chat about this and that, INETA Germany, and JustCommunity.

    The last day is usually also the time to buy some souvenirs for the kids, my lovely wife, and the Mexican exchange student who lives in our house. I also finished the blog series about React and ASP.NET Core.

    On my last morning in Seattle, I stumbled down Pike Street into the Starbucks for a small breakfast. It was pretty early at Pike Place:

    Pike Place Seattle

    Leaving the Seattle area and the summit feels a little bit like leaving a second home.

    I'm really looking forward to the next summit :-)

    BTW: Seattle isn't all about rainy and cloudy weather

    Have I already told you that every time I visited Seattle, it was sunny and warm?

    It's because of me, I think.

    During the last summits it was sunny whenever I visited downtown Seattle. In summer 2012, I was in a pretty warm and sunny Seattle together with my family.

    This time it was quite warm during the first days. It started to rain when I left Seattle for the summit locations in Bellevue and Redmond, and it was sunny and warm again when I moved back to downtown Seattle.

    It's definitely because of me, I'm sure. ;-)

    Or maybe the rainy cloudy Seattle is a different one ;-)

    Topics I'll write about

    Some of the topics I'm allowed to write about, and definitely will write about in the next posts, are the following:

    • News on ASP.NET Core 2.1
    • News on ASP.NET (yes, it is still alive)
    • New features in C# 7.x
    • Live Share
    • Blazor

    Stefan Henneken: IEC 61131-3: The 'Observer' Pattern

    The Observer pattern is suitable for applications that require one or more function blocks to be notified as soon as the state of a certain function block changes. The assignment of the communication participants can be changed at runtime.

    In almost every IEC 61131-3 program, function blocks exchange state with each other. In the simplest case, the output of one FB is assigned to the input of another FB.

    Pic01

    This makes it quite easy to exchange state between function blocks. But this simplicity has its price:

    Inflexible. The assignment between fbSensor and the three instances of FB_Actuator is hard-coded in the program. A dynamic assignment between the FBs at runtime is not possible.

    Fixed dependencies. The data type of the output variable of FB_Sensor must be compatible with the input variable of FB_Actuator. If there is a new sensor block whose output variable is incompatible with the previous data type, the data type of the actuators inevitably has to be adapted as well.

    The task

    The following example shows how, with the help of the Observer pattern, the fixed assignment between the communication participants can be avoided. The sensor reads a measured value from a data source (e.g. a temperature), while the actuator performs actions depending on a measured value (e.g. a temperature control). The communication between the participants should be changeable. If the disadvantages mentioned above are to be eliminated, two fundamental OO design principles help:

    • Identify the areas that remain constant and separate them from those that change.
    • Never program directly against an implementation, but always against interfaces. The assignment between input and output variables must therefore no longer be hard-coded.

    This can be implemented elegantly with the help of interfaces that define the communication between the FBs. There is no longer a fixed assignment of input and output variables. This creates a loose coupling between the participants. Software design based on loose coupling makes it possible to build flexible software systems that cope better with change, because the dependencies between the participants are minimized.

    Definition of the Observer pattern

    The Observer pattern provides an efficient communication mechanism between several participants, in which one or more participants depend on the state of a single participant. The participant that provides a state is called the subject (FB_Sensor). The participants that depend on this state are called observers (FB_Actuator).

    The Observer pattern is often compared to a newspaper subscription service. The publisher is the subject, while the subscribers are the observers. A subscriber has to register with the publisher and may also specify which information is desired. The publisher maintains a list of all subscribers. As soon as a new publication is available, the publisher sends the desired information to all subscribers on the list.

    This is expressed more formally in the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson, and Vlissides:

    "The Observer pattern defines a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically."

    Implementation

    How the subject obtains its data and how the observer processes the data further will not be discussed here.

    Observer

    The observer is notified by the subject via the Update() method whenever the value changes. Since this behavior is the same for all observers, the interface I_Observer is defined, which all observers implement.

    The function block FB_Observer also defines a property that returns the current actual value.

    Pic02 Pic03

    Since the data is exchanged via a method, no further inputs or outputs are necessary.

    FUNCTION_BLOCK PUBLIC FB_Observer IMPLEMENTS I_Observer
    VAR
      fValue      : LREAL;
    END_VAR
    

    Here is the implementation of the Update() method:

    METHOD PUBLIC Update
    VAR_INPUT
      fValue      : LREAL;
    END_VAR
    THIS^.fValue := fValue;
    

    and the property fActualValue:

    PROPERTY PUBLIC fActualValue : LREAL
    fActualValue := THIS^.fValue;
    

    Subject

    The subject maintains a list of observers. The individual observers can register and unregister via the Attach() and Detach() methods.

    Pic04 Pic05

    Since all observers implement the interface I_Observer, the list is of type ARRAY [1..Param.cMaxObservers] OF I_Observer. The exact implementation of the observers does not have to be known here. Further variants of observers can be created; as long as they implement the interface I_Observer, the subject can communicate with them.

    The Attach() method takes the interface pointer to the observer as a parameter. Before it is stored in the list (line 23), the method checks whether the pointer is valid and not already contained in the list.

    METHOD PUBLIC Attach : BOOL
    VAR_INPUT
      ipObserver              : I_Observer;
    END_VAR
    VAR
      nIndex                  : INT := 0;
    END_VAR
    
    Attach := FALSE;
    IF (ipObserver = 0) THEN
      RETURN;
    END_IF
    // is the observer already registered?
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] = ipObserver) THEN
        RETURN;
      END_IF
    END_FOR
    
    // save the observer object into the array of observers and send the actual value
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] = 0) THEN
        THIS^.aObservers[nIndex] := ipObserver;
        THIS^.aObservers[nIndex].Update(THIS^.fValue);
        Attach := TRUE;
        EXIT;
      END_IF
    END_FOR
    

    The Detach() method also takes the interface pointer to the observer as a parameter. If the interface pointer is valid, the list is searched for the observer and the corresponding entry is cleared (line 15).

    METHOD PUBLIC Detach : BOOL
    VAR_INPUT
      ipObserver     : I_Observer;
    END_VAR
    VAR
      nIndex         : INT := 0;
    END_VAR
    
    Detach := FALSE;
    IF (ipObserver = 0) THEN
      RETURN;
    END_IF
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] = ipObserver) THEN
        THIS^.aObservers[nIndex] := 0;
        Detach := TRUE;
      END_IF
    END_FOR
    

    If a state change occurs in the subject, the Update() method is called on all valid interface pointers in the list (line 8). This functionality resides in the private method Notify().

    METHOD PRIVATE Notify
    VAR
      nIndex          : INT := 0;
    END_VAR
    
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.aObservers[nIndex] <> 0) THEN
        THIS^.aObservers[nIndex].Update(THIS^.fActualValue);
      END_IF
    END_FOR
    

    In this example, the subject generates a random value every second and then notifies the observers via the Notify() method.

    FUNCTION_BLOCK PUBLIC FB_Subject IMPLEMENTS I_Subject
    VAR
      fbDelay                : TON;
      fbDrand                : DRAND;
      fValue                 : LREAL;
      aObservers             : ARRAY [1..Param.cMaxObservers] OF I_Observer;
    END_VAR
    
    // creates every sec a random value and invoke the update method
    fbDelay(IN := TRUE, PT := T#1S);
    IF (fbDelay.Q) THEN
      fbDelay(IN := FALSE);
      fbDrand(SEED := 0);
      fValue := fbDrand.Num * 1234.5;
      Notify();
    END_IF
    

    In the subject there is no statement that accesses FB_Observer directly. Access always takes place indirectly via the interface I_Observer. An application can be extended with arbitrary observers; as long as they implement the interface I_Observer, no changes to the subject are necessary.

    Pic06

    Application

    The following block is intended to help test the sample program. It creates one subject and two observers. By setting the corresponding helper variables, the two observers can be attached to and detached from the subject at runtime.

    PROGRAM MAIN
    VAR
      fbSubject               : FB_Subject;
      fbObserver1             : FB_Observer;
      fbObserver2             : FB_Observer;
      bAttachObserver1        : BOOL;
      bAttachObserver2        : BOOL;
      bDetachObserver1        : BOOL;
      bDetachObserver2        : BOOL;
    END_VAR
    
    fbSubject();
    
    IF (bAttachObserver1) THEN
      fbSubject.Attach(fbObserver1);
      bAttachObserver1 := FALSE;
    END_IF
    IF (bAttachObserver2) THEN
      fbSubject.Attach(fbObserver2);
      bAttachObserver2 := FALSE;
    END_IF
    IF (bDetachObserver1) THEN
      fbSubject.Detach(fbObserver1);
      bDetachObserver1 := FALSE;
    END_IF
    IF (bDetachObserver2) THEN
      fbSubject.Detach(fbObserver2);
      bDetachObserver2 := FALSE;
    END_IF
    

    Sample 1 (TwinCAT 3.1.4022) on GitHub

    Optimizations

    Subject: interface or base class?

    The need for the interface I_Observer is obvious in this implementation. The accesses to the observers are decoupled from the implementation by the interface.

    The interface I_Subject, on the other hand, does not appear necessary here, and indeed it could be omitted. I have included it anyway, since it keeps the option open to create special variants of FB_Subject. For example, there could be a function block that does not organize the observer list in an array. The methods for attaching and detaching the different observers could then be accessed generically via the interface I_Subject.

    The disadvantage of the interface, however, is that the code for attaching and detaching has to be re-implemented every time, even when the application does not require it. Instead, a base class (FB_SubjectBase) for the subject seems more sensible. The bookkeeping code for the Attach() and Detach() methods could be moved into this base class. If there is a need to create a special subject (FB_SubjectNew), it can inherit from this base class (FB_SubjectBase).

    But what if this special function block (FB_SubjectNew) already inherits from another base class (FB_Base)? Multiple inheritance is not possible (although several interfaces can be implemented).

    In this case it makes sense to embed the base class in the new function block, i.e. to create a local instance of FB_SubjectBase.

    FUNCTION_BLOCK PUBLIC FB_SubjectNew EXTENDS FB_Base IMPLEMENTS I_Subject
    VAR
      fValue              : LREAL;
      fbSubjectBase       : FB_SubjectBase;
    END_VAR
    

    In den Methoden Attach() und Detach() kann dann auf diese lokale Instanz zugegriffen werden.

Method Attach():

    METHOD PUBLIC Attach : BOOL
    VAR_INPUT
      ipObserver          : I_Observer;
    END_VAR
    
    Attach := FALSE;
    IF (THIS^.fbSubjectBase.Attach(ipObserver)) THEN
      ipObserver.Update(THIS^.fValue);
      Attach := TRUE;
    END_IF
    

Method Detach():

    METHOD PUBLIC Detach : BOOL
    VAR_INPUT
  ipObserver          : I_Observer;
    END_VAR
    Detach := THIS^.fbSubjectBase.Detach(ipObserver);
    

Method Notify():

    METHOD PRIVATE Notify
    VAR
      nIndex              : INT := 0;
    END_VAR
    
    FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.fbSubjectBase.aObservers[nIndex] <> 0) THEN
    THIS^.fbSubjectBase.aObservers[nIndex].Update(THIS^.fValue);
      END_IF
    END_FOR
    

The new subject thus implements the interface I_Subject, inherits from the function block FB_Base, and can access the functionality of FB_SubjectBase via the embedded instance.

    Pic07

Example 2 (TwinCAT 3.1.4022) on GitHub

Update: Push or Pull Method?

There are two variants for how the observer obtains the desired information from the subject:

With the push method, all information is passed to the observer via the Update method, so a single method call covers the entire exchange of information. In the example, the subject always passed just one variable of type LREAL, but depending on the application it can be considerably more data. Yet not every observer always needs all the information passed to it. Extensions also become harder: what happens if the Update() method is extended with additional data? All observers then have to be adapted. A remedy is to use a dedicated function block as the parameter. This function block encapsulates all necessary information in properties; if further properties are added, the Update method does not have to be changed.
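
As a sketch of this push variant with such a data carrier: the interface I_NotificationData, its Value property, and the observer variable fLastValue are made up for this illustration and are not part of the sample project.

METHOD Update
VAR_INPUT
  ipData              : I_NotificationData;  // hypothetical: exposes all values of the subject as properties
END_VAR

IF (ipData = 0) THEN
  RETURN;
END_IF
THIS^.fLastValue := ipData.Value;            // each observer reads only the properties it needs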

If the pull method is implemented, the observer receives only a minimal notification and then fetches all the information it needs from the subject itself. Two conditions must be met for this: first, the subject has to expose all data as properties; second, the observer needs a reference to the subject so that it can access those properties. One approach is for the Update method to receive a reference to the subject (i.e. to itself) as a parameter.
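
A corresponding sketch of the pull variant; the interface I_SubjectData with its Value property is again made up for illustration, and the subject's Notify() would then call Update(THIS^), which requires the subject itself to implement that interface:

METHOD Update
VAR_INPUT
  ipSubject           : I_SubjectData;       // hypothetical: reference to the subject, exposing its data as properties
END_VAR

IF (ipSubject = 0) THEN
  RETURN;
END_IF
THIS^.fLastValue := ipSubject.Value;         // the observer pulls exactly the data it needs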

Of course, both variants can be combined: the subject exposes all relevant data as properties, while the Update method also delivers a reference to the subject and passes the most important information as a function block. This is the classic approach taken by numerous GUI libraries.

Tip: If the subject knows little about its observers, the pull method is preferable. If, on the other hand, the subject knows its observers (because there can only be a few different kinds of observers), the push method should be used.

Holger Schwichtenberg: Community conference in Magdeburg in April, starting at 40 euros per day

The Magdeburg Developer Days are entering their third edition, this time running for three days, from April 9 to 11, 2018.

Holger Schwichtenberg: GroupBy still does not quite work in Entity Framework Core 2.1 Preview 1

Aggregate operators such as Min(), Max(), Sum(), and Average() work, but Count() does not.

    Manfred Steyer: Custom Schematics - Part IV: Frictionless Library Setup with the Angular CLI and Schematics


    Table of Contents

    This blog post is part of an article series.


    Thanks a lot to Hans Larsen from the Angular CLI team for reviewing this article.

It's always the same: after npm installing a new library, we have to follow a readme step by step to include it in our application. Usually this involves creating configuration objects, referencing css files, and importing Angular Modules. As such tasks aren't fun at all, it would be nice to automate this.

    This is exactly what the Angular CLI supports beginning with Version 6 (Beta 5). It gives us a new ng add command that fetches an npm package and sets it up with a schematic -- a code generator written with the CLI's scaffolding tool Schematics. To support this, the package just needs to name this schematic ng-add.

    In this article, I show you how to create such a package. For this, I'll use ng-packagr and a custom schematic. You can find the source code in my GitHub account.

If you don't have an overview of Schematics yet, you should look up the well-written introduction in the Angular Blog before proceeding here.

    Goal

To demonstrate how to leverage ng add, I'm using an example with a very simple logger library here. It is just complex enough to explain how everything works, but it is not intended for production. After installing it, one has to import it into the root module using forRoot:

[...]
import { LoggerModule } from '@my/logger-lib';

@NgModule({
  imports: [
    [...],
    LoggerModule.forRoot({ enableDebug: true })
  ],
  [...]
})
export class AppModule { }

    As you see in the previous listing, forRoot takes a configuration object. After this, the application can get hold of the LoggerService and use it:

[...]
import { LoggerService } from '@my/logger-lib';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  constructor(private logger: LoggerService) {
    logger.debug('Hello World!');
    logger.log('Application started');
  }
}

    To prevent the need for importing the module manually and for remembering the structure of the configuration object, the following sections present a schematic for this.

Schematics is currently an Angular Labs project. Its public API is experimental and can change in the future.

    Angular Labs

    Getting Started

    To get started, you need to install version 6 of the Angular CLI. Make sure to fetch Beta 5 or higher:

    npm i -g @angular/cli@~6.0.0-beta
    

    You also need the Schematics CLI:

    npm install -g @angular-devkit/schematics-cli
    

    The above mentioned logger library can be found in the start branch of my sample:

    git clone https://github.com/manfredsteyer/schematics-ng-add
    cd schematics-ng-add
    git checkout start
    

    After checking out the start branch, npm install its dependencies:

    npm install
    

    If you want to learn more about setting up a library project from scratch, I recommend the resources outlined in the readme of ng-packagr.

    Adding an ng-add Schematic

As we have everything in place now, let's add a schematics project to the library. For this, we just need to run the Schematics CLI's blank schematic in the project's root:

    schematics blank --name=schematics
    

    This generates the following folder structure:

    Generated Schematic

    The folder src/schematics contains an empty schematic. As ng add looks for an ng-add schematic, let's rename it:

    Renamed Schematic

    In the index.ts file in the ng-add folder we find a factory function. It returns a Rule for code generation. I've adjusted its name to ngAdd and added a line for generating a hello.txt.

import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';

export function ngAdd(): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    tree.create('hello.txt', 'Hello World!');
    return tree;
  };
}

    The generation of the hello.txt file represents the tasks for setting up the library. We will replace it later with a respective implementation.

As our schematic will be looked up in the collection.json later, we also have to adjust it:

    { "$schema": "../node_modules/@angular-devkit/schematics/collection-schema.json", "schematics": { "ng-add": { "description": "Initializes Library", "factory": "./ng-add/index#ngAdd" } } }

    Now, the name ng-add points to our rule -- the ngAdd function in the ng-add/index.ts file.

    Adjusting the Build Script

In the current project, ng-packagr is configured to put the library build created from our sources into the folder dist/lib. The respective settings can be found within the ngPackage node in the package.json. When I'm mentioning package.json here, I'm referring to the project root's package.json and not to the generated one in the schematics folder.
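
For orientation, such an ngPackage node looks roughly like the following; the exact keys and paths depend on the ng-packagr version used, so treat this as an assumption rather than the project's actual configuration:

{
  [...],
  "ngPackage": {
    "lib": {
      "entryFile": "public_api.ts"
    },
    "dest": "dist/lib"
  },
  [...]
}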

To make use of our schematic, we have to make sure it is compiled and copied over to this folder. For the latter task, I'm using the cpr npm package, which we need to install in the project's root:

    npm install cpr --save-dev
    

    In order to automate the mentioned tasks, add the following scripts to the package.json:

    [...] "scripts": { [...], "build:schematics": "tsc -p schematics/tsconfig.json", "copy:schematics": "cpr schematics/src dist/lib/schematics --deleteFirst", [...] }, [...]

    Also, extend the build:lib script so that the newly introduced scripts are called:

    [...] "scripts": { [...] "build:lib": "ng-packagr -p package.json && npm run build:schematics && npm run copy:schematics", [...] }, [...]

    When the CLI tries to find our ng-add schematic, it looks up the schematics field in the package.json. By definition it points to the collection.json which in turn points to the provided schematics. Hence, let's add this field to our package.json too:

    { [...], "schematics": "./schematics/collection.json", [...] }

Please note that the mentioned path is relative to the folder dist/lib, to which ng-packagr copies the package.json.

    Test the Schematic Directly

    For testing the schematic, let's build the library:

    npm run build:lib
    

    After this, move to the dist/lib folder and run the schematic:

    schematics .:ng-add
    

    Testing the ng-add schematic

Even though the output mentions that a hello.txt is generated, you won't find it, because executing a schematic locally performs a dry run by default. To get the file, set the dry-run switch to false:

    schematics .:ng-add --dry-run false
    

    After we've seen that this works, generate a new project with the CLI to find out whether our library plays together with the new ng add:

    ng new demo-app
    cd demo-app
    ng add ..\logger-lib\dist\lib
    

    ng add with relative path

    Make sure that you point to our dist/lib folder. Because I'm working on Windows, I've used backslashes here. For Linux or Mac, replace them with forward slashes.
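
For instance, on Linux or Mac the same call would look like this:

ng add ../logger-lib/dist/lib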

If everything worked, we should see a hello.txt.

    As ng add is currently not adding the installed dependency to your package.json, you should do this manually. This might change in future releases.
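
That means adding an entry like the following to the dependencies of the demo app's package.json; the version number is only an example:

{
  [...]
  "dependencies": {
    [...]
    "@my/logger-lib": "^1.0.0"
  }
}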

    Test the Schematic via an npm Registry

As we know now that everything works locally, let's also check whether it works when we install it via an npm registry. For this, we can, for instance, use verdaccio -- a very lightweight, Node-based npm registry. You can directly npm install it:

    npm install -g verdaccio
    

    After this, it is started by simply running the verdaccio command:

    Running verdaccio

    Before we can publish our library to verdaccio, we have to remove the private flag from our package.json or at least set it to false:

    { [...] "private": false, [...] }

    To publish the library, move to your project's dist/lib folder and run npm publish:

    npm publish --registry http://localhost:4873
    

    Don't forget to point to verdaccio using the registry switch.

    Now, let's switch over to the generated demo-app. To make sure our registry is used, create an .npmrc file in the project's root:

    @my:registry=http://localhost:4873
    

    This entry causes npm to look up each library with the @my scope in our verdaccio instance.

    After this, we can install our logger library:

    ng add @my/logger-lib
    

    ng add

If everything worked, we should find our library in the node_modules/@my/logger-lib folder and the generated hello.txt in the root.

    Extend our Schematic

So far, we've created a library with a prototypical ng-add schematic that is automatically executed when the library is installed with ng add. As we know that our setup works, let's extend the schematic to set up the LoggerModule as shown in the beginning.

Frankly, modifying existing code in a safe way is a bit more complicated than what we've seen before. But I'm sure we can accomplish this together ;-).

For this endeavour, our schematic has to modify the project's app.module.ts file. The good news is that this is a common task the CLI performs, and hence its schematics already contain the necessary logic. However, at the time of writing, the respective routines are not part of the public API, so we have to fork them.

For this, I've checked out the Angular DevKit and copied the contents of its packages/schematics/angular/utility folder to my library project's schematics/src/utility folder. Because those files are subject to change, I've preserved their current state there.

    Now, let's add a Schematics rule for modifying the AppModule. For this, move to our schematics/src/ng-add folder and add a add-declaration-to-module.rule.ts file. This file gets an addDeclarationToAppModule function that takes the path of the app.module.ts and creates a Rule for updating it:

import { Rule, Tree, SchematicsException } from '@angular-devkit/schematics';
import { normalize } from '@angular-devkit/core';
import * as ts from 'typescript';
import { addSymbolToNgModuleMetadata } from '../utility/ast-utils';
import { InsertChange } from "../utility/change";

export function addDeclarationToAppModule(appModule: string): Rule {
  return (host: Tree) => {
    if (!appModule) {
      return host;
    }

    // Part I: Construct path and read file
    const modulePath = normalize('/' + appModule);
    const text = host.read(modulePath);
    if (text === null) {
      throw new SchematicsException(`File ${modulePath} does not exist.`);
    }
    const sourceText = text.toString('utf-8');
    const source = ts.createSourceFile(modulePath, sourceText, ts.ScriptTarget.Latest, true);

    // Part II: Find out, what to change
    const changes = addSymbolToNgModuleMetadata(source, modulePath, 'imports', 'LoggerModule',
      '@my/logger-lib', 'LoggerModule.forRoot({ enableDebug: true })');

    // Part III: Apply changes
    const recorder = host.beginUpdate(modulePath);
    for (const change of changes) {
      if (change instanceof InsertChange) {
        recorder.insertLeft(change.pos, change.toAdd);
      }
    }
    host.commitUpdate(recorder);

    return host;
  };
}

    Most of this function has been "borrowed" from the Angular DevKit. It reads the module file and calls the addSymbolToNgModuleMetadata utility function copied from the DevKit. This function finds out what to modify. Those changes are applied to the file using the recorder object and its insertLeft method.

    To make this work, I had to tweak the copied addSymbolToNgModuleMetadata function a bit. Originally, it imported the mentioned Angular module just by mentioning its name. My modified version has an additional parameter which takes an expression like LoggerModule.forRoot({ enableDebug: true }). This expression is put into the module's imports array.

Even though this only takes some minor changes, the whole addSymbolToNgModuleMetadata function is rather long. That's why I'm not printing it here, but you can look it up in my solution.
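
For orientation, the adjusted signature roughly corresponds to the call shown above; the parameter names are my guesses, and only the additional last parameter is the actual change:

export function addSymbolToNgModuleMetadata(
  source: ts.SourceFile,
  ngModulePath: string,
  metadataField: string,     // e.g. 'imports'
  symbolName: string,        // e.g. 'LoggerModule'
  importPath: string,        // e.g. '@my/logger-lib'
  expression?: string        // e.g. 'LoggerModule.forRoot({ enableDebug: true })'
): Change[] {
  [...]
}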

    After this modification, we can call addDeclarationToAppModule in our schematic:

import { Rule, SchematicContext, Tree, chain, branchAndMerge } from '@angular-devkit/schematics';
import { addDeclarationToAppModule } from './add-declaration-to-module.rule';

export function ngAdd(): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    const appModule = '/src/app/app.module.ts';
    let rule = branchAndMerge(addDeclarationToAppModule(appModule));
    return rule(tree, _context);
  };
}

    Now, we can test our Schematic as shown above. To re-publish it to the npm registry, we have to increase the version number in the package.json. For this, you can make use of npm version:

    npm version minor
    

    After re-building it (npm run build:lib) and publishing the new version to verdaccio (npm publish --registry http://localhost:4873), we can add it to our demo app:

    Add extended library

    Conclusion

    An Angular-based library can provide an ng-add Schematic for setting it up. When installing the library using ng add, the CLI calls this schematic automatically. This innovation has a lot of potential and will dramatically lower the entry barrier for installing libraries in the future.

MSDN Team Blog AT [MS]: New Azure Regions

Microsoft today announced a major expansion of its Azure data centers in Europe: two new Azure regions in Switzerland, two additional regions in Germany, and the go-live of the two completed regions in France. Unlike the existing ones, the two additional regions in Germany will be part of the international Azure data center network. Dedicated data storage in Germany remains possible, as does using the scalability and resilience offered in combination with Ireland, the Netherlands, France, and soon also Switzerland.

In addition to Europe, two regions in the United Arab Emirates were also announced.

     
