Holger Schwichtenberg: The Highlights of Build 2018

The Dotnet-Doktor summarizes the key news from Microsoft's Build 2018 conference.

Holger Schwichtenberg: Microsoft Build 2018: What Can We Expect?

Microsoft's developer conference "Build 2018" starts on Monday, May 7, 2018, at 5 p.m. German time with the first keynote. Microsoft will most likely once again announce news about .NET, .NET Core, Visual Studio, Azure and Windows.

Manfred Steyer: The new Treeshakable Providers API in Angular: Why, How and Cycles

Source code: https://github.com/manfredsteyer/treeshakable-providers-demo

Big thanks to Alex Rickabaugh from the Angular team for discussing this topic with me and for giving me some valuable hints.


Treeshakable providers come with a new optional API that helps tools like webpack or rollup to get rid of unused services during the build process. "Optional" means that you can still go with the existing API you are used to. Besides smaller bundles, this innovation also allows a more direct and easier way of declaring services. It might also be a first foretaste of a future where modules are optional.

In this post, I show several options for using this new API and also point to some pitfalls one might run into. The source code I'm using here can be found in my GitHub repository. Please note that each branch represents one of the scenarios mentioned below.

Why and (a first) How?

First of all, let me explain why we need treeshakable providers. For this, let's have a look at the following example that uses the traditional API:

@NgModule({
  [...]
  providers: [
    { provide: FlightService, useClass: FlightService } // Alternative: FlightService
  ]
  [...]
})
export class FlightBookingModule { }

Let's assume our AppModule imports the FlightBookingModule shown above. In this case, we have the following dependencies:

Traditional API

Here you can see that the AppModule always indirectly references our service, regardless of whether it uses it or not. Hence, tree-shaking tools decide against removing it from the bundle, even if it is not used at all.

To mitigate this issue, the core team found a solution that follows a simple idea: Turning around one of the arrows:

Treeshakable providers: the dependency is reversed

In this case, the AppModule only has a dependency on the service when it actually uses it (directly or indirectly).

To express this in your code, just make use of the providedIn property within the Injectable decorator:

@Injectable({ providedIn: 'root' })
export class FlightService {
  constructor(private http: HttpClient) {}
  [...]
}

This property points to a module, and the service will be put into this module's injection scope. The value 'root' is just a shortcut for the root injector's scope, i.e. the application-wide scope. Please note that this scope is also used by all other eagerly loaded (= not lazy-loaded) modules. Only lazy-loaded modules as well as components get their own scope, which inherits from the root scope. For this reason, you will very likely use 'root' in most cases.

One nice thing about this API is that we don't have to modify the module anymore to register the service. This means that we can inject the service immediately after writing it.
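For instance, a consuming component can request the service right away via its constructor (a minimal sketch, analogous to the component shown later in this post; no providers entry in any module is required):

@Component({ [...] })
export class FlightSearchComponent {
  // FlightService registers itself via providedIn: 'root', so no module changes are needed
  constructor(private flightService: FlightService) { }
}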

Why Providers and not Components?

Now you might wonder why the very same situation doesn't prevent tree shaking for components or other declarations. The answer is: it does. That's why the Angular team wrote the build optimizer, which is used by the CLI when creating a production build. One of its tasks is removing the component decorator with its metadata, as it is not needed after AOT compilation and prevents tree shaking, as shown above.

However, providers are a bit special: They are registered with a specific injection scope and provide a mapping between a token and a service. All this metadata is needed at runtime. Hence, the Angular team needed to go one step further, and this led to the API for treeshakable providers we are looking at here.

Indirections

The reason we are using dependency injection is that it allows for configuring indirections between a requested token and a provided service.

For this, you can use known properties like useClass within the Injectable decorator to point to the service to inject:

@Injectable({
  providedIn: 'root',
  useClass: AdvancedFlightService,
  deps: [HttpClient]
})
export class FlightService {
  constructor(private http: HttpClient) {}
  [...]
}

This means that every component and service requesting a FlightService gets an AdvancedFlightService.

When I wrote this using version 6.0.0, I noticed that we have to mention the dependencies of the service useClass points to in the deps array. Otherwise, Angular uses the tokens from the current constructor. In the example shown, both classes expect an HttpClient, so the deps array would not strictly be needed. I expect that future versions will solve this issue so that we don't need the deps array for useClass.

In addition to useClass, you can also use the other known options: useValue, useFactory and useExisting. Multi providers do not seem to be supported by treeshakable providers, which makes sense: with this variety, the token should not know the individual services in advance.

This means we have to use the traditional API for this. As an alternative, we could build our own multi-provider implementation by leveraging factories. I've included such an implementation in my examples; you can look it up here, and a rough sketch of the idea follows below.
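Just to illustrate the direction such a workaround can take, here is a rough sketch that leans on the InjectionToken API discussed later in this post. The appender types are hypothetical names for illustration only; this is not the implementation from the repository:

import { InjectionToken, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';

export interface LogAppender {
  append(message: string): void;
}

export class ConsoleAppender implements LogAppender {
  append(message: string): void { console.log(message); }
}

export class HttpAppender implements LogAppender {
  constructor(private http: HttpClient) { }
  append(message: string): void { /* e.g. this.http.post(...) */ }
}

// The factory assembles the array itself, which takes over the role of a multi provider
export const LOG_APPENDERS = new InjectionToken<LogAppender[]>('LOG_APPENDERS', {
  providedIn: 'root',
  factory: () => [new ConsoleAppender(), new HttpAppender(inject(HttpClient))]
});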

Abstract Classes as Tokens

In the last example, we needed to make sure that the AdvancedFlightService can replace the FlightService. A super type like an abstract class or an interface would at least ensure compatible method signatures.

If we go with an abstract class, we can also use it as a token. This is a common practice for dependency injection: We are requesting an abstraction and get one of the possible implementations.

Please note that we cannot use an interface as a token, even though this is common in lots of other environments. The reason is that TypeScript removes interfaces during compilation, as JavaScript doesn't have such a concept. However, we need tokens at runtime to request a service, so we cannot go with interfaces.

For this solution, we just need to move our Injectable decorator containing the DI configuration to our abstract class:

@Injectable({
  providedIn: 'root',
  useClass: AdvancedFlightService,
  deps: [HttpClient]
})
export abstract class AbstractFlightService {
  [...]
}

Then, the services can implement this abstract class:

@Injectable()
export class AdvancedFlightService implements AbstractFlightService {
  [...]
}

Now, the consumers are capable of requesting the abstraction to get the configured implementation:

@Component({ [...] })
export class FlightSearchComponent implements OnInit {
  constructor(private flightService: AbstractFlightService) { }
  [...]
}

This looks easy, but there is a pitfall. If you look closely at this example, you will notice a cycle:

Cycle caused by an abstract class pointing to the service that implements it

However, in this very case we are lucky, because we are implementing and not extending the abstract class. This lesser-known feature allows us to treat the abstract class like an interface: TypeScript just uses it to check the methods and their signatures. After this, it removes the reference to it, and this resolves the cycle.

But if we used extends here, the cycle would stay, and this would result in a chicken-and-egg problem causing issues at runtime. To make a long story short: always use implements in such cases.
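To make the difference explicit, here is the same class in both variants (a shortened sketch):

// extends keeps a runtime reference to AbstractFlightService -- the cycle stays:
export class AdvancedFlightService extends AbstractFlightService { [...] }

// implements is only checked at compile time and erased afterwards -- the cycle disappears:
export class AdvancedFlightService implements AbstractFlightService { [...] }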

Registering Services with Lazy Modules

In rare cases, you may want to register a service with the scope of a lazy module. This leads to a separate service instance (its "own singleton") for the lazy module, which can override a service in a parent scope.

For this, providedIn can point to the module in question:

@Injectable({
  providedIn: FlightBookingModule,
  useClass: AdvancedFlightService,
  deps: [HttpClient]
})
export abstract class AbstractFlightService { }

This seems to be easy but it also causes a cycle:

Cycle caused by pointing to a module with providedIn

In a good discussion with Alex Rickabaugh from the Angular team, I found out that we can resolve this cycle by putting services into a separate service module. I've called this module, which just contains services for the feature in question, FlightApiModule:

Resolving cycle by introducing service module

This means we just have to change providedIn to point to the new FlightApiModule:

@Injectable({
  providedIn: FlightApiModule,
  useClass: AdvancedFlightService,
  deps: [HttpClient]
})
export abstract class AbstractFlightService { }

In addition, the lazy module also needs to import the new service module:

@NgModule({
  imports: [
    [...]
    FlightApiModule
  ],
  [...]
})
export class FlightBookingModule { }

Using InjectionTokens

In Angular, we can also use InjectionToken objects to represent tokens. This allows us to create tokens for situations a class is not suitable for. To make this variety treeshakable too, the InjectionToken now takes a service provider:

export const FLIGHT_SERVICE = new InjectionToken<FlightService>('FLIGHT_SERVICE', {
  providedIn: FlightApiModule,
  factory: () => new FlightService(inject(HttpClient))
});

For technical reasons, we have to specify a factory here. As there is no way to infer tokens from a function's signature, we have to use the shown inject method to get services by providing a token. Those services can be passed to the service the factory creates.

Unfortunately, we cannot use inject with tokens represented by abstract classes. Even though Angular supports this, inject's signature does not currently (version 6.0.0) allow for it. The reason might be that TypeScript doesn't have a nice way to express types that point to abstract classes. Hopefully this will be resolved in the future. For instance, Angular could use a workaround or just allow any for tokens. For the time being, we can cast the abstract class to any, as it is compatible with every type.

With this trick, we can create an injection token pointing to a service that uses our AbstractFlightService as a token.

export const BOOKING_SERVICE = new InjectionToken<BookingService>('BOOKING_SERVICE', {
  providedIn: FlightApiModule,
  factory: () => new BookingService(inject(<any>AbstractFlightService))
});

Using Modules to Configure a Module

Even though treeshakable providers come with a nicer API and help us to shrink our bundles, in some situations we have to go with the traditional API. One such situation was already outlined above: Multi-Providers. Another case where we stick with the traditional API is when providing services to configure a module. An example for this is the RouterModule with its static forRoot and forChild that take a router configuration.

For this scenario we still need such static methods returning a ModuleWithProviders instance:

@NgModule({
  imports: [ CommonModule ],
  declarations: [ DemoComponent ],
  providers: [ /* no services */ ],
  exports: [ DemoComponent ]
})
export class DemoModule {
  static forRoot(config: ConfigService): ModuleWithProviders {
    return {
      ngModule: DemoModule,
      providers: [
        { provide: ConfigService, useValue: config }
      ]
    }
  }
}

Martin Richter: Getting notified when the Symantec Endpoint Protection Manager (SEPM) does not update its virus definitions

We have been using Symantec Endpoint Protection in my company for years, currently version 14.0.1.
Basically, the thing does what it is supposed to do. But... there is one case for which Symantec's toolbox contains no tool to solve the problem.

What happens?

Ideally, you don't want to hear or see anything from an antivirus system. It should work, and that's it.
Especially in a company with 5, 10, 20 or more clients.
The SEPM (Symantec Endpoint Protection Manager) dutifully notifies me when stations still have old virus definitions after several days, or when a certain number of PCs with old virus definitions has been reached. For us, these are often machines that are on the road or have not been switched on for a long time.

But there is one case in which the SEPM fails completely: when the SEPM itself does not receive new virus definitions. For whatever reason!

Over the past years I have repeatedly had cases in which the SEPM did not download new virus definitions from Symantec. The reasons varied. Sometimes the SEPM had no internet access because of a configuration error, sometimes the SEPM did not even start after a Windows update.
But in most cases the SEPM was simply unable to download the new signatures although it indicated that some were available for download.

The last case is particularly annoying. I have already had two support cases open on this topic, but the genuinely dedicated supporters still could not figure anything out.
After restarting the service or the server, it almost always worked again. So apparently something had just gotten "stuck" internally!

This case is dangerous, though. You don't notice any of it until, after a few days, a certain number of PCs end up with old virus definitions. With our settings, that is 10% of the machines after 4 days. You can lower these thresholds, but then the warnings are mostly just annoying without a compelling reason.
And in this case you can't even just restart the SEPM on a trial basis.
Actually, I don't want any notification at all; the system should first try to resolve detected problems on its own.
Above all, I have no desire to task somebody with launching that silly console once a day to check what's going on. I get emails about everything else anyway.

I find such a long latency, during which nobody notices that the AV signatures are outdated, simply dangerous.
But there are no built-in means to warn about this!
Moreover, this case kept recurring roughly every 6 to 9 weeks.
And that is annoying.

So I went looking for a solution and wrote two small jobs for the SQL Server that holds our data, which are described below.
These jobs have been running for a few months now and have already fixed this problem "on their own" several times...

Job 1: Symantec Virus Signature Check

This job runs once every hour.
The code simply does the following:

  • If there has been a change in the signatures within the last 32 hours (see the value of @delta), everything is OK.
  • If there was no signature update, two steps are initiated.
  • A warning is sent to the admin via the internal SQL mail service.
  • Then a second job named Symantec Restart (Job 2 below) is started.

The 32 hours are a value based on experience. In 98% of all cases, signatures are updated within 24 hours, but there are a few exceptions.

If the email shows up more than twice, I probably have to get active and check things manually.

DECLARE @delta INT
-- number of hours
SET @delta = 32 
DECLARE @d DATETIME 
DECLARE @t VARCHAR(MAX)
IF NOT EXISTS(SELECT * FROM PATTERN WHERE INSERTDATETIME>DATEADD(hh,-@delta,GETDATE()) AND PATTERN_TYPE='VIRUS_DEFS')
BEGIN
      SET @d = (SELECT TOP 1 INSERTDATETIME FROM PATTERN WHERE PATTERN_TYPE='VIRUS_DEFS' ORDER BY INSERTDATETIME DESC)
      SET @t = 'Hallo Admin!

Die letzten Antivirus-Signaturen wurden am ' + CONVERT(VARCHAR, @d, 120)+' aktualisiert!
Es wird versucht den SEPM Dienst neu zu starten!

Liebe Grüße Ihr
SQLServerAgent'
      EXEC msdb.dbo.sp_send_dbmail @profile_name='Administrator',
				   @recipients='administrator@mydomain.de',
				   @subject='Symantec Virus Definitionen sind nicht aktuell',
				   @body=@t
      PRINT 'Virus Signaturen sind veraltet! Letztes Update: ' + CONVERT(VARCHAR, @d, 120)
      EXEC msdb.dbo.sp_start_job @job_name='Symantec Restart'
      PRINT 'Restart SEPM server!!!'
    END
  ELSE
    BEGIN 
      SET @d = (SELECT TOP 1 INSERTDATETIME FROM PATTERN WHERE PATTERN_TYPE='VIRUS_DEFS' ORDER BY INSERTDATETIME DESC)
      PRINT 'Virus Signaturen sind OK! Letztes Update: ' + CONVERT(VARCHAR, @d, 120)
    END

Job 2: Symantec Restart

This job is only started by Job 1 and is extremely trivial.
It simply executes 2 commands that stop the SEPM and then restart it.

NET STOP SEMSRV
NET START SEMSRV

PS: The sad part was that support offered no help either after I proposed such an approach. They didn't want to give me any information about the table structures. In the end, the search engines were kind enough to provide all the necessary information, because I was not the only one with this problem.



David Tielke: dotnet Cologne 2018 - Slides from my talk on service-oriented architectures

Today I finally kicked off my conference year with dotnet Cologne 2018. At this conference, organized every year by the .NET Usergroup Köln/Bonn e.V., I was able to take part as a speaker for the first time on behalf of my long-standing partner Developer Media. While many new and hip topics were on the agenda, I deliberately focused on the tried and tested and cleared up numerous prejudices and points of criticism regarding one of the most misunderstood architecture patterns of all - service-oriented architectures. Besides the theory of architectures and the way SOA works, the 60-minute level 300 session was above all about one thing: What can we learn from this brilliant architectural style for other architectures? How can a monolithic system architecture be transformed into a flexible and maintainable architecture using aspects of SOA? After the very well attended talk I received a lot of feedback from the attendees, especially from those who could not get a seat anymore. That is why I recorded the topic again as a webcast and published it on my YouTube channel. In addition, as always, the slides are available here as a PDF. Once again I would like to thank all attendees, of course the organizer, and my partner Developer Media for this great conference day. See you next year!

Webcast


Slides

Links
Slides
YouTube channel

Martin Richter: Look at that: SetFilePointer and SetFilePointerEx are actually superfluous if you use ReadFile and WriteFile...

You never stop learning - or rather, you have probably never read the documentation completely and correctly.

If you don't read a file sequentially, the normal approach is to use Seek, then Read/Write, in that order. Or SetFilePointer, then ReadFile/WriteFile.

In a StackOverflow answer I stumbled upon this statement:

you not need use SetFilePointerEx – this is extra call. use explicit offset in WriteFile / ReadFile instead

(spelling not corrected).

But the content was new to me. Look at that: even if you don't use FILE_FLAG_OVERLAPPED, you can use the OVERLAPPED structure and the offsets it contains.
These are even kindly updated after reading/writing.

Quote from MSDN (the text is the same for WriteFile):

Considerations for working with synchronous file handles:

  • If lpOverlapped is NULL, the read operation starts at the current file position and ReadFile does not return until the operation is complete, and the system updates the file pointer before ReadFile returns.
  • If lpOverlapped is not NULL, the read operation starts at the offset that is specified in the OVERLAPPED structure and ReadFile does not return until the read operation is complete. The system updates the OVERLAPPED offset before ReadFile returns.


Manfred Steyer: Micro Apps with Web Components using Angular Elements

Update on 2018-05-04: Updated for @angular/elements in Angular 6
Source code: https://github.com/manfredsteyer/angular-microapp

In one of my last blog posts I compared several approaches for using Single Page Applications, especially Angular-based ones, in a microservice-based environment. Some people call such SPAs micro frontends; others call them Micro Apps.

As you can read in the mentioned post, there is not the one and only perfect approach but several feasible concepts with different advantages and disadvantages.

In this post I'm looking at one of those approaches in more detail: using Web Components. For this, I'm leveraging the new Angular Elements library (@angular/elements) which is available beginning with Angular 6. The source code for the case study described here can be found in my GitHub repo.

Case Study

The case study presented here is as simple as possible. It contains a shell app that dynamically loads and activates micro apps. It also takes care of routing between the apps (meta-routing) and allows them to communicate with each other using message passing. They are just called Client A and Client B. In addition, Client B also contains a widget from Client A.

Client A is activated

Client B with widget from Client A

Project structure

Following the ideas of microservices, each part of the overall solution would be a separate project. This allows different teams to work individually on their parts without much need for coordination.

To make this case study a bit easier, I've decided to use one CLI project with a sub project for each part. This is something the CLI supports beginning with version 6.

You can create a sub project using ng generate application my-sub-project within an existing one.

Using this approach, I've created the following structure:

  + projects
    +--- client-a
         +--- src
    +--- client-b
         +--- src
  + src 

The outer src folder at the end is the folder for the shell application.

Micro Apps as Web Components with Angular Elements

To allow loading the micro apps on demand into the shell, they are exposed as Web Components using Angular Elements. In addition to that, I'm providing further Web Components for stuff I want to share with other Micro Apps.

Using the API of Angular Elements isn't difficult. After npm installing @angular/elements you just need to declare your Angular component with a module and also put it into the entryComponents array. Using entryComponents is necessary because Angular Elements are created dynamically at runtime. Otherwise, the compiler would not know about them.

Then you have to create a wrapper for your component using createCustomElement and register it as a custom element with the browser using its customElements.define method:

import { createCustomElement } from '@angular/elements';
[...]

@NgModule({
  [...]
  bootstrap: [],
  entryComponents: [
    AppComponent,
    ClientAWidgetComponent
  ]
})
export class AppModule {
  constructor(private injector: Injector) { }

  ngDoBootstrap() {
    const appElement = createCustomElement(AppComponent, { injector: this.injector });
    customElements.define('client-a', appElement);

    const widgetElement = createCustomElement(ClientAWidgetComponent, { injector: this.injector });
    customElements.define('client-a-widget', widgetElement);
  }
}

The AppModule above only offers two custom elements. The first one is the root component of the micro app and the second one is a component it shares with other micro apps. Please note that it does not bootstrap a traditional Angular component. Hence, the bootstrap array is empty and we need to introduce an ngDoBootstrap method intended for manual bootstrapping.

If we had traditional Angular components, services, modules, etc., we could also place this code inside of them.

After this, we can use our Angular Components like ordinary HTML elements:

<client-a [state]="someState" (message)="handleMessage($event)"></client-a>

While the last example uses Angular to call the Web Component, this also works with other frameworks and VanillaJS. In this case, we have to use the respective syntax of the hosting solution when calling the component.

When we load web components into another Angular application, we need to register the CUSTOM_ELEMENTS_SCHEMA:

import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';
[...]

@NgModule({
  declarations: [ AppComponent ],
  imports: [ BrowserModule ],
  schemas: [ CUSTOM_ELEMENTS_SCHEMA ],
  providers: [],
  bootstrap: [ AppComponent ]
})
export class AppModule { }

This is necessary to tell the Angular compiler that there will be components it is not aware of. Those components are the web components that are directly executed by the browser.

We also need a polyfill for browsers that don't support Web Components. Hence, I've npm installed @webcomponents/custom-elements and referenced it at the end of the polyfills.ts file:

import '@webcomponents/custom-elements/custom-elements.min';

This polyfill even works with IE 11.

Routing across Micro Apps

One thing that is rather unusual here is that whole clients are implemented as Web Components and hence they use routing:

@NgModule({
  imports: [
    ReactiveFormsModule,
    BrowserModule,
    RouterModule.forRoot([
      { path: 'client-a/page1', component: Page1Component },
      { path: 'client-a/page2', component: Page2Component },
      { path: '**', component: EmptyComponent }
    ], { useHash: true })
  ],
  [...]
})
export class AppModule {
  [...]
}

An interesting thing about this simple routing configuration is that it uses the prefix client-a for all but one route. The last route is a catch-all route displaying an empty component. This makes the application disappear when the current path does not start with its prefix. Using this simple trick, you can allow the shell to switch between apps very easily.
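The empty component itself can be as simple as the following sketch (the selector name is an assumption; the repository may define it differently):

@Component({
  selector: 'app-empty',
  template: '' // renders nothing, so the micro app visually disappears
})
export class EmptyComponent { }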

Please note that I'm using hash-based routing because, after the hash changes, all routers in our micro apps will update their route. Unfortunately, this isn't the case with the default location strategy, which leverages the history API's pushState.

When bootstrapping such components as Web Components we have to initialize the router manually:

@Component([...])
export class ClientAComponent {
  constructor(private router: Router) {
    router.initialNavigation(); // Manually triggering initial navigation
  }
}

Build Process

For building the web components, I'm using a modified version of the webpack configuration from Vincent Ogloblinsky's blog post.

const AotPlugin = require('@ngtools/webpack').AngularCompilerPlugin;
const path = require('path');
const PurifyPlugin = require('@angular-devkit/build-optimizer').PurifyPlugin;
const webpack = require('webpack');

const clientA = {
  entry: './projects/client-a/src/main.ts',
  resolve: {
    mainFields: ['browser', 'module', 'main']
  },
  module: {
    rules: [
      { test: /\.ts$/, loaders: ['@ngtools/webpack'] },
      { test: /\.html$/, loader: 'html-loader', options: { minimize: true } },
      { test: /\.js$/, loader: '@angular-devkit/build-optimizer/webpack-loader', options: { sourceMap: false } }
    ]
  },
  plugins: [
    new AotPlugin({
      skipCodeGeneration: false,
      tsConfigPath: './projects/client-a/tsconfig.app.json',
      hostReplacementPaths: {
        "./src/environments/environment.ts": "./src/environments/environment.prod.ts"
      },
      entryModule: path.resolve(__dirname, './projects/client-a/src/app/app.module#AppModule')
    }),
    new PurifyPlugin()
  ],
  output: {
    path: __dirname + '/dist/shell/client-a',
    filename: 'main.bundle.js'
  },
  mode: 'production'
};

const clientB = {
  [...]
};

module.exports = [clientA, clientB];

In addition to that, I'm using some npm scripts to trigger both the build of the shell and the build of the micro apps. For this, I'm copying the bundles for the micro apps over to the shell's dist folder. This makes testing a bit easier:

"scripts": { "start": "live-server dist/shell", "build": "npm run build:shell && npm run build:clients ", "build:clients": "webpack", "build:shell": "ng build --project shell", [...] }

Loading bundles

After creating the bundles, we can load them into a shell application. A first simple approach could look like this:

<client-a></client-a>
<client-b></client-b>

<script src="client-a/main.bundle.js"></script>
<script src="client-b/main.bundle.js"></script>

This example shows one more time that a web component works just like an ordinary HTML element.

We can also dynamically load the bundles on demand with some lines of simple DOM code. I will present a solution for this a bit later.

Communication between Micro Apps

Even though micro apps should be as isolated as possible, sometimes we need to share some information. The good news here is that we can leverage attributes and events for this.

To implement this idea, our micro apps get a state property the shell can use to send down some application-wide state. They also get a message event to notify the shell:

@Component({ ... })
export class AppComponent implements OnInit {
  @Input('state') set state(state: string) {
    console.debug('client-a received state', state);
  }

  @Output() message = new EventEmitter<any>();
  [...]
}

The shell can now bind to these to communicate with the Micro App:

<client-a [state]="appState" (message)="handleMessage($event)"></client-a>
<client-b [state]="appState" (message)="handleMessage($event)"></client-b>

Using this approach one can easily broadcast messages down by updating the appState. And if handleMessage also updates the appState, the micro apps can communicate with each other.

One thing I want to point out is that this kind of message passing allows inter-app communication without coupling the apps in a strong way.
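To make this concrete, a shell component relaying messages this way could look roughly like the following sketch (the component and property names are assumptions, not the exact code from the repository):

@Component({ [...] })
export class ShellComponent {
  appState = 'init';

  handleMessage(msg): void {
    // Writing the payload back into appState broadcasts it to every
    // micro app bound via [state]="appState"
    this.appState = JSON.stringify(msg.detail);
  }
}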

Dynamically Loading Micro Apps

As web components work like traditional HTML elements, we can dynamically load them into our app using the DOM. For this task, I've created a simple configuration object pointing to all the data we need:

config = {
  "client-a": {
    path: 'client-a/main.bundle.js',
    element: 'client-a'
  },
  "client-b": {
    path: 'client-b/main.bundle.js',
    element: 'client-b'
  }
};

To load one of those clients, we just need to create a script tag pointing to its bundle and an element representing the micro app:

load(name: string): void {
  const configItem = this.config[name];
  const content = document.getElementById('content');

  const script = document.createElement('script');
  script.src = configItem.path;
  script.onerror = () => console.error(`error loading ${configItem.path}`);
  content.appendChild(script);

  const element: HTMLElement = document.createElement(configItem.element);
  element.addEventListener('message', msg => this.handleMessage(msg));
  content.appendChild(element);

  element.setAttribute('state', 'init');
}

handleMessage(msg): void {
  console.debug('shell received message: ', msg.detail);
}

By hooking up an event listener for the message event, the shell can receive information from the micro apps. To send some data down, this example uses setAttribute.

We can even decide when to call the load function for our application. This means we can implement eager loading or lazy loading. For the sake of simplicity, I've opted for the first option:

ngOnInit() {
  this.load('client-a');
  this.load('client-b');
}
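A lazy variant would simply defer the same call, for example until the user activates the respective micro app (a sketch with an assumed click handler and bookkeeping field, not code from the repository):

loaded: { [name: string]: boolean } = {};

activate(name: string): void {
  // Load the micro app's bundle only when it is requested for the first time
  if (!this.loaded[name]) {
    this.load(name);
    this.loaded[name] = true;
  }
}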

Using Widgets from other Micro Apps

Using widgets from other micro apps is also a piece of cake: just create an HTML element. Hence, all Client B has to do to use Client A's widget is this:

<client-a-widget></client-a-widget>

Evaluation

Advantages

  • Styling is isolated from other Microservice Clients due to Shadow DOM or the Shadow DOM Emulation provided by Angular out of the box.
  • Allows for separate development and separate deployment
  • Mixing widgets from different Microservice Clients is possible
  • The shell can be a Single Page Application too
  • We can use different SPA frameworks in different versions for our Microservice Clients

Disadvantages

  • Microservice Clients are not completely isolated, as would be the case when using hyperlinks or iframes instead. This means that they could influence each other in an unplanned way. This also means that there can be conflicts when using different frameworks in different versions.
  • We need polyfills for some browsers
  • We cannot leverage the CLI for generating a self-contained package for every client. Hence, I used webpack.

Tradeoff

  • We have to decide whether we want to import all the libraries once or once per client. This is more or less a matter of bundling. The first option allows optimizing for bundle size; the latter provides more isolation and hence separate development and deployment. These properties are considered valuable architectural goals in the world of microservices.

Holger Schwichtenberg: C# 8.0 Detects More Programming Errors

Reference types will no longer automatically be "nullable"; developers will have to explicitly declare the option of assigning the value null.

Code-Inside Blog: .editorconfig: Sharing a common coding style in a team

Sharing Coding Styles & Conventions

In a team it is really important to set coding conventions and to use a specific coding style, because it helps to maintain the code - a lot. Of course, each developer has his or her own “style”, but some rules should be set, otherwise it will end in a mess.

Typical examples of such rules are “Should I use var or not?” or “Are _ still OK for private fields?”. Those questions shouldn’t be answered in a wiki - they should be part of daily developer life and show up in your IDE!

Be aware that coding conventions are highly debated. In our team it was important to set a common ruleset, even if not everyone is 100% happy with each setting.

Embrace & enforce the conventions

In the past this was the most “difficult” aspect: How do we enforce these rules?

Rules in a Wiki are not really helpful, because if you are in your favorite IDE you might not notice rule violations.

StyleCop was once a thing in the Visual Studio world, but I’m not sure if it is still alive.

Resharper, a pretty useful Visual Studio plugin, comes with its own code convention sharing file, but you will need Resharper to enforce and embrace the conventions.

Introducing: .editorconfig

Last year Microsoft decided to support the .EditorConfig file format in Visual Studio.

The .editorconfig defines a set of common coding styles (think of tabs or spaces) in a very simple format. Different text editors and IDEs support this file, which makes it a good choice if you are using multiple IDEs or working with different setups.

Additionally Microsoft added a couple of C# related options for the editorconfig file to support the C# language features.

Each rule can be marked as “Information”, “Warning” or “Error” - which will light up in your IDE.

Sample

This was a tough choice, but I ended up with the .editorconfig of the CoreCLR. It is more or less the “normal” .NET style guide. I’m not sure if I love the “var” setting and the static private field naming (like s_foobar), but I can live with them, and it was a good starting point for us (and still is).

The .editorconfig file can be saved at the same level as the .sln file, but you could also use multiple .editorconfig files based on the folder structure. Visual Studio should detect the file and pick up the rules.

Benefits

When everything is in place, Visual Studio should surface the results and show those nice light bulbs:


Be aware that I have Resharper installed, and Resharper has its own ruleset, which might conflict with the .editorconfig settings. You need to adjust those settings in Resharper. I’m still not 100% sure how good the .editorconfig support is; sometimes I need to override the baked-in Resharper settings and sometimes it just works. Maybe this page gives a hint.

Getting started?

Just search for a .editorconfig file (or use something from the Microsoft GitHub repositories) and play with the settings. The setup is easy and it’s just a small text file right next to our code. Read more about the customization here.

Related topic

If you are looking for a more powerful option to embrace coding standards, you might want to take a look at Roslyn Analysers:

With live, project-based code analyzers in Visual Studio, API authors can ship domain-specific code analysis as part of their NuGet packages. Because these analyzers are powered by the .NET Compiler Platform (code-named “Roslyn”), they can produce warnings in your code as you type even before you’ve finished the line (no more waiting to build your code to discover issues). Analyzers can also surface an automatic code fix through the Visual Studio light bulb prompt to let you clean up your code immediately

MSDN Team Blog AT [MS]: OpenHack IoT & Data – May 28-30

As CSE (Commercial Software Engineering) we can support customers in implementing challenging cloud projects. At the end of May we are offering a three-day OpenHack on IoT & Data as a readiness measure. For all developers who work on these topics it is pretty much a must.

May 28-30: OpenHack IoT & Data
At the OpenHacks we give the participants, i.e. you, tasks that you have to solve yourselves. This results in an enormously high learning efficiency and a remarkable transfer of knowledge.

Furthermore, in this case you don't even need a project of your own to work on. Accordingly, you don't have to give us any project information either, which can be an important point for some bosses.

Target audience: everyone who can develop: developers, architects, data scientists, …

So if you are actively working on IoT & Data topics, or want to, come yourself or send other developers. You can then really dig into the subject together with the software engineers from CSE.

Oh yes, very important: bring your own laptop and "come prepared to hack!!". No marketing, no sales. Pure hacking!!

The goal is, of course, to motivate you to think about or start your own IoT & Data projects. Just as a little inspiration, here are a few projects that were created in cooperation with CSE: https://www.microsoft.com/developerblog/tag/IoT

Register at: http://www.aka.ms/zurichopenhack

I'm looking forward to seeing you!!

MSDN Team Blog AT [MS]: Build 2018 Public Viewing with BBQ & Beer

 

The Microsoft Build conference is THE event for all software developers working with Microsoft technologies. The keynote always gives a great overview of the latest developments in .NET, Azure, Windows, Visual Studio, AI, IoT, Big Data and more.

Build 2018 Public Viewing with BBQ & Beer
Some are lucky enough to be there live on site. A few, however, have to stay at home.

Is that a reason to be sad? It depends.
At least in Graz, the community is getting together for the Build 2018 Public Viewing with BBQ & Beer.

To get in the mood for the keynote, the Microsoft Developer User Group Graz will start a bit earlier this year with BBQ and beer.

BBQ and beer start at 4:00 p.m.,
followed by the keynote at 5:30 p.m. and a relaxed get-together afterwards.

The Microsoft Developer User Group Graz is looking forward to seeing you, to good food and to an exciting keynote!

 

Stefan Henneken: IEC 61131-3: The Generic Data Type T_Arg

In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be put to good use. In the example described, a function compares two variables to determine whether their data type, data length and content are exactly the same. Instead of implementing a separate function for each data type, the same requirements can be implemented much more elegantly with just one function using the data type ANY.

Some time ago I had a similar task. A method was to be developed that accepts an arbitrary number of parameters. Both the data type and the number of parameters were arbitrary.

In my first attempt, I tried to use an array of variable length of type ARRAY [*] OF ANY. However, arrays of variable length can only be used as VAR_IN_OUT, and the data type ANY only as VAR_INPUT (see also IEC 61131-3: Arrays with variable length). So this approach was ruled out.

As an alternative to the data type ANY, the structure T_Arg is also available. T_Arg is declared in the TwinCAT library Tc2_Utilities and, in contrast to ANY, is also available under TwinCAT 2. The layout of T_Arg is comparable to the structure used for the data type ANY (see also The wonders of ANY).

TYPE T_Arg :
STRUCT
  eType   : E_ArgType    := ARGTYPE_UNKNOWN;     (* Argument data type *)
  cbLen   : UDINT        := 0;                   (* Argument data byte length *)
  pData   : UDINT        := 0;                   (* Pointer to argument data *)
END_STRUCT
END_TYPE

T_Arg can be used anywhere, including in the VAR_IN_OUT section.

The following function adds an arbitrary number of values whose data types can also be arbitrary. The result is returned as LREAL.

FUNCTION F_AddMulti : LREAL
VAR_IN_OUT
  aArgs        : ARRAY [*] OF T_Arg;
END_VAR
VAR
  nIndex	: DINT;
  aUSINT	: USINT;
  aUINT		: UINT;
  aINT		: INT;
  aDINT		: DINT;
  aREAL		: REAL;   
  aLREAL	: LREAL;
END_VAR

F_AddMulti := 0.0;
FOR nIndex := LOWER_BOUND(aArgs, 1) TO UPPER_BOUND(aArgs, 1) DO
  CASE (aArgs[nIndex].eType) OF
    E_ArgType.ARGTYPE_USINT:
      MEMCPY(ADR(aUSINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUSINT;
    E_ArgType.ARGTYPE_UINT:
      MEMCPY(ADR(aUINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUINT;
    E_ArgType.ARGTYPE_INT:
      MEMCPY(ADR(aINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aINT;
    E_ArgType.ARGTYPE_DINT:
      MEMCPY(ADR(aDINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aDINT;
    E_ArgType.ARGTYPE_REAL:
      MEMCPY(ADR(aREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aREAL;
    E_ArgType.ARGTYPE_LREAL:
      MEMCPY(ADR(aLREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aLREAL;
  END_CASE
END_FOR

Calling the function, however, is somewhat more cumbersome than with the data type ANY.

PROGRAM MAIN
VAR
  sum          : LREAL;
  args         : ARRAY [1..4] OF T_Arg;
  a            : INT := 4567;
  b            : REAL := 3.1415;
  c            : DINT := 7032345;
  d            : USINT := 13;
END_VAR

args[1] := F_INT(a);
args[2] := F_REAL(b);
args[3] := F_DINT(c);
args[4] := F_USINT(d);
sum := F_AddMulti(args);

The array passed to the function must be initialized beforehand. The library Tc2_Utilities provides helper functions that convert a variable into a structure of type T_Arg (F_INT(), F_REAL(), F_DINT(), ...). The function for adding the values has only one input variable of type ARRAY [*] OF T_Arg.

The data type T_Arg is used, for example, in the function block FB_FormatString() or in the function F_FormatArgToStr() of TwinCAT. With the function block FB_FormatString(), up to 10 placeholders in a string can be replaced by values of PLC variables of type T_Arg (similar to fprintf in C).

One advantage of ANY is the fact that this data type is defined by the IEC 61131-3 standard.

Even though the generic data types ANY and T_Arg do not match the feature set of generics in C# or templates in C++, they still support the development of generic functions in IEC 61131-3. These can now be designed so that the same function can be used for different data types and data structures.

Manfred Steyer: Seamlessly Updating your Angular Libraries with the CLI, Schematics and ng update


Table of Contents

This blog post is part of an article series.


Thanks a lot to Hans Larsen from the Angular CLI team for reviewing this article.

Updating libraries within your npm/yarn-based project can be a nightmare. Once you've dealt with all the peer dependencies, you have to make sure your source code doesn't run into breaking changes.

The new command ng update provides a remedy: It goes through all updated dependencies -- including the transitive ones -- and calls schematics to update the current project for them. Together with ng add described in my blog article here, it is the foundation for an ecosystem allowing a more frictionless package management.

In this post, I'm showing how to make use of ng update within an existing library by extending the simple logger used in my article about ng add.

If you want to look at the completed example, you can find it in my GitHub repo.

Schematics is currently an Angular Labs project. Its public API is experimental and can change in the future.

Angular Labs

Introducing a Breaking Change

To showcase ng update, I'm going to modify my logger library here. For this, I'm renaming the LoggerModule's forRoot method to configure:

// logger.module.ts
[...]
@NgModule({ [...] })
export class LoggerModule {
  // Old:
  // static forRoot(config: LoggerConfig): ModuleWithProviders {

  // New:
  static configure(config: LoggerConfig): ModuleWithProviders {
    [...]
  }
}

As this is just an example, please see this change just as a proxy for all the other breaking changes one might introduce with a new version.

Creating the Migration Schematic

To adapt existing projects to my breaking change, I'm going to create a schematic for it. It will be placed into a new update folder within the library's schematics folder:

Folder update for new schematic

This new folder gets an index.ts with a rule factory:

import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';

export function update(options: any): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    _context.logger.info('Running update schematic ...');

    // Hardcoded path for the sake of simplicity
    const appModule = './src/app/app.module.ts';

    const buffer = tree.read(appModule);
    if (!buffer) return tree;
    const content = buffer.toString('utf-8');

    // One more time, this is for the sake of simplicity
    const newContent = content.replace('LoggerModule.forRoot(', 'LoggerModule.configure(');
    tree.overwrite(appModule, newContent);

    return tree;
  };
}

For the sake of simplicity, I'm taking two shortcuts here. First, the rule assumes that the AppModule is located in the file ./src/app/app.module.ts. While this might be the case in a traditional Angular CLI project, one could also use a completely different folder structure. One example is a monorepo workspace containing several applications and libraries. I will present a solution for this in another post, but for now, let's stick with this simple solution.

To simplify things further, I'm directly modifying this file using a string replacement. A safer way to change existing code is to use the TypeScript Compiler API. If you're interested in this, you'll find an example in my blog post here; a small sketch of the idea also follows below.
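Just to sketch the idea (this is an illustrative snippet, not the code from that post): instead of a global string replacement, one could parse the file and locate the exact LoggerModule.forRoot call sites:

import * as ts from 'typescript';

// Returns the text spans of all `LoggerModule.forRoot` expressions,
// so a rule could rewrite exactly those spans to `LoggerModule.configure`.
export function findForRootCalls(fileName: string, content: string): { start: number, end: number }[] {
  const source = ts.createSourceFile(fileName, content, ts.ScriptTarget.Latest, true);
  const spans: { start: number, end: number }[] = [];

  const visit = (node: ts.Node) => {
    if (ts.isCallExpression(node)
        && ts.isPropertyAccessExpression(node.expression)
        && node.expression.name.text === 'forRoot'
        && ts.isIdentifier(node.expression.expression)
        && node.expression.expression.text === 'LoggerModule') {
      spans.push({ start: node.expression.getStart(source), end: node.expression.getEnd() });
    }
    ts.forEachChild(node, visit);
  };

  visit(source);
  return spans;
}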

Configuring the Migration Schematic

To configure migration schematics, let's follow the advice from the underlying design document and create a separate collection. This collection is described by a migration-collection.json file:

Collection for migration schematics

For each migration, it gets a schematic. The name of this schematic doesn't matter, but the version property does:

{ "schematics": { "migration-01": { "version": "4", "factory": "./update/index#update", "description": "updates to v4" } } }

This collection tells the CLI to execute the current schematic when migrating to version 4. Let's assume we had such a schematic for version 5 too. If we migrated directly from version 3 to 5, the CLI would execute both.

Instead of just pointing to a major version, we could also point to a minor or a patch version using version numbers like 4.1 or 4.1.1.

We also need to tell the CLI that this very file describes the migration schematics. For this, let's add an ng-update entry point to our package.json. As in our example the package.json located in the project root is used by the library build, we have to modify this one. In other project setups, the library could have a package.json of its own:

[...] "version": "4.0.0", "schematics": "./schematics/collection.json", "ng-update": { "migrations": "./schematics/migration-collection.json" }, [...]

While the known schematics field is pointing to the traditional collection, ng-update shows which collection to use for migration.

We also need to increase the version within the package.json. As my schematic is intended for version 4, I've set the version field to this very version above.

Test, Publish, and Update

To test the migration schematic, we need a demo Angular application using the old version of the logger-lib. Some information about this can be found in my last blog post. This post also describes how to set up a simple npm registry that provides the logger-lib and how to use it in your demo project.

Make sure to use the latest versions of @angular/cli and its dependency @angular-devkit/schematics. When I wrote this up, I used version 6.0.0-rc.4 of the CLI and version 0.5.6 of the schematics package. However, this came with some issues, especially on Windows. Nevertheless, I expect those issues to vanish once we have the final version 6.

To ensure having the latest versions, I've installed the latest CLI and created a new application with it.

Sometimes during testing, it might be useful to install a former or specific version of the library. You can just use npm install for this:

npm install @my/logger-lib@^0 --save

When everything is in place, we can build and publish the new version of our logger-lib. For this, let's use the following commands in the library's root directory:

npm run build:lib
cd dist
cd lib
npm publish --registry http://localhost:4873

As in the previous article, I'm using the npm registry verdaccio, which is available at port 4873 by default.

Updating the Library

To update the logger-lib within our demo application, we can use the following command in its root directory:

```
ng update @my/logger-lib --registry http://localhost:4873 --force
```

The switch force makes ng update proceed even if there are unresolved peer dependencies.

This command npm installs the newest version of the logger-lib and executes the registered migration script. After this, you should see the modifications within your app.module.ts file.

As an alternative, you could also npm install it by hand:

npm i @my/logger-lib@^4 --save

After this, you could run all the necessary migration schematics using ng update with the migrate-only switch:

ng update @my/logger-lib --registry http://localhost:4873 
--migrate-only --from=0.0.0 --force

This will execute all migration schematics to get from version 0.0.0 to the currently installed one. To just execute the migration schematics for a specific (former) version, you could make use of the --to switch:

ng update @my/logger-lib --registry http://localhost:4873 
--migrate-only --from=0.0.0 --to=4.0.0 --force

Jürgen Gutsch: A generic logger factory facade for classic ASP.NET

ASP.NET Core already has this feature. There is an ILoggerFactory to create a logger. You are able to inject the ILoggerFactory into your component (controller, service, etc.) and create a named logger out of it. During testing, you are able to replace this factory with a mock, so you don't test the logger as well and don't have an additional dependency to set up.

Recently we had the same requirement in a classic ASP.NET project, where we use Ninject to enable dependency injection and log4net to log all the stuff we do and all exceptions. One important requirement is a named logger per component.

Creating named loggers

Usually log4net gets created inside the components as a private static instance:

private static readonly ILog _logger = LogManager.GetLogger(typeof(HomeController));

There already is a static factory method to create a named logger. Unfortunately this isn't really testable anymore and we need a different solution.

We could create a bunch of named loggers in advance and register them with Ninject, which obviously is not the right solution. We need a more generic solution. We figured out two different solutions:

// would work well
public MyComponent(ILoggerFactory loggerFactory)
{
    _loggerA = loggerFactory.GetLogger(typeof(MyComponent));
    _loggerB = loggerFactory.GetLogger("MyComponent");
    _loggerC = loggerFactory.GetLogger<MyComponent>();
}
// even more elegant
public MyComponent(
    ILoggerFactory<MyComponent> loggerFactoryA,
    ILoggerFactory<MyComponent> loggerFactoryB)
{
    _loggerA = loggerFactoryA.GetLogger();
    _loggerB = loggerFactoryB.GetLogger();
}

We decided to go with the second approach, which is the simpler solution. This needs a dependency injection container that supports open generics, like Ninject, Autofac or LightCore.

Implementing the LoggerFactory

Using Ninject the binding of open generics looks like this:

Bind(typeof(ILoggerFactory<>)).To(typeof(LoggerFactory<>)).InSingletonScope();

This binding creates an instance of LoggerFactory<T> using the requested generic argument. If I request for an ILoggerFactory<HomeController>, Ninject creates an instance of LoggerFactory<HomeController>.

We register this as a singleton to reuse the ILog instances, just as we would when creating the ILog instance in a private static variable the usual way.

The implementation of the LoggerFactory is pretty easy. We use the generic argument to create the log4net ILog instance:

public interface ILoggerFactory<T>
{
	ILog GetLogger();
}

public class LoggerFactory<T> : ILoggerFactory<T>
{
    private ILog _logger;
    public ILog GetLogger()
    {
        if (_logger == null)
        {
            _logger = LogManager.GetLogger(typeof(T));
        }
        return _logger;
    }
}

We need to ensure an existing logger is reused instead of creating a new one every time. Because Ninject creates a new instance of the LoggerFactory per generic argument, the LoggerFactory doesn't need to care about the different loggers. It just stores a single specific logger.

Conclusion

Now we are able to create one or more named loggers per component.

What we cannot do with this approach is create individually named loggers using a specific string as a name. A type is needed that gets passed as the generic argument. So every time we need an individually named logger, we need to create a specific type. In our case this is not a big problem.

If you don't like to create types just to get individually named loggers, feel free to implement a non-generic LoggerFactory with a generic GetLogger method as well as a GetLogger overload that accepts strings as logger names.

Jürgen Gutsch: Creating Dummy Data Using GenFu

Two years ago I already wrote about playing around with GenFu, and I still use it now, as mentioned in that post. When I do a demo, or when I write blog posts and articles, I often need dummy data, and I use GenFu to create it. But every time I use it in a talk or a demo, somebody still asks me a question about it.

Actually, I had really forgotten about that blog post and decided to write about it again this morning because of the questions I got. Almost accidentally, I stumbled upon this "old" post.

I won't create a new one. No worries ;-) Because of the questions I just want to push this topic a little bit to the top:

Playing around with GenFu

GenFu on GitHub

PM> Install-Package GenFu

Read about it, grab it and use it!

It is one of the most time saving tools ever :)

Holger Schwichtenberg: The Windows Update Endless Loop and Microsoft Support

Windows 10 update 1709 will not install, and Microsoft Support has no solution either, or rather does not put much effort into finding one.

Christina Hirth: Colleague Bashing – Surprise, It Doesn't Help!

At every conference my colleagues and I attend, the topic of team culture pops up sooner or later as the reason for many/all problems. When we describe how we work, we inevitably end up with the statement that "a self-organized, cross-functional organization without a boss who has the final say is naive and not realistic". "You surely have a boss somewhere, you just don't know it!" was one of the most outlandish answers we heard recently, simply because the person we were talking to was unable to process this picture: 5 self-organized teams, without bosses, without a CTO, without project managers, without any rules and requirements dumped on us from the outside, without deadlines we would submit to without objection. Instead, with self-imposed deadlines, with budgets, with freedom and responsibility in equal measure.

Ich spreche jetzt hier nicht vom Gesetz und von der Papierform: natürlich haben wir in der Firma einen CTO, einen Head of Development, einen CFO, sie entscheiden nur nicht wann, was und wie wir etwas tun. Sie definieren die Rahmen, in der die Geschäftsleitung in das Produkt/Vorhaben investiert, aber den Rest tun wir: POs und Scrum Master und Entwickler, gemeinsam.

Wir arbeiten seit mehr als einem Jahr in dieser Konstellation und wir können noch 6 Monate Vorlaufzeit dazurechnen, bis wir in der Lage waren, dieses Projekt auf Basis von Conways-Law zu starten.

“Organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations.” [Wikipedia]

Conversely (and freely translated) this means: the way your organization is, is the way your product and your code will be structured. So we worked on our organization. The goal was to build a responsible team that is free to dream, in order to build a new, great product without imposed shackles.

We now have this team, we are now living this dream – which of course also has its dark sides; after all, life is no pony farm :). The difference is: they are our problems, and we don't shy away from them, we solve them together.

Before you say "that's a lucky coincidence, it normally doesn't happen", I would disagree. It didn't just happen to us either; we worked on it (for about six months) and continue to do so. The trick, the key to this kind of organization, is an open feedback culture.

What does that mean, and how did we achieve it?

  • We learned to give and to take feedback – yes, that is not so easy. These are the rules:
    • All statements are subjective: "Yesterday, when I did the review, I saw this and that. I think that is not good enough/is risky for the following reasons. I could imagine that doing it this way or that way could get us to the goal faster." You'll notice: never say YOU, everything in the first person, without preconceived opinions or assumptions.
    • All statements come with concrete examples. Statements like "I believe, I have the feeling, etc." are opinions, not facts. You have to find an example, otherwise the feedback is not "admissible".
    • Feedback is always phrased constructively. It doesn't help to say what is bad; it is much more important to say what should be worked on, e.g. "I know from my own experience that pair programming is very helpful in such cases."
    • The person receiving the feedback has to listen to it without justifying herself. She has to decide for herself what to do with the feedback. Anyone who wants to improve will try to take this feedback to heart and work on themselves. You don't have to prescribe that!
  • One-on-ones: these are feedback rounds between two people in a team, at first with the Scrum Master until people got used to the phrasing (in the beginning we laughed the whole idea off), and later just the pairs. Each round goes only in one direction (only one person gets feedback), and a week later, for example, in the other direction. The result is that by now we don't schedule these anymore; we do it automatically, every time there is something to give feedback on.
  • Team feedback: this is the last stage and follows the same rules. It is held not only between teams but also between groups/guilds, such as the POs or the architecture owners.

That's it. For over a year now I haven't heard sentences like "it was the idiots from the other team who screwed everything up", or "they won't get it done anyway", or "why should I care, they checked in the bug". And this working atmosphere gives you wings! (sorry for the copyright infringement 😉)

Code-Inside Blog: Did you know that you can run ASP.NET Core 2 under the full framework?

This post might be obvious for some, but I really struggled with this a couple of months ago, and I'm not sure if a Visual Studio update fixed the problem for me or if I was just blind…

The default way: Running .NET Core

AFAIK the framework dropdown in the normal Visual Studio project template selector (the first window) is not important and doesn’t matter anyway for .NET Core related projects.

When you create a new ASP.NET Core application you will see something like this:

[Screenshot: the "New ASP.NET Core Web Application" dialog]

The important part for the framework selection can be found in the upper left corner: .NET Core is currently selected.

When you continue your .csproj file should show something like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.5" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.2" />
  </ItemGroup>

</Project>

Running the full framework:

I had some trouble finding the option, but it's really obvious. You just have to adjust the selected framework in the second window:

[Screenshot: the same dialog with .NET Framework selected]

After that your .csproj has the needed configuration.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net461</TargetFramework>
  </PropertyGroup>
  
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="2.0.1" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.0.2" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.ViewCompilation" Version="2.0.2" PrivateAssets="All" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="2.0.1" />
    <PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="2.0.1" />
  </ItemGroup>
  
  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.2" />
  </ItemGroup>
</Project>

The biggest change: when you run under the full .NET Framework, you can't use the "All" meta-package, because with version 2.0 that package is still .NET Core only, and you need to reference each package manually.

Easy, right?

Be aware: with ASP.NET Core 2.1 the meta-package story for the full framework might get easier.

I’m still not sure why I struggled to find this option… Hope this helps!

Jürgen Gutsch: Running and Coding

I wasn't really sporty until two years ago, but I was active anyway. I was also forced to be active with three little kids and a sporty and lovely woman. But anyway, a job where I mostly sit in a comfortable chair, as well as great food and good southern German beers, also did their work. When I first met my wife, I weighed around 80 kg, which is fine for my height of 178 cm. But my weight increased up to 105 kg until Christmas 2015. That was way too much, I thought. Until then I had always tried to reduce it by doing some more cycling, more hiking and some gym, but it didn't really work out well.

Anyway, there is no more effective way to lose weight than running. By the way, it is three times more effective than cycling. I tried it a lot in the past, but it pretty much hurts in the lower legs, and I stopped it more than once.

Running the agile way

I tried it again around Easter 2016 in a slightly different way, and it worked. I tried to do it the same way as in a perfect software project:

I did it in an agile way, using pretty small goals to get as much success as possible.

I also bought myself a fitness watch to count steps, calories and levels, and to measure the heart rate while running, to get some more challenges. At the same time I changed my diet a lot.

It sounds weird and funny, but it worked really well. I lost 20Kg since then!

I think it was important not to set huge goals. I just wanted to lose 20 kg. I didn't set a time limit or anything like that.

I knew it hurts in the lower legs while running. I started to learn a lot about running and the different styles of running. I chose easy running, which works pretty well with natural running shoes and barefoot shoes. This also worked well for me.

Finding time to run

Finding time was the hardest thing. In the past I always thought I was too busy to run. I discussed it a lot with the family, and we figured out that the best time to run was during lunch time, because I need to walk the dog anyway, and this was also an option to run with the dog. This was also a good thing for our huge dog.

Running at lunch time has another advantage: I get the brain cleared a little bit after four to five hours of work. (Yes, I usually start between 7 and 8 in the morning.) Running is great when you are working on software projects with a huge level of complexity. Unfortunately, when I'm working in Basel I cannot go running, because there is no shower available. But I'm still able to run three to four times a week.

Starting to run

The first runs were a real pain. I just chose a small lap of 2.5 km, because I needed to learn running as the first step. Also, because of the pain in the lower legs, I chose to run shorter tracks up-hill. Why up-hill? Because this is more exhausting than running on level ground. So I had short up-hill running phases and longer quick-walking phases. Just a few runs later, the running phases started to get longer and longer.

This was the first success, just a few runs later. That was great. It was even greater when I finished my first kilometer after 1.5 months of running every second day. That was amazing.

On every run there was a success, and that really pushed me. But I not only succeeded at running, I also started to lose weight, which pushed me even more. So the pain wasn't too hard and I continued running.

Some weeks later I ran the entire lap of 2.5 km. I was running the whole lap, not really fast, but without a walking pause. Some more motivation.

I continued running just this 2.5km for a few more weeks to get some success on personal records on this lap.

Low carb

I mentioned the change in food. I switched to a low-carb diet, which is in general a way to reduce the consumption of sugar: every kind of sugar, which also means bread, potatoes, pasta, rice and corn. In the first phase of three months I almost completely stopped eating carbs. After that phase, I started to eat a little of them again. I also had one cheat day per week when I was able to eat the normal way.

After 6 Months of eating less carbs and running, I lost around 10Kg, which was amazing and I was absolutely happy with this progress.

Cycling as a compensation

As already mentioned I run every second day. The days between I used my new mountain bike to climb the hills around the city where I live. Actually, it really was a kind of compensation because cycling uses other parts of the legs. (Except when I run up-hill).

Using my smart watch, I was able to measure that running burns on average three times more calories per hour than cycling in the same time. This measurement was done on me only and cannot be applied to any other person, but it actually makes sense to me.

Unfortunately, cycling during the winter was a different kind of pain. It hurt the face, the feet and the hands. It was too cold, so I stopped when the temperature was lower than 5 degrees.

Extending the lap

After a few weeks of running the entire 2.5 km, I increased the length to 4.5 km. This was more exhausting than expected. Two more kilometers need a completely new kind of training. I had to force myself not to run too fast at the beginning and to start managing my energy. Again I started slowly and used some walking pauses to get the whole lap done. During the next months the walking pauses decreased more and more, until I didn't need a walking pause on this lap anymore.

The first official run

Nine months later I wanted to challenge myself a little bit and attended my first public run. It was a New Year's Eve run. Pretty cold, but unexpectedly a lot of fun. I was running with my brother, which was a good idea. The atmosphere before and during the run was pretty special and I still like it a lot. I got three challenges done during this run: I reached the finish (1), I wasn't the last one to pass the finish line (2), and I also got a new personal record on the 5 km (3). That was great.

That was one year and three months ago. I did exactly the same run again last New Year's Eve and got a new personal record, was faster than my brother and reached the finish. Amazing. More success to push myself.

The first 10km

During the last year I increased the number of kilometers and attended some more public runs. In September 2017 I finished my first public 10 km run. Even more success to push me forward.

I didn't increase the number of kilometers fast, just kilometer by kilometer. I trained for one to three months at a given distance and then added some more kilometers. Last spring I started to do a longer run at the weekends, just because I had the time to do it. On workdays it doesn't make sense to run more than 7 km, because this would also increase the time needed for the lunch break. I try to use just one hour for the lunch run, including the shower and changing clothes.

Got it done

Last November I got it done: I had actually lost 20 kg since I started to run. This was really great. It was a great thing to see a weight of less than 85 kg.

Conclusion

How did running change my life? It changed it a lot. I cannot really live without running for more than two days. I get really nervous then.

Do I feel better since I started running? Because of the sport I am more tired than before, I have muscle aches, and I also had two sports injuries. But I'm much more relaxed, I think. Physically it feels bad most of the time, but in a weird, positive way, because I feel I've done something.

Also, some annoying work gets done more easily. I'm really looking forward to the next lunch break, to run the six or seven kilometers with the dog, or to ride the bike up and down the hills and get my brain cleared up.

I'm running in almost every weather, except when it is too slippery because of ice or snow. Fresh snow is fine, mud is fun, rain I don't feel anymore, sunny is even better, and heat is challenging. Only the dog doesn't love warm weather.

Crazy? Yes, but I love it.

So, do you want to follow me on Strava?

Jürgen Gutsch: Why I use paket now

I never really had any major problem using the NuGet client. Reading the Twitter timeline, it seems I am the only one without problems. But depending on what dev process you like to use, there can be a problem. This is not really NuGet's fault, but this process makes the usage of NuGet a little more complex than it should be.

As mentioned in previous posts, I really like to use Git Flow and the clear branching structure. I always have a production branch, which is the master. It contains the sources of the version which is currently in production.

In my projects I don't need to care about multiple versions installed on multiple customer machines. Usually, as a web developer, you only have one production version installed somewhere on a web server.

I also have a next-version branch, which is the develop branch. It contains the version we are currently working on. Besides this, we can have feature branches, hotfix branches, release branches and so on. Read more about Git Flow in this pretty nice cheat sheet.

The master branch gets compiled in release mode and uses a semantic version like this: (breaking).(feature).(patch). The develop branch gets compiled in debug mode and has a version number that tells NuGet that it is a preview version: (breaking).(feature).(patch)-preview(build), where build is the build number generated by the build server.

The actual problem

We use this versioning, build and release process for web projects and shared libraries. And with those shared libraries it starts to get complicated using NuGet.

Some of the shared libraries are used in multiple solutions and shared via a private NuGet feed, which is a common way, I think.

Within the next version of a web project we also use the next versions of the shared libraries to test them. In the current versions of the web projects we use the current versions of the shared libraries. Makes kinda sense, right? If we do a new production release of a web project, we need to switch back to the production version of the shared libraries.

In the solution's packages folder, NuGet creates package sub-folders containing the version number, and the projects reference the binaries from those folders. Changing the library versions therefore means using the UI or changing the packages.config AND the project files, because the reference path contains the version information.

Maybe switching the versions back and forth doesn't really make sense in most cases, but this is also the way I try out new versions of the libraries. In this special case, we have to maintain multiple ASP.NET applications which use multiple shared libraries, which in turn depend on different versions of external data sources. So a preview release of an application also goes to a preview environment with a preview version of a database, and therefore it needs to use the preview versions of the needed libraries. While releasing new features or hotfixes, it might happen that we need to do a release without updating the production environments and the production databases. So we need to switch the dependencies back to the latest production version of the libraries.

Paket solves it

Paket, on the other hand, only supports one package version per solution, which makes a lot more sense. This means Paket doesn't store the packages in a sub-folder with a version number in its name. Changing the package versions is easily done in the paket.dependencies file. The reference paths in the project files don't change, and the projects immediately use the other versions after I change the version and restore the packages.
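
For illustration, a paket.dependencies file for the scenario described above could look roughly like this; the feed URL and the package names are placeholders of my own, not taken from the actual projects:

source https://api.nuget.org/v3/index.json
source https://my-private-feed.example.com/nuget

nuget Newtonsoft.Json

// current production version of a shared library
nuget Our.Shared.Library 2.3.1

// for a preview build, the line above gets swapped to something like:
// nuget Our.Shared.Library 2.4.0-preview0042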

Paket is an alternative NuGet client, developed by the amazing F# community.

Paket works well

Fortunately, Paket works well with MSBuild and CAKE. Paket provides MSBuild targets to automatically restore packages before the build starts, and for CAKE there is an add-in to restore Paket dependencies. Because I don't commit Paket to the repository, I use Paket's command line interface directly in CAKE:

Task("CleanDirectory")
	.Does(() =>
	{
		CleanDirectory("./Published/");
		CleanDirectory("./packages/");
	});

Task("LoadPaket")
	.IsDependentOn("CleanDirectory")
	.Does(() => {
		var exitCode = StartProcess(".paket/paket.bootstrapper.exe");
		Information("LoadPaket: Exit code: {0}", exitCode);
	});

Task("AssemblyInfo")
	.IsDependentOn("LoadPaket")
	.Does(() =>
	{
		var file = "./SolutionInfo.cs";		
		var settings = new AssemblyInfoSettings {
			Company = " YooApplications AG",
			Copyright = string.Format("Copyright (c) YooApplications AG {0}", DateTime.Now.Year),
			ComVisible = false,
			Version = version,
			FileVersion = version,
			InformationalVersion = version + build
		};
		CreateAssemblyInfo(file, settings);
	});

Task("PaketRestore")
	.IsDependentOn("AssemblyInfo")
	.Does(() => 
	{	
		var exitCode = StartProcess(".paket/paket.exe", "install");
		Information("PaketRestore: Exit code: {0}", exitCode);
	});

// ... and so on

Conclusion

No process is 100% perfect, not even this one. But it works pretty well in this case. We are able to do releases and hotfixes very fast. The setup of a new project using this process is fast and easy as well.

The whole process of releasing a new version, starting with the command git flow release start ... to the deployed application on the web server, doesn't take more than 15 minutes, depending on the size of the application and the amount of tests to run on the build server.

I just realized this post is not about .NET Core or ASP.NET Core. The problem I described only happens with classic projects and solutions that store the NuGet packages in the solution's packages folder.

Any Questions about that? Do you wanna learn more about Git Flow, CAKE and Continuous Deployment? Just drop me a comment.

Jürgen Gutsch: Recap the MVP Global Summit 2018

Being an MVP has a lot of benefits. Getting free tools, software and Azure credits are just a few of them. The direct connection to the product groups is worth a lot more than all the software. Even more valuable is the fact of being part of an expert community with more than 3700 MVPs from around the world.

In fact, there are a lot more experts outside the MVP community who are also contributing to the communities of the Microsoft-related technologies and tools. Being an MVP also means finding those experts and nominating them so they also get the MVP award.

The biggest benefit of being an MVP is the yearly MVP Global Summit in Redmond. This year Microsoft again invited the MVPs to attend the MVP Global Summit. More than 2000 MVPs and Regional Directors were registered to attend the summit.

I also attended the summit this year. It was my third summit and the third chance to directly interact with the product groups and with other MVPs from all over the world.

The first days in Seattle

My journey to the summit started at Frankfurt airport, where a lot of German, Austrian and Swiss MVPs start their journey and where many more MVPs from Europe change planes. The LH490 and LH491 flights around the summit are called the "MVP planes" because of this. It always feels like a huge yearly school trip.

The flight was great, sunny most of the time, and I had an impressive view over Greenland and Canada:

Greenland

After we arrived at SeaTac, some German MVP friends and I took the train to downtown Seattle. We checked in at the hotels and went for a beer and a burger. This year I decided to arrive one day earlier than in the last years and to stay in downtown Seattle for the first two nights and the last two nights. This was a great decision.

Pike Place Seattle

I spent the nights just a few steps away from Pike Place. I really love the special atmosphere at this place and in this area. There are a lot of small stores, small restaurants, the farmers market and the breweries. Also, the very first Starbucks store is at this place. It's really a special place. Staying there also allowed me to use public transportation, which works great in Seattle.

There is a direct train from the airport to downtown Seattle and an express bus from downtown Seattle to the center of Bellevue, where the conference hotels are located. For those of you who don't want to spend 40 USD or more on Uber, a taxi or a shuttle, the train to Seattle costs 3 USD and the express bus 2.70 USD. Both take around 30 minutes; maybe you have to wait a few minutes in the underground station in Seattle.

The Summit days

After checking in at my conference hotel on Sunday morning, I went to the registration, but it seemed I was pretty early:

Summit Registration

But that impression wasn't really right. Most of the MVPs were already in the queue to register for the conference and to get their swag.

Like in the last years, the summit days were amazing, even if we didn't really learn a lot of really new things in my contribution area. Most of the stuff in my MVP category is open source and openly discussed on GitHub, on Twitter and in the blog posts written by Microsoft. Anyway, we learned about some cool ideas, which I unfortunately cannot write down here, because it is almost all NDA content.

So the most amazing things during the summit are the events and parties around the conference and meeting all the famous MVPs and Microsoft employees. I'm not really a selfie guy, but this time I really needed to take a picture with the amazing Phil "Mister ASP.NET MVC" Haack.

Phil Haack

I was also glad to meet Steve Gordon, Andrew Lock, David Pine, Damien Bowden, Jon Galloway, Damien Edwards, David Fowler, Immo Landwerth, Glen Condron, and many, many more. And of course the German-speaking MVP family from Germany (D), Austria (A) and Switzerland (CH) (aka DACH).

Special Thanks to Alice, who manages all the MVPs in the DACH area.

I was also pretty glad to meet the owner of millions of hats, Mr. Jeff Fritz, in person, who asked me to do a lightning talk in front of many program managers during the summit. Five MVPs were supposed to tell the developer division program managers stories about the worst or the best things about the development tools. I was quite nervous, but it worked out well, mostly because Jeff was super cool. I told a bad story about the usage of Visual Studio 2015 and TFS by a customer with a huge amount of solutions and a lot more VS projects in them. It was pretty weird to also tell Julia Liuson (Corporate Vice President of Visual Studio) about those problems. But she was really nice and asked the right questions.

BTW: The power bank (battery pack) we got from Jeff, after the lightning talk, is the best power bank I ever had. Thanks Jeff.

On Thursday, the last summit day for the VS and dev tools MVPs, there was a hackathon. They provided different topics to work on. There was a table for working with Blazor, another one for some IoT things, one for F#, one for C#, and even VB.NET still seems to be a thing ;-)

My idea was to play around with Blazor, but I wanted to finalize a contribution to the ASP.NET documentation first. Unfortunately this took longer than expected, which is why I left that table and took a place at another one. I fixed an over-localization issue in the German ASP.NET documentation and took care of an issue in LightCore. In LightCore we currently have an open issue regarding some special registrations done by ASP.NET Core. We thought it was because of registrations made after the IServiceProvider was created, but David Fowler told me the provider is immutable and pointed me to the registrations of open generics. LightCore already supports open generics, but implemented the resolution in a wrong way: in case a registration for a list of generics is not found, LightCore should return an empty list instead of null.

It was amazing how fast David Fowler pointed me to the right problem. Those guys are crazy smart. Just a few seconds after I showed him the missing registration, I got the right answer. Glen Condron told me right after how to isolate this issue and test it. Problem found, and I just need to fix it.

Thanks guys :-)

The last days in Seattle

I also spent the last two nights at the same location near Pike Place. Right after the hackathon, I grabbed my luggage at the conference hotel and took the express bus to Seattle again. I had a nice dinner together with André Krämer at the Pike Brewery. The next morning I had an amazingly yummy breakfast in a small restaurant at the Pike Place Market, with a pretty cool morning view of the waterfront. Together with Kostja Klein, I had a cool chat about this and that, INETA Germany and JustCommunity.

The last day usually is also the time to buy some souvenirs for the kids, my lovely wife and the Mexican exchange student who lives in our house. I also finished the blog series about React and ASP.NET Core.

On the last morning in Seattle, I stumbled up Pike Street into the Starbucks to have a small breakfast. It was pretty early at Pike Place:

Pike Place Seattle

Leaving the Seattle area and the summit feels a little bit like leaving a second home.

I'm really looking forward to the next summit :-)

BTW: Seattle isn't about rainy and cloudy weather

Have I already told you, that every time I visited Seattle, it was sunny and warm?

It's because of me, I think.

During the last summits it was sunny whenever I visited downtown Seattle. In summer 2012, I was in a pretty warm and sunny Seattle, together with my family.

This time it was quite warm during the first days. It started to rain when I left Seattle to go to the summit locations in Bellevue and Redmond, and it was sunny and warm again when I moved back to downtown Seattle.

It's definitely because of me, I'm sure. ;-)

Or maybe the rainy cloudy Seattle is a different one ;-)

Topics I'll write about

Some of the topics I'm allowed to write about and I definitely will write about in the next posts are the following:

  • News on ASP.NET Core 2.1
  • News on ASP.NET (yes, it is still alive)
  • New features in C# 7.x
  • Live Share
  • Blazor

Stefan Henneken: IEC 61131-3: The 'Observer' Pattern

The Observer pattern is suitable for applications in which one or more function blocks have to be notified as soon as the state of a certain function block changes. The assignment of the communication participants can be changed at runtime.

In almost every IEC 61131-3 program, function blocks exchange states with each other. In the simplest case, the output of one FB is assigned to an input of another FB.

Pic01

This makes it quite easy to exchange states between function blocks. But this simplicity comes at a price:

Inflexible. The assignment between fbSensor and the three instances of FB_Actuator is hard-coded in the program. A dynamic assignment between the FBs at runtime is not possible.

Fixed dependencies. The data type of the output variable of FB_Sensor must be compatible with the input variable of FB_Actuator. If there is a new sensor block whose output variable is incompatible with the previous data type, the data type of the actuators inevitably has to be adapted as well.

The Task

The following example shows how, with the help of the Observer pattern, the fixed assignment between the communication participants can be avoided. The sensor reads a measured value (e.g. a temperature) from a data source, while the actuator performs actions depending on a measured value (e.g. a temperature control). The communication between the participants should be changeable. If the drawbacks mentioned above are to be eliminated, two fundamental OO design principles help:

  • Identify the areas that remain constant and separate them from those that change.
  • Never program against an implementation, but always against interfaces. The assignment between input and output variables must therefore no longer be hard-coded.

This can be implemented elegantly with the help of interfaces that define the communication between the FBs. There is no longer a fixed assignment of input and output variables, which creates a loose coupling between the participants. A software design based on loose coupling makes it possible to build flexible software systems that cope better with change, because the dependencies between the participants are minimized.

Definition of the Observer Pattern

The Observer pattern provides an efficient communication mechanism between several participants, where one or more participants depend on the state of a single participant. The participant that provides a state is called the subject (FB_Sensor). The participants that depend on this state are called observers (FB_Actuator).

The Observer pattern is often compared to a newspaper subscription service. The publisher is the subject, while the subscribers are the observers. A subscriber has to register with the publisher and may specify during registration which information is desired. The publisher maintains a list of all subscribers. As soon as a new issue is available, the publisher sends the desired information to all subscribers on the list.

This is expressed more formally in the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides:

"The Observer pattern defines a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically."

Implementierung

How the subject obtains its data and how the observer processes the data further will not be covered here.

Observer

The subject notifies the observer of a value change via the Update() method. Since this behavior is the same for all observers, the interface I_Observer is defined, which all observers implement.

The function block FB_Observer also defines a property that returns the current actual value.

Pic02 Pic03

Since the data is exchanged via a method, no further inputs or outputs are necessary.

FUNCTION_BLOCK PUBLIC FB_Observer IMPLEMENTS I_Observer
VAR
  fValue      : LREAL;
END_VAR

Here is the implementation of the Update() method:

METHOD PUBLIC Update
VAR_INPUT
  fValue      : LREAL;
END_VAR
THIS^.fValue := fValue;

and the property fActualValue:

PROPERTY PUBLIC fActualValue : LREAL
fActualValue := THIS^.fValue;

Subject

The subject manages a list of observers. The individual observers can register and unregister via the Attach() and Detach() methods.

Pic04 Pic05

Since all observers implement the interface I_Observer, the list is of type ARRAY [1..Param.cMaxObservers] OF I_Observer. The exact implementation of the observers does not have to be known at this point. Further variants of observers can be created; as long as they implement the interface I_Observer, the subject can communicate with them.

The Attach() method takes the interface pointer to the observer as a parameter. Before it is stored in the list (line 23), it is checked whether it is valid and not already contained in the list.

METHOD PUBLIC Attach : BOOL
VAR_INPUT
  ipObserver              : I_Observer;
END_VAR
VAR
  nIndex                  : INT := 0;
END_VAR

Attach := FALSE;
IF (ipObserver = 0) THEN
  RETURN;
END_IF
// is the observer already registered?
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = ipObserver) THEN
    RETURN;
  END_IF
END_FOR

// save the observer object into the array of observers and send the actual value
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = 0) THEN
    THIS^.aObservers[nIndex] := ipObserver;
    THIS^.aObservers[nIndex].Update(THIS^.fValue);
    Attach := TRUE;
    EXIT;
  END_IF
END_FOR

The Detach() method also takes the interface pointer to the observer as a parameter. If the interface pointer is valid, the list is searched for the observer and the corresponding entry is cleared (line 15).

METHOD PUBLIC Detach : BOOL
VAR_INPUT
  ipObserver     : I_Observer;
END_VAR
VAR
  nIndex         : INT := 0;
END_VAR

Detach := FALSE;
IF (ipObserver = 0) THEN
  RETURN;
END_IF
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = ipObserver) THEN
    THIS^.aObservers[nIndex] := 0;
    Detach := TRUE;
  END_IF
END_FOR

If a state change occurs in the subject, the Update() method is called on all valid interface pointers in the list (line 8). This functionality resides in the private method Notify().

METHOD PRIVATE Notify
VAR
  nIndex          : INT := 0;
END_VAR

FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] <> 0) THEN
    THIS^.aObservers[nIndex].Update(THIS^.fActualValue);
  END_IF
END_FOR

In this example, the subject generates a random value every second and then notifies the observers via the Notify() method.

FUNCTION_BLOCK PUBLIC FB_Subject IMPLEMENTS I_Subject
VAR
  fbDelay                : TON;
  fbDrand                : DRAND;
  fValue                 : LREAL;
  aObservers             : ARRAY [1..Param.cMaxObservers] OF I_Observer;
END_VAR

// creates a random value every second and invokes the update method
fbDelay(IN := TRUE, PT := T#1S);
IF (fbDelay.Q) THEN
  fbDelay(IN := FALSE);
  fbDrand(SEED := 0);
  fValue := fbDrand.Num * 1234.5;
  Notify();
END_IF

There is no statement in the subject that accesses FB_Observer directly. Access always takes place indirectly via the interface I_Observer. An application can be extended with arbitrary observers; as long as they implement the interface I_Observer, no changes to the subject are necessary.

Pic06

Application

The following block is intended to help test the sample program. It creates one subject and two observers. By setting the corresponding helper variables, the two observers can be attached to and detached from the subject at runtime.

PROGRAM MAIN
VAR
  fbSubject               : FB_Subject;
  fbObserver1             : FB_Observer;
  fbObserver2             : FB_Observer;
  bAttachObserver1        : BOOL;
  bAttachObserver2        : BOOL;
  bDetachObserver1        : BOOL;
  bDetachObserver2        : BOOL;
END_VAR

fbSubject();

IF (bAttachObserver1) THEN
  fbSubject.Attach(fbObserver1);
  bAttachObserver1 := FALSE;
END_IF
IF (bAttachObserver2) THEN
  fbSubject.Attach(fbObserver2);
  bAttachObserver2 := FALSE;
END_IF
IF (bDetachObserver1) THEN
  fbSubject.Detach(fbObserver1);
  bDetachObserver1 := FALSE;
END_IF
IF (bDetachObserver2) THEN
  fbSubject.Detach(fbObserver2);
  bDetachObserver2 := FALSE;
END_IF

Sample 1 (TwinCAT 3.1.4022) on GitHub

Optimizations

Subject: Interface or Base Class?

The need for the interface I_Observer is obvious in this implementation: it decouples the accesses to the observers from their implementation.

The interface I_Subject, on the other hand, does not appear necessary here, and indeed it could be omitted. I have provided it anyway, because it keeps the option open to create special variants of FB_Subject. For example, there could be a function block that does not organize the observer list in an array. The methods for attaching and detaching the different observers could then be accessed generically via the interface I_Subject.

The drawback of the interface, however, is that the code for attaching and detaching has to be implemented again each time, even when the application does not require it. Instead, a base class (FB_SubjectBase) for the subject seems more sensible. The management code for the Attach() and Detach() methods can be moved into this base class. If there is a need to create a special subject (FB_SubjectNew), it can inherit from this base class (FB_SubjectBase).

But what if this special function block (FB_SubjectNew) already inherits from another base class (FB_Base)? Multiple inheritance is not possible (multiple interfaces, however, can be implemented).

In this case it makes sense to embed the base class in the new function block, i.e. to create a local instance of FB_SubjectBase.

FUNCTION_BLOCK PUBLIC FB_SubjectNew EXTENDS FB_Base IMPLEMENTS I_Subject
VAR
  fValue              : LREAL;
  fbSubjectBase       : FB_SubjectBase;
END_VAR

The Attach() and Detach() methods can then access this local instance.

The Attach() method:

METHOD PUBLIC Attach : BOOL
VAR_INPUT
  ipObserver          : I_Observer;
END_VAR

Attach := FALSE;
IF (THIS^.fbSubjectBase.Attach(ipObserver)) THEN
  ipObserver.Update(THIS^.fValue);
  Attach := TRUE;
END_IF

The Detach() method:

METHOD PUBLIC Detach : BOOL
VAR_INPUT
  ipObserver		: I_Observer;
END_VAR
Detach := THIS^.fbSubjectBase.Detach(ipObserver);

The Notify() method:

METHOD PRIVATE Notify
VAR
  nIndex              : INT := 0;
END_VAR

FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.fbSubjectBase.aObservers[nIndex] <> 0) THEN
    THIS^.fbSubjectBase.aObservers[nIndex].Update(THIS^.fActualValue);
  END_IF
END_FOR

The new subject thus implements the interface I_Subject, inherits from the function block FB_Base, and can access the functionality of FB_SubjectBase via the embedded instance.

Pic07

Sample 2 (TwinCAT 3.1.4022) on GitHub

Update: Push or Pull Method?

There are two variants of how the observer receives the desired information from the subject:

With the push method, all information is passed to the observer via the Update method. The entire information exchange requires only one method call. In the example, it was always just one variable of data type LREAL that the subject passed along. Depending on the application, however, it can be considerably more data. But not every observer always needs all the information that is passed to it. Furthermore, extensions become more difficult: what happens if the Update() method is extended with additional data? All observers have to be adapted. A remedy in this case is to use a special function block as the parameter. This function block encapsulates all necessary information in properties. If further properties are added later, it is not necessary to adapt the Update method.

If the pull method is implemented, the observer only receives a minimal notification. It then retrieves all the information it needs from the subject itself. For this, however, two conditions have to be met. First, the subject must expose all data as properties. Second, the observer must receive a reference to the subject so that it can access those properties. One possible approach is for the Update method to pass a reference to the subject (i.e. to itself) as a parameter.

Of course, both variants can be combined. The subject exposes all relevant data as properties. At the same time, the Update method can supply a reference to the subject and pass the most important information as a function block. This approach is the classic procedure of numerous GUI libraries.

Tip: If the subject knows little about its observers, the pull method is preferable. If, on the other hand, the subject knows its observers (because there can only be a few different kinds of observers), the push method should be used.

Holger Schwichtenberg: Community Conference in Magdeburg in April, Starting at 40 Euros per Day

The Magdeburg Developer Days are entering their third edition, and this time they run for three days, from April 9 to 11, 2018.

Holger Schwichtenberg: GroupBy Still Doesn't Quite Work in Entity Framework Core 2.1 Preview 1

Aggregate operators such as Min(), Max(), Sum() and Average() work, but Count() does not.
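
To illustrate the kind of query this is about, here is a hedged sketch; the Orders model and the context are hypothetical and not taken from the article:

// Hypothetical model: a DbContext exposing a DbSet<Order>, where Order has CustomerId and Amount.
var statistics = context.Orders
    .GroupBy(o => o.CustomerId)
    .Select(g => new
    {
        CustomerId = g.Key,
        Total = g.Sum(o => o.Amount),   // Sum(), Min(), Max() and Average() are reported to work
        NumberOfOrders = g.Count()      // Count() is the operator that reportedly still causes trouble
    })
    .ToList();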

Manfred Steyer: Custom Schematics - Part IV: Frictionless Library Setup with the Angular CLI and Schematics


Table of Contents

This blog post is part of an article series.


Thanks a lot to Hans Larsen from the Angular CLI team for reviewing this article.

It's always the same: After npm installing a new library, we have to follow a readme step by step to include it into our application. Usually this involves creating configuration objects, referencing css files, and importing Angular Modules. As such tasks aren't fun at all it would be nice to automate this.

This is exactly what the Angular CLI supports beginning with Version 6 (Beta 5). It gives us a new ng add command that fetches an npm package and sets it up with a schematic -- a code generator written with the CLI's scaffolding tool Schematics. To support this, the package just needs to name this schematic ng-add.

In this article, I show you how to create such a package. For this, I'll use ng-packagr and a custom schematic. You can find the source code in my GitHub account.

If you haven't got an overview of Schematics so far, you should look up the well-written introduction in the Angular Blog before proceeding here.

Goal

To demonstrate how to leverage ng add, I'm using an example with a very simple logger library here. It is complex enough to explain how everything works, but not intended for production. After installing it, one has to import it into the root module using forRoot:

[...]
import { LoggerModule } from '@my/logger-lib';

@NgModule({
  imports: [
    [...],
    LoggerModule.forRoot({ enableDebug: true })
  ],
  [...]
})
export class AppModule { }

As you see in the previous listing, forRoot takes a configuration object. After this, the application can get hold of the LoggerService and use it:

[...]
import { LoggerService } from '@my/logger-lib';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  constructor(private logger: LoggerService) {
    logger.debug('Hello World!');
    logger.log('Application started');
  }
}

To prevent the need for importing the module manually and for remembering the structure of the configuration object, the following sections present a schematic for this.

Schematics is currently an Angular Labs project. Its public API is experimental and can change in future.

Angular Labs

Getting Started

To get started, you need to install version 6 of the Angular CLI. Make sure to fetch Beta 5 or higher:

npm i -g @angular/cli@~6.0.0-beta

You also need the Schematics CLI:

npm install -g @angular-devkit/schematics-cli

The above mentioned logger library can be found in the start branch of my sample:

git clone https://github.com/manfredsteyer/schematics-ng-add
cd schematics-ng-add
git checkout start

After checking out the start branch, npm install its dependencies:

npm install

If you want to learn more about setting up a library project from scratch, I recommend the resources outlined in the readme of ng-packagr.

Adding an ng-add Schematic

As we have everything in place now, let's add a schematics project to the library. For this, we just need to run the blank Schematics in the project's root:

schematics blank --name=schematics

This generates the following folder structure:

Generated Schematic

The folder src/schematics contains an empty schematic. As ng add looks for an ng-add schematic, let's rename it:

Renamed Schematic

In the index.ts file in the ng-add folder we find a factory function. It returns a Rule for code generation. I've adjusted its name to ngAdd and added a line for generating a hello.txt.

import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';

export function ngAdd(): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    tree.create('hello.txt', 'Hello World!');
    return tree;
  };
}

The generation of the hello.txt file represents the tasks for setting up the library. We will replace it later with a respective implementation.

As our schematic will be looked up in the collection.json later, we also have to adjust it:

{ "$schema": "../node_modules/@angular-devkit/schematics/collection-schema.json", "schematics": { "ng-add": { "description": "Initializes Library", "factory": "./ng-add/index#ngAdd" } } }

Now, the name ng-add points to our rule -- the ngAdd function in the ng-add/index.ts file.

Adjusting the Build Script

In the current project, ng-packagr is configured to compile the library from our sources into the folder dist/lib. The respective settings can be found within the ngPackage node in the package.json. When I'm mentioning the package.json here, I'm referring to the project root's package.json and not to the generated one in the schematics folder.

To make use of our schematic, we have to make sure it is compiled and copied over to this folder. For the latter task, I'm using the cpr npm package we need to install in the project's root:

npm install cpr --save-dev

In order to automate the mentioned tasks, add the following scripts to the package.json:

[...] "scripts": { [...], "build:schematics": "tsc -p schematics/tsconfig.json", "copy:schematics": "cpr schematics/src dist/lib/schematics --deleteFirst", [...] }, [...]

Also, extend the build:lib script so that the newly introduced scripts are called:

[...] "scripts": { [...] "build:lib": "ng-packagr -p package.json && npm run build:schematics && npm run copy:schematics", [...] }, [...]

When the CLI tries to find our ng-add schematic, it looks up the schematics field in the package.json. By definition it points to the collection.json which in turn points to the provided schematics. Hence, let's add this field to our package.json too:

{ [...], "schematics": "./schematics/collection.json", [...] }

Please note that the mentioned path is relative to the folder lib where ng-packagr copies the package.json over.

Test the Schematic Directly

For testing the schematic, let's build the library:

npm run build:lib

After this, move to the dist/lib folder and run the schematic:

schematics .:ng-add

Testing the ng-add schematic

Even though the output mentions that a hello.txt is generated, you won't find it because when executing a schematic locally it's performing a dry run. To get the file, set the dry-run switch to false:

schematics .:ng-add --dry-run false

After we've seen that this works, generate a new project with the CLI to find out whether our library plays together with the new ng add:

ng new demo-app
cd demo-app
ng add ..\logger-lib\dist\lib

ng add with relative path

Make sure that you point to our dist/lib folder. Because I'm working on Windows, I've used backslashes here. For Linux or Mac, replace them with forward slashes.

When everything worked, we should see a hello.txt.

As ng add is currently not adding the installed dependency to your package.json, you should do this manually. This might change in future releases.

Test the Schematic via an npm Registry

As we know now that everything works locally, let's also check whether it works when we install it via an npm registry. For this, we can for instance use verdaccio -- a very lightweight node-based implementation. You can directly npm install it:

npm install -g verdaccio

After this, it is started by simply running the verdaccio command:

Running verdaccio

Before we can publish our library to verdaccio, we have to remove the private flag from our package.json or at least set it to false:

{ [...] "private": false, [...] }

To publish the library, move to your project's dist/lib folder and run npm publish:

npm publish --registry http://localhost:4873

Don't forget to point to verdaccio using the registry switch.

Now, let's switch over to the generated demo-app. To make sure our registry is used, create an .npmrc file in the project's root:

@my:registry=http://localhost:4873

This entry causes npm to look up each library with the @my scope in our verdaccio instance.

After this, we can install our logger library:

ng add @my/logger-lib

ng add

When everything worked, we should find our library in the node_modules/@my/logger-lib folder and the generated hello.txt in the root.

Extend our Schematic

So far, we've created a library with a prototypical ng-add schematic that is automatically executed when installing it with ng add. As we know that our setup works, let's extend the schematic to setup the LoggerModule as shown in the beginning.

Frankly, modifying existing code in a safe way is a bit more complicated than what we've seen before. But I'm sure, we can accomplish this together ;-).

For this endeavour, our schematic has to modify the project's app.module.ts file. The good news is that this is a common task the CLI performs, and hence its schematics already contain the necessary logic. However, at the time of writing, the respective routines have not been part of the public API, so we have to fork them.

For this, I've checked out the Angular DevKit and copied the contents of its packages/schematics/angular/utility folder to my library project's schematics/src/utility folder. Because those files are subject to change, I've conserved the current state here.

Now, let's add a Schematics rule for modifying the AppModule. For this, move to our schematics/src/ng-add folder and add a add-declaration-to-module.rule.ts file. This file gets an addDeclarationToAppModule function that takes the path of the app.module.ts and creates a Rule for updating it:

import { Rule, Tree, SchematicsException } from '@angular-devkit/schematics';
import { normalize } from '@angular-devkit/core';
import * as ts from 'typescript';
import { addSymbolToNgModuleMetadata } from '../utility/ast-utils';
import { InsertChange } from "../utility/change";

export function addDeclarationToAppModule(appModule: string): Rule {
  return (host: Tree) => {
    if (!appModule) {
      return host;
    }

    // Part I: Construct path and read file
    const modulePath = normalize('/' + appModule);
    const text = host.read(modulePath);
    if (text === null) {
      throw new SchematicsException(`File ${modulePath} does not exist.`);
    }
    const sourceText = text.toString('utf-8');
    const source = ts.createSourceFile(modulePath, sourceText, ts.ScriptTarget.Latest, true);

    // Part II: Find out what to change
    const changes = addSymbolToNgModuleMetadata(
      source, modulePath, 'imports', 'LoggerModule', '@my/logger-lib',
      'LoggerModule.forRoot({ enableDebug: true })');

    // Part III: Apply changes
    const recorder = host.beginUpdate(modulePath);
    for (const change of changes) {
      if (change instanceof InsertChange) {
        recorder.insertLeft(change.pos, change.toAdd);
      }
    }
    host.commitUpdate(recorder);

    return host;
  };
}

Most of this function has been "borrowed" from the Angular DevKit. It reads the module file and calls the addSymbolToNgModuleMetadata utility function copied from the DevKit. This function finds out what to modify. Those changes are applied to the file using the recorder object and its insertLeft method.

To make this work, I had to tweak the copied addSymbolToNgModuleMetadata function a bit. Originally, it imported the mentioned Angular module just by mentioning its name. My modified version has an additional parameter which takes an expression like LoggerModule.forRoot({ enableDebug: true }). This expression is put into the module's imports array.

Even though this just takes some minor changes, the whole addSymbolToNgModuleMetadata method is rather long. That's why I'm not printing it here but you can look it up in my solution.

After this modification, we can call addDeclarationToAppModule in our schematic:

import { Rule, SchematicContext, Tree, chain, branchAndMerge } from '@angular-devkit/schematics';
import { addDeclarationToAppModule } from './add-declaration-to-module.rule';

export function ngAdd(): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    const appModule = '/src/app/app.module.ts';
    let rule = branchAndMerge(addDeclarationToAppModule(appModule));
    return rule(tree, _context);
  };
}

Now, we can test our Schematic as shown above. To re-publish it to the npm registry, we have to increase the version number in the package.json. For this, you can make use of npm version:

npm version minor

After re-building it (npm run build:lib) and publishing the new version to verdaccio (npm publish --registry http://localhost:4873), we can add it to our demo app:

Add extended library

Conclusion

An Angular-based library can provide an ng-add Schematic for setting it up. When installing the library using ng add, the CLI calls this schematic automatically. This innovation has a lot of potential and will dramatically lower the entry barrier for installing libraries in the future.

MSDN Team Blog AT [MS]: New Azure Regions

Microsoft today announced a major expansion of its Azure data centers in Europe. Two new Azure regions in Switzerland were announced, along with two more regions in Germany, and the two completed regions in France went into operation. Unlike the existing ones, the two additional regions in Germany will be part of the international Azure data centers. Dedicated data storage in Germany is possible, but so is using the scalability and resilience in combination with Ireland, the Netherlands, France and soon Switzerland.

In addition to Europe, two regions in the United Arab Emirates were also announced.

Holger Schwichtenberg: First Preview Version of .NET Core 2.1 & Co.

Just barely within the timeframe Microsoft had planned, the first preview version was released on February 27.

Jürgen Gutsch: Running and Coding

I wasn't really sporty before one and a half years, but anyway active. I was also forced to be active with three little kids and a sporty and lovely women. But anyway, a job where I mostly sit in a comfortable chair, even great food and good southern German beers also did its work. When I first met my wife, I had around 80 Kg, what is good for my size of 178cm. But my weight increased up to 105Kg until Christmas 2015. This was way too much I thought. Until then I always tried to reduce it by doing some more cycling, more hiking and some gym, but it didn't really work well.

Anyway, there is no more effective way to lose weight than running. By the way, it is about three times more effective than cycling. I had tried it a lot in the past, but it hurt quite a bit in the lower legs, so I stopped.

Running the agile way

At Easter 2016 I tried it again in a slightly different way, and it worked. I approached it the same way as a well-run software project:

I did it in an agile way, using small goals to collect as many successes as possible.

I also bought a fitness watch to count steps, calories and floors, and to measure my heart rate while running, which gave me some more challenges. At the same time I changed my diet a lot.

It sounds weird and funny, but it worked really well. I have lost 20 kg since then!

I think it was important not to set goals that were too big. I just wanted to lose 20 kg. I didn't set a time limit or anything like that.

I knew that running hurt in my lower legs. So I started to learn a lot about running and the different running styles. I chose easy running, which works pretty well with natural running shoes and barefoot shoes. That worked well for me too.

Finding time to run

Finding time was the hard part. I discussed it a lot with the family and we figured out that the best time to run was lunch time, because I need to walk the dog anyway, so I could run with the dog. That was also a good thing for our huge dog.

Running at lunch time has another advantage: it clears my head a little after four to five hours of work. (Yes, I start between 7 and 8 in the morning.) Running is great when you are working on software projects with a high level of complexity. Unfortunately, when I'm working in Basel I cannot go for a run, because there is no shower available.

Starting to run

The first runs were a real pain. I chose a small lap of 2.5 km, because the first step was to learn how to run at all. Also, because of the pain in the lower legs, I chose to run short uphill stretches. Why? Because that is more exhausting than running on flat ground. So I had short uphill running phases and longer brisk walking phases. Just a few runs later the running phases started to get a little longer and longer.

That was the first success, just a few runs in. It felt great. It was even better when I finished my first full kilometer after one and a half months of running every second day. That was amazing.

There was a success on every run. That really pushed me. And it wasn't only the running that succeeded, I also started to lose weight, which pushed me even more. So the pain wasn't too bad and I kept running.

Some weeks later I ran the entire 2.5 km lap. Not really fast, but I ran the whole lap without a walking break. More motivation.

I kept running just this 2.5 km lap for a few more weeks to collect some personal records on it.

Low carb

I mentioned the change in my diet. I switched to a low-carb diet, which in general means reducing the consumption of sugar in every form, including bread, potatoes, pasta, rice and corn. In the first phase of three months I almost completely stopped eating carbs. After that phase I started eating a little of them again. I also had one cheat day per week, where I ate the normal way.

After six months of eating fewer carbs and running, I had lost around 10 kg, which was amazing. I was absolutely happy with this progress.

Cycling as a compensation

As already mentioned, I run every second day. On the days in between I used my new mountain bike to climb the hills around the city where I live. It really was a compensation, because cycling uses other parts of the legs (except when I run uphill).

Using my smart watch, I was able to measure that running burns about three times more calories per hour than cycling. This measurement only applies to me and cannot be transferred to anyone else, but it makes sense to me.

Unfortunately, cycling during the winter was a different kind of pain. It hurt the face, the feet and the hands. It was simply too cold, so I stopped whenever the temperature dropped below 10 degrees.

Extending the lap

After a few weeks of running the entire 2.5 km, I increased the length to 4.5 km. This was more exhausting than expected. Two more kilometers require a completely different kind of training. I had to force myself not to run too fast at the beginning and to start managing my energy. Again, I started slowly and used some walking breaks to get the whole lap done. Over the next months the walking breaks became fewer and fewer.

The first official run

Nine months later I wanted to challenge myself a little and attended my first public run. It was a New Year's Eve run. Pretty cold, but unexpectedly a lot of fun. I ran with my brother, which was a good idea. The atmosphere before and during the run was quite special and I still like it a lot. I completed two challenges during this run: I reached the finish, and I wasn't the last one to cross the finish line. That was great. I also set a new personal record over 5 km.

That was one year and two months ago. I did exactly the same run again last New Year's Eve, set a new personal record, was faster than my brother and reached the finish. Amazing. More success to push myself.

The first 10km

During the last year I increased the distance and attended some more public runs. In September 2017 I finished my first public 10 km run. Even more success to push me forward.

I didn't increase the distance quickly, just one kilometer at a time. I trained for one to three months at each distance and then added some more. Last spring I started doing a longer run on the weekend, simply because I had the time. On workdays it doesn't make sense to run more than 7 km, because that would also stretch the lunch break. I try to keep the lunch run to one hour, including the shower and changing clothes.

Got it done

Last November I got it done: I had actually lost 20 kg since I started running. That was really great. It was a great feeling to see a weight below 85 kg.

Conclusion

How did running change my life? It changed it a lot. I can't really cope with a running break of more than two days; I get really nervous then. Because of the sport I am more tired than before, I have muscle aches and I've had two sports injuries. But I am also much more relaxed, I think.

Even annoying work gets done more easily. I really look forward to the next lunch break, to running the six or seven kilometers with the dog, or to riding the bike up the hills. Now I really enjoy chasing one personal record after another.

I run in almost any weather, except when it is slippery because of ice or snow. Fresh snow is fine, mud is fun, rain I don't even notice anymore, sunshine is even better and heat is challenging. Only the dog doesn't love warm weather.

Crazy? Yes, but I love it.

Jürgen Gutsch: Creating a chat application using React and ASP.NET Core - Part 5

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks.

Intro

In this post I will write about the deployment of the app to Azure App Services. I will use CAKE to build, pack and deploy the apps, both the identity server and the actual app. I will run the build on AppVeyor, which is a free build server for open source projects and works great for projects hosted on GitHub.

I'll not go deep into the AppVeyor configuration; the important topics here are CAKE, Azure and the app itself.

BTW: SignalR moved to its next version in the last few weeks. It is no longer alpha; the current version is 1.0.0-preview1-final. I updated the version in the package.json and in the ReactChatDemo.csproj. The NPM package name also changed from "@aspnet/signalr-client" to "@aspnet/signalr", so I needed to update the import statement in the WebsocketService.ts file as well. After updating SignalR I had to deal with some small breaking changes, which were easily fixed. (Please see the GitHub repo to learn about the changes.)

Setup CAKE

CAKE is a build DSL built on top of Roslyn, so the build scripts are written in C#. CAKE is open source and has a huge community that creates a ton of add-ins for it. It also has a lot of built-in features.

Setting up CAKE is easily done. Just open PowerShell and cd to the solution folder. Then you need to download a PowerShell script that bootstraps the CAKE build and loads further dependencies if needed:

Invoke-WebRequest https://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1

Later on, you run build.ps1 to start your build. The setup is now complete and I can start creating the actual build script.

I created a new file called build.cake. To edit the file it makes sense to use Visual Studio Code, because VS Code also has IntelliSense for CAKE. In Visual Studio 2017 you only get syntax highlighting. Currently I don't know of an add-in for Visual Studio that enables IntelliSense.

My starting point for every new build script is the simple example from the quick start demo:

var target = Argument("target", "Default");

Task("Default")
  .Does(() =>
  {
    Information("Hello World!");
  });

RunTarget(target);

The script then gets started by calling build.ps1 in a PowerShell:

.\build.ps1

If this works, I can start hacking in the CAKE script. The build steps I usually use look like this:

  • Cleaning the workspace
  • Restoring the packages
  • Building the solution
  • Running unit tests
  • Publishing the app
    • In the context of a non-web application this means packaging the app
  • Deploying the app

To deploy the app I use the CAKE Kudu client add-in, which needs some Azure App Service credentials. You get these credentials by downloading the publish profile from the Azure App Service and copying them out of the file. Be careful not to store the secrets in any file that ends up in the repository. I usually store them in environment variables and read them from there. Because I have two apps (the actual chat app and the identity server), I need to do this twice:

#addin nuget:?package=Cake.Kudu.Client

string  baseUriApp     = EnvironmentVariable("KUDU_CLIENT_BASEURI_APP"),
        userNameApp    = EnvironmentVariable("KUDU_CLIENT_USERNAME_APP"),
        passwordApp    = EnvironmentVariable("KUDU_CLIENT_PASSWORD_APP"),
        baseUriIdent   = EnvironmentVariable("KUDU_CLIENT_BASEURI_IDENT"),
        userNameIdent  = EnvironmentVariable("KUDU_CLIENT_USERNAME_IDENT"),
        passwordIdent  = EnvironmentVariable("KUDU_CLIENT_PASSWORD_IDENT");

var target = Argument("target", "Default");

Task("Clean")
    .Does(() =>
          {	
              DotNetCoreClean("./react-chat-demo.sln");
              CleanDirectory("./publish/");
          });

Task("Restore")
	.IsDependentOn("Clean")
	.Does(() => 
          {
              DotNetCoreRestore("./react-chat-demo.sln");
          });

Task("Build")
	.IsDependentOn("Restore")
	.Does(() => 
          {
              var settings = new DotNetCoreBuildSettings
              {
                  NoRestore = true,
                  Configuration = "Release"
              };
              DotNetCoreBuild("./react-chat-demo.sln", settings);
          });

Task("Test")
	.IsDependentOn("Build")
	.Does(() =>
          {
              var settings = new DotNetCoreTestSettings
              {
                  NoBuild = true,
                  Configuration = "Release",
                  NoRestore = true
              };
              var testProjects = GetFiles("./**/*.Tests.csproj");
              foreach(var project in testProjects)
              {
                  DotNetCoreTest(project.FullPath, settings);
              }
          });

Task("Publish")
	.IsDependentOn("Test")
	.Does(() => 
          {
              var settings = new DotNetCorePublishSettings
              {
                  Configuration = "Release",
                  OutputDirectory = "./publish/ReactChatDemo/",
                  NoRestore = true
              };
              DotNetCorePublish("./ReactChatDemo/ReactChatDemo.csproj", settings);
              settings.OutputDirectory = "./publish/ReactChatDemoIdentities/";
              DotNetCorePublish("./ReactChatDemoIdentities/ReactChatDemoIdentities.csproj", settings);
          });

Task("Deploy")
	.IsDependentOn("Publish")
	.Does(() => 
          {
              var kuduClient = KuduClient(
                  baseUriApp,
                  userNameApp,
                  passwordApp);
              var sourceDirectoryPath = "./publish/ReactChatDemo/";
              var remoteDirectoryPath = "/site/wwwroot/";

              kuduClient.ZipUploadDirectory(
                  sourceDirectoryPath,
                  remoteDirectoryPath);

              kuduClient = KuduClient(
                  baseUriIdent,
                  userNameIdent,
                  passwordIdent);
              sourceDirectoryPath = "./publish/ReactChatDemoIdentities/";
              remoteDirectoryPath = "/site/wwwroot/";

              kuduClient.ZipUploadDirectory(
                  sourceDirectoryPath,
                  remoteDirectoryPath);
          });

Task("Default")
    .IsDependentOn("Deploy")
    .Does(() =>
          {
              Information("Your build is done :-)");
          });

RunTarget(target);

To get this script running locally, you need to set each of the environment variables in the current PowerShell session:

$env:KUDU_CLIENT_PASSWORD_APP = "super secret password"
# and so on...

If you only want to test the compile and publish steps, just set the dependency of the Default target to "Publish" instead of "Deploy", as shown below. This way the deploy part will not run, you don't deploy by accident, and you save some time while experimenting.
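
A minimal sketch of that temporary change, assuming the rest of the script stays exactly as above (only the Default target is touched):

Task("Default")
    .IsDependentOn("Publish") // temporarily skip the "Deploy" target while testing locally
    .Does(() =>
          {
              Information("Your build is done :-)");
          });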

Use CAKE in AppVeyor

On AppVeyor the environment variables are set in the UI. Don't set them in the YAML configuration, because there they are not stored securely and everybody can see them.

The simplest appveyor.yml file looks like this:

version: 1.0.0-preview1-{build}
pull_requests:
  do_not_increment_build_number: true
branches:
  only:
  - master
skip_tags: true
image: Visual Studio 2017 Preview
build_script:
- ps: .\build.ps1
test: off
deploy: off
# this is needed to install the latest node version
environment:
  nodejs_version: "8.9.4"
install:
  - ps: Install-Product node $env:nodejs_version
  # write out version
  - node --version
  - npm --version

This configuration only builds the branches listed under branches (here just master). If you use git flow, as I used to do, you would also add the develop branch; otherwise change it to whatever branch you want to build. Tags and all other branches are skipped.

The build image is Visual Studio 2017 (choose the preview image only if you want to try the latest features).

I can switch off tests, because they are run in the CAKE script. The good thing is that the xUnit test output produced by the test runs in CAKE still gets published to the AppVeyor reports. Deploy is also switched off, because it's done in CAKE too.

The last thing that needs to be done is to install the latest Node.js version. Otherwise the pre-installed, pretty outdated version is used. The newer version is needed to download the React dependencies and to run Webpack to compile and bundle the React app.

You could also configure AppVeyor in a way that the build, test and deploy phases call different targets inside CAKE. But this is not really needed and would make the build a little less readable.

If you now push the entire repository to GitHub, you need to go to AppVeyor and set up a new build project by selecting your GitHub repository. A new AppVeyor account is easily created using an existing GitHub account. Once the build project is created, there is nothing more to configure. Just start a new build and see what happens. Hopefully you'll also get a green build like this:

Closing words

This post was finished one day after the Global MVP Summit 2018, on a pretty sunny day in Seattle.

I spent two nights in downtown Seattle before the summit started, and two nights after. Both times it was unexpectedly sunny.

I finish this series with this fifth blog post, having learned a little about React and how it behaves in an ASP.NET Core project. And I really like it. I wouldn't build a full single page application using React, that seems to be much easier and faster with Angular, but I will definitely use React in the future to create rich HTML UIs.

Working with the React ASP.NET Core project in Visual Studio is great. It is good that Webpack is used here, because it saves a lot of time and avoids hacking around in the VS environment.

MSDN Team Blog AT [MS]: Mobile Developer After-Work #17

mobility.builders
Mobile Developer After-Work #17

Progressive Web Apps, Blockchain and Mixed Reality

Wednesday, 21 March 2018, 17:30 – 21:30
Room D, Museumsquartier, 1070 Vienna
Register now!

The tech world is currently dominated by three topics. At #mdaw17 you'll find out what's behind them:

  • How do you build a Progressive Web App in 30 minutes?
  • What do blockchains offer for serious business use cases?
  • Mixed Reality: the latest trends and a practical example from the art world

Agenda

17:00 – 17:30: Registration

17:30 – 17:35: Welcome
Andreas Jakl, FH St. Pölten
Helmut Krämer, Tieto

17:35 – 18:10: A Progressive Web App in 30 minutes (35 min)
Stefan Baumgartner, fettblog.eu
The web as an application platform for mobile devices hasn't quite caught on the way everyone hoped. There have been many attempts, and just as many have disappeared again. Progressive Web Applications are supposed to change that. Behind them stand a number of web standards, a new mindset and, most importantly, a collaborative approach by all browser and device vendors. In this talk we use an example to look at the quickest way to get to a PWA and which technologies it takes.

18:15 – 18:50: Blockchain – proof and trust in the supply chain (35 min)
Sebastian Nigl, Tieto
With companies' growing interest in blockchain technology beyond private finance, the media-hyped topic has also found its way into the business world. Moving from hype to a serious business case, many companies now need to find out how blockchain can actually be integrated into their operational processes. Proof and trust in the supply chain, from procurement to the delivery of the finished product, are areas where blockchain offers a solution. Using FSC certification as an example, the talk shows what a blockchain solution in logistics can look like.

18:50 – 19:10: Break

19:10 – 19:30: Hands-On Mixed Reality (20 min)
Andreas Jakl, FH St. Pölten
With Google ARCore and Apple ARKit, almost all smartphone users will soon carry AR-capable devices with them at all times. On the other side, Windows Mixed Reality makes getting started with VR easier and cheaper than ever. What do the current platforms offer on the technical side? Using short examples, we take a look at developing Mixed Reality apps for the latest platforms with Unity.

19:35 – 20:05: AR in an art context (30 min)
Reinhold Bidner
Augmented reality has become mainstream. Beyond games and product presentations, artists are increasingly experimenting in this area as well. In his talk, Reinhold Bidner will tell us about a few of his first artistic steps with AR: as an independent animation artist, as part of the collective gold extra, and currently as a member of an AR project for the Volkstheater/Volx Margareten called "Vienna - All Tomorrows" (directed by Georg Hobmeier).

20:05: Networking with snacks and drinks

Getting there

You can reach the event directly via the U2 / U3 (Museumsquartier or Volkstheater stations). The event takes place in Room D in Quartier 21 of the Museumsquartier in 1070 Vienna.

Raum D / Museumsquartier Wien

Organization & Partners

Many thanks to our wonderful partners! The event is organized by mobility.builders and supported by FH St. Pölten and Tieto. Microsoft provides the great catering. Special thanks also go to ASIFA Austria for the particularly beautiful event location in the Museumsquartier!

Partners: FH St. Pölten, Tieto, ASIFA Austria, Microsoft

Register for #mdaw17

The number of participants is limited – register for free right away!

MSDN Team Blog AT [MS]: Computer Vision Hackfest in Paris

image

For everyone working with computer vision technologies, we are offering a great opportunity from 18 – 20 April in Paris to continue working on your own computer-vision-based scenario / project together with software engineers from CSE (Commercial Software Engineering). Our experts can support you in building a prototype of your solution.

As part of the hackfest, we will show you how to use Microsoft AI/open source technologies, Azure Machine Learning, Cognitive Services, and other Microsoft offerings to implement your project.

Since the hackfest will be held in English, the further details are in English as well:

As a part of the Hackfest you will:

  • Hear from Engineering and Microsoft Technical teams on the tools and services to build great Computer Vision solutions.
  • Engage in architecture and coding sessions with Microsoft teams
  • Get to code side-by-side with the Engineering teams to help implement your scenario
  • Learn from the other attendees who will work on different scenarios but on the same set of technologies.

What’s required?

  • Bring a specific business problem and a related dataset where Image Classification and/or Object Detection is required.
  • Commit at least 2 members of your development team to attend the Hackfest and code together with the software engineers
  • Your company must have an Azure subscription

What’s provided?

  • Space to work as a team with Microsoft engineers
  • Coffee breaks + Lunch + Afternoon snacks
  • Network connectivity

Logistics

  • The hackfest will happen in Paris, in a venue to be confirmed
  • It will start on Monday, April 16th at 10am and will close on Friday, April 20th at 12pm.
  • The event is invitation-based and free.

 

How do I register myself and my team?

To register or for details about the Computer Vision Hackfest in Paris, please send an e-mail to Gerhard.Goeschl@Microsoft.com. We will then set up a scoping call with you for your project so we can understand it better beforehand and support you better at the hackfest.

MSDN Team Blog AT [MS]: Artificial Intelligence Hackfest in Belgium

image

 

From 9 – 13 April we are hosting an Artificial Intelligence Hackfest in Belgium. For everyone already working on AI, this is a great opportunity to continue working on your own project together with software engineers from CSE (Commercial Software Engineering). Our experts can support you in building a prototype of your AI solution.

As part of the hackfest, we will show you how to use Azure Machine Learning, Cognitive Services, and other Microsoft AI offerings to turn your company data into intelligent insights.

Since the hackfest will be held in English, the further details are in English as well:

As a part of the Hackfest you will:

  • Hear from Engineering and Microsoft Technical Evangelism teams on the tools and services to build great AI solutions.
  • Engage in architecture and coding sessions with Microsoft teams
  • Get to code side-by-side with the Engineering and Technical Evangelist teams to help implement AI scenarios with your apps or sites.

What’s required?

  • Bring a specific business problem and a dataset around which this problem is centered. (briefing document attached)
  • Commit at least 2 members of your development team to attend the Hackfest and work together with the software engineers to build your idea into a working prototype
  • Your company must have an Azure subscription

What’s provided?

  • Network connectivity
  • Coffee breaks + Lunch + Space to work as a team with a Microsoft engineer at the Van Der Valk hotel


To register or for details about the Artificial Intelligence Hackfest in Belgium, please send an e-mail to Gerhard.Goeschl@Microsoft.com.

Jürgen Gutsch: Creating a chat application using React and ASP.​NET Core - Part 4

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks.

Intro

My idea for this app is to split the storage between a store for flexible objects and one for immutable objects. The flexible objects are the users and the users' metadata in this case. The immutable objects are the chat messages.

The messages are stored one by one and will never change. Storing a message doesn't need to be super fast, but reading the messages needs to be as fast as possible. This is why I want to go with Azure Table Storage, one of the fastest storage options on Azure. In the past, at YooApps, we also used it as an event store for CQRS-based applications.

Handling the users doesn't need to be super fast either, because we only handle one user at a time. We don't read all of the users in one go and we don't do batch operations on them. So using SQL storage with IdentityServer4, e.g. an Azure SQL Database, should be fine.

The users who are online will be stored in memory only, which is the third storage. Memory is safe enough in this case, because if the app shuts down, the users need to log on again anyway and the list of online users gets refilled. It's also not really critical if the list of online users is not perfectly in sync with the logged-on users.

This leads into three different storages:

  • Users: Azure SQL Database, handled by IdentityServer4
  • Users online: Memory, handled by the chat app
    • A singleton instance of a user tracker class
  • Messages: Azure Table Storage, handled by the chat app
    • Using the SimpleObjectStore and the Azure Table Storage provider

Setup IdentityServer4

To keep the samples simple, I handle the user logon on the server side only. (I'll go through the SPA logon using React and IdentityServer4 in another blog post.) That means we are validating and using the sender's name on the server side only - in the MVC controller, the API controller and the SignalR hub.

It is recommended to set up IdentityServer4 in a separate web application, and we will do it that way. So I followed the quickstart documentation on the IdentityServer4 web site, created a new empty ASP.NET Core project and added the IdentityServer4 NuGet packages, as well as the MVC package and the StaticFiles package. I first planned to use ASP.NET Core Identity with IdentityServer4 to store the identities, but I changed that to keep the samples simple. For now I only use the in-memory configuration you can see in the quickstart tutorials; I'm still able to switch to ASP.NET Identity or any other custom SQL storage implementation later on. I also copied the IdentityServer4 UI code from the IdentityServer4.Quickstart.UI repository into that project.

The Startup.cs of the IdentityServer project looks pretty clean. It adds IdentityServer to the service collection and uses the IdentityServer middleware. While adding the services, I also add the configuration for the IdentityServer. As recommended and shown in the quickstart, the configuration is wrapped in a Config class, which is used here:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // configure identity server with in-memory stores, keys, clients and scopes
        services.AddIdentityServer()
            .AddDeveloperSigningCredential()
            .AddInMemoryIdentityResources(Config.GetIdentityResources())
            .AddInMemoryApiResources(Config.GetApiResources())
            .AddInMemoryClients(Config.GetClients())
            .AddTestUsers(Config.GetUsers());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        // use identity server
        app.UseIdentityServer();

        app.UseStaticFiles();
        app.UseMvcWithDefaultRoute();
    }
}

The next step is to configure the IdentityServer4. As you can see in the snippet above, this is done in a class called Config:

public class Config
{
    public static IEnumerable<Client> GetClients()
    {
        return new List<Client>
        {
            new Client
            {
                ClientId = "reactchat",
                ClientName = "React Chat Demo",

                AllowedGrantTypes = GrantTypes.Implicit,
                    
                RedirectUris = { "http://localhost:5001/signin-oidc" },
                PostLogoutRedirectUris = { "http://localhost:5001/signout-callback-oidc" },

                AllowedScopes =
                {
                    IdentityServerConstants.StandardScopes.OpenId,
                    IdentityServerConstants.StandardScopes.Profile
                }
            }
        };
    }

    internal static List<TestUser> GetUsers()
    {
        return new List<TestUser> {
            new TestUser
            {
                SubjectId = "1",
                Username = "juergen@gutsch-online.de",
                Claims = new []{ new Claim("name", "Juergen Gutsch") },
                Password ="Hello01!"
            }
        };
    }
    
    public static IEnumerable<ApiResource> GetApiResources()
    {
        return new List<ApiResource>
        {
            new ApiResource("reactchat", "React Chat Demo")
        };
    }

    public static IEnumerable<IdentityResource> GetIdentityResources()
    {
        return new List<IdentityResource>
        {
            new IdentityResources.OpenId(),
            new IdentityResources.Profile(),
        };
    }
}

The client id is called reactchat. I configured both projects, the chat application and the identity server application, to run on specific ports: the chat application runs on port 5001 and the identity server uses port 5002. So the redirect URIs in the client configuration point to port 5001, as sketched below.
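
One way to pin the ports is to set the URLs on the WebHostBuilder. This is only a sketch of that approach; the actual projects may configure the ports via launchSettings.json instead:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

// Program.cs of the chat application (sketch)
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            // the identity server project would use http://localhost:5002 here
            .UseUrls("http://localhost:5001")
            .Build();
}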

Later on we are able to replace this configuration with a custom storage for the users and the clients.

We also need to setup the client (the chat application) to use this identity server.

Adding authentication to the chat app

To add authentication, I need to add some configuration to the Startup.cs. The first thing is to add the authentication middleware to the Configure method. This does all the authentication magic and handles multiple kinds of authentication:

app.UseAuthentication();

Be sure to add this line before the usage of MVC and SignalR. I also put this line before the usage of the StaticFilesMiddleware.

Now I need to add and to configure the needed services for this middleware.

services.AddAuthentication(options =>
    {
        options.DefaultScheme = "Cookies";
        options.DefaultChallengeScheme = "oidc";                    
    })
    .AddCookie("Cookies")
    .AddOpenIdConnect("oidc", options =>
    {
        options.SignInScheme = "Cookies";

        options.Authority = "http://localhost:5002";
        options.RequireHttpsMetadata = false;
        
        options.TokenValidationParameters.NameClaimType = "name";

        options.ClientId = "reactchat";
        options.SaveTokens = true;
    });

We add cookie authentication as well as OpenID Connect authentication. The cookie is used to temporarily store the user's information to avoid an OIDC login on every request. To keep the samples simple, I switched off HTTPS.

I need to specify the NameClaimType, because IdentityServer4 provides the user's name in a claim with a simpler name instead of the long default one.

That's it for the authentication part. We now need to secure the chat. This is done by adding the AuthorizeAttribute to the HomeController, as sketched below. Now the app will redirect to the identity server's login page if we try to access the view served by the secured controller.
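
A minimal sketch of the secured controller (the actual HomeController in the repository may contain more than this):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[Authorize] // every action of this controller now requires an authenticated user
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}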

After entering the credentials, we need to authorize the app to get the needed profile information from the identity server:

Once this is done, we can start using the user's name in the chat. To do this, we need to change the AddMessage method in the ChatHub a little bit:

public void AddMessage(string message)
{
    var username = Context.User.Identity.Name;
    var chatMessage =  _chatService.CreateNewMessage(username, message);
    // Call the MessageAdded method to update clients.
    Clients.All.InvokeAsync("MessageAdded", chatMessage);
}

I removed the magic string with my name in it and replaced it with the username I get from the current Context. Now the chat uses the logged on user to add chat messages:

I'll not go into the user tracker here, to keep this post short. Please follow the GitHub repository to learn more about tracking the online state of the users.

Storing the messages

The idea is to store the messages permanently on the server. The current in-memory implementation doesn't survive a restart of the application: every time the app restarts, the memory gets cleared and the messages are gone. I want to use Azure Table Storage here, because it is pretty simple to use and reading from it is amazingly fast. We need to add another NuGet package to our app for the Azure Storage client.

To encapsulate the Azure Storage access, I will create a ChatMessageRepository that contains the code to connect to the tables.

Let's quickly set up a new storage account on Azure. Log on to the Azure portal and go to the storage section. Create a new storage account and follow the wizard to complete the setup. After that you need to copy the storage credentials ("Account Name" and "Account Key") from the portal. We need them to connect to the storage account later on.

Be careful with the secrets

Never ever store secret information in a configuration or settings file that is checked into the source code repository. With user secrets and the Azure app settings, you don't need to do this anymore.

All secret information and the database connection strings should be stored in the user secrets during development. To set up new user secrets, just right-click the project that needs to use the secrets and choose the "Manage User Secrets" entry from the context menu:

Visual Studio then opens a secrets.json file for that specific project, which is stored somewhere in the current user's AppData folder. You can see the actual location if you hover over the tab in Visual Studio. Add your secret data there and save the file.

The data then gets passed into the app as configuration entries:

// ChatMessageRepository.cs
private readonly string _tableName;
private readonly CloudTableClient _tableClient;
private readonly IConfiguration _configuration;

public ChatMessageRepository(IConfiguration configuration)
{
    _configuration = configuration;

    var accountName = configuration.GetValue<string>("accountName");
    var accountKey = configuration.GetValue<string>("accountKey");
    _tableName = _configuration.GetValue<string>("tableName");

    var storageCredentials = new StorageCredentials(accountName, accountKey);
    var storageAccount = new CloudStorageAccount(storageCredentials, true);
    _tableClient = storageAccount.CreateCloudTableClient();
}

On Azure, every Azure Web App has an app settings section. Configure the secrets there; these settings get passed to the app as configuration items as well. This is the most secure approach for storing the secrets.

Using the table storage

You don't really need to create the actual table using the Azure portal; I do it in code if the table doesn't exist. To do this, I first needed to create a table entity object, which defines the available fields in the Azure Table Storage:

public class ChatMessageTableEntity : TableEntity
{
    public ChatMessageTableEntity(Guid key)
    {
        PartitionKey = "chatmessages";
        RowKey = key.ToString("X");
    }

    public ChatMessageTableEntity() { }

    public string Message { get; set; }

    public string Sender { get; set; }
}

The TableEntity has three default properties: a Timestamp, a RowKey as string and a PartitionKey as string. The RowKey needs to be unique. In a users table, the RowKey could be the user's email address. In our case the chat messages don't have a unique value, so we use a Guid instead. The PartitionKey is not unique and bundles several items into something like a storage unit. Reading entries from a single partition is quite fast, because data inside a partition never gets split across many storage locations; it is kept together. In the current phase of the project it doesn't make sense to use more than one partition. Later on it would make sense to use e.g. one partition key per chat room; a small sketch of this idea follows below.

The ChatMessageTableEntity has one constructor we will use to create a new entity and an empty constructor that is used by the TableClient to create it out of the table data. I also added two properties for the Message and the Sender. I will use the Timestamp property of the parent class for the time shown in the chat window.
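 
Just to illustrate the idea of one partition per chat room mentioned above, an additional constructor could derive the PartitionKey from a room name. This is only a sketch and not part of the current implementation; the chatRoom parameter is hypothetical:

// Sketch: one partition per chat room (not implemented in the current project)
public ChatMessageTableEntity(string chatRoom, Guid key)
{
    // all messages of the same room end up in the same partition
    PartitionKey = $"chatmessages-{chatRoom}";
    RowKey = key.ToString("X");
}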

Add a message to the Azure Table Storage

To add a new message to the Azure Table Storage, I added a new method to the repository:

// ChatMessageRepository.cs
public async Task<ChatMessageTableEntity> AddMessage(ChatMessage message)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();

    var chatMessage = new ChatMessageTableEntity(Guid.NewGuid())
    {
        Message = message.Message,
        Sender = message.Sender
    };

    // Create the TableOperation object that inserts the customer entity.
    TableOperation insertOperation = TableOperation.Insert(chatMessage);

    // Execute the insert operation.
    await table.ExecuteAsync(insertOperation);

    return chatMessage;
}

This method uses the TableClient created in the constructor.

Read messages from the Azure Table Storage

Reading the messages is done using the ExecuteQuerySegmentedAsync method. With this method it is possible to read the table entities in chunks from the Table Storage. This makes sense, because a single request returns at most 1000 table entities. In my case I don't want to load all the data anyway, but only the latest 100:

// ChatMessageRepository.cs
public async Task<IEnumerable<ChatMessage>> GetTopMessages(int number = 100)
{
    var table = _tableClient.GetTableReference(_tableName);

    // Create the table if it doesn't exist.
    await table.CreateIfNotExistsAsync();
    
    string filter = TableQuery.GenerateFilterCondition(
        "PartitionKey", 
        QueryComparisons.Equal, 
        "chatmessages");
    var query = new TableQuery<ChatMessageTableEntity>()
        .Where(filter)
        .Take(number);

    var entities = await table.ExecuteQuerySegmentedAsync(query, null);

    var result = entities.Results.Select(entity =>
        new ChatMessage
        {
            Id = entity.RowKey,
            Date = entity.Timestamp,
            Message = entity.Message,
            Sender = entity.Sender
        });

    return result;
}
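
If you ever need to read more than the first segment, the usual pattern is a loop over the continuation token. This is just a sketch, assuming table is the same reference as above and query is a TableQuery<ChatMessageTableEntity> without a Take limit:

// Sketch: paging through all segments using the continuation token
TableContinuationToken token = null;
var allEntities = new List<ChatMessageTableEntity>();
do
{
    var segment = await table.ExecuteQuerySegmentedAsync(query, token);
    allEntities.AddRange(segment.Results);
    token = segment.ContinuationToken;
} while (token != null);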

Using the repository

In the Startup.cs I changed the registration of the ChatService from Singleton to Transient, because we don't need to store the messages in memory anymore. I also added a transient registration for the IChatMessageRepository:

services.AddTransient<IChatMessageRepository, ChatMessageRepository>();
services.AddTransient<IChatService, ChatService>();

The IChatMessageRepository gets injected into the ChatService. Since the repository is async, I also needed to change the signatures of the service methods a little to support the async calls. The service looks cleaner now:

public class ChatService : IChatService
{
    private readonly IChatMessageRepository _repository;

    public ChatService(IChatMessageRepository repository)
    {
        _repository = repository;
    }

    public async Task<ChatMessage> CreateNewMessage(string senderName, string message)
    {
        var chatMessage = new ChatMessage(Guid.NewGuid())
        {
            Sender = senderName,
            Message = message
        };
        await _repository.AddMessage(chatMessage);

        return chatMessage;
    }

    public async Task<IEnumerable<ChatMessage>> GetAllInitially()
    {
        return await _repository.GetTopMessages();
    }
}

The controller action and the hub method also need to change to support the async calls. It is only a matter of making the methods async, returning Tasks and awaiting the service methods. The controller action is shown here; a sketch of the hub method follows below.

// ChatController.cs
[HttpGet("[action]")]
public async Task<IEnumerable<ChatMessage>> InitialMessages()
{
    return await _chatService.GetAllInitially();
}
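
For completeness, this is roughly what the async variant of the hub method looks like after this change; treat it as a sketch and check the GitHub repository for the actual implementation:

// ChatHub.cs (sketch of the async variant)
public async Task AddMessage(string message)
{
    var username = Context.User.Identity.Name;
    var chatMessage = await _chatService.CreateNewMessage(username, message);
    // Call the MessageAdded method to update the clients.
    await Clients.All.InvokeAsync("MessageAdded", chatMessage);
}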

Almost done

Authentication and storing the messages are done now. What remains for the last step is to add the logged-on user to the UserTracker and to push the new user to the clients. I'll not cover that in this post, because it already has more than 410 lines and more than 2700 words. Please visit the GitHub repository during the next days to learn how I did this.

Closing words

This post, again, wasn't really about React. The authentication is only done on the server side, since this isn't really a single page application.

To finish this post I needed some extra time to get the authentication with IdentityServer4 running. I was stuck on an invalid redirect URL error. In the end it was just a small typo in the RedirectUris property of the IdentityServer client configuration, but it took some hours to find it.

In the next post I will come back a little bit to React and Webpack while writing about the deployment. I'm going to write about automated deployment to an Azure Web App using CAKE, running on AppVeyor.

I'm attending the MVP Summit next week, so the last post of this series will be written and published from Seattle, Bellevue or Redmond :-)

Holger Sirtl: Use Azure Automation for creating Resource Groups despite having limited permissions only

In Azure-related engagements I often observe that Azure users are only assigned the Contributor role on one single Azure Resource Group. In general, the motivation for this is…

  • Users can provision resources within this Resource Group
  • Users are protected from accidentally deleting resources in other Resource Groups (e.g. settings of a Virtual Network, security settings, …)

On the other hand, this approach leads to a few issues:

  • With only having Contributor role on one Resource Group, you cannot create additional Resource Groups (which would be nice to structure your Azure resources).
  • Some Azure Marketplace solutions require being installed into an empty Resource Group. So, if a user has already provisioned resources in “his” Resource Group, he just can’t provision these solutions.

Solution

Azure Automation lets you – among other features – configure PowerShell scripts (called Runbooks) that run under elevated privileges, allowing Contributors to call these Runbooks and perform actions they normally wouldn’t have sufficient permissions for. Such a Runbook can perform the following actions (under the Subscription Owner role):

  1. Create a new Resource Group (name specified by a parameter)
  2. Assign an AD Group (name specified by a parameter) as Contributor to this Resource Group

Necessary steps for implementing the solution

To implement this, the following steps must be taken:

  • Step 1: Create an Azure Automation Account
  • Step 2: Create a Run As Account with sufficient access permissions
  • Step 3: Create and test a new Automation Runbook that creates Resource Groups
  • Step 4: Publish the Runbook

Description of the Solution

Step 1: Create an Azure Automation Account

  • In Azure Portal click on New / Monitoring + Management / Automation

    image

  • Fill in the required parameters and make sure you create a Run As account

    image

  • Confirm with Create.

For more information about creating Azure Automation Accounts see:
https://docs.microsoft.com/en-us/azure/automation/automation-quickstart-create-account

Step 2: Create a Run As Account with sufficient access permissions

  • If you haven’t created the Run As Account during the creation of the Azure Automation Account, create one following this description: https://docs.microsoft.com/en-us/azure/automation/automation-create-runas-account

  • Go to your Azure Automation Account

  • Navigate to Connections / AzureRunAsConnection

    image

  • Now you can see all the information about the service principal that the Runbook will later run under. Copy the Application ID to the clipboard.

    image

  • Navigate to the Subscription. Type “Subscriptions” into the search field of the portal.

    image

    Click on Subscriptions.

  • Select your Subscription. On the Subscription overview page click Access Control (IAM) / Add.

    image

  • Choose the “Owner” role. Paste the Application ID from the clipboard into the Select field and click on the user that is displayed. That is the service principal from your Azure AD.

    image

  • Confirm with Save.

You have now successfully assigned Subscription Owner rights to the Service Principal created for the Azure Automation RunAs-Account the Runbook will run under.

Step 3: Create and test a new Automation Runbook that creates Resource Groups

Go back to your Automation Account.

  • Select Runbooks and click on Add a Runbook.

    image

  • Select Create a new runbook.

    image

    Fill in the requested fields:
    Name: <a name for your runbook>
    Runbook type: PowerShell
    Description: <some description for the runbook>

  • Confirm with Create

In the editor, add the following lines of code:

Param(
 [string]$ResourceGroupName,
 [string]$RGGroupID
)

$connectionName = "AzureRunAsConnection"

try
{
    # Get the connection "AzureRunAsConnection "
    $servicePrincipalConnection=Get-AutomationConnection -Name $connectionName  

    $tenantID = $servicePrincipalConnection.TenantId
    $applicationId = $servicePrincipalConnection.ApplicationId
    $certificateThumbprint = $servicePrincipalConnection.CertificateThumbprint

    "Logging in to Azure..."
    Login-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $tenantID `
        -ApplicationId $applicationId `
        -CertificateThumbprint $certificateThumbprint

    New-AzureRmResourceGroup -Name $ResourceGroupName -Location 'West Europe'

    # Set the scope to the Resource Group created above
    $scope = (Get-AzureRmResourceGroup -Name $ResourceGroupName).ResourceId

    # Assign Contributor role to the group
    New-AzureRmRoleAssignment -ObjectId $RGGroupID -Scope $scope -RoleDefinitionName "Contributor"
}
catch {
   if (!$servicePrincipalConnection)
   {
      $ErrorMessage = "Connection $connectionName not found."
      throw $ErrorMessage
  } else{
      Write-Error -Message $_.Exception
      throw $_.Exception
  }
}


  • Save the Runbook and click on Test Pane.

    image

  • Fill in the two parameter fields (Resource Group Name and Group ID) and click on Start.

    image

This creates the Resource Group and assigns the Contributor role to the specified Azure AD group. You might get an “Authentication_Unauthorized” error message because of a bug that occurs when working with a service principal. You can ignore this message, as the script (according to my tests) does the job.

Step 4: Publish the Runbook

  • Close the test pane and click on Publish.

    image

    Confirm with Yes.

  • That’s it. Users can now go to the Automation Account, select the Runbook and click on Start.

    image

  • This opens the form for entering the parameters. After clicking OK, the Resource Group will be created and the group assignment will be done.

    image

Final Words

One extension to this runbook could be to allow the user to enter a group name instead of the group id. This would require one additional step, an AD lookup. See the code here:

$groupID = (Get-AzureRmADGroup -SearchString $RGGroup).Id
New-AzureRmRoleAssignment -ObjectId $groupID -Scope $scope -RoleDefinitionName "Contributor"

This, however, would require giving the service principal (Run As account) access permissions to the Azure AD. That’s something I wanted to avoid here.

Code-Inside Blog: Windows Fall Creators Update 1709 and Docker Windows Containers

Who shrunk my Windows Docker image?

We started to package our ASP.NET/WCF/Full-.NET Framework based web app into Windows Containers, which we then publish to the Docker Hub.

One day we discovered that one of our new build machines produced Windows Containers only half the size: instead of an 8 GB Docker image we only got a 4 GB Docker image. Nice, right?

The problem with Windows Server 2016

I was able to run the 4 GB Docker image on my development machine without any problems and thought that this might be a great new feature (it is… but!). My boss then told me that he was unable to run it on our Windows Server 2016.

The issue: Windows 10 Fall Creators Update

After some googling around we found the problem: our build machine was running Windows 10 with the most recent “Fall Creators Update” (v1709) (which was a bad idea from the beginning, because if you want to run Docker as a service you need a Windows Server!). The older build machine, which produced the much larger Docker image, was running the normal Creators Update from March(?).

Docker resolves the base images for Windows like this:

Compatibility issue

As it turns out: you can’t run the smaller Docker images on Windows Server 2016. Currently this is only possible with the preview “Windows Server, version 1709” or on the Windows 10 client OS.

Oh… and the new Windows Server is not a simple update to Windows Server 2016, instead it is a completely new version. Thanks Microsoft.

Workaround

Because we need to run our images on Windows Server 2016, we just target the LTSC2016 base image, which will produce 8GB Docker images (which sucks, but works for us).

This post could also go in the RTFM category, because there are some notes about this on the Docker page, but they were quite easy to overlook ;)

MSDN Team Blog AT [MS]: Global Azure Bootcamp, 21 April 2018, Linz

On 21 April 2018, hundreds of workshops on cloud computing and Microsoft Azure will take place around the world as part of the Global Azure Bootcamp. The past years were a great success, so in 2018 we will again be part of it in Austria at the Wissensturm in Linz. In 2017 we clearly broke the 140-participant mark for the first time. We want to top this record this year and hope that many of you will join us again.

The event is 100% from the community for the community. Participation is free of charge; the sponsors cover all costs. A big thank you for that! In recent years, thousands of participants worldwide have confirmed that the Global Azure Bootcamp is a great opportunity to either get started with Azure or to deepen existing knowledge.

Global Azure Bootcamp 2018

More information about talks, speakers and the location, as well as the possibility to register, can be found on our event page. Tickets are limited, so register right away!

To the event page of the Global Azure Bootcamp Austria ...

As an appetizer, here is an excerpt of the topics that will be covered in sessions at the GAB:

  • Machine Learning and Deep Learning
  • Web APIs with GraphQL
  • NoSQL databases with CosmosDB
  • Power BI
  • Microservices with Service Fabric
  • IoT
  • Serverless workflows with Logic Apps
  • and much more...

Martin Richter: Advanced Developers Conference C++ / ADC++ 2018 in Burghausen, 15-16 May 2018

The next Advanced Developers Conference C++ will take place on 15 and 16 May 2018.
Full-day workshops will be held on 14 May.

This time it takes place in Burghausen, the home town of the ppedv team, under the proven leadership of Hannes Preishuber.
With the ppedv team's home advantage, one may be curious about what kind of evening event is planned. These have, at least for me, an almost “legendary” 😉 reputation and have always offered lots of fun and variety.

Interesting guests have been invited again. For me, the well-known book author Nicolai Josuttis particularly stands out from the crowd of speakers. The speaker list features some old and also some new faces.

The main topic will be C++17, and C++20 is already going through standardization.
Besides the (always enjoyable) social event, where you can meet many colleagues, the talks will certainly shed light on the next evolutionary steps of C++.

And given the now rapid update cadence of Visual Studio, we may also learn some interesting news from Microsoft speaker Steve Caroll.

More information at http://www.adcpp.de/2018



Uli Armbruster: Examples of Target Agreements

General

This post is meant as a small aid for everyone who struggles with finding good goals. Anyone who, like us, works toward employees deriving their individual goals from the company strategy themselves, with the manager only playing a supporting role (e.g. providing resources), will first have to communicate and advise a lot.

While one side of successful target agreements is shifting the definition of goals to the employees themselves, the other is making the operational and strategic company goals visible. This naturally goes hand in hand with a continuous conversation about the shared company culture. For example, it becomes difficult to talk about monetary result targets if the key figures are not made available and explained transparently. Not every boss is thrilled when employees know the exact margins of orders.

  • The core of good goals are the S.M.A.R.T. criteria, which will not be discussed further here.
  • The goals must not impair each other. Instead, they should reinforce each other.
  • Achieving a goal should always have a direct or indirect benefit for the company (keyword: return on investment).

 

In a nutshell: employees must know about the company goals and support them. A shared understanding of the modalities and a matching level of information transparency are just as essential as the individual's willingness to participate.

 

Examples of individual goals

  • Community appearances
    • By 31 Dec 2018 the employee gives talks at the user groups in Berlin, Dresden, Leipzig, Chemnitz, Karlsruhe and Luzern
  • Technical articles
    • By 31 Dec 2018 the employee publishes 3 technical articles in trade magazines (e.g. in dotnetpro)
  • Public appearances at conferences
    • By 31 Dec 2018 the employee gives talks or trainings at at least 3 different commercial conferences (e.g. DWX, Karlsruher Entwicklertage, etc.)
  • Commercial workshops
    • By 31 Dec 2018 the employee generates a total revenue of at least €20,000 through workshops
  • Invoicing
    • By 31 Dec 2018 the employee bills at least 800 hours of consulting.
  • Certification

MSDN Team Blog AT [MS]: Register for Microsoft Build – the countdown is on


Microsoft Build takes place this year from May 7 to 9 in Seattle. Registration opens tomorrow, February 15, at 18:00. Time is running out!

Experience shows that the event sells out within a very short time. So if you still need to ask your boss first, you should do so as quickly as possible. ;-)

To register, simply go to the Microsoft Build website: https://microsoft.com/build

To get in the mood, here is a short teaser video:

Jürgen Gutsch: Creating a chat application using React and ASP.​NET Core - Part 3

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

About SignalR

SignalR for ASP.NET Core is a framework to enable Websocket communication in ASP.NET Core applications. Modern browsers already support Websockets, which are part of the HTML5 standard. For older browsers, SignalR provides a fallback based on standard HTTP/1.1. SignalR is basically a server-side implementation based on ASP.NET Core and Kestrel. It uses the same dependency injection mechanism and can be added to the application via a NuGet package. Additionally, SignalR provides various client libraries to consume Websockets in client applications. In this chat application, I use @aspnet/signalr-client loaded via NPM. The package also contains the TypeScript definitions, which makes it easy to use in a TypeScript application like this one.

I already added the SignalR NuGet package in the first part of this blog series. To enable SignalR, I need to add it to the ServiceCollection:

services.AddSignalR();

The server part

In C#, I created a ChatService that will later be used to connect to the data storage. For now it uses a dictionary to store the messages and works with this dictionary. I don't show this service here, because the implementation is not relevant and will change later on. But I use this service in the code I show here. It is mainly used in the ChatController, the Web API controller that loads some initial data, and in the ChatHub, which is the Websocket endpoint for this chat. The service gets injected via dependency injection that is configured in the Startup.cs:

services.AddSingleton<IChatService, ChatService>();

Web API

The ChatController is simple; it just contains GET methods. Do you remember the last posts? The initial data of the logged-on users and the first chat messages were defined in the React components. I moved this to the ChatController on the server side:

[Route("api/[controller]")]
public class ChatController : Controller
{
    private readonly IChatService _chatService;

    public ChatController(IChatService chatService)
    {
        _chatService = chatService;
    }
    // GET: api/<controller>
    [HttpGet("[action]")]
    public IEnumerable<UserDetails> LoggedOnUsers()
    {
        return new[]{
            new UserDetails { Id = 1, Name = "Joe" },
            new UserDetails { Id = 3, Name = "Mary" },
            new UserDetails { Id = 2, Name = "Pete" },
            new UserDetails { Id = 4, Name = "Mo" } };
    }

    [HttpGet("[action]")]
    public IEnumerable<ChatMessage> InitialMessages()
    {
        return _chatService.GetAllInitially();
    }
}

The method LoggedOnUsers simply creates the users list. I will change that once the authentication is done. The method InitialMessages loads the first 50 messages from the faked data storage.

SignalR

The Websocket endpoints are defined in so-called Hubs. One Hub defines one single Websocket endpoint. I created a ChatHub that is the endpoint for this application. The methods in the ChatHub are handler methods that handle incoming messages through a specific channel.

The ChatHub needs to be added to the SignalR middleware:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("chat");
});

In SignalR, the methods in the Hub are the channel definitions and the handlers at the same time, while in Node.js with socket.io you define a channel and explicitly bind a handler to that channel.
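For contrast, here is a minimal socket.io sketch (an assumed Node.js example, not part of this chat application), where the channel name and its handler are bound explicitly:

// assumed socket.io counterpart, for comparison only (npm install socket.io)
import socketio = require('socket.io');

const io = socketio(3000);

io.on('connection', socket => {
    // the channel 'AddMessage' is defined by explicitly binding a handler to it
    socket.on('AddMessage', (message: string) => {
        // broadcast the new message to all connected clients
        io.emit('MessageAdded', message);
    });
});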

The currently used data is still fake data and authentication is not yet implemented. This is why the user's name is hard-coded for now:

using Microsoft.AspNetCore.SignalR;
using ReactChatDemo.Services;

namespace ReactChatDemo.Hubs
{
    public class ChatHub : Hub
    {
        private readonly IChatService _chatService;

        public ChatHub(IChatService chatService)
        {
            _chatService = chatService;
        }

        public void AddMessage(string message)
        {
            var chatMessage = _chatService.CreateNewMessage("Juergen", message);
            // Call the MessageAdded method to update clients.
            Clients.All.InvokeAsync("MessageAdded", chatMessage);
        }
    }
}

This Hub only contains a method AddMessage that gets the actual message as a string. Later on we will replace the hard-coded user name with the name of the logged-on user. Then a new message gets created and also added to the data store via the ChatService. The new message is an object that contains a unique id, the name of the authenticated user, a creation date and the actual message text.

Then the message gets sent to the clients through the Websocket channel "MessageAdded".
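On the client side this object arrives as a plain JSON payload. As a rough sketch, the TypeScript model for it looks like this (the property names follow the client model used later in this series; see the repository for the actual file):

// Models/ChatMessage.ts - sketch of the message shape the client receives
export interface ChatMessage {
    id: number;      // unique id created on the server
    date: Date;      // creation date
    message: string; // the actual message text
    sender: string;  // name of the (currently hard-coded) user
}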

The client part

On the client side, I want to use the socket in two different components, but I want to avoid creating two different Websocket clients. The idea is to create a WebsocketService class that is used in the two components. Usually I would create two instances of this WebsocketService, but this would create two different clients too. So I need to think about dependency injection in React and a singleton instance of that service.

SignalR Client

While googling for dependency injection in React, I read a lot about the fact that DI is not needed in React. I was kinda confused. DI is everywhere in Angular, but it is not necessarily needed in React? There are packages you can load to support DI, but I tried to find another way. And actually there is another way: in ES6 and in TypeScript it is possible to immediately create an instance of an object and to import this instance everywhere you need it.

import { HubConnection, TransportType, ConsoleLogger, LogLevel } from '@aspnet/signalr-client';

import { ChatMessage } from './Models/ChatMessage';

class ChatWebsocketService {
    private _connection: HubConnection;

    constructor() {
        var transport = TransportType.WebSockets;
        let logger = new ConsoleLogger(LogLevel.Information);

        // create Connection
        this._connection = new HubConnection(`http://${document.location.host}/chat`,
            { transport: transport, logging: logger });
        
        // start connection
        this._connection.start().catch(err => console.error(err, 'red'));
    }

    // more methods here ...
   
}

const WebsocketService = new ChatWebsocketService();

export default WebsocketService;

Inside this class the Websocket (HubConnection) client gets created and configured. The transport type needs to be WebSockets. Also, a ConsoleLogger gets added to the client to send log information to the browser's console. In the last line of the constructor, I start the connection and add an error handler that writes to the console. The instance of the connection is stored in a private variable inside the class. Right after the class, I create an instance and export it. This way the instance can be imported in any other file:

import WebsocketService from './WebsocketService'

To keep the Chat component and the Users component clean, I created an additional service class for each of the components. These service classes encapsulate the calls to the Web API endpoints and the usage of the WebsocketService. Please have a look into the GitHub repository to see the complete services; a rough sketch follows below.
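As a minimal sketch (the details may differ from the repository), such a service class wires the component's callback to the shared WebsocketService instance and forwards outgoing messages to it:

// ChatService.ts - simplified sketch, not the complete implementation
import WebsocketService from './WebsocketService';
import { ChatMessage } from './Models/ChatMessage';

export class ChatService {
    constructor(messageAdded: (msg: ChatMessage) => void) {
        // forward the component's handler to the shared Websocket client
        WebsocketService.registerMessageAdded(messageAdded);
    }

    public sendMessage(message: string) {
        // send the chat message through the shared Websocket client
        WebsocketService.sendMessage(message);
    }

    // fetchInitialMessages is shown in the Web API client section below
}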

The WebsocketService contains three methods. One handles incoming messages when a user logs on to the chat:

registerUserLoggedOn(userLoggedOn: (id: number, name: string) => void) {
    // get new user from the server
    this._connection.on('UserLoggedOn', (id: number, name: string) => {
        userLoggedOn(id, name);
    });
}

This is not yet used. I need to add the authentication first.

The other two methods are to send a chat message to the server and to handle incoming chat messages:

registerMessageAdded(messageAdded: (msg: ChatMessage) => void) {
    // get new chat messages from the server
    this._connection.on('MessageAdded', (message: ChatMessage) => {
        messageAdded(message);
    });
}
sendMessage(message: string) {
    // send the chat message to the server
    this._connection.invoke('AddMessage', message);
}

In the Chat component I pass a handler method to the ChatService, and the service passes the handler on to the WebsocketService. The handler then gets called every time a message comes in:

//Chat.tsx
let that = this;
this._chatService = new ChatService((msg: ChatMessage) => {
    this.handleOnSocket(that, msg);
});

In this case the passed-in handler is only an anonymous method, a lambda expression, that calls the actual handler method defined in the component. I need to pass a local variable with the current instance of the Chat component to the handleOnSocket method, because this is not available when the handler is called; it is called outside of the context where it is defined.

The handler then loads the existing messages from the component's state, adds the new message, and updates the state:

//Chat.tsx
handleOnSocket(that: Chat, message: ChatMessage) {
    let messages = that.state.messages;
    messages.push(message);
    that.setState({
        messages: messages,
        currentMessage: ''
    });
    that.scrollDown(that);
    that.focusField(that);
}

At the end, I need to scroll to the latest message and to focus the text field again.
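The two helper methods are not shown in this post. A minimal sketch, assuming the panel and msg element references introduced in the UI post, could look like this:

// Chat.tsx - hedged sketch of the two helpers used above
scrollDown(that: Chat) {
    // scroll the chat panel down to the newest message
    that.panel.scrollTop = that.panel.scrollHeight - that.panel.clientHeight;
}
focusField(that: Chat) {
    // put the cursor back into the message input field
    that.msg.focus();
}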

Web API client

The UsersService.ts and the ChatService.ts both contain a method to fetch the data from the Web API. As preconfigured in the ASP.NET Core React project, I am using isomorphic-fetch to call the Web API:

//ChatService.ts
public fetchInitialMessages(fetchInitialMessagesCallback: (msg: ChatMessage[]) => void) {
    fetch('api/Chat/InitialMessages')
        .then(response => response.json() as Promise<ChatMessage[]>)
        .then(data => {
            fetchInitialMessagesCallback(data);
        });
}

The method fetchLoggedOnUsers in the UsersService looks almost the same. The fetchInitialMessages method gets a callback from the Chat component that receives the ChatMessages. Inside the Chat component this method gets called like this:

this._chatService.fetchInitialMessages(this.handleOnInitialMessagesFetched);
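The UsersService counterpart is not shown in this post. A minimal sketch, assuming a User model with id and name, might look like this:

// UsersService.ts - hedged sketch, see the repository for the actual code
public fetchLoggedOnUsers(fetchLoggedOnUsersCallback: (users: User[]) => void) {
    fetch('api/Chat/LoggedOnUsers')
        .then(response => response.json() as Promise<User[]>)
        .then(data => {
            fetchLoggedOnUsersCallback(data);
        });
}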

The handler then updates the state with the new list of ChatMessages and scrolls the chat area down to the latest message:

handleOnInitialMessagesFetched(messages: ChatMessage[]) {
    this.setState({
        messages: messages
    });

    this.scrollDown(this);
}

Let's try it

Now it is time to try it out. F5 starts the application and opens the configured browser:

This is almost the same view as in the last post about the UI. To be sure React is working, I had a look into the network tab in the browser developer tools:

Here it is: you can see the message history of the Websocket endpoint. The second line displays the message sent to the server, and the third line is the answer from the server containing the ChatMessage object.

Closing words

This post was less easy than the posts before. Not because of the technical part, but because I refactored the client part a little bit to keep the React components as simple as possible. For the functional components, I used regular TypeScript files and not TSX files. This worked great.

I'm still impressed by React.

In the next post I'm going to add authentication to get the logged-on user and to authorize the chat for logged-on users only. I'll also add permanent storage for the chat messages.

Holger Schwichtenberg: Enabling TCP connections for SQL Server via PowerShell

On one machine, the "SQL Server Configuration Manager", which was needed to enable the TCP protocol as a "Client Protocol" for accessing Microsoft SQL Server, would no longer start.

Jürgen Gutsch: Creating a chat application using React and ASP.​NET Core - Part 2

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

Basic Layout

First let's have a quick look into the hierarchy of the React components in the folder ClientApp.

The app gets bootstrapped within the boot.tsx file. This is the first sort of component, where the AppContainer gets created and the router is placed. This file also contains the call to render the React app in the relevant HTML element, which in this case is a div with the ID react-app, defined in the Views/Home/Index.cshtml.
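Stripped down to its essence, the bootstrapping code looks roughly like this (a simplified sketch, not the template's exact file):

// boot.tsx - simplified sketch of the bootstrapping code
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { AppContainer } from 'react-hot-loader';
import { BrowserRouter } from 'react-router-dom';
import { routes } from './routes';

// render the routes, wrapped in the hot-reload container and the router,
// into the div with the ID react-app
ReactDOM.render(
    <AppContainer>
        <BrowserRouter children={routes} />
    </AppContainer>,
    document.getElementById('react-app')
);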

This component also renders the content of the routes.tsx. This file contains the route definitions wrapped inside a Layout element. This Layout element is defined in the layout.tsx inside the components folder. The routes.tsx also references three more components out of the components folder: Home, Counter and FetchData. So it seems the router renders the specific components, depending on the requested path inside the Layout element:

// routes.tsx
import * as React from 'react';
import { Route } from 'react-router-dom';
import { Layout } from './components/Layout';
import { Home } from './components/Home';
import { FetchData } from './components/FetchData';
import { Counter } from './components/Counter';

export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

As expected, the Layout component then defines the basic layout and renders the contents into a Bootstrap grid column element. I changed that a little bit to render the contents directly into the fluid container, and the menu is now outside the fluid container. This component now contains less code than before:

import * as React from 'react';
import { NavMenu } from './NavMenu';

export interface LayoutProps {
    children?: React.ReactNode;
}

export class Layout extends React.Component<LayoutProps, {}> {
    public render() {
        return <div>
            <NavMenu />
            <div className='container-fluid'>
                {this.props.children}
            </div>
        </div>;
    }
}

I also changed the NavMenu component to place the menu on top of the page using the typical Bootstrap styles. (Visit the repository for more details.)
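Just to give an idea, a heavily simplified sketch of such a top navbar could look like this (Bootstrap 3 classes and a react-router Link; the brand text is made up, and the real component in the repository contains more markup):

// NavMenu.tsx - heavily simplified sketch of a top navbar
import * as React from 'react';
import { Link } from 'react-router-dom';

export class NavMenu extends React.Component<{}, {}> {
    public render() {
        return <nav className='navbar navbar-default'>
            <div className='container-fluid'>
                <div className='navbar-header'>
                    <Link className='navbar-brand' to={'/'}>React Chat Demo</Link>
                </div>
            </div>
        </nav>;
    }
}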

My chat goes into the Home component, because this is the most important feature of my app ;-) This is why I removed all the contents of the Home component and placed the layout for the actual chat there.

import * as React from 'react';
import { RouteComponentProps } from 'react-router';

import { Chat } from './home/Chat';
import { Users } from './home/Users';

export class Home extends React.Component<RouteComponentProps<{}>, {}> {
    public render() {
        return <div className='row'>
            <div className='col-sm-3'>
              	<Users />
            </div>
            <div className='col-sm-9'>
                <Chat />
            </div>
        </div>;
    }
}

This component uses two new components: Users to display the online users and Chat to add the main chat functionality. It seems to be a common way in React to store sub-components inside a subfolder with the same name as the parent component. So, I created a Home folder inside the components folder and placed the Users component and the Chat component inside of that new folder.

The Users Component

Let's have a look into the simpler Users component first. This component doesn't have any interaction yet. It only fetches and displays the users online. To keep the first snippet simple, I removed the methods inside. This file imports everything from the module 'react' as the React object. Using this, we are able to access the Component type we need to derive from:

// components/Home/Users.tsx
import * as React from 'react';

interface UsersState {
    users: User[];
}
interface User {
    id: number;
    name: string;
}

export class Users extends React.Component<{}, UsersState> {
    //
}

This base class also defines a state property. The type of that state is defined in the second generic argument of the React.Component base class. (The first generic argument is not needed here.) The state is a kind of container type that contains the data you want to store inside the component. In this case I just need a UsersState with a list of users inside. To display a user in the list we only need an identifier and a name. A unique key or id is required by React to render a list of items in the DOM.

I don't fetch the data from the server side yet. This post is only about the UI components, so I'm going to mock the data in the constructor:

constructor() {
    super();
    this.state = {
        users: [
            { id: 1, name: 'juergen' },
            { id: 3, name: 'marion' },
            { id: 2, name: 'peter' },
            { id: 4, name: 'mo' }]
    };
}

Now the list of users is available in the current state and I'm able to use this list to render the users:

public render() {
    return <div className='panel panel-default'>
        <div className='panel-body'>
            <h3>Users online:</h3>
            <ul className='chat-users'>
                {this.state.users.map(user =>
                    <li key={user.id}>{user.name}</li>
                )}
            </ul>
        </div>
    </div>;
}

JSX is a weird thing: HTML-like XML syntax, completely mixed with JavaScript (or TypeScript in this case), but it works. It reminds me a little bit of Razor. this.state.users.map iterates through the users and renders a list item per user.

The Chat Component

The Chat component is similar, but contains more details and some logic to interact with the user. Initially we have almost the same structure:

// components/Home/chat.tsx
import * as React from 'react';
import * as moment from 'moment';

interface ChatState {
    messages: ChatMessage[];
    currentMessage: string;
}
interface ChatMessage {
    id: number;
    date: Date;
    message: string;
    sender: string;
}

export class Chat extends React.Component<{}, ChatState> {
    //
}

I also imported the moment module, which is moment.js, installed using NPM:

npm install moment --save

moment.js is a pretty useful library to easily work with dates and times in JavaScript. It has a ton of features, like formatting dates, displaying times, creating relative time expressions and it also provides a proper localization of dates.
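For example (not code from the chat app, just a small illustration), formatting a date and creating a relative time expression look like this:

// a short moment.js example, not part of the chat app
import * as moment from 'moment';

const now = new Date();
console.log(moment(now).format('HH:mm:ss')); // e.g. "14:05:33"
console.log(moment(now).fromNow());          // e.g. "a few seconds ago"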

Now it makes sense to have a look into the render method first:

// components/Home/chat.tsx
public render() {
    return <div className='panel panel-default'>
        <div className='panel-body panel-chat'
            ref={this.handlePanelRef}>
            <ul>
                {this.state.messages.map(message =>
                    <li key={message.id}><strong>{message.sender} </strong>
                        ({moment(message.date).format('HH:mm:ss')})<br />
                        {message.message}</li>
                )}
            </ul>
        </div>
        <div className='panel-footer'>
            <form className='form-inline' onSubmit={this.onSubmit}>
                <label className='sr-only' htmlFor='msg'>Message</label>
                <div className='input-group col-md-12'>
                    <button className='chat-button input-group-addon'>:-)</button>
                    <input type='text' value={this.state.currentMessage}
                        onChange={this.handleMessageChange}
                        className='form-control'
                        id='msg'
                        placeholder='Your message'
                        ref={this.handleMessageRef} />
                    <button className='chat-button input-group-addon'>Send</button>
                </div>
            </form>
        </div>
    </div>;
}

I defined a Bootstrap panel that has the chat area in the panel-body and the input fields in the panel-footer. In the chat area we also have an unordered list and the code to iterate through the messages. This is almost similar to the user list; we only display some more data here. Here you can see the usage of moment.js to easily format the message date.

The panel-footer contains the form to compose the message. I used an input group to add a button in front of the input field and another one after that field. The first button is used to select an emoji. The second one is to also send the message (for people who cannot use the enter key to submit the message).

The ref attributes are used for a cool feature: using them, you are able to get an instance of the element in the backing code, which is nice for working with element instances directly. We will see the usage later on. The ref attributes point to methods that get an instance of that element passed in:

msg: HTMLInputElement;
panel: HTMLDivElement;

// ...

handlePanelRef(div: HTMLDivElement) {
    this.panel = div;
}
handleMessageRef(input: HTMLInputElement) {
    this.msg = input;
}

I save the instances in fields of the class. One thing I didn't expect is a weird behavior of this. It is typical JavaScript behavior, but I expected it to be solved in TypeScript; I also didn't see this problem in Angular. The keyword this is not set, it is nothing. If you want to access this in methods used by the DOM, you need to kind of 'inject' or 'bind' an instance of the current object to get this set. This is typical for JavaScript and makes absolute sense. The binding needs to be done in the constructor:

constructor() {
    super();
    this.state = { messages: [], currentMessage: '' };

    this.handlePanelRef = this.handlePanelRef.bind(this);
    this.handleMessageRef = this.handleMessageRef.bind(this);
    // ...
}

This is the current constructor, including the initialization of the state. As you can see, we bind the current instance to those methods. We need to do this for all methods that need to use the current instance.

To get the message text from the text field, we need to bind an onChange method. This method reads the value from the event target:

handleMessageChange(event: any) {
    this.setState({ currentMessage: event.target.value });
}

Don't forget to bind the current instance in the constructor:

this.handleMessageChange = this.handleMessageChange.bind(this);

With this code we get the current message into the state to use it later on. The current state is also bound to the value of that text field, just to clear this field after submitting that form.

The next important event is onSubmit in the form. This event gets triggered by pressing the send button or by pressing enter inside the text field:

onSubmit(event: any) {
    event.preventDefault();
    this.addMessage();
}

This method prevents the default behavior of HTML forms to avoid a reload of the entire page, and calls the method addMessage, which creates the message and adds it to the messages list in the current state:

addMessage() {
    let currentMessage = this.state.currentMessage;
    if (currentMessage.length === 0) {
        return;
    }
    let id = this.state.messages.length;
    let date = new Date();

    let messages = this.state.messages;
    messages.push({
        id: id,
        date: date,
        message: currentMessage,
        sender: 'juergen'
    })
    this.setState({
        messages: messages,
        currentMessage: ''
    });
    this.msg.focus();
    this.panel.scrollTop = this.panel.scrollHeight - this.panel.clientHeight;
}

Currently the id and the sender of the message are faked. Later on, in the next posts, we'll send the message to the server using Websockets and we'll get a message including a valid id back. We'll also have an authenticated user later on. As mentioned, the current post is just about getting the UI running.

We get the currentMessage and the messages list out of the current state. Then we add the new message to the current list and assign a new state with the updated list and an empty currentMessage. Setting the state triggers an event to update the UI. If I just update the fields inside the state, the UI doesn't get notified. It is also possible to only update a single property of the state.

After the state is updated, I need to focus the text field and scroll the panel down to the latest message. This is the only reason why I need the instances of the elements and why I used the ref methods.

That's it :-)

After pressing F5, I see the working chat UI in the browser.

Closing words

With this post, the basic UI is working. This was easier than expected; I only got stuck a little bit when accessing the HTML elements to focus the text field and scroll the chat area, and when I tried to access the current instance using this. React is heavily used and the React community is huge, which is why it is easy to get help pretty fast.

In the next post, I'm going to integrate SignalR and get the Websockets running. I'll also add two Web APIs to fetch the initial data: the currently logged-on users and the latest 50 chat messages don't need to be pushed over the Websocket. For this I need to write the first functional components in React and inject them into the UI components of this post.

Holger Schwichtenberg: Upcoming Talks

The Dotnet-Doktor will again give several public talks in the coming three months. Here is an overview of the dates.

Golo Roden: Introduction to React, Part 6: Code Reuse

React components can be reused, with a distinction made between presentational and functional components. There are different approaches for this, including container components and higher-order components. How do they work?

Jürgen Gutsch: Creating a chat application using React and ASP.​NET Core - Part 1

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This Series is divided into 5 parts, which should cover all relevant topics:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about this topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or creating an issue on GitHub. Thanks.

Requirements

I want to create a small chat application that uses React, SignalR and ASP.NET Core 2.0. The frontend should be created using React. The backend serves a Websocket end-point using SignalR and some basic Web API end-points to fetch some initial data, some lookup data and to do the authentication (I'll use IdentityServer4 to do the authentication). The project setup should be based on the Visual Studio React Project I introduced in one of the last posts.

The UI should be clean and easy to use. It should be possible to use the chat without a mouse. So the focus is also on usability and a basic accessibility. We will have a large chat area to display the messages, with an input field for the messages below. The return key should be the primary method to send the message. There's one additional button to select emojis, using the mouse. But basic emojis should also be available using text symbols.

On the left side, I'll create a list of online users. Every new logged on user should be mentioned in the chat area. The user list should be auto updated after a user logs on. We will use SignalR here too.

  • User list using SignalR
    • small area on the left hand side
    • Initially fetching the logged on users using Web API
  • Chat area using SignalR
    • large area on the right hand side
    • Initially fetching the last 50 messages using Web API
  • Message field below the chat area
    • Enter key should send the message
    • Emojis using text symbols
  • Storing the chat history in a database (using Azure Table Storage)
  • Authentication using IdentityServer4

Project setup

The initial project setup is easy and was already described in one of the last posts. I'll just do a quick introduction here.

You can either use Visual Studio 2017 to create a new project

or the .NET CLI

dotnet new react -n react-chat-app

It takes some time to fetch the dependent packages. Especially the NPM packages are a lot: the node_modules folder contains around 10k files and requires about 85 MB of disk space.

I also added the "@aspnet/signalr-client": "1.0.0-alpha2-final" package to the package.json.

Don't be confused by the documentation. In the GitHub repository they wrote that the NPM name signalr-client should no longer be used and that the new name is just signalr. But when I wrote these lines, the package with the new name was not yet available on NPM. So I'm still using the signalr-client package.

After adding that package, an optional dependency wasn't found and the NPM dependency node in Visual Studio displays a yellow exclamation mark. This is annoying and it seems to be a critical error, but it will work anyway:

The NuGet packages are fine. To use SignalR I used the Microsoft.AspNetCore.SignalR package with the version 1.0.0-alpha2-final.

The project compiles without errors. And after pressing F5, the app starts as expected.

A while ago I configured Edge as the start-up browser to run ASP.NET Core projects, because Chrome got very slow. Once IIS Express or Kestrel is running, you can easily use Chrome or any other browser to call and debug the web app. Which makes sense, since the React developer tools are not yet available for Edge and IE.

That is all there is to set up and configure. All the preconfigured TypeScript and Webpack stuff is fine and runs as expected. If there's no critical issue, you don't really need to know about it; it just works. I would anyway recommend learning about the TypeScript configuration and Webpack, to be safe.

Closing words

Now the requirements are clear and the project is set up. In this series I will not set up an automated build using CAKE. I'll also not write about unit tests. The focus is React, SignalR and ASP.NET Core only.

In the next chapter I'm going to build the React UI components and implement the basic client logic to get the UI working.

Jürgen Gutsch: Another GraphQL library for ASP.​NET Core

I recently read an interesting tweet by Glenn Block about a GraphQL app running on the Windows Subsystem for Linux:

It is impressive to run a .NET Core app on Linux on Windows without a virtual machine. I never had the chance to try that; I've only played a little bit with the Windows Subsystem for Linux. The second thought that came to my mind was: "wow, did he use my GraphQL middleware library or something else?"

He uses different libraries, as you can see in his repository on GitHub: https://github.com/glennblock/orders-graphql

  • GraphQL.Server.Transports.AspNetCore
  • GraphQL.Server.Transports.WebSockets

These libraries are built by the makers of graphql-dotnet. The project is hosted in the graphql-dotnet organization on GitHub: https://github.com/graphql-dotnet/server. They also provide a middleware that can be used in ASP.NET Core projects. The cool thing about that project is a WebSocket endpoint for GraphQL.

What about the GraphQL middleware I wrote?

Because my GraphQL middleware is also based on graphql-dotnet, I'm not yet sure whether to continue maintaining it or to retire the project. I'm not yet sure what to do, but I'll try the other implementation to find out more.

I'm pretty sure the contributors of the graphql-dotnet project know a lot more about GraphQL and their library than I do. Both projects will work the same way and will return the same result - hopefully. The only difference is the API and the configuration. The only reason to continue working on my project is to learn more about GraphQL or to maybe provide a better API ;-)

If I retire my project, I would try to contribute to the graphql-dotnet projects.

What do you think? Drop me a comment and tell me.

Code-Inside Blog: WCF Global Fault Contracts

If you are still using WCF you might have stumbled upon this problem: WCF allows you to throw certain faults in your operations, but unfortunately it is a bit awkward to configure if you want "Global Fault Contracts". With the solution shown here it should be pretty easy to get "global faults":

Define the Fault on the Server Side:

Let’s say we want to throw the following fault in all our operations:

[DataContract]
public class FoobarFault
{

}

Register the Fault

The tricky part in WCF is to "configure" WCF so that it will populate the fault. You can do this manually via the [FaultContract] attribute on each operation, but if you are looking for a global WCF fault configuration, you need to apply it as a contract behavior like this:

[AttributeUsage(AttributeTargets.Interface, AllowMultiple = false, Inherited = true)]
public class GlobalFaultsAttribute : Attribute, IContractBehavior
{
    // this is a list of our global fault detail classes.
    static Type[] Faults = new Type[]
    {
        typeof(FoobarFault),
    };

    public void AddBindingParameters(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint,
        BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyClientBehavior(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint,
        ClientRuntime clientRuntime)
    {
    }

    public void ApplyDispatchBehavior(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint,
        DispatchRuntime dispatchRuntime)
    {
    }

    public void Validate(
        ContractDescription contractDescription,
        ServiceEndpoint endpoint)
    {
        foreach (OperationDescription op in contractDescription.Operations)
        {
            foreach (Type fault in Faults)
            {
                op.Faults.Add(MakeFault(fault));
            }
        }
    }

    private FaultDescription MakeFault(Type detailType)
    {
        string action = detailType.Name;
        DescriptionAttribute description = (DescriptionAttribute)
            Attribute.GetCustomAttribute(detailType, typeof(DescriptionAttribute));
        if (description != null)
            action = description.Description;
        FaultDescription fd = new FaultDescription(action);
        fd.DetailType = detailType;
        fd.Name = detailType.Name;
        return fd;
    }
}	

Now we can apply this ContractBehavior in the Service just like this:

[ServiceBehavior(...), GlobalFaults]
public class FoobarService
...

To use our Fault, just throw it as a FaultException:

throw new FaultException<FoobarFault>(new FoobarFault(), "Foobar happend!");

Client Side

On the client side you should now be able to catch this exception just like this:

try
{
    ...
}
catch (Exception ex)
{
    if (ex is FaultException faultException)
    {
        if (faultException.Action == nameof(FoobarFault))
        {
            ...
        }
    }
}

Hope this helps!

(This old topic was still on my “To-blog” list, even if WCF is quite old, maybe someone is looking for something like this)

Jürgen Gutsch: The ASP.​NET Core React Project

In the last post, I wrote about a first look into a plain, clean and lightweight React setup. I'm still impressed by how easy the setup is and how fast the loading of a React app really is. Before trying to push this setup into an ASP.NET Core application, it makes sense to have a look into the ASP.NET Core React project.

Create the React project

You can either use the "File New Project ..." dialog in Visual Studio 2017 or the .NET CLI to create a new ASP.NET Core React project:

dotnet new react -n MyPrettyAwesomeReactApp

This creates a ready to go React project.

The first impression

At first glance I saw the webpack.config.js, which is cool. I really love Webpack and how it works, how it bundles the relevant files recursively and how it saves a lot of time. Also, a tsconfig.json is available in the project. This means the React code will be written in TypeScript. Webpack compiles the TypeScript into JavaScript and bundles it into an output file called main.js.

Remember: In the last post the JavaScript code was written in ES6 and transpiled using Babel

The TypeScript files are in the folder ClientApp, and the transpiled and bundled Webpack output gets moved to the wwwroot/dist/ folder. This is nice. The build in VS2017 runs Webpack; this is hidden in MSBuild tasks inside the project file. To see more, you need to have a look into the project file by right-clicking the project and selecting Edit projectname.csproj.

You'll then find an ItemGroup that removes the ClientApp folder from the content:

<ItemGroup>
  <!-- Files not to publish (note that the 'dist' subfolders are re-added below) -->
  <Content Remove="ClientApp\**" />
</ItemGroup>

And there are two Targets that contain the definitions for the Debug and Publish builds:

<Target Name="DebugRunWebpack" BeforeTargets="Build" Condition=" '$(Configuration)' == 'Debug' And !Exists('wwwroot\dist') ">
  <!-- Ensure Node.js is installed -->
  <Exec Command="node --version" ContinueOnError="true">
    <Output TaskParameter="ExitCode" PropertyName="ErrorCode" />
  </Exec>
  <Error Condition="'$(ErrorCode)' != '0'" Text="Node.js is required to build and run this project. To continue, please install Node.js from https://nodejs.org/, and then restart your command prompt or IDE." />

  <!-- In development, the dist files won't exist on the first run or when cloning to
        a different machine, so rebuild them if not already present. -->
  <Message Importance="high" Text="Performing first-run Webpack build..." />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js" />
  <Exec Command="node node_modules/webpack/bin/webpack.js" />
</Target>

<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
  <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
  <Exec Command="npm install" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js --env.prod" />
  <Exec Command="node node_modules/webpack/bin/webpack.js --env.prod" />

  <!-- Include the newly-built files in the publish output -->
  <ItemGroup>
    <DistFiles Include="wwwroot\dist\**; ClientApp\dist\**" />
    <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
      <RelativePath>%(DistFiles.Identity)</RelativePath>
      <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>

As you can see, it runs Webpack twice: once for the vendor dependencies like Bootstrap, jQuery, etc. and once for the React app in the ClientApp folder.

Take a look at the ClientApp

The first thing you'll see if you look into the ClientApp folder is that there are *.tsx files instead of *.ts files. These are TypeScript files which support JSX, the weird XML/HTML syntax inside JavaScript code. VS 2017 already knows about the JSX syntax and doesn't show any errors. That's awesome.

The client app is bootstrapped in the boot.tsx (we had the index.js in the other blog post). The app supports routing via the react-router-dom package. The boot.tsx defines an AppContainer that primarily hosts the route definitions stored in the routes.tsx. The routes then call the different components depending on the path in the browser's address bar. This routing concept is a little more intuitive to use than the Angular one: the routing is defined in the component that hosts the routed contents. In this case the Layout component contains the dynamic contents:

// routes.tsx
export const routes = <Layout>
    <Route exact path='/' component={ Home } />
    <Route path='/counter' component={ Counter } />
    <Route path='/fetchdata' component={ FetchData } />
</Layout>;

Inside the Layout.tsx you see that the routed components will be rendered in a specific div tag that renders the children defined in the routes.tsx:

// Layout.tsx
export class Layout extends React.Component<LayoutProps, {}> {
  public render() {
    return <div className='container-fluid'>
      <div className='row'>
        <div className='col-sm-3'>
          <NavMenu />
          </div>
    <div className='col-sm-9'>
      { this.props.children }
    </div>
    </div>
    </div>;
  }
}

Using this approach, it should be possible to add sub-routes for specific small areas of the app, a kind of "nested routes".
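For example, a routed component could itself render further Route elements relative to its own match path. A hedged sketch (the Admin component and its paths are made up for illustration):

// Admin.tsx - hypothetical example of nested routes inside a routed component
import * as React from 'react';
import { Route, RouteComponentProps } from 'react-router-dom';

const Settings = () => <div>Settings</div>;
const Reports = () => <div>Reports</div>;

export class Admin extends React.Component<RouteComponentProps<{}>, {}> {
    public render() {
        return <div>
            {/* sub-routes, resolved relative to the parent route's path */}
            <Route path={`${this.props.match.path}/settings`} component={Settings} />
            <Route path={`${this.props.match.path}/reports`} component={Reports} />
        </div>;
    }
}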

There's also an example available of how to fetch data from a Web API. This sample uses isomorphic-fetch to fetch the data from the Web API:

constructor() {    
  super();
  this.state = { forecasts: [], loading: true };

  fetch('api/SampleData/WeatherForecasts')
    .then(response => response.json() as Promise<WeatherForecast[]>)
    .then(data => {
          this.setState({ forecasts: data, loading: false });
	});
}

Since React doesn't provide a library to load data via HTTP requests, you are free to use any library you want. Other libraries commonly used with React are axios, fetch, and Superagent.
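For example, the same call with axios (an assumed alternative, not used by this template) would look roughly like this; the WeatherForecast shape is assumed to match the template's sample model:

// hypothetical alternative using axios instead of isomorphic-fetch
// npm install axios
import axios from 'axios';

interface WeatherForecast {
    dateFormatted: string;
    temperatureC: number;
    temperatureF: number;
    summary: string;
}

axios.get<WeatherForecast[]>('api/SampleData/WeatherForecasts')
    .then(response => {
        // response.data is the already parsed JSON payload
        console.log(response.data);
    });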

A short look into the ASP.NET Core parts

The Startup.cs is a little special. Not much, but you'll find some differences in the Configure method. There is the WebpackDevMiddleware, which helps while debugging: it calls Webpack on every change in the TypeScript files and reloads the scripts in the browser while debugging. Using this middleware, you don't need to recompile the whole application or restart debugging:

if (env.IsDevelopment())
{
  app.UseDeveloperExceptionPage();
  app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
  {
    HotModuleReplacement = true,
    ReactHotModuleReplacement = true
  });
}
else
{
  app.UseExceptionHandler("/Home/Error");
}

And the route configuration contains a fallback route that gets used if the requested path doesn't match any MVC route:

app.UseMvc(routes =>
{
  routes.MapRoute(
    name: "default",
    template: "{controller=Home}/{action=Index}/{id?}");

  routes.MapSpaFallbackRoute(
    name: "spa-fallback",
    defaults: new { controller = "Home", action = "Index" });
});

The integration in the views is interesting as well. In the _Layout.cshtml:

  • There is a base href set to the current base URL.
  • The vendor.css and a site.css is referenced in the head of the document.
  • The vendor.js is referenced at the bottom.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>@ViewData["Title"] - ReactWebApp</title>
    <base href="~/" />

    <link rel="stylesheet" href="~/dist/vendor.css" asp-append-version="true" />
    <environment exclude="Development">
        <link rel="stylesheet" href="~/dist/site.css" asp-append-version="true" />
    </environment>
</head>
<body>
    @RenderBody()

    <script src="~/dist/vendor.js" asp-append-version="true"></script>
    @RenderSection("scripts", required: false)
</body>
</html>

The actual React app isn't referenced here, but in the Index.cshtml:

@{
    ViewData["Title"] = "Home Page";
}

<div id="react-app">Loading...</div>

@section scripts {
    <script src="~/dist/main.js" asp-append-version="true"></script>
}

This makes absolute sense. Done like this, you are able to create one React app per view. Routing probably doesn't work this way, because there is only one SpaFallbackRoute, but if you just want to make single views more dynamic, it would make sense to create multiple views, each hosting a specific React app.

This is exactly what I expect from using React. E.g. I have many old ASP.NET applications where I want to get rid of the old client script and modernize those applications step by step. In many cases a rewrite costs too much, and it would be easier to replace the old code with clean React apps.

The other changes in that project are not really related to React in general. They are just implementation details of this React demo application:

  • There is a simple API controller to serve the weather forecasts
  • The HomeController only contains the Index and the Error actions

Some concluding words

I didn't really expect such a clearly and transparently configured project template. If I tried to put the setup of the last post into an ASP.NET Core project, I would do it almost the same way: using Webpack to transpile and bundle the files and save them somewhere in the wwwroot folder.

From my perspective, I would use this project template as a starter for small to medium-sized projects (whatever that means). For medium to bigger projects, I would - again - propose dividing the client app and the server part into two different projects, to host them independently and to develop them independently. Hosting independently also means scaling independently. Developing independently means both scaling the teams independently and focusing only on the technology and tools used for that part of the application.

To learn more about React and how it works with ASP.NET Core in Visual Studio 2017, I will create a Chat-App. I will also write a small series about it:

  1. React Chat Part 1: Requirements & Setup
  2. React Chat Part 2: Creating the UI & React Components
  3. React Chat Part 3: Adding Websockets using SignalR
  4. React Chat Part 4: Authentication & Storage
  5. React Chat Part 5: Deployment to Azure

I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo

Jürgen Gutsch: Trying React the first time

For the last two years I have worked a lot with Angular. I learned a lot and I also wrote some blog posts about it. While I worked with Angular, I always had React in mind and wanted to learn about it. But I never had the time or a real reason to look at it. I still have no reason to try it, but a little bit of time left. So why not? :-)

This post is just a small overview of what I learned during the setup and in the very first tries.

The Goal

It is not only about developing using React; later I will also see how React works with ASP.NET and ASP.NET Core and how it behaves in Visual Studio. I also want to check the different benefits (compared to Angular) I heard and read about React:

  • It is not a huge framework like Angular, but just a library
  • Because it's a library, it should be easy to extend existing web apps.
  • You should be freer to use different libraries, since not all the stuff is built in.

Setup

My first idea was to follow the tutorials on https://reactjs.org/. With this tutorial, some other tools came along and some hidden configuration happened. The worst thing from my perspective is that I needed to use a package manager to install another package manager to load the packages: Yarn was installed using NPM and then used. Webpack was installed and used in some way, but there was no configuration and no hint about it. This tutorial uses the create-react-app starter kit, which hides a lot of stuff.

Project setup

What I like while working with Angular is the really transparent way of using it and working with it. Because of this I searched for a pretty simple tutorial that sets up React in a simple, clean and lightweight way. I found this great tutorial by Robin Wieruch: https://www.robinwieruch.de/minimal-react-Webpack-babel-setup/

This setup uses NPM to get the packages. It uses Webpack to bundle the needed JavaScript; Babel is integrated into Webpack to transpile the JavaScript from ES6 to more browser-compatible JavaScript.

I also use the webpack-dev-server to run the React app during development, and react-hot-loader is used to speed up development a little bit. The main difference to Angular development is the usage of ES6-based JavaScript and Babel instead of TypeScript. It should also work with TypeScript, but it doesn't really seem to matter, because they are pretty similar. I'll try using ES6 to see how it works. The only thing I will possibly miss is the type checking.

As you can see, there is not really a difference from TypeScript yet; only the JSX thing takes getting used to:

// index.js
import React from 'react';
import ReactDOM from 'react-dom';

import Layout from './components/Layout';

const app = document.getElementById('app');

ReactDOM.render(<Layout/>, app);

module.hot.accept();

I can also use classes in JavaScript:

// Layout.js
import React from 'react';
import Header from './Header';
import Footer from './Footer';

export default class Layout extends React.Component {
    render() {
        return (
            <div>
                <Header/>
                <Footer/>
            </div>
        );
    }
}

With this setup, I believe I can easily continue to play around with React.

Visual Studio Code

To support ES6, React and JSX in VSCode I installed some extensions for it:

  • Babel JavaScript by Michael McDermott
    • Syntax-Highlighting for modern JavaScripts
  • ESLint by Dirk Baeumer
    • To lint the modern JavaScripts
  • JavaScript (ES6) code snippets by Charalampos Karypidis
  • Reactjs code snippets by Charalampos Karypidis

Webpack

Webpack is configured to build a bundle.js into the ./dist folder. This folder is also the root folder for the Webpack dev server, so it will serve all the files from within this folder.

To start building and running the app, there is a start script added to the package.json:

"start": "Webpack-dev-server --progress --colors --config ./Webpack.config.js",

With this I can easily call npm start from a console or from the terminal inside VSCode. The Webpack dev server will rebuild the code and reload the app in the browser if a code file changes.

const webpack = require('webpack');

module.exports = {
    entry: [
        'react-hot-loader/patch',
        './src/index.js'
    ],
    module: {
        rules: [{
            test: /\.(js|jsx)$/,
            exclude: /node_modules/,
            use: ['babel-loader']
        }]
    },
    resolve: {
        extensions: ['*', '.js', '.jsx']
    },
    output: {
        path: __dirname + '/dist',
        publicPath: '/',
        filename: 'bundle.js'
    },
    plugins: [
      new webpack.HotModuleReplacementPlugin()
    ],
    devServer: {
      contentBase: './dist',
      hot: true
    }
};

React Developer Tools

For Chrome and Firefox there are add-ins available to inspect and debug React apps in the browser. For Chrome I installed the React Developer Tools, which is really useful to see the component hierarchy:

Hosting the app

The React app is hosted in an index.html, which is stored inside the ./dist folder. It references the bundle.js. The React process starts in the index.js. React puts the app inside a div with the id app (as you can see in the first code snippet of this post).

<!DOCTYPE html>
<html>
  <head>
      <title>The Minimal React Webpack Babel Setup</title>
  </head>
  <body>
    <div id="app"></div>
    <script src="bundle.js"></script>
  </body>
</html>

The index.js imports the Layout.js. Here a basic layout is defined by adding a Header and a Footer component, which are imported from other component files.

// Header.js
import React from 'react';
import ReactDOM from 'react-dom';

export default class Header extends React.Component {
    constructor(props) {
        super(props);
        this.title = 'Header';
    }
    render() {
        return (
            <header>
                <h1>{this.title}</h1>
            </header>
        );
    }
}
// Footer.js
import React from 'react';
import ReactDOM from 'react-dom';

export default class Footer extends React.Component {
    constructor(props) {
        super(props);
        this.title = 'Footer';
    }
    render() {
        return (
            <footer>
                <h1>{this.title}</h1>
            </footer>
        );
    }
}

The resulting HTML looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>The Minimal React Webpack Babel Setup</title>
  </head>
  <body>
    <div id="app">
      <div>
        <header>
          <h1>Header</h1>
        </header>
        <footer>
          <h1>Footer</h1>
        </footer>
      </div>
    </div>
    <script src="bundle.js"></script>
  </body>
</html>

Conclusion

My current impression is that React is much faster on startup than Angular. This is just a kind of Hello-world app, but even for such an app Angular needs some time to start a few lines of code. Maybe that changes when the app gets bigger, but I'm sure it stays fast, because there is less overhead in the framework.

The setup was easy and worked on the first try. The experience with Angular helped a lot here; I already knew the tools. Anyway, Robin's tutorial is pretty clear, simple and easy to read: https://www.robinwieruch.de/minimal-react-Webpack-babel-setup/

To get started with React, there's also a nice video series on YouTube that covers the real basics and how to get started creating components and adding the dynamic stuff to the components: https://www.youtube.com/watch?v=MhkGQAoc7bc&list=PLoYCgNOIyGABj2GQSlDRjgvXtqfDxKm5b

Marco Scheel: What’s in your bag, Marco?

I'm a fan of this type of blog post. Great examples can be found at The Verge, for instance. Today I want to give you a look into my bag and show you what a 100% cloud consultant / lead cloud architect needs for a successful day.

We often talk about the modern workplace, but for me in particular the modern workplace is nowhere and everywhere. So what do I carry with me? Every now and then there are blogs in which the author describes what he carries around and the motivation behind it. In many meetings you can see that I use a Microsoft Surface Book (GEN1), but now I'll show you what else is in my bag. For the interesting items there is also a sentence on the why.


The picture again in XL on Twitter.

Let's start with the most obvious: my bag (1) is a Vaude Albert M. I try to travel "light". A bag with wheels may be practical, but for my requirements and my budget this bag hit the mark. It had to fit laptops up to 13'', and for longer trips it can be slipped onto a trolley. The compartment layout is sufficient for me, and there are 4 mesh pockets inside for all my small stuff.

My laptop (2), or rather my 2-in-1, is a Microsoft Surface Book (GEN1). Since I started working at Glück & Kanja I have been in the fortunate position of using pen PCs: I started with a Toshiba Portege m200, moved on to a Microsoft Surface Pro (the original) and then straight to the Surface Book. The only slip-up was a Dell Latitude E6400. Inside the bag my laptop is protected by a Belkin neoprene sleeve (3). The sleeve is also used on its own and protects the device from possible rain when I have a meeting with a customer in another building or part of town.

The battery life of the Surface Book is OK, but nobody dares to leave the house without the power supply (4). The power supply is great. What can be great about a power supply? It has a USB charging port! I always keep a short micro-USB cable with a USB-C adapter attached to it, so a phone, headset, power bank and the like can be charged quickly and without hassle.

As a mobile mouse I use a Microsoft Arc Touch Mouse Surface Edition (i.e. the Bluetooth one). The mouse was essential in the Surface Pro days, because there was only one USB port and the trackpad did not deserve the name. With the Surface Book I now have a very good trackpad and the mouse is used less and less, but going completely without it still does not work for me. Thanks to its fold-flat mechanism it takes up virtually no space in the bag.

For the numerous conference calls during the week a proper headset (6) is irreplaceable. Glück & Kanja is not unknown in the UC space, so I have many colleagues who keep telling me how important a good headset is (and a LAN cable). With the Plantronics Voyager Focus UC I have a top-class headset. It even has active noise cancelling, so it can also be used on the train for watching videos. I use this headset both on the road and at my desk in Offenbach. It has a very easy-to-reach mute switch, so at least I don't disturb conference calls with unnecessary noise. Because of a bug on my Surface Book with the current Windows Insider build, I am currently using my 'old' mobile headset (7): the Jabra Stealth UC has always accompanied me on the road and never cluttered my bag. Privately I use the Jaybird Freedom Sprint Bluetooth earphones (8). They are usually paired only with my smartphone and I use them mostly for podcasts. With the button at the ear I can pause and resume a podcast at any time. You can also wear them for jogging.

My bag may have lots of mesh pockets and compartments, but the small stuff was always flying back and forth somehow and the wired earphones kept getting tangled up in everything. On a US blog I found this great gadget in a 'What's in my bag' post: a 'stuff' organizer (9) called Cocoon GRID-IT. These things have become quite affordable and are available in different sizes. I have the following items organized in it:

  • Wired earphones
  • Wired headset (still from my Lumia 920)
  • MicroSD card with adapter
  • USB 2.0 stick (small) + USB 3.0 stick (large)
  • Surface Pen Tip Kit
  • USB-C to headphone-jack adapter
  • Mini DisplayPort to VGA
  • Mini DisplayPort to HDMI, DVI, DisplayPort
  • Micro-USB to USB adapter

image

A power bank simply belongs in every bag these days. In the last Amazon Christmas sale I picked up a 'small' Anker, the PowerCore 10000. I used to carry an Anker PowerCore with 20,100 mAh, but it was just unnecessarily heavy. Charging a headset or phone on the go is much more elegant with the small battery. For this I also keep short cables in the small pocket (micro-USB and USB-C).

I always carry my cable collection and an Anker 24-watt 2-port charger in the front pocket. Long cables work best with the charger. My favourite cable (because it is long) is the white micro-USB cable I still have left over from the Kindle Keyboard days. More recent devices need a USB-C cable, and I still have one left from my Lumia 950 XL. My wife uses an iPad as her 'computer', so it never hurts to also have a Lightning cable (from Anker, of course) with me.

That leaves only the rest (12): hand cream, a ballpoint pen, lozenges for a scratchy throat, hay fever tablets, batteries for the Surface Pen, an umbrella and, as the father of two boys, always a few wet wipes.
