MSDN Team Blog AT [MS]: Microsoft DevOps tools win Tool Challenge @ Software Quality Days

Yes, we did it (again). Microsoft won the Tool Challenge at the Software Quality Days and was presented the BEST TOOL AWARD 2018. The beauty of this is that the conference participants voted for the best tool among vendors such as CA Technologies, Micro Focus, Microsoft, and Tricentis. Rainer Stropek presented for Microsoft on the future of Visual Studio & Visual Studio Team Services, covering topics like DevOps, mobile DevOps, Live Unit Testing, and how machine learning will affect testing.

 

During the conference we presented our DevOps solution based on Visual Studio Team Services, the new Visual Studio App Center service for mobile DevOps and the Cloud platform Microsoft Azure as a place for every tester and developer, regardless of platform or language used, to run their applications or test environments.

 

Software Quality Days is the brand of a yearly two-day conference (plus two days of workshops) focusing on software quality and testing technologies, with about 400 attendees. The conference is held in Vienna, Austria, and celebrated its 20th anniversary in 2018. Five tracks - practical, scientific, and tool-oriented - make up the conference agenda. In the three practical tracks there are presentations of application-oriented experiences and lectures - from users for users. The scientific track presents a corresponding level of innovation and research results and how they relate to practical usage scenarios. The industry's leading vendors present their latest services and tools in the exhibition and showcase practical examples and implementations in the Solution Provider Forum.

Tool challenge
As part of the Software Quality Days, the Tool Challenge is a special format on the first day of the conference. Participating vendors get questions or a practical challenge that needs to be "solved" during the day. In the late afternoon, the solution has to be presented back to the audience of the conference. For the participating vendors, the challenge lies in developing the solution and content at the conference location with limited time available, as well as in presenting it to the audience in a slot of only 12 minutes. Each conference participant gets one voting card and can select his or her favorite solution or presentation. The vendor with the highest number of voting cards wins the Tool Challenge.

The slides of our contribution are posted on SlideShare: http://www.slideshare.net/rstropek/software-quality-days-2018-tools-challenge

Video of the Tool Challenge presentation: https://www.youtube.com/watch?v=STr0ZiBtfPQ

Special thanks go to Rainer Stropek for the superior presentation at the Tool Challenge!

 

Rainer Stropek, Regional Director & MVP, Azure (right in the picture)
Gerwald Oberleitner, Technical Sales, Intelligent Cloud, Microsoft (left in the picture)

André Krämer: Bug: Xamarin.Forms creates an empty solution in Visual Studio 2017 Update 5

Recently I stumbled upon a rather ugly bug in Visual Studio 2017 Update 5. While testing the new Xamarin.Forms project template, which now also supports .NET Standard for sharing code, I ended up with an empty solution in Visual Studio. Both the shared project and the platform-specific projects were missing. A look into the file explorer showed that this was not a display glitch in Visual Studio: no files had actually been created in the file system either.

Jürgen Gutsch: Book Review: ASP.​NET Core 2 and Angular 5

Last fall, I did my first technical review of a book written by Valerio De Sanctis, called ASP.NET Core 2 and Angular 5. This book is about using Visual Studio 2017 to create a Single Page Application with ASP.NET Core and Angular.

About this book

The full title is "ASP.NET Core 2 and Angular 5: Full-Stack Web Development with .NET Core and Angular". It was published by PacktPub and is also available on Amazon, both as a printed version and in various e-book formats.

This book doesn't cover both technologies in depth, but it gives you a good introduction to how they work together. It leads you step by step from the initial setup to the finished application. Don't expect a book for expert developers; this book is great for ASP.NET developers who want to get started with ASP.NET Core and Angular. It is a step-by-step tutorial that creates all parts of an application that manages tests, their questions, answers, and results. It covers the database as well as the Web APIs, the Angular parts and the HTML, the authentication, and finally the deployment to a web server.

Valerio uses the Angular-based SPA project, which is available in Visual Studio 2017 and the .NET Core 2.0 SDK. This project template is not the best solution for bigger projects, but it is a good fit for small projects like the one described in this book.

About the technical review

It was my first technical review of an entire book, and it was kind of fun to do. I'm pretty sure it was a hard job for Valerio, because the technologies changed while he was working on the chapters. ASP.NET Core 2.0 was released after he had finished four or five chapters, and he needed to rewrite them. He changed the whole Angular integration in the ASP.NET Core project because of the new Angular SPA project template. Angular 5 also came out during the writing. Fortunately, there weren't many relevant changes between version 4 and version 5. I know these problems of writing good content while the technology keeps changing: I did an article series for a developer magazine about ASP.NET Core and Angular 2, and both ASP.NET Core and Angular changed many times, and changed again right after I finished the articles. I rewrote that stuff a lot and worked almost six months on only three articles. Even my Angular posts on this blog are pretty much outdated and don't work anymore with the latest versions.

Kudos to Valerio, he really did a great job.

I got the chapters to review one after another. My job wasn't just to read the chapters, but also to find logical errors, mistakes that could confuse readers, and code parts that don't work. I followed the chapters as written by Valerio to build the sample application, following all instructions and samples to find errors. I reported a lot of errors, I think, and I'm sure all of them were removed. After I finished the review of the last chapter, I had also finished the coding and had a running application deployed on a web server.

Readers reviews on Amazon and PacktPub

I just had a look at the readers' reviews on Amazon and PacktPub. There are not many reviews yet, but unfortunately 4 out of the (currently) 9 reviews talk about errors in the code samples, mostly in the client-side Angular code. That is a lot, IMHO. It makes me sad, and I really apologize for it. I was pretty sure I had found almost all mistakes, or at least those errors that prevent a running application, because I got it running in the end. Additionally, I wasn't the only technical reviewer: Ramchandra Vellanki also did a great job, for sure.

So why did some readers find errors? Two reasons came to my mind first:

  1. The readers didn't follow the instructions carefully enough. Especially experienced developers think they know how it works, or how it should work, from their perspective. They don't read exactly, because they believe they already know where the path leads. I did so as well during the first three or four chapters and had to start again from the beginning.
  2. Dependencies have changed since the book was published, especially if the package versions inside the package.json were not pinned to a specific version. npm install then loads the latest version, which may contain breaking changes. The package.json in the book has fixed versions, but the sources on GitHub don't.

I'm pretty sure there are some errors left in the code, but in the end the application should run.

There are also conceptual differences. While writing about Angular and ASP.NET Core, and while working with both, I learned a lot, and from my current point of view I would not host an Angular app inside an ASP.NET Core application anymore. (Maybe I'll consider it for a really small application.) Anyway, there is that ASP.NET Core Angular SPA project, and it is really easy to set up a SPA with it. So why not use this project template to describe the concepts and the interaction of Angular and ASP.NET Core? This keeps the book simple and short for beginners.

Conclusion

I would definitely do a technical review again if needed. As I said, it is fun and an honor to help an author write a book like this.

Too bad that some readers struggled with errors anyway and couldn't get the code running. But writing a book is hard work, and we developers all know that no application is really bug-free, so even a book about quickly changing technologies cannot be free of errors.

Manfred Steyer: Microservice Clients with Web Components using Angular Elements: Dreams of the (near) future?

In one of my last blog posts I compared several approaches for using Single Page Applications, especially Angular-based ones, in a microservice-based environment. Some people call such SPAs micro frontends; others call them micro apps. As you can read in the mentioned post, there is no one and only perfect approach, but several feasible concepts with different advantages and disadvantages.

In this post I'm looking at one of those approaches in more detail: Using Web Components. For this, I'm leveraging the new Angular Elements library (@angular/elements) the Core Team is currently working on. Please note that it's still an Angular Labs Project which means that it's experimental and that there can be breaking changes anytime.


Angular Elements

To get started with @angular/elements you should have a look at Vincent Ogloblinsky's blog post. It explains the ideas behind it very well. If you prefer a video, have a look at Rob Wormald's presentation from Angular Connect 2017. Also, my buddy Pascal Precht gave a great talk about this topic at ng-be 2017.

As those resources are really awesome, I won't repeat the information they provide here. Instead, I'm showing how to leverage this know-how to implement microservice clients.

Case Study

The case study presented here is as simple as possible. It contains a shell app that activates microservice clients as well as routes within those microservice clients. They are just called Client A and Client B. In addition, Client B also contains a widget from Client A.

Client A is activated

Client B with widget from Client A

The whole source code can be found in my GitHub repo.

Routing within Microservice Clients

One thing that is rather unusual here is that whole clients are implemented as Web Components, and therefore they use routing:

@NgModule({
    imports: [
        ReactiveFormsModule,
        BrowserModule,
        RouterModule.forRoot([
            { path: 'client-a/page1', component: Page1Component },
            { path: 'client-a/page2', component: Page2Component },
            { path: '**', component: Page1Component }
        ], { useHash: true })
    ],
    declarations: [
        ClientAComponent,
        Page1Component,
        Page2Component,
        [...]
    ],
    entryComponents: [
        ClientAComponent,
        [...]
    ]
})
export class AppModule {
    ngDoBootstrap() { }
}

When bootstrapping such components as Web Components we have to initialize the router manually:

@Component([...])
export class ClientAComponent {
    constructor(private router: Router) {
        router.initialNavigation(); // Manually triggering initial navigation for @angular/elements ?
    }
}

Excluding zone.js

Normally, Angular leverages zone.js for change detection. It provides a lot of convenience by informing Angular about all browser events. To be capable of this, it monkey-patches all browser objects. Especially when we want to use several microservice clients within a single page, it can be desirable to avoid such behavior. This also leads to smaller bundle sizes.

Beginning with Angular 5 we can exclude zone.js by setting the property ngZone to noop during bootstrapping:

registerAsCustomElements(
    [ClientAComponent, ClientAWidgetComponent],
    () => platformBrowserDynamic().bootstrapModule(AppModule, { ngZone: 'noop' })
);

After this, we have to trigger change detection manually. But this is cumbersome and error-prone. There are some ideas to deal with it. A prototypical (!) one comes from Fabian Wiles, who is an active community member. It uses a custom push pipe that triggers change detection when an observable yields a new value. It works similarly to the async pipe, but unlike async, push also works without zone.js:

@Component({
    selector: 'client-a-widget',
    template: `
        <div id="widget">
            <h1>Client-A Widget</h1>
            <input [formControl]="control">
            {{ value$ | push }}
        </div>
    `,
    styles: [`
        #widget { padding:10px; border: 2px darkred dashed }
    `],
    encapsulation: ViewEncapsulation.Native
})
export class ClientAWidgetComponent implements OnInit {
    control = new FormControl();
    value$: Observable<string>;

    ngOnInit(): void {
        this.value$ = this.control.valueChanges;
    }
}

You can find Fabian's push pipe in my GitHub repo.

Build Process

For building the web components, I'm using a modified version of the webpack configuration from Vincent Ogloblinsky's blog post. I've modified it to create a bundle for each microservice client. Normally, they would be built in separate projects, but for the sake of simplicity I've put everything into my sample:

const AotPlugin = require('@ngtools/webpack').AngularCompilerPlugin;
const path = require('path');

var clientA = {
    entry: { 'client-a': './src/client-a/main.ts' },
    resolve: { mainFields: ['es2015', 'browser', 'module', 'main'] },
    module: {
        rules: [{ test: /\.ts$/, loaders: ['@ngtools/webpack'] }]
    },
    plugins: [
        new AotPlugin({
            tsConfigPath: './tsconfig.json',
            entryModule: path.resolve(__dirname, './src/client-a/app.module#AppModule')
        })
    ],
    output: {
        path: __dirname + '/dist',
        filename: '[name].bundle.js'
    }
};

var clientB = {
    entry: { 'client-b': './src/client-b/main.ts' },
    resolve: { mainFields: ['es2015', 'browser', 'module', 'main'] },
    module: {
        rules: [{ test: /\.ts$/, loaders: ['@ngtools/webpack'] }]
    },
    plugins: [
        new AotPlugin({
            tsConfigPath: './tsconfig.json',
            entryModule: path.resolve(__dirname, './src/client-b/app.module#AppModule')
        })
    ],
    output: {
        path: __dirname + '/dist',
        filename: '[name].bundle.js'
    }
};

module.exports = [clientA, clientB];

Loading bundles

After creating the bundles, we can load them into a shell application:

<client-a></client-a>
<client-b></client-b>

<script src="dist/client-a.bundle.js"></script>
<script src="dist/client-b.bundle.js"></script>

In this example the bundles are located via relative paths but you could also load them from different origins. The latter one allows for a separate development and deployment of microservice clients.

In addition to that, we need some kind of meta-routing that makes sure that the microservice clients are only displayed when specific menu items are activated. I've implemented this in VanillaJS. You can look it up in the example provided.

Providing Widgets for other Microservice Clients

A bundle can provide several Web Components. For instance, the bundle for Client A also contains a ClientAWidgetComponent which is used in Client B:

registerAsCustomElements(
    [ClientAComponent, ClientAWidgetComponent],
    () => platformBrowserDynamic().bootstrapModule(AppModule, { ngZone: 'noop' })
);

When calling it, there is one challenge: in Client B, Angular doesn't know anything about Client A's ClientAWidgetComponent. Calling it would therefore make Angular throw an exception. To avoid this, we can make use of the CUSTOM_ELEMENTS_SCHEMA:

@NgModule({
    [...]
    schemas: [CUSTOM_ELEMENTS_SCHEMA],
    [...]
})
export class AppModule {
    ngDoBootstrap() { }
}

After this, we can call the widget anywhere within Client B:

<h2>Client B - Page 2</h2>
<client-a-widget></client-a-widget>

Evaluation

As mentioned, @angular/elements is currently experimental. Therefore this approach is more or less a dream of the (near) future. Besides this, there are some advantages and disadvantages:

Advantages

  • Styling is isolated from other Microservice Clients due to Shadow DOM
  • Allows for separate development and separate deployment
  • Mixing widgets from different Microservice Clients is possible
  • The shell can be a Single Page Application too
  • We can use different SPA frameworks in different versions for our Microservice Clients

Disadvantages

  • Microservice Clients are not completely isolated, as would be the case when using hyperlinks or iframes instead. This means that they could influence each other in unplanned ways, and that there can be conflicts when using different frameworks in different versions.
  • Shadow DOM doesn't work with IE 11
  • We need polyfills for some browsers

Norbert Eder: .NET Core and Integration Tests

In contrast to unit tests, integration tests exercise complete pieces of functionality. The systems involved (databases etc.) must be configured accordingly and be available for the tests.

Let's take a Web API as an example. It exposes defined endpoints to the outside. A client (browser, mobile device etc.) can call these endpoints to query or submit information. Integration tests act as such a client. The difference is that the actual results are compared to the expected results. This way it can be decided whether all APIs work correctly.

As with unit tests, you should not only test the happy path. Invalid input has to be expected: how does the system deal with it? Does it crash, or does it tell the client why a request was not accepted or did not return a result?

Creating the test project

So which steps are necessary to run integration tests on .NET Core?

In the first step, a new test project is created:

Creating a Visual Studio .NET Core xUnit test project

You can choose between an MSTest environment and xUnit. In this case the xUnit project was chosen. After creating the project, a rebuild is needed so that all required dependencies are fetched. This can also be done without a rebuild via nuget restore.

In the next step, the NuGet package Microsoft.AspNetCore.TestHost has to be added to the project. Among other things, it provides the TestServer class, which can spin up a complete server instance (without IIS etc.):

public class EndpointTest
{
    private readonly TestServer server;
    private readonly HttpClient client;

    public EndpointTest()
    {
        var webHostBuilder =
            new WebHostBuilder()
                .UseEnvironment("Test")
                // Startup class of the actual project under test
                .UseStartup<Startup>();

        this.server = new TestServer(webHostBuilder);
        this.client = server.CreateClient();            
    }

    [Fact]
    public async void ConnectToEndpoint_ShouldBeOk()
    {
        string result = await client.GetStringAsync("/api/endpoint");
        Assert.Equal("[RESPONSE]", result);
    }
}

In this example, the test server is started in the constructor with the project's Startup class, so all configurations specified in Startup are used. In the test methods, requests are sent to the individual endpoints and the results are verified.
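
To illustrate the earlier point about error cases, here is a minimal sketch of an additional test for the same class. The route /api/endpoint, the request body, and the expected status code are assumptions for illustration; adjust them to the actual API (this also requires using directives for System.Net, System.Net.Http, System.Text, and System.Threading.Tasks):

[Fact]
public async Task ConnectToEndpointWithInvalidInput_ShouldReturnBadRequest()
{
    // Deliberately send malformed input to the (assumed) endpoint.
    var content = new StringContent("{ \"invalid\": true }", Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync("/api/endpoint", content);

    // The API should reject the request in a controlled way instead of crashing.
    Assert.Equal(HttpStatusCode.BadRequest, response.StatusCode);
}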

Running the integration tests

The tests can be run from Visual Studio. Alternatively, they can be executed on the console:

dotnet test

Further information can be found in the documentation for dotnet test.

For unit tests it is important that every test can be executed (and passes) independently of the other tests. Integration tests, in contrast, often have to run in a defined order. For example, data can usually only be queried after logging in to the system. To illustrate this, here is a small example:

  1. Log in to the system
  2. Create a customer
  3. Create a customer project
  4. Update the customer project
  5. Update the customer
  6. Attempt to delete the customer
  7. Delete the customer project
  8. Delete the customer
  9. Log out of the system

These and many more steps have to be taken to guarantee the functionality, stability, and consistency of the software.

When using xUnit, an ordering mechanism is available. For this purpose, the interface ITestCaseOrderer is provided.

Controlling the order via an attribute

To be able to specify the desired order, an attribute has to be implemented and used:

public class TestPriorityAttribute : Attribute
{
    public int Priority { get; set; }

    public TestPriorityAttribute(int priority)
    {
        Priority = priority;
    }
}

The priority is then set like this:

[Fact, TestPriority(20)]
public async void ConnectToEndpoint_ShouldBeOk()
{
    string result = await client.GetStringAsync("/api/1/endpoint");
    Assert.Equal("[RESPONSE]", result);
}

Implementing and using an orderer

The following orderer implements the ITestCaseOrderer interface and sorts all test cases within a test class by their assigned priority:

public class TestPriorityOrderer : ITestCaseOrderer
{
    public IEnumerable<TTestCase> OrderTestCases<TTestCase>(IEnumerable<TTestCase> testCases) where TTestCase : ITestCase
    {
        SortedList<int, TTestCase> sortedTestCases = new SortedList<int, TTestCase>();
        foreach (var testCase in testCases)
        {
            var methodInfo = testCase.TestMethod.Method;
            var attribute = methodInfo.GetCustomAttributes((typeof(TestPriorityAttribute).AssemblyQualifiedName)).FirstOrDefault();
            var priority = attribute.GetNamedArgument<int>("Priority");
            sortedTestCases.Add(priority, testCase);
        }
        return sortedTestCases.Values.ToList();
    }
}

Please note that this only controls the order of the test cases within a test class, not the order of test classes. For the latter, the interface ITestCollectionOrderer exists; it works analogously to the example shown (a sketch follows below).
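
As a rough sketch of my own (not taken from the original post), such a collection orderer could, for example, sort test collections by their display name; the naming convention with numeric prefixes is purely an assumption for illustration, and the namespaces match the orderer shown above:

public class DisplayNameCollectionOrderer : ITestCollectionOrderer
{
    public IEnumerable<ITestCollection> OrderTestCollections(IEnumerable<ITestCollection> testCollections)
    {
        // Run collections in alphabetical order of their display name,
        // e.g. "01 Login", "02 Customers", "03 Projects" (hypothetical names).
        return testCollections.OrderBy(c => c.DisplayName);
    }
}

To my knowledge, such an orderer is registered via an assembly-level [assembly: TestCollectionOrderer(...)] attribute, analogous to the class-level TestCaseOrderer attribute shown next.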

In the respective test class, the orderer to be used is set via an attribute:

[TestCaseOrderer("MyProject.IntegrationTests.TestPriorityOrderer", "MyProject.IntegrationTests")]
public class EndpointTest
{
    // Code goes here
}

From now on, it is invoked and all test cases are sorted accordingly.

Further steps

In more complex environments it is advisable to create suitable abstractions. It can also make sense to derive from the Startup class in order to tweak the middleware (see the sketch below).
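
A minimal sketch of that Startup idea, under the assumption that the production Startup takes an IConfiguration and declares ConfigureServices as virtual; IMailService and FakeMailService are hypothetical types used only for illustration:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class TestStartup : Startup
{
    public TestStartup(IConfiguration configuration)
        : base(configuration)
    {
    }

    // Assumes ConfigureServices is declared virtual in the production Startup class.
    public override void ConfigureServices(IServiceCollection services)
    {
        base.ConfigureServices(services);

        // Replace a real dependency with a test double (hypothetical types).
        services.AddSingleton<IMailService, FakeMailService>();
    }
}

The test server from above would then be created with .UseStartup<TestStartup>() instead of .UseStartup<Startup>().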

Integrating the tests into the build system should definitely be considered. If the tests run quickly despite the additional systems involved, they should be executed before the source code is made available to other developers (via source control).

Conclusion

Integration tests are an excellent way to determine the current state of a system. Their significance, however, is only as good as the existing tests. Tests should be written during development and whenever problems become known. This way the test base grows constantly and helps to detect problems early and to avoid repeating mistakes. I recommend using both unit tests and integration tests from the very beginning. A postponed introduction usually never happens, or is only triggered once the shit has already hit the fan. Have fun building a high-quality solution!

Happy Coding.

The post .NET Core und Integrationstests appeared first on Norbert Eder.

Holger Schwichtenberg: Tuples within Tuples in C# 7.x

Tuples are used to bind structured and typed pieces of information together without declaring a class or struct for that purpose. They can be nested within each other.
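
A small example of what this looks like in C# 7.x (my own illustration, not from the article):

// A tuple whose second component is itself a tuple.
(string Name, (int Year, int Month, int Day) BirthDate) person =
    ("Ada Lovelace", (1815, 12, 10));

Console.WriteLine(person.Name);            // Ada Lovelace
Console.WriteLine(person.BirthDate.Year);  // 1815

// Deconstruction works across the nesting levels as well.
var (name, (year, month, day)) = person;
Console.WriteLine($"{name} was born on {year}-{month:D2}-{day:D2}");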

Holger Schwichtenberg: User group talk on .NET 4.7 and Visual Studio 2017 on January 10 in Dortmund

On this evening, the Dotnet-Doktor shows the latest features in .NET, C#, and Visual Studio.

Norbert Eder: Is my .NET Core/Standard application platform-independent?

.NET Core/Standard is often used when the software should run not only on Windows but also on Linux or macOS. When developing on a Windows machine, however, the use of a problematic API goes unnoticed. A library helps us developers with this.

The Platform Compatibility Analyzer can simply be added as a NuGet package and starts working immediately. All problems found are either shown via the light bulb icon or appear in the error list (whether they are classified as warning, error etc. can be adjusted in the settings).

Which checks are performed?

The following platform checks are currently covered:

  • Check 1: The .NET Core or .NET Standard API being used throws a PlatformNotSupportedException. The analyzer shows on which platform this API is not available (a guard example follows after this list).
  • Check 2: According to NuGet, .NET Framework 4.6.1 implements .NET Standard 2.0. This is not entirely correct, though. This message indicates that an API is being used that is not supported on .NET Framework 4.6.1.
  • Check 3: Use of an unsupported native API in a .NET Standard/UWP application.
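
To give an idea of what check 1 means in practice, here is a small sketch of my own (not from the post): a call that, to my knowledge, throws PlatformNotSupportedException on non-Windows platforms is guarded by a runtime check so it is never reached on Linux or macOS. Which APIs the analyzer actually flags depends on its current rule data.

using System;
using System.Runtime.InteropServices;

public static class Beeper
{
    public static void TryBeep()
    {
        if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
        {
            // Console.Beep(frequency, duration) is, to my knowledge, Windows-only.
            Console.Beep(440, 500);
        }
        else
        {
            // Fallback for other platforms: the ASCII bell character.
            Console.Write("\a");
        }
    }
}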

In addition, the analyzer checks for calls to obsolete APIs and flags them as well.

Should I use this analyzer?

If there is any chance that the software will be used on another platform, I strongly recommend using the API Analyzer. Problem areas are identified during development and can be fixed immediately. Finding incompatibilities only during test runs can mean major rework and therefore high costs.

If you want to restrict the platform check to specific platforms, you can configure the platforms to be excluded in the project file:

<PropertyGroup>
    <PlatformCompatIgnore>Linux;MacOSX</PlatformCompatIgnore>
</PropertyGroup>

Happy Coding!

The post Ist meine .NET Core/Standard Anwendung plattformunabhängig? appeared first on Norbert Eder.

Manfred Steyer: Generating custom Angular Code with the CLI and Schematics, Part III: Extending existing Code with the TypeScript Compiler API


Table of Contents

This blog post is part of an article series.


In my two previous blog posts, I've shown how to leverage Schematics to generate custom code with the Angular CLI as well as how to update an existing NgModule with declarations for generated components. The latter was not that difficult, because this is a task the CLI performs too, and hence there are already helper functions we can use.

But, as one can imagine, we are not always lucky enough to find existing helper functions. In these cases we need to do the heavy lifting ourselves, and this is what this post is about: showing how to directly modify existing source code in a safe way.

When we look into the helper functions used in the previous article, we see that they are using the TypeScript Compiler API which e. g. gives us a syntax tree for TypeScript files. By traversing this tree and looking at its nodes we can analyse existing code and find out where a modification is needed.

Using this approach, this post extends the schematic from the last article so that the generated Service is injected into the AppComponent where it can be configured:

[...]
import { SideMenuService } from './core/side-menu/side-menu.service';

@Component({ [...] })
export class AppComponent {
    constructor(private sideMenuService: SideMenuService) {
        // sideMenuService.show = true;
    }
}

I think, providing boilerplate for configuring a library that way can lower the barrier for getting started with it. However, please note that this simple example represents a lot of situations where modifying existing code provides more convenience.

The source code for the examples used for this can be found here in my GitHub repository.

Schematics is currently an Angular Labs project. Its public API is experimental and can change in the future.


Walking a Syntax Tree with the TypeScript Compiler API

To get familiar with the TypeScript Compiler API, let's start with a simple Node.js example that demonstrates its fundamental usage. All we need for this is TypeScript itself. As I'm going to use it within a simple Node.js application, let's also install the typings for it. For this, we can use the following commands in a new folder:

npm init
npm install typescript --save
npm install @types/node --save-dev

In addition to that, we need a tsconfig.json with respective compiler settings:

{ "compilerOptions": { "target": "es6", "module": "commonjs", "lib": ["dom", "es2017"], "moduleResolution": "node" } }

Now we have everything in place for our first experiment with the Compiler API. Let's create a new file index.ts:

import * as ts from 'typescript';
import * as fs from 'fs';

function showTree(node: ts.Node, indent: string = '    '): void {
    console.log(indent + ts.SyntaxKind[node.kind]);

    if (node.getChildCount() === 0) {
        console.log(indent + '    Text: ' + node.getText());
    }

    for (let child of node.getChildren()) {
        showTree(child, indent + '    ');
    }
}

let buffer = fs.readFileSync('demo.ts');
let content = buffer.toString('utf-8');

let node = ts.createSourceFile('demo.ts', content, ts.ScriptTarget.Latest, true);

showTree(node);

The showTree function recursively traverses the syntax tree beginning with the passed node. For this it logs the node's kind to the console. This property tells us whether the node represents for instance a class name, a constructor or a parameter list. If the node doesn't have any children, the program is also printing out the node's textual content, e. g. the represented class name. The function repeats this for each child node with an increased indent.

At the end, the program reads a TypeScript file and constructs a new SourceFile object with its content. As the type SourceFile is also a node, we can pass it to showTree.

In addition to this, we also need the demo.ts file the application is loading. For the sake of simplicity, let's go with the following simple class:

class Demo {
    constructor(otherDemo: Demo) {}
}

To compile and run the application, we can use the following commands:

tsc index.ts
node index.js

Of course, it would make sense to create a npm script for this.

When running, the application should show the following syntax tree:

SourceFile
    SyntaxList
        ClassDeclaration
            ClassKeyword
                Text: class
            Identifier
                Text: Demo
            FirstPunctuation
                Text: {
            SyntaxList
                Constructor
                    ConstructorKeyword
                        Text: constructor
                    OpenParenToken
                        Text: (
                    SyntaxList
                        Parameter
                            Identifier
                                Text: otherDemo
                            ColonToken
                                Text: :
                            TypeReference
                                Identifier
                                    Text: Demo
                    CloseParenToken
                        Text: )
                    Block
                        FirstPunctuation
                            Text: {
                        SyntaxList
                            Text: 
                        CloseBraceToken
                            Text: }
            CloseBraceToken
                Text: }
    EndOfFileToken
        Text: 

Take some time to look at this tree. As you see, it contains a node for every aspect of our demo.ts. For instance, there is a node of the kind ClassDeclaration for our class, and it contains a ClassKeyword and an Identifier with the text Demo. You also see a Constructor with nodes that represent all the pieces a constructor consists of. It contains a SyntaxList with a sub-tree for the constructor argument otherDemo.

When we combine what we've learned when writing this example with the things we already know about Schematics from the previous articles, we have everything to implement the initially described endeavor. The next sections describe the necessary steps.

Providing Key Data

When writing a Schematics rule, a first good step is thinking about all the data it needs and creating a class for it. In our case, this class looks like this:

export interface AddInjectionContext {
    appComponentFileName: string;       // e. g. /src/app/app.component.ts
    relativeServiceFileName: string;    // e. g. ./core/side-menu/side-menu.service
    serviceName: string;                // e. g. SideMenuService
}

To get this data, let's create a function createAddInjectionContext:

function createAddInjectionContext(options: ModuleOptions): AddInjectionContext {
    let appComponentFileName = '/' + options.sourceDir + '/' + options.appRoot + '/app.component.ts';
    let destinationPath = constructDestinationPath(options);
    let serviceName = classify(`${options.name}Service`);
    let serviceFileName = join(normalize(destinationPath), `${dasherize(options.name)}.service`);
    let relativeServiceFileName = buildRelativePath(appComponentFileName, serviceFileName);

    return {
        appComponentFileName,
        relativeServiceFileName,
        serviceName
    }
}

As this listing shows, createAddInjectionContext takes an instance of the class ModuleOptions. It is part of the utils Schematics contains and represents the parameters the CLI passes. The three needed fields are inferred from that instance. To find out in which folder the generated files are placed, it uses the custom helper constructDestinationPath:

export function constructDestinationPath(options: ModuleOptions): string {
    return '/' + (options.sourceDir ? options.sourceDir + '/' : '')
               + (options.path || '')
               + (options.flat ? '' : '/' + dasherize(options.name));
}

In addition to this, it uses further helper functions Schematics provides us:

  • classify: Creates a class name, e. g. SideMenu when passing side-menu.
  • normalize: Normalizes a path in order to compensate for platform specific characters like \ under Windows.
  • dasherize: Converts to Kebab case, e. g. it returns side-menu for SideMenu.
  • join: Combines two paths.
  • buildRelativePath: Builds a relative path that points from the first passed absolute path to the second one.

Please note, that some of the helper functions used here are not part of the public API. To prevent breaking changes I've copied the respective files. More about this wrinkle can be found in my previous article about this topic.

Adding a new constructor

In cases where the AppComponent does not have a constructor, we have to create one. The Schematics way of doing this is creating a Change object that describes this modification. For this task, I've created a function createConstructorForInjection. Although it is a bit long, because we have to include several null/undefined checks, it is quite straightforward:

function createConstructorForInjection(context: AddInjectionContext, nodes: ts.Node[], options: ModuleOptions): Change {
    let classNode = nodes.find(n => n.kind === ts.SyntaxKind.ClassKeyword);

    if (!classNode) {
        throw new SchematicsException(`expected class in ${context.appComponentFileName}`);
    }

    if (!classNode.parent) {
        throw new SchematicsException(`expected constructor in ${context.appComponentFileName} to have a parent node`);
    }

    let siblings = classNode.parent.getChildren();
    let classIndex = siblings.indexOf(classNode);

    siblings = siblings.slice(classIndex);

    let classIdentifierNode = siblings.find(n => n.kind === ts.SyntaxKind.Identifier);

    if (!classIdentifierNode) {
        throw new SchematicsException(`expected class in ${context.appComponentFileName} to have an identifier`);
    }

    if (classIdentifierNode.getText() !== 'AppComponent') {
        throw new SchematicsException(`expected first class in ${context.appComponentFileName} to have the name AppComponent`);
    }

    // Find opening curly braces (FirstPunctuation means '{' here).
    let curlyNodeIndex = siblings.findIndex(n => n.kind === ts.SyntaxKind.FirstPunctuation);

    siblings = siblings.slice(curlyNodeIndex);

    let listNode = siblings.find(n => n.kind === ts.SyntaxKind.SyntaxList);

    if (!listNode) {
        throw new SchematicsException(`expected first class in ${context.appComponentFileName} to have a body`);
    }

    let toAdd = `
  constructor(private ${camelize(context.serviceName)}: ${classify(context.serviceName)}) {
    // ${camelize(context.serviceName)}.show = true;
  }
`;
    return new InsertChange(context.appComponentFileName, listNode.pos + 1, toAdd);
}

The parameter nodes contains all nodes of the syntax tree in a flat way. This structure is also used by some default rules Schematics comes with and allows to easily search the tree with Array methods. The function looks for the first node of the kind ClassKeyword which contains the class keyword. Compare this with the syntax tree above which was displayed by the first example.

After this it gets an array with the ClassKeyword's siblings (=its parent's children) and searches it from left to right in order to find a position for the new constructor. To search from left to right, it truncates everything that is on the left of the current position using slice several times. To be honest, this is not the best decision in view of performance, but it should be fast enough and I think that it makes the code more readable.

Using this approach, the function walks to the right until it finds a SyntaxList (= class body) that follows a FirstPunctuation node (= the character '{' in this case), which in turn follows an Identifier (= the class name). Then it uses the position of this SyntaxList to create an InsertChange object describing that a constructor should be inserted there.

Of course, we could also search the body of the class to find a more fitting place for the constructor -- e. g. between the property declarations and the method declarations -- but for the sake of simplicity and demonstration, I've dropped this idea.

Adding a constructor argument

If there already is a constructor, we have to add another argument for our service. The following function takes care of this task. Among other parameters, it takes the node that represents the constructor. You can also compare this with the syntax tree of our first example at the beginning.

function addConstructorArgument(context: AddInjectionContext, ctorNode: ts.Node, options: ModuleOptions): Change {
    let siblings = ctorNode.getChildren();

    let parameterListNode = siblings.find(n => n.kind === ts.SyntaxKind.SyntaxList);

    if (!parameterListNode) {
        throw new SchematicsException(`expected constructor in ${context.appComponentFileName} to have a parameter list`);
    }

    let parameterNodes = parameterListNode.getChildren();

    let paramNode = parameterNodes.find(p => {
        let typeNode = findSuccessor(p, [ts.SyntaxKind.TypeReference, ts.SyntaxKind.Identifier]);
        if (!typeNode) return false;
        return typeNode.getText() === context.serviceName;
    });

    // There is already a respective constructor argument --> nothing to do for us here ...
    if (paramNode) return new NoopChange();

    // Is the new argument the first one?
    if (!paramNode && parameterNodes.length == 0) {
        let toAdd = `private ${camelize(context.serviceName)}: ${classify(context.serviceName)}`;
        return new InsertChange(context.appComponentFileName, parameterListNode.pos, toAdd);
    }
    else if (!paramNode && parameterNodes.length > 0) {
        let toAdd = `, private ${camelize(context.serviceName)}: ${classify(context.serviceName)}`;
        let lastParameter = parameterNodes[parameterNodes.length - 1];
        return new InsertChange(context.appComponentFileName, lastParameter.end, toAdd);
    }

    return new NoopChange();
}

This function retrieves all child nodes of the constructor and searches for a SyntaxList (= the parameter list) node having a TypeReference child, which in turn has an Identifier child. For this, it uses the helper function findSuccessor displayed below. The found identifier holds the type of the argument in question. If there is already an argument that points to the type of our service, we don't need to do anything. Otherwise, the function checks whether we are inserting the first argument or a subsequent one. In each case, the correct position for the new argument is located, and then the function returns a respective InsertChange object for the needed modification.

function findSuccessor(node: ts.Node, searchPath: ts.SyntaxKind[]) {
    let children = node.getChildren();
    let next: ts.Node | undefined = undefined;

    for (let syntaxKind of searchPath) {
        next = children.find(n => n.kind == syntaxKind);
        if (!next) return null;
        children = next.getChildren();
    }

    return next;
}

Deciding whether to create or modify a Constructor

The good news first: we've done the heavy lifting. What we need now is a function that decides which of the two possible changes -- adding a constructor or modifying it -- needs to be performed:

function buildInjectionChanges(context: AddInjectionContext, host: Tree, options: ModuleOptions): Change[] {
    let text = host.read(context.appComponentFileName);
    if (!text) throw new SchematicsException(`File ${options.module} does not exist.`);
    let sourceText = text.toString('utf-8');

    let sourceFile = ts.createSourceFile(context.appComponentFileName, sourceText, ts.ScriptTarget.Latest, true);
    let nodes = getSourceNodes(sourceFile);

    let ctorNode = nodes.find(n => n.kind == ts.SyntaxKind.Constructor);

    let constructorChange: Change;

    if (!ctorNode) {
        // No constructor found
        constructorChange = createConstructorForInjection(context, nodes, options);
    }
    else {
        constructorChange = addConstructorArgument(context, ctorNode, options);
    }

    return [
        constructorChange,
        insertImport(sourceFile, context.appComponentFileName, context.serviceName, context.relativeServiceFileName)
    ];
}

Like the first sample in this post, it uses the TypeScript Compiler API to create a SourceFile object for the file containing the AppComponent. Then it uses the function getSourceNodes, which is part of Schematics, to traverse the whole tree and create a flat array with all nodes. These nodes are searched for a constructor. If there is none, we use our function createConstructorForInjection to create a Change object; otherwise we go with addConstructorArgument. At the end, the function returns this Change together with another Change created by insertImport, which also comes with Schematics and creates the needed import statement at the beginning of the TypeScript file.

Please note that the order of these two changes is vital, because they add lines to the source file, which would otherwise invalidate the position information held by the node objects.

Putting all together

Now, we just need a factory function for a rule that is calling buildInjectionChanges and applying the returned changes:

export function injectServiceIntoAppComponent(options: ModuleOptions): Rule {
    return (host: Tree) => {
        let context = createAddInjectionContext(options);
        let changes = buildInjectionChanges(context, host, options);

        const declarationRecorder = host.beginUpdate(context.appComponentFileName);
        for (let change of changes) {
            if (change instanceof InsertChange) {
                declarationRecorder.insertLeft(change.pos, change.toAdd);
            }
        }
        host.commitUpdate(declarationRecorder);

        return host;
    };
};

This function takes the ModuleOptions holding the parameters the CLI passes and returns a Rule function. It creates the context object with the key data and delegates to buildInjectionChanges. The received changes are iterated and applied.

Adding Rule to Schematic

To get our new injectServiceIntoAppComponent rule called, we have to add it to the schematic's index.ts:

[...]
export default function (options: MenuOptions): Rule {
    return (host: Tree, context: SchematicContext) => {
        [...]
        const rule = chain([
            branchAndMerge(chain([
                mergeWith(templateSource),
                addDeclarationToNgModule(options, options.export),
                injectServiceIntoAppComponent(options)
            ]))
        ]);

        return rule(host, context);
    }
}

Testing the extended Schematic

To try the modified Schematic out, compile it and copy everything to the node_modules folder of an example application. As in the former blog article, I've decided to copy it to node_modules/nav. Please make sure to exclude the Schematic Collection's node_modules folder, so that there is no folder node_modules/nav/node_modules.

After this, switch to the example application's root and call the Schematic:

Calling the Schematic, which generates the component and registers it with the module

This not only creates the SideMenu but also injects its service into the AppComponent:

import { Component } from '@angular/core';
import { OnChanges, OnInit } from '@angular/core';

import { SideMenuService } from './core/side-menu/side-menu.service';

@Component({
    selector: 'app-root',
    templateUrl: './app.component.html',
    styleUrls: ['./app.component.css']
})
export class AppComponent {
    constructor(private sideMenuService: SideMenuService) {
        // sideMenuService.show = true;
    }
    title = 'app';
}

Norbert Eder: Goals for 2018

Goals for 2018

By now it has become a tradition that I publish my goals for the new year here on my blog (at least those meant for the public). Of course I want to continue this tradition this year as well.

Software development

The coming year will revolve largely around .NET Core, AngularJS, Azure IoT, and Docker. There is little new in that, but of course I plan to share my knowledge here on the blog and on other channels (more on that below). In the area of software architecture I want to develop further and therefore consume corresponding material (articles and books).

Photography

This year I want to keep the #fotomontag going, i.e. there will still be a new photo from me every Monday. In addition, I want to publish numerous tips & tricks on photography over on my dedicated blog at https://norberteder.photography and thus create value for photography-minded readers. The goal is at least one post per week.

Blog

Content-wise, some changes are coming to the blog. I want to focus it more strongly on software development again. This means that some content, such as Unterwegs, will move to https://norberteder.photography. I want to publish a post once or twice a week.

Numerous older blog posts will receive an update. Hopelessly outdated articles (e.g. on Silverlight) will disappear entirely.

Reading

Continuous learning is incredibly important. That's why I will again read numerous technical books this year. In contrast to the past two years, I want to cut down on TV series and instead continue reading Perry Rhodan Neo, or other things that interest me. In total, my goal is 15 books.

You can follow this in my Goodreads 2018 Reading Challenge.

Helping others

I am in the fortunate position of being healthy and in a good situation. That is not the case for everyone. That's why, in 2018, I want to support others more and do good whenever I see the opportunity.

Finally, I would like to thank you, dear reader, for your loyalty and wish you a wonderful 2018.

The post Ziele 2018 appeared first on Norbert Eder.

Code-Inside Blog: First steps to enable login with Microsoft or Azure AD account for your application

It is quite common these days to "Login with Facebook/Google/Twitter". Of course Microsoft has something similar. If I remember correctly, the first version was called "Live SDK", with the possibility to log in with your personal Microsoft account.

With Office 365 and the introduction of Azure AD, we were able to build an application that signs in with a personal account via the "Live SDK" and with an organizational account via "Azure AD".

However: the developer and end-user UX was far from perfect, because the implementation for each account type was different, and for the user it was not clear which one to choose.

Microsoft Graph & Azure AD 2.0

Fast forward to the right way: Use the Azure AD 2.0 endpoint.

Step 1: Register your own application

You just need to register your own application in the Application Registration Portal. The registration itself is a typical OAuth-application registration and you get a ClientId and Secret for your application.

Warning: If you have "older" LiveSDK applications registered under your account, you need to choose Converged Applications. LiveSDK applications are more or less legacy, and I wouldn't use them anymore.

Step 2: Choose a platform

Now you need to choose your application platform. If you want to enable the sign-in stuff for your web application, you need to choose "Web" and insert the redirect URL. After the sign-in process, the token will be sent to this URL.

Step 3: Choose Microsoft Graph Permissions (Scopes)

In the last step you need to select which permissions your application needs. A first-time user needs to accept your permission requests. The "Microsoft Graph" is a collection of APIs that works for personal Microsoft accounts as well as Office 365/Azure AD accounts.

The “User.Read” permission is the most basic permission that would allow to sign-in, but if you want to access other APIs as well you just need to add those permissions to your application:

Finish

After the application registration and the selection of the needed permissions you are ready to go. You can even generate a sample application on the portal. For a quick start, check this page.
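
To give a rough idea of what the web-application case can look like in ASP.NET Core 2.x, here is a hedged sketch using the generic cookie and OpenID Connect middleware against the Azure AD 2.0 endpoint. The client id, callback path, and scope are placeholders, and the official samples linked above remain the authoritative reference:

using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(options =>
        {
            options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
        })
        .AddCookie()
        .AddOpenIdConnect(options =>
        {
            // The "common" v2.0 endpoint accepts personal and organizational accounts.
            options.Authority = "https://login.microsoftonline.com/common/v2.0";
            options.ClientId = "<client id from the Application Registration Portal>";
            options.ResponseType = "id_token";
            options.CallbackPath = "/signin-oidc"; // must match the registered redirect URL
            options.Scope.Add("User.Read");        // basic Microsoft Graph permission
            // The common endpoint issues tokens for many tenants; issuer validation
            // is relaxed here only to keep the sketch short.
            options.TokenValidationParameters.ValidateIssuer = false;
        });

        services.AddMvc();
    }
}

Don't forget app.UseAuthentication() in Configure; without it, the middleware is never invoked.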

Microsoft Graph Explorer

As I already said: the Graph is the center of Microsoft's cloud data, and the easiest way to play around with the different scopes and possibilities is the Microsoft Graph Explorer.
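
Once you have an access token (for example from the sign-in flow sketched above), calling the Graph boils down to a plain HTTP request. A minimal sketch, assuming a valid token in accessToken:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class GraphDemo
{
    public static async Task<string> GetMyProfileAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            // The /me endpoint returns the profile of the signed-in user.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var response = await client.GetAsync("https://graph.microsoft.com/v1.0/me");
            response.EnsureSuccessStatusCode();

            return await response.Content.ReadAsStringAsync();
        }
    }
}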

Hope this helps.

Norbert Eder: Looking back at 2017

Looking back at 2017

Unbelievable. Once again I'm sitting here, musing about the past year and writing my personal review. 2017 was a very difficult year, but one in which I also learned a lot. But first things first.

Of course I had set myself goals for 2017. For me they are always a rough frame for my further development. Let's see how I did. Some goals are not on the public list but are tracked by me just the same, probably even more strictly.

Software development

Well, I didn't write any technical magazine articles in 2017, and I didn't really blog more about development than in the year before. But: after writing less code in 2016, that picked up massively again in 2017. I mainly worked with .NET Core and C#, as well as TypeScript, AngularJS, and Node.js. In addition, I was able to add Docker to my repertoire.

Why didn't I reach my goals in this area? So much changed this year that I simply found no time to develop ideas for articles (for trade magazines as well as for my blog). When there was time, rest and distraction were the order of the day.

Photography

The year started right away with a trip to Croatia. On the island of Krk I visited the Hotel Haludovo. In April I went to Amsterdam, a truly impressive city. At the end of June it was off to the Czech Republic, to beautiful Brno. The photo trips wrapped up in October with Ljubljana.

Thanks to the many trips, I was able to learn a lot, especially in the areas of architecture and landscape. In post-processing I increasingly used Photoshop in addition to Lightroom. There were also dedicated Photoshop projects (see Levitation) intended to improve my skills.

With all these topics, portrait shoots fell somewhat short, but I managed to complete a shoot here and there.

Website

There were many small changes to the site, but the basic concept remained unchanged in 2017. For good reason:

Metric      Change
Page views  +10%
Visitors    +20%
Posts       -15%

What is interesting here is that, with considerably fewer posts, the website was visited more often. This is owed to a few posts that were written at just the right time.

Top 5 new posts of 2017

Top 5 posts of 2017

Books

Books have always been absolutely important to me. In 2017 I again read some interesting ones:

My reading list can be found online.

Life

2017 brought many life-changing experiences. Many of them sad, some hopeful. There were many moments in which I simply tried to function. So much was going on this year that I hardly found time to think everything through. For the first time in years (I think for the first time ever) I took time off over Christmas to sort and order everything. I will probably be busy with that for a while longer. I can't say yet what will come of it. What is a fact, though, is that you have to enjoy life more, whenever possible.

Conclusion

The year 2017 held some special challenges. Besides serious illnesses in my immediate circle, I also had to say goodbye forever twice, both times rather unexpectedly. Those are moments that entail long periods of reflection.

The post Rückblick 2017 appeared first on Norbert Eder.

Alexander Schmidt: Unit tests against automatically provisioned SQL databases

Automatic deployment of seeded databases when running unit tests.

Manfred Steyer: A software architect's approach towards using Angular (and SPAs in general) for microservices aka microfrontends

People ask me on a regular basis how to use SPAs and/or Angular in a microservice-based environment. The need for such microfrontends is no surprise, as microservices are quite popular nowadays. The underlying idea of microservices is quite simple: create several tiny applications -- so-called microservices -- instead of one big monolithic application. This leads, for instance (but not only), to smaller teams (per microservice) that can make decisions faster and choose the "best" technology that suits their needs.

But when we want to use several microservices that form a bigger software system in the browser, we need a way to load them side by side and to isolate them from each other so that they cannot interact in an unplanned manner. The fact that each team can use different frameworks in different versions brings additional complexity into play.

Fortunately, there are several approaches for this. Unfortunately, no approach is perfect -- each of them has its own pros and cons.

To decide for one, a software architect would evaluate those so called architectural candidates against the architectural goals given for the software system in question. Typical (but not the only) goals for SPAs in microservice-based environments are shown in the next section.

Architectural Goals

Architectural Goal Description
a) Isolation Can the clients influence each other in an unplanned way?
b) Separate Deployment Can the microservices be deployed separately without the need to coordinate with other teams responsible for other microservices?
c) Single Page Shell Is the shell composing the loaded microfrontends a SPA -- or does it at least feel like one for the user (no postbacks, deep linking, holding state)?
d) Different SPA-Frameworks Can we use different SPA frameworks (or libraries) in different versions?
e) Tree Shaking Can we make use of tree shaking?
f) Vendor Bundles Can we reuse already loaded vendor bundles, or do we need to load the same framework several times if it's used by several microfrontends?
g) Several microfrontends at the same time Can we display several microfrontends at the same time, e. g. a product list and a shopping basket?
h) Prevents version conflicts Does the approach prevent version conflicts between used libraries?
i) Separate development Can separate teams develop their microfrontends independently of other ones?

Evaluation

The following table evaluates some architectural candidates for microfrontends against the discussed goals (a - i).

Architectural Candidate a b c d e f g h i
I) Just using Hyperlinks x x x x x x
II) Using iframes x x x x x x x x
III) Loading different SPAs into the same page x x x x x x
IV) Plugins x x x
V) Packages (npm, etc.) x x x x
VI) Monorepo Approach x x x x x
VII) Web Components x x x x x x

If you are interested in some of those candidates, the next table provides additional thoughts on them:

Nr / Remarks
I) Just using Hyperlinks: We could save the state before navigating to another microfrontend. Something like Redux (@ngrx/store) could come in handy because it manages the state centrally.
II) Using iframes: We need something like a meta router that synchronizes the shell's URL with the URLs of the iframes.
III) Loading different SPAs into the same page: A popular framework that loads several SPAs into the browser is Single SPA. The main drawback seems to be the lack of isolation, because all applications share the same global namespace and the same global browser objects. If the latter are monkey-patched by a framework (like zone.js), this affects all the loaded SPAs.
IV) Plugins: Dynamically loading parts of a SPA can be done with Angular, but webpack -- and hence the CLI -- demands that everything is compiled together. Switching to SystemJS would allow loading parts that have been compiled separately.
V) Packages: This means providing each frontend as a package via a (private) npm registry or the monorepo approach, and consuming it in a shell application. It also means that there is one compilation step that goes through each frontend.
VI) Monorepo Approach: This approach, heavily used at Google and Facebook, is similar to using packages, but instead of distributing code via an npm registry everything is put into one source code repository. In addition, all projects in the repository share the same dependencies. Hence, there are no version conflicts, because everyone has to use the same/the latest version, and you don't need to deal with a registry when you just want to use your own libraries. A good post motivating this can be found here. To get started with this idea in the world of Angular, you should have a look at Nrwl's Nx -- a carefully thought-through library and code generator that helps (not only) with monorepos.
VII) Web Components: This is similar to III), but Web Components seem to be a good fit here because they can be used with any framework -- at least in theory. They also provide a bit of isolation when it comes to rendering and CSS due to the usage of Shadow DOM (not supported by IE 11). Currently, the Angular team is working on a very promising Labs project called Angular Elements; the idea is to compile Angular components down to Web Components. A small sketch follows after this table.
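To illustrate candidate VII, here is a minimal, framework-agnostic sketch of a widget exposed as a Custom Element via the standard customElements API; the tag name and the rendered markup are invented for this example and are unrelated to Angular Elements itself.

// Minimal Custom Element sketch (candidate VII); the tag name and markup are made up for illustration.
class ProductListElement extends HTMLElement {
  connectedCallback() {
    // Shadow DOM provides a degree of rendering/CSS isolation (not supported by IE 11).
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = '<ul><li>Sample product</li></ul>';
  }
}

// Any shell, regardless of its framework, can now render <product-list></product-list>.
customElements.define('product-list', ProductListElement);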

The result of this evaluation shows that the iframe approach is quite tempting, even though the very word causes shivering among most web devs. That's why I've decided to create a small library providing a meta router that loads different SPAs into iframes. This router also takes care of creating the iframes, so that we don't need to touch them manually.
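As a rough illustration of the mechanism (not the actual library), such a meta router might create one hidden iframe per configured client and simply toggle their visibility; the config shape and names below are invented for this sketch.

// Simplified sketch of the iframe-switching idea; not the real meta-spa-router implementation.
interface ClientConfig { path: string; app: string; }

function createIframes(outlet: HTMLElement, config: ClientConfig[]): Map<string, HTMLIFrameElement> {
  const frames = new Map<string, HTMLIFrameElement>();
  for (const c of config) {
    const frame = document.createElement('iframe');
    frame.src = c.app;
    frame.style.display = 'none';   // hidden until its route is activated
    outlet.appendChild(frame);
    frames.set(c.path, frame);
  }
  return frames;
}

function go(frames: Map<string, HTMLIFrameElement>, path: string): void {
  // Show only the requested client; a real router would also sync the shell's URL.
  frames.forEach((frame, key) => { frame.style.display = key === path ? 'block' : 'none'; });
}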

But to make it clear: there is no perfect solution, and it really depends on your current situation and on how important the individual architectural goals are for you. E.g., I've seen many teams writing successful applications leveraging libraries, and at companies like Google and Facebook there is a long tradition of using monorepos. Also, I expect that Web Components will be used more and more due to the growing framework and browser support.

It's not an "either/or thing"!

Mixing those approaches can also be a good idea. For instance, you could go with hyperlinks or iframes for the general routing, and when you have to display widgets from one microfrontend within another one, you could opt for libraries or Web Components.

MSDN Team Blog AT [MS]: Christmas greetings from Codefest.AT and the PowerShell UserGroup Austria

Hello PowerShell community!

Quite a lot has been going on with us over the last month as well.

Upcoming events

Newsletter - the "Schnipseljagd": Our weekly PowerShell newsletters were published regularly and covered the following topics:

https://www.powershell.co.at/powershell-schnipseljagd-4517/

  • Triggering a webhook
  • No passwords in code – the Azure way!
  • Try/catch/error blocks in PowerShell – error handling like the pros
  • Service accounts: changing passwords automatically
  • GPO – detecting conflicts

https://www.powershell.co.at/powershell-schnipseljagd-46-17/

  • FTP
  • Editing multiple CSV files
  • Reading PowerShell Gallery information – with PowerShell
  • Images matter to command-line enthusiasts, too
  • Arranging multiple PowerShell windows
  • What information is out there about me?

https://www.powershell.co.at/powershell-schnipseljagd-47-17/

  • News/differences in PowerShell 6.0
  • Faster with templates
  • Reading the number of items in an Office 365 folder
  • Adding custom properties
  • SharePoint recovery

https://www.powershell.co.at/powershell-schnipseljagd-48-17/

  • PowerShell & Azure
  • Colour your console
  • Winner of the PowerShell contest
  • Advent calendar
  • Learn to build tools, not to code
  • Exporting multi-valued properties to a CSV
  • Deep learning

https://www.powershell.co.at/powershell-schnipseljagd-49-17/

  • PowerShell countdown timer
  • How to post a script to the PowerShell Gallery
  • Reading a user's manager from AD
  • Doing math with PowerShell

https://www.powershell.co.at/powershell-schnipseljagd-50-17/

  • PowerShell in the cloud (Microsoft Flow)
  • Reading operating system information
  • PowerShell modules
  • Microsoft MVP
  • Where is my network vulnerable?
  • Pester and loops
  • Advent calendar

We hope there was something in it for you, too!

We wish you a merry Christmas and a happy new year 2018!

CodeFest.AT and the PowerShell UserGroup Austria - www.powershell.co.at

MSDN Team Blog AT [MS]: SQL Saturday Vienna (2018)

On Friday, 19 January 2018, everything will once again revolve around the Microsoft Data Platform. SQL Saturday Vienna (2018) enters its now fifth edition with even more sessions, even more speakers and lots of interested attendees!

SQL Saturday is a full-day event organized by the community (SQL Pass Austria) for the community. Friday (the main conference) features, among other things, a keynote by Lindsey Allen (MS Corp Redmond), exciting news from the Data Platform universe included. After that, 30 sessions are on the agenda – DBAs, developers, BI and Azure topics are all covered. More information on the schedule: www.sqlsaturday.com/679/Sessions/Schedule.aspx.

On the day before the conference you can attend one of the three pre-cons (full-day workshops) on offer. The planned topics are:

Seats are filling up slowly but steadily – registration is mandatory for both days!

The key facts:

  • Thursday, 18 January 2018: pre-cons (full-day workshops)
  • Friday, 19 January 2018: SQL Saturday Vienna 2018
  • Location: Jufa Wien, Mautner-Markhof-Gasse 50, 1110 Wien
  • Organizer: SQL Pass Austria (http://austria.sqlpass.org, @sqlsatvienna)

We are looking forward to exciting days full of Data Platform news!

SQL Pass Austria, the #SQLSatVienna orga team

http://www.sqlsaturday.com/679

Holger Schwichtenberg: When Entity Framework Core migrations cannot find the context class

A difference in the third digit of the version number can cause schema migrations to stop working.

MSDN Team Blog AT [MS]: Software Quality Days 2018

In 2018 Microsoft will once again be present at the Software Quality Days in Vienna.

Software Quality Days 2018 (SWQD 18)
17 to 18 January 2018
Austria Trend Hotel Savoyen

Conference program

The Software Quality Days are a professional conference on the topics of software quality and the cloud.

Would you like to attend the conference? We can exclusively offer you a 20% discount code or a reduction for your ticket booking.

We would be delighted if you purchased a ticket using this offer and we could welcome you at our booth or at the Tool Challenge.

 

MSDN Team Blog AT [MS]: Azure Red Shirt Dev Tour with Scott Guthrie – register now

 

In mid-January the Azure Red Shirt Dev Tour Germany 2018 kicks off, with stops in Berlin on 17 January 2018 and Munich on 18 January 2018. Microsoft's cloud guru Scott Guthrie leads the tour and shares tips and tricks for developing, deploying and managing cloud applications.

Registration for both dates is already open; the registration link, agenda and further information can be found on the Red Shirt Dev Tour website.

 

 


Golo Roden: Introduction to React, part 5: unidirectional data flow

Applications built with React use a unidirectional data flow. This means that data is always passed on and processed in one direction only. That raises a few questions, for example how to deal with state. What should you watch out for?
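As a minimal illustration of the idea (not taken from the episode itself), a parent component owns the state and passes data down via props, while changes travel back up through a callback; the component and prop names are invented.

// Data flows down via props, changes flow back up via callbacks; names are made up for illustration.
import * as React from 'react';

// Child: receives data and a callback via props only.
function Counter(props: { value: number; onIncrement: () => void }) {
  return <button onClick={props.onIncrement}>Count: {props.value}</button>;
}

// Parent: owns the state; rendering always flows top-down from here.
class App extends React.Component<{}, { value: number }> {
  state = { value: 0 };
  render() {
    return <Counter value={this.state.value} onIncrement={() => this.setState({ value: this.state.value + 1 })} />;
  }
}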

Manfred Steyer: A lightweight and solid approach towards micro frontends (micro service clients) with Angular and/or other frameworks

Even though the word iframe causes bad feelings for most web devs, it turns out that using them for building SPAs for micro services -- aka micro frontends -- is a good choice. For instance, they allow for a perfect isolation between clients and for a separate deployment. Because of the isolation they also allow using different SPA frameworks. Besides iframes, there are other approaches to use SPAs in micro service architectures -- of course, each of them has their own pros and cons. A good overview can be found here. Another great resource comparing the options available is Brecht Billiet's presentation about this topic.

In addition to this, I've written another blog post comparing several approaches by evaluating them against some selected architectural goals.

As Asim Hussain shows in this blog article, using iframes can also be a nice solution for migrating an existing AngularJS application to Angular.

For the approach described here, I've written a "meta router" that loads the different SPA clients for the micro services into iframes. It takes care of creating the iframes and of synchronizing their routes with the shell's URL. It also resizes the iframes dynamically to prevent scroll bars inside them. The library is written in a framework-agnostic way.

The router can be installed via npm:

npm install meta-spa-router --save

The source code and an example can be found in my GitHub account.

In the example I'm using VanillaJS for the shell application and Angular for the routed child apps.

This is how to set up the shell with VanillaJS:

var MetaRouter = require('meta-spa-router').MetaRouter;

var config = [
    {
        path: 'a',
        app: '/app-a/dist'
    },
    {
        path: 'b',
        app: '/app-b/dist'
    }
];

window.addEventListener('load', function() {

    var router = new MetaRouter();
    router.config(config);
    router.init();
    router.preload();

    document.getElementById('link-a')
            .addEventListener('click', function() { router.go('a') });

    document.getElementById('link-b')
            .addEventListener('click', function() { router.go('b') });

    document.getElementById('link-aa')
            .addEventListener('click', function() { router.go('a', 'a') });

    document.getElementById('link-ab')
            .addEventListener('click', function() { router.go('a', 'b') });

});

And here is the HTML for the shell:

<div>
    <a id="link-a">Route to A</a> |
    <a id="link-b">Route to B</a> |
    <a id="link-aa">Jump to A within A</a> |
    <a id="link-ab">Jump to B within A</a>
</div>

<!-- placeholder for routed apps -->
<div id="outlet"></div>

The router creates the iframes as children of the element with the id outlet and allows switching between them using the go method. As you can see in the example, it also allows jumping to a subroute within an application.

The routed applications use the RoutedApp class to establish a connection with the shell. This is necessary to sync the client app's router with the shell's one. As I'm using Angular in my example, I'm registering it as a service. Instead, one could also instantiate it directly when going with other frameworks, as sketched below.
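For a framework without dependency injection, direct instantiation might look roughly like this; it is a sketch that only uses the RoutedApp calls shown in the Angular example below, and the way you hook it into your own client-side router will differ.

// Sketch: using RoutedApp without Angular DI (only API calls shown elsewhere in this article are used).
import { RoutedApp } from 'meta-spa-router';

const routedApp = new RoutedApp();
routedApp.config({ appId: 'a' });   // appId matches the path configured for this client in the shell
routedApp.init();

// React to route changes requested by the shell (and report your own via sendRoute, see below):
routedApp.registerForRouteChange(url => {
  // navigate your client-side router to `url` here
});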

To register this service -- which comes without Angular metadata for AOT because it is framework-agnostic -- I'm creating a token in a new file app.tokens.ts:

import { RoutedApp } from 'meta-spa-router';
import { InjectionToken } from '@angular/core';

export const ROUTED_APP = new InjectionToken<RoutedApp>('ROUTED_APP');

Then I'm using it to create a service provider for the RoutedApp class:

import { RoutedApp } from 'meta-spa-router';
[...]

@NgModule({
  [...],  
  providers: [{ provide: ROUTED_APP, useFactory: () => new RoutedApp() }],
  bootstrap: [AppComponent]
})
export class AppModule { }

In the AppComponent I'm getting hold of a RoutedApp instance by using dependency injection:

// app.component.ts in routed app

import { Router, NavigationEnd } from '@angular/router';
import { Component, Inject } from '@angular/core';
import { filter } from 'rxjs/operators';
import { RoutedApp } from 'meta-spa-router';
import { ROUTED_APP } from './app.tokens';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'app';

  constructor(
    private router: Router, 
    @Inject(ROUTED_APP) private routedApp: RoutedApp) {
    this.initRoutedApp();
  }
  
  initRoutedApp() {
    
    this.routedApp.config({ appId: 'a' });
    this.routedApp.init();

    this.router.events.pipe(filter(e => e instanceof NavigationEnd)).subscribe((e: NavigationEnd) => {
      this.routedApp.sendRoute(e.url);
    });

    this.routedApp.registerForRouteChange(url => this.router.navigateByUrl(url));
  }

}

I'm assigning an appId which, by convention, is the same as the child app's path in the shell. In addition to that, I'm synchronizing the child app's router with the meta router.

Jürgen Gutsch: Trying BitBucket Pipelines with ASP.NET Core

BitBucket provides a continuous integration tool called Pipelines. It is based on Docker containers running on a Linux-based Docker host. In this post I want to try BitBucket Pipelines with an ASP.NET Core application.

In the past I preferred BitBucket over GitHub, because I used Mercurial more than Git. But that changed five years ago. Since then I have used GitHub for almost every new personal project that doesn't need to be private. At YooApps, however, we use the entire Atlassian ALM stack, including Jira, Confluence and BitBucket. (We don't use Bamboo yet, because we also use Azure a lot and we didn't get Bamboo running on Azure.) BitBucket is a good choice if you use the other Atlassian tools anyway, because the integration with Jira and Confluence is awesome.

For a while now, Atlassian has provided Pipelines as a simple continuous integration tool directly on BitBucket. You don't need to set up Bamboo to build and test just a simple application. At YooApps we actually use Pipelines in various projects which are not using .NET. For .NET projects we are currently using CAKE or FAKE on Jenkins, hosted on an Azure VM.

Pipelines can also be used to build and test branches and pull requests, which is awesome. So why shouldn't we use Pipelines for .NET Core based projects? BitBucket actually provides a prepared Pipelines configuration for .NET Core related projects, using the microsoft/dotnet Docker image. So let's try Pipelines.

The project to build

As usual, I just set up a simple ASP.NET Core project and add an xUnit test project to it. In this case I use the same project as shown in the Unit testing ASP.NET Core post. I imported that project from GitHub to BitBucket. If you also want to try Pipelines, feel free to do it the same way or just download my solution and commit it to your repository on BitBucket. Once the sources are in the repository, you can start setting up Pipelines.

Setup Pipelines

Setting up Pipelines is actually pretty easy. Your repository on BitBucket.com has a menu item called Pipelines. After pressing it you'll see the setup page, where you can select a technology-specific configuration. .NET Core is not the first choice for BitBucket, as the .NET Core configuration is placed under "More", but it is available anyway, which is really nice. After selecting the configuration type, you'll see the configuration in an editor inside the browser. It is a YAML configuration, called bitbucket-pipelines.yml, which is pretty easy to read. This configuration is prepared to use the microsoft/dotnet:onbuild Docker image and already contains the most common .NET CLI commands that will be used with ASP.NET Core projects. You just need to configure the project names for the build and test commands.

The completed configuration for my current project looks like this:

# This is a sample build configuration for .NET Core.
# Check our guides at https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: microsoft/dotnet:onbuild

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=WebApiDemo
          - export TEST_NAME=WebApiDemo.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME

If you don't have tests yet, comment the last line out by adding a #-sign in front of that line.

After pressing "Commit file", this configuration file gets stored in the root of your repository, which makes it available for all the developers of that project.

Let's try it

After that config was saved, the build started immediately... and failed!

Why? Because that Docker image was pretty much outdated. It contains an older version with an SDK that still uses project.json for .NET Core projects.

Changing the name of the Docker image from microsoft/dotnet:onbuild to microsoft/dotnet:sdk helps. You can change the bitbucket-pipelines.yml in your local Git workspace or by using the editor on BitBucket directly. After committing the changes, the build again starts immediately and is now green.

Even the tests pass. As expected, I get pretty detailed output about every step configured in the "script" node of the bitbucket-pipelines.yml.

You don't need to know how to configure Docker to use Pipelines. This is awesome.

Let's try the PR build

To create a PR, I need to create a feature branch first. I created it locally using the name "feature/build-test" and pushed that branch to the origin. You now can see that this branch got built by Pipelines:

Now let's create the PR using the BitBucket web UI. It automatically assigns my latest feature branch and the main branch, which is develop in my case:

Here we see that both branches are successfully built and tested previously. After pressing save we see the build state in the PRs overview:

This is actually not a specific build for that PR, but the build of the feature branch. So in this case, it doesn't really build the PR. (Maybe it does if the PR comes from a fork and the branch wasn't tested previously; I didn't test that yet.)

After merging that PR back to the develop (in that case), we will see that this merge commit was successfully built too:

We have four builds done here: The failing one, the one 11 hours ago and two builds 52 minutes ago in two different branches.

The Continuous Deployment pipeline

With this, it would be safe to trigger a direct deployment on every successful build of the main branches. As you may know, it is super simple to deploy a web application to an Azure Web App by connecting it directly to any Git repository. Usually this is pretty dangerous if you don't build and test before you deploy the code. But in this case, we are sure the PRs and the branches build and test successfully.

We just need to ensure that the deployment is only triggered if the build completed successfully. Does this work with Pipelines? I'm pretty curious. Let's try it.

To do that, I created a new Web App on Azure and connected it to the Git repository on BitBucket. I'll now add a failing test and commit it to the Git repository. What should happen now is that the build starts before the code gets pushed to Azure, and the failing build should prevent the push to Azure.

I'm skeptical whether this is working or not. We will see.

The Azure Web App is created and running on http://build-with-bitbucket-pipelines.azurewebsites.net/. The deployment is configured to listen on the develop branch. That means, every time we push changes to that branch, the deployment to Azure will start.

I'll now create a new feature branch called "feature/failing-test" and push it to BitBucket. To keep the test simple, I don't follow the same steps as described in the previous section about PRs. I merge the feature branch directly into develop without a PR and push all the changes to BitBucket. Yes, I'm a rebel... ;-)

The build starts immediately and fails as expected:

But what about the deployment? Let's have a look at the deployments on Azure. We should only see the initial successful deployment. Unfortunately there is another successful deployment with the same commit message as the failing build on BitBucket:

This is bad. We now have an unstable application running on Azure. Unfortunately, there is no option on BitBucket to trigger the webhook only on a successful build. We are able to trigger the hook on a build state change, but it is not possible to define on which state we want to trigger the deployment.

Too bad: this doesn't seem to be the right way to configure a continuous deployment pipeline as easily as the continuous integration process. Sure, there are many other, but more complex, ways to do that.

Update 12/8/2017

There is, after all, a simple option to set up a deployment after a successful build. This can be done by triggering the Azure webhook inside Pipelines. A sample bash script to do that can be found here: https://bitbucket.org/mojall/bitbucket-pipelines-deploy-to-azure/ Without the comments it looks like this:

curl -X POST "https://\$$SITE_NAME:$FTP_PASSWORD@$SITE_NAME.scm.azurewebsites.net/deploy" \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --header "X-SITE-DEPLOYMENT-ID: $SITE_NAME" \
  --header "Transfer-encoding: chunked" \
  --data "{\"format\":\"basic\", \"url\":\"https://$BITBUCKET_USERNAME:$BITBUCKET_PASSWORD@bitbucket.org/$BITBUCKET_USERNAME/$REPOSITORY_NAME.git\"}"

echo Finished uploading files to site $SITE_NAME.

I now need to set the environment variables in the Pipelines configuration:

Be sure to check the "Secured" checkbox for every password variable, to hide the password in this UI and in the log output of Pipelines.

And we need to add two script commands to the bitbucket-pipelines.yml:

- chmod +x ./deploy-to-azure.bash
- ./deploy-to-azure.bash

The last step is to remove the Azure webhook from the webhook configuration in BitBucket and to remove the failing test. After pushing the changes to BitBucket, the build and the first successful deployment start immediately.

I now add the failing test again to test the failing deployment, and it works as expected. The test fails and the subsequent commands don't get executed. The webhook is never triggered and the unstable app is not deployed.

Now there is a failing build on Pipelines:

(See the commit messages)

And that failing commit is not deployed to azure:

The Continuous Deployment is successfully done.

Conclusion

Isn't it super easy to set up continuous integration? ~~Unfortunately we are not able to complete the deployment using this.~~ Anyway, we now have a build on any branch and on any pull request. That helps a lot.

Pros:

  • (+++) super easy to setup
  • (++) almost fully integrated
  • (+++) flexibility based on Docker

Cons:

  • (--) runs only on Linux. I would love to see windows containers working
  • (---) not fully integrated into web hooks. "trigger on successful build state" is missing for the hooks

I would like to have something like this on GitHub too. The usage is quite similar to AppVeyor, but much simpler to configure, less complex, and it just works. The reason is Docker, I think. For sure, AppVeyor can do a lot more and can't really be compared to Pipelines. Anyway, I will compare it to AppVeyor and do the same with it in one of the next posts.

Currently there is a big downside to BitBucket Pipelines: it only works with Docker images running on Linux. It is not yet possible to use it for full .NET Framework projects. This is the reason why we have never used it at YooApps for .NET projects. I'm sure we need to think about doing more projects with .NET Core ;-)

David Tielke: DDC 2017 - Content of my keynote, DevSession and workshops


As every year, the Dotnet Developer Conference once again took place at the Pullman Hotel in Cologne. For the first time it ran over four days, from 27 November to 30 November 2017, offering two workshop days in addition to the DevSessions and the actual conference day.

At this point I would like to make the materials of my individual contributions available to all attendees.

Keynote "C# - vNow & vNext"

The conference opened on Wednesday with my keynote on the current and future state of Microsoft's programming language C#. In 55 minutes I first showed the audience the history and evolution of the language, the recently added features (C# 6.0 & C# 7.0), the recent updates 7.1 and 7.2, as well as the upcoming language version 8.0. I then argued why, in my opinion, C# is one of the best and safest platforms for the future.

Slides

DevSession "More software quality through tests"

One day before the actual conference, attendees could choose between four parallel DevSessions, each lasting four hours. My contribution on "More software quality through tests" presented the topic of testing in a somewhat different way: after a fundamental look at the topic of "software quality", I showed the attendees what can actually be tested in software development. Besides testing architectures, teams, functionality and code, particular attention was paid to the topic of "processes".

Notes

Workshop "Scrum with Team Foundation Server 2018"

On Monday the conference opened with the first workshop day, where I presented the agile project management framework "Scrum" in combination with Team Foundation Server 2018. The focus was first on processes in general and later on the practical implementation of such a process using Scrum. Besides the basics, the introduction of Scrum was illustrated with numerous practical examples, pointing out risks and possible problems. Afterwards, using Team Foundation Server as an example, I showed what a tool-supported implementation of agile planning can look like.

Slides & notes

Workshop "Composite Component Architecture 2.0"

On the last day of the conference I presented the newest version of my "Composite Component Architecture" to the attendees. After a brief overview of the basics, the new aspects in the areas of logging, configuration, EventBroker, bootstrapping and the available tooling were covered. Finally, the attendees got an exclusive look at an early alpha of "CoCo.Core", the long-planned framework for a simpler and more flexible implementation of the CoCo architecture.

Since the framework ran anything but stably during the workshop, I will announce the release separately on this blog as soon as a stable first alpha is available.

Notes

I would like to thank all attendees once again for their feedback, whether in person, via Twitter, email or any other channel. The conference was once again tremendous fun! A huge thank-you also goes to the team at Developer Media, who once again put together a terrific conference.

See you at DDC 2018 :)

Manfred Steyer: Automatically Updating Angular Modules with Schematics and the CLI


Table of Contents

This blog post is part of an article series.


Thanks to Hans Larsen from the Angular CLI Team for providing valuable feedback

In my last blog article, I've shown how to leverage Schematics, the Angular CLI's code generator, to scaffold custom components. This article goes one step further and shows how to register generated building blocks like Components, Directives, Pipes, or Services with an existing NgModule. For this I'll extend the example from the last article that generates a SideMenuComponent. The source code shown here can also be found in my GitHub repository.

Schematics is currently experimental and can change in the future.

Goal

To register the generated SideMenuComponent we need to perform several tasks. For instance, we have to look up the file with the respective NgModule. After this, we have to insert several lines into this file:

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

// Add this line to reference component
import { SideMenuComponent } from './side-menu/side-menu.component';

@NgModule({
  imports: [
    CommonModule
  ],
  
  // Add this Line
  declarations: [SideMenuComponent],

  // Add this Line if we want to export the component too
  exports: [SideMenuComponent]
})
export class CoreModule { }

As you've seen in the last listing, we have to create an import statement at the beginning of the file. And then we have to add the imported component to the declarations array and - if the caller requests it - to the exports array too. If those arrays don't exist, we have to create them too.

The good news is that the Angular CLI already contains code for such tasks. Hence, we don't have to build everything from scratch. The next section shows some of those existing utility functions.

Utility Functions provided by the Angular CLI

The Schematics Collection @schematics/angular used by the Angular CLI for generating stuff like components or services turns out to be a real gold mine for modifying existing NgModules. For instance, you find some function to look up modules within @schematics/angular/utility/find-module. The following table shows two of them which I will use in the course of this article:

Function Description
findModuleFromOptions Looks up the current module file. For this, it starts in a given folder and looks for a file with the suffix .module.ts while the suffix .routing.module.ts is not accepted. If nothing has been found in the current folder, its parent folders are searched.
buildRelativePath Builds a relative path that points from one file to another one. This function comes in handy for generating the import statement pointing from the module file to the file with the component to register.
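For instance, a call like the following (the file paths and the import location are made up for illustration, assuming the utility functions were copied into the project as described below) should yield the relative path used later in the generated import statement:

import { buildRelativePath } from './schematics-angular-utils/find-module';

// From the module file to the component file;
// expected to return something like './side-menu/side-menu.component'.
const importPath = buildRelativePath(
  '/src/app/core/core.module.ts',
  '/src/app/core/side-menu/side-menu.component'
);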

Another file containing useful utility functions is @schematics/angular/utility/ast-utils. It helps with modifying existing TypeScript files by leveraging services provided by the TypeScript compiler. The next table shows some of its functions used here:

Function Description
addDeclarationToModule Adds a component, directive or pipe to the declarations array of an NgModule. If necessary, this array is created
addExportToModule Adds an export to the NgModule

There are also other methods that add entries to the other sections of an NgModule (addImportToModule, addProviderToModule, addBootstrapToModule).

Please note that those files are currently not part of the package's public API. Therefore, they can change in the future. To be on the safe side, Hans Larsen from the Angular CLI team suggested forking it. My fork of the DevKit repository containing those functions can be found here.

After forking, I've copied the contents of the folder packages\schematics\angular\utility containing the functions in question to the folder schematics-angular-utils in my project and adjusted some import statements. For the time being, you can also copy my folder with these adjustments for your own projects. I think that sooner or later the API will stabilize and be published as a public one, so that we won't need this workaround anymore.

Creating a Rule for adding a declaration to an NgModule

After we've seen that there are handy utility functions, let's use them to build a Rule for our endeavor. For this, we use a folder utils with the following two files:

Utils for custom Rule

The file add-to-module-context.ts gets a context class holding data for the planned modifications:

import * as ts from 'typescript';

export class AddToModuleContext {
    // source of the module file
    source: ts.SourceFile;

    // the relative path that points from  
    // the module file to the component file
    relativePath: string;

    // name of the component class
    classifiedName: string;
}

In the other file, ng-module-utils.ts, a factory function for the needed rule is created:

import { Rule, Tree, SchematicsException } from '@angular-devkit/schematics';
import { AddToModuleContext } from './add-to-module-context';
import * as ts from 'typescript';
import { dasherize, classify } from '@angular-devkit/core';

import { ModuleOptions, buildRelativePath } from '../schematics-angular-utils/find-module';
import { addDeclarationToModule, addExportToModule } from '../schematics-angular-utils/ast-utils';
import { InsertChange } from '../schematics-angular-utils/change';


const stringUtils = { dasherize, classify };

export function addDeclarationToNgModule(options: ModuleOptions, exports: boolean): Rule {
  return (host: Tree) => {
   [...]
  };
}

This function takes a ModuleOptions instance that describes the NgModule in question. It can be deduced from the options object containing the command-line arguments the caller passes to the CLI.

It also takes a flag exports that indicates whether the declared component should be exported too. The returned Rule is just a function that gets a Tree object representing the part of the file system it modifies. For implementing this Rule I've looked up the implementation of similar rules within the CLI's Schematics in @schematics/angular and "borrowed" the patterns found there. Especially the Rule triggered by ng generate component was very helpful for this.

Before we discuss how this function is implemented, let's have a look at some helper functions I've put in the same file. The first one collects the context information we've talked about before:

function createAddToModuleContext(host: Tree, options: ModuleOptions): AddToModuleContext {

  const result = new AddToModuleContext();

  if (!options.module) {
    throw new SchematicsException(`Module not found.`);
  }

  // Reading the module file
  const text = host.read(options.module);

  if (text === null) {
    throw new SchematicsException(`File ${options.module} does not exist.`);
  }

  const sourceText = text.toString('utf-8');
  result.source = ts.createSourceFile(options.module, sourceText, ts.ScriptTarget.Latest, true);

  const componentPath = `/${options.sourceDir}/${options.path}/`
		      + stringUtils.dasherize(options.name) + '/'
		      + stringUtils.dasherize(options.name)
		      + '.component';

  result.relativePath = buildRelativePath(options.module, componentPath);

  result.classifiedName = stringUtils.classify(`${options.name}Component`);

  return result;

}

The second helper function is addDeclaration. It delegates to addDeclarationToModule located within the package @schematics/angular to add the component to the module's declarations array:

function addDeclaration(host: Tree, options: ModuleOptions) {

  const context = createAddToModuleContext(host, options);
  const modulePath = options.module || '';

  const declarationChanges = addDeclarationToModule(
			      context.source,
			      modulePath,
			      context.classifiedName,
			      context.relativePath);

  const declarationRecorder = host.beginUpdate(modulePath);
  for (const change of declarationChanges) {
    if (change instanceof InsertChange) {
      declarationRecorder.insertLeft(change.pos, change.toAdd);
    }
  }
  host.commitUpdate(declarationRecorder);
};

The addDeclarationToModule function takes the retrieved context information and the modulePath from the passed ModuleOptions. Instead of directly updating the module file it returns an array with necessary modifications. These are iterated and applied to the module file within a transaction, started with beginUpdate and completed with commitUpdate.

The third helper function is addExport. It adds the component to the module's exports array and works exactly like addDeclaration:

function addExport(host: Tree, options: ModuleOptions) {
  const context = createAddToModuleContext(host, options);
  const modulePath = options.module || '';

  const exportChanges = addExportToModule(
				context.source,
				modulePath,
				context.classifiedName,
				context.relativePath);

  const exportRecorder = host.beginUpdate(modulePath);

  for (const change of exportChanges) {
    if (change instanceof InsertChange) {
      exportRecorder.insertLeft(change.pos, change.toAdd);
    }
  }
  host.commitUpdate(exportRecorder);
};

Now that we've looked at these helper functions, let's finish the implementation of our Rule:

export function addDeclarationToNgModule(options: ModuleOptions, exports: boolean): Rule {
  return (host: Tree) => {
    addDeclaration(host, options);
    if (exports) {
      addExport(host, options);
    }
    return host;
  };
}

As you've seen, it just delegates to addDeclaration and addExport. After this, it returns the modified file tree represented by the variable host.

Extending the used Options Class and its JSON schema

Before we put our new Rule in place, we have to extend the class MenuOptions which describes the passed (command line) arguments. As usual in Schematics, it's defined in the file schema.ts. For our purpose, it gets two new properties:

export interface MenuOptions {
    name: string;
    appRoot: string;
    path: string;
    sourceDir: string;
    menuService: boolean;

	// New Properties:
    module: string;
    export: boolean;
}

The property module holds the path for the module file to modify and export defines whether the generated component should be exported too.

After this, we have to declare these additional properties in the file schema.json:

{
    "$schema": "http://json-schema.org/schema",
    "id": "SchemanticsForMenu",
    "title": "Menu Schema",
    "type": "object",
    "properties": {
      [...]
      "module":  {
        "type": "string",
        "description": "The declaring module.",
        "alias": "m"
      },
      "export": {
        "type": "boolean",
        "default": false,
        "description": "Export component from module?"
      }
    }
  }
  

As mentioned in the last blog article, we also could generate the file schema.ts with the information provided by schema.json.

Calling the Rule

Now that we've created our rule, let's put it in place. For this, we have to call it within the Rule function in index.ts:

export default function (options: MenuOptions): Rule {

    return (host: Tree, context: SchematicContext) => {

      options.path = options.path ? normalize(options.path) : options.path;

	  // Infer module path, if not passed:
      options.module = options.module || findModuleFromOptions(host, options) || '';

      [...]

      const rule = chain([
        branchAndMerge(chain([

          [...]

		  // Call new rule
          addDeclarationToNgModule(options, options.export)

        ])),
      ]);

      return rule(host, context);

    }
}

As the passed MenuOptions object is structurally compatible with the needed ModuleOptions, we can directly pass it to addDeclarationToNgModule. This is the way the CLI currently deals with option objects.

In addition to that, we infer the module path at the beginning using findModuleFromOptions.

Testing the extended Schematic

To try the modified Schematic out, compile it and copy everything to the node_modules folder of an example application. As in the former blog article, I've decided to copy it to node_modules/nav. Please make sure to exclude the collection's node_modules folder, so that there is no folder node_modules/nav/node_modules.

After this, switch to the example application's root, generate a module core and navigate to its folder:

ng g module core
cd src\app\core

Now call the custom Schematic:

Calling the Schematic, which generates the component and registers it with the module

This not only generates the SideMenuComponent but also registers it with the CoreModule:

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { SideMenuComponent } from './side-menu/side-menu.component';

@NgModule({
  imports: [
    CommonModule
  ],
  declarations: [SideMenuComponent],
  exports: [SideMenuComponent]
})
export class CoreModule { }

Code-Inside Blog: Signing with SignTool.exe - don't forget the timestamp!

If you are currently not touching signtool.exe at all or have nothing to do with "signing", you can just skip this blog post, because it is more or less a simple "Today I learned I made a mistake" post.

Signing?

We use authenticode code signing for our software just to prove that the installer is from us and “safe to use”, otherwise you might see a big warning from Windows that the application is from an “unknown publisher”:


To avoid this, you need a code signing certificate and need to sign your program (e.g. the installer and the .exe)

The problem…

We have been doing this code signing since the first version of our application. Last year we needed to buy a new certificate because the first code signing certificate was about to expire. Sadly, after the first certificate had expired, we got a call from a customer who had recently tried to install our software, and that installer was signed with the "old" certificate. The result was the big "Warning" screen from above.

I checked the file, compared it to other installers (with expired certificates) and noticed that our signature didn't have a timestamp:


The solution

I stumbled upon this great blogpost about authenticode code signing and the timestamp was indeed important:

When signing your code, you have the opportunity to timestamp your code; you should definitely do this. Time-stamping adds a cryptographically-verifiable timestamp to your signature, proving when the code was signed. If you do not timestamp your code, the signature will be treated as invalid upon the expiration of your digital certificate. Since it would probably be cumbersome to re-sign every package you’ve shipped when your certificate expires, you should take advantage of time-stamping. A signed, time-stamped package remains valid indefinitely, so long as the timestamp marks the package as having been signed during the validity period of the certificate.

Time-stamping itself is pretty easy, and only one parameter was missing all the time… now we invoke SignTool.exe like this and we get a digital signature with a timestamp:

signtool.exe sign /tr http://timestamp.digicert.com /sm /n "Subject..." /d "Description..." file.msi

Remarks:

  • Our code signing cert is from Digicert and they provide the timestamp URL.
  • SignTool.exe is part of the Windows SDK and currently is in the ClickOnce folder (e.g. C:\Program Files (x86)\Microsoft SDKs\ClickOnce\SignTool)

Hope this helps.

Norbert Eder: Running a .NET Core application in a Docker container

To host a .NET Core application in Docker you naturally need an installed .NET Core, an app (in the case of this post, an existing Web API based on .NET Core) and, of course, Docker.

For tests on the desktop it is recommended to install the Docker Community Edition.

In the next step a Dockerfile has to be added to the .NET Core project. This is a file with instructions describing which base image to use, which files to put into the newly created image, and some more information.

Here is an example of a Dockerfile:

# Stage 1
FROM microsoft/aspnetcore-build AS builder
WORKDIR /source

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . .
RUN dotnet publish --output /app/ --configuration Release

# Build runtime image
FROM microsoft/aspnetcore
WORKDIR /app
COPY --from=builder /app .
EXPOSE 5000
ENTRYPOINT ["dotnet", "MeinDotnetCoreProjekt.dll"]

Relevant images from Microsoft are available on Docker Hub:

The images come in different variants with different tooling; keep that in mind.

Adjusting Program.cs

By default, ASP.NET Core runs on http://localhost:5000. That is fine for tests on your own machine. In a Docker container, however, this would mean that the app does not listen on the public port and therefore does not accept requests. So make sure it listens on all interfaces:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls("http://*:5000")
        .Build();

That's all there is to it.

Creating the Docker container

First we have to build our own image (note that Docker requires image names to be lowercase):

docker build -t unseredemoapp .

The command should be run from the directory that also contains the Dockerfile. The Dockerfile is read in, validated and, on success, the new image is created.

The image has to appear with our defined tag unseredemoapp in the list of images, which can be retrieved with docker images.

From this image a new container can now be created. A container has a unique name; if none is given, Docker generates one for us.

docker run -d -p 5000:5000 unseredemoapp

Note that with the -d option the container runs in detached mode, i.e. we don't get any messages or output. With -p 5000:5000 the local port 5000 is mapped to port 5000 of the container. That is why the service in the container can be reached from the host computer via http://localhost:5000.

Accessing an existing endpoint should return the desired result.

Helpful commands

List the running containers:

docker container ls

Stop a container:

docker container stop [name]

Remove a container:

docker container rm [name]

Attach to a container (and consume its output):

docker attach [name]

Print the most recent log messages:

docker logs [name]

Have fun using Docker and .NET Core.

The post .NET Core Anwendung in einem Docker Container laufen lassen appeared first on Norbert Eder.

Jürgen Gutsch: NuGet, Cache and some more problems

Recently I had some problems using NuGet; two of them were huge and took me a while to solve. But all of them are easy to fix if you know how to do it.

NuGet Cache

The first and more critical problem was related to the NuGet cache in .NET Core projects. It seems the underlying problem was a broken package in the cache; I didn't find out the real reason. Anyway, every time I tried to restore or add packages, I got an error message telling me about an error at the first character of project.assets.json. Yes, there is still a kind of project.json even in .NET Core 2.0 projects. This file is in the "obj" folder of a .NET Core project and stores all information about the NuGet packages.

This error looked like a typical encoding error, which often happens if you try to read an ANSI encoded file as a UTF-8 encoded file, or vice versa. But the project.assets.json was absolutely fine. It seemed to be a problem with one of the packages: it worked with the predefined .NET Core or ASP.NET Core packages, but not with any others. I was no longer able to work on any projects targeting .NET Core, but it worked with projects targeting the full .NET Framework.

I couldn't solve the real problem, and I didn't really want to go through all of the packages to find the broken one. The .NET CLI provides a nice tool to manage the NuGet cache. It provides a more detailed CLI for NuGet.

dotnet nuget --help

This shows you three different commands to work with NuGet. delete and push work against the remote server, either to delete a package from the server or to push a new package to it using the NuGet API. The third one is a command to work with local resources:

dotnet nuget locals --help

This command shows you the help about the locals command. Try the next one to get a list of local NuGet resources:

dotnet nuget locals all --list

You can now use the clear option to clear all caches:

dotnet nuget locals all --clear

Or a specific one by naming it:

dotnet nuget locals http-cache --clear

This is much easier than searching for all the different cache locations and deleting them manually.

This solved my problem. The broken package was gone from all the caches and I was able to load the new, clean and healthy ones from NuGet.

Version numbers in packages folders

The second huge problem is not related to .NET Core, but to classic .NET Framework projects using NuGet. If you also use Git-Flow to manage your source code, you'll have at least two different main branches: Master and Develop. Both branches contain different versions: Master contains the current version of the code and Develop contains the next version. It is also possible that both versions use different versions of dependent NuGet packages. And here is the problem:

Master uses e.g. AwesomePackage 1.2.0 and Develop uses AwesomePackage 1.3.0-beta-build54321.

Both versions of the code reference the AwesomeLib.dll, but in different locations:

  • Master: /packages/awesomepackage 1.2.0/lib/net4.6/AwesomeLib.dll
  • Develop: /packages/awesomepackage 1.3.0-beta-build54321/lib/net4.6/AwesomeLib.dll

If you now release Develop to Master, you'll almost certainly forget to go through all the projects and change the reference paths, won't you? The build of Master will fail because this specific beta folder won't exist on the server, or even worse: the build will not fail because the folder of the old package still exists on the build server (because you didn't clear the build workspace), which will result in runtime errors. This problem is even more likely if you provide your own packages using your own NuGet server.

I solved this by using a different NuGet client: Paket. It doesn't store the binaries in version-specific folders, so the reference path stays the same as long as the package name doesn't change. Using Paket, I don't need to take care of reference paths, and every branch loads the dependencies from the same location.

Paket officially supports the NuGet APIs and is mentioned on NuGet org, in the package details.

To learn more about Paket visit the official documentation: https://fsprojects.github.io/Paket/

Conclusion

Being an agile developer doesn't only mean following an iterative process; it also means using the best tools you can get. But you don't always need to buy the best tools -- many of them are open source and free to use. Just help them by reporting some bugs, spreading the word, filing issues or contributing improvements. Paket is one of those tools: lightweight, fast, easy to use, and it solves many problems. It is also well supported in CAKE, the build DSL I use to build, test and deploy applications.

Norbert Eder: Bootstrap 4: a replacement for the Glyphicons

Version 4 of Bootstrap cleaned things up considerably. Among other things, the Glyphicons are no longer shipped with it; from now on they have to be included manually. Other icon fonts are a good alternative as well.

Font Awesome is widely used, and I keep coming back to it myself. A very cool thing about Font Awesome is:

Font Awesome is fully open source and is GPL friendly. You can use it for commercial projects, open source projects, or really just about whatever you want.

Attribution is not required either – but it is only fair.

I now use Bootstrap together with Angular. To be able to use Font Awesome, these two dependencies have to be added to package.json:

"font-awesome": "^4.7.0",
"angular-font-awesome": "^3.0.3"

After installing the packages via npm install, the Angular module has to be registered. The Angular CLI is required for the following variant.

import { AngularFontAwesomeModule } from 'angular-font-awesome';

@NgModule({
  ...
  imports: [
    ...
    AngularFontAwesomeModule,
    ...
  ]
})

Now the style sheet also has to be registered in the angular-cli.json file:

"styles": [
  ...
  "../node_modules/font-awesome/css/font-awesome.min.css"
]

Now just restart ng serve and Font Awesome is ready to use.

If you want to migrate Bootstrap to version 4, you will find detailed information here. For detailed information on angular-font-awesome I recommend a look at the README.

Happy Coding!

The post Bootstrap 4: Ersatz für die Glyphicons appeared first on Norbert Eder.

Golo Roden: Introduction to React, part 4: the component lifecycle

Components in React go through a complex lifecycle that makes it possible to control a component's behavior in detail at various points in time. How does that work, and what should you watch out for?
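As a small illustration (not taken from the episode itself), a class component can hook into selected lifecycle methods; the component and its behavior are invented for this sketch.

import * as React from 'react';

// A class component hooking into a few lifecycle methods.
class Clock extends React.Component<{}, { now: Date }> {
  private timer?: number;
  state = { now: new Date() };

  componentDidMount() {                      // called after the first render
    this.timer = window.setInterval(() => this.setState({ now: new Date() }), 1000);
  }

  componentWillUnmount() {                   // called before the component is removed from the DOM
    window.clearInterval(this.timer);
  }

  render() {
    return <p>{this.state.now.toLocaleTimeString()}</p>;
  }
}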

MSDN Team Blog AT [MS]: News from the Austrian PowerShell community

Even though the Experts Live Country Event with more than 100 visitors took up most of our time, quite a bit was also going on on the PowerShell blog:

Your team of the PowerShell UserGroup Austria

MSDN Team Blog AT [MS]: Docker Roadshow in Frankfurt & Munich

Invitation to the Docker MTA Roadshow: why you should modernize traditional applications.

IT organizations still spend 80% of their budget on keeping their existing applications running and only 20% of the budget on new innovation. Together, Docker and HPE show how their "Modernizing Traditional Applications" (MTA) program can help eliminate this 80% and thus enable you to focus more on innovation. This turnkey program guarantees savings of more than 50% of total IT costs while providing modern security and portability characteristics and a path to hybrid IT & DevOps.

What can you expect?

  • Informative sessions, demos, practical tips and useful tools - from the most trusted names in the business.
  • Gain insight into the MTA program and how to get started.
  • Dive into the ROI analysis and how to build a business case.

Who should attend?

IT operations and IT infrastructure managers, IT directors, CIOs and CTOs who are interested in understanding how Docker can help transform legacy applications and modernize existing ones.

Agenda

Morning

  • 08:00 - 09:00 Breakfast & Networking
  • 09:00 - 09:05 Welcome & Opening
  • 09:05 - 09:35 What is Docker & Why MTA
  • 09:35 - 09:50 Q&A
  • 09:50 - 10:00 Break
  • 10:00 - 12:00 Hands-On Demo - Specific MTA Use Case
  • 12:00 - 13:00 Lunch & Networking

Afternoon

  • 13:00 - 13:30 Partner presentation
  • 13:30 - 13:45 Partner presentation Q&A
  • 13:45 - 14:15 How to measure and track progress during MTA
  • 14:15 - 14:30 Break
  • 14:30 - 16:00 Hands-On Demo - Expand & Growth
  • 16:00 - 16:15 Day review
  • 16:15 - 17:00 Q&A & Mingle

Docker Roadshow Frankfurt

Register here »

Date: 28 November 2017, 08:00 - 17:00
Venue: Radisson Blu Hotel, Franklinstraße 65, 60486 Frankfurt am Main

 

Docker Roadshow Munich

Register here »

Date: 30 November 2017, 08:00 - 17:00
Venue: Le Méridien, Bayerstraße 41, 80335 München

Stefan Henneken: IEC 61131-3: Unit-Tests

Unit tests are an indispensable tool for every programmer to ensure that his software works correctly. Program errors cost time and money, so an automated solution is needed to track down these errors - ideally before the software goes into production. Unit tests should be used wherever software is developed professionally. This article is meant to provide a quick introduction and an understanding of the benefits of unit tests.

Motivation

Separate test programs are often written to test function blocks. In such a test program, an instance of the desired function block is created and called. The output variables are observed and manually checked for correctness. If they do not match the expected values, the function block is adjusted until it shows the desired behavior.

However, testing software once is not enough. Changes or extensions to a program repeatedly cause functions or function blocks that were previously tested and worked flawlessly to suddenly stop working correctly. It also happens that fixing a bug affects other parts of the program and can thus lead to malfunctions elsewhere in the code. Tests that were already performed and completed therefore have to be repeated manually.

One possible approach to improve this way of working is to automate the tests. To do this, a test program is developed that calls the functionality of the program under test and checks the return values. Once written, a test program offers a number of advantages:

– The tests are automated and can therefore be repeated at any time under the same conditions (timings, ...).

– Tests that have been written once remain available to other members of the team.

Unit tests

A unit test checks a very small and self-contained part (unit) of a piece of software. In IEC 61131-3 this is a single function block or a function. In each test, the unit under test (function block, method or function) is called with test data (parameters) and its reaction to this test data is checked. If the delivered result matches the expected result, the test is considered passed. A test generally consists of a whole series of test cases that check not just one expected/actual pair, but several.

Which test scenarios the developer implements is up to him. It makes sense, however, to test with values that typically occur when the unit is called in practice. Looking at boundary values (extremely large or small values) or special values (null pointers, empty strings) is also useful. If all these test scenarios deliver the expected, correct values, the developer can assume that his implementation is correct.

A positive side effect is that the developer has fewer headaches when making complex changes to his code. After all, he can verify the system at any time after such changes. If no errors occur after such a change, it was most likely successful.

However, the danger of a poor test implementation must not be ignored. If the tests are insufficient or even wrong but still deliver a positive result, this deceptive sense of security will sooner or later lead to big problems.

The TcUnit unit test framework

Unit test frameworks offer the functionality needed to create unit tests quickly and effectively. This brings further advantages:

– Everyone on the team can quickly and easily extend the tests.

– Everyone is able to run the tests and check the test results for correctness.

The TcUnit unit test framework was created as part of a project. Strictly speaking, it is a PLC library that provides methods for verifying variables (assert methods). If a check is not successful, a status message is written to the output window. The assert methods are contained in the function block FB_Assert.

There is one method per data type, and the structure is always similar. There is always one parameter containing the actual value and one parameter for the expected value. If both match, the method returns TRUE, otherwise FALSE. The parameter sMessage defines the text that is output in case of an error; this allows the messages to be assigned to the individual test cases. The names of the assert methods always start with AreEqual.

As an example, here is the method for checking a variable of type integer:

Pic01

Some methods contain additional parameters.

Pic02

Corresponding assert methods are available for all standard data types (BOOL, BYTE, INT, WORD, STRING, TIME, …). Some special variants are also supported, e.g. AreEqualMEM for checking a memory area or AreEqualGUID.

A first example

Unit tests are used to check individual function blocks independently of other components. These function blocks can be located in a PLC library or in a PLC project.

For the first example, the FB to be tested is located in a PLC project. It is the function block FB_Foo.

Pic03

bSwitch A rising edge sets the output bOut to TRUE. It remains active for the time tDuration. If the output is already set, the time tDuration is restarted.
bOff A rising edge immediately resets the output bOut.
tDuration Defines the time the output bOut remains set if no further rising edges are applied to bSwitch.

Unit tests are meant to prove that the function block FB_Foo behaves as expected. The test code is implemented directly in the TwinCAT project.

Project structure

To separate the test code from the application, the folder TcUnit_Tests is created. The POU P_Unit_Tests, from which the individual test cases are called, is placed in this folder.

For each FB, a corresponding test FB is created. It has the same name plus the suffix _Tests. For our example, this results in the name FB_Foo_Tests.

Pic04

In P_Unit_Tests, an instance of FB_Foo_Tests is created and called.

PROGRAM P_Unit_Tests
VAR
  fbFoo_Tests : FB_Foo_Tests;
END_VAR

fbFoo_Tests();

FB_Foo_Tests contains the entire test code for checking FB_Foo. For this purpose, one instance of FB_Foo per test case is created in FB_Foo_Tests. These instances are called with different parameters, and the return values are validated using the assert methods.

The individual test cases are processed in a state machine, which is also managed by the TcUnit PLC library. This way, for example, the test is terminated automatically as soon as an error is detected.

Defining the test cases

First, the individual test cases have to be defined. Each test case occupies a certain range in the state machine.

For naming the individual test cases, a few naming rules have proven useful and help to keep the test structure clear.

For test cases that check an input of FB_Foo, the name is composed of: [name of the input]_[test condition]_[expected behavior]. Test cases that test methods of FB_Foo are named analogously, i.e. [name of the method]_[test condition]_[expected behavior].

Following this scheme, the following test cases are defined:

Switch_RisingEdgeAndDuration1s_OutIsTrueFor1s

Tests whether a rising edge at bSwitch sets the output bOut for 1 s when tDuration is set to t#1s.

Switch_RisingEdgeAndDuration1s_OutIsFalseAfter1100ms

Tests whether, after a rising edge at bSwitch, the output bOut returns to FALSE after 1100 ms when tDuration is set to t#1s.

Switch_RetriggerSwitch_OutKeepsTrue

Tests whether another rising edge at bSwitch restarts the time tDuration.

Off_RisingEdgeAndOutIsTrue_OutIsFalse

Tests whether a rising edge at bOff resets the active output bOut to FALSE.

Implementing the test cases

Each test case occupies at least one step in the state machine. In this example, a step width of 16#0100 was chosen between the individual test cases. The first test case starts at 16#0100, the second at 16#0200, and so on. Initializations are performed in step 16#0000, while step 16#FFFF must be present because the state machine jumps to it as soon as an assert method detects an error. If the test runs through without errors, a message is output in 16#FF00 and the unit test for FB_Foo is finished.

The region pragma is very helpful here to simplify navigation in the source code.

FUNCTION_BLOCK FB_Foo_Tests
VAR_INPUT
END_VAR
VAR_OUTPUT
  bError : BOOL;
  bDone : BOOL;
END_VAR
VAR
  Assert : FB_ASSERT('FB_Foo');
  fbFoo_0100 : FB_Foo;
  fbFoo_0200 : FB_Foo;
  fbFoo_0300 : FB_Foo;
  fbFoo_0400 : FB_Foo;
END_VAR

CASE Assert.State OF
{region 'start'}
16#0000:
  bError := FALSE;
  bDone := FALSE;
  Assert.State := 16#0100;
{endregion}

{region 'Switch_RisingEdgeAndDuration1s_OutIsTrueFor1s'}
16#0100:
  fbFoo_0100(...
  ...
  Assert.State := 16#0200;
{endregion}

{region 'Switch_RisingEdgeAndDuration1s_OutIsFalseAfter1100ms'}
16#0200:
  fbFoo_0200(...
  ...
  Assert.State := 16#0300;
{endregion}

{region 'Switch_RetriggerSwitch_OutKeepsTrue'}
16#0300:
  fbFoo_0300(...
  ...
  Assert.State := 16#0400;
{endregion}

{region 'Off_RisingEdgeAndOutIsTrue_OutIsFalse'}
16#0400:
  fbFoo_0400(...
  ...
  Assert.State := 16#FF00;
{endregion}

{region 'done'}
16#FF00:
  Assert.PrintPassed('Done');
  Assert.State := 16#FF10;

16#FF10:
  bDone := TRUE;

{endregion}

{region 'error'}
16#FFFF:
  bError := TRUE;
{endregion}

ELSE
  Assert.StateMachineError();
END_CASE

There is a separate instance of FB_Foo for each test case. This ensures that every test case works with a freshly initialized instance of FB_Foo and prevents the test cases from influencing each other.

In the simplest case, a test case consists of only one step:

16#0100:
  fbFoo_0100(bSwitch := TRUE, tDuration := T#1S);
  Assert.AreEqualBOOL(TRUE, fbFoo_0100.bOut, 'Switch_RisingEdgeAndDuration1s_OutIsTrueFor1s');
  tonDelay(IN := TRUE, PT := T#900MS);
  IF (tonDelay.Q) THEN
    tonDelay(IN := FALSE);
    Assert.State := 16#0200;
  END_IF

The block under test is called for 900 ms. During this time, bOut must be TRUE because bSwitch was set to TRUE and tDuration is 1 s. The assert method AreEqualBOOL checks the output bOut. If it does not have the expected state, an error message is output. After 900 ms, setting the State property of FB_Assert switches to the next test case.

A test case can also consist of several steps:

16#0300:
  fbFoo_0300(bSwitch := TRUE, tDuration := T#500MS);
  Assert.AreEqualBOOL(TRUE, fbFoo_0300.bOut, 'Switch_RetriggerSwitch_OutKeepsTrue');
  tonDelay(IN := TRUE, PT := T#400MS);
  IF (tonDelay.Q) THEN
    tonDelay(IN := FALSE);
    fbFoo_0300(bSwitch := FALSE);
    Assert.State := 16#0310;
  END_IF

16#0310:
  fbFoo_0300(bSwitch := TRUE, tDuration := T#500MS);
  Assert.AreEqualBOOL(TRUE, fbFoo_0300.bOut, 'Switch_RetriggerSwitch_OutKeepsTrue');
  tonDelay(IN := TRUE, PT := T#400MS);
  IF (tonDelay.Q) THEN
    tonDelay(IN := FALSE);
    Assert.State := 16#0400;
  END_IF

The retriggering of bSwitch is done by resetting it at the end of step 16#0300 (bSwitch := FALSE) and setting it again in step 16#0310; the two AreEqualBOOL calls check whether the output remains set.

Message output

After all test cases for FB_Foo have been executed, a message is output (step 16#FF00).

Pic05

If an assert method detects an error, this is also output as a message.

Pic06

If the AbortAfterFail property of FB_Assert is set to TRUE, the state machine jumps to step 16#FFFF in case of an error and the test is terminated.

The assert methods prevent the same message from being output several times in a row within one step. Repeated output of the same message, e.g. in a loop, is thus suppressed. Setting the MultipleLog property to TRUE disables this filter and every message is output.

With the structure shown above, the unit tests are clearly separated from the actual application. FB_Foo remains completely unchanged.

This TwinCAT solution is stored in source control (such as TFS or Git) together with the TwinCAT solution for the PLC library. This way, the tests are available to all team members of a project. Thanks to the unit test framework, anyone can extend the tests, run the existing tests and easily evaluate the results.

Even if the term unit test framework is a bit of a stretch for the TcUnit PLC library, it shows that automated tests are possible with IEC 61131-3 using only a few tools. Commercial unit test frameworks go well beyond what a PLC library can provide: they include dialogs for starting the tests and displaying the results, and they often also highlight the areas in the source code that were covered by the individual test cases.

Library TcUnit (TwinCAT 3.1.4022) on GitHub

Sample (TwinCAT 3.1.4022) on GitHub

Tips

The biggest hurdle with unit tests is often your own inertia. Once that is overcome, the unit tests almost write themselves. The second hurdle is the question of which parts of the software to test. It makes little sense to try to test everything. Instead, you should concentrate on the essential areas of the software and thoroughly test the function blocks that form the basis of the application.

Basically, a unit test is considered to be of reasonable quality if as many branches as possible are executed when it runs. When writing unit tests, the test cases should therefore be chosen so that, if possible, all branches of the function block are executed.

If errors do occur in practice after all, it can be beneficial to write tests for that specific error case. This ensures that an error that has occurred once does not occur a second time.

The mere fact that two or more function blocks work correctly, and that this has been proven by unit tests, does not mean that an application also uses these function blocks correctly. Unit tests are therefore in no way a replacement for integration and acceptance tests. Those test methods validate the overall system and thus assess the big picture. Even when using unit tests, it is still necessary to test the application as a whole. However, a considerable share of potential errors is already eliminated in advance by unit tests, which ultimately saves testing effort and therefore time and money.

Further information

While preparing this post, Jakob Sagatowski published the first part of an article series on test driven development in TwinCAT on his blog AllTwinCAT. For anyone who wants to dive deeper into the topic, I can highly recommend the blog. It is encouraging that other PLC programmers are also dealing with testing their software. The book The Art of Unit Testing by Roy Osherove is also a good introduction to the topic. Even though the book was not written for IEC 61131-3, it contains some interesting approaches that can easily be applied to the PLC as well.

Finally, I would like to thank my colleagues Birger Evenburg and Nils Johannsen. A PLC library that both of them kindly made available to me served as the basis for this post.


Holger Schwichtenberg: .NET 4.7.1 erkennen

With the release of .NET Framework 4.7.1 on October 19, 2017, a new .NET version has been added.

MSDN Team Blog AT [MS]: Einladung: Mobile Industry Solutions

Holger Schwichtenberg: Neues Buch zu Windows PowerShell 5.1 und PowerShell Core 6.0

The new edition of Holger Schwichtenberg's book on PowerShell now covers not only Windows PowerShell but also the cross-platform PowerShell Core.

Golo Roden: Einführung in React, Folge 3: Eingaben verarbeiten

Processing input with React is not quite that simple at first glance, because React knows two kinds of components whose state management differs. In addition, you have to control how events are dispatched. How does that work?

MSDN Team Blog AT [MS]: Herbst Update der österreichischen PowerShell Community

Hello PowerShell community!

A lot has been going on with us again over the last month.

Upcoming events

Past events

We had two very exciting and well-attended events, one in Vienna at ETC and one in Linz/Leonding at Cubido.

Newsletter - the "Schnipseljagd"

Our weekly PowerShell newsletters were published regularly and covered the following topics:

We hope there was something in it for you!
The PowerShell UserGroup Austria
www.powershell.co.at

Code-Inside Blog: Introducing Electron.NET - building Electron Desktop Apps with ASP.NET Core


The last couple of weeks I worked with my buddy Gregor Biswanger on a new project called Electron.NET.

As you might already guess: It is some sort of bridge between the well known Electron and .NET.

If you don't know what Electron is: it helps to build desktop apps written in HTML/CSS/JavaScript.

The idea

Gregor asked me a while ago if it is possible to build desktop apps with ASP.NET Core (or .NET Core in general) and - indeed - there are some ideas on how to do it, but unfortunately there is no "official" UI stack available for .NET Core. After a little chat we agreed that the best bet would be to use Electron as is and somehow "embed" ASP.NET Core in it.

I went to bed, but Gregor was keen to build a prototype, and he did it: he was able to launch the ASP.NET Core application inside the Electron app and invoke some Electron APIs from the .NET world.

First steps done, yeah! In the following weeks Gregor was able to “bridge” most Electron APIs and I could help him with the tooling via our dotnet-extension.

Overview

The basic functionality is not too complex:

  • We ship a “standard” (more or less blank) Electron app
  • Inside the Electron part, two free ports are searched for:
    • The first free port is used inside the Electron app itself
    • The second free port is used for the ASP.NET Core process
  • The app launches the .NET Core process with the ASP.NET Core port (e.g. localhost:8002) and injects the first port as a parameter
  • Now we have a Socket.IO-based link between the launched ASP.NET Core app and the Electron app itself - this is our communication bridge!

At this point you can write your standard ASP.NET Core code and communicate with the Electron app via our Electron.API wrapper.

Gregor did a fabulous blogpost with a great example.

Interested? This way!

If you are interested, maybe take a look at the ElectronNET-Org on GitHub. The complete code is OSS and there are two demo repositories.

No way - this is a stupid idea!

The last days were quite interesting. We got some nice comments about the project and (of course) there were some critics.

As far as I know, the current "this is bad, because…" list looks like this:

  • We still need node.js and Electron.NET is just a wrapper around Electron: Yes, it is.
  • Perf will suck: Well… to be honest - the current startup time does really suck, because we not only launch the Electron stuff, but we also need to start the .NET Core based WebHost - maybe we will find a solution
  • Starting a web server inside the app is bad on multiple levels because of security and perf: I agree, there are some ideas how to fix it, but this might take some time.

There are lots of issues open and the project is pretty young, maybe we will find a solution for the above problems, maybe not.

Final thoughts

The interesting point for me is that we seem to have hit a nerve with this project: there is demand to write cross-platform desktop applications.

We are looking for feedback - please share your opinion on the ElectronNET-GitHub-Repo or try it out :)

Desktop is dead, long live the desktop!

Holger Schwichtenberg: Zeichen in Microsoft SQL Server ersetzen

While multiple character replacements previously required several nested calls to the Replace() function, since Microsoft SQL Server 2017 this can be done more efficiently with the new Translate() function.

Manfred Steyer: Generating Custom Code with the Angular CLI and Schematics



This blog post is part of an article series.


For some versions now, the Angular CLI has used a library called Schematics to scaffold building blocks like components or services. One of the best things about this is that Schematics also allows you to create your own code generators. Using this extension mechanism, we can modify the way the CLI generates code. But we can also provide custom collections with code generators and publish them as npm packages. A good example for this is Nrwl's Nx, which allows generating boilerplate code for Ngrx or upgrading an existing application from AngularJS 1.x to Angular.

These code generators are called Schematics and can not only create new files but also modify existing ones. For instance, the CLI uses the latter to register generated components with existing modules.

In this post, I'm showing how to create a collection with a custom Schematic from scratch and how to use it with an Angular project. The sources can be found here.

In addition to this, you'll find a nice video with Mike Brocchi from the CLI-Team explaining the basics and ideas behind Schematics here.

The public API of Schematics is currently experimental and can change in the future.
Angular Labs

Goal

To demonstrate how to write a simple Schematic from scratch, I will build a code generator for a Bootstrap-based side menu. With a respective template, like the free ones at Creative Tim, the result could look like this:

Solution

Before creating a generator it is a good idea to have an existing solution that contains the code you want to generate in all variations.

In our case, the component is quite simple:

import { Component, OnInit } from '@angular/core';

@Component({
    selector: 'menu',
    templateUrl: 'menu.component.html'
})
export class MenuComponent {
}

In addition to that, the template for this component is just a bunch of HTML tags with the right Bootstrap-based classes -- something I cannot learn by heart, which is why a generator seems to be a good idea:

<div class="sidebar-wrapper">
    <div class="logo">
        <a class="simple-text">
            AppTitle
        </a>
    </div>
    <ul class="nav">

        <li>
            <a>
                <i class="ti-home"></i>
                <p>Home</p>
            </a>
        </li>

        <!-- add here some other items as shown before -->
    </ul>
</div>

In addition to the code shown before, I also want to have the possibility to create a more dynamic version of this side menu. This version uses an interface MenuItem to represent the items to display:

export interface MenuItem {
    title: string;
    iconClass: string;
}

A MenuService is providing instances of MenuItem:

import { MenuItem } from './menu-item';

export class MenuService {

    public items: MenuItem[] = [
        { title: 'Home', iconClass: 'ti-home' },
        { title: 'Other Menu Item', iconClass: 'ti-arrow-top-right' },
        { title: 'Further Menu Item', iconClass: 'ti-shopping-cart'},
        { title: 'Yet another one', iconClass: 'ti-close'}
    ];

}

The component gets an instance of the service by means of dependency injection:

import { Component, OnInit } from '@angular/core';
import { MenuItem } from './menu-item';
import { MenuService } from './menu.service';

@Component({
    selector: 'menu',
    templateUrl: './menu.component.html',
    providers:[MenuService]
})
export class MenuComponent {

    items: MenuItem[];

    constructor(service: MenuService) {
        this.items = service.items;
    }
}

After fetching the MenuItems from the service the component iterates over them using *ngFor and creates the needed markup:

<div class="sidebar-wrapper">
    <div class="logo">
        <a class="simple-text">
            AppTitle
        </a>
    </div>
    <ul class="nav">
        <li *ngFor="let item of items">
            <a href="#">
                <i class="{{item.iconClass}}"></i>
                <p>{{item.title}}</p>
            </a>
        </li>
    </ul>
</div>

Even though this example is quite simple, it provides enough material to demonstrate the basics of Schematics.

Scaffolding a Collection for Schematics ... with Schematics

To provide a project structure for an npm package with a Schematics Collection, we can leverage Schematics itself. The reason is that the product team provides a "meta schematic" for this. To get everything up and running we need to install the following npm packages:

  • @angular-devkit/schematics for executing Schematics
  • @schematics/schematics for scaffolding a Collection
  • rxjs, which is a needed transitive dependency

For the sake of simplicity, I've installed them globally:

npm i -g @angular-devkit/schematics
npm i -g @schematics/schematics
npm i -g rxjs

In order to get our collection scaffolded we just need to type in the following command:

schematics @schematics/schematics:schematic --name nav

The parameter @schematics/schematics:schematic consists of two parts. The first part -- @schematics/schematics -- is the name of the collection, or to be more precise, the npm package with the collection. The second part -- schematic -- is the name of the Schematic we want to use for generating code.

After executing this command we get an npm package with a collection that holds three demo schematics:

npm package with collection

The file collection.json contains metadata about the collection and points to the schematics in the three sub folders. Each schematic has meta data of its own describing the command line arguments it supports as well as generator code. Usually, they also contain template files with placeholders used for generating code. But more about this in the following sections.

Before we can start, we need to npm install the dependencies the generated package.json points to. In addition to that, it is a good idea to rename its section dependencies to devDependencies because we don't want to install them when we load the npm package into a project:

{
  "name": "nav",
  "version": "0.0.0",
  "description": "A schematics",
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "test": "npm run build && jasmine **/*_spec.js"
  },
  "keywords": [
    "schematics"
  ],
  "author": "",
  "license": "MIT",
  "schematics": "./src/collection.json",
  "devDependencies": {
    "@angular-devkit/core": "^0.0.15",
    "@angular-devkit/schematics": "^0.0.25",
    "@types/jasmine": "^2.6.0",
    "@types/node": "^8.0.31",
    "jasmine": "^2.8.0",
    "typescript": "^2.5.2"
  }
}

As you saw in the last listing, the package.json contains a field schematics which points to the file collection.json with the collection's metadata.

Adding a custom Schematic

The three generated schematics contain comments that describe quite well how Schematics works. It is a good idea to have a look at them. For this tutorial, I've deleted them to concentrate on my own schematic. For this, I'm using the following structure:

Structure for custom Schematics

The new folder menu contains the custom schematic. Its command line arguments are described by the file schema.json using a JSON schema. The described data structure can also be found as an interface within the file schema.ts. Normally it would be a good idea to generate this interface out of the schema, but for this simple case I've just handwritten it.

The index.ts contains the so-called factory for the schematic. This is a function that generates a rule (containing other rules) which describes how the code is scaffolded. The templates used for this are located in the files folder. We will have a look at them later.

First of all, let's update the collection.json to make it point to our menu schematic:

{
    "schematics": {
      "menu": {
        "aliases": [ "mnu" ],
        "factory": "./menu",
        "description": "Generates a menu component",
        "schema": "./menu/schema.json"
      }
    }
}

Here we have a property menu for the menu schematic. This is also the name we reference when calling it. The array aliases contains other possible names to use, and factory points to the file with the schematic's factory. Here, it points to ./menu which is just a folder. That's why the factory is looked up in the file ./menu/index.js.

In addition to that, the collection.json also points to the schema with the command line arguments. This file describes a property for each possible argument:

{
    "$schema": "http://json-schema.org/schema",
    "id": "SchemanticsForMenu",
    "title": "Menu Schema",
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "default": "name"
      },
      "path": {
        "type": "string",
        "default": "app"
      },
      "appRoot": {
        "type": "string"
      },
      "sourceDir": {
        "type": "string",
        "default": "src"
      },
      "menuService": {
        "type": "boolean",
        "default": false,
        "description": "Flag to indicate whether an menu service should be generated.",
        "alias": "ms"
      }
    }
  }

The argument name holds the name of the menu component, its path as well as the path of the app (appRoot) and the src folder (sourceDir). These parameters are usually used by all schematics the CLI provides. In addition to that, I've defined a property menuService to indicate, whether the above mentioned service class should be generated too.

The interface for the schema within schema.ts is called MenuOptions:

export interface MenuOptions {
    name: string;
    appRoot: string;
    path: string;
    sourceDir: string;
    menuService: boolean;
}

Schematic Factory

To tell Schematics how to generate the requested code files, we need to provide a factory. This function describes the necessary steps with a rule which normally makes use of further rules:

import { MenuOptions } from './schema';
import { Rule, [...] } from '@angular-devkit/schematics';
[...]
export default function (options: MenuOptions): Rule {
    [...]
}

For this factory, I've defined two helper constructs at the top of the file:

import { dasherize, classify } from '@angular-devkit/core';
import { MenuOptions } from './schema';
import { filter, Rule, [...] } from '@angular-devkit/schematics';

[...]

const stringUtils = { dasherize, classify };

function filterTemplates(options: MenuOptions): Rule {
  if (!options.menuService) {
    return filter(path => !path.match(/\.service\.ts$/) && !path.match(/-item\.ts$/) && !path.match(/\.bak$/));
  }
  return filter(path => !path.match(/\.bak$/));
}

[...]

The first one is the object stringUtils which just groups some functions we will need later within the templates: the function dasherize transforms a name into its kebab-case equivalent, which can be used as a file name (e.g. SideMenu to side-menu), and classify transforms a name into Pascal case for class names (e.g. side-menu to SideMenu).
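As a quick, hedged illustration of what these two helpers do (using the same import as in the factory above):

import { dasherize, classify } from '@angular-devkit/core';

// dasherize turns a name into its kebab-case form, classify into Pascal case.
console.log(dasherize('SideMenu'));  // 'side-menu' -> used for file names
console.log(classify('side-menu')); // 'SideMenu'  -> used for class names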

The function filterTemplates creates a Rule that filters the templates within the folder files. For this, it delegates to the existing filter rule. Depending on whether the user requested a menu service, more or less template files are used. To make testing and debugging easier, I'm excluding .bak in each case.

Now let's have a look at the factory function:

import { chain, mergeWith } from '@angular-devkit/schematics';
import { dasherize, classify } from '@angular-devkit/core';
import { MenuOptions } from './schema';
import { apply, filter, move, Rule, template, url, branchAndMerge } from '@angular-devkit/schematics';
import { normalize } from '@angular-devkit/core';

[...]

export default function (options: MenuOptions): Rule {

    options.path = options.path ? normalize(options.path) : options.path;
    
    const templateSource = apply(url('./files'), [
        filterTemplates(options),
        template({
          ...stringUtils,
          ...options
        }),
        move(options.sourceDir)
      ]);
      
      return chain([
          mergeWith(templateSource)
      ]);

}

At the beginning, the factory normalizes the path the caller passed in. This means that it deals with the conventions of different operating systems, e. g. using different path separators (e. g. / vs. \).

Then, it uses apply to apply the passed rules to all templates within the files folder. After the available templates have been filtered, they are executed with the rule returned by template; the passed properties are used within the templates. This creates a virtual folder structure with generated files that is moved to the sourceDir.

The resulting templateSource is a Source instance. Its responsibility is to create a Tree object that represents a file tree, which can be either virtual or physical. Schematics uses virtual file trees as a staging area. Only when everything has worked is it merged with the physical file tree on your disk. You can also think about this as committing a transaction.

At the end, the factory returns a rule created with the chain function (which is a rule too). It creates a new rule by chaining the passed ones. In this example we are just using the rule mergeWith but the enclosing chain makes it extendable.

As the name implies, mergeWith merges the Tree represented by templateSource with the tree which represents the current Angular project.

Templates

Now it's time to look at our templates within the files folder:

Folder with Templates

The nice thing about this is that the file names are templates too. For instance __x__ would be replaced with the contents of the variable x which is passed to the template rule. You can even call functions to transform these variables. In our case, we are using __name@dasherize__ which passes the variable name to the function dasherize which in turn is passed to template too.

The easiest one is the template for the item class which represents a menu item:

export interface <%= classify(name) %>Item {
    title: string;
    iconClass: string;
}

Like in other known template languages (e. g. PHP), we can execute code for the generation within the delimiters <% and %>. Here, we are using the short form <%=value%> to write a value to the generated file. This value is just the name the caller passed transformed with classify to be used as a class name.
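For example, assuming the caller passes side-menu as the name (as done later in this post), the item template above should expand to something like this:

// classify('side-menu') resolves to 'SideMenu'
export interface SideMenuItem {
    title: string;
    iconClass: string;
}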

The template for the menu service is built in a similar way:

import { <%= classify(name) %>Item } from './<%=dasherize(name)%>-item';

export class <%= classify(name) %>Service {

    public items: <%= classify(name) %>Item[] = [
        { title: 'Home', iconClass: 'ti-home' },
        { title: 'Other Menu Item', iconClass: 'ti-arrow-top-right' },
        { title: 'Further Menu Item', iconClass: 'ti-shopping-cart'},
        { title: 'Yet another one', iconClass: 'ti-close'}
    ];
}

In addition to that, the component template contains some if statements that check whether a menu service should be used:

import { Component, OnInit } from '@angular/core';
<% if (menuService) { %>
import { <%= classify(name) %>Item } from './<%=dasherize(name)%>-item';
import { <%= classify(name) %>Service } from './<%=dasherize(name)%>.service';
<% } %>

@Component({
    selector: '<%=dasherize(name)%>',
    templateUrl: '<%=dasherize(name)%>.component.html',
    <% if (menuService) { %>
        providers: [<%= classify(name) %>Service]
    <% } %>
})
export class <%= classify(name) %>Component {

<% if (menuService) { %>
    items: <%= classify(name) %>Item[];

    constructor(service: <%= classify(name) %>Service) {
        this.items = service.items;
    }
<% } %>

}

The same is the case for the component's template. When the caller requested a menu service, it uses it; otherwise it just gets hardcoded sample items:

<div class="sidebar-wrapper">
    <div class="logo">
        <a class="simple-text">
            AppTitle
        </a>
    </div>
    <ul class="nav">

<% if (menuService) { %>
    <li *ngFor="let item of items">
        <a>
            <i class="{{item.iconClass}}"></i>
            <p>{{item.title}}</p>
        </a>
    </li>

<% } else { %>
        <li>
            <a>
                <i class="ti-home"></i>
                <p>Home</p>
            </a>
        </li>

        <li>
            <a>
                <i class="ti-arrow-top-right"></i>
                <p>Other Menu Item</p>
            </a>
        </li>

		<li>
			<a>
				<i class="ti-shopping-cart"></i>
				<p>Further Menu Item</p>
			</a>
		</li>

		<li>
			<a>
				<i class="ti-close"></i>
				<p>Yet another one</p>
			</a>
		</li>
        <% } %>
    </ul>
</div>

Building and Testing with a Sample Application

To build the npm package, we just need to call npm run build which is just triggering the TypeScript compiler.

For testing it, we need a sample application that can be created with the CLI. Please make sure to use Angular CLI version 1.5 RC.4 or higher.

For me, the easiest way to test the collection was to copy the whole package into the sample application's node_module folder so that everything ended up within node_modules/nav. Please make sure to exclude the collection's node_modules folder, so that there is no folder node_modules/nav/node_modules.

Instead of this, pointing to a relative folder containing the collection should work too. In the experiments I did with a release candidate, however, this wasn't always the case.

After this, we can use the CLI to scaffold our side menu:

ng g menu side-menu --menuService --collection nav

Here, menu is the name of the schematic, side-menu the file name we are passing and nav the name of the npm package.

Using the Schematic

After this, we need to register the generated component with the AppModule:

import { SideMenuComponent } from './side-menu/side-menu.component';
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';

import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    AppComponent,
    SideMenuComponent
  ],
  imports: [
    BrowserModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

In another post, I will show how to automate even this task with Schematics.

After this, we can use the component in our AppComponent's template. The following sample also contains some boilerplate for the Bootstrap theme used in the initial screenshot.

<div class="wrapper">
  <div class="sidebar" data-background-color="white" data-active-color="danger">

      <side-menu></side-menu>
  
  </div>
  <div class="main-panel">
      <div class="content">
        <div class="card">
          <div class="header">
            <h1 class="title">Hello World</h1>
          </div>
          <div class="content">
            <div style="padding:7px">
             Lorem ipsum ...
            </div>
          </div>
        </div>
      </div>
  </div>
</div>

To get Bootstrap and the Bootstrap Theme, you can download the free version of the paper theme and copy it to your assets folder. Also reference the necessary files within the file .angular-cli.json to make sure they are copied to the output folder:

[...]
"styles": [
  "styles.css",
  "assets/css/bootstrap.min.css",
  "assets/css/paper-dashboard.css",
  "assets/css/demo.css",
  "assets/css/themify-icons.css"
],
[...]

After this, we can finally run our application: ng serve.

Jürgen Gutsch: GraphiQL for ASP.​NET Core

One nice thing about blogging is the feedback from the readers. I got some nice kudos, but also great new ideas. One idea was born out of a question about a "graphi" UI for the GraphQL Middleware I wrote some months ago. I had never heard about "graphi", which actually is "GraphiQL", a generic HTML UI over a GraphQL endpoint. It seemed to be something like a Swagger UI, but just for GraphQL. That sounded nice, so I did some research about it.

What is GraphiQL?

Actually it is not the same as Swagger and not as detailed as Swagger, but it provides a simple and easy-to-use UI to play around with your GraphQL end-point. So you cannot really compare the two.

GraphiQL is a React component provided by the GraphQL creators that can be used in your project. It basically provides an input area to write some GraphQL queries and a button to send that query to your GraphQL end-point. You'll then see the result or the error on the right side of the UI.

Additionally it provides some more nice features:

  • A history of sent queries, which appears on the left side when you press the history button, so previously used queries can be reused.
  • It rewrites the URL to support linking to a specific query. It stores the query and the variables in the URL, so you can send it to someone else or bookmark the query to test.
  • It actually creates a documentation out of the GraphQL end-point. By clicking the "Docs" link it opens a documentation about the types used in this API. This is really magic, because it shows the documentation of a type I never requested:

Implementing GraphiQL

The first idea was to write something like this on my own. But it should be the same as the existing GraphiQL UI, so why not use the existing implementation? Thanks to Steve Sanderson, we have the Node Services for ASP.NET Core. Why not run the existing GraphiQL implementation in a Middleware using the NodeServices?

I tried it with the "apollo-server-module-graphiql" package. I call this small JavaScript to render the GraphiQL UI and return the result back to C# via the NodeServices:

var graphiql = require('apollo-server-module-graphiql');

module.exports = function (callback, options) {
    var data = {
        endpointURL: options.graphQlEndpoint
    };

    var result = graphiql.renderGraphiQL(data);
    callback(null, result);
};

The usage of that script inside the Middleware looks like this:

var file = _env.WebRootFileProvider.GetFileInfo("graphiql.js");
var result = await _nodeServices.InvokeAsync<string>(file.PhysicalPath, _options);
await httpContext.Response.WriteAsync(result); // write the rendered GraphiQL page to the HTTP response

That works great, but has one problem: it wraps the GraphQL query in a JSON object that is posted to the GraphQL end-point. Because of that, I would need to change the GraphQlMiddleware implementation. The current implementation expects the plain GraphQL query in the POST body.

What is the most useful approach? Wrapping the GraphQL query in a JSON object or sending the plain query? Any ideas? What do you use? Please tell me by dropping a comment.
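To illustrate the difference from a client's point of view, here is a rough sketch of both request variants against the /graph end-point used later in this post; the sample query and the content types are assumptions for illustration only:

// Assumed sample query against the BooksQuery root type
const query = '{ books { title } }';

// Variant 1: plain GraphQL query in the POST body (what the current GraphQlMiddleware expects)
fetch('/graph', {
  method: 'POST',
  headers: { 'Content-Type': 'text/plain' },
  body: query
}).then(r => r.json()).then(console.log);

// Variant 2: JSON envelope, as GraphiQL/Apollo tooling sends it
fetch('/graph', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables: {} })
}).then(r => r.json()).then(console.log);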

With this approach I'm pretty much dependent on the Apollo developers and need to change my implementation whenever they change theirs.

This is why I decided to use the same concept of generating the UI as the "apollo-server-module-graphiql" package but implemented in C#. This unfortunately doesn't need the NodeServices anymore.

I use exactly the same generated code as this Node module, but changed the way the query is sent to the server. Now the plain query is sent to the server.

I started playing around with this and added it to the existing project, mentioned here: GraphQL end-point Middleware for ASP.NET Core.

Using the GraphiqlMiddleware

The result is as easy to use as the GraphQlMiddleware. Let's see how it looks to add the Middlewares:

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
	// adding the GraphiQL UI
    app.UseGraphiql(options =>
    {
        options.GraphiqlPath = "/graphiql"; // default
        options.GraphQlEndpoint = "/graph"; // default
    });
}
// adding the GraphQL end point
app.UseGraphQl(options =>
{
    options.GraphApiUrl = "/graph"; // default
    options.RootGraphType = new BooksQuery(bookRepository);
    options.FormatOutput = true; // default: false
});

As you can see, the second Middleware is bound to the first one by using the same path "/graph". I didn't create any hidden dependency between the two Middlewares, to make it easy to use them in various combinations. Maybe you want to use the GraphiQL UI only in the Development or Staging environment, as shown in this example.

Now start the web using Visual Studio (press [F5]). The web starts with the default view or API. Add "graphiql" to the URL in the browser's address bar and see what happens. You should see a generated UI for your GraphQL endpoint, where you can now start playing around with your API, testing and debugging it with your current data. (See the screenshots on top.)

I'll create a separate NuGet package for the GraphiqlMiddleware. This will not have the GraphQlMiddleware as a dependency and could be used completely separate.

Conclusion

This was a lot easier to implement than expected. Currently there is still some refactoring needed:

  • I don't like to have the HTML and JavaScript code in the C#. I'd like to load that from an embedded resource file, which actually is an HTML file.
  • I should add some more configuration options, e.g. to change the theme (just like the original Node implementation), to preload queries and results, etc.
  • Find a way to use it offline as well. Currently a connection to the internet is needed to load the CSS and JavaScript files from the CDNs.

You wanna try it? Download, clone or fork the sources on GitHub.

What do you think about that? Could this be useful to you? Please leave a comment and tell me about your opinion.

Update [10/26/2017 21:03]

GraphiQL is much more powerful than expected. I was wondering how GraphiQL creates IntelliSense support in the editor and how it creates the documentation. I had a deeper look into the traffic and found two more cool things about it:

First: GraphiQL sends a special query to the GraphQL end-point to request the GraphQL-specific documentation. In this case it looks like this:

  query IntrospectionQuery {
    __schema {
      queryType { name }
      mutationType { name }
      subscriptionType { name }
      types {
        ...FullType
      }
      directives {
        name
        description
        locations
        args {
          ...InputValue
        }
      }
    }
  }

  fragment FullType on __Type {
    kind
    name
    description
    fields(includeDeprecated: true) {
      name
      description
      args {
        ...InputValue
      }
      type {
        ...TypeRef
      }
      isDeprecated
      deprecationReason
    }
    inputFields {
      ...InputValue
    }
    interfaces {
      ...TypeRef
    }
    enumValues(includeDeprecated: true) {
      name
      description
      isDeprecated
      deprecationReason
    }
    possibleTypes {
      ...TypeRef
    }
  }

  fragment InputValue on __InputValue {
    name
    description
    type { ...TypeRef }
    defaultValue
  }

  fragment TypeRef on __Type {
    kind
    name
    ofType {
      kind
      name
      ofType {
        kind
        name
        ofType {
          kind
          name
          ofType {
            kind
            name
            ofType {
              kind
              name
              ofType {
                kind
                name
                ofType {
                  kind
                  name
                }
              }
            }
          }
        }
      }
    }
  }

Try this query, send it to your GraphQL API using Postman or a similar tool, and see what happens :)

Second: GraphQL for .NET knows how to answer that query and sends the full documentation about my data structure to the client, like this:

{
    "data": {
        "__schema": {
            "queryType": {
                "name": "BooksQuery"
            },
            "mutationType": null,
            "subscriptionType": null,
            "types": [
                {
                    "kind": "SCALAR",
                    "name": "String",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Boolean",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Float",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Int",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "ID",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Date",
                    "description": "The `Date` scalar type represents a timestamp provided in UTC. `Date` expects timestamps to be formatted in accordance with the [ISO-8601](https://en.wikipedia.org/wiki/ISO_8601) standard.",
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
                {
                    "kind": "SCALAR",
                    "name": "Decimal",
                    "description": null,
                    "fields": null,
                    "inputFields": null,
                    "interfaces": null,
                    "enumValues": null,
                    "possibleTypes": null
                },
              	[ . . . ]
                // many more documentation from the server
        }
    }
}

This is really awesome. With GraphiQL I got a lot more stuff than expected. And it didn't take more than 5 hours to implement this middleware.

Christian Dennig [MS]: Deploy a hybrid Kubernetes Cluster to Azure Container Service

Lately, I have been working a lot with Kubernetes as one (of many) solutions to run Docker containers in the cloud. Microsoft therefore offers Azure Container Service (ACS), a service to create and (partly) manage a Kubernetes cluster on Azure.

You normally would deploy such a cluster via the Azure Portal or e.g. via the Azure Command Line Interface. Here is a sample command:

az acs create --orchestrator-type kubernetes 
  --resource-group k8s-rg --name myk8scluster --generate-ssh-keys

Unfortunately, you cannot customize all the properties of the Kubernetes deployment with this approach, e.g. if you want to place the cluster in an existing Azure Virtual Network (VNET) or if you want to run multiple node types within the cluster to be able to run Linux and Windows based images/pods in parallel.

To achieve this, you must use the ACS engine which is a kind of “translator” between cluster configurations (which are provided in JSON format) and Azure Resource Manager templates.

ACS Engine

The ACS engine provides a convenient way to generate an ARM template that creates a Kubernetes cluster for you in Azure. The nice thing about it is that you can influence a lot more properties of the cluster than you can via the portal or the CLI. But more on this later…

If you execute the ACS engine, the resulting template consists of all the resources you need to run a cluster in Azure, e.g.:

  • Availability Sets for Master and Agent nodes
  • VMs / VM extensions
  • NICs / VNET configurations
  • Load Balancer
  • etc.

You can deploy the ARM template as you would deploy any other template to Azure by running a Powershell or CLI command, even via the portal.

But let’s get to our sample, creating a hybrid Windows/Linux cluster…

Hybrid Cluster With ACS Engine

We will start by creating a cluster definition file…

Some details on that:

  • First of all, we create the cluster configuration, setting the Kubernetes version to “1.8” (starting line 4)
  • the profile of the master node is set, giving it a name and a VM type  “Standard_D2_v2” (beginning line 8).
  • agent profiles are defined, setting up two profiles. One for Linux nodes, the other one for Windows nodes…also setting the VM size to “Standard_D2_v2” (lines 13 to 27)
  • next, the two profiles are configured, each with the corresponding access information (user name or ssh key), lines 28 to 41
  • the last step is to set the service principal (application id and password, lines 42 to 45), which is needed by Kubernetes to interact with Azure. E.g. when you define a service of type “LoadBalancer”, K8s reaches out to Azure to create an external IP at the Azure Load Balancer
  • the .env file has to be placed in the same folder as the JSON file

In case an SSH key has to be created, you can do this on Windows via PuTTYgen or on Linux/Mac via ssh-keygen directly. The value of the public key must be specified in the definition file (keyData, line 37).

If no service principal already exists, you can create it using the following command:

az ad sp create-for-rbac 
  --role Contributor --scopes="/subscriptions/[YOUR_SUBSCRIPTION_ID]"

The values from “appId” and “password” must be stored in the corresponding properties of the cluster configuration (lines 43 and 44).

Generate The ARM Template

In order to create the ARM template from the cluster configuration, we first need the ACS-engine binary. You can download it from here. Unzip the contents and place the destination folder into the PATH environment variable, to be able to run it from anywhere on your machine.

Next, run the ACS engine with your configuration…

acs-engine generate .\k8s-hybrid.json

The ACS engine takes the JSON file and generates an _output directory with all the necessary artifacts (among other things, a rather large ARM template + parameters-file) to deploy the K8s cluster to Azure. Here’s a visual representation of the ARM template:

hybrid_cluster

ARMVIZ representation of the resulting template

Next, you simply deploy the template, as you would do it with other ARM templates. First, create a Azure Resource Group:

az group create -n k8scluster-rg -l westeurope

Afterwards, switch to the _output/[CLUSTERNAME] folder and deploy the template:

az group deployment create -g k8scluster-rg 
  --template-file .\azuredeploy.json 
  --parameters .\azuredeploy.parameters.json

After some time, the command returns, telling you that the cluster has been created successfully.

Connect To The Cluster

After creating the cluster, we want to connect to it via kubectl. Therefore we need the configuration of the cluster, which we have to copy from the master node via scp or pscp (from PuTTY). To do this, execute this command (I'm on Windows, so I use pscp):

pscp -i [PATH_TO_PRIVATE_SSHKEY_FILE] 
  azureuser@[FQDN_OF_MASTER_NODE]:.kube/config .

If you want to, set the config file you just downloaded as the default file (environment variable KUBECONFIG), otherwise use the config by passing

--kubeconfig=".\config"

to each kubectl command.

So, let’s try to get the nodes of our cluster:

kubectl --kubeconfig=".\config" get nodes

Result:

[Image: output of kubectl get nodes]

That looks pretty good! Now let’s connect to the dashboard via…

kubectl --kubeconfig=".\config" proxy

…then open up a browser window and point to:

http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard/

Now head over to the “Nodes” view and click on one of the “*acs90*” nodes:

[Image: Kubernetes dashboard, Nodes view]

[Image: A Windows node]

As you can see, this node runs Windows (one of the node labels is “beta.kubernetes.io/os: windows“)…a Windows version, of course, that can run Windows-based containers!

Deploy Some Windows And Linux Containers

To test our cluster, we deploy the following template to Kubernetes:
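The original post embeds the template as a gist. The YAML below is only an illustrative sketch of such a template, assuming an nginx deployment for the Linux nodes and an IIS deployment for the Windows nodes, each exposed through a service of type LoadBalancer; names, images and replica counts are my own placeholders, and the line numbers referenced below refer to the author's original file.

# nginx deployment, pinned to Linux nodes
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      nodeSelector:
        beta.kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
---
# IIS deployment, pinned to Windows nodes
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: iis
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: iis
    spec:
      containers:
      - name: iis
        image: microsoft/iis
        ports:
        - containerPort: 80
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: iis
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: iis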

What it will create: an nginx deployment scheduled onto the Linux nodes and an IIS deployment scheduled onto the Windows nodes, plus a service in front of each of them.

The important part of the deployment file is the node selection for the pods/containers. In lines 25/26, the template specifies that Linux-based nodes are selected for the nginx containers:

nodeSelector:
        beta.kubernetes.io/os: linux

The same is done for the IIS deployment (lines 52/53):

nodeSelector:
        beta.kubernetes.io/os: windows

These properties tell the Kubernetes cluster to create the nginx pods only on Linux nodes and the IIS pods only on Windows nodes. Magic!

After some time (give it a few “seconds”, the image of the IIS pods is about 5 GB!), you will be able to query the running pods and the services:
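A query along these lines will do (kubeconfig handling as above):

kubectl --kubeconfig=".\config" get pods,services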

[Image: running pods and services]

Et voilà, we are able to connect to the services via a browser…

[Image: Linux based container]

[Image: Windows based container]

That’s what we wanted to achieve…Linux and Windows containers managed by the same cluster! 🙂

Wrap Up

I hope you could see how easy it is to deploy a Kubernetes cluster to Azure. In this example, I only showed how to create a hybrid cluster that can run Linux- and Windows-based containers/pods in parallel. Nevertheless, on the ACS Engine GitHub repository (which this sample is basically taken from) you can find many other examples, e.g. placing the cluster in an existing VNET, attaching disks to nodes, creating a Swarm cluster, etc. Check it out, play with ACS…and have fun! 🙂


Holger Schwichtenberg: Book "Moderne Datenzugriffslösungen mit Entity Framework Core 2.0: Datenbankprogrammierung mit .NET/.NET Core und C#"

On around 460 pages, the book covers all the important database access scenarios with Entity Framework Core 2.0, along with numerous practical solutions and tips.

Johannes Renatus: Loading 64-bit DLLs in a 32-bit Application with Reflection

You might ask yourself what anyone needs this for. But there is always a good reason, and in my case I had to load 64-bit DLLs in a T4 template and search them for custom attributes in order to generate matching output. Unfortunately, the console application with which Visual Studio […]

Jürgen Gutsch: .NET Core 2.0 and ASP.NET Core 2.0 are here and ready to use

Recently I gave an overview talk about .NET Core, .NET Standard and ASP.NET Core at the Azure Meetup Freiburg. I told them about .NET Core 2.0, showed the dotnet CLI and the integration in Visual Studio, and explained the purpose of .NET Standard and why developers should care about it. I also showed them ASP.NET Core, how it works, how to host it, and explained the main differences to the ASP.NET 4.x versions.

BTW: This Meetup was really great. Well organized at a pretty nice and modern location. It was really fun to talk there. Thanks to Christian, Patrick and Nadine for organizing this event :-)

After that talk they asked me some pretty interesting and important questions:

Question 1: "Should we start using ASP.NET Core and .NET Core?"

My answer is a pretty clear YES.

  • Use .NET Standard for your libraries if you don't have dependencies on platform-specific APIs (e.g. registry, drivers, etc.), even if you don't need to be cross-platform. Why? Because it just works, and you keep the door open to share your library with other platforms later on. Since .NET Standard 2.0 you are not really limited; you can do almost everything with C# that you can do with the full .NET Framework.
  • Use ASP.NET Core for new web projects if you don't need Web Forms, because it is fast, lightweight and cross-platform. Thanks to .NET Standard you are able to reuse your older .NET Framework libraries if you need to.
  • Use ASP.NET Core to get the new, modern MVC framework with tag helpers, or the new lightweight Razor Pages.
  • Use ASP.NET Core to host your application on various cloud providers, not only on Azure, but also on Amazon and Google:
    • http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/dotnet-core-tutorial.html
    • https://aws.amazon.com/blogs/developer/running-serverless-asp-net-core-web-apis-with-amazon-lambda/
    • https://codelabs.developers.google.com/codelabs/cloud-app-engine-aspnetcore/#0
    • https://codelabs.developers.google.com/codelabs/cloud-aspnetcore-cloudshell/#0
  • Use ASP.NET Core to write lightweight and fast Web API services, running either self-hosted, in Docker, or on Linux, Mac or Windows.
  • Use ASP.NET Core to create lightweight back-ends for Angular or React based SPA applications.
  • Use .NET Core to write tools for different platforms.

As a library developer, there is almost no reason not to use .NET Standard. Since .NET Standard 2.0, the full API surface of the .NET Framework is available and can be used to write libraries for .NET Core, Xamarin, UWP and the full .NET Framework. It also supports referencing full .NET Framework assemblies.

The .NET Standard is an API specification that needs to be implemented by the platform-specific frameworks. The .NET Framework 4.6.2, .NET Core 2.0 and Xamarin implement the .NET Standard 2.0, which means they all use the same API (namespace names, class names, method names). Libraries written against the .NET Standard 2.0 API will run on the .NET Framework 4.6.2, on .NET Core 2.0, as well as on Xamarin and on every other platform-specific framework that supports that API.
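To illustrate (a minimal sketch, not taken from the talk): a .NET Standard library is just an SDK-style project targeting netstandard2.0, which can then be referenced from .NET Framework, .NET Core and Xamarin projects alike.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- target the .NET Standard 2.0 API surface -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

This is essentially what "dotnet new classlib" generates with the .NET Core 2.0 SDK.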

Question 2: Do we need to migrate our existing web applications to ASP.NET Core?

My answer is: NO. You don't need to, and I would propose not doing it if there's no good reason for it.

There are a lot of blog posts out there about migrating web applications to ASP.NET Core, but you don't need to if you don't face any problems with your existing one. There are just a few reasons to migrate:

  • You want to go x-plat to host on Linux
  • You want to host on small devices
  • You want to host in Linux-based Docker containers
  • You want to use a faster framework
    • A faster framework is useless if your code or your dependencies are slow ;-)
  • You want to use a modern framework
    • Note: ASP.NET 4.x is not outdated, still supported and still gets new features
  • You want to run your web on a Microsoft Nano Server

Depending on the level of customization you did in your existing application, the migration could be a lot of effort. Someone needs to pay for that effort, which is why I would propose not to migrate to ASP.NET Core if you don't have any problems or a real need to do it.

Conclusion

I would use ASP.NET Core for every new web project and .NET Standard for every library I need to write, because both are mature and really usable since version 2.0. You can do almost everything you can do with the full .NET Framework.

BTW: Rick Strahl also just wrote an article about that. Please read it. It's great, like almost all of his posts: https://weblog.west-wind.com/posts/2017/Oct/22/NET-Core-20-and-ASPNET-20-Core-are-finally-here

BTW: The slides of that talk are on SlideShare. If you want me to give that talk at your Meetup or user group, just ping me on Twitter or drop me an email.

André Krämer: What to do when your Xamarin app is rejected during the iOS Store review because of an exception?

Probably everyone who has ever written an app knows the feeling of impatience while developing the last features of the app. You just want to be done as quickly as possible so that you can finally push your work to the app store and present it to the world. While the last step, deploying to the store, is relatively easy on Android, on iOS it is a real challenge.

André Krämer: Help! The Xamarin iOS Simulator doesn't start and crashes with the error 'A fatal error occured when trying to start the server'

A nice feature of the Enterprise Edition of Visual Studio is that the iOS Simulator can be displayed on Windows. While you normally have to operate the simulator on the Mac when debugging a Xamarin iOS app, even if you started debugging on Windows, the Enterprise Edition of Visual Studio lets you operate the simulator directly on Windows. Once you have gotten used to this way of working, you don't really want to miss it anymore.

Holger Schwichtenberg: Trimming superfluous spaces in Microsoft SQL Server

With SQL Server 2017, Microsoft finally introduces the TRIM() function, which removes spaces at the beginning and at the end of a string.
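A quick sketch of what that looks like (not from the article):

SELECT TRIM('   .NET   ');   -- returns '.NET'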

Christina Hirth: 10 Years of Open Space – My Retrospective

Workshop day:

For a few years now there has been the option to extend the Open Space by one day of workshops – in case two days of nerd talk aren't enough  😉

This time I chose Tensorflow: Programming Neural Networks with Sören Stelzer – and it was great. Even though it is a very difficult topic (the word voodoo came up more than once), I now know enough about machine learning and neural networks to get off to a good start with it. Let me put it this way: I now know what I know and, above all, what I don't know and how we need to move on. More than that you cannot expect from a workshop. On top of that, I think Sören is a great asset to our community, which has to keep evolving just like the IT world out there. Thank you very much for your commitment!

Actually, a big thank-you to all trainers who get involved in community events!!

Insights from the next 48 hours – clustered:

Agile data-driven development – this was my own session (meaning I proposed the topic and was the topic owner, but that was the end of my duties).

I wanted to hear tips and ideas on how to organize your work according to Scrum when you work on topics like reporting, where the features are based on large amounts of data. It is one thing to write a test setup for two possible situations, and quite another to describe the whole variety of situations in reporting.

Take-aways:

  • we will have to live with the fact that our features, tests and expectations are eventually consistent  😀 What matters is that we make assumptions which we treat as “the truth” for the start.
  • Commission user labs.
  • Building in measurements long before they are evaluated is okay and does not break the concept “every feature must have business value” – even if the real business value only becomes measurable in two years.
  • Aha moment: in the world of business teams there is no separate business department. I am on the reporting team, ergo I am the business department. (I like that; ugly word  😎 )

Pitfalls with React

  • our internationalization concept is right (split texts by modules/areas/etc., one common area, load everything into the state via API)
  • package recommendation: react-intl
  • take the topic into account as early as possible; later on it can really hurt.
  • DevTool recommendation: https://github.com/crysislinux/chrome-react-perf to see the performance of the individual React components.
  • (ES)lint recommendation to avoid circular references: “import/no-internal-modules” (thanks @kjiellski)

When can Scrum work

  • when there is a real possibility to react to feedback, i.e. the developers are not resources but creative people.
  • the team in which I have the honor of helping to shape our product, and @cleverbridge, is leading when it comes to agile work.

People

  • you can join drinking games without drinking
  • dreaming at night that your partner disappointed you and then being mad at him for the whole day is a women's thing (confirmed by @AHirschmueller and @timur_zanagar) 😀

Postscript: almost forgot that

  • thanks to @agross we had a super valuable session about dotfiles
  • DDD is currently being ruined by certification, serverless by hype
  • with @a_mirmohammadi's session about the “Anonyme Abnehmer” (an anonymous-weight-losers group), the @devopenspace has definitely arrived in the category “there is nothing that can't be done”

Uli Armbruster: Workshop: Conquer your Codebase – Proven Clean Code from Practice (Materials)

Many thanks to all participants of the workshop last Friday at the Developer Open Space 2017. Despite the completely different backgrounds – from Ruby via PHP to Java, or even non-object-oriented languages like JavaScript – everything was represented, from apprentices all the way to developers with 20 years of professional experience.

[Image: One of many brave volunteers]

The code for the Super Mario kata is available here. Note the new requirements (9-12), which I added on the train ride home. Along with that, I refactored the 'lives' concept so that no Actions or Funcs have to be passed to the methods anymore. The branching statement is gone as well, which results in an almost completely flexible way to create game modes. The tests are now all implemented too, something we skipped towards the end due to lack of time. A test method usually has one line, and as already mentioned in the workshop: what is easy to test is usually also a clean solution.

[Image: 20 participants with different backgrounds]

Whoever implements an additional solution using the command pattern, thereby completely avoiding changes to existing classes, will get a voucher for our trainings from me. With it, he or she can attend any 3-day workshop in our Karlsruhe offices free of charge (travel and accommodation costs excluded).

Everyone who blogs or tweets about the workshop additionally receives a 30% discount code. Just send me the link and you will get the code by email.

[Image: A participant attempting a solution in Java instead of C#]

Further links

  • The video on enumerations with behavior that I mentioned can be watched here.
  • Thanks to Tim for the link to the Case Converter for Visual Studio.
  • I also warmly recommend the Reactive Extensions, which are available for various popular programming languages.
  • Git snippet
  • If you want to practice a bit more, you can do so with the extended FizzBuzz kata.
  • Blog post about migrating from NHibernate to Entity Framework in 3 days.

 

A video with the theoretical part (Why-How-What) will go online soon, in which I explain once more the connection between 'bad code' and the reasons for it. If you want to be notified automatically when it is published, just subscribe to my blog or send me a message.

 

 
