Michael Schwarz: After a longer break, now Apple topics on Twitter

After a longer break I have switched to Apple topics on Twitter. You can follow me at https://twitter.com/DieApfelFamilie.


Christina Hirth : Continuous Delivery Is a Journey – Part 2

After describing the context a little bit in part one, it is time to look at the individual steps the source code must pass through in order to be delivered to the customers. (I'm sorry, but it is quite a long part 🙄)

The very first step starts with pushing all the current commits to master (if you work with feature branches you will probably encounter a new level of self-made complexity which I don't intend to discuss here).

This action triggers the first checks and quality gates like licence validation and unit tests. If all checks are “green”, the new version of the software will be saved to the repository manager and tagged as “latest”.

Successful push leads to a new version of my service/pkg/docker image

At this moment the continuous integration is done, but the features are far from being used by any customer. I have first feedback that I didn't break any tests or other basic constraints, but that's all: nobody can use the features because nothing is deployed anywhere yet.

Well, let Jenkins execute the next step: deployment to the Kubernetes environment called integration (a.k.a. development).

Continuous delivery to the first environment including the execution of first acceptance tests

At this moment all my changes are tested to see whether they work together with the features currently integrated by my colleagues and whether the new features are evolving in the right direction (or are done and ready for acceptance).

This is not bad, but what if I want to be sure that I didn't break the “platform”? What if I don't want to disturb everybody else working on the same product because I made some mistakes – but still want to be human, ergo be allowed to make mistakes 😉? This means that the behavioral and structural changes introduced by my commits should be tested before they land on integration.

This must obviously be a different set of tests. They should verify that the whole system (composed of a few microservices, each with its own data persistence, and one or more UI apps) works as expected, is resilient, is secure, etc.

At this point the power of Kubernetes (k8s) and ksonnet came as a huge help. Having k8s in place (and having the infrastructure as code), it is almost a no-brainer to set up a new environment, wire up the individual systems in isolation and execute the system tests against it. This requires not only the k8s part as code but also the resources deployed and running on it. With ksonnet, every service, deployment, ingress configuration (which manages external access to the services in a cluster) or config map can be defined and configured as code. ksonnet not only supports deploying to different environments but also offers the possibility to compare them. There are a lot of tools offering these capabilities; it is not only ksonnet. It is important to choose a fitting tool, and it is even more important to invest the time and effort to configure everything as code. This is a must-have in order to achieve real automation and continuous deployment!

Good developer experience also means simplified continuous deployment

I will not include any ksonnet examples here; they have great documentation. What is important to realize is the opportunity offered by such an approach: if everything is code, then every change can be checked in. Everything checked in can be observed/monitored, can trigger pipelines and/or events, can be reverted, can be commented on – and, the feature that helped us in our solution, can be tagged.

What happens in a continuous delivery? Some change in the VCS triggers a pipeline, the fitting version of the source code is loaded (either as source code like ksonnet files, or as a package or Docker image), the configured quality gate checks are verified (the runtime environment is wired up, the specs with the referenced version are executed), and in case of success the artifact is tagged as “thumbs up” and promoted to the next environment. We started doing this manually to gather enough experience to automate the process.

Manually deploy the latest resources from integration to the review stage

If you have all this working, you have finished the part with the biggest effort. Now it is time to automate and generalize the individual steps. After continuous integration, the only changes occur in the ksonnet repo (all other source code changes are done before), which is called the deployment repo here.

Roll out, test and, if necessary, roll back the system that is ready for review

I think this post is already too long. In the next part (I think it will be the last one) I would like to write about the last essential piece: how to deploy to production without annoying anybody (no secret here, this is what feature toggles were invented for 😉), and about some open questions and decisions we encountered on our journey.

Every graphic was created with PlantUML. Thank you very much!

to be continued …

Golo Roden: Introduction to Node.js, Episode 26: Let's code (comparejs)

JavaScript, like other programming languages, has operators for comparing values. Unfortunately, the way they work often runs counter to intuition. So why not rewrite the comparison operators as a module and pay attention to predictable behavior?

Christina Hirth : Continuous Delivery Is a Journey – Part 1

Last year my colleagues and I had the pleasure of spending two days with @hamvocke and @diegopeleteiro from @thoughtworks reviewing the platform we had created. One essential part of our discussions was about CI/CD, described like this: “think about continuous delivery as a journey. Imagine every git push lands on production. This is your target, this is what your CD should enable.”

Even if (or maybe because) this thought scared the hell out of us, it became our vision for the next few months, because we saw great opportunities we would gain if we were able to work this way.

Let me describe the context we were working in:

  • Four business teams, 100% self-organized, owning 1…n Self-contained Systems (SCS), creating microservices running as Docker containers orchestrated with Kubernetes, hosted on AWS.
  • Boundaries (as in Domain Driven Design) defined based on the business we were in.
  • Each team having full ownership and full accountability for their part of business (represented by the SCS).
  • Basic heuristics regarding source code organisation: “share nothing” about business logic, “share everything” about utility functions (in OSS manner), about the experiences you gained, the lessons you learned, the errors you made.
  • Ensuring the code quality and the software quality is 100% team responsibility.
  • You build it, you run it.
  • One Platform-as-a-Service team to enable these business teams to deliver features fast.
  • GitLab as VCS, Jenkins as build server, Nexus as package repository.
  • Trunk-based development, no cherry picking, “roll fast forward” over roll back.
Teams
4 Business Teams + 1 Platform-as-a-Service Team = One Product

The architecture we have chosen was meant to support our organisation: independent teams able to work and deliver features fast and independently. They should decide themselves when and what they deploy. In order to achieve this we defined a few rules regarding inter-system communication. The most important ones are:

  • Event-driven architecture: no synchronous communication, only asynchronous communication via the Domain Event Bus
  • Non-blocking systems: every SCS must remain (reduced) functional even if all the other systems are down

We had only a couple of exceptions to these rules. As an example: authentication doesn't really make sense in an asynchronous manner.

Working in self-organized, independent teams is a really cool thing. But

with great power there must also come great responsibility

Uncle Ben to his nephew

Even though we set some guardrails regarding the overall architecture, the teams still had ownership of the internal architecture decisions. Since we didn't have continuous delivery in place at the beginning, every team was solely responsible for deploying its systems. Due to the missing automation we were not only predestined to make human errors, we were also blind to the couplings between our services. (And of course we spent a lot of time doing stuff manually instead of letting Jenkins or GitLab or some other tool do it for us 🤔)

One example: every one of our systems had at least one React app and a GraphQL API as its main communication (read/write/subscribe) channel. One of the best things about GraphQL is the possibility to include the GraphQL schema in the React app and this way to have the API interface definition included in the client application.

Is this not cool? It can be. Or it can lead to some very smelly behavior, to really tight coupling and to the inability to deploy the app and the API independently. And just like my friend @etiennedi says: “If two services cannot be deployed independently they aren’t two services!”

This was the first lesson we learned on this journey: if you don't have a CD pipeline, you will most probably hide the flaws of your design.

One can surely ask “what is the problem with manual deployment?” – nothing, if you only have a few services to handle, if everyone in your team knows about these couplings and dependencies and is able to execute the very precise deployment steps to minimize the downtime. But otherwise? This method doesn't scale, this method is not very professional, and the biggest problem: this method ignores the possibilities offered by Kubernetes to safely roll out, take down, or scale everything you have built.

Having an automated, standardized CD pipeline as described at the beginning – with the goal that every commit lands on production within a few seconds – forces everyone to think about the consequences of his or her commit, to write backwards-compatible code and to become a more considerate developer.

to be continued …

Stefan Henneken: MEF Part 3 – Life cycle management and monitoring

Part 1 took a detailed look at binding of composable parts. In an application, however, we sometimes need to selectively break such bindings without deleting the entire container. We will look at interfaces which tell parts whether binding has taken place or whether a part has been deleted completely.

The IPartImportsSatisfiedNotification interface

For parts, it can be helpful to know when binding has taken place. To achieve this, we implement an interface called IPartImportsSatisfiedNotification. This interface can be implemented in both imports and exports.

[Export(typeof(ICarContract))]
public class BMW : ICarContract, IPartImportsSatisfiedNotification
{
    // ...
    public void OnImportsSatisfied()
    {
        Console.WriteLine("BMW import is satisfied.");
    }
}
class Program : IPartImportsSatisfiedNotification
{
    [ImportMany(typeof(ICarContract))]
    private IEnumerable<Lazy<ICarContract>> CarParts { get; set; }
 
    static void Main(string[] args)
    {
        new Program().Run();
    }
    void Run()
    {
        var catalog = new DirectoryCatalog(".");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
        foreach (Lazy<ICarContract> car in CarParts)
            Console.WriteLine(car.Value.StartEngine("Sebastian"));
        container.Dispose();
    }
    public void OnImportsSatisfied()
    {
        Console.WriteLine("CarHost imports are satisfied.");
    }
}

Sample 1 (Visual Studio 2010) on GitHub

When the above program is run, the host's OnImportsSatisfied() method is executed right after container.ComposeParts() returns. The first time an export is accessed via Lazy<T>, the export first runs its constructor, then its OnImportsSatisfied() method, and finally its StartEngine() method.

If we don't use the Lazy<T> class, the sequence in which the methods are called is somewhat different. In this case, after container.ComposeParts() is executed, the constructor and then the OnImportsSatisfied() method are first executed for all exports. Only then is the OnImportsSatisfied() method of the host called, and finally the StartEngine() method for all exports.
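
For comparison, the import from Sample 1 could be declared without Lazy<T> like this (just a sketch; the rest of the program stays unchanged):

[ImportMany(typeof(ICarContract))]
private IEnumerable<ICarContract> CarParts { get; set; }  // exports are instantiated eagerly during ComposeParts()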

Using IDisposable

As usual in .NET, the IDisposable interface should also be implemented by exports. Because the Managed Extensibility Framework manages the parts, only the container containing the parts should call Dispose(). If the container calls Dispose(), it also calls the Dispose() method of all of the parts. It is therefore important to call the container’s Dispose() method once the container is no longer required.
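
Since CompositionContainer itself implements IDisposable, a using block is an easy way to guarantee this (a small variation of the Run() method from Sample 1):

void Run()
{
    var catalog = new DirectoryCatalog(".");
    // the using block ensures container.Dispose() is called,
    // which in turn disposes all parts created by the container
    using (var container = new CompositionContainer(catalog))
    {
        container.ComposeParts(this);
        foreach (Lazy<ICarContract> car in CarParts)
            Console.WriteLine(car.Value.StartEngine("Sebastian"));
    }
}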

Releasing exports

If the creation policy is defined as NonShared, multiple instances of the same export will be created. These instances will then only be released when the entire container is destroyed by using the Dispose() method. With long-lived applications in particular, this can lead to problems. Consequently, the CompositionContainer class possesses the methods ReleaseExports() and ReleaseExport(). ReleaseExports() destroys all parts, whilst ReleaseExport() releases parts individually. If an export has implemented the IDisposable interface, its Dispose() method is called when you release the export. This allows selected exports to be removed from the container, without having to destroy the entire container. The ReleaseExports() and ReleaseExport() methods can only be used on exports for which the creation policy is set to NonShared.

In the following example, the IDisposable interface has been implemented in each export.

using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarBMW
{
    [Export(typeof(ICarContract))]
    public class BMW : ICarContract, IDisposable
    {
        public BMW()
        {
            Console.WriteLine("BMW constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the BMW.", name);
        }
        public void Dispose()
        {
            Console.WriteLine("Disposing BMW.");
        }
    }
}

The host first binds all exports to the import. After calling the StartEngine() method, we use the ReleaseExports() method to release all of the exports. After re-binding the exports to the import, this time we remove the exports one by one. Finally, we use the Dispose() method to destroy the container.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.NonShared)]
        private IEnumerable<Lazy<ICarContract>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
 
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
 
            Console.WriteLine("");
            Console.WriteLine("ReleaseExports.");
            container.ReleaseExports<ICarContract>(CarParts);
            Console.WriteLine("");
 
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
 
            Console.WriteLine("");
            Console.WriteLine("ReleaseExports.");
            foreach (Lazy<ICarContract> car in CarParts)
                container.ReleaseExport<ICarContract>(car);
 
            Console.WriteLine("");
            Console.WriteLine("Dispose Container.");
            container.Dispose();
        }
    }
}

The program output therefore looks like this:

CommandWindowSample02

Sample 2 (Visual Studio 2010) on GitHub

Golo Roden: Introduction to Node.js, Episode 25: Let's code (is-subset-of)

If you want to know in JavaScript whether an array or an object is a subset of another array or object, there is no easy way to find out, especially not if a recursive analysis is desired. So why not develop a module for that purpose?

André Krämer: Looking for reinforcements at Quality Bytes GmbH in Sinzig (software developer .NET, software developer Angular, Xamarin, ASP.NET Core)

This could be your new desk. In the summer of 2018, together with a partner, I founded Quality Bytes GmbH in Sinzig am Rhein, located between Bonn and Koblenz. Since then we have been developing exciting solutions in the web and mobile space with a team of four developers. We rely on modern technologies and tools such as ASP.NET Core, Angular, Xamarin, Azure DevOps, git, TypeScript and C#. We currently have several positions to fill.

Code-Inside Blog: Check Scheduled Tasks with Powershell

Task Scheduler via Powershell

Let’s say we want to know the latest result of the “GoogleUpdateTaskMachineCore” task and the corresponding actions.

All you have to do is this (in a Run-As-Administrator PowerShell console):

Get-ScheduledTask | where TaskName -EQ 'GoogleUpdateTaskMachineCore' | Get-ScheduledTaskInfo

The result should look like this:

LastRunTime        : 2/26/2019 6:41:41 AM
LastTaskResult     : 0
NextRunTime        : 2/27/2019 1:02:02 AM
NumberOfMissedRuns : 0
TaskName           : GoogleUpdateTaskMachineCore
TaskPath           : \
PSComputerName     :

Be aware that the “LastTaskResult” might be displayed as an integer. The full “result code list” documentation only lists the hex value, so you need to convert the number to hex.

Now, if you want to access the corresponding actions you need to work with the “actual” task like this:

PS C:\WINDOWS\system32> $task = Get-ScheduledTask | where TaskName -EQ 'GoogleUpdateTaskMachineCore'
PS C:\WINDOWS\system32> $task.Actions


Id               :
Arguments        : /c
Execute          : C:\Program Files (x86)\Google\Update\GoogleUpdate.exe
WorkingDirectory :
PSComputerName   :

If you want to dig deeper, just checkout all the properties:

PS C:\WINDOWS\system32> $task | Select *


State                 : Ready
Actions               : {MSFT_TaskExecAction}
Author                :
Date                  :
Description           : Keeps your Google software up to date. If this task is disabled or stopped, your Google
                        software will not be kept up to date, meaning security vulnerabilities that may arise cannot
                        be fixed and features may not work. This task uninstalls itself when there is no Google
                        software using it.
Documentation         :
Principal             : MSFT_TaskPrincipal2
SecurityDescriptor    :
Settings              : MSFT_TaskSettings3
Source                :
TaskName              : GoogleUpdateTaskMachineCore
TaskPath              : \
Triggers              : {MSFT_TaskLogonTrigger, MSFT_TaskDailyTrigger}
URI                   : \GoogleUpdateTaskMachineCore
Version               : 1.3.33.23
PSComputerName        :
CimClass              : Root/Microsoft/Windows/TaskScheduler:MSFT_ScheduledTask
CimInstanceProperties : {Actions, Author, Date, Description...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties

If you have worked with PowerShell in the past, this blog post should be “easy”, but it took me a while to find the result code and to check whether the action was correct or not.

Hope this helps!

Golo Roden: Introduction to Node.js, Episode 24: Let's code (typedescriptor)

JavaScript's typeof operator has some weaknesses: for example, it cannot distinguish between objects and arrays, and it incorrectly identifies null as an object. A custom module that reliably identifies and describes types provides a remedy.

Jürgen Gutsch: Problems using a custom Authentication Cookie in classic ASP.​NET

A customer of mine created their own authentication service that combines various login mechanisms in their on-premise application environment. This central service combines authentication via Active Directory, classic ASP.NET Forms Authentication and a custom login via the number of an access badge.

  • Active Directory (For employees only)
  • Forms Authentication (against a user store in the database for extranet users, and against the AD for employees coming in via the extranet)
  • Access badges (for employees, this authentication results in lower access rights)

This worked pretty nicely in their environment until I created a new application that needs to authenticate against this service and was built using ASP.NET 4.7.2.

BTW: Unfortunately I couldn't use ASP.NET Core here because I needed to reuse specific MVC components that are shared between all the applications.

I also wrote "classic ASP.NET" which feels a bit wired. I worked with ASP.NET for a long time (.NET 1.0) and still work with ASP.NET for specific customers. But it really is kinda classic since ASP.NET Core is out and since I worked with ASP.NET Core a lot as well.

How the customer solution works

I cannot go into deep detail, because this is the customer's code; you only need to get the idea.

The reason it didn't work with the newer ASP.NET Framework is that they use a custom authentication cookie based on ASP.NET Forms Authentication. I'm pretty sure that when the authentication service was created they didn't know about ASP.NET Identity, or it didn't exist yet. They created a custom identity that stores all the user information as properties. They build an authentication ticket out of it and use Forms Authentication to encrypt and store that cookie. The cookie name is customized in the web.config, which is not an issue. All the apps share the same encryption information.
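
To illustrate the idea, such a cookie is typically created roughly like this. This is just a sketch, not the customer's actual code; the user name value and the timeout are placeholders:

private void SetCustomAuthCookie(CustomUserSerializeModel serMod, HttpResponse response)
{
    // serialize the custom user model into the UserData field of a forms authentication ticket
    var userData = new JavaScriptSerializer().Serialize(serMod);
    var ticket = new FormsAuthenticationTicket(
        1,                            // version
        "user name goes here",        // placeholder for the login name
        DateTime.Now,                 // issue date
        DateTime.Now.AddMinutes(30),  // expiration (placeholder)
        false,                        // not persistent
        userData);                    // the serialized custom user model

    // encrypt the ticket and store it in the cookie configured in the web.config
    var encryptedTicket = FormsAuthentication.Encrypt(ticket);
    response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket));
}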

The client applications that use the central authentication service read that cookie, decrypt it using Forms Authentication, and deserialize the data into the custom authentication ticket that contains the user information. The user then gets created, stored in the User property of the current HttpContext, and is authenticated in the application.

This sounds pretty straightforward and works well, except in newer ASP.NET versions.

How it should work

The best way to use the authentication cookie would be to use the ASP.NET Identity mechanisms to create that cookie. After the authentication happened on the central service, the needed user information should have been stored as claims inside the identity object, instead of as properties in a custom identity object. The authentication cookie should have been stored using the Forms Authentication mechanism only, without a custom authentication ticket. Forms Authentication is able to create that ticket including all the claims.

On the client applications, Forms Authentication would have read the cookie and created a new identity including all the claims defined in the central authentication service. The Forms Authentication module would have stored the user in the current HttpContext as well.

Less code, much easier. IMHO.

What is the actual problem?

The actual problem is that the client applications read the authentication cookie from the cookie collection in Application_PostAuthenticateRequest:

// removed logging and other overhead

protected void Application_PostAuthenticateRequest(Object sender, EventArgs e)
{
    var serMod = default(CustomUserSerializeModel);

    var authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (authCookie != null)
    {
        var ticket = FormsAuthentication.Decrypt(authCookie.Value);
        var serializer = new JavaScriptSerializer();
        serMod = serializer.Deserialize<CustomUserSerializeModel>(ticket.UserData);
    }

    // some fallback code ...

    if (serMod != null)
    {
        var user = new CustomUser(serMod);
        var cultureInfo = CultureInfo.GetCultureInfo(user.Language);

        HttpContext.Current.User = user;
        Thread.CurrentThread.CurrentCulture = cultureInfo;
        Thread.CurrentThread.CurrentUICulture = cultureInfo;
    }

    // some more code ...
}

In newer ASP.NET Frameworks the authentication cookie gets removed from the cookie collection after the user was authenticated.

Actually I have no idea since which version the cookie gets removed, and this is a good thing for security reasons anyway, but there is no information about it in the release notes since ASP.NET 4.0.

Anyway, the cookie collection no longer contains the authentication cookie, and the cookie variable is null when I try to read it from the collection.

BTW: The cookie is still in the request headers and could be read manually. But because of the encryption it would be difficult to read it that way.

I tried to solve this problem by reading the cookie in Application_AuthenticateRequest. This doesn't work either, because the FormsAuthenticationModule reads the cookie before that.

The next try was to read it in Application_BeginRequest. This generally works: I get the cookie and I can read it. But because the cookie is configured as the authentication cookie, the FormsAuthenticationModule tries to read it afterwards and fails. It sets the User to null because there is an authentication cookie available which, from its point of view, doesn't contain valid information. Which also kind of makes sense.

So this is not the right solution either.

I worked on that problem for almost four months. (Not the full four months, but many hours within those four months.) I compared applications and other solutions. Because there was no hint about the removal of the authentication cookie, and because it was still working in the old applications, I was pretty confused about the behavior.

I studied the source code of ASP.NET to find a solution. And there is one.

And finally the solution

The solution is to read the cookie in FormsAuthentication_OnAuthenticate in the global.asax and not to store the user in the current context, but in the event arguments' User property. The user then gets stored in the context by the FormsAuthenticationModule, which also executes this event handler.

// removed logging and other overhead

protected void FormsAuthentication_OnAuthenticate(Object sender, FormsAuthenticationEventArgs args)
{
    AuthenticateUser(args);
}

public void AuthenticateUser(FormsAuthenticationEventArgs args)
{
    var serMod = default(CustomUserSerializeModel);

    var authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
    if (authCookie != null)
    {
        var ticket = FormsAuthentication.Decrypt(authCookie.Value);
        var serializer = new JavaScriptSerializer();
        serMod = serializer.Deserialize<CustomUserSerializeModel>(ticket.UserData);
    }

    // some fallback code ...

    if (serMod != null)
    {
        var user = new CustomUser(serMod);
        var cultureInfo = CultureInfo.GetCultureInfo(user.Language);

        args.User = user; // <<== this does the thing!
        Thread.CurrentThread.CurrentCulture = cultureInfo;
        Thread.CurrentThread.CurrentUICulture = cultureInfo;
    }

    // some more code ...
}

That's it.

Conclusion

Please don't create custom authentication cookies; try the Forms Authentication and ASP.NET Identity mechanisms first. This is much simpler and won't break because of future changes.

Also, please don't write a custom authentication service, because there is already a good one out there that is almost the standard. Have a look at IdentityServer, which also provides the option to handle different authentication mechanisms using common standards and technologies.

If you really need to create a custom solution, be careful and know what you are doing.

Jürgen Gutsch: Thoughts about repositories and ORMs or why we love rocket science!

The last architectural brain dump I did on my blog was more than three years ago. At that time it was on my German blog, which unfortunately was shut down. Anyway, this is another brain dump. This time I want to write about the sense of the repository pattern in combination with an object-relational mapper (ORM) like Entity Framework (EF) Core.

A brain dump is a blog post where I write down a personal opinion about something. Someone surely has a different opinion, and that is absolutely fine. I'm always happy to learn about different opinions and thoughts, so please tell me afterwards in the comments.

In the past years I had some great discussions about the sense and nonsense of the repository pattern. Mostly offline, but also online on Twitter or in the comments of this blog. I also followed discussions on Twitter, in Jeff Fritz's stream. (Unfortunately I can't find all the links to the online discussions anymore.)

My point is that you don't need repositories if you use a unit of work. It is not only my idea, but it makes absolute sense to me, and I also prefer not to use repositories when I use EF or EF Core. There are many reasons. Let's look at them.

BTW: One of the leads of the .NET user group Bern was one of the first people who pointed me to this thought many years ago, while I was complaining about an early EF version on my old blog.

YAGNI - you ain't gonna need it

In the classic architecture pattern you had three layers: UI, business and data. That made kind of sense in the past, in a world of monolithic applications without an ORM. At that time you wrapped all of the SQL and data mapping stuff into the data layer. The actual work with the data was done in the business layer, and the user interacted with the data in the UI. Later these data layers became more and more generic or turned into repositories.

BTW: At least I created generic data layers in the past that generated the SQL based on the type the data needed to be mapped to. I used the property information of the types as well as attributes. That was just before ORMs were a thing in .NET. Oh... yes... I created OR mappers, but I didn't really think of it that way back then ;-)

Wait... What is a data layer for, if you already use an ORM?

To encapsulate the ORM? Why would you do that? To change the underlying ORM, if needed? When did you ever change the ORM, and why?

These days you don't need to change the ORM when you change the database. You only need to change the data provider of the ORM, because the ORM is generic and able to access various database systems.
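
A minimal sketch of what that means with EF Core (assuming the corresponding provider packages, e.g. Microsoft.EntityFrameworkCore.SqlServer or Microsoft.EntityFrameworkCore.Sqlite, are referenced; the connection strings are placeholders):

using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // switching the database means switching the provider registration,
        // the rest of the application keeps working against the same DbContext
        optionsBuilder.UseSqlServer("Server=.;Database=Awesome;Trusted_Connection=True");
        // optionsBuilder.UseSqlite("Data Source=awesome.db");
    }
}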

To not have ORM dependencies in the business and UI layers? You'll ship your app including all dependencies anyway.

To test the business logic more easily in an isolated way, without the ORM? That is possible anyway, and you would need to test the repositories in an isolated way as well. Just mock the DbContext (see the small test sketch further below).

You ain't gonna need an additional layer that you also have to maintain and test. In most cases this is senseless additional code. It just increases the lines of code and only makes sense if you get paid for code instead of solutions (IMHO).

KISS - keep it simple and stupid

In almost all cases, the simplest solution is the best one. Why? Because it is a solution and because it is simple ;-)

Simple to understand, simple to test and simple to maintain. For most of us it is hard to create a simple solution, because our brains don't work that way. That's the crux we have as software developers: we are able to understand complex scenarios, write complex programs, build software for self-driving cars, video games and space stations.

In reality our job is to make complex things as simple as possible. Most of us do that by writing business software that helps a lot of people do their work efficiently, saving a lot of time and money. But often we use rocket science, or skyscraper technology, just to build a tiny house.

Why? Because we are developers, we think in a complex way and we really, really love rocket science.

But sometimes we should look for a slightly simpler solution. Don't write code you don't need. Don't create a complex architecture if you just need a tiny house. Don't use rocket science to build a car. Keep it simple and stupid. Your application just needs to work for the customer. This way you save your customers' money and your own. You save time to make more customers happy. Happy customers also mean more money for you and your company in the mid term.

SRP - Single responsibility principle

I think the SRP was confused a little in the past. What kind of responsibilities are we talking about? Should business logic not fetch data, or should a product service not create orders? Do you see the point? In my opinion we should split the responsibilities by topic first; later, inside the service classes, we can split on the method level by abstraction or whatever else we need to separate.

This keeps the dependencies as small as possible, and every service is a single isolated module that is responsible for a specific topic instead of a specific technology or design pattern.

BTW: What is a design pattern for? IMO it is needed to classify a piece of code, to talk about it and to get a common language. Don't think in patterns and write patterns. Think about features and write working code instead.

Let me write some code to describe what I mean

Back to the repositories: let's write some code and compare some code snippets. This is just some kind of fake code, but I saw something like this a lot in the past. In the first snippet we have a business layer which needs three repositories to update some data. Mostly a repository is created per database table or per entity. This is why this business layer needs to use two more repositories just to check for additional fields:

public class AwesomeBusiness
{
    private readonly AwesomeRepository _awesomeRepository;
    private readonly CoolRepository _coolRepository;
    private readonly SuperRepository _superRepository;

    public AwesomeBusiness(
        AwesomeRepository awesomeRepository,
        CoolRepository coolRepository,
        SuperRepository superRepository)
    {
        _awesomeRepository = awesomeRepository;
        _coolRepository = coolRepository;
        _superRepository = superRepository;
    }
    
    public void UpdateAwesomeness(int id)
    {
        var awesomeness = _awesomeRepository.GetById(id);
        awesomeness.IsCool = _coolRepository.HasCoolStuff(awesomeness.Id);
        awesomeness.IsSuper = _superRepository.HasSuperStuff(awesomeness.Id);
        awesomeness.LastCheck = DateTime.Now;
        _awesomeRepository.UpdateAwesomeness(awesomeness);
    }
}

public class AwesomeRepository
{
    private readonly AppDbContext _dbContext;

    public AwesomeRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal void UpdateAwesomeness(Awesomeness awesomeness)
    {
        var aw = _dbContext.Awesomenesses.FirstOrDefault(x => x.Id == awesomeness.Id);
        aw.IsCool = awesomeness.IsCool;
        aw.IsSuper = awesomeness.IsSuper;
        aw.LastCheck = awesomeness.LastCheck;
        _dbContext.SaveChanges();
    }
}

public class SuperRepository
{
    private readonly AppDbContext _dbContext;

    public SuperRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal bool HasSuperStuff(int id)
    {
        return _dbContext.SuperStuff.Any(x => x.AwesomenessId == id);
    }
}

public class CoolRepository
{
    private readonly AppDbContext _dbContext;

    public CoolRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal bool HasCoolStuff(int id)
    {
        return _dbContext.CoolStuff.Any(x => x.AwesomenessId == id);
    }
}

public class Awesomeness
{
    public int Id { get; set; }
    public bool IsCool { get; set; }
    public bool IsSuper { get; set; }
    public DateTime LastCheck { get; set; }
}

Usually the repositories are much bigger than these small classes; they provide functionality for the default CRUD operations on the entity and sometimes more.

I've seen a lot of repositories over the last 15 years; some were kind of generic or planned to be generic. Most of them are pretty individual, depending on the needs of the entity they work on. This is so much overhead for such a simple feature.

BTW: I remember a Clean Code training I did in Romania for an awesome company with great developers who were highly motivated. I worked with them for years and it was always a pleasure. Anyway, at the end of that training I did a small code kata that everyone should know: the FizzBuzz kata. It was awesome. These great developers used all the patterns and practices they had learned during the training to try to solve that kata. After an hour they had a non-working enterprise FizzBuzz application. It was rocket science just to iterate through a list of numbers. They completely forgot about the most important Clean Code principles, KISS and YAGNI. In the end I was the bad trainer when I wrote the FizzBuzz in just a few lines of code, in a single method, without any interfaces, factories or repositories.

Why not write the code just like this?

public class AwesomeService
{
    private readonly AppDbContext _dbContext;

    public AwesomeService(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void UpdateAwesomeness(int id)
    {
        var awesomeness = _dbContext.Awesomenesses.First(x => x.Id == id);
        
        awesomeness.IsCool = _dbContext.CoolStuff.Any(x => x.AwesomenessId == id);
        awesomeness.IsSuper = _dbContext.SuperStuff.Any(x => x.AwesomenessId == id);
        awesomeness.LastCheck = DateTime.Now;

        _dbContext.SaveChanges();
    }
}

public class Awesomeness
{
    public int Id { get; set; }
    public bool IsCool { get; set; }
    public bool IsSuper { get; set; }
    public DateTime LastCheck { get; set; }
}

This is simple: less code, fewer dependencies, easy to understand, and it works anyway. Sure, it uses EF directly. But if there really, really, really is a need to encapsulate the ORM, why don't you create repositories by topic instead of per entity? Let's have a look:

public class AwesomeService
{
    private readonly AwesomeRepository _awesomeRepository;

    public AwesomeService(AwesomeRepository awesomeRepository)
    {
        _awesomeRepository = awesomeRepository;
    }     

    public void UpdateAwesomeness(int id)
    {
        _awesomeRepository.UpdateAwesomeness(id);
    }   
}

public class AwesomeRepository
{
    private readonly AppDbContext _dbContext;

    public AwesomeRepository(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    internal void UpdateAwesomeness(int id)
    {
        var awesomeness = _dbContext.Awesomenesses.First(x => x.Id == id);
        awesomeness.IsCool = _dbContext.CoolStuff.Any(x => x.AwesomenessId == id);
        awesomeness.IsSuper = _dbContext.SuperStuff.Any(x => x.AwesomenessId == id);
        awesomeness.LastCheck = DateTime.Now;
        _dbContext.SaveChanges();
    }
}

public class Awesomeness
{
    public int Id { get; set; }
    public bool IsCool { get; set; }
    public bool IsSuper { get; set; }
    public DateTime LastCheck { get; set; }
}

This looks clean as well and encapsulates EF pretty well. But why do we really need the AwesomeService in that case? It just calls the repository. It doesn't contain any real logic, but it needs to be tested and maintained. I also saw this kind of service a lot over the last 15 years. This doesn't make sense to me anymore either. In the end I always end up with the second solution.
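
Coming back to the earlier point that you can just mock the DbContext: with EF Core you don't even need a mocking framework for that. Here is a minimal sketch using the InMemory provider and xUnit against the simpler AwesomeService from above (the one that takes the AppDbContext directly). It assumes that AppDbContext has a constructor accepting DbContextOptions and that the CoolStuff entity looks like the queries above suggest; neither is shown in the snippets of this post:

using System;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class AwesomeServiceTests
{
    [Fact]
    public void UpdateAwesomeness_SetsIsCool_WhenCoolStuffExists()
    {
        // a fresh in-memory database per test keeps the tests isolated
        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseInMemoryDatabase(Guid.NewGuid().ToString())
            .Options;

        using (var dbContext = new AppDbContext(options))
        {
            dbContext.Awesomenesses.Add(new Awesomeness { Id = 1 });
            dbContext.CoolStuff.Add(new CoolStuff { AwesomenessId = 1 });
            dbContext.SaveChanges();

            var service = new AwesomeService(dbContext);
            service.UpdateAwesomeness(1);

            Assert.True(dbContext.Awesomenesses.First(x => x.Id == 1).IsCool);
        }
    }
}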

We don't need to have a three-layered architecture just because we had one for the last 20, 30 or 40 years.

In software architecture, it always depends.

Architectural point of view

My architectural point of view has changed over the last few years. I don't look at data objects anymore. When I do architecture, I don't wear the OOP glasses anymore. I try to explore the data flows inside the solution first. Where does the data come from, how is the data transformed on the way to its target, and where does the data go? I don't think in layers anymore. I try to figure out the best way to let the data flow in the right direction. I also try to take the user's perspective to find an efficient way for the users as well.

I'm looking for the main objects the application is working on. In this case an object isn't a .NET object or any .NET type; it is just a concept. If I'm working on a shopping cart, the main object is the order, because that is what produces money. This is the object that triggers most of the actions and contains and produces most of the data.

Depending on the size and the kind of application, I end up using different kinds of architectural patterns.

BTW: Pattern in the sense of an idea of how to solve the current problem, not in the sense of patterns I have to use. I'll write about these architectural patterns in a separate blog post soon.

No matter which pattern is used, there's no repository anymore. There are services that provide the data in the way I need it in the UI. Sometimes the services are called handlers, depending on which architectural pattern is used, but they work the same way. Mostly they are completely independent of each other. There's no such thing as a UserService or a GroupService, but there's an AuthService or a ProfileService. There is no ProductService, CategoryService or CheckoutService, but an OrderService.

What do you think?

Does this make sense to you? What do you think?

I know this is a topic that is always discussed controversially, but it shouldn't be. Tell me your opinion about it. I'm curious about your thoughts and would really like to learn more from you.

For me it has worked quite well this way. I reduced a lot of overhead and a lot of code I needed to maintain.

Stefan Henneken: IEC 61131-3: The ‘Decorator’ Pattern

With the help of the decorator pattern, new function blocks can be developed on the basis of existing function blocks without overstraining the principle of inheritance. In the following post, I will introduce the use of this pattern using a simple example.

The example should calculate the price (GetPrice()) for different pizzas. Even if this example has no direct relation to automation technology, the basic principle of the decorator pattern is described quite well. The pizzas could just as well be replaced by pumps, cylinders or axes.

First variant: The ‘Super Function Block’

In the example, there are two basic kinds of pizza: American style and Italian style. Each of these basic kinds can have salami, cheese and broccoli as toppings.

The most obvious approach could be to place the entire functionality in one function block.

Properties determine the ingredients of the pizza, while a method performs the desired calculation.

Picture01

Furthermore, FB_init() is extended in such a way that the ingredients are already defined during the declaration of the instances. Thus different pizza variants can be created quite simply.

fbAmericanSalamiPizza : FB_Pizza(ePizzaStyle := E_PizzaStyle.eAmerican,
                                 bHasBroccoli := FALSE,
                                 bHasCheese := TRUE,
                                 bHasSalami := TRUE);
fbItalianVegetarianPizza : FB_Pizza(ePizzaStyle := E_PizzaStyle.eItalian,
                                    bHasBroccoli := TRUE,
                                    bHasCheese := FALSE,
                                    bHasSalami := FALSE);

The GetPrice() method evaluates this information and returns the requested value:

METHOD PUBLIC GetPrice : LREAL
 
IF (THIS^.eStyle = E_PizzaStyle.eItalian) THEN
  GetPrice := 4.5;
ELSIF (THIS^.eStyle = E_PizzaStyle.eAmerican) THEN
  GetPrice := 4.2;
ELSE
  GetPrice := 0;
  RETURN;
END_IF
IF (THIS^.bBroccoli) THEN
  GetPrice := GetPrice + 0.8;
END_IF
IF (THIS^.bCheese) THEN
  GetPrice := GetPrice + 1.1;
END_IF
IF (THIS^.bSalami) THEN
  GetPrice := GetPrice + 1.4;
END_IF

Actually, it’s a pretty solid solution. But as is so often the case in software development, the requirements change. So the introduction of new pizzas may require additional ingredients. The FB_Pizza function block is constantly growing and so is its complexity. The fact that everything is contained in one function block also makes it difficult to distribute the final development among several people.

Sample 1 (TwinCAT 3.1.4022) on GitHub

Second variant: The ‘Hell of Inheritance’

In the second approach, a separate function block is created for each pizza variant. In addition, an interface (I_Pizza) defines all common properties and methods. Since the price has to be determined for all pizzas, the interface contains the GetPrice() method.

The two function blocks FB_PizzaAmericanStyle and FB_PizzaItalianStyle implement this interface. Thus the function blocks replace the enumeration E_PizzaStyle and are the basis for all further pizzas. The GetPrice() method returns the respective base price for these two FBs.

Based on this, different pizzas are defined with the different ingredients. For example, the pizza Margherita has cheese and tomatoes. The salami pizza also needs salami. Thus, the FB inherits for the salami pizza from the FB of the pizza Margherita.

The GetPrice() method always uses the super pointer to access the underlying method and adds the amount for its own ingredients, given that they are available.

METHOD PUBLIC GetPrice : LREAL
 
GetPrice := SUPER^.GetPrice();
IF (THIS^.bSalami) THEN
  GetPrice := GetPrice + 1.4;
END_IF

This results in an inheritance hierarchy that reflects the dependencies of the different pizza variants.

Picture02

This solution also looks very elegant at first glance. One advantage is the common interface. Each instance of one of the function blocks can be assigned to an interface pointer of type I_Pizza. This is helpful, for example, with methods, since each pizza variant can be passed via a parameter of type I_Pizza.

Also different pizzas can be stored in an array and the common price can be calculated:

PROGRAM MAIN
VAR
  fbItalianPizzaPiccante     : FB_ItalianPizzaPiccante;
  fbItalianPizzaMozzarella   : FB_ItalianPizzaMozzarella;
  fbItalianPizzaSalami       : FB_ItalianPizzaSalami;
  fbAmericanPizzaCalifornia  : FB_AmericanPizzaCalifornia;
  fbAmericanPizzaNewYork     : FB_AmericanPizzaNewYork;
  aPizza                     : ARRAY [1..5] OF I_Pizza;
  nIndex                     : INT;
  lrPrice                    : LREAL;
END_VAR
 
aPizza[1] := fbItalianPizzaPiccante;
aPizza[2] := fbItalianPizzaMozzarella;
aPizza[3] := fbItalianPizzaSalami;
aPizza[4] := fbAmericanPizzaCalifornia;
aPizza[5] := fbAmericanPizzaNewYork;
 
lrPrice := 0;
FOR nIndex := 1 TO 5 DO
  lrPrice := lrPrice + aPizza[nIndex].GetPrice();
END_FOR

Nevertheless, this approach has several disadvantages.

What happens if the menu is adjusted and the ingredients of a pizza change as a result? Assuming the salami pizza should also get mushrooms, the pizza Piccante also inherits the mushrooms, although this is not desired. The entire inheritance hierarchy must be adapted. The solution becomes inflexible because of the firm relationship through inheritance.

How does the system handle individual customer wishes? For example, double cheese or ingredients that are not actually intended for a particular pizza.

If the function blocks are located in a library, these adaptations would be only partially possible.

Above all, there is a danger that existing applications compiled with an older version of the library will no longer behave correctly.

Sample 2 (TwinCAT 3.1.4022) on GitHub

Third variant: The Decorator Pattern

Some design principles of object-oriented software development are helpful to optimize the solution. Adhering to these principles should help to keep the software structure clean.

Open-closed Principle

Open for extensions: This means that the original functionality of a module can be changed by using extension modules. The extension modules only contain the adaptations of the original functionality.

Closed for changes: This means that no changes to the module are necessary to extend it. The module provides defined extension points to connect to the extension modules.

Identify those aspects that change and separate them from those that remain constant

How are the function blocks divided so that extensions are necessary in as few places as possible?

So far, the two basic pizza varieties, American style and Italian style, have been represented by function blocks. So why not also define the ingredients as function blocks? This would enable us to comply with the Open Closed Principle. Our basic varieties and ingredients are constant and therefore closed to change. However, we must ensure that each basic variety can be extended with any number of ingredients. The solution would therefore be open to extensions.

The decorator pattern does not rely on inheritance when behaviour is extended. Rather, each side order can also be understood as a wrapper. This wrapper covers an already existing dish. To make this possible, the side orders also implement the interface I_Pizza. Each side order also contains an interface pointer to the underlying wrapper.

The basic pizza type and the side orders are thereby nested into each other. If the GetPrice() method is called from the outer wrapper, it delegates the call to the underlying wrapper and then adds its price. This goes on until the call chain has reached the basic pizza type that returns the base price.

Picture03

The innermost wrapper returns its base price:

METHOD GetPrice : LREAL
 
GetPrice := 4.5;

Each further decorator adds the requested surcharge to the underlying wrapper:

METHOD GetPrice : LREAL
 
IF (THIS^.ipSideOrder <> 0) THEN
  GetPrice := THIS^.ipSideOrder.GetPrice() + 0.9;
END_IF

So that the underlying wrapper can be passed to the function block, the method FB_init() is extended by an additional parameter of type I_Pizza. Thus, the desired ingredients are already defined during the declaration of the FB instances.

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains  : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode   : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  ipSideOrder   : I_Pizza;
END_VAR
 
THIS^.ipSideOrder := ipSideOrder;

To make it easier to see how the call passes through the individual wrappers, I have provided the GetDescription() method. Each wrapper adds a short description to the existing string.

Picture04

In the following example, the ingredients of the pizza are specified directly in the declaration:

PROGRAM MAIN
VAR
  // Italian Pizza Margherita (via declaration)
  fbItalianStyle : FB_PizzaItalianStyle;
  fbTomato       : FB_DecoratorTomato(fbItalianStyle);
  fbCheese       : FB_DecoratorCheese(fbTomato);
  ipPizza        : I_Pizza := fbCheese;
 
  fPrice         : LREAL;
  sDescription   : STRING;  
END_VAR
 
fPrice := ipPizza.GetPrice(); // output: 6.5
sDescription := ipPizza.GetDescription(); // output: 'Pizza Italian Style: - Tomato - Cheese'

There is no fixed connection between the function blocks. New pizza types can be defined without having to modify existing function blocks. The inheritance hierarchy does not determine the dependencies between the different pizza variants.

Picture05

In addition, the interface pointer can also be passed by property. This makes it possible to combine or change the pizza at run-time.

PROGRAM MAIN
VAR
  // Italian Pizza Margherita (via runtime)
  fbItalianStyle  : FB_PizzaItalianStyle;
  fbTomato        : FB_DecoratorTomato(0);
  fbCheese        : FB_DecoratorCheese(0);
  ipPizza         : I_Pizza;
 
  bCreate         : BOOL;
  fPrice          : LREAL;
  sDescription    : STRING;
END_VAR
 
IF (bCreate) THEN
  bCreate := FALSE;
  fbTomato.ipDecorator := fbItalianStyle;
  fbCheese.ipDecorator := fbTomato;
  ipPizza := fbCheese;
END_IF
IF (ipPizza <> 0) THEN
  fPrice := ipPizza.GetPrice(); // output: 6.5
  sDescription := ipPizza.GetDescription(); // output: 'Pizza Italian Style: - Tomato - Cheese'
END_IF

Special features can also be integrated in each function block. These can be additional properties, but also further methods.

The function block for the tomatoes is to be offered optionally also as organic tomato. One possibility, of course, is to create a new function block. This is necessary if the existing function block cannot be extended (e.g., because it is in a library). However, if this requirement is known before the first release, it can be directly taken into account.

The function block receives an additional parameter in the method FB_init().

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains      : BOOL; // if TRUE, the retain variables are initialized (warm start / cold start)
  bInCopyCode       : BOOL; // if TRUE, the instance afterwards gets moved into the copy code (online change)
  ipSideOrder       : I_Pizza;
  bWholefoodProduct : BOOL;
END_VAR
 
THIS^.ipSideOrder := ipSideOrder;
THIS^.bWholefood := bWholefoodProduct;

This parameter could also be changed at run-time using a property. When the price is calculated, the option is taken into account as required.

METHOD GetPrice : LREAL
 
IF (THIS^.ipSideOrder <> 0) THEN
  GetPrice := THIS^.ipSideOrder.GetPrice() + 0.9;
  IF (THIS^.bWholefood) THEN
    GetPrice := GetPrice + 0.3;
  END_IF
END_IF

A further optimization can be the introduction of a basic FB (FB_Decorator) for all decorator FBs.

Picture06

Sample 3 (TwinCAT 3.1.4022) on GitHub

Definition

In the book “Design Patterns: Elements of Reusable Object-Oriented Software” by Gamma, Helm, Johnson and Vlissides, it is expressed as follows:

“The decorator pattern provides a flexible alternative to subclassing for […] extending functionality.”

Implementation

The crucial point with the decorator pattern is that when extending a function block, inheritance is not used. If the behaviour is to be supplemented, function blocks are nested into each other; they are decorated.

The central component is the IComponent interface. The functional blocks to be decorated (Component) implement this interface.

The function blocks that serve as decorators (Decorator) also implement the IComponent interface. In addition, they also contain a reference (interface pointer component) to another decorator (Decorator) or to the basic function block (Component).

The outermost decorator thus represents the basic function block, extended by the functions of the decorators. The method Operation() is passed through all function blocks. Whereby each function block may add any functionalities.

This approach has some advantages:

  • The original function block (component) does not know anything about the add-ons (decorator). It is not necessary to extend or adapt it.
  • The decorators are independent of each other and can also be used for other applications.
  • The decorators can be combined with each other at any time.
  • A function block can therefore change its behaviour either by declaration or at run-time.
  • A client that accesses the function block via the IComponent interface can handle a decorated function block in the same way. The client does not have to be adapted; it becomes reusable.

But also some disadvantages have to be considered:

  • The number of function blocks can increase significantly, which makes the integration into an existing library more complex.
  • The client does not recognize whether it is the original base component (if accessed via the IComponent interface) or whether it has been enhanced by decorators. This can be an advantage (see above), but can also lead to problems.
  • The long call chains make troubleshooting more difficult and can also have a negative effect on the performance of the application.

UML Diagram

Picture07

Related to the example above, the following mapping results:

  • Client: MAIN
  • IComponent: I_Pizza
  • Operation(): GetPrice(), GetDescription()
  • Decorator: FB_DecoratorCheese, FB_DecoratorSalami, FB_DecoratorTomato
  • AddedBehavior(): bWholefoodProduct
  • component: ipSideOrder
  • Component: FB_PizzaItalianStyle, FB_PizzaAmericanStyle

Application examples

The decorator pattern is very often found in classes that are responsible for editing data streams. This concerns both the Java standard library and the Microsoft .NET framework.

For example, there is the abstract class System.IO.Stream in the .NET Framework. Classes such as System.IO.BufferedStream and System.IO.Compression.GZipStream inherit from Stream and additionally hold a reference to another Stream instance. Many of their methods and properties delegate to this inner instance. You can also say: BufferedStream and GZipStream decorate Stream.
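
A short C# sketch of this principle (decorating a MemoryStream first with a GZipStream and then with a BufferedStream; every write call runs through the whole wrapper chain):

using System.IO;
using System.IO.Compression;
using System.Text;

class StreamDecoratorDemo
{
    static void Main()
    {
        var target = new MemoryStream();                                     // the component
        using (var gzip = new GZipStream(target, CompressionMode.Compress)) // first decorator
        using (var buffered = new BufferedStream(gzip))                     // second decorator wraps the first one
        {
            var data = Encoding.UTF8.GetBytes("decorated stream");
            buffered.Write(data, 0, data.Length);  // buffered -> gzip -> memory stream
        }
    }
}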

Further use cases are libraries for the creation of graphical user interfaces. These include WPF from Microsoft as well as Swing for Java.

A text box and a border are nested into each other; the text box is decorated with the border. The border (with the text box) is then passed to the page.
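As a minimal sketch in C# (assuming a WPF project; the class and member names are made up for illustration), decorating a text box with a border could look like this:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

static class DecoratedTextBoxFactory
{
    // Builds a TextBox that is decorated with a Border; the result can be
    // placed on a page like any other UIElement.
    public static UIElement Create()
    {
        var textBox = new TextBox { Text = "Hello" };

        var border = new Border
        {
            BorderBrush = Brushes.Black,
            BorderThickness = new Thickness(2),
            Child = textBox
        };

        return border;
    }
}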

Stefan Lieser: TDD vs. Test-first

Again and again I see discussions about the question of how to really test software properly. The realization that automated tests are necessary seems to have taken hold by now. I no longer see developers seriously claiming that automated tests are a waste of time, too complicated, simply impossible in their project, or whatever the arguments ... Read more

The post TDD vs. Test-first appeared first on Refactoring Legacy Code.

Jürgen Gutsch: WPF and WinForms will run on .NET Core 3

Maybe you already heard or read about the fact that Microsoft brings WinForms and WPF to .NET Core 3.0. Maybe you already saw the presentations at the Connect conference, or any other conference or recording where Scott Hanselman shows how to run a pretty old WPF application on .NET Core. I saw a demo where he ran BabySmash on .NET Core.

BTW: My oldest son really loved BabySmash when he was a baby :-)

You haven't heard, read or seen anything about it yet?

WPF and WinForms on .NET Core?

I was really surprised by this step, especially because I wrote an article for a German .NET magazine a few months before in which I mentioned that Microsoft won't build a UI stack for .NET Core. There were some other UI stacks built by the community; the most popular is Avalonia.

But this step makes sense anyway. Since .NET Standard moves the API of .NET Core closer to the level of the .NET Framework, it was only a question of time until the APIs were almost equal. WPF and WinForms are based on .NET libraries, so they should basically also run on .NET Core.

Does this mean it runs on Linux and Mac?

Nope! Since WinForms and WPF use Windows-only technologies in the background, they cannot run on Linux or Mac. They really depend on Windows. The point of running them on .NET Core is performance and being independent of any installed framework. .NET Core is optimized for performance, to run super-fast web applications in the cloud. .NET Core is also independent of the framework installed on the machine. Just deploy the runtime together with your application.

You are now able to run fast and self-contained Windows desktop applications. That's awesome, isn't it?

Good thing I wrote that article a few months before ;-)

Anyways...

The .NET CLI

Every time I install a new version of the .NET Core runtime I try dotnet new, and this time I was positively shocked by what I saw:

You are now able to create a Windows Forms or a WPF application using the .NET CLI. This is cool. And for sure I needed to try it out:

dotnet new wpf -n WpfTest -o WpfTest
dotnet new winforms -n WinFormsTest -o WinFormsTest

And yes, it is working as you can see here in Visual Studio Code:

And this is the WinForms project in VS Code

Running dotnet run on the WPF project:

And again on the WinForms GUI:

IDE

Visual Studio Code isn't the right editor for these kinds of projects. If you know XAML pretty well, it will work, but WinForms definitely won't work well. You need to write the designer code manually and you don't have any designer support yet. Maybe there will be a designer in the future, but I'm not sure.
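To give an idea of what writing the designer code by hand means, here is a minimal, hedged sketch of a WinForms app created entirely in code (the names are made up for illustration):

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Everything the designer would normally generate is written by hand here.
        var button = new Button { Text = "Click me", Dock = DockStyle.Fill };
        button.Click += (s, e) => MessageBox.Show("Hello from .NET Core 3!");

        var form = new Form { Text = "WinForms on .NET Core" };
        form.Controls.Add(button);

        Application.Run(form);
    }
}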

The best choice to work with WinForms and WPF on .NET Core is Visual Studio 2017 or newer.

Last words

I don't think I will now start to write desktop apps on .NET Core 3, because I'm a web guy. But it is a really nice option to build apps like this on .NET Core.

BTW: Even EF 6 will work on .NET Core 3, which means you also don't need to rewrite the database access part of your desktop application.

As I wrote, you can now use this super-fast framework and the option to create self-contained apps. I would suggest trying it out and playing around with it. Do you have an older desktop application based on WPF or WinForms? I would be curious whether you can run it on .NET Core 3. Tell me how easy it was to get it running.

Code-Inside Blog: Office Add-ins with ASP.NET Core

The “new” Office-Addins

Most people might associate Office add-ins with “old school” COM add-ins, but for a couple of years now Microsoft has been pushing a new add-in application model powered by HTML, JavaScript and CSS.

The cool thing is that these add-ins will run under Windows, macOS, online in a browser and on the iPad. If you want to read more about the general aspects, just check out the Microsoft Docs.

In Microsoft Word you can find those add-ins under the “Insert” ribbon:


Visual Studio Template: Urgh… ASP.NET

Because of the “new” nature of the add-ins you could actually use your favorite text editor and create a valid Office add-in. There is some great tooling out there, including a Yeoman generator for Office add-ins.

If you want to stick with Visual Studio you might want to install the “Office/SharePoint development” workload. After the installation you should see a couple of new templates appear in Visual Studio:


Sadly, those templates still use ASP.NET and not ASP.NET Core.


ASP.NET Core Sample

If you want to use ASP.NET Core, you might want to take a look at my ASP.NET Core sample. It is not a VS template - it is meant to be a starting point, but feel free to create one if that would help!


The structure is very similar. I moved all the generated HTML/CSS/JS stuff into a separate area, and the Manifest.xml points to those files.

Result should be something like this:


Warning:

In the “ASP.NET” Office-add-in development world there is one feature that is kind of cool, but doesn't seem to work with ASP.NET Core projects. The original Manifest.xml generated by the Visual Studio template uses a placeholder called “~remoteAppUrl”. It seems that Visual Studio was able to replace this placeholder during startup with the correct URL of the ASP.NET application. This is not possible with an ASP.NET Core application.

The good news is that this feature is not really needed. You just need to point to the correct URL, and then everything is fine and debugging works as well.

Hope this helps!

Jürgen Gutsch: Customizing ASP.​NET Core Part 11: WebHostBuilder

In my post about Configuring HTTPS in ASP.NET Core 2.1, a reader asked how to configure the HTTPS settings using user secrets.

"How would I go about using user secrets to pass the password to listenOptions.UseHttps(...)? I can't fetch the configuration from within Program.cs no matter what I try. I've been Googling solutions for like a half hour so any help would be greatly appreciated." https://github.com/JuergenGutsch/blog/issues/110#issuecomment-441177441

In this post I'm going to answer this question.

Initial series topics

WebHostBuilderContext

It is about this Kestrel configuration in the Program.cs. In that post I wrote that you should use user secrets to configure the certificate's password:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
        	.UseKestrel(options =>
            {
                options.Listen(IPAddress.Loopback, 5000);
                options.Listen(IPAddress.Loopback, 5001, listenOptions =>
                {
                    listenOptions.UseHttps("certificate.pfx", "topsecret");
                });
            })
        	.UseStartup<Startup>();
}

The reader wrote that he couldn't fetch the configuration inside this code. And he is right, if we are only looking at this snippet. You need to know that the method UseKestrel() is overloaded:

.UseKestrel((host, options) =>
{
    // ...
})

The first argument is a WebHostBuilderContext. Using it you are able to access the configuration.

So let's rewrite the lambda a little bit to use this context:

.UseKestrel((host, options) =>
{
    var filename = host.Configuration.GetValue("AppSettings:certfilename", "");
    var password = host.Configuration.GetValue("AppSettings:certpassword", "");
    
    options.Listen(IPAddress.Loopback, 5000);
    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps(filename, password);
    });
})

In this sample I chose to write the keys using the colon divider because this is the way you need to read nested configurations from the appsettings.json:

{
    "AppSettings": {
        "certfilename": "certificate.pfx",
        "certpassword": "topsecret"
    },
    "Logging": {
        "LogLevel": {
            "Default": "Warning"
        }
    },
    "AllowedHosts": "*"
}

You are also able to read from the user secrets store with these keys:

dotnet user-secrets init
dotnet user-secrets set "AppSettings:certfilename" "certificate.pfx"
dotnet user-secrets set "AppSettings:certpassword" "topsecret"

As well as environment variables (note that nested configuration keys use a double underscore as the separator in environment variable names):

SET AppSettings__certfilename=certificate.pfx
SET AppSettings__certpassword=topsecret

Why does it work?

Do you remember the days when you needed to set up the app configuration in the Startup.cs of ASP.NET Core? That was done in the constructor of the Startup class and looked similar to this, if you added user secrets:

var builder = new ConfigurationBuilder()
    .SetBasePath(env.ContentRootPath)
    .AddJsonFile("appsettings.json")
    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true);

if (env.IsDevelopment())
{
    builder.AddUserSecrets();
}

builder.AddEnvironmentVariables();
Configuration = builder.Build();

This code is now wrapped inside the CreateDefaultBuilder method (see on GitHub) and looks like this:

builder.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);

    if (env.IsDevelopment())
    {
        var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
        if (appAssembly != null)
        {
            config.AddUserSecrets(appAssembly, optional: true);
        }
    }

    config.AddEnvironmentVariables();

    if (args != null)
    {
        config.AddCommandLine(args);
    }
})

It is almost the same code, and it is one of the first things that gets executed when building the WebHost. It needs to be one of the first things because Kestrel is configurable via the app configuration. Maybe you know that you are able to specify ports, URLs and so on using environment variables or the appsettings.json:

I found these lines in the WebHost.cs:

builder.UseKestrel((builderContext, options) =>
{
    options.Configure(builderContext.Configuration.GetSection("Kestrel"));
})

That means you are able to add these lines to the appsettings.json to configure Kestrel endpoints:

"Kestrel": {
  "EndPoints": {
  "Http": {
  "Url": "http://localhost:5555"
 }}}

Or to use environment variables like this to configure the endpoint:

SET Kestrel__EndPoints__Http__Url=http://localhost:5555

Also this configuration isn't executed

Conclusion

Inside the Program.cs you are able to use app configuration inside the lambdas of the configuration methods, if you have access to the WebHostBuilderContext. This way you can use all the configuration you like to configure the WebHostBuilder.

I just realized that this post could be placed between Customizing ASP.NET Core Part 02: Configuration and Customizing ASP.NET Core Part 04: HTTPS. So I made this the eleventh part of the Customizing ASP.NET Core series.

Holger Schwichtenberg: Windows-10-Apps per PowerShell löschen

With a PowerShell script you can elegantly uninstall any number of unwanted Windows 10 Store apps at once.

Stefan Henneken: MEF Part 2 – Metadata and creation policies

Part 1 dealt with fundamentals, imports and exports. Part 2 follows on from part 1 and explores additional features of the Managed Extensibility Framework (MEF). This time the focus is on metadata and creation policies.

Metadata

Exports can use metadata to expose additional information. To query this information, we use the class Lazy<>, which lets us read the metadata without causing the composable part to create an instance.

For our example application, we will go back to the example from part 1. We have an application (CarHost.exe) which uses imports to bind different cars (BMW.dll and Mercedes.dll). There is a contract (CarContract.dll) which contains the interface via which the host accesses the exports.

The metadata consist of three values. Firstly, a string containing the name (Name). Secondly, an enumeration indicating a colour (Color). Lastly, an integer containing the price (Price).

There are a number of options for how exports can make these metadata available to imports:

  1. non-type-safe
  2. type-safe via an interface
  3. type-safe via an interface and user-defined export attributes
  4. type-safe via an interface and enumerated user-defined export attributes

Option 1: non-type-safe

In this option, metadata are exposed using the ExportMetadata attribute. Each item of metadata is described using a name-value pair. The name is always a string type, whilst the value is an Object type. In some cases, it may be necessary to use the cast operator to explicitly convert a value to the required data type. In this case, the Price value needs to be converted to the uint data type.

We create two exports, each of which exposes the same metadata, but with differing values.

using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarMercedes
{
    [ExportMetadata("Name", "Mercedes")]
    [ExportMetadata("Color", CarColor.Blue)]
    [ExportMetadata("Price", (uint)48000)]
    [Export(typeof(ICarContract))]
    public class Mercedes : ICarContract
    {
        private Mercedes()
        {
            Console.WriteLine("Mercedes constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the Mercedes.", name);
        }
    }
}
 
using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarBMW
{
    [ExportMetadata("Name", "BMW")]
    [ExportMetadata("Color", CarColor.Black)]
    [ExportMetadata("Price", (uint)55000)]
    [Export(typeof(ICarContract))]
    public class BMW : ICarContract
    {
        private BMW()
        {
            Console.WriteLine("BMW constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the BMW.", name);
        }
    }
}

The ICarContract interface exposes the method, so that it is then available to the import. It represents the ‘contract’ between the imports and the exports. The enumeration CarColor is also defined in the same namespace.

using System;
namespace CarContract
{
    public interface ICarContract
    {
        string StartEngine(string name);
    }
    public enum CarColor
    {
        Unkown,
        Black,
        Red,
        Blue,
        White
    }
}

Metadata for the import can be accessed using the class Lazy<T, TMetadata>. This class exposes the Metadata property. Metadata is of type Dictionary<string, object>.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract, Dictionary<string, object>>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract, Dictionary<string, object>> car in CarParts)
            {
                if (car.Metadata.ContainsKey("Name"))
                    Console.WriteLine(car.Metadata["Name"]);
                if (car.Metadata.ContainsKey("Color"))
                    Console.WriteLine(car.Metadata["Color"]);
                if (car.Metadata.ContainsKey("Price"))
                    Console.WriteLine(car.Metadata["Price"]);
                Console.WriteLine("");
            }
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            container.Dispose();
        }
    }
}

If we want to access a specific item of metadata, we need to verify that the export has indeed defined the required item. It may well be that different imports expose different metadata.

On running the program, we can clearly see that accessing the metadata does not initialise the export parts. Only once we access the StartEngine() method do we create an instance, thereby calling the constructor.

CommandWindowSample01

Since the metadata is stored in a class of type Dictionary<string, object>, it can contain any number of items of metadata. This has advantages and disadvantages. The advantage is that all metadata are optional and the information they expose is entirely arbitrary – the value is of type Object. However, this also entails a loss of type safety. This is a major disadvantage. When accessing metadata, we always need to check that the metadata is actually present. Failure to do so can lead to some nasty runtime errors.
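As a small, hedged sketch of such a defensive read (the helper name is made up; it assumes the same Dictionary<string, object> metadata as above), a single value could be queried like this:

using System.Collections.Generic;

public static class MetadataHelper
{
    // Returns the "Price" metadata if it is present and of the expected type,
    // otherwise the given fallback value.
    public static uint GetPriceOrDefault(IDictionary<string, object> metadata, uint fallback)
    {
        object rawPrice;
        if (metadata != null && metadata.TryGetValue("Price", out rawPrice) && rawPrice is uint)
        {
            return (uint)rawPrice;
        }
        return fallback;
    }
}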

Sample 1 (Visual Studio 2010) on GitHub

Option 2: type-safe via an interface

Just as the available methods and properties of an export can be specified using an interface (ICarContract), it is also possible to define metadata using an interface. In this case, the individual values which will be available are specified using properties. You can only define properties which can be accessed using a get accessor. (If you try to define a set accessor, this will cause a runtime error.)

For our example, we will create three properties of the required type. We define an interface for the metadata as follows:

public interface ICarMetadata
{
    string Name { get; }
    CarColor Color { get; }
    uint Price { get; }
}

The interface for the metadata is used during verification between import and export. All exports must expose the defined metadata. If metadata are not present, this again results in a runtime error. If a property is optional, you can use the DefaultValue attribute.

[DefaultValue((uint)0)]
uint Price { get; }

To avoid having to define all metadata in an export, all properties in this example will be decorated with the DefaultValue attribute.

using System;
using System.ComponentModel;
namespace CarContract
{
    public interface ICarMetadata
    {
        [DefaultValue("NoName")]
        string Name { get; }
 
        [DefaultValue(CarColor.Unkown)]
        CarColor Color { get; }
 
        [DefaultValue((uint)0)]
        uint Price { get; }
    }
}

The ICarContract interface and the exports are created in exactly the same way as in the first example.

To access the metadata, the interface for the metadata is used as the value for TMetadata in the Lazy<T, TMetadata> class. In this example, this is the ICarMetadata interface. Individual items of metadata are therefore available via the Metadata property.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        IEnumerable<Lazy<ICarContract, ICarMetadata>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract, ICarMetadata> car in CarParts)
            {
                Console.WriteLine(car.Metadata.Name);
                Console.WriteLine(car.Metadata.Color);
                Console.WriteLine(car.Metadata.Price);
                Console.WriteLine("");
            }
            // invokes the method only of black cars
            var blackCars = from lazyCarPart in CarParts
                            let metadata = lazyCarPart.Metadata
                            where metadata.Color == CarColor.Black
                            select lazyCarPart.Value;
            foreach (ICarContract blackCar in blackCars)
                Console.WriteLine(blackCar.StartEngine("Sebastian"));
            Console.WriteLine(".");
            // invokes the method of all imports
             foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            container.Dispose();
        }
    }
}

Since the ICarMetadata interface specifies the name and type of the metadata, it can be accessed directly. This type-safety brings with it a small but useful advantage – it is now possible to access the CarParts property using LINQ. This means that it is possible to filter by metadata, so that only specific imports are used.

The first foreach loop outputs the metadata from all exports. The second uses LINQ to create a query which produces a list containing only those exports where the metadata has a specific value – in this case where Color has the value CarColor.Black. The StartEngine() method of these exports only is called. The final foreach loop calls this method for all exports.

CommandWindowSample02

Once again, we can clearly see that neither outputting all metadata nor the LINQ query initialises an export. A new instance is only created (and the constructor therefore called) on calling the StartEngine() method.

Sample 2 (Visual Studio 2010) on GitHub

In my opinion, interfaces should be used to work with metadata wherever possible. Sure, it may be a little more work, but this approach does avoid unwanted runtime errors.

Option 3: type-safe via an interface and user-defined export attributes

Defining metadata in the export has one further disadvantage. The name has to be supplied in the form of a string. With long names in particular, it’s easy for typos to creep in. Any typos will not be recognised by the compiler, producing errors which only become apparent at runtime. Of course things would be a lot easier if Visual Studio listed all valid metadata whilst typing and if the compiler noticed any typos. This happy state can be achieved by creating a separate attribute class for the metadata. To achieve this, all we need to do to our previous example is add a class.

using System;
using System.ComponentModel.Composition;
namespace CarContract
{
    [MetadataAttribute]
    public class CarMetadataAttribute : Attribute
    {
        public string Name { get; set; }
        public CarColor Color { get; set; }
        public uint Price { get; set; }
    }
}

This class needs to be decorated with the MetadataAttribute attribute and derived from the Attribute class. The individual values to be exported via the metadata are specified using properties. The type and name of the properties must match those specified in the interface for the metadata. We previously defined the ICarMetadata interface as follows:

using System;
using System.ComponentModel;
namespace CarContract
{
    public interface ICarMetadata
    {
        [DefaultValue("NoName")]
        string Name { get; }
 
        [DefaultValue(CarColor.Unkown)]
        CarColor Color { get; }
 
        [DefaultValue((uint)0)]
        uint Price { get; }
    }
}

We can now decorate an export with metadata by using this newly defined attribute.

[CarMetadata(Name="BMW", Color=CarColor.Black, Price=55000)]
[Export(typeof(ICarContract))]
public class BMW : ICarContract
{
    // ...
}

Now Visual Studio can give the developer a helping hand when entering metadata. All valid metadata are displayed during editing. In addition, the compiler is now in a position to verify that all of the entered metadata are valid.

VisualStudioSample03

Sample 3 (Visual Studio 2010) on GitHub

Option 4: type-safe via an interface and enumerated user-defined export attributes

Up to this point, it has not been possible to have multiple entries for a single item of metadata. However, there could be situations where we want an enumeration containing options which we wish to be able to combine together. We’re now going to extend our car example to allow us to additionally define the audio system with which the car is equipped. To do this, we first define an enum containing all of the possible options:

public enum AudioSystem
{
    Without,
    Radio,
    CD,
    MP3
}

Now we add a property of type AudioSystem to the ICarMetadata interface.

using System;
using System.ComponentModel;
namespace CarContract
{
    public interface ICarMetadata
    {
        [DefaultValue("NoName")]
        string Name { get; }
 
        [DefaultValue(CarColor.Unkown)]
        CarColor Color { get; }
 
        [DefaultValue((uint)0)]
        uint Price { get; }
 
        [DefaultValue(AudioSystem.Without)]
        AudioSystem[] Audio { get; }
    }
}

Because a radio can also include a CD player, we need to be able to specify multiple options for specific items of metadata. In the export, the metadata is declared as follows:

using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarBMW
{
    [CarMetadata(Name="BMW", Color=CarColor.Black, Price=55000)]
    [CarMetadataAudio(AudioSystem.CD)]
    [CarMetadataAudio(AudioSystem.MP3)]
    [CarMetadataAudio(AudioSystem.Radio)]
    [Export(typeof(ICarContract))]
    public class BMW : ICarContract
    {
        private BMW()
        {
            Console.WriteLine("BMW constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the BMW.", name);
        }
    }
}
 
using System;
using System.ComponentModel.Composition;
using CarContract;
namespace CarMercedes
{
    [CarMetadata(Name="Mercedes", Color=CarColor.Blue, Price=48000)]
    [CarMetadataAudio(AudioSystem.Radio)]
    [Export(typeof(ICarContract))]
    public class Mercedes : ICarContract
    {
        private Mercedes()
        {
            Console.WriteLine("Mercedes constructor.");
        }
        public string StartEngine(string name)
        {
            return String.Format("{0} starts the Mercedes.", name);
        }
    }
}

Whilst the Mercedes has just a radio, the BMW also has a CD player and MP3 player.

To achieve this, we create an additional attribute class. This attribute class represents the metadata for the audio equipment (CarMetadataAudio).

using System;
using System.ComponentModel.Composition;
namespace CarContract
{
    [MetadataAttribute]
    [AttributeUsage(AttributeTargets.Class, AllowMultiple=true)]
    public class CarMetadataAudioAttribute : Attribute
    {
        public CarMetadataAudioAttribute(AudioSystem audio)
        {
            this.Audio = audio;
        }
        public AudioSystem Audio { get; set; }
    }
}

To allow us to specify multiple options for this attribute, this class has to be decorated with the AttributeUsage attribute and AllowMultiple needs to be set to true for this attribute. Here, the attribute class has been provided with a constructor, which takes the value directly as an argument.

Multiple metadata are output via an additional loop (see lines 28 and 29):

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Linq;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract, ICarMetadata>> CarParts { get; set; }
 
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract, ICarMetadata> car in CarParts)
            {
                Console.WriteLine("Name: " + car.Metadata.Name);
                Console.WriteLine("Price: " + car.Metadata.Price.ToString());
                Console.WriteLine("Color: " + car.Metadata.Color.ToString());
                foreach (AudioSystem audio in car.Metadata.Audio)
                    Console.WriteLine("Audio: " + audio);
                Console.WriteLine("");
            }
            foreach (Lazy<ICarContract> car in CarParts)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            container.Dispose();
        }
    }
}

Running the program yields the expected result:

CommandWindowSample04

Sample 4 (Visual Studio 2010) on GitHub

There is one further option, but this I will leave for a later post in which I will talk about inherited exports. It allows both the export and metadata to be decorated with an attribute simultaneously.

Creation Policies

In the previous examples, we used the Export and ImportMany attributes to bind multiple exports to a single import. But what does MEF do when multiple imports are available to a single export? This requires us to adapt the above example somewhat. The exports and the contract remain unchanged. In the host, instead of one list, we create two. Both lists take the same exports.

using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using CarContract;
namespace CarHost
{
    class Program
    {
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract>> CarPartsA { get; set; }
        [ImportMany(typeof(ICarContract))]
        private IEnumerable<Lazy<ICarContract>> CarPartsB { get; set; }
        static void Main(string[] args)
        {
            new Program().Run();
        }
        void Run()
        {
            var catalog = new DirectoryCatalog(".");
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
            foreach (Lazy<ICarContract> car in CarPartsA)
                Console.WriteLine(car.Value.StartEngine("Sebastian"));
            Console.WriteLine("");
            foreach (Lazy<ICarContract> car in CarPartsB)
                Console.WriteLine(car.Value.StartEngine("Michael"));
            container.Dispose();
        }
    }
}

This change means that two lists (imports) are assigned to each export. The program output, however, implies that each export is instantiated only once.

CommandWindowSample05

Sample 5 (Visual Studio 2010) on GitHub

If Managed Extensibility Framework finds a matching export for an import, it creates an instance of the export. This instance is shared with all other matching imports. MEF treats each export as a singleton.

We can modify this default behaviour both for exports and for imports by using the creation policy. Each creation policy can have the value Shared, NonShared or Any. The default setting is Any. An export for which the policy is defined as Shared or NonShared is only deemed to match an import if the creation policy of the import matches that of the export or is Any. To be considered matching, imports and exports must have compatible creation policies. If both imports and exports are defined as Any (or are undefined), both parts will be specified as Shared.

The creation policy for an export is defined using the PartCreationPolicy attribute.

[Export(typeof(ICarContract))]
[PartCreationPolicy(CreationPolicy.NonShared)]

In the case of the Import or ImportMany attribute, the creation policy is defined using the RequiredCreationPolicy property.

[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.NonShared)]
private IEnumerable<Lazy<ICarContract>> CarPartsA { get; set; }

The following output illustrates the case where the creation policy is set to NonShared. There are now two instances of each export.

CommandWindowSample06

Sample 6 (Visual Studio 2010) on GitHub

It is also possible to combine creation policies. For the import, I have decorated one list with NonShared and two further lists with Shared.

[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.NonShared)]
private IEnumerable<Lazy<ICarContract>> CarPartsA { get; set; }
 
[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.Shared)]
private IEnumerable<Lazy<ICarContract>> CarPartsB { get; set; }
 
[ImportMany(typeof(ICarContract), RequiredCreationPolicy = CreationPolicy.Shared)]
private IEnumerable<Lazy<ICarContract>> CarPartsC { get; set; }

The output shows how MEF creates the instances and assigns them to the individual imports:

CommandWindowSample06b

The first list has its own independent instances of the exports. Lists two and three share the same instances.

Outlook

It is very encouraging that a framework of this kind has been standardised. Some Microsoft teams are already successfully using MEF; the best-known example is Visual Studio. Let’s hope that more products will follow suit, and that this ensures that MEF continues to undergo further development.

Part 3 deals with the life cycle of composable parts.

Golo Roden: Eine Referenz für Domain-Driven Design

Getting started with Domain-Driven Design is often difficult because the terminology of DDD is inherently hard to understand and seems confusing at first. A clear, hands-on introduction and a good reference can help.

Stefan Lieser: Unit Tests are your Friends

Prompted by the discussion with participants of a Clean Code Developer workshop, I once again looked into the question of which strategies I use and recommend for automated testing. The discussion mainly revolved around the question of whether private methods should be tested by unit tests and how this is best realized technically ... Read more

The post Unit Tests are your Friends appeared first on Refactoring Legacy Code.

Jürgen Gutsch: Integration testing data access in ASP.​NET Core

In the last post, I wrote about unit testing data access in ASP.NET Core. This time I'm going to go into integration tests. This post shows you how to write an end-to-end test using a WebApplicationFactory and how to write more specific integration tests.

Unit tests vs. Integration tests

I'm sure most of you already know the difference. In a few discussions I learned that some developers don't have a clear idea about the difference. In the end it doesn't really matter, because every test is a good test. Both unit tests and integration tests are coded tests; they look similar and use the same technology. The difference is in the concept of how and what to test and in the scope of the test:

  • A unit test tests a logical unit, a single isolated component, a function or a feature. A unit test isolates this component to test it without any dependencies, like I did in the last post. First I tested the actions of a controller, without testing the actual service behind it. Then I tested the service methods in an isolated way with a faked DbContext. Why? Because unit tests shouldn't break because of a failing dependency. A unit test should be fast in development and in execution. It is a development tool. So it shouldn't cost a lot of time to write one. And in fact, setting up a unit test is much cheaper than setting up an integration test. Usually you write a unit test during or immediately after implementing the logic. In the best case you'll write a unit test before implementing the logic. This would be the TDD way, test driven development or test driven design.

  • An integration test does a lot more. It tests the composition of all units. It ensures that all units are working together in the right way. This means it may need a lot more effort to set up a test because you need to set up the dependencies. An integration test can test a feature from the UI to the database. It integrates all the dependencies. On the other hand, an integration test can be limited to a hot path of a feature. It is also legitimate to fake or mock aspects that don't need to be tested in this special case. For example, if you test a user input from the UI to the database, you don't need to test the logging. Also an integration test shouldn't fail because of an error outside its context. This also means isolating an integration test as much as possible, maybe by using an in-memory database instead of a real one.

Let's see how it works:

Setup

I'm going to reuse the solution created for the last post to keep this section short.

I only need to create another XUnit test project, add it to the existing solution and add a reference to WebToTest as well as some NuGet packages:

dotnet new xunit -o WebToTest.IntegrationTests -n WebToTest.IntegrationTests
dotnet sln add WebToTest.IntegrationTests
dotnet add WebToTest.IntegrationTests reference WebToTest

dotnet add WebToTest.IntegrationTests package GenFu
dotnet add WebToTest.IntegrationTests package moq
dotnet add WebToTest.IntegrationTests package Microsoft.AspNetCore.Mvc.Testing

In the next step I create a test class for a web integration test. This means I set up a web host for the application-to-test and call the web via a web client. This is kind of a UI test then, not based on UI events, but I'm able to get and analyze the HTML result of the page under test.

Since 2.0, ASP.NET Core has the ability to set up a test host to run the web application in the test environment. This is pretty cool. You don't need to set up an actual web server to run a test against the web. This gets done automatically by using the generic WebApplicationFactory. You just need to specify the type of the Startup class of the web-to-test:

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

namespace WebToTest.IntegrationTests
{
    public class PersonTests : IClassFixture<WebApplicationFactory<Startup>>
    {
        private readonly WebApplicationFactory<Startup> _factory;

        public PersonTests(WebApplicationFactory<Startup> factory)
        {
            _factory = factory;
        }
        
        // put test methods here
    }
}

Also the XUnit IClassFixture is special here. This generic interface tells XUnit to create an instance of the generic argument and share it with the tests of this class. In this case I get an instance of the WebApplicationFactory of Startup, so this test class gets its own in-memory test server for its test methods. This is an isolated test environment per test class.

End-to-end tests

Our first integration test will ensure the MVC routes are working. This test creates a web host and calls the web via HTTP. It tests parts of the application from the UI to the database. This is an end-to-end test.

Instead of an XUnit Fact, we create a Theory this time. A Theory marks a test method which is able to retrieve input data via an attribute. The InlineDataAttribute defines the data we want to pass in, in this case the MVC route URLs:

[Theory]
[InlineData("/")]
[InlineData("/Home/Index")]
[InlineData("/Home/Privacy")]
public async Task BaseTest(string url)
{
    // Arrange
    var client = _factory.CreateClient();

    // Act
    var response = await client.GetAsync(url);

    // Assert
    response.EnsureSuccessStatusCode(); // Status Code 200-299
    Assert.Equal("text/html; charset=utf-8",
        response.Content.Headers.ContentType.ToString());
}

Let's try it

dotnet test WebToTest.IntegrationTests

This actually creates 3 test results as you can see in the output window:

We now need to do the same thing for the API routes. Why in a separate method? Because the first integration test also checks the content type, which is that of an HTML document. The content type of the API results is application/json:

[Theory]
[InlineData("/api/person")]
[InlineData("/api/person/1")]
public async Task ApiRouteTest(string url)
{
    // Arrange
    var client = _factory.CreateClient();

    // Act
    var response = await client.GetAsync(url);

    // Assert
    response.EnsureSuccessStatusCode(); // Status Code 200-299
    Assert.Equal("application/json; charset=utf-8",
        response.Content.Headers.ContentType.ToString());
}

This also works and we have two more successful tests now:

This isn't completely isolated, because it uses the same database as the production or test web. At least it is the same file-based SQLite database as in the test environment. Because a test should be as fast as possible, wouldn't it make sense to use an in-memory database instead?

Usually it would be possible to override the service registration of the Startup.cs with the WebApplicationFactory we retrieve in the constructor. It should be possible to add the ApplicationDbContext and to configure an in-memory database:

public PersonTests(WebApplicationFactory<Startup> factory)
{
    _factory = factory.WithWebHostBuilder(config =>
    {
        config.ConfigureServices(services =>
        {
            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseInMemoryDatabase("InMemory"));
        });
    });
}

Unfortunately, I didn't get the seeding running for the in-memory database using the current preview version of ASP.NET Core 3.0. This will result in a failing test for the route URL /api/person/1 because the Person with the Id 1 isn't available. This is a known issue on GitHub: https://github.com/aspnet/EntityFrameworkCore/issues/11666

To get this running we need to ensure seeding explicitly every time we create an instance of the DbContext.

public PersonService(ApplicationDbContext dbContext)
{
    _dbContext = dbContext;
    _dbContext.Database?.EnsureCreated();
}

This hopefully gets fixed, because it is kinda bad to add this line only for the integration tests. Anyway, it works this way. Maybe you find a way to call EnsureCreated() in the test class.
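One idea worth trying (a hedged sketch only, not verified against that preview build; it assumes the usual Microsoft.Extensions.DependencyInjection and Microsoft.EntityFrameworkCore usings) is to build an intermediate service provider inside WithWebHostBuilder and call EnsureCreated() there, so the in-memory database gets created and seeded before the first request:

public PersonTests(WebApplicationFactory<Startup> factory)
{
    _factory = factory.WithWebHostBuilder(config =>
    {
        config.ConfigureServices(services =>
        {
            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseInMemoryDatabase("InMemory"));

            // Build an intermediate service provider, resolve the DbContext once
            // and create (and thereby seed) the in-memory database up front.
            var sp = services.BuildServiceProvider();
            using (var scope = sp.CreateScope())
            {
                var db = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
                db.Database.EnsureCreated();
            }
        });
    });
}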

Specific integration tests

Sometimes it makes sense to test more specific parts of the application, without starting a web host and without accessing a real database. Just to be sure that the individual units are working together. This time I'm testing the PersonController together with the PersonService. I'm going to mock the DbContext, because the database access isn't relevant for the test. I just need to ensure the service provides the data to the controller in the right way and to ensure the controller is able to handle these data.

At first I create a simple test class that is able to create the needed test data and the DbContext mock:

public class PersonIntegrationTest
{
    // put the tests here

    private Mock<ApplicationDbContext> CreateDbContextMock()
    {
        var persons = GetFakeData().AsQueryable();

        var dbSet = new Mock<DbSet<Person>>();
        dbSet.As<IQueryable<Person>>().Setup(m => m.Provider).Returns(persons.Provider);
        dbSet.As<IQueryable<Person>>().Setup(m => m.Expression).Returns(persons.Expression);
        dbSet.As<IQueryable<Person>>().Setup(m => m.ElementType).Returns(persons.ElementType);
        dbSet.As<IQueryable<Person>>().Setup(m => m.GetEnumerator()).Returns(persons.GetEnumerator());

        var context = new Mock<ApplicationDbContext>();
        context.Setup(c => c.Persons).Returns(dbSet.Object);
        
        return context;
    }

    private IEnumerable<Person> GetFakeData()
    {
        var i = 1;
        var persons = A.ListOf<Person>(26);
        persons.ForEach(x => x.Id = i++);
        return persons.Select(_ => _);
    }
}

Next I wrote the tests, which look similar to the PersonControllerTests I wrote in the last blog post. Only the arrange part differs a little bit. This time I don't pass the mocked service in, but an actual one that uses a mocked DbContext:

[Fact]
public void GetPersonsTest()
{
    // arrange
    var context = CreateDbContextMock();

    var service = new PersonService(context.Object);

    var controller = new PersonController(service);

    // act
    var results = controller.GetPersons();

    var count = results.Count();

    // assert
    Assert.Equal(26, count);
}

[Fact]
public void GetPersonTest()
{
    // arrange
    var context = CreateDbContextMock();

    var service = new PersonService(context.Object);

    var controller = new PersonController(service);

    // act
    var result = controller.GetPerson(1);
    var person = result.Value;

    // assert
    Assert.Equal(1, person.Id);
}

Let's try it by using the following command:

dotnet test WebToTest.IntegrationTests

Et voilà:

At the end we should run all the tests of the solution at once to be sure not to break the existing tests and the existing code. Just type dotnet test and see what happens:

Conclusion

I wrote that integration tests cost a lot more effort than unit tests. This isn't completely true since we are able to use the WebApplicationFactory. In many other cases it will be a little more expensive, depending on how you want to test and how many dependencies you have. You need to figure out how you want to isolate an integration test. More isolation sometimes means more effort; less isolation means more dependencies that may break your test.

Anyway, from my point of view writing integration tests is more important than writing unit tests, because they verify that the parts of the application are working together. And it is not that hard and doesn't cost that much.

Just do it. If you have never written tests in the past: try it. It feels great to be on the safe side, to be sure the code is working as expected.

Jürgen Gutsch: Unit testing data access in ASP.​NET Core

I really like to be in contact with the dear readers of my blog. I get a lot of positive feedback about my posts via twitter or within the comments. That's awesome and that really pushed me forward to write more posts like this. Some folks also create PRs for my blog posts on GitHub to fix typos and other errors of my posts. You also can do this, by clicking the link to the related markdown file on GitHub at the end of every post.

Many thanks for this kind of feedback :-)

The reader Mohammad Reza recently asked me via twitter to write about unit testing a controller that connects to a database and to fake the data for the unit tests.

@sharpcms Hello jurgen, thank you for your explanation of unit test : Unit Testing an ASP.NET Core Application. it's grate. can you please explain how to use real data from entity framwork and fake data for test in a controller?

@Mohammad: First of all: I'm glad you like this post and I would be proud to write about that. Here it is:

Setup the solution using the .NET CLI

First of all, let's create the demo solution using the .NET CLI

mkdir UnitTestingAspNetCoreWithData & cd UnitTestingAspNetCoreWithData
dotnet new mvc -n WebToTest -o WebToTest
dotnet new xunit -n WebToTest.Tests -o WebToTest.Tests
dotnet new sln -n UnitTestingAspNetCoreWithData

dotnet sln add WebToTest
dotnet sln add WebToTest.Tests

These lines create a solution directory and add a web project to test as well as an XUnit test project. Also a solution file gets created and the two projects get added to it.

dotnet add WebToTest.Tests reference WebToTest

This command won't work in the current version of .NET Core, because the XUnit project still targets netcoreapp2.2. You cannot reference a project with a higher target version; the referenced project's target should be equal to or lower than the target version of the referencing project. You should change the target to netcoreapp3.0 in the csproj of the test project before executing this command:

<TargetFramework>netcoreapp3.0</TargetFramework>

Now we need to add some NuGet references:

dotnet add WebToTest package GenFu
dotnet add WebToTest.Tests package GenFu
dotnet add WebToTest.Tests package moq

First we add GenFu, which is a dummy data generator. We need it in the web project to initially seed some dummy data into the database and in the test project to generate test data. We also need Moq to create fake objects, e.g. a fake data access, in the test project.

Because the web project is an empty web project it also doesn't contain any data access libraries. We need to add Entity Framework Core to the project.

dotnet add WebToTest package Microsoft.EntityFrameworkCore.Sqlite -v 3.0.0-preview.18572.1
dotnet add WebToTest package Microsoft.EntityFrameworkCore.Tools -v 3.0.0-preview.18572.1
dotnet add WebToTest package Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore -v 3.0.0-preview.18572.1

I'm currently using the preview version of .NET Core 3.0. The version number will change later on.

Now we can start Visual Studio Code

code .

In the same console window we can call the following command to execute the tests:

dotnet test WebToTest.Tests

Creating the controller to test

The controller we want to test is an API controller that only includes two GET actions. This is only about the concepts; testing additional actions like POST and PUT is almost the same. This is the complete controller to test:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using WebToTest.Data.Entities;
using WebToTest.Services;

namespace WebToTest.Controllers
{
    [Route("api/[controller]")]
    [ApiController()]
    public class PersonController : Controller
    {
        private readonly IPersonService _personService;

        public PersonController(IPersonService personService)
        {
            _personService = personService;
        }
        // GET: api/Person
        [HttpGet]
        public IEnumerable<Person> GetPersons()
        {
            return _personService.AllPersons();
        }

        // GET: api/Person/5
        [HttpGet("{id}")]
        public ActionResult<Person> GetPerson(int id)
        {
            var todoItem = _personService.FindPerson(id);

            if (todoItem == null)
            {
                return NotFound();
            }

            return todoItem;
        }
    }
}

As you can see, we don't use Entity Framework directly in the controller. I would propose encapsulating the data access in service classes, which prepare the data the way you need it.

Some developers prefer to encapsulate the actual data access in an additional repository layer. From my perspective this is not needed, if you use an OR mapper like Entity Framework. One reason is that EF already is the additional layer that encapsulates the actual data access. And the repository layer is also an additional layer to test and to maintain.

So the service layer contains all the EF stuff and is used here. This also makes testing much easier because we don't need to mock the EF DbContext. The Service gets passed in via dependency injection.

Let's have a quick look into the Startup.cs where we need to configure the services:

public void ConfigureServices(IServiceCollection services)
{
    // [...]

    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

    services.AddTransient<IPersonService, PersonService>();
    
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
}

What I added to the ConfigureServices method is one line to register and configure the DbContext and one line to register the PersonService used in the controller. Both types are not created yet. Before we create them we also need to add a few lines to the config file. Open the appsettings.json and add the connection string to the SQLite database:

{
  "ConnectionStrings": {
    "DefaultConnection": "DataSource=app.db"
  },
  // [...]
}

That's all about the configuration. Let's go back to the implementation. The next step is the DbContext. To keep the demo simple, I just use one Person entity here:

using GenFu;
using Microsoft.EntityFrameworkCore;
using WebToTest.Data.Entities;

namespace WebToTest.Data
{
    public class ApplicationDbContext : DbContext
    {
        public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
            : base(options)
        {}
        
        public ApplicationDbContext() { }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // seeding
            var i = 1;
            var personsToSeed = A.ListOf<Person>(26);
            personsToSeed.ForEach(x => x.Id = i++);
            modelBuilder.Entity<Person>().HasData(personsToSeed);
        }

        public virtual DbSet<Person> Persons { get; set; }
    }
}

We only have one DbSet of Person here. In the OnModelCreating method we use the new seeding method HasData() to ensure we have some data in the database. Usually you would use real data to seed the database. In this case I use GenFu to generate a list of 26 persons. Afterwards I need to ensure the IDs are unique, because by default GenFu generates random numbers for the IDs, which may result in a duplicate key exception.

The person entity is simple as well:

using System;

namespace WebToTest.Data.Entities
{
    public class Person
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime DateOfBirth { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string Address { get; set; }
        public string Telephone { get; set; }
        public string Email { get; set; }
    }
}

Now let's add the PersonService which uses the ApplicationDbContext to fetch the data. The DbContext also gets injected into the constructor via dependency injection:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;
using WebToTest.Data;
using WebToTest.Data.Entities;

namespace WebToTest.Services
{
    public class PersonService : IPersonService
    {
        private readonly ApplicationDbContext _dbContext;
        public PersonService(ApplicationDbContext dbContext)
        {
            _dbContext = dbContext;
        }

        public IEnumerable<Person> AllPersons()
        {
            return _dbContext.Persons
            	.OrderBy(x => x.DateOfBirth)
            	.ToList();
        }

        public Person FindPerson(int id)
        {
            return _dbContext.Persons
            	.FirstOrDefault(x => x.Id == id);
        }
    }

    public interface IPersonService
    {
        IEnumerable<Person> AllPersons();
        Person FindPerson(int id);
    }
}

We need the interface to register the service using a contract and to create a mock service later on in the test project.

If this is done, don't forget to create an initial migration to create the database:

dotnet ef migrations add Initial -p WebToTest -o Data\Migrations\

This puts the migration into the Data folder in our web project. Now we are able to create and seed the database:

dotnet ef database update -p WebToTest

In the console you will now see how the database gets created and seeded:

Now the web project is complete and should run. You can try it by calling the following command and calling the URL https://localhost:5001/api/person in the browser:

dotnet run -p WebToTest

You now should see the 26 persons as JSON in the browser:

Testing the controller

In the test project I renamed the initially scaffolded class to PersonControllerTests. After that I created a small method that creates the test data we'll return to the controller. This is exactly the same code we used to seed the database:

private IEnumerable<Person> GetFakeData()
{
    var i = 1;
    var persons = A.ListOf<Person>(26);
    persons.ForEach(x => x.Id = i++);
    return persons.Select(_ => _);
}

We can now create our first test to test the controller's GetPersons() method:

[Fact]
public void GetPersonsTest()
{
    // arrange
    var service = new Mock<IPersonService>();

    var persons = GetFakeData();
    service.Setup(x => x.AllPersons()).Returns(persons);

    var controller = new PersonController(service.Object);

    // Act
    var results = controller.GetPersons();

    var count = results.Count();

    // Assert
    Assert.Equal(26, count);
}

In the first line we use Moq to create a mock/fake object of our PersonService. This is why we need the interface of the service class. Moq creates proxy objects out of interfaces or abstract classes. Using Moq we are now able to set up the mock object by telling Moq we want to return this specific list of persons every time we call the AllPersons() method.

Once the setup is done, we are able to inject the proxy object of the IPersonService into the controller. Our controller now works with a fake service instead of the original one. Inside the unit test we don't need a connection to the database anymore. That makes the test faster and more independent of any infrastructure outside the code under test.

In the act section we call the GetPersons() method and will check the results afterwards in the assert section.

What does it look like with the GetPerson() method that returns one single item?

The second action to test returns an ActionResult of Person, so we need to retrieve the result a little bit differently:

[Fact]
public void GetPerson()
{
    // arrange
    var service = new Mock<IPersonService>();

    var persons = GetFakeData();
    var firstPerson = persons.First();
    service.Setup(x => x.FindPerson(1)).Returns(firstPerson);

    var controller = new PersonController(service.Object);

    // act
    var result = controller.GetPerson(1);
    var person = result.Value;

    // assert
    Assert.Equal(1, person.Id);
}

The setup also differs, because we set up another method that returns a single Person instead of an IEnumerable of Person.

To execute the tests run the next command in the console:

dotnet test WebToTest.Tests

If everything is done right, the console output should show that all tests passed.

Testing the service layer

What does testing the service layer look like? In that case we need to mock the DbContext to feed the service with fake data.

In the test project I created a new test class called PersonServiceTests and a test method that tests the AllPersons() method of the PersonService:

[Fact]
public void AllPersonsTest()
{
    // arrange
    var context = CreateDbContext();

    var service = new PersonService(context.Object);

    // act
    var results = service.AllPersons();

    var count = results.Count();

    // assert
    Assert.Equal(26, count);
}

This looks pretty simple at first glance, but the magic is inside the method CreateDbContext, which creates the mock object of the DbContext. I return the mock rather than the actual object, in case I need to extend the mock in the current test method. Let's see how the DbContext mock is created:

private Mock<ApplicationDbContext> CreateDbContext()
{
    var persons = GetFakeData().AsQueryable();

    var dbSet = new Mock<DbSet<Person>>();
    dbSet.As<IQueryable<Person>>().Setup(m => m.Provider).Returns(persons.Provider);
    dbSet.As<IQueryable<Person>>().Setup(m => m.Expression).Returns(persons.Expression);
    dbSet.As<IQueryable<Person>>().Setup(m => m.ElementType).Returns(persons.ElementType);
    dbSet.As<IQueryable<Person>>().Setup(m => m.GetEnumerator()).Returns(persons.GetEnumerator());

    var context = new Mock<ApplicationDbContext>();
    context.Setup(c => c.Persons).Returns(dbSet.Object);
    return context;
}

The DbSet cannot easily be instantiated directly; it is a bit special. This is why I need to mock the DbSet and set up the Provider, the Expression, the ElementType and the Enumerator using the values from the persons list. Once this is done, I can create the ApplicationDbContext mock and set up the DbSet of Person on that mock. For every DbSet in your DbContext you need to add these four special setups on the mock DbSet. This seems like a lot of overhead, but it is worth the trouble to test the service in an isolated way.
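
If your DbContext contains more than one DbSet, the four setups can be extracted into a small generic helper, roughly like this (a sketch, not part of the original sample):

private static Mock<DbSet<T>> CreateDbSetMock<T>(IEnumerable<T> data) where T : class
{
    var queryable = data.AsQueryable();

    // Wire up the four IQueryable members so LINQ queries run against the in-memory list.
    var dbSet = new Mock<DbSet<T>>();
    dbSet.As<IQueryable<T>>().Setup(m => m.Provider).Returns(queryable.Provider);
    dbSet.As<IQueryable<T>>().Setup(m => m.Expression).Returns(queryable.Expression);
    dbSet.As<IQueryable<T>>().Setup(m => m.ElementType).Returns(queryable.ElementType);
    dbSet.As<IQueryable<T>>().Setup(m => m.GetEnumerator()).Returns(() => queryable.GetEnumerator());
    return dbSet;
}

With that helper, CreateDbContext would only need context.Setup(c => c.Persons).Returns(CreateDbSetMock(GetFakeData()).Object).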

Sure, you could use an in-memory database with a real DbContext, but in that case the service isn't really isolated anymore and the test becomes more of an integration test than a unit test.
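
For completeness, such an in-memory variant could look roughly like this (a sketch; it assumes the Microsoft.EntityFrameworkCore.InMemory package is referenced and that ApplicationDbContext has a constructor accepting DbContextOptions):

[Fact]
public void AllPersonsInMemoryTest()
{
    // arrange: a real DbContext backed by the EF Core in-memory provider
    var options = new DbContextOptionsBuilder<ApplicationDbContext>()
        .UseInMemoryDatabase(databaseName: "AllPersonsInMemoryTest")
        .Options;

    using var context = new ApplicationDbContext(options);
    context.Persons.AddRange(GetFakeData());
    context.SaveChanges();

    var service = new PersonService(context);

    // act
    var count = service.AllPersons().Count();

    // assert
    Assert.Equal(26, count);
}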

The second test of the PersonService is pretty similar to the first one:

[Fact]
public void FindPersonTest()
{
    // arrange
    var context = CreateDbContext();

    var service = new PersonService(context.Object);

    // act
    var person = service.FindPerson(1);

    // assert
    Assert.Equal(1, person.Id);
}

Let's run the tests and see if it's all working as expected:

dotnet test WebToTest.Tests

These four tests passed as well.

Summary

In this tutorial the setup took the biggest part, just to get a running API controller that we can test.

  • We created the solution in the console using the .NET CLI.
  • We added a service layer to encapsulate the data access.
  • We added an EF DbContext to use in the service layer.
  • We registered the services and the DbContext in the DI container.
  • We used the service in the controller to create two actions which return the data.
  • We started the application to be sure all is running fine.
  • We created one test for an action that doesn't return an ActionResult.
  • We created another test for an action that returns an ActionResult.
  • We ran the tests successfully in the console using the .NET CLI.

Not using the DbContext directly in the controller makes it a lot easier to test the controller by passing in a mock service. Why? Because it is easier to fake the service than the DbContext. It also keeps the controller small, which makes maintenance a lot easier later on.
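
As a reminder of the shape the tests rely on, the controller is essentially just a thin wrapper around the injected service. This is only a sketch; the actual controller was built earlier in this post, and the routing attributes here are assumptions:

[ApiController]
[Route("api/[controller]")]
public class PersonController : ControllerBase
{
    private readonly IPersonService _personService;

    // The service is injected, so tests can pass in a Moq proxy instead of the real implementation.
    public PersonController(IPersonService personService)
    {
        _personService = personService;
    }

    [HttpGet]
    public IEnumerable<Person> GetPersons() => _personService.AllPersons();

    [HttpGet("{id}")]
    public ActionResult<Person> GetPerson(int id) => _personService.FindPerson(id);
}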

Faking the DbContext is a bit more effort, but also possible as you saw in the last section.

Please find the complete code sample here on GitHub: https://github.com/JuergenGutsch/unit-testing-aspnetcore3

Conclusion

@Mohammad I hope this post will help you and answer your questions :-)

Using ASP.NET Core, there is no reason not to unit test the most important and critical parts of your application. If needed, you are able to unit test almost everything in your ASP.NET Core application.

Unit testing is no magic, but it is also not the general solution to ensure the quality of your app. To ensure that all tested units work together, you definitely need to have some integration tests.

I'll do another post about integration tests using ASP.NET Core 3.0 soon.

Albert Weinert: My Twitch.TV Channel Is Now Live

Live and in color

Last year, after years of hesitation, I finally started live coding on Twitch.TV. I had been planning to do this forever. Tom Wendel spotted this trend early on and streamed many hours live with Octo Awesome and other projects. I wanted to do that too – it can't be that hard, can it?

But questions like "does anyone even want to see me, let alone hear me?" kept me sitting in my comfort zone for a long time. However, partly because of Fritz & Friends and CodeRushed, I increasingly told myself: just do it.

Then I took the step, and I think it is developing splendidly. Of course I am still practicing everything around streaming, including how I present myself in front of the camera and what I say.

If you are interested, just drop by.

Where can you find it?

What has there been so far?

For example, a conference site is being developed live as a PWA. It is meant to let conference attendees put together their session plan for the day. The open-source license it will be released under is still being defined, but it will be freely usable.

There have also already been two explainer videos.

One in which I talk ad hoc about the ASP.NET Core options system.

And one in which Mike Bild explains GraphQL to me.

A small tool, the Browser Deflector, was also developed and deployed to GitHub via Azure DevOps. With it you can launch specific browsers for specific URLs.

What is planned?

Quite a lot is planned. I want to do more live streams like the one with Mike Bild, where experts talk about a topic in dialogue and show something alongside it.

Live coding on various projects will also continue:

  • Completing the conference app
  • Developing a family calendar à la Dakboard
  • A perseverance counter based on an Arduino and LED strips (at the request of one particular lady)
  • Something else

All of this without guarantee ;)

An ASP.NET Core Authentication & Authorization deep dive is also planned, but only once I have 100 followers on my Twitch channel.

If you are interested in it, you can already submit your questions and challenges here in advance so that I can take them into account.

Anything else?

Twitch is an Amazon service; if you have Amazon Prime you can link your Twitch account to it and watch ad-free.

During the stream there is a live chat where you can interact with me and others. A Twitch account is required to write in it.

I would be happy if even more people joined the live streams and, of course, followed me there. That way Twitch will notify you, if you want, when a stream starts. There are no regular times yet, but special events like the one with Mike or the deep dive will of course be scheduled accordingly.

Holger Schwichtenberg: Exporting all graphics files from a Word document

With a PowerShell script invoked via the context menu, you quickly get a folder containing all the graphics embedded in a Word DOCX file.
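
The trick behind such a script is that a DOCX file is simply a ZIP archive whose embedded images live under word/media. As a rough illustration of the same idea in C# (the original post uses a PowerShell script; the file paths here are made up):

using System.IO;
using System.IO.Compression;

class ExportDocxImages
{
    static void Main()
    {
        // Hypothetical paths, for illustration only.
        const string docxPath = @"C:\temp\document.docx";
        const string targetDir = @"C:\temp\document-images";
        Directory.CreateDirectory(targetDir);

        // A DOCX is a ZIP archive; embedded graphics are stored under word/media.
        using var archive = ZipFile.OpenRead(docxPath);
        foreach (var entry in archive.Entries)
        {
            if (entry.FullName.StartsWith("word/media/") && entry.Name.Length > 0)
            {
                entry.ExtractToFile(Path.Combine(targetDir, entry.Name), overwrite: true);
            }
        }
    }
}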

Norbert Eder: Goals for 2019

As every year, I am setting myself a few goals for the coming year. Some of them I want to make publicly accessible here; some will exist only for me (or for a non-public context).

Software development

The coming year will most likely revolve around .NET Core, Golang, containers, serverless and IoT. Several large projects are coming up, which will require a lot of architectural as well as security-related work. The goals here lie less with the individual technologies and more with the scale and the architectural and performance challenges that come with it.

Measurable goal: more technical articles here on the blog than in 2018.

Photography

I traveled a lot in 2018 and some great photos came out of it. However, I did not find the time to present them online. In 2019 I want to invest more in my portfolios and spruce them up. My travel reports are also set to return.

Since I am already in "portrait mode", I will follow up on that and invest more in this topic.

Besides photography, I also want to spend more time on video. This area brings many other topics with it, such as sound, editing software and so on. I want to document my journey and thereby provide an insight into this topic.

There will also be some tutorials on image editing, for which I will be working with the software Luminar.

Blog

Since some tasks were left undone last year, I want to finish them in 2019. That means less photography on my website and more software development, IT and technology again. Photography is being "pushed out" to my website https://norberteder.photography. The restructuring has already begun.

In my 2018 review I announced the end of my project #fotomontag – at least on this website. It continues, somewhat changed, but it continues – likewise on https://norberteder.photography.

Reading

After raising my original goal of 15 books to 50 fairly quickly last year, I want to start this year with a goal of 25 books. You can again follow my progress on https://goodreads.com.

Finally, I wish you a wonderful 2019 – lots of health, happiness and success.

The post Ziele 2019 first appeared on Norbert Eder.

Norbert Eder: 2018 in Review

2018 in Review

The years fly by; another one is already over. As every year, I want to look back and reflect on the past 365 days. Naturally, I don't want to lose sight of my goals for 2018 and will factor them into the assessment.

Software development

As planned, I spent a lot of time in 2018 on .NET Core, Angular and the Internet of Things (IoT). In addition, I once again looked beyond my own horizon and dipped into Golang. Golang in particular is turning into one of my personal favorite programming languages.

A great deal of energy went into Docker, microservices and orchestration in 2018.

Photography

In 2018 I again managed to publish a photo every Monday for my project #fotomontag. After 209 published photos in 4 years, however, this era is coming to an end.

In addition, a lot has happened for me in photography in particular:

  • Numerous informative posts on https://norberteder.photography; the portfolio was also reworked and expanded
  • I keep a list of my photo equipment on my blog; it had to be updated once again :)
  • This year I finally managed to do numerous portrait/model shoots. I gained really great experience and developed further. Many thanks at this point to everyone I was able to work with.
  • Finally, there were a lot of photo trips: Piran/Portoroz, Dubrovnik, Kotor, Nuremberg, the Baltic Sea, Dresden, Budapest and Prague.

As you can see, a lot has really happened – more than I had hoped for or planned.

Blog

I had planned more for the blog. I did clean out and update a few articles, but I wanted to blog much more about software development again. Between work on the one hand and the rather time-consuming hobby of photography on the other, there was simply too little time left, and I ended up using it elsewhere.

As of this year, the comment function is no longer available. This is not due to the GDPR but rather to quality. I now receive feedback through other channels (mainly e-mail), and it is of higher quality, because giving feedback simply takes more effort.

A heartfelt thank you to my loyal readers, who stick with me and keep giving feedback.

Books

Since this year I have been managing my books on goodreads. I had the account for a long time, but its potential remained hidden from me for quite a while. Well, as of this year I am using the platform.

For 2018 I had planned to read 15 books. In the end it was 50 (not exclusively technical or non-fiction books).

Top 5 posts

Conclusion

All in all, 2018 held numerous challenges. It was not always easy, but there was once again a lot to learn – and that is important and good.

The post Rückblick 2018 first appeared on Norbert Eder.

Alexander Schmidt: Azure Active Directory B2C – Part 2

This part focuses mainly on identity providers, policies and user attributes.

Norbert Eder: Visual Studio 2017: Service Fabric templates are not displayed

You have installed the Azure Service Fabric SDK, but the project template does not show up in Visual Studio 2017, so you cannot create a new project? In that case the Service Fabric Tools of the Azure development workload may not be installed:

Install the Service Fabric Tools for Visual Studio 2017

In the Visual Studio Installer, enable the Azure development workload as well as the Service Fabric Tools.

After installing them and restarting Visual Studio 2017, the templates are available.

Service Fabric Application Template | Visual Studio 2017

Have fun developing.

The post Visual Studio 2017: Service Fabric templates are not displayed first appeared on Norbert Eder.

Holger Schwichtenberg: Creating a Git repository in Azure DevOps from the command line

For the automated administration of Azure DevOps, Microsoft provides a command-line tool called "VSTS CLI".

Christina Hirth : Base your decisions on heuristics and not on gut feeling

As developers, we very often tackle problems which can be solved in various ways. It is ok not to know how to solve a problem. The real question is: how do we decide which way to go? 😯

In these situations I often have a gut feeling rather than a concrete logical reason for my decisions. These gut feelings are in most cases correct – but that fact doesn't help me if I want to discuss them with others. It is not enough to KNOW something. If you are not a nerd from the 80s (working alone in a den), it is crucial to be able to formulate, explain and share the thoughts leading to those decisions.

I finally found a solution for this problem when I saw Mathias Verraes' session about Design Heuristics at KanDDDinsky.

The biggest takeaway seems to be a no-brainer, but it makes a huge difference: formulate and visualize your heuristics so that you can talk about concrete ideas instead of having to memorize everything that was said – or what you think was said.

Using this methodology …

  • … unfounded opinions like “I think this is good and this is bad” are not discussed; the question is why something is good or bad.
  • … looping back to subjects that were already discussed is avoided.
  • … the participants can see all criteria at once.
  • … the participants can weigh the heuristics and thus find what is probably the best solution.

What is necessary for this method? Actually nothing but a whiteboard and/or some stickies. And maybe to take some time beforehand to list your design heuristics. These are mine (for now):

  • Is this a solution for my problem?
  • Do I have to build it or can I buy it?
  • Can it be rolled out without breaking either my features or anything else outside of my control?
  • Does it break any architecture rules or clean code rules? If so, do I have a valid reason to break them?
  • Can it lead to security leaks?
  • Is it over-engineered?
  • Is it much too simple; does it feel like a shortcut?
  • If it is a shortcut, can it be corrected in the near future without having to throw everything away? In other words: does my shortcut drive my code in the right direction, just in a more shallow way?
  • Does this solution introduce a new stack, i.e. a new, unknown complexity?
  • Is it fast enough (for now and the near future)?
  • … to be continued 🙂

The video for the talk can be found here. It was a workshop disguised as a talk (thanks again, Mathias!!); we could have continued for another hour if it weren't for the cold beer waiting 🙂

David Tielke: DDC 2018 – Contents of my workshop and the DevSessions