Holger Schwichtenberg: Free talk on PowerShell Core 6.1 on November 7, 2018 in Gelsenkirchen

The Dotnet-Doktor shows the differences between the old Windows PowerShell 5.1 and PowerShell Core 6.1 at the ".NET Developers Ruhr" user group in Gelsenkirchen and demonstrates some of its use cases.

Jürgen Gutsch: Customizing ASP.​NET Core Part 08: ModelBinders

In the last post about OutputFormatters I wrote about sending data out to the clients in different formats. In this post we are going to do it the other way around. This post is about the data that comes into your Web API from outside. What if you get data in a special format, or what if you get data you need to validate in a special way? ModelBinders will help you handle this.

The series topics

About ModelBinders

ModelBinders are responsible for binding the incoming data to specific action method parameters. They bind the data sent with the request to the parameters. The default binders are able to bind data that is sent via the query string or within the request body. Within the body, the data can be sent in URL-encoded form format or as JSON.

The model binding tries to find the values in the request by the parameter names. The form values, the route data and the query string values are stored as a key-value-pair collection, and the binding tries to find the parameter names in the keys of the collection.
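
For example, with the default binders a request like GET /api/persons?id=5&city=Berlin would bind to an action like the following sketch, simply because the parameter names match the query string keys:

[HttpGet]
public ActionResult<string> Get(int id, string city)
{
    // id = 5 and city = "Berlin" get bound from the query string keys
    return $"Person {id} from {city}";
}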

Preparation of the test project

In this post I'd like to send CSV data to a Web API method. I will reuse the CSV data we created in the last post:

Id,FirstName,LastName,Age,EmailAddress,Address,City,Phone
48,Samantha,White,18,Angel.Morgan@shaw.ca,"8202 77th Street ",Mascouche,(682) 381-4092
1,Eric,Wright,2,Briana.Ross@gmx.com,"8104 Scott Avenue ",Canutillo,(253) 366-5637
55,Amber,Watson,46,Sarah.Foster@gmx.com,"9206 Lewis Avenue ",Coleman,(632) 375-4415
99,Alexander,King,59,Ross.Timms@live.com,"3089 Paerdegat 7th Street ",Monte Alto,(366) 319-4154
69,Autumn,Hayes,25,Mark.Diaz@shaw.ca,"3263 Avenue O  ",Montreal West (Montréal-Ouest),(283) 438-7801
94,Destiny,James,47,Kylie.Walker@telus.net,"1057 14th Street ",Montreal,(570) 574-3208
59,Christina,Bennett,87,Madeline.Adams@att.com,"5672 19th Lane ",Corrigan,(467) 304-0309
71,Isaac,Hayes,33,Trevor.Robinson@hotmail.com,"9707 Langham Street ",Huntington,(635) 317-0231
23,Jason,Morgan,77,Jennifer.Powell@rogers.ca,"4413 Debevoise Avenue ",Pinole,(265) 467-1984
43,Jenna,Brandzin,92,Natalie.Reed@gmail.com,"4691 Sea Breeze Avenue ",Cushing-Douglass,(502) 427-9135
79,Madison,Verstraete,69,Abigail.Wright@hotmail.com,"2066 104th Street ",Moose Lake,(448) 423-7550
80,Lorrie,Long,89,Melissa.Bennett@microsoft.com,"3048 Allen Avenue ",Munday,(576) 707-6183
79,Alejandro,Daeninck,51,Matthew.Phillips@att.com,"9997 41st Street ",North Bay,(455) 297-2648
14,Makayla,Clark,44,Joshua.Jackson@rogers.ca,"4518 Folsom Place ",Cortland,(772) 692-0732
12,Isaac,Sanchez,37,Paige.MacKenzie@live.com,"2094 Mc Kenny Street ",Brockville,(563) 735-0233
68,Jesus,Brandzin,34,Molly.Clark@telus.net,"3532 Durland Place ",Comfort,(627) 319-9704
59,Logan,Howard,59,Jorge.Brandzin@rogers.ca,"3458 Wythe Avenue ",Enderby,(226) 520-9653
48,Nathaniel,Richardson,58,Amanda.Pitt@gmail.com,"6926 Sunnyside Court ",Los Altos Hills,(513) 338-4602
34,Tiffany,Miller,18,Claire.Alexander@att.com,"1985 Devon Avenue ",Sansom Park,(357) 274-3606

So let's start by creating a new project using the .NET CLI:

dotnet new webapi -n ModelBinderSample -o ModelBinderSample

This creates a new Web API project.

In this new project I created a new controller with a small action inside:

namespace ModelBinderSample.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class PersonsController : ControllerBase
    {
        public ActionResult<object> Post(IEnumerable<Person> persons)
        {
            return new
            {
                ItemsRead = persons.Count(),
                Persons = persons
            };
        }
    }
}

This looks basically like any other action. It accepts a list of persons and returns an anonymous object that contains the number of persons as well as the list itself. This action is pretty useless, but helps us debug the ModelBinder using Postman.

We also need the Person class:

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string EmailAddress { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }
}

This would actually work fine if we sent JSON-based data to that action.

As a last preparation step, we need to add the CsvHelper NuGet package to make parsing the CSV data easier. I also love to use the .NET CLI here:

dotnet add package CsvHelper

Creating a CsvModelBinder

To create the ModelBinder, add a new class called CsvModelBinder, which implements the IModelBinder interface. The next snippet shows a generic binder that should work with any list of models:

public class CsvModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        if (bindingContext == null)
        {
            throw new ArgumentNullException(nameof(bindingContext));
        }

        // Specify a default argument name if none is set by ModelBinderAttribute
        var modelName = bindingContext.ModelName;
        if (String.IsNullOrEmpty(modelName))
        {
            modelName = "model";
        }

        // Try to fetch the value of the argument by name
        var valueProviderResult = bindingContext.ValueProvider.GetValue(modelName);
        if (valueProviderResult == ValueProviderResult.None)
        {
            return Task.CompletedTask;
        }

        bindingContext.ModelState.SetModelValue(modelName, valueProviderResult);

        var value = valueProviderResult.FirstValue;
        // Check if the argument value is null or empty
        if (String.IsNullOrEmpty(value))
        {
            return Task.CompletedTask;
        }

        var stringReader = new StringReader(value);
        var reader = new CsvReader(stringReader);

        var modelElementType = bindingContext.ModelMetadata.ElementType;
        var model = reader.GetRecords(modelElementType).ToList();

        bindingContext.Result = ModelBindingResult.Success(model);

        return Task.CompletedTask;
    }
}

In the method BindModelAsync we get the ModelBindingContext with all the information we need to get the data and to deserialize it.

First the context gets checked against null values. After that we set a default argument name of model, if none is specified. Once this is done, we are able to fetch the value by the name we previously set.

If there's no value, we shouldn't throw an exception. The reason is that maybe the next configured ModelBinder is responsible. If we throw an exception, the execution of the current request is broken and the next configured ModelBinder doesn't get the chance to be executed.

With a StringReader we read the value into the CsvReader and deserialize it to the list of models. We get the type for the deserialization out of the ModelMetadata property. This contains all the relevant information about the current model.

Using the ModelBinder

The binder isn't used automatically, because it isn't registered in the dependency injection container and isn't configured for use within the MVC framework.

The easiest way to use this model binder is to use the ModelBinderAttribute on the argument of the action where the model should be bound:

[HttpPost]
public ActionResult<object> Post(
    [ModelBinder(binderType: typeof(CsvModelBinder))] 
    IEnumerable<Person> persons)
{
    return new
    {
        ItemsRead = persons.Count(),
        Persons = persons
    };
}

Here the type of our CsvModelBinder is set as binderType to that attribute.

Steve Gordon wrote about a second option in his blog post: Custom ModelBinding in ASP.NET MVC Core. He uses a ModelBinderProvider to add the ModelBinder to the list of existing ones.
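
Just to sketch the idea (this is a simplified variant, not Steve's exact code): a provider inspects the model metadata, decides whether the CsvModelBinder applies, and otherwise lets the next provider handle the model:

public class CsvModelBinderProvider : IModelBinderProvider
{
    public IModelBinder GetBinder(ModelBinderProviderContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException(nameof(context));
        }

        // only handle enumerable models; everything else falls
        // through to the next registered provider
        if (context.Metadata.IsEnumerableType)
        {
            return new CsvModelBinder();
        }

        return null;
    }
}

The provider then gets inserted at the top of the provider list in ConfigureServices:

services.AddMvc(options =>
{
    options.ModelBinderProviders.Insert(0, new CsvModelBinderProvider());
});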

I personally prefer the explicit declaration, because most custom ModelBinders will be pretty specific to an action or to a specific type, and there's no hidden magic in the background.

Testing the ModelBinder

To test it, we need to create a new request in Postman. I set the request type to POST and put the URL https://localhost:5001/api/persons in the address bar. Now I need to add the CSV data to the body of the request. Because it is a URL-encoded form body, I need to put the data into a persons variable:

persons=Id,FirstName,LastName,Age,EmailAddress,Address,City,Phone
48,Samantha,White,18,Angel.Morgan@shaw.ca,"8202 77th Street ",Mascouche,(682) 381-4092
1,Eric,Wright,2,Briana.Ross@gmx.com,"8104 Scott Avenue ",Canutillo,(253) 366-5637
55,Amber,Watson,46,Sarah.Foster@gmx.com,"9206 Lewis Avenue ",Coleman,(632) 375-4415

After pressing send, I got the result as shown below:

Now clients are able to send CSV-based data to the server.
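
If you prefer the command line over Postman, a roughly equivalent request could be sent with curl. A sketch, assuming the CSV lines are stored in a file called persons.csv: --data-urlencode encodes the file content into the persons form variable, and -k accepts the development certificate:

curl -k https://localhost:5001/api/persons --data-urlencode "persons@persons.csv"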

Conclusion

This is a good way to transform the input into the shape the action really needs. You could also use ModelBinders to do some custom validation against the database or whatever you need to do before the model gets passed to the action.

To learn more about ModelBinders, you should have a look at the pretty detailed documentation.

While playing around with the ModelBinderProvider Steve describes in his blog, I stumbled upon InputFormatters. Would this actually be the right way to transform CSV input into objects? I definitely need to learn some more details about InputFormatters and will use this as the 12th topic of this series.

Please follow the introduction post of this series to find additional customizing topics I will write about.

In the next part I will show you what you can do with ActionFilters: Customizing ASP.NET Core Part 09: ActionFilter (not yet done)

Jürgen Gutsch: Customizing ASP.​NET Core Part 07: OutputFormatter

In this seventh post I want to write about how to send your data in different formats and types to the client. By default, an ASP.NET Core Web API sends the data as JSON, but there are some more ways to send it.

The series topics

About OutputFormatters

OutputFormatters are classes that turn your data into a different format to send it through HTTP to the clients. Web API uses a default OutputFormatter to turn objects into JSON, which is the default format to send data in a structured way. Other built-in formatters are an XML formatter and a plain text formatter.

With the so-called content negotiation, the client is able to decide which format it wants to retrieve. The client needs to specify the content type of the format in the Accept header. The content negotiation is implemented in the ObjectResult.

By default the Web API always returns JSON, even if you accept text/xml in the header. This is because the built-in XML formatter is not registered by default. There are two ways to add the XmlSerializerOutputFormatter to ASP.NET Core:

services.AddMvc()
    .AddXmlSerializerFormatters();

or

services.AddMvc(options =>
{
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
});

There is also an XmlDataContractSerializerOutputFormatter available.
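
It gets registered the same way, in case you need the DataContractSerializer behavior instead:

services.AddMvc(options =>
{
    options.OutputFormatters.Add(new XmlDataContractSerializerOutputFormatter());
});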

Also, any Accept header gets turned into application/json by default. If you want to allow the clients to accept different headers, you need to switch that translation off:

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
});

To try the formatters, let's set up a small test project.

Prepare a test project

Using the console we will create a small ASP.NET Core Web API project. Execute the following commands line by line:

dotnet new webapi -n WebApiTest -o WebApiTest
cd WebApiTest
dotnet add package GenFu
dotnet add package CsvHelper

This creates a new Web API project and adds two NuGet packages to it. GenFu is an awesome library to easily create test data. The second one helps us to easily write CSV data.

Now open the project in Visual Studio or in Visual Studio Code and open the ValuesController.cs and change the Get() method like this:

[HttpGet]
public ActionResult<IEnumerable<Person>> Get()
{
	var persons = A.ListOf<Person>(25);
	return persons;
}

This creates a list of 25 persons using GenFu. The properties get automatically filled with almost realistic data. You'll see the magic of GenFu and the results later on.

In the Models folder create a new file Person.cs with the Person class inside:

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public string EmailAddress { get; set; }
    public string Address { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }
}

Open the Startup.cs as well, add the XML formatter, and allow other Accept headers as described earlier:

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
});

That's it for now. Now you are able to retrieve the data from the Web API. Start the project by using the dotnet run command.

The best tools to test a Web API are Fiddler or Postman. I prefer Postman because it is easy to use. In the end it doesn't matter which tool you use. In these demos I'm going to use Postman.

Inside Postman I create a new request. I write the API URL into the address field, which is https://localhost:5001/api/values, and I add a header with the key Accept and the value application/json.

After I press send I will see the JSON result in the response body below:

Here you can see the auto-generated values. GenFu fills in the data based on the property type and the property name. So it puts real first names and last names, as well as real cities and phone numbers, into the Person properties.

Now let's test the XML output formatter.

In Postman change the Accept header from application/json to text/xml and press send:

We now have an XML formatted output.

Now let's go a step further and create some custom OutputFormatters.

Custom OutputFormatters

The plan is to create a VCard output to be able to import the person contacts directly into Outlook or any other contact database that supports VCards. Later in this section we also want to create a CSV output formatter.

Both are text based output formatters and will derive from TextOutputFormatter. Create a new class in a new file called VcardOutputFormatter.cs:

public class VcardOutputFormatter : TextOutputFormatter
{
    public string ContentType { get; }

    public VcardOutputFormatter()
    {
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/vcard"));

        SupportedEncodings.Add(Encoding.UTF8);
        SupportedEncodings.Add(Encoding.Unicode);
    }

    // optional, but makes sense to restrict to a specific condition
    protected override bool CanWriteType(Type type)
    {
        if (typeof(Person).IsAssignableFrom(type) 
            || typeof(IEnumerable<Person>).IsAssignableFrom(type))
        {
            return base.CanWriteType(type);
        }
        return false;
    }

    // this needs to be overwritten
    public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
    {
        var serviceProvider = context.HttpContext.RequestServices;
        var logger = serviceProvider.GetService(typeof(ILogger<VcardOutputFormatter>)) as ILogger;

        var response = context.HttpContext.Response;

        var buffer = new StringBuilder();
        if (context.Object is IEnumerable<Person>)
        {
            foreach (var person in context.Object as IEnumerable<Person>)
            {
                FormatVcard(buffer, person, logger);
            }
        }
        else
        {
            var person = context.Object as Person;
            FormatVcard(buffer, person, logger);
        }
        return response.WriteAsync(buffer.ToString());
    }

    private static void FormatVcard(StringBuilder buffer, Person person, ILogger logger)
    {
		buffer.AppendLine("BEGIN:VCARD");
		buffer.AppendLine("VERSION:2.1");
		buffer.AppendLine($"FN:{person.FirstName} {person.LastName}");
		buffer.AppendLine($"N:{person.LastName};{person.FirstName}");
		buffer.AppendLine($"EMAIL:{person.EmailAddress}");
		buffer.AppendLine($"TEL;TYPE=VOICE,HOME:{person.Phone}");
		buffer.AppendLine($"ADR;TYPE=home:;;{person.Address};{person.City}");            
		buffer.AppendLine($"UID:{person.Id}");
		buffer.AppendLine("END:VCARD");
		logger.LogInformation($"Writing {person.FirstName} {person.LastName}");
    }
}

In the constructor we need to specify the supported media types and encodings. In the method CanWriteType() we need to check whether the current type is supported within this output formatter. Here we only want to format a single Person or a list of Persons.

The method WriteResponseBodyAsync() then actually writes the list of persons out to the response stream via a StringBuilder.

Finally, we need to register the new VcardOutputFormatter in the Startup.cs:

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    
    // register the VcardOutputFormatter
    options.OutputFormatters.Add(new VcardOutputFormatter()); 
});

Start the app again using dotnet run. Now change the Accept header to text/vcard and let's see what happens:

We now should see our data in the VCard format.
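
For a single person, the response body then looks roughly like this (illustrative values; the actual names and numbers are randomly generated by GenFu):

BEGIN:VCARD
VERSION:2.1
FN:Samantha White
N:White;Samantha
EMAIL:Angel.Morgan@shaw.ca
TEL;TYPE=VOICE,HOME:(682) 381-4092
ADR;TYPE=home:;;8202 77th Street;Mascouche
UID:48
END:VCARD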

Let's do the same for a CSV output. We already added the CsvHelper library to the project, so you can just copy the next snippet into your project:

public class CsvOutputFormatter : TextOutputFormatter
{
    public string ContentType { get; }

    public CsvOutputFormatter()
    {
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/csv"));

        SupportedEncodings.Add(Encoding.UTF8);
        SupportedEncodings.Add(Encoding.Unicode);
    }

    // optional, but makes sense to restrict to a specific condition
    protected override bool CanWriteType(Type type)
    {
        if (typeof(Person).IsAssignableFrom(type)
            || typeof(IEnumerable<Person>).IsAssignableFrom(type))
        {
            return base.CanWriteType(type);
        }
        return false;
    }

    // this needs to be overwritten
    public override Task WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)
    {
        var serviceProvider = context.HttpContext.RequestServices;
        var logger = serviceProvider.GetService(typeof(ILogger<CsvOutputFormatter>)) as ILogger;

        var response = context.HttpContext.Response;

        var streamWriter = new StreamWriter(response.Body);
        var csv = new CsvWriter(streamWriter);

        if (context.Object is IEnumerable<Person>)
        {
            var persons = context.Object as IEnumerable<Person>;
            csv.WriteRecords(persons);
        }
        else
        {
            var person = context.Object as Person;
            csv.WriteRecord<Person>(person);
        }

        // flush the StreamWriter, otherwise the buffered output
        // might never reach the response stream
        streamWriter.Flush();

        return Task.CompletedTask;
    }
}

This almost works the same way. We can pass the response stream via a StreamWriter directly into the CsvWriter. After that, we are able to feed the writer with a single person or the list of persons. At the end we flush the StreamWriter, otherwise the buffered output might never reach the response stream. That's it.

We also need to register the CsvOutputFormatter before we can test it.

services.AddMvc(options =>
{
    options.RespectBrowserAcceptHeader = true; // false by default
    options.OutputFormatters.Add(new XmlSerializerOutputFormatter());
    
    // register the VcardOutputFormatter
    options.OutputFormatters.Add(new VcardOutputFormatter()); 
	// register the CsvOutputFormatter
    options.OutputFormatters.Add(new CsvOutputFormatter()); 
});

In Postman change the Accept header to text/csv and press send again:

Conclusion

Isn't that cool? I really like this way of changing the format based on the Accept header. This way you are able to create a Web API for many different clients that accept many different formats. There are still a lot of potential clients out there that don't use JSON and prefer XML or CSV.

The other way around would be an option to consume CSV or any other format inside the Web API. Let's assume your client sends you a list of persons in CSV format. How would you solve this? Parsing the string manually in the action method would work, but it's not a nice option. This is what ModelBinders can do for us. Let's see how this works in the next chapter about Customizing ASP.NET Core Part 08: ModelBinders.

Holger Schwichtenberg: Creating Azure services via PowerShell

A PowerShell script quickly creates multiple web servers and databases in Microsoft's Azure cloud.

Golo Roden: Linked lists in SQL

Common Table Expressions (CTEs) enable recursive queries on SQL tables. Among other things, this is handy for storing and querying singly linked lists efficiently in relational databases.

Jürgen Gutsch: Customizing ASP.​NET Core Part 06: Middlewares

Wow, it is already the sixth part of this series. In this post I'm going to write about middlewares and how you can use them to customize your app a little more. I'll quickly go through the basics of middlewares, and then I'll write about some more special things you can do with them.

The series topics

About middlewares

Most of you already know what middlewares are, but some of you maybe don't. Even if you have already used ASP.NET Core for a while, you don't really need to know the details about middlewares, because they are mostly hidden behind nicely named extension methods like UseMvc(), UseAuthentication(), UseDeveloperExceptionPage() and so on. Every time you call a Use method in the Configure method of the Startup.cs, you implicitly use at least one, or maybe more, middlewares.

A middleware is a piece of code that handles the request pipeline. Imagine the request pipeline as a huge tube where you call something in and an echo comes back. The middlewares are responsible for creating this echo, manipulating the sound, enriching the information, or handling the source sound or the echo.

Middlewares are executed in the order they are configured. The first configured middleware is the first that gets executed.

In an ASP.NET Core web, if the client requests an image or any other static file, the StaticFileMiddleware searches for that resource and returns it if it finds one. If not, this middleware does nothing except call the next one. If there is no last middleware that handles the request, the request returns nothing. The MvcMiddleware also checks the requested resource, tries to map it to a configured route, executes the controller, creates a view and returns an HTML or Web API result. If the MvcMiddleware doesn't find a matching controller, it will return a result anyway, in this case a 404 status result. It returns an echo in any case. This is why the MvcMiddleware is the last configured middleware.

(Image source: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?view=aspnetcore-2.1)

An exception handling middleware is usually one of the first configured middlewares, not because it gets executed first, but because it gets executed last. The first configured middleware is also the last one when the echo comes back through the tube. An exception handling middleware validates the result and displays a possible exception in a browser and client friendly way. This is where a runtime error becomes a 500 status.

You are able to see how the pipeline is executed if you create an empty ASP.NET Core application. I usually use the console and the .NET CLI tools:

dotnet new web -n MiddleWaresSample -o MiddleWaresSample
cd MiddleWaresSample

Open the Startup.cs with your favorite editor. It should be pretty empty compared to a regular ASP.NET Core application:

public class Startup
{
    // This method gets called by the runtime. Use this method to add services to the container.
    // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
    public void ConfigureServices(IServiceCollection services)
    {
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        
        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}

Here the DeveloperExceptionPageMiddleware is used, along with a special lambda middleware that only writes "Hello World!" to the response stream. The response stream is the echo I wrote about previously. This special middleware stops the pipeline and returns something as an echo, so it is the last one.

Leave this middleware in place and add the following lines right before app.Run():

app.Use(async (context, next) =>
{
    await context.Response.WriteAsync("===");
    await next();
    await context.Response.WriteAsync("===");
});
app.Use(async (context, next) =>
{
    await context.Response.WriteAsync(">>>>>> ");
    await next();
    await context.Response.WriteAsync(" <<<<<<");
});

These two calls of app.Use() also create two lambda middlewares, but this time the middlewares call the next one. Each middleware knows the next one and calls it. Both middlewares write to the response stream before and after the next middleware is called. This should demonstrate how the pipeline works: before the next middleware is called, the actual request is handled, and after the next middleware is called, the response (echo) is handled.

If you now run the application (using dotnet run) and open the displayed URL in the browser, you should see a plain text result like this:

===>>>>>> Hello World! <<<<<<===

Does this make sense to you? If yes, let's see how to use this concept to add some additional functionality to the request pipeline.

Writing a custom middleware

ASP.NET Core is based on middlewares. All the logic that gets executed during a request is somehow based on a middleware. So we are able to use this to add custom functionality to the web app. Let's say we want to know the execution time of every request that goes through the request pipeline. I do this by creating and starting a Stopwatch before the next middleware is called, and by stopping the measurement after the next middleware was called:

app.Use(async (context, next) =>
{
    var s = new Stopwatch();
    s.Start();
    
    // execute the rest of the pipeline
    await next();
    
    s.Stop(); //stop measuring
    var result = s.ElapsedMilliseconds;
    
    // write out the milliseconds needed
    await context.Response.WriteAsync($"Time needed: {result }");
});

After that I write out the elapsed milliseconds to the response stream.

If you write some more middlewares, the Configure method in the Startup.cs gets pretty messy. This is why most middlewares are written as separate classes. That could look like this:

public class StopwatchMiddleware
{
    private readonly RequestDelegate _next;

    public StopwatchMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var s = new Stopwatch();
        s.Start();

        // execute the rest of the pipeline
        await _next(context);

        s.Stop(); //stop measuring
        var result = s.ElapsedMilliseconds;

        // write out the milliseconds needed
        await context.Response.WriteAsync($"Time needed: {result }");
    }
}

This way we get the next middleware via the constructor and the current context in the Invoke() method.

Note: the middleware is initialized at the start of the application and exists once during the application lifetime. The constructor gets called once. The Invoke() method, on the other hand, is called once per request.
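
A practical consequence of this lifetime: don't inject scoped services (like a DbContext) into the middleware constructor. Additional dependencies can instead be injected into the Invoke() method, which is resolved once per request. A sketch (IMyScopedService is a hypothetical service):

public async Task Invoke(HttpContext context, IMyScopedService service)
{
    // service is resolved from the request scope, once per request
    service.DoSomething();

    await _next(context);
}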

To use this middleware, there is a generic UseMiddleware() method available that you can use in the Configure method:

app.UseMiddleware<StopwatchMiddleware>();

The more elegant way is to create an extension method that encapsulates this call:

public static class StopwatchMiddlewareExtension
{
    public static IApplicationBuilder UseStopwatch(this IApplicationBuilder app)
    {
        app.UseMiddleware<StopwatchMiddleware>();
        return app;
    }
}

Now you can simply call it like this:

app.UseStopwatch();

This is the way you can provide additional functionality to an ASP.NET Core web app through the request pipeline. You are able to manipulate the request or even the response using middlewares.

The AuthenticationMiddleware, for example, tries to read user information from the request. If it doesn't find any, it asks the client for it by sending a specific response back. If it does find some, it adds the information to the request context and this way makes it available to the entire application.

What else can we do using middlewares?

Did you know that you can divert the request pipeline into two or more branches?

The next snippet shows how to create branches based on specific paths:

app.Map("/map1", app1 =>
{
    // some more middlewares
    
    app1.Run(async context =>
    {
        await context.Response.WriteAsync("Map Test 1");
    });
});

app.Map("/map2", app2 =>
{
    // some more middlewares
    
    app2.Run(async context =>
    {
        await context.Response.WriteAsync("Map Test 2");
    });
});

// some more middlewares

app.Run(async (context) =>
{
    await context.Response.WriteAsync("Hello World!");
});

The path "/map1" is a specific branch that continues the request pipeline inside. The same with "/map2". Both maps have their own middleware configurations inside. All other not specified paths will follow the main branch.

There's also a MapWhen() method to branch the pipeline based on a condition instead of a path:

public void Configure(IApplicationBuilder app)
{
    app.MapWhen(context => context.Request.Query.ContainsKey("branch"),
                app1 =>
    {
        // some more middlewares
    
        app1.Run(async context =>
        {
            await context.Response.WriteAsync("MapBranch Test");
        });
    });

    // some more middlewares
    
    app.Run(async context =>
    {
        await context.Response.WriteAsync("Hello from non-Map delegate. <p>");
    });
}

You can create conditions based on configuration values or as shown here, based on properties of the request context. In this case a query string property is used. You can use HTTP headers, form properties or any other property of the request context.
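
For example, a condition based on an HTTP header could look like this (a sketch; X-Api-Version is just a made-up header name):

app.MapWhen(context => context.Request.Headers.ContainsKey("X-Api-Version"),
            app1 =>
{
    app1.Run(async context =>
    {
        await context.Response.WriteAsync("Versioned API branch");
    });
});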

You are also able to nest the maps to create child and grandchild branches if needed.

Map() or MapWhen() is used to provide a special API or resource based on a specific path or a specific condition. The ASP.NET Core HealthCheck API is done like this: it first uses MapWhen() to check the configured port and then Map() to set the path for the HealthCheck API, or it uses Map() only if no port is specified. At the end the HealthCheckMiddleware is used:

private static void UseHealthChecksCore(IApplicationBuilder app, PathString path, int? port, object[] args)
{
    if (port == null)
    {
        app.Map(path, b => b.UseMiddleware<HealthCheckMiddleware>(args));
    }
    else
    {
        app.MapWhen(
            c => c.Connection.LocalPort == port,
            b0 => b0.Map(path, b1 => b1.UseMiddleware<HealthCheckMiddleware>(args)));
    }
}

(See it here on GitHub)

UPDATE 10/10/2018

After I published this post Hisham asked me a question on Twitter:

Another question that's middlewares related, I'm not sure why I never seen anyone using IMiddleware instead of writing InvokeAsync manually?!!

IMiddleware is new in ASP.NET Core 2.0 and actually I never knew it existed before he tweeted about it. I'll definitely have a deeper look into IMiddleware and will write about it. Until then, you should read Hisham's really good post about it: Why you aren't using IMiddleware?

Conclusion

Most of the ASP.NET Core features are based on middlewares and we are able to extend ASP.NET Core by creating our own middlewares.

In the next two chapters I will have a look into different data types and how to handle them. I will create API output in any format and data type I want and accept data of any type and format. Read the next part about Customizing ASP.NET Core Part 07: OutputFormatter.

Jürgen Gutsch: Customizing ASP.​NET Core Part 05: HostedServices

This fifth part of the series doesn't really show a customization. It is more about a feature you can use to create background services that run tasks asynchronously inside your application. I actually use this feature to regularly fetch data from a remote service in a small ASP.NET Core application.

The series topics

About HostedServices

HostedServices are a new thing in ASP.NET Core 2.0 and can be used to run tasks asynchronously in the background of your application. This can be used to fetch data periodically, do some calculations in the background or do some cleanup. It can also be used to send preconfigured emails or whatever you need to do in the background.

HostedServices are basically simple classes which implement the IHostedService interface:

public class SampleHostedService : IHostedService
{
	public Task StartAsync(CancellationToken cancellationToken)
	{
		return Task.CompletedTask;
	}
	
	public Task StopAsync(CancellationToken cancellationToken)
	{
		return Task.CompletedTask;
	}
}

A HostedService needs to implement a StartAsync() and a StopAsync() method. The StartAsync() method is the place where you implement the logic to execute. It gets executed once, immediately after the application starts. The method StopAsync(), on the other hand, gets executed just before the application stops. This also means that to get a kind of scheduled service, you need to implement it on your own: a loop that executes the code regularly.

To get a HostedService executed you need to register it in the ASP.NET Core dependency injection container as a singleton instance:

services.AddSingleton<IHostedService, SampleHostedService>();
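
Since ASP.NET Core 2.1 there is also a more explicit extension method that does the same registration:

services.AddHostedService<SampleHostedService>();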

To see how a hosted service works, I created the next snippet. It writes a log message on start, on stop and every two seconds to the console:

public class SampleHostedService : IHostedService
{
	private readonly ILogger<SampleHostedService> logger;
	
	// inject a logger
	public SampleHostedService(ILogger<SampleHostedService> logger)
	{
		this.logger = logger;
	}

	public Task StartAsync(CancellationToken cancellationToken)
	{
		logger.LogInformation("Hosted service starting");

		return Task.Factory.StartNew(async () =>
		{
			// loop until a cancellation is requested
			while (!cancellationToken.IsCancellationRequested)
			{
				logger.LogInformation("Hosted service executing - {0}", DateTime.Now);
				try
				{
					// wait for 2 seconds
					await Task.Delay(TimeSpan.FromSeconds(2), cancellationToken);
				}
				catch (OperationCanceledException) { }
			}
		}, cancellationToken);
	}

	public Task StopAsync(CancellationToken cancellationToken)
	{
		logger.LogInformation("Hosted service stopping");
		return Task.CompletedTask;
	}
}

To test this, I simply created a new ASP.NET Core application, placed this snippet inside, registered the HostedService and started the application by calling the next command in the console:

dotnet run

This results in the following console output:

As you can see, the log output is written to the console every two seconds.

Conclusion

You can now start to do some more complex things with HostedServices. But be careful with hosted services, because they all run in the same application. Don't use too much CPU or memory; this could slow down your application.

For bigger applications I would suggest moving such tasks into a separate application that is specialized in executing background tasks: a separate Docker container, a BackgroundWorker on Azure, Azure Functions or something like this. However, it should be separated from the main application in that case.

In the next part I'm going to write about middlewares and how you can use them to implement special logic in the request pipeline, or how you are able to serve specific logic on different paths: Customizing ASP.NET Core Part 06: Middlewares

Jürgen Gutsch: Customizing ASP.​NET Core Part 04: HTTPS

HTTPS is on by default now and is a first-class feature. On Windows, the certificate needed to enable HTTPS is loaded from the Windows certificate store. If you create a project on Linux or Mac, the certificate is loaded from a certificate file.

Even if you want to create a project to run behind IIS or an NGinX web server, HTTPS is enabled. Usually you would manage the certificate on the IIS or NGinX web server in that case. But this shouldn't be a problem, and you shouldn't disable HTTPS in the ASP.NET Core settings.

Managing the certificate within the ASP.NET Core application directly makes sense for services that run behind the firewall and are not accessible from the internet: services like background services for a microservice-based application, or services in a self-hosted ASP.NET Core application.

There are some scenarios where it makes sense to also load the certificate from a file on Windows. This could be an application that you will run in Docker for Windows or in Docker for Linux.

Personally I like the flexible way to load the certificate from a file.

The series topics

Setup Kestrel

As in the first two parts of this blog series, we need to override the default WebHostBuilder a little bit. With ASP.NET Core it is possible to replace the default Kestrel based hosting, for example with a hosting based on an HttpListener. This means the Kestrel web server is somehow configured into the host builder. You are able to add and configure Kestrel manually by calling the UseKestrel() method on the IWebHostBuilder:

public class Program
{
	public static void Main(string[] args)
	{
		CreateWebHostBuilder(args).Build().Run();
	}

	public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
		WebHost.CreateDefaultBuilder(args)
			.UseKestrel(options => 
			{	
			})
			.UseStartup<Startup>();
}

This method accepts an action to configure the Kestrel web server. What we actually need to do is configure the addresses and ports the web server listens on. For the HTTPS port we also need to configure how the certificate should be loaded.

.UseKestrel(options => 
{
	options.Listen(IPAddress.Loopback, 5000);
	options.Listen(IPAddress.Loopback, 5001, listenOptions =>
	{
		listenOptions.UseHttps("certificate.pfx", "topsecret");
	});
})

In this snippet we add two addresses and ports to listen on. The second one is defined as a secure endpoint configured to use HTTPS. The method UseHttps() is overloaded multiple times to load certificates from the Windows certificate store as well as from files. In this case we use a file called certificate.pfx located in the project folder.

Reminder to myself: Replacing the host actually would be an idea for an eleventh part of this series.

To create such a certificate file to play around with this configuration, open the certificate store and export the development certificate created by Visual Studio.
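
Alternatively, since the 2.1 SDK you can export the development certificate from the command line using the dotnet dev-certs tool. A quick sketch (the file name and password here are just chosen for this demo):

dotnet dev-certs https -ep certificate.pfx -p topsecret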

For your safety

Use the following line ONLY to play around with this configuration:

listenOptions.UseHttps("certificate.pfx", "topsecret");

The problem is the hard-coded password. Never ever store a password in a code file that gets pushed to any source code repository. Ensure you load the password from the configuration API of ASP.NET Core. Use user secrets on your local development machine and environment variables on a server. On Azure, use the Application Settings to store the passwords. Passwords will be hidden in the Azure Portal UI if they are marked as passwords.
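
To sketch how that could look: assuming the password is stored under a configuration key called CertificatePassword (for example via dotnet user-secrets set CertificatePassword topsecret), the Kestrel setup could read it from the configuration instead:

.UseKestrel((builderContext, options) =>
{
    // read the certificate password from the configuration API
    // instead of hard coding it
    var certPassword = builderContext.Configuration["CertificatePassword"];

    options.Listen(IPAddress.Loopback, 5001, listenOptions =>
    {
        listenOptions.UseHttps("certificate.pfx", certPassword);
    });
})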

Conclusion

This is just a small customization. Anyway, it helps if you want to share the code between different platforms, if you want to run your application on Docker and don't want to care about certificate stores, etc.

Usually, if you run your application behind a web server like IIS or NGinX, you don't need to care about certificates in your ASP.NET Core application. But you do need to if you host your application inside another application, on Docker, or without IIS or NGinX.

ASP.NET Core has a new feature to run tasks in the background inside the application. To learn more about that, read the next post about Customizing ASP.NET Core Part 05: HostedServices.

Alexander Schmidt: Azure Active Directory B2C – Part 1

Setting up Azure AD B2C.

Jürgen Gutsch: Customizing ASP.​NET Core Part 03: Dependency Injection

In the third part we'll take a look into the ASP.NET Core dependency injection and how to customize it to use a different dependency injection container if needed.

The series topics

Why use a different dependency injection container?

In most projects you don't really need to use a different dependency injection container. The DI implementation in ASP.NET Core supports the basic features, works well and is pretty fast. Anyway, some other DI containers support interesting features you may want to use in your application.

  • Maybe you'd like to create an application that supports modules as lightweight dependencies.
    • E.g. modules you want to put into a specific directory so that they get automatically registered in your application.
    • This could be done with Ninject.
  • Maybe you want to configure the services in a configuration file outside the application, in an XML or JSON file instead of in C# only.
    • This is a common feature of various DI containers, but not yet supported in ASP.NET Core.
  • Maybe you don't want an immutable DI container, because you want to add services at runtime.
    • This is also a common feature of some DI containers.

A look at the ConfigureServices Method

Create a new ASP.NET Core project and open the Startup.cs, and you will find the method to configure the services, which looks like this:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
	services.Configure<CookiePolicyOptions>(options =>
	{
		// This lambda determines whether user consent for non-essential cookies is needed for a given request.
		options.CheckConsentNeeded = context => true;
		options.MinimumSameSitePolicy = SameSiteMode.None;
	});
    
    services.AddTransient<IService, MyService>();

	services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
}

This method gets the IServiceCollection, which is already filled with a bunch of services needed by ASP.NET Core. These services got added by the hosting services and the parts of ASP.NET Core that get executed before the method ConfigureServices is called.

Inside the method some more services get added. First, a configuration class that contains cookie policy options is added to the service collection. In this sample I also add a custom service called MyService that implements the IService interface. After that, the method AddMvc() adds another bunch of services needed by the MVC framework. Up to this point we have around 140 services registered in the IServiceCollection. But the service collection isn't the actual dependency injection container.

The actual DI container is wrapped in the so-called service provider, which is created out of the service collection. The IServiceCollection has an extension method registered to create an IServiceProvider out of the service collection:

IServiceProvider provider = services.BuildServiceProvider();

The ServiceProvider then contains the immutable container that cannot be changed at runtime. With the default method ConfigureServices, the IServiceProvider gets created in the background after this method is called, but it is possible to change the method a little bit:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
    
    services.AddTransient<IService, MyService>(); // custom service
    
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
    
    return services.BuildServiceProvider();
}

I changed the return type to IServiceProvider and return the ServiceProvider created with the method BuildServiceProvider(). This change will still work in ASP.NET Core.

Use a different ServiceProvider

To change to a different or custom DI container, you need to replace the default implementation of the IServiceProvider with a different one. Additionally, you need a way to move the already registered services to the new container.

The next code sample uses Autofac as a third party container. I use Autofac in this snippet because you are easily able to see what is happening here:

public IServiceProvider ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });

    //services.AddTransient<IService, MyService>();

    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

    // create a Autofac container builder
    var builder = new ContainerBuilder();

    // read service collection to Autofac
    builder.Populate(services);

    // use and configure Autofac
    builder.RegisterType<MyService>().As<IService>();

    // build the Autofac container
    ApplicationContainer = builder.Build();

    // creating the IServiceProvider out of the Autofac container
    return new AutofacServiceProvider(ApplicationContainer);
}

// IContainer instance in the Startup class 
public IContainer ApplicationContainer { get; private set; }

Autofac also works with a kind of service collection inside the ContainerBuilder, and it creates the actual container out of the ContainerBuilder. To get the registered services out of the IServiceCollection into the ContainerBuilder, Autofac uses the Populate() method. This copies all the existing services to the Autofac container.

Our custom service MyService now gets registered using the Autofac way.

After that, the container gets built and stored in a property of type IContainer. In the last line of the method ConfigureServices we create an AutofacServiceProvider and pass in the IContainer. This is the IServiceProvider we need to return to use Autofac within our application.

UPDATE: Introducing Scrutor

You don't always need to replace the existing .NET Core DI container to get and use nice features. In the beginning I mentioned the auto registration of services. This can also be done with a nice NuGet package called Scrutor by Kristian Hellang (https://kristian.hellang.com/). Scrutor extends the IServiceCollection to automatically register services in the .NET Core DI container.

"Assembly scanning and decoration extensions for Microsoft.Extensions.DependencyInjection" https://github.com/khellang/Scrutor

Andrew Lock published a pretty detailed blog post about Scrutor. It doesn't make sense to repeat that. Read that awesome post and learn more about it: Using Scrutor to automatically register your services with the ASP.NET Core DI container
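
Just to give you a first impression, a minimal sketch of such an auto registration with Scrutor could look like this; it scans the application assembly and registers all implementations of IService as transient services:

services.Scan(scan => scan
    // scan the assembly that contains the Startup class
    .FromAssemblyOf<Startup>()
    // pick all classes implementing IService
    .AddClasses(classes => classes.AssignableTo<IService>())
    .AsImplementedInterfaces()
    .WithTransientLifetime());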

Conclusion

Using this approach you are able to use any .NET Standard compatible DI container to replace the existing one. If the container of your choice doesn't provide a ServiceProvider, create your own that implements IServiceProvider and uses the DI container inside. If the container of your choice doesn't provide a method to populate the registered services into the container, create your own method: loop over the registered services and add them to the other container.

Actually, that last step sounds easy but can be a hard task, because you need to translate all the possible IServiceCollection registrations into registrations of the different container. The complexity of that task depends on the implementation details of the other container.

Anyway, you have the choice to use any DI container which is compatible with .NET Standard. And you have the choice to change a lot of the default implementations in ASP.NET Core.

The same goes for the default HTTPS behavior on Windows. To learn more about that, please read the next post about Customizing ASP.NET Core Part 04: HTTPS.

Code-Inside Blog: Be afraid of varchar(max) with async EF or ADO.NET

Last month we changed our WCF APIs to async implementations, because we wanted all those glorious scalability improvements in our codebase.

The implementation was quite easy, because most of the time our service layer just ran some simple EntityFramework 6 queries.

The field test went horribly wrong

After we moved most of the code to async we did a small test and it worked quite well. Our gut feeling was OK-ish, because we knew that we hadn't done a full stress test.

As always: Things didn’t work as expected. We deployed the code on our largest customer and it did: Nothing.

100% CPU

We knew that after the deployment we would hit a high load, and at first it seemed to "work" based on the CPU workload, but nothing happened. I checked the SQL monitoring and noticed that the throughput was ridiculously low. One query (which every client needed to execute) caught my attention, because the query itself was super simple, but somehow it was the showstopper for everyone.

The “bad query”

I checked the code and it was more or less something like this (with the help of EntityFramework 6):

var result = await dbContext.Configuration.ToListAsync();

The “Configuration” itself is a super simple table with a Key & Value column.

Be aware that the same code worked OK with the non async implementation!

“Cause”

This call was extremely costly in terms of performance, but why? It turns out that this customer installation had a pretty large configuration. One value was around 10MB, which doesn't sound like much, but if this code is executed in parallel by 5000 clients, it can hurt.

On top of that: the async implementation tries to be smart, but this leads to thousands of task creations, which slow down everything.

This stackoverflow answer really helped me to understand this problem. Just look at those figures:

First, in the first case we were having just 3500 hit counts along the full call path, here we have 118 371. Moreover, you have to imagine all the synchronization calls I didn’t put on the screenshoot…

Second, in the first case, we were having “just 118 353” calls to the TryReadByteArray() method, here we have 2 050 210 calls ! It’s 17 times more… (on a test with large 1Mb array, it’s 160 times more)

Moreover there are:

  • 120 000 Task instances created
  • 727 519 Interlocked calls
  • 290 569 Monitor calls
  • 98 283 ExecutionContext instances, with 264 481 Captures
  • 208 733 SpinLock calls

My guess is the buffering is made in an async way (and not a good one), with parallel Tasks trying to read data from the TDS. Too many Task are created just to parse the binary data. …

Switch to ADO.NET, damn EF, right?

If you are now thinking "Yeah... EF sucks, right? Just use plain ADO.NET!", you will end up in the same mess, because EntityFramework itself uses the default async ADO.NET reader under the hood.

I use EF Core, am I safe?

The same problem applies to EF Core; just check out this comment by the EF team.

How can we solve this problem then?

Solution 1: Async, but with Sequential read

I changed the code to use plain ADO.NET, but with CommandBehavior.SequentialAccess.

This way it seems that the async implementation is much smarter about reading large chunks of data. I'm not an ADO.NET expert, but with the default strategy ADO.NET tries to read the whole row and store it in memory. With sequential access it can use the memory more effectively - at least, it seems to work much better.

Your code also needs to be implemented with sequential access in mind, otherwise it will fail.
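
To give you an idea of the pattern, here is a sketch using the simple Key/Value configuration table from above (note that with sequential access the columns must be read strictly in the order they are returned, and large values should be streamed):

using (var command = new SqlCommand("SELECT [Key], [Value] FROM Configuration", connection))
using (var reader = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess))
{
    while (await reader.ReadAsync())
    {
        // read the columns in order
        var key = reader.GetString(0);

        // stream the potentially large value instead of materializing the whole row
        using (var valueReader = reader.GetTextReader(1))
        {
            var value = await valueReader.ReadToEndAsync();
        }
    }
}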

Solution 2: Avoid large types like nvarchar(max)

This advice comes from the EF team:

Avoid using NTEXT, TEXT, IMAGE, TVP, UDT, XML, [N]VARCHAR(MAX) and VARBINARY(MAX) – the maximum data size for these types is so large that it is very unusual (or even impossible) that they would happen to be able to fit within a single packet.

When we need to store large content, we typically use a separate blob table and stream those values to the clients. This works quite well, but we forgot our "configuration" table :-)

When I look at this problem now it seems obvious, but we had some hard days fixing the issue.

Hope this helps.


Jürgen Gutsch: Customizing ASP.​NET Core Part 02: Configuration

This second part of the blog series about customizing ASP.NET Core is about the application configuration, how to use it and how to customize the configuration to use different ways to configure your app.

The series topics

Configure the configuration

Just like the logging, since ASP.NET Core 2.0 the configuration is hidden in the default configuration of the WebHostBuilder and is not part of the Startup.cs anymore. This is done for the same reason: to keep the Startup clean and simple:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

Fortunately you are also able to override the default settings to customize the configuration in the way you need it.

When you create a new ASP.NET Core project, you already have an appsettings.json and an appsettings.Development.json configured. You can, and you should, use these configuration files to configure your app. You should because this is the preconfigured way, and most ASP.NET Core developers will look for an appsettings.json to configure the application. This is absolutely fine and works pretty well.

But maybe you already have an existing XML configuration or want to share a YAML configuration file between different kinds of applications. This could also make sense. Sometimes it also makes sense to read configuration values out of a database.

The next snippet shows the hidden default configuration that reads the appsettings.json files:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

This configuration also sets the base path of the application and adds the configuration via environment variables. The method ConfigureAppConfiguration accepts a lambda that gets a WebHostBuilderContext and a ConfigurationBuilder passed in.

Whenever you customize the application configuration, you should add the configuration via environment variables as the last step. The order of the configuration matters: later added configuration providers override the previously added configurations. Be sure the environment variables always override the configurations read from files. This way you ensure that the settings you configure for Azure Web Apps via the Application Settings UI on Azure, which are passed to the application as environment variables, always win.

The IConfigurationBuilder has a lot of extension methods to add more configurations like XML or INI configuration files, in-memory configurations and so on. You can find a lot more configuration providers provided by the community to read in YAML files, database values and a lot more. In this demo I'm going to show you how to read INI files in.

Typed configurations

Before trying to read the INI files, it makes sense to show how to use typed configuration instead of reading the configuration via the IConfiguration key by key.

To read a typed configuration you need to define the type to configure. I usually create a class called AppSettings like this:

public class AppSettings
{
    public int Foo { get; set; }
    public string Bar { get; set; }
}

This class can then be filled with a specific configuration section inside the method ConfigureServices in the Startup.cs:

services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));

This way the typed configuration also gets registered as a service in the dependency injection container and can be used everywhere in the application. You are able to create different configuration types per configuration section. In most cases one section should be fine, but maybe it makes sense to divide the settings into different sections.

This configuration can then be used via dependency injection in every part of your application. The next snippet shows how to use the configuration in an MVC controller:

public class HomeController : Controller
{
    private readonly AppSettings _options;

    public HomeController(IOptions<AppSettings> options)
    {
        _options = options.Value;
    }

    // use _options.Foo and _options.Bar inside the action methods
}
The IOptions<AppSettings> is a wrapper around our AppSettings type and the property Value contains the actual instance of the AppSettings including the values from the configuration file.

To try that out, the appsettings.json needs to have the AppSettings section configured, otherwise the values are null or not set.

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AppSettings": {
      "Foo": 123,
      "Bar": "Bar"
  }
}

Configuration using INI files

To also use INI files to configure the application we need to add the INI configuration inside the method ConfigureAppConfiguration in the Program.cs:

config.AddIniFile("appsettings.ini", optional: false, reloadOnChange: true);
config.AddJsonFile($"appsettings.{env.EnvironmentName}.ini", optional: true, reloadOnChange: true);

This code loads the INI files the same way as the JSON configuration files. The first line is a required configuration and the second one an optional configuration depending on the current runtime environment.

The INI file could look like this:

[AppSettings]
Bar="FooBar"

This file also contains a section called AppSettings and a property called Bar. Earlier I wrote that the order of the configuration matters. If you add the two lines to configure via INI files after the configuration via JSON files, the INI files will override the settings from the JSON files. The property Bar gets overridden with "FooBar" and the property Foo stays the same. The values out of the INI file will also be available via the previously created AppSettings class.

Every other configuration provider will work the same way. Let's see what a configuration provider looks like.

Configuration Providers

A configuration provider is an implementation of IConfigurationProvider that gets created by a configuration source, which is an implementation of IConfigurationSource. The configuration provider then reads the data in from somewhere and provides it via a Dictionary.
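
To make the following snippets a little more concrete, here is a rough sketch of such a pair. The names are made up for this demo, and the base class ConfigurationProvider from Microsoft.Extensions.Configuration already provides the Data dictionary used below:

public class MyCustomConfigurationSource : IConfigurationSource
{
    public string SourceConfig { get; set; }
    public bool Optional { get; set; }
    public bool ReloadOnChange { get; set; }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
    {
        return new MyCustomConfigurationProvider(this);
    }
}

public class MyCustomConfigurationProvider : ConfigurationProvider
{
    private readonly MyCustomConfigurationSource _source;

    public MyCustomConfigurationProvider(MyCustomConfigurationSource source)
    {
        _source = source;
    }

    public override void Load()
    {
        // read the data from wherever _source.SourceConfig points to
        // and fill the key-value store behind the configuration
        Data = new Dictionary<string, string>
        {
            ["AppSettings:Foo"] = "42"
        };
    }
}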

To add a custom or third-party configuration provider to ASP.NET Core, you need to call the method Add on the ConfigurationBuilder and pass the configuration source in:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        
        // add new configuration source
        config.Add(new MyCustomConfigurationSource
        {
            SourceConfig = "...", // configure whatever source is needed
            Optional = false,
            ReloadOnChange = true
        });
        
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

Usually you would create an extension method to make adding the configuration source easier:

config.AddMyCustomSource("source", optional: false, reloadOnChange: true);

A really detailed, concrete example of how to create a custom configuration provider was written by fellow MVP Andrew Lock.

Conclusion

In most cases it is not necessary to add a different configuration provider or to create your own, but it's good to know how to change it in case you need it. Also, using typed configuration is a nice way to read the settings. In classic ASP.NET we used a manually created façade to read the application settings in a typed way. Now this is automatically done by just providing a class. This class gets automatically filled and provided via dependency injection.

To learn more about ASP.NET Core Dependency Injection have a look into the next part of the series: Customizing ASP.NET Core Part 03: Dependency Injection

Jürgen Gutsch: Customizing ASP.NET Core Part 01: Logging

In this first part of the new blog series about customizing ASP.NET Core, I will show you how to customize the logging. The default logging only writes to the console or to the debug window. This is quite good for most cases, but maybe you need to log to a sink like a file or a database, or maybe you want to extend the logger with additional information. In such cases you need to know how to change the default logging.

The series topics

Configure logging

In previous versions of ASP.NET Core (pre 2.0) the logging was configured in the Startup.cs. Since 2.0 the Startup.cs was simplified and a lot of configurations were moved to a default WebHostBuilder, which is called in the Program.cs. The logging was also moved to the default WebHostBuilder:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

In ASP.NET Core you are able to override and customize almost everything, and so you can with the logging. The IWebHostBuilder has a lot of extension methods to override the default behavior. To override the default settings for the logging, we need to choose the ConfigureLogging method. The next snippet shows exactly the same logging as is configured inside the CreateDefaultBuilder() method:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureLogging((hostingContext, logging) =>
    {
        logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
        logging.AddConsole();
        logging.AddDebug();
    })                
    .UseStartup<Startup>();

This method needs a lambda that gets a WebHostBuilderContext, which contains the hosting context, and a LoggingBuilder to configure the logging.
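
The lambda is also the right place to adjust the defaults. The next snippet is a small sketch that raises the minimum level and quiets the framework categories, using the SetMinimumLevel and AddFilter methods of the LoggingBuilder:

WebHost.CreateDefaultBuilder(args)
    .ConfigureLogging((hostingContext, logging) =>
    {
        logging.SetMinimumLevel(LogLevel.Information);
        // only warnings and errors from the Microsoft.* categories
        logging.AddFilter("Microsoft", LogLevel.Warning);
        logging.AddConsole();
    })
    .UseStartup<Startup>();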

Create a custom logger

To demonstrate a custom logger, I created a small, useless logger that is able to colorize log entries with a specific log level in the console. This so-called ColoredConsoleLogger will be added and created using a LoggerProvider, which we also need to write on our own. To specify the color and the log level to colorize, we need to add a configuration class. The next snippet shows all three parts (Logger, LoggerProvider and Configuration):

public class ColoredConsoleLoggerConfiguration
{
    public LogLevel LogLevel { get; set; } = LogLevel.Warning;
    public int EventId { get; set; } = 0;
    public ConsoleColor Color { get; set; } = ConsoleColor.Yellow;
}

public class ColoredConsoleLoggerProvider : ILoggerProvider
{
    private readonly ColoredConsoleLoggerConfiguration _config;
    private readonly ConcurrentDictionary<string, ColoredConsoleLogger> _loggers = new ConcurrentDictionary<string, ColoredConsoleLogger>();

    public ColoredConsoleLoggerProvider(ColoredConsoleLoggerConfiguration config)
    {
        _config = config;
    }

    public ILogger CreateLogger(string categoryName)
    {
        return _loggers.GetOrAdd(categoryName, name => new ColoredConsoleLogger(name, _config));
    }

    public void Dispose()
    {
        _loggers.Clear();
    }
}

public class ColoredConsoleLogger : ILogger
{
    private static object _lock = new Object();
    private readonly string _name;
    private readonly ColoredConsoleLoggerConfiguration _config;

    public ColoredConsoleLogger(string name, ColoredConsoleLoggerConfiguration config)
    {
        _name = name;
        _config = config;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel == _config.LogLevel;
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        lock (_lock)
        {
            if (_config.EventId == 0 || _config.EventId == eventId.Id)
            {
                var color = Console.ForegroundColor;
                Console.ForegroundColor = _config.Color;
                Console.WriteLine($"{logLevel.ToString()} - {eventId.Id} - {_name} - {formatter(state, exception)}");
                Console.ForegroundColor = color;
            }
        }
    }
}

We need to lock the actual console output because the console itself is not really thread-safe; without the lock we would get race conditions where log entries get colored with the wrong color.

If this is done, we can start to plug the new logger into the configuration:

logging.ClearProviders();

var config = new ColoredConsoleLoggerConfiguration
{
    LogLevel = LogLevel.Information,
    Color = ConsoleColor.Red
};
logging.AddProvider(new ColoredConsoleLoggerProvider(config));

If needed, you are able to clear all the previously added logger providers. Then we call AddProvider to add a new instance of our ColoredConsoleLoggerProvider with the specific settings. We could also add some more instances of the provider with different settings.
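
For example, a sketch that colors errors and warnings differently:

logging.AddProvider(new ColoredConsoleLoggerProvider(
    new ColoredConsoleLoggerConfiguration
    {
        LogLevel = LogLevel.Error,
        Color = ConsoleColor.Red
    }));
logging.AddProvider(new ColoredConsoleLoggerProvider(
    new ColoredConsoleLoggerConfiguration
    {
        LogLevel = LogLevel.Warning,
        Color = ConsoleColor.Yellow
    }));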

This shows how to handle different log levels in different ways. You can use this to send emails on hard errors, to log debug messages to a different log sink than regular informational messages, and so on.

In many cases it doesn't make sense to write a custom logger, because there are already many good third-party loggers like elmah, log4net and NLog. In the next section I'm going to show you how to use NLog in ASP.NET Core.

Plug-in an existing Third-Party logger provider

NLog was one of the very first loggers, which was available as a .NET Standard library and usable in ASP.NET Core. NLog also already provides a Logger Provider to easily plug it into ASP.NET Core.

The next snippet shows a typical NLog.Config that defines two different sinks to log all messages in one log file and custom messages only into another file:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\dotnetconf\001-logging\internal-nlog.txt">

  <!-- Load the ASP.NET Core plugin -->
  <extensions>
    <add assembly="NLog.Web.AspNetCore"/>
  </extensions>

  <!-- the targets to write to -->
  <targets>
     <!-- write logs to file -->
     <target xsi:type="File" name="allfile" fileName="C:\git\dotnetconf\001-logging\nlog-all-${shortdate}.log"
                 layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

   <!-- another file log, only own logs. Uses some ASP.NET core renderers -->
     <target xsi:type="File" name="ownFile-web" fileName="C:\git\dotnetconf\001-logging\nlog-own-${shortdate}.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}|url: ${aspnet-request-url}|action: ${aspnet-mvc-action}" />

     <!-- write to the void aka just remove -->
    <target xsi:type="Null" name="blackhole" />
  </targets>

  <!-- rules to map from logger name to target -->
  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

We then need to add the NLog ASP.NET Core package from NuGet:

dotnet add package NLog.Web.AspNetCore

(Be sure you are in the project directory before you execute that command)

Now you only need to add NLog in the ConfigureLogging method in the Program.cs:

hostingContext.HostingEnvironment.ConfigureNLog("NLog.Config");
logging.AddProvider(new NLogLoggerProvider());

The first line configures NLog to use the previously created NLog.Config, and the second line adds the NLogLoggerProvider to the list of logging providers. Here you can add as many logger providers as you need.
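
The nice thing is that the consuming code doesn't change at all, no matter which providers are registered. A typical sketch of a controller that logs via the injected ILogger<T>:

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        // this ends up in all registered providers, e.g. in both NLog targets
        _logger.LogInformation("Home page requested.");
        return View();
    }
}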

Conclusion

The good thing about hiding the basic configuration is that it cleans up the newly scaffolded projects and keeps the actual start as simple as possible. The developer is able to focus on the actual features. But the more the application grows, the more important logging becomes. The default logging configuration is easy and works like a charm, but in production you need a persisted log to see errors from the past. So you need to add a custom logger or a more flexible logger like NLog or log4net.

To learn more about ASP.NET Core configuration have a look into the next part of the series: Customizing ASP.NET Core Part 02: Configuration.

Jürgen Gutsch: New Blog Series: Customizing ASP.NET Core

With this post I want to introduce a new blog series about things you can or maybe need to customize in ASP.NET Core. Initially this series will contain ten different topics. Maybe later I'll write some more posts about that.

The initial topics are based on my talk about Customizing ASP.NET Core. I did this talk several times in German and English, including at the .NET Conf 2018.

Unfortunately, on the .NET Conf the talk started with pretty bad audio for some reason. The first five minutes can go directly to the trash, IMHO. I also could only show 7 out of 10 demos, even though I had tried to fit all the demos into 45 minutes the day before. I'm almost sure the audio problem wasn't on my side. Via the router I disconnected almost all devices from the internet during the hour I was presenting, and everything went well in the latest tech check before the presentation.

Anyway, after five minutes the audio went a lot better and the audience was able to follow the rest of the presentation.

For this series I'm going to follow the same order as in that presentation, which is the order from bottom to top: from the server configuration parts, over the Web API topics, up to the MVC topics.

Initial series topics

Additional series topics

  • Customizing ASP.NET Core Part 11: Hosting
  • Customizing ASP.NET Core Part 12: InputFormatter

Do you want to see that talk?

If you are interested in this talk about Customizing ASP.NET Core, feel free to drop me a comment, a message via Twitter or an email. I'm able to do it remotely via Skype, Skype for Business, or on site if the travel costs are covered somehow. For free at community events like meetups or user group meetings, and fairly paid at commercial events.

Discover more possible talks on Sessionize: https://sessionize.com/juergengutsch

Golo Roden: 25 days later …

In the 10.x series up to version 10.9.0, Node.js contains a bug that causes the functions setTimeout and setInterval to stop working after 25 days. The cause is a conversion error (2^31 milliseconds amount to just under 25 days). The remedy is to update to a newer Node.js version.

Stefan Henneken: IEC 61131-3: The 'State' Pattern

State machines are used regularly, especially in automation technology. The State pattern provides an object-oriented approach that offers important advantages, particularly for larger state machines.

Most developers have already implemented state machines in IEC 61131-3, some consciously, others perhaps unconsciously. In the following, a simple example will introduce three different approaches:

  1. The CASE statement
  2. State transitions in methods
  3. The 'State' pattern

Our example describes a vending machine that dispenses a product after a coin is inserted and a button is pressed. The number of products is limited. If a coin is inserted and the button is pressed even though the machine is empty, the coin is returned.

The machine is represented by the function block FB_Machine. Inputs accept the events, and outputs expose the current state and the number of products still available. The maximum number of products is defined when the FB is declared.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton           : BOOL;
  bInsertCoin       : BOOL;
  bTakeProduct      : BOOL;
  bTakeCoin         : BOOL;    
END_VAR
VAR_OUTPUT
  eState            : E_States;
  nProducts         : UINT;    
END_VAR

UML state diagram

State machines can be represented very well as UML state diagrams.

Picture01

A UML state diagram describes an automaton that is in exactly one state of a finite set of states at any point in time.

The states in a UML state diagram are represented by rectangles with rounded corners (vertices), in other diagram types often also as circles. States can execute activities, e.g. when entering the state (entry) or when leaving it (exit). With entry / n = n – 1, the variable n is decremented when the state is entered.

The arrows between the states symbolize possible state transitions. They are labeled with the events that lead to the respective state transition. A state transition takes place when the event occurs and an optional condition (guard) is fulfilled. Conditions are specified in square brackets. This allows decision trees to be implemented.

First variant: the CASE statement

CASE statements are often used for implementing state machines. The CASE statement checks each possible state. Within the respective branches for the individual states, the conditions are evaluated. If a condition is fulfilled, the action is executed and the state variable is adjusted. To improve readability, the state variable is usually modeled as an ENUM.

TYPE E_States :
(
    eWaiting := 0,
    eHasCoin,
    eProductEjected,
    eCoinEjected
);
END_TYPE

The first variant of the state machine thus looks like this:

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

CASE eState OF
  E_States.eWaiting:
    IF (rtrigButton.Q) THEN
      ; // keep in the state
    END_IF
    IF (rtrigInsertCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
      eState := E_States.eHasCoin;
    END_IF

  E_States.eHasCoin:
    IF (rtrigButton.Q) THEN
      IF (nProducts > 0) THEN
        nProducts := nProducts - 1;
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
        eState := E_States.eProductEjected;
      ELSE
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
        eState := E_States.eCoinEjected;
      END_IF
    END_IF

  E_States.eProductEjected:
    IF (rtrigTakeProduct.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
      eState := E_States.eWaiting;
    END_IF

  E_States.eCoinEjected:
    IF (rtrigTakeCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
      eState := E_States.eWaiting;
    END_IF

  ELSE
    ADSLOGSTR(ADSLOG_MSGTYPE_ERROR, 'Invalid state', '');
    eState := E_States.eWaiting;
END_CASE

A short test shows that the FB does what it is supposed to do:

Picture02

But it also quickly becomes clear that larger applications cannot be implemented this way. Readability is completely lost after just a few states.

Example 1 (TwinCAT 3.1.4022) on GitHub

Second variant: state transitions in methods

The problem can be reduced by implementing all state transitions as methods.

Picture03

When a certain event occurs, the respective method is called.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

IF (rtrigButton.Q) THEN
  THIS^.PressButton();
END_IF
IF (rtrigInsertCoin.Q) THEN
  THIS^.InsertCoin();
END_IF
IF (rtrigTakeProduct.Q) THEN
  THIS^.CustomerTakesProduct();
END_IF
IF (rtrigTakeCoin.Q) THEN
  THIS^.CustomerTakesCoin();
END_IF

Depending on the current state, the methods execute the desired state transition and adjust the state variable:

METHOD INTERNAL CustomerTakesCoin : BOOL
IF (THIS^.eState = E_States.eCoinEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
  eState := E_States.eWaiting;
END_IF

METHOD INTERNAL CustomerTakesProduct : BOOL
IF (THIS^.eState = E_States.eProductEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
  eState := E_States.eWaiting;
END_IF

METHOD INTERNAL InsertCoin : BOOL
IF (THIS^.eState = E_States.eWaiting) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
  THIS^.eState := E_States.eHasCoin;
END_IF

METHOD INTERNAL PressButton : BOOL
IF (THIS^.eState = E_States.eHasCoin) THEN
  IF (THIS^.nProducts > 0) THEN
    THIS^.nProducts := THIS^.nProducts - 1;
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
    THIS^.eState := E_States.eProductEjected;
  ELSE                
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
    THIS^.eState := E_States.eCoinEjected;
  END_IF
END_IF

This approach also works flawlessly. However, the state machine still resides in a single function block. The state transitions are moved out into methods, but this is still a structured-programming solution that ignores the possibilities of object orientation. As a result, the source code remains hard to extend and hard to read.

Example 2 (TwinCAT 3.1.4022) on GitHub

Third variant: the 'State' pattern

A few OO design principles are helpful for implementing the State pattern:

Cohesion (= the degree to which a class has a single focused purpose) and delegation

Encapsulate each responsibility in its own object and delegate calls to these objects. One class, one responsibility!

Identify the aspects that change and separate them from those that remain constant

How should the objects be divided so that extensions to the state machine require changes in as few places as possible? So far, FB_Machine had to be adapted for every extension. Especially with large state machines on which several developers work, this is a major disadvantage.

Let's take another look at the methods CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton(). They all have a similar structure. If statements query the current state and execute the desired actions. If necessary, the current state is also adjusted. However, this approach does not scale. Every time a new state is added, several methods have to be adapted.

The State pattern distributes the state across several objects. Each possible state is represented by an FB. These state FBs contain the entire behavior for the respective state. As a result, a new state can be introduced without having to change the source code of the original function blocks.

Every action (CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton()) can be executed on every state. Thus, all state FBs have the same interface. For this reason, an interface is introduced for all state FBs:

Picture04
 
FB_Machine aggregates this interface (line 9) and delegates the method calls through it to the respective state FBs (lines 30, 34, 38 and 42).

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton            : BOOL;
  bInsertCoin        : BOOL;
  bTakeProduct       : BOOL;
  bTakeCoin          : BOOL;
END_VAR
VAR_OUTPUT
  ipState            : I_State := fbWaitingState;
  nProducts          : UINT;
END_VAR
VAR
  fbCoinEjectedState    : FB_CoinEjectedState(THIS);
  fbHasCoinState        : FB_HasCoinState(THIS);
  fbProductEjectedState : FB_ProductEjectedState(THIS);
  fbWaitingState        : FB_WaitingState(THIS);

  rtrigButton           : R_TRIG;
  rtrigInsertCoin       : R_TRIG;
  rtrigTakeProduct      : R_TRIG;
  rtrigTakeCoin         : R_TRIG;
END_VAR

rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

IF (rtrigButton.Q) THEN
  ipState.PressButton();
END_IF

IF (rtrigInsertCoin.Q) THEN
  ipState.InsertCoin();
END_IF

IF (rtrigTakeProduct.Q) THEN
  ipState.CustomerTakesProduct();
END_IF

IF (rtrigTakeCoin.Q) THEN
  ipState.CustomerTakesCoin();
END_IF

But how can the state be changed in the respective methods of the individual state FBs?

First, an instance of each state FB is declared inside FB_Machine. Via FB_init(), a pointer to FB_Machine is passed to each state FB (lines 13 to 16).

Each individual instance can be read from FB_Machine via a property. Each time, an interface pointer to I_State is returned.

Picture05

Furthermore, FB_Machine gets a method for setting the state,

METHOD INTERNAL SetState : BOOL
VAR_INPUT
  newState : I_State;
END_VAR
THIS^.ipState := newState;

as well as a method for changing the current number of products:

METHOD INTERNAL SetProducts : BOOL
VAR_INPUT
  newProducts : UINT;
END_VAR
THIS^.nProducts := newProducts;

FB_init() gets an additional input variable so that the maximum number of products can be specified at declaration.

Since the user of the state machine only needs FB_Machine and I_State, the four properties (CoinEjectedState, HasCoinState, ProductEjectedState and WaitingState), the two methods (SetState() and SetProducts()) and the four state FBs (FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState and FB_WaitingState) were declared as INTERNAL. If the FBs of the state machine reside in a compiled library, they are not visible from the outside. They do not appear in the Library Repository either. The same applies to elements declared as PRIVATE. FBs, interfaces, methods and properties that are only used within a library can thus be hidden from the user of the library.

The test of the state machine is the same in all three variants:

PROGRAM MAIN
VAR
  fbMachine      : FB_Machine(3);
  sState         : STRING;
  bButton        : BOOL;
  bInsertCoin    : BOOL;
  bTakeProduct   : BOOL;
  bTakeCoin      : BOOL;
END_VAR

fbMachine(bButton := bButton,
          bInsertCoin := bInsertCoin,
          bTakeProduct := bTakeProduct,
          bTakeCoin := bTakeCoin);
sState := fbMachine.ipState.Description;

bButton := FALSE;
bInsertCoin := FALSE;
bTakeProduct := FALSE;
bTakeCoin := FALSE;

The statement in line 15 is meant to simplify testing, since a readable text is displayed for each state.

Example 3 (TwinCAT 3.1.4022) on GitHub

At first glance, this variant looks quite elaborate, since considerably more FBs are required. But distributing the responsibilities across individual FBs makes this approach very flexible and significantly more robust for extensions.

This becomes apparent when the individual state FBs grow very large. A state machine could, for example, control a complex process in which each state FB contains further sub-processes. Splitting the logic across several FBs is what makes such a program maintainable in the first place, especially when several developers are involved.

For very small state machines, applying the State pattern is not necessarily the best option. I personally also like to fall back on the CASE-statement solution.

Alternatively, IEC 61131-3 offers a further way of implementing state machines with Sequential Function Chart (SFC). But that is another story.

Definition

In the book “Design Patterns: Elements of Reusable Object-Oriented Software”, Gamma, Helm, Johnson and Vlissides express this as follows:

“Allow an object to alter its behavior when its internal state changes. The object will appear to change its class.”

Implementation

A common interface (State) is defined that contains a method for each state transition. For each state, a class is created that implements this interface (State1, State2, …). Since all states thus have the same interface, they are interchangeable.

The object whose behavior is to change depending on the state (Context) aggregates (encapsulates) such a state object. This object represents the current internal state (currentState) and encapsulates the state-dependent behavior. The Context delegates calls to the currently set state object.

The state changes can be carried out by the concrete state objects themselves. To do this, each state object needs a reference to the Context (context). Furthermore, the Context must offer a method for changing the state (setState()). The successor state is passed to setState() as a parameter. For this purpose, the Context exposes all possible states as properties.

UML diagram


Picture06

Applied to the example above, the mapping is as follows:

Context → FB_Machine
State → I_State
State1, State2, … → FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState, FB_WaitingState
Handle() → CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin(), PressButton()
GetState1, GetState2, … → CoinEjectedState, HasCoinState, ProductEjectedState, WaitingState
currentState → ipState
setState() → SetState()
context → pMachine

Application examples

A TCP communication stack is a good example for the use of the State pattern. Each state of a connection socket can be represented by corresponding state classes (TCPOpen, TCPClosed, TCPListen, …). Each of these classes implements the same interface (TCPState). The Context (TCPConnection) holds the current state object. All actions are passed to the respective state class via this state object. The state class processes the actions and switches to a new state if necessary.

Text parsers are also state-based: the meaning of a character usually depends on the previously read characters.

Christian Binder [MS]: New role at Microsoft

After refreshing and reforming the team of the Microsoft Technology Center in Munich, which caused some silence on this blog 🙁, I decided to move to a more engineering-focused unit in Microsoft - the Commercial Software Engineering Group in EMEA - and I will continue working on Azure DevOps 🙂

Golo Roden: Comparisons in JavaScript: == or ===?

JavaScript has two comparison operators, == and ===. The first one is not type-safe, the second one very much is, which is why you should always use the second one.

Code-Inside Blog: Migrate a .NET library to .NET Core / .NET Standard 2.0

I have a small spare time project called Sloader and I recently moved the code base to .NET Standard 2.0. This blogpost covers how I moved this library to .NET Standard.

Uhmmm… wait… what is .NET Standard?

If you have been living under a rock in the past year: .NET Standard is a kind of “contract” that allows the library to run under all .NET implementations like the full .NET Framework or .NET Core. But hold on: The library might also run under Unity, Xamarin and Mono (and future .NET implementations that support this contract - that’s why it is called “Standard”). So - in general: This is a great thing!

Sloader - before .NET Standard

Back to my spare time project:

Sloader consists of three projects (Config/Result/Engine) and targeted the full .NET Framework. All projects were typical library projects. All components were tested with xUnit and built via Cake. The configuration uses YAML, and the main work is done via the HttpClient.

To summarize it: The library is a not too trivial example, but in general it has pretty low requirements.

Sloader - moving to .NET Standard 2.0

The blogpost from Daniel Crabtree, “Upgrading to .NET Core and .NET Standard Made Easy”, was a great resource, and if you want to migrate you should check out his blogpost.

The best advice from the blogpost: Just create new .NET Standard projects and xcopy your files to the new projects.

To migrate the projects to .NET Standard, I really just needed to delete the old .csproj files and copy everything into new .NET Standard library projects.

After some fine-tuning and NuGet package reference updates, everything compiled.

This GitHub PR shows the result of the migration.

Problems & Aftermath

In my library I still used the old way to access configuration via the ConfigurationManager class (referenced via the official NuGet package). This API is not supported on every platform (e.g. Azure Functions), so I needed to tweak those code parts to use System.Environment variables (this is OK in my example, but there are other options as well).
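
The change itself is small. A sketch of the idea, with a hypothetical "ApiKey" setting:

// before (full .NET Framework only):
// var apiKey = ConfigurationManager.AppSettings["ApiKey"];

// after (works on every platform that supports .NET Standard):
var apiKey = Environment.GetEnvironmentVariable("ApiKey");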

Everything else “just worked” and it was a great experience. I tried the same thing with .NET Core 1.0 and it failed horribly, but this time the migration was more or less painless.

.NET Portability Analyzer

If you are not sure whether your code works under .NET Standard or .NET Core, just install the .NET Portability Analyzer.

This handy tool will give you an overview of which parts might run without problems under .NET Standard or .NET Core.

.NET Standard 2.0 and .NET Framework

If you are still targeting the full framework, make sure you use at least .NET Framework version 4.7.2. In theory .NET Standard 2.0 was supposed to work under .NET 4.6.1, but it seems that this didn't end too well.

Hope this helps and encourages you to try a migration to a more modern stack!

Holger Schwichtenberg: Performance improvements in ASP.NET Core

ASP.NET Core MVC and ASP.NET Core WebAPI are 12 to 24 percent more performant than their ASP.NET predecessors, as measurements by the Dotnet-Doktor show.

Norbert Eder: DELL XPS 13: Enabling the function keys

In the default setting (probably fine for most people, but really awful for software developers), the function keys are only the secondary assignment on the keyboard; the primary assignment is taken by the multimedia keys. If you work with the function keys, you won't get along with this at all, especially because it also breaks the way keyboards have behaved so far.

Fortunately, this can be changed in the BIOS. Below you can see the default setting.

Dell XPS 13 BIOS

Simply select Lock Mode Enable/Secondary in the section POST Behavior/Fn Lock Options, and the function keys can be used again without pressing Fn.


Code-Inside Blog: Improving Code


TL;DR;

Things I learned:

  • long one-liners are hard to read and understand
  • split up your code into small, easy to understand functions
  • less “plumbing” (read: infrastructure code) is better
  • get indentation right
  • “Make it correct, make it clear, make it concise, make it fast. In that order.” Wes Dyer

Why should I bother?

Readable code is:

  • easier to debug
  • fast to fix
  • easier to maintain

The problem

Recently I wanted to implement an algorithm for a project we are doing. The goal was to create a so-called “Balanced Latin Square”; we used it to prevent ordering effects in user studies. You can find a little bit of background here and a nice description of the algorithm here.

It’s fairly simple, although it is not obvious how it works just by looking at the code. The function takes an integer as an argument and returns a Balanced Latin Square. For example, a “4” would return this matrix of numbers:

1 2 4 3 
2 3 1 4 
3 4 2 1 
4 1 3 2 

And there is a little twist: if your number is odd, you need to reverse every row and append the reversed rows to your result.

After I created my implementation, I had an idea on how to simplify it. At least I thought it was simpler ;)

First attempt - Loops

Based on the description and a Python version of that algorithm, I created a classical (read “imperative”) implementation.

So this is the C# Code:

public List<List<String>> BalancedLatinSquares(int n)
{
    var result = new List<List<String>>() { };
    for (int i = 0; i < n; i++)
    {
        var row = new List<String>();
        for (int j = 0; j < n; j++)
        {
            var cell = ((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n;
            cell++; // start counting from 1
            row.Add(cell.ToString());
        }
        result.Add(row);
    }
    if (n % 2 == 1)
    {
        var reversedResult = result.Select(x => x.AsQueryable().Reverse().ToList()).ToList();                
        result.AddRange(reversedResult);
    }
    return result;
}

I also wrote some simple unit tests to ensure this works. But in the end, I really didn’t like this code. It contains two nested loops and a lot of plumbing code. There are four lines alone just to create the result object (list) and to add the values to it. Recently I looked into functional programming, and since C# also has some functional-inspired features, I tried to improve this code with some functional goodness :)

Second attempt - Lambda Expressions

public List<List<String>> BalancedLatinSquares(int n)
{
    var result = Enumerable.Range(0, n)
        .Select(i =>
                Enumerable.Range(0, n).Select(j => ((((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n) + 1).ToString()).ToList()
            )
        .ToList();

    if (n % 2 == 1)
    {
        var reversedResult = result.Select(x => x.AsQueryable().Reverse().ToList()).ToList();
        result.AddRange(reversedResult);
    }
    return result;
}

This is the result of my attempt to use some functional features. And hey, it is much shorter, therefore it must be better, right? Well, I posted a screenshot of both versions on Twitter and asked which one people prefer. As it turned out, a lot of folks actually preferred the loop version. But why? Looking back at my code, I saw two problems in this line:

Enumerable.Range(0, n).Select(j => ((((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n)+1).ToString()).ToList()

  • I squeezed a lot of code into this one-liner. This makes it harder to read and therefore harder to understand.
  • Another issue is that I omitted descriptive variable names since they are not needed anymore. Oh, and I removed the only comment I wrote, since this comment would not fit in the one line of code :)

So, shorter is not always better.

Third attempt - better Lambda Expressions

The smart folks on Twitter had some great ideas about how to improve my code.

The first step was to get rid of the unholy one-liner. You can - and should - always split up your code into smaller, meaningful code blocks. I pulled out the calculateCell function, and out of that I also extracted an isEven function. The nice thing is that the function names also work as a kind of documentation of what's going on.

By returning IEnumerable instead of lists, I was able to remove some .ToList() calls. Also, I was able to shorten the code that creates the reversedResult.

Another simple step to improve readability is to get line indentation right. Personally, I don’t care which indentation style people are using, as long as it’s used consistently.

public static IEnumerable<IEnumerable<int>> GenerateBalancedLatinSquares(int n)
{
    bool isEven(int i) => i % 2 == 0;
    int calculateCell(int j, int i) => ((isEven(j) ? n - j / 2 : j / 2 + 1) + i) % n + 1;

    var result = Enumerable
                    .Range(0, n)
                    .Select(row =>
                        Enumerable
                            .Range(0, n)
                            .Select(col => calculateCell(col, row))
                    );

    if (!isEven(n))
    {
        var reversedResult = result.Select(x => x.Reverse());
        result = result.Concat(reversedResult);
    }
    return result;
}

I think there is room for further improvement. For the calculateCell function I am using the ?: conditional operator; it allows you to write very compact code, but on the other hand it's also harder to read. If you replaced it with an if statement, you would need more lines of code, but you would also have more space to add comments. Functional languages like Scala, F#, and Haskell provide a neat match expression that could help here.
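
For illustration, this is how calculateCell could look with an if statement instead of the conditional operator (a sketch, assuming the same surrounding local function setup as above):

int calculateCell(int j, int i)
{
    int start;
    if (isEven(j))
    {
        // even columns count down from n
        start = n - j / 2;
    }
    else
    {
        // odd columns count up from 1
        start = j / 2 + 1;
    }
    return (start + i) % n + 1;
}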

Extra: How does this algorithm look in other languages?

Python

def balanced_latin_squares(n):
    # // keeps the division integral in Python 3
    l = [[((j // 2 + 1 if j % 2 else n - j // 2) + i) % n + 1 for j in range(n)] for i in range(n)]
    if n % 2:  # Repeat reversed for odd n
        l += [seq[::-1] for seq in l]
    return l

I took this sample from Paul Grau.

Haskell

Thank you Carsten

Holger Schwichtenberg: Which file system and printer shares exist, and who can use them?

A file system or printer share is quickly created and just as quickly forgotten. To make sure there are no orphaned shares or shares with too many access rights, the Dotnet-Doktor has written a PowerShell script.

Code-Inside Blog: Easy way to copy a SQL database with Microsoft SQL Server Management Studio (SSMS)

How to copy a database on the same SQL server

The scenario is pretty simple: We just want a copy of our database, with all the data and the complete scheme and permissions.

Step 1: Make a backup of your source database

Click on the desired database and choose “Backup” under tasks.


Step 2: Use copy-only or a full backup

In the dialog you may choose a “copy-only” backup. With this option the regular backup sequence will not be disturbed.


Step 3: Use “Restore” to create a new database

This is the most important point here: to avoid fighting against database file namings, use the “Restore” option. Don’t create a database manually - that is part of the restore operation.


Step 4: Choose the copy-only backup and choose a new name

In this dialog you can name the “copy” database and choose the copy-only backup from the source database.


Now click ok and you are done!

Behind the scenes

This restore operation works way better for copying a database than overwriting an existing database, because the restore operation will adjust the file names.


Further information

I’m not a DBA, but when I follow these steps I normally have nothing to worry about if I want a 1:1 copy of a database. This can also be scripted, but then you may need to worry about filenames.

This stackoverflow question is full of great answers!

Hope this helps!

Golo Roden: An enum for JavaScript

JavaScript has no enum data type. Using an object whose properties are assigned numeric values, a workaround is easy to build. In fact, however, this is hardly better than using hard-wired strings.

Golo Roden: No bind for lambda expressions

Lambda expressions simplify the handling of this in JavaScript, but they come with some peculiarities that can be unexpected. One of them is the missing ability to rebind the this value of a lambda expression.

Jürgen Gutsch: Live streaming ideas

With this post, I'd like to share some ideas about two live streaming shows with you. It would be cool to get some feedback, especially from the German-speaking readers. The first idea is about a German-speaking .NET Developer Community Standup, and the second one is about a live coding stream (English or German), both hosted on Google Hangouts.

A German-speaking .NET Developer Community Standup

Since the beginning of the ASP.NET Community Standup, I have watched this show more or less regularly. I think I missed only two or three shows. Because of the different time zone, it is almost impossible to watch the live stream. Anyway, I really like the format of that show.

Also, for a few years now the number of user group attendees has been decreasing. In my user group sometimes only two or three attendees show up, even if we have a lot more registrations via Meetup. We (Olivier Giss and I) have kinda fun hosting the user group, but it is also hard to put much effort into it for just a handful of loyal attendees. For a while now we have recorded the sessions using Skype for Business or Google Hangouts and pushed them to YouTube. This gives some more folks the chance to see the talks. We thought a lot about the reasons and tried to change some things to get more attendees, but that didn't really work.

This is the reason why I have been thinking aloud about a .NET Developer Community Standup for the German-speaking region (Germany, Austria and Switzerland) for months.

I'd like to find two more people to join the team to host the show. It would be cool to have a person from Austria as well as one from Switzerland. Since I'm a Swiss MVP, I could also cover the Swiss part on behalf ;-) In that case I would like to have another person from Germany. One host per country would be cool.

Three hosts is a nice number, and it wouldn't be necessary for all hosts to be available every time we do a live stream. Anyone interested in joining the team?

To keep it simple, I'd also use Google Hangouts to stream the show, and it is not necessary to have high-end streaming equipment. A good headset and a good internet connection should be enough.

In the show I would like to go through some interesting community and technology news, talk about some random stuff, and also invite special guests who can show us things they did or who would like to talk about special topics. This should be a laid-back show about interesting technology and community stuff. I'd also like to give community leads the chance to talk about their work and their events.

What do you think about that? Are you interested?

If yes, I would set up a GitHub repo to collect ideas and topics to talk about.

Live Coding via Live Stream on Google Hangouts

Another idea is inspired by Jeff Fritz's live stream on Twitch called "Fritz and Friends". The recorded streams are published to YouTube afterwards. I really like this live stream, even if it's a completely different kind of video to watch. Jeff is permanently in discussion with the users in the chat while working on his projects. This is kinda weird and makes the show a little nervous, but it is also really interesting. The really cool thing is that he accepts pull requests from his audience and discusses their changes with the audience while working on his project.

I would do such a live stream as well; there are a few projects I would like to work on:

  • LightCore 2.0
    • An alternative DI container for .NET and .NET Core projects
    • Almost done, but needs to be finalized.
    • Maybe you folks want to add more features or some optimizations
  • Working on the GraphQL middleware for ASP.NET Core
  • Working on health checks for ASP.NET and ASP.NET Core
    • Including a health check application provided in the same way IdentityServer is provided to ASP.NET projects: mainly as a single but extendable library and an optional UI to visualize the health of the connected services.
  • Working on a developer community platform like the portal Microsoft planned to release last year?
    • Unfortunately Microsoft retired that project. It would make more sense anyway, if this project is built and hosted by the community itself.
    • So this would be a great way to create such a developer community platform

Maybe it also makes sense to invite a special guest to talk about specific topics while working on the project, e.g. inviting Dominick Baier to implement authentication into the developer community platform.

What if I do the same thing? Are you interested? What would be the best language for that kind of live stream?

If you are interested, I would also set up a GitHub repo to collect ideas and topics to talk about, and I would set up additional repos per project.

What do you think?

Do you like these ideas? Do you have any other idea? Please drop me a comment and share your thoughts :-)

Golo Roden: Enumerating objects in JavaScript

JavaScript has no forEach loop for objects. Using modern language features such as the Object.entries function and the for-of loop, however, such a loop can be recreated easily and elegantly.

Uli Armbruster: Microservices don't like thinking in classic entities

I recently had a very special aha moment in a workshop with Udi Dahan, CEO of Particular. His example revolved around the classic entity of a customer.

Implementing microservices means cutting business concerns cleanly and packing them into self-contained silos (or pillars). Each silo must own the data on which it implements the associated business processes. So far, so good. But how can this be accomplished for a customer that is classically modeled as shown in the screenshot? Different properties are needed or changed by different microservices.

If the same entity is used in all silos, there has to be a corresponding synchronization between the microservices. This has considerable effects on scalability and performance. In an application with frequent parallel changes to one entity, business processes will fail more often or, in the worst case, lead to inconsistencies.

Classic customer entity

 

Udi suggests the following modeling:

New modeling of a customer

The customer is modeled by independent entities

To identify which data belongs together, Udi suggests an interesting approach:

Ask the business department whether changing one property has an effect on another property.

Would changing the last name influence the price calculation? Or the type of marketing?

Now the problem of aggregation still has to be solved, i.e. when I want to display different data from different microservices in one view. Classically, there would now be a table with the columns

 

ID_Kunde ID_Kundenstamm ID_Bestandskundenmarketing ID_Preiskalkulation

 

But this leads to two problems:

  1. The table must be extended every time a new microservice is added.
  2. If a microservice covers the same functionality in the form of different data, several columns would have to be added per microservice and NULL values allowed.

An example for point 2 would be a microservice covering payment methods. Initially there were, for example, only credit card and direct debit. Then PayPal followed, and a short time later Bitcoin. The microservice would have several tables holding the individual data for each payment method. In the aggregation table shown above, however, a column would have to be filled for every payment method the customer uses. If he doesn't use it, NULL would be written. You can already tell: that stinks.

Another approach is much better suited. You can find out which one it is and how to implement it technically in Particular's GitHub repository.

 

Golo Roden: An asynchronous 'map' for JavaScript

The 'map' function in JavaScript always works synchronously and has no asynchronous counterpart. But since 'async' functions are transformed by the compiler into synchronous functions that return promises, 'map' can be combined with 'Promise.all' to achieve the desired effect.

Jürgen Gutsch: Configuring HTTPS in ASP.NET Core 2.1

Finally HTTPS gets into ASP.NET Core. It was there before back in 1.1, but was kinda tricky to configure. It was available in 2.0, but not configured by default. Now it is part of the default configuration and pretty visible and present to the developers who create a new ASP.NET Core 2.1 project.

So the title of this blog post is pretty much misleading, because you don't need to configure HTTPS; it already is configured. So let's have a look at how it is configured and how it can be customized. First create a new ASP.NET Core 2.1 web application.

Did you already install the latest .NET Core SDK? If not, go to https://dot.net/ to download and install the latest version for your platform.

Open a console and cd into your favorite location to play around with new projects. It is C:\git\aspnet\ in my case.

mkdir HttpSecureWeb && cd HttpSecureWeb
dotnet new mvc -n HttpSecureWeb -o HttpSecureWeb
dotnet run

These commands will create and run a new application called HttpSecureWeb. And you will see HTTPS for the first time in the console output when running a newly created ASP.NET Core 2.1 application:

There are two different URLs Kestrel is listening on: https://localhost:5001 and http://localhost:5000

If you go to the Configure method in the Startup.cs, there are some new middlewares used to prepare this web application to use HTTPS:

In the Production and Staging environment modes there is this middleware:

app.UseHsts();

This enables HSTS (HTTP Strict Transport Security), a mechanism that helps to avoid man-in-the-middle and protocol downgrade attacks. It tells the browser to access this host only via HTTPS for a specific time range; until that time range ends, the browser will refuse to talk plain HTTP to the host. (More about HSTS)
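
The behavior of this middleware can be tweaked in the ConfigureServices method. A short sketch (the values are just examples):

services.AddHsts(options =>
{
    options.MaxAge = TimeSpan.FromDays(365);
    options.IncludeSubDomains = true;
});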

The next new middleware redirects all requests without HTTPS to use the HTTPS version:

app.UseHttpsRedirection();

If you call http://localhost:5000, you get redirected immediately to https://localhost:5001. This makes sense if you want to enforce HTTPS.
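
This middleware can also be configured in ConfigureServices, e.g. to use a temporary redirect and an explicit port (again, the values are just examples):

services.AddHttpsRedirection(options =>
{
    options.RedirectStatusCode = StatusCodes.Status307TemporaryRedirect;
    options.HttpsPort = 5001;
});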

So from the ASP.NET Core perspective, everything is done to run the web using HTTPS. Unfortunately, the certificate is missing. For production you need to buy a valid trusted certificate and install it in the Windows certificate store. For development, you are able to create a development certificate using Visual Studio 2017 or the .NET CLI. VS 2017 creates a certificate for you automatically.

Using the .NET CLI tool "dev-certs" you are able to manage your development certificates, like exporting them, cleaning all development certificates, trusting the current one and so on. Just type the following command to get more detailed information:

dotnet dev-certs https --help

On my machine I trusted the development certificate to not get the ugly error screen in the browser about an untrusted certificate and an insecure connection every time I want to debug an ASP.NET Core application. This works quite well:

dotnet dev-certs https --trust

This command trusts the development certificate by adding it to the certificate store, or to the keychain on Mac.

On Windows you should use the certificate store to register HTTPS certificates. This is the most secure way on Windows machines. But I also like the idea of storing the password-protected certificate directly in the web folder or somewhere on the web server. This makes it pretty easy to deploy the application to different platforms, because Linux and Mac use different ways to store certificates. Fortunately there is a way in ASP.NET Core to create an HTTPS connection using a certificate file stored on the hard drive. ASP.NET Core is completely customizable. If you want to replace the default certificate handling, feel free to do it.

To change the default handling, open the Program.cs and take a quick look at the code, especially at the method CreateWebHostBuilder:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>();

This method creates the default WebHostBuilder. It has a lot of stuff preconfigured, which works great in most scenarios. But it is possible to override all of the default settings here and to replace them with custom configurations. We need to tell the Kestrel web server which host and port it needs to listen on, and we are able to configure the ListenOptions for specific ports. In these ListenOptions we can use HTTPS and pass in the certificate file and a password for that file:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps("certificate.pfx", "topsecret");
            });
        })
        .UseStartup<Startup>();

Usually we would read these values from a configuration file or from environment variables, instead of hardcoding them.
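
A sketch of what that could look like, assuming the path and the password are stored under the hypothetical configuration keys Certificate:Path and Certificate:Password:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel((context, options) =>
        {
            // read the certificate file and password from the configuration
            var certificateFile = context.Configuration["Certificate:Path"];
            var certificatePassword = context.Configuration["Certificate:Password"];

            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps(certificateFile, certificatePassword);
            });
        })
        .UseStartup<Startup>();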

Be sure the certificate is password protected using a long password or, even better, a passphrase. Also be sure not to store the password or the passphrase in a configuration file. In development you should use the user secrets to store such secret data, and in production the Azure Key Vault could be an option.
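
During development, such a secret can be set from the command line, assuming user secrets are initialized for the project (the key name matches the sketch above and is just an example):

dotnet user-secrets set "Certificate:Password" "topsecret"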

Conclusion

I hope this helps to give you a rough overview of the usage of HTTPS in ASP.NET Core. This is not really a deep dive, but it tries to explain what the new middlewares are good for and how to configure HTTPS for different platforms.

BTW: I just saw in the blog post about the HTTPS improvements and HSTS in ASP.NET Core that there is a way to store the HTTPS configuration in the launchSettings.json. This is an easy way to pass environment variables to the application on startup. The samples there also show how to add the certificate password to this settings file. Please never ever do this! Such a file easily ends up in a source code repository or gets shared some other way, and then the password inside is shared as well. Please use different mechanisms to set passwords in an application, like the already mentioned user secrets or the Azure Key Vault.

Uli Armbruster: Sources for Defensive Design and Separation of Concerns are now online

The source code for my talks at the Karlsruher Entwicklertage and the DWX is now online:

  • The Super Mario kata with a focus on defensive design can be found here.
  • The checksum kata with a focus on separation of concerns can be found here.

On both pages you will also find the links to the PowerPoint slides. In July 2018 I will additionally publish the code of the checksum kata in the form of iterations and release both talks as YouTube videos.

Defensive Design talk at the DWX

If you want to use parts of it in your own talks, if you would like a training on the topic, or if you want me to give a talk in your community, contact me via the channels listed on GitHub.

Stefan Henneken: IEC 61131-3: The generic data type T_Arg

In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be effectively used. In the example described, a function compares two variables to determine whether the data type, data length and content are exactly the same. Instead of implementing a separate function for each data type, the same requirements can be implemented much more elegantly with only one function using data type ANY.

Some time ago, I had a similar task. A method should be developed that accepts any number of parameters. Both the data type and the number of the parameters were arbitrary.

During my first attempt to find a solution, I tried to use a variable-length array of type ARRAY [*] OF ANY. However, variable-length arrays can only be used as VAR_IN_OUT, and the data type ANY only as VAR_INPUT (see also IEC 61131-3: Arrays with variable length). This approach was therefore ruled out.

As an alternative to data type ANY, structure T_Arg is also available. T_Arg is declared in the TwinCAT library Tc2_Utilities and, in contrast to ANY, is also available at TwinCAT 2. The structure of T_Arg is similar to the structure used for the data type ANY (see also The wonders of ANY).

TYPE T_Arg :
STRUCT
  eType   : E_ArgType   := ARGTYPE_UNKNOWN; (* Argument data type *)
  cbLen   : UDINT       := 0;               (* Argument data byte length *)
  pData   : UDINT       := 0;               (* Pointer to argument data *)
END_STRUCT
END_TYPE

T_Arg can be used anywhere, including in the VAR_IN_OUT section.

The following function adds any number of values, whose data types can also differ. The result is returned as LREAL.

FUNCTION F_AddMulti : LREAL
VAR_IN_OUT
  aArgs : ARRAY [*] OF T_Arg;
END_VAR
VAR
  nIndex : DINT;
  aUSINT : USINT;
  aUINT  : UINT;
  aINT   : INT;
  aDINT  : DINT;
  aREAL  : REAL;
  aLREAL : LREAL;
END_VAR

F_AddMulti := 0.0;
FOR nIndex := LOWER_BOUND(aArgs, 1) TO UPPER_BOUND(aArgs, 1) DO
  CASE (aArgs[nIndex].eType) OF
    E_ArgType.ARGTYPE_USINT:
      MEMCPY(ADR(aUSINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUSINT;
    E_ArgType.ARGTYPE_UINT:
      MEMCPY(ADR(aUINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUINT;
    E_ArgType.ARGTYPE_INT:
      MEMCPY(ADR(aINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aINT;
    E_ArgType.ARGTYPE_DINT:
      MEMCPY(ADR(aDINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aDINT;
    E_ArgType.ARGTYPE_REAL:
      MEMCPY(ADR(aREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aREAL;
    E_ArgType.ARGTYPE_LREAL:
      MEMCPY(ADR(aLREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aLREAL;
  END_CASE
END_FOR

However, calling the function is somewhat more complicated than with the data type ANY.

PROGRAM MAIN
VAR
  sum    : LREAL;
  args   : ARRAY [1..4] OF T_Arg;
  a      : INT := 4567;
  b      : REAL := 3.1415;
  c      : DINT := 7032345;
  d      : USINT := 13;
END_VAR

args[1] := F_INT(a);
args[2] := F_REAL(b);
args[3] := F_DINT(c);
args[4] := F_USINT(d);
sum := F_AddMulti(args);

The array passed to the function must be initialized first. The library Tc2_Utilities contains helper functions that convert a variable into a structure of type T_Arg (F_INT(), F_REAL(), F_DINT(), …). The function for adding the values has only one VAR_IN_OUT variable of type ARRAY [*] OF T_Arg.

The data type T_Arg is used, for example, in the function block FB_FormatString() or in the function F_FormatArgToStr() of TwinCAT. The function block FB_FormatString() can replace up to 10 placeholders in a string with values of PLC variables of type T_Arg (similar to fprintf in C).

An advantage of ANY is the fact that this data type is defined by the IEC 61131-3 standard, whereas T_Arg is only available through the TwinCAT library Tc2_Utilities.

Even if the generic data types ANY and T_Arg do not correspond to the generics in C# or the templates in C++, they still support the development of generic functions in IEC 61131-3. These can now be designed in such a way that the same function can be used for different data types and data structures.

Norbert Eder: One tech stack for microservices is enough

There are numerous books about microservices, and even more articles. In them you will find many experiences, tips, opinions and plenty of advantages. One advantage that is frequently cited is the freedom to choose the language/platform used. That is what I want to take a closer look at in this post.

The basic claim

A microservice is self-contained. It runs in a suitable environment, alone or together with other microservices. Each microservice can be developed for a different platform/programming language (C++, Java, .NET, Haskell, Rust, etc.).

The advantages:

  • The implementation can be done with the platform/programming language that maps the requirements best.

  • Software developers no longer have to learn the same platform/programming language, but can implement a microservice with the knowledge they already have. That saves time and offers advantages when looking for new developers.

What do these advantages mean in reality?

Feasibility and closeness to reality

Not having to consider a particular platform when looking for employees sounds good, especially in times when software developers are contested like never before.

For the company (and consequently also for the employees), however, a different picture emerges:

  • New platforms require additional know-how. A development machine is set up quickly, but production operation has different requirements regarding administration, configuration, security and so on. Additional costs and risks arise.

  • Who maintains and continues the development if the original developer is no longer available? Due to illness, an accident or a resignation, the company can face considerable effort caused by the loss of know-how. Not infrequently, software is redeveloped because that ends up being cheaper overall.

  • To be able to handle topics such as performance optimization, deep knowledge of the technology stack is required. Building up the same depth across several platforms, and staying up to date at the same time, is an elaborate and therefore cost-intensive affair, and in the long run usually not feasible.

Looking at the distribution of companies in Austria: according to the WKO, 99.8% of companies in 2017 were SMEs (0-249 employees). The picture in Germany is similar; according to Statista, it is 99.6% there.

International corporations have an advantage due to the resources available to them. Dedicated software companies at the upper end of that range, which are also on the market with a single product, might be able to handle it as well. Everyone else should keep their hands off.

Conclusion

Not every claimed advantage turns out to be one. Microservices have their place, and being able to build individual services on different platforms can indeed be useful and helpful in some cases. However, this has to be questioned in terms of economic viability, and it must by no means be a decision that every developer in the company can make freely.

The post One tech stack for microservices is enough first appeared on Norbert Eder.

David Tielke: #DWX2018 - Contents of my sessions, workshops and TV and radio interviews

From June 25 to June 28, this year's Developer Week 2018 took place in Nuremberg once again. For four days, the NCC Ost of the Nuremberg exhibition center opened its doors to welcome thousands of knowledge-hungry developers, who could educate themselves in sessions and workshops all around the topic of software development. As every year, I was once again responsible, as track chair, for the content of the two tracks software quality and software architecture. Besides shaping the program, I was also allowed to get active myself and pass on my knowledge to the attendees in a total of four sessions, one evening event and one workshop.

Here are my contributions at a glance:

  • Session: Effektive Architekturen mit Workflows
  • Session: Architektur für die Praxis 2.0 (substitute talk)
  • Session: Metriken - wie gut ist Ihre Software?
  • Session: Testing Everything
  • Evening event: SmartHome - Das Haus der Zukunft!
  • Workshop: Architektur für die Praxis 2.0

TV interview BR / ARD:

In connection with my talk on the topic of "Smarthome" on Monday evening, I was interviewed about it in advance by various media outlets such as BR, ARD, Nürnberger Nachrichten and Radio Gong. The BR segment is still available in their media library.

Contents of my workshops and sessions

As discussed with the attendees in my sessions, I am now providing all relevant materials from my sessions here. These contents include my code samples from Visual Studio, my notes from OneNote as PDF, and above all my articles from my dotnetpro column "Davids Deep Dive" on this topic:

The password for both areas was announced at the conference and can alternatively be requested from me by email.

See you at Developer Week 2019!

Next year, the Developer Week will open its doors again, and I am already looking forward to it! I would like to take this opportunity to thank all attendees of my sessions once again for the great atmosphere and the interesting discussions; it was great fun once more and, as always, an honor. A huge thank you also goes to the organizer, Developer Media, who once again put together an even better event than the year before. See you next year!

Golo Roden: Shorthand syntax for the console

The shorthand syntax of ES2015 allows objects whose values correspond to variables of the same name to be defined in a simplified way. This syntax can be put to use for console output to get more readable and traceable output.
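
A minimal sketch: wrapping the variables in an object literal makes the console print their names along with their values.

const host = 'localhost';
const port = 3000;

console.log(host, port);     // localhost 3000
console.log({ host, port }); // { host: 'localhost', port: 3000 }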

Jürgen Gutsch: Four times in a row

One year later, it is July 1st again and I got the email from the Global MVP Administrator: I got the MVP award for the fourth time in a row :)

I'm pretty proud of and honored by that, and I'm really happy to be part of the great MVP community for one more year. I'm also looking forward to the Global MVP Summit next year, where I'll meet all the other MVPs from around the world.

Still not really a fan-boy...!?

I'm also proud of being an MVP because I never called myself a Microsoft fan-boy. And sometimes I also criticize tools and platforms built by Microsoft (I feel like a bad boy). But I like most of the development tools and frameworks built by Microsoft, and I really like the new and open Microsoft, the way Microsoft now supports more than its own technologies and platforms. I like using VSCode, TypeScript and webpack to create NodeJS applications. I like VSCode and .NET Core on Linux to build applications on a different platform than Windows. I also like to play around with UWP apps on Windows for IoT on a Raspberry PI.

There are many more possibilities, many more platforms and many more customers to reach using the current Microsoft development stack. And it is really fun to play with it, to use it in real projects, to write about it in .NET magazines and in this blog, and to talk about it in user groups and at conferences.

In the last year of being an MVP, I also learned that it is kind of fun to contribute to Microsoft's open source projects, to be a part of those projects and to see my own work in them. If you like open source as well, contribute to the open source projects. Make the projects better, make the documentation better.

I also need to say Thanks

But I wouldn't have been honored again without such a great development community. I wouldn't continue to contribute to the community without the positive feedback and without all the great people. This is why the biggest "Thank You" goes to the development community :)

And like last year, I also need to say "Thank You" to my great family (my lovely wife and my three kids), who support me in spending so much time contributing to the community. I also need to say thanks to the YooApplications AG, my colleagues and my boss for supporting me and allowing me to use part of my working time to contribute to the community.

Norbert Eder: The importance of automated software testing

Much has already been written about automated software testing – by me as well. Still, there are so many developers out there who say automated tests only cost money and time, and so they would rather put their time into building more functionality. Because that is what the customer needs – so they think. Some developers even think they are godlike and their code can't have any defects, so there is no need for automated software testing. My experience tells another story though …

Automated testing is so expensive

Many years ago, I was developing a SaaS solution with a team of developers. We created a monolithic service with many features, but we wrote no automated tests. Every three or four months there was a "testing phase": everyone in the company (~ ten people) had to test the software all day long. This lasted for about two weeks and resulted in lists of hundreds of bugs. The following few weeks were used to fix all those bugs, and no new features were implemented.

Some years later, we decided to move to a new technology for our user interface (ASP.NET web forms to AngularJS). As part of this change my team got the chance to transform the monolithic application into several microservices. As the lead of this team, I promoted the implementation of automated tests. We wrote unit tests as well as integration tests for our web API endpoints. Even our Angular front-end code had been tested. Each team member had to run all tests before committing changes to our source control.

What do you think was the result of introducing automated tests?

We only had three to four bugs a month reported by colleagues or customers and could focus on building new features and a better user experience, improving performance and doing research. No more testing and bug-fixing weeks. That saved us so much money! Unbelievable. And we really loved it.

But writing tests costs time/money

This is the number one argument. Of course it does! But the more tests you write and the better you get at it, the less time you need. You will not only learn how to write tests very fast, you'll also gain a sense for what is important to test.

So, writing unit and/or integration tests costs money. Right now. However, it saves money in the future. Neglecting to write tests saves money now, but costs dearly in the future. Let us have a look into the details.

Costs for writing tests

Again, writing tests takes time. However, every test helps to avoid failures in the future. Of course, bugs can still arise and you have to fix them, but you can do that in a test-driven manner: write a test which assumes the correct behavior, then fix the bug until the test completes successfully. This failure will not bother you anymore in the future.

Depending on your software, your test coverage and the types of automated tests used, you may still need to perform manual tests as well.

Costs for not writing tests

In case you don’t want to deliver untested software, you have to do manual testing to find bugs. For these tests, detailed test cases are necessary. In reality, they hardly exist either if there was a decision against automated testing; costs are the basis of such decisions.

Doing manual tests is very time-consuming. It is reasonable to test manually even if you have automated tests, but the expenditure of time will be much higher if there are no automated tests.

Relying only on manual tests causes some negative effects (and they drive up costs):

  • The software developer has to try to understand the code again when bugs are found weeks or months after the implementation. The sooner a bug is found, the better.
  • Generally, more bugs will be reported by customers if there is no automated testing. This ties up a lot of resources on the customer's side and, of course, also at your own company (someone has to file the ticket, try to reproduce the bug, manage development planning; another testing phase has to be planned, a new release has to be scheduled and delivered, release notes have to be communicated, and so forth).
  • In the case of unit tests, the software design will be clearer and coupling will be lower. If you work actively with unit tests, you tend to take intensive care of the software design, because you think about the usage of your classes and methods. This leads to less complexity and higher maintainability.

Deployment could be relatively easy in the case of a SaaS solution, but it becomes much more complex (and expensive) if you have a lot of on-premises installations, especially in complex industrial infrastructures.

Positive side effects

I run all tests before committing my changes to the source control. This provides instant feedback and things can get fixed immediately.

Automated tests are an investment in the future of your software as well as your company. Someday a refactoring will be necessary. Automated tests greatly minimize the risk that accidental behavior changes occur, even when the software design or the implementation changes.

There is another cool thing about automated tests: run them with a nightly build and log the execution times into a database. This makes it possible to trace the performance of your software over time. Doing this daily provides great feedback about the impact of the changes made.

Conclusion

In reality, it is much more expensive to neglect writing tests. Some ignore the fact that the costs will be incurred much later. They just see lower costs during development, but this is, in my opinion, a big mistake. My advice is to write unit tests whenever reasonable, as well as integration tests for your entire API.

The post The importance of automated software testing first appeared on Norbert Eder.

Golo Roden: How to pad numbers in JavaScript

Since version ES2017, JavaScript has the two new functions padStart and padEnd for padding strings from the left and from the right, respectively. For them to work with numbers, the numbers first have to be converted into strings using toString.
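
A minimal sketch of both steps, converting first and padding afterwards:

const minutes = 7;
const seconds = 3;

// toString first, then pad to two digits with leading zeros
const time = `${minutes.toString().padStart(2, '0')}:${seconds.toString().padStart(2, '0')}`;

console.log(time); // '07:03'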

Holger Schwichtenberg: Microsoft announces plans for ASP.NET Core and Entity Framework Core 2.2

Yesterday, Microsoft announced on GitHub both the schedule and the content plans for version 2.2 of ASP.NET Core and Entity Framework Core.

Golo Roden: How to parse content types

The npm module content-type provides a parse function that can be used to analyze and decompose Content-Type headers in an RFC-compliant way.
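
A minimal sketch of its usage (the header value is just an example):

const contentType = require('content-type');

// parse returns the media type and its parameters as separate values
const { type, parameters } = contentType.parse('text/html; charset=utf-8');

console.log(type);       // 'text/html'
console.log(parameters); // { charset: 'utf-8' }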

Uli Armbruster: Causal chains: reasons why software projects fail

On this GitHub page I have started to analyze the problems in software projects that we encounter every day more closely, in order to be able to avoid or fix them.

In conversations with participants of my workshops, I am regularly told about symptoms where I believe that, as so often, the cause lies deeper.

I have titled this "causal chains" and follow this schema:

  • What is the perceived problem, i.e. the symptom
  • How did it come about, i.e. looking at the course of events
  • Why did it come about, i.e. what is the root cause

In addition, I am looking for comprehensible examples to back up the theory.

I see the page as an incentive to reflect. All "theses" are meant to serve as the starting point for a lively discussion.

Feel free to give me feedback in the form of pull requests.

Uli Armbruster: Date for nossued 2019

This year, the #nossued took place before the start of the summer holidays in most federal states. The feedback from recent years was that some people could not come because of the school holidays.

Besides taking into account the holidays, which are staggered across the individual federal states, we of course also have to be considerate of established events such as the dotnet Cologne.

Some people struggle with June/July because of the soccer tournaments taking place every two years, or because they prefer to use the warm summer season for leisure activities. Others, in turn, appreciate exactly that: the bright sun and the possibility to use the roof terrace.

Therefore, we as the organizers would like to know how you feel about the possible dates for 2019. Earlier dates would also be conceivable, such as the periods from March 8 to April 4 and from April 27 to May 31. Where does the community stand on this?

We would appreciate your feedback, e.g. in the form of naming good periods or less suitable months.

Christian Dennig [MS]: Open Service Broker for Azure (OSBA) with Azure Kubernetes Service (AKS)

In case you missed it: the Azure Managed Kubernetes service (AKS) was released today (June 13th 2018, hoooooray 🙂 see the official announcement from Brendan Burns here), and it is now possible to run production workloads on a fully Microsoft-managed Kubernetes cluster in the Azure cloud. “Fully managed” means that the K8s control plane and the worker nodes (infrastructure) are managed by Microsoft (API server, Docker runtime, scheduler, etcd server…), security patches are applied on a daily basis to the underlying OS, you get Azure Active Directory integration (currently in preview) etc. And what’s really nice: you only pay for the worker nodes, the control plane is completely free!

The integration of Kubernetes into the Azure infrastructure is really impressive, but when it comes to service integration and provisioning on the cloud platform, there is still room for improvement… but it’s on its way! The Open Service Broker for Azure (update: version 1.0 reached) closes the gap between Kubernetes workloads that require certain Azure services to run and the provisioning of these services, as it makes it possible, e.g., to create a SQL Server instance in Azure via a Kubernetes YAML file on the fly, during the deployment of other Kubernetes objects. Sounds good? Let’s see how this works.

Creating a Kubernetes Demo Cluster

First of all, we need a Kubernetes cluster to be able to test the Open Service Broker for Azure – we are going to use Azure CLI, therefore please make sure you have installed the latest version of it.

Okay, so let’s create an Azure resource group where we can deploy the AKS cluster to afterwards:

// resource group
az group create --name osba-demo-rg --location westeurope

// AKS cluster - version must be above 1.9 (!)
az aks create 
        --resource-group osba-demo-rg 
        --name osba-k8sdemo-cluster 
        --generate-ssh-keys  
        --kubernetes-version 1.9.6

When the deployment of the cluster has finished, download the corresponding kubeconfig file:

az aks get-credentials 
        --resource-group osba-demo-rg 
        --name osba-k8sdemo-cluster

Now we are ready to use kubectl to work with the newly created cluster. Test the connection by querying the available worker nodes of the cluster:

kubectl get nodes

You should see something like this:

nodes

Before we can install the Open Service Broker, we also need a service principal in Azure that is able to interact with the Azure Resource Manager and create resources on our behalf (think of it as a “service account” in Linux / Windows).

az ad sp create-for-rbac --name osba-demo-principal -o table

Important: Note down “Tenant”, “Application ID” and “Password”, as you will need these values when installing OSBA.

Installing OSBA

Cluster Installation

We are using Helm to install OSBA on our cluster, so we first need to prepare the local machine for Helm (FYI: your AKS cluster is by default ready to use Helm, so there’s no need to install anything on it – you only need to install the Helm client on your workstation):

helm init

Next, we need to deploy the Service Catalog on the cluster:

helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

helm install svc-cat/catalog --name catalog --namespace catalog \
   --set rbacEnable=false \
   --set apiserver.storage.etcd.persistence.enabled=true

Please note: if you have installed your cluster with the option “RBAC-enabled” (which is standard in the meantime for AKS deployments – unless you disable it), you’ll have to set rbacEnable=true! The service catalog apiserver container will fail to start otherwise.

Now we are ready to deploy OSBA to the cluster:

// add the Azure charts repository

helm repo add azure https://kubernetescharts.blob.core.windows.net/azure


// finally, add the service broker for Azure

helm install azure/open-service-broker-azure --name osba --namespace osba `
  --set azure.subscriptionId=<Your Subscription ID>  `
  --set azure.tenantId=<Tenant> `
  --set azure.clientId=<Service Principal Application ID> `
  --set azure.clientSecret=<Service Principal Password>

Info: In case you don’t know your Azure subscription Id, run…

az account show

…and use the value of property “id”.

You can check the status of the deployments (catalog & service broker) by querying the running pods in the namespaces catalog and osba.

pods

Service Catalog Client Tools

Service Catalog comes with its own command line interface. So you need to install it on your machine (installation instructions).

Using the OSBA for Service Provisioning

Now, we are prepared to provision / create so-called “ServiceInstances” (Azure resources) and “bind” them via “ServiceBindings” in order to be able to use them as resources/endpoints/services etc. in our pods.

In the current example, we want to provision an Azure SQL DB. So first of all, we need to create a service instance of the database. Therefore, use the following YAML definition:
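
A sketch of such a ServiceInstance definition could look like this (the object and resource names are placeholders; the relevant properties are explained below):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: osba-demo-sqldb
  namespace: default
spec:
  clusterServiceClassExternalName: azure-sql-12-0
  clusterServicePlanExternalName: standard-s1
  parameters:
    location: westeurope
    resourceGroup: osba-demo-rg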

As you can see, there are some values you have to provide to OSBA:

  • clusterServiceClassExternalName – in our case, we want to create an Azure SQL DB. You can query the available service classes by using the following command: svcat get classes. We will be using azure-sql-12-0.
  • clusterServicePlanExternalName – the service plan name which represents the service tier in Azure. Use svcat describe classes azure-sql-12-0 to show the available service plans for class azure-sql-12-0. We will be using standard-s1.
  • resourceGroup – the Azure resource group for the server and database


Show available service classes via “svcat get classes”

Show available service plans for class “azure-sql-12-0” via “svcat describe classes azure-sql-12-0”

Now, create the service via kubectl:

kubectl create -f .\service-instance.yaml

Query the service instances by using the Service Catalog CLI:

svcat get instances

The result should be (after a short amount of time) something like that:

instances

In the Azure portal, you should also see these newly created resources:

portal_resources

Now that we have created the service instance, let’s bind it in order to be able to use it. Here’s the YAML file for that:
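
A sketch of the binding definition, following the properties described below (the names are again placeholders):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: osba-demo-sqldb-binding
  namespace: default
spec:
  instanceRef:
    name: osba-demo-sqldb
  secretName: osba-demo-sqldb-secret

Then create the binding via kubectl: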

kubectl create -f service-binding.yaml

As seen with the service instance, the service binding also needs some parameters in order to work. Of course, the binding needs a reference to the service instance it wants to use (instanceRef). The more interesting property is secretName. While creating the binding, the service broker also creates a secret in the current namespace, where important values (like passwords, server name, database name, URIs etc.) are stored. You can reference the secret values afterwards in your K8s deployments and add them e.g. as environment variables to your pods.

Now let’s check via svcat if the binding has been created:

bindings

That looks good. Over to the Kubernetes dashboard to see if the secret has been created in the default namespace.

Kubernetes secret

It seems like everything was “bound” for usage as expected and we are now ready to use the Azure SQL DB in our containers/pods!

Wrap Up

As you have seen in this example, with the Open Service Broker for Azure it is very easy to create Azure resources via Kubernetes object definitions. You simply need to install OSBA on your cluster with Helm! Afterwards, you can create and bind Azure services like Azure SQL DB. If you are curious which resource providers are supported: there are currently three stable services available…

…and some experimental services:

  • Azure CosmosDB
  • Azure KeyVault
  • Azure Redis Cache
  • Azure Event Hubs
  • Azure Service Bus
  • Azure Storage
  • Azure Container Instances
  • Azure Search

The up-to-date list can always be found here: https://github.com/Azure/open-service-broker-azure/tree/master/docs/modules

Have fun with it 🙂

Holger Schwichtenberg: ASP.NET Blazor 0.4 released

The fourth preview version of Microsoft's .NET-based framework for WebAssembly programming brings several improvements.

Jürgen Gutsch: Creating a signature pad using Canvas and ASP.​NET Core Razor Pages

In one of our projects, we needed to add the possibility to add signatures to PDF documents. A technician fills out a checklist online, and a responsible person and the technician need to sign the checklist afterwards. The signatures then get embedded into a generated PDF document, together with the results of the checklist. The signatures must be created on a web UI running on an iPad Pro.

It was pretty clear that we need to use the HTML5 canvas element and to capture the pointer movements. Fortunately we stumbled upon a pretty cool library on GitHub, created by Szymon Nowak from Poland. It is the super awesome Signature Pad written in TypeScript and available as NPM and Yarn package. It is also possible to use a CDN to use the Signature Pad.

Use Signature Pad

Using Signature Pad is really easy and works well without any configuration. Let me show you in a quick way how it works:

To play around with it, I created a new ASP.NET Core Razor Pages web using the dotnet CLI:

dotnet new razor -n SignaturePad -o SignaturePad

I added a new razor page called Signature and added it to the menu in the _Layout.cshtml. I created a simple form and placed some elements in it:

<form method="POST">
    <p>
        <canvas width="500" height="400" id="signature" 
                style="border:1px solid black"></canvas><br>
        <button type="button" id="accept" 
                class="btn btn-primary">Accept signature</button>
        <button type="submit" id="save" 
                class="btn btn-primary">Save</button><br>
        <img width="500" height="400" id="savetarget" 
             style="border:1px solid black"><br>
        <input type="text" asp-for="@Model.SignatureDataUrl"> 
    </p>
</form>

The form posts the content to the current URL, which is the same Razor page but a different HTTP method handler. We will have a look at that later on.

The canvas is the most important thing. This is the area where the signature gets drawn. I added a border to make the pad boundaries visible on the screen. I also added a button to accept the signature. This means we lock the canvas and write the image data to the input field added as the last element. There is a second button to submit the form. The image tag is just there to validate the signature and is not really needed, but I was curious how the signature looks in an image tag.

This is not the nicest HTML code but works for a quick test.

Right after the form, I added a script section to render the JavaScript at the end of the page. To get it running quickly, I use jQuery to access the HTML elements. I also copied the signature_pad.min.js into the project, instead of using the CDN version.

@section Scripts{
    <script src="~/js/signature_pad.min.js"></script>
    <script>
        $(function () {

            var canvas = document.querySelector('#signature');
            var pad = new SignaturePad(canvas);

            $('#accept').click(function(){

                var data = pad.toDataURL();

                $('#savetarget').attr('src', data);
                $('#SignatureDataUrl').val(data);
                pad.off();
            
            });
                    
        });
    </script>
}

As you can see, creating the Signature Pad is simply done by creating a new instance of SignaturePad and passing the canvas as an argument. On a click on the accept button, I start working with the pad. The function toDataURL() generates an image data URL that can be directly used as an image source, as I do in the next line. After that, I store the result as the value of the input field to send it to the server. In production this should be a hidden field. At the end, I switch the Signature Pad off to lock the canvas, so the user cannot manipulate the signature anymore.

Handling the image data URL with C#

The image data URL looks like this:

data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAAGQCAYA...

So, after the comma, the image is a base64 encoded string. The data before the comma describe the image type and the encoding. I now send the complete data URL to the server, where we need to decode the string.

public void OnPost()
{
    if (String.IsNullOrWhiteSpace(SignatureDataUrl)) return;

    var base64Signature = SignatureDataUrl.Split(",")[1];            
    var binarySignature = Convert.FromBase64String(base64Signature);

    System.IO.File.WriteAllBytes("Signature.png", binarySignature);
}

On the page model, we need to create a new method OnPost() to handle the HTTP POST method. Inside, we first check whether the bound property has a value. Then we split the string by comma and convert the base64 string into a byte array.

With this byte array we can do whatever we need to do. In the real project I store the image directly in the PDF; in this demo I just store the data in an image file on the hard drive.

Conclusion

As mentioned, this is just a quick demo with some ugly code. But the rough idea could be used to implement it in a nicer way in Angular or React. To learn more about the Signature Pad, visit the repository: https://github.com/szimek/signature_pad

This example also shows what is possible with HTML5 these days. I really like the possibilities of HTML5 and the HTML5 APIs used with JavaScript.

Hope this helps :-)

Code-Inside Blog: DbProviderFactories & ODP.NET: When even Oracle can be tamed

Oracle and .NET: Tales from the dark ages

Each time I tried to load data from an Oracle database, it was a pretty terrible experience.

I remember struggling to find the right Oracle driver, and even when everything was installed, the strange TNS ora config file popped up and nothing worked.

It can be simple…

2 weeks ago I had the pleasure of loading some data from an Oracle database and discovered something beautiful: it can actually be pretty simple today.

The way to success:

1. Just ignore the System.Data.OracleClient-Namespace

The implementation is pretty old, and if you go this route you will end up in the terrible “Oracle driver/tns.ora” chaos mentioned above.

2. Use the Oracle.ManagedDataAccess:

Just install the official NuGet package and you are done. The single .dll contains all the bits to connect to an Oracle database. No driver installation or additional software is needed. Yay!

The NuGet package will add some config entries in your web.config or app.config. I will cover this in the section below.

3. Use sane ConnectionStrings:

Instead of the wild Oracle TNS config stuff, just use a (more or less) sane connection string.

You can either just use the same configuration you would normally do in the TNS file, like this:

Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=MyHost)(PORT=MyPort)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MyOracleSID)));User Id=myUsername;Password=myPassword;

Or use the even simpler “easy connect name schema” like this:

Data Source=username/password@myserver//instancename;

DbProviderFactories & ODP.NET

As I mentioned earlier, after the installation your web.config or app.config might look different.

The most interesting addition is the registration in the DbProviderFactories-section:

...
<system.data>
    <DbProviderFactories>
      <remove invariant="Oracle.ManagedDataAccess.Client"/>
      <add name="ODP.NET, Managed Driver" invariant="Oracle.ManagedDataAccess.Client" description="Oracle Data Provider for .NET, Managed Driver"
          type="Oracle.ManagedDataAccess.Client.OracleClientFactory, Oracle.ManagedDataAccess, Version=4.122.1.0, Culture=neutral, PublicKeyToken=89b483f429c47342"/>
    </DbProviderFactories>
  </system.data>
...

I covered this topic a while ago in an older blogpost, but to keep it simple: It also works for Oracle!

		private static void OracleTest()
        {
            string constr = "Data Source=localhost;User Id=...;Password=...;";

            DbProviderFactory factory = DbProviderFactories.GetFactory("Oracle.ManagedDataAccess.Client");

            using (DbConnection conn = factory.CreateConnection())
            {
                try
                {
                    conn.ConnectionString = constr;
                    conn.Open();

                    using (DbCommand dbcmd = conn.CreateCommand())
                    {
                        dbcmd.CommandType = CommandType.Text;
                        dbcmd.CommandText = "select name, address from contacts WHERE UPPER(name) Like UPPER('%' || :name || '%') ";

                        var dbParam = dbcmd.CreateParameter();
                        // prefixing with : is possible, but @ will result in an error
                        dbParam.ParameterName = "name";
                        dbParam.Value = "foobar";

                        dbcmd.Parameters.Add(dbParam);

                        using (DbDataReader dbrdr = dbcmd.ExecuteReader())
                        {
                            while (dbrdr.Read())
                            {
                                Console.WriteLine(dbrdr[0]);
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex.Message);
                    Console.WriteLine(ex.StackTrace);
                }
            }
        }

MSSQL, MySql and Oracle - via DbProviderFactories

The above code is a snippet from my larger sample demo covering MSSQL, MySQL and Oracle. If you are interested just check this demo on GitHub.

Each SQL syntax treats parameters a bit differently, so make sure you use the correct syntax for your target database.

Bottom line

Accessing an Oracle database from .NET doesn’t need to be a pain nowadays.

Be aware that the ODP.NET provider might surface higher-level APIs to work with Oracle databases. The DbProviderFactory approach served us well in our simple “just load some data” scenario.

Hope this helps.

MSDN Team Blog AT [MS]: Get started with Artificial Intelligence and Machine Learning

  • Are you interested in Cognitive Services and custom machine learning?
  • Do you want to work with neural networks and with frameworks like TensorFlow and CNTK?
  • Do you want to dive into a really deep technical discussion with machine learning specialists?
  • Would you perhaps like to have your planned project sounded out in advance, or are you looking for tips for its implementation?

In a three-day event, we bring you together with other software engineers to sharpen your machine learning skills through a series of structured challenges, and subsequently to solve problems in the computer vision area together with you.

At the end of June, we are offering a free OpenHack on Artificial Intelligence and Machine Learning as a readiness measure.

At the OpenHacks, we give the participants, i.e. you, tasks that you have to solve yourselves. This results in an enormously high learning efficiency and a remarkable knowledge transfer. For all developers dealing with these topics, it is pretty much a must.

So if you are actively engaging with the topics of Artificial Intelligence & Machine Learning, or want to, then come yourself or send other developers. Together with the software engineers of the CSE, you can really dig into the subject.

Oh yes, very important: bring your own laptop and "come prepared to hack!!". No marketing, no sales. Pure hacking!!

The prerequisites:

  • Bring your own device for development
  • At least basic knowledge of Python and data structures would be good.
    A small tip: work through "Intro to Python for Data Science" as an introduction or a refresher.
  • Not essential, but certainly helpful: first experiences with machine learning
  • Target audience: everyone who can develop: developers, architects, data scientists, …

As the CSE (Commercial Software Engineering), we can subsequently also support you in implementing challenging cloud projects. The goal, of course, is to motivate you to think about or start your own AI & ML projects. As a small inspiration, here are a few projects that came into being in cooperation with the CSE: https://www.microsoft.com/developerblog/category/machine-learning/

Register at: https://aka.ms/openhackberlin

We look forward to seeing you!!

Code-Inside Blog: CultureInfo.GetCultureInfo() vs. new CultureInfo() - what's the difference?

The problem

The problem started with a simple code:

double.TryParse("1'000", NumberStyles.Any, culture, out _)

Be aware that the given culture was “DE-CH”, and the Swiss use the ‘ as the group separator for numbers.

Unfortunately the Swiss authorities have abandoned the ‘ for currencies, but it is widely used in the industry, and such a number should be parsed or displayed.

Now Microsoft steps in and they use a very similar char in the “DE-CH” region setting:

  • The baked-in char to separate numbers: ‘ (CharCode: 8217)
  • The obvious choice would be: ‘ (CharCode: 39)

The result of this configuration hell:

If you don’t change the region settings in Windows you can’t parse doubles with this fancy group separator.

Stranger things:

My work machine is running the EN-US version of Windows, and my tests were failing because of this madness. But it was even stranger: some other tests (quite similar to what I did) were OK on our company DE-CH machines.

But… why?

After some crazy debugging time I discovered that our company DE-CH machines (and the machines of our customer) were using the “sane” group separator, but my code still didn’t work as expected.

Root cause

The root problem (besides the stupid char choice) was this: I used the “wrong” method to get the “DE-CH” culture in my code.

Let’s try out this demo code:

class Program
    {
        static void Main(string[] args)
        {
            var culture = new CultureInfo("de-CH");

            Console.WriteLine("de-CH Group Separator");
            Console.WriteLine(
                $"{culture.NumberFormat.CurrencyGroupSeparator} - CharCode: {(int) char.Parse(culture.NumberFormat.CurrencyGroupSeparator)}");
            Console.WriteLine(
                $"{culture.NumberFormat.NumberGroupSeparator} - CharCode: {(int) char.Parse(culture.NumberFormat.NumberGroupSeparator)}");

            var cultureFromFramework = CultureInfo.GetCultureInfo("de-CH");

            Console.WriteLine("de-CH Group Separator from Framework");
            Console.WriteLine(
                $"{cultureFromFramework.NumberFormat.CurrencyGroupSeparator} - CharCode: {(int)char.Parse(cultureFromFramework.NumberFormat.CurrencyGroupSeparator)}");
            Console.WriteLine(
                $"{cultureFromFramework.NumberFormat.NumberGroupSeparator} - CharCode: {(int)char.Parse(cultureFromFramework.NumberFormat.NumberGroupSeparator)}");
        }
    }

The result should be something like this:

de-CH Group Separator
' - CharCode: 8217
' - CharCode: 8217
de-CH Group Separator from Framework
' - CharCode: 8217
' - CharCode: 8217

Now change the region setting for de-CH and see what happens:


de-CH Group Separator
' - CharCode: 8217
X - CharCode: 88
de-CH Group Separator from Framework
' - CharCode: 8217
' - CharCode: 8217

Only the first CultureInfo instance picked up the change!

Modified vs. read-only

The problem can be summarized with: RTFM!

From the MSDN for GetCultureInfo: Retrieves a cached, read-only instance of a culture.

The “new CultureInfo” constructor will pick up the changed settings from Windows.

TL;DR:

  • CultureInfo.GetCultureInfo will return a cached, “baked-in” culture, which might be very fast, but doesn’t respect user changes made in Windows.
  • If you need to use the modified values from Windows: use the normal CultureInfo constructor.
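
And if you control the parsing code, you can sidestep the region-settings lottery entirely. A hedged sketch: clone the culture and set the group separator yourself before parsing.

// Clone() returns a writable copy, even of the cached read-only culture
var culture = (CultureInfo)CultureInfo.GetCultureInfo("de-CH").Clone();

// force the plain apostrophe (CharCode: 39) as group separator
culture.NumberFormat.NumberGroupSeparator = "'";

double.TryParse("1'000", NumberStyles.Any, culture, out var result); // true, result = 1000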

Hope this helps!

Stefan Henneken: IEC 61131-3: The ‘Observer’ Pattern

The Observer Pattern is suitable for applications that require one or more function blocks to be notified when the state of a particular function block changes. The assignment of the communication participants can be changed at runtime of the program.

In almost every IEC 61131-3 program, function blocks exchange states with each other. In the simplest case, one input of one FB is assigned the output of another FB.

Pic01

This makes it very easy to exchange states between function blocks. But this simplicity has its price:

Inflexibility. The assignment between fbSensor and the three instances of FB_Actuator is hard-coded in the program. Dynamic assignment between the FBs during runtime is not possible.

Fixed dependencies. The data type of the output variable of FB_Sensor must be compatible with the input variable of FB_Actuator. If there is a new sensor component whose output variable is incompatible with the previous data type, this necessarily results in an adjustment of the data type of the actuators.

Problem Definition

The following example shows how, with the help of the Observer pattern, the fixed assignment between the communication participants can be dispensed with. The sensor reads a measured value (e.g. a temperature) from a data source, while the actuator performs actions depending on a measured value (e.g. temperature control). The communication between the participants should be changeable. To eliminate these disadvantages, two basic OO design principles are helpful:

  • Identify those areas that remain constant and separate them from those that change.
  • Never program directly to implementations, but always to interfaces. The assignment between input and output variables must therefore no longer be permanently implemented.

This can be realized elegantly with the help of interfaces that define the communication between the FBs. There is no longer a fixed assignment of input and output variables, which results in a loose coupling between the participants. Software design based on loose coupling makes it possible to build flexible software systems that cope better with changes, since the dependencies between the participants are minimized.

Definition of the Observer Pattern

The Observer pattern provides an efficient communication mechanism between several participants, whereby one or more participants depend on the state of another participant. The participant providing the state is called the Subject (FB_Sensor). The participants that depend on the state are called Observers (FB_Actuator).

The Observer pattern is often compared to a newspaper subscription service. The publisher is the subject, while the subscribers are the observers. A subscriber must register with the publisher and, when registering, may also specify which information he would like to receive. The publisher maintains a list in which all subscribers are stored. As soon as a new publication is available, the publisher sends the desired information to all subscribers in the list.

This is expressed more formally in the book "Design Patterns. Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides:

The Observer pattern defines a 1-to-n dependency between objects, so that changing the state of one object causes all dependent objects to be notified and automatically updated.

Implementation

How the subject receives the data and how the observer processes the data is not discussed here in more detail.

Observer

The method Update() is used by the subject to notify an observer when the value changes. Since this behaviour is the same for all observers, the interface I_Observer is defined, which is implemented by all observers.

The function block FB_Observer also defines a property that returns the current actual value.

Pic02 Pic03

Since the data is exchanged via a method, no further inputs or outputs are required.

FUNCTION_BLOCK PUBLIC FB_Observer IMPLEMENTS I_Observer
VAR
  fValue : LREAL;
END_VAR

Here is the implementation of the method Update():

METHOD PUBLIC Update
VAR_INPUT
  fValue : LREAL;
END_VAR
THIS^.fValue := fValue;

And the property fActualValue:

PROPERTY PUBLIC fActualValue : LREAL
fActualValue := THIS^.fValue;

Subject

The subject manages a list of observers. Using the methods Attach() and Detach(), the individual observers can register and deregister.

Pic04 Pic05

Since all observers implement the interface I_Observer, the list is of type ARRAY[1..Param.cMaxObservers] OF I_Observer. The exact implementation of the observers does not have to be known at this point. Further variants of observers can be created; as long as they implement the interface I_Observer, the subject can communicate with them.

The method Attach() receives the interface pointer to the observer as a parameter. Before it is stored in the list (line 23), the method checks whether it is valid and not already contained in the list.

METHOD PUBLIC Attach : BOOL
VAR_INPUT
  ipObserver            : I_Observer;
END_VAR
VAR
  nIndex                : INT := 0;
END_VAR

Attach := FALSE;
IF (ipObserver = 0) THEN
  RETURN;
END_IF
// is the observer already registered?
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = ipObserver) THEN
    RETURN;
  END_IF
END_FOR

// save the observer object into the array of observers and send the actual value
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = 0) THEN
    THIS^.aObservers[nIndex] := ipObserver;
    THIS^.aObservers[nIndex].Update(THIS^.fValue);
    Attach := TRUE;
    EXIT;
  END_IF
END_FOR

The method Detach() also receives the interface pointer to the observer as a parameter. If the interface pointer is valid, the observer is searched for in the list and the corresponding position is cleared (line 15).

METHOD PUBLIC Detach : BOOL
VAR_INPUT
  ipObserver             : I_Observer;
END_VAR
VAR
  nIndex                 : INT := 0;
END_VAR

Detach := FALSE;
IF (ipObserver = 0) THEN
  RETURN;
END_IF
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = ipObserver) THEN
    THIS^.aObservers[nIndex] := 0;
    Detach := TRUE;
  END_IF
END_FOR

If there is a state change in the subject, the method Update() is called via all valid interface pointers in the list (line 8). This functionality is found in the private method Notify().

METHOD PRIVATE Notify
VAR
  nIndex : INT := 0;
END_VAR

FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] <> 0) THEN
    THIS^.aObservers[nIndex].Update(THIS^.fActualValue);
  END_IF
END_FOR

    In this example, the subject generates a random value every second and then notifies the observer using the Notify() method.

    FUNCTION_BLOCK PUBLIC FB_Subject IMPLEMENTS I_Subject
    VAR
      fbDelay : TON;
      fbDrand : DRAND;
      fValue : LREAL;
      aObservers : ARRAY [1..Param.cMaxObservers] OF I_Observer;
    END_VAR
    
    // creates every sec a random value and invoke the update method
    fbDelay(IN := TRUE, PT := T#1S);
    IF (fbDelay.Q) THEN
      fbDelay(IN := FALSE);
      fbDrand(SEED := 0);
      fValue := fbDrand.Num * 1234.5;
      Notify();
    END_IF
    

    Nowhere does the subject access FB_Observer directly; access always takes place indirectly via the interface I_Observer. The application can therefore be extended with any observer: as long as it implements the interface I_Observer, no adjustments to the subject are necessary.
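
    To illustrate this extensibility, here is a sketch of a hypothetical second observer variant that additionally writes every new value to the TwinCAT message window. The function ADSLOGLREAL comes from the Tc2_System library; the name FB_ObserverLog is an assumption.

    FUNCTION_BLOCK PUBLIC FB_ObserverLog IMPLEMENTS I_Observer
    VAR
      fValue : LREAL;
    END_VAR
    // the property fActualValue is implemented as in FB_Observer above
    
    METHOD PUBLIC Update
    VAR_INPUT
      fValue : LREAL;
    END_VAR
    THIS^.fValue := fValue;
    // additionally log the new value to the TwinCAT message window
    ADSLOGLREAL(msgCtrlMask := ADSLOG_MSGTYPE_HINT,
                msgFmtStr := 'new value: %f',
                fLrealArg := fValue);
    
    The subject does not have to be changed for this; FB_ObserverLog is attached and detached in exactly the same way as FB_Observer.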

    Application

    The following program helps to test the example. A subject and two observers are created in it. By setting the corresponding auxiliary variables, the two observers can be attached to the subject and detached from it again at runtime.

    PROGRAM MAIN
    VAR
      fbSubject         : FB_Subject;
      fbObserver1       : FB_Observer;
      fbObserver2       : FB_Observer;
      bAttachObserver1  : BOOL;
      bAttachObserver2  : BOOL;
      bDetachObserver1  : BOOL;
      bDetachObserver2  : BOOL;
    END_VAR
    
    fbSubject();
    
    IF (bAttachObserver1) THEN
      fbSubject.Attach(fbObserver1);
      bAttachObserver1 := FALSE;
    END_IF
    IF (bAttachObserver2) THEN
      fbSubject.Attach(fbObserver2);
      bAttachObserver2 := FALSE;
    END_IF
    IF (bDetachObserver1) THEN
      fbSubject.Detach(fbObserver1);
      bDetachObserver1 := FALSE;
    END_IF
    IF (bDetachObserver2) THEN
      fbSubject.Detach(fbObserver2);
      bDetachObserver2 := FALSE;
    END_IF
    

    Sample 1 (TwinCAT 3.1.4022) on GitHub

    Improvements

    Subject: Interface or base class?

    The necessity of the interface I_Observer is obvious in this implementation: it decouples access to an observer from the observer's concrete implementation.

    The interface I_Subject, however, does not appear necessary here, and in fact it could be omitted. I have provided it anyway, because it keeps open the option of creating special variants of FB_Subject. For example, there might be a function block that does not organize the observer list in an array. The methods for registering and deregistering the different observers could then be accessed generically via the interface I_Subject.
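
    Assuming I_Subject declares nothing more than the two registration methods, its declaration might look like this:

    INTERFACE I_Subject
    
    METHOD Attach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR
    
    METHOD Detach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR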

    The disadvantage of the interface, however, is that the registration code must be implemented anew each time, even if the application does not require a special variant. Instead, a base class (FB_SubjectBase) seems more useful for the subject: the management code for the methods Attach() and Detach() can be moved to this base class. If a special subject (FB_SubjectNew) is needed, it can inherit from this base class (FB_SubjectBase).
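
    A minimal sketch of such a base class; the bodies of Attach() and Detach() correspond to the management code shown above:

    FUNCTION_BLOCK PUBLIC FB_SubjectBase IMPLEMENTS I_Subject
    VAR
      // observer list; in the sample it is read directly by the embedding block
      aObservers : ARRAY [1..Param.cMaxObservers] OF I_Observer;
    END_VAR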

    But what if this special function block (FB_SubjectNew) already inherits from another base class (FB_Base)? Multiple inheritance is not possible (however, several interfaces can be implemented).

    Here, it makes sense to embed the base class in the new function block, i.e. to create a local instance of FB_SubjectBase.

    FUNCTION_BLOCK PUBLIC FB_SubjectNew EXTENDS FB_Base IMPLEMENTS I_Subject
    VAR
      fValue               : LREAL;
      fbSubjectBase        : FB_SubjectBase;
    END_VAR
    

    The methods Attach() and Detach() can then access this local instance.

    Method Attach():

    METHOD PUBLIC Attach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR
    
    Attach := FALSE;
    IF (THIS^.fbSubjectBase.Attach(ipObserver)) THEN
      ipObserver.Update(THIS^.fValue);
      Attach := TRUE;
    END_IF
    

    Method Detach():

    METHOD PUBLIC Detach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR
    Detach := THIS^.fbSubjectBase.Detach(ipObserver);
    

    Method Notify():

    METHOD PRIVATE Notify
    VAR
      nIndex : INT := 0;
    END_VAR
    
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.fbSubjectBase.aObservers[nIndex] <> 0) THEN
        THIS^.fbSubjectBase.aObservers[nIndex].Update(THIS^.fValue);
      END_IF
    END_FOR
    

    Thus, the new subject implements the interface I_Subject, inherits from the function block FB_Base and can access the functionalities of FB_SubjectBase via the embedded instance.

    Sample 2 (TwinCAT 3.1.4022) on GitHub

    Update: Push or pull method?

    There are two ways in which the observer receives the desired information from the subject:

    With the push method, all information is passed to the observer via the update method; only one method call is required for the entire information exchange. In the example, the subject only ever passes a single variable of type LREAL, but depending on the application it can be considerably more data. However, not every observer always needs all the information that is passed to it. Furthermore, extensions become more difficult: if the method Update() is extended by further data, all observers have to be adapted. This can be remedied by using a special function block as a parameter. This function block encapsulates all necessary information in properties; if additional properties are added, it is not necessary to adjust the update method.
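
    Sketched in code, assuming a hypothetical interface I_MeasurementData through which such a data function block exposes its values as properties:

    METHOD PUBLIC Update
    VAR_INPUT
      ipData : I_MeasurementData; // hypothetical: bundles all values as properties
    END_VAR
    IF (ipData <> 0) THEN
      // the observer reads only the properties it is interested in
      THIS^.fValue := ipData.fActualValue;
    END_IF
    
    If I_MeasurementData later gains additional properties, the signature of Update() remains unchanged and existing observers do not have to be adapted.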

    If the pull method is implemented, the observer receives only a minimal notification. It then retrieves all the information it needs from the subject itself. However, two conditions must be met: first, the subject must make all data available as properties; second, the observer must be given a reference to the subject so that it can access those properties. One solution is for the update method to receive a reference to the subject (i.e. to itself) as a parameter.
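
    A sketch of this variant, assuming a hypothetical read interface I_SubjectData through which the subject exposes its properties:

    METHOD PUBLIC Update
    VAR_INPUT
      ipSubject : I_SubjectData; // hypothetical: the subject passes a reference to itself
    END_VAR
    IF (ipSubject <> 0) THEN
      // the observer pulls exactly the data it needs from the subject
      THIS^.fValue := ipSubject.fActualValue;
    END_IF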

    Both variants can certainly be combined with each other: the subject provides all relevant data as properties, and at the same time the update method passes a reference to the subject together with the most important information as parameters. This combined approach is the classic one used by numerous GUI libraries.

    Tip: If the subject knows little about its observers, the pull method is preferable. If the subject knows its observers well (for example, because there are only a few different types of observers), the push method should be used.
