Jürgen Gutsch: Customizing ASP.​NET Core Part 02: Configuration

This second part of the blog series about customizing ASP.NET Core is about the application configuration: how to use it, and how to customize it to configure your app in different ways.

The series topics

  • Customizing ASP.NET Core Part 01: Logging
  • Customizing ASP.NET Core Part 02: Configuration
  • Customizing ASP.NET Core Part 03: Dependency Injection
  • Customizing ASP.NET Core Part 04: HTTPS
  • Customizing ASP.NET Core Part 05: HostedServices
  • Customizing ASP.NET Core Part 06: MiddleWares
  • Customizing ASP.NET Core Part 07: OutputFormatter
  • Customizing ASP.NET Core Part 08: ModelBinder
  • Customizing ASP.NET Core Part 09: ActionFilter
  • Customizing ASP.NET Core Part 10: TagHelpers

Configure the configuration

As with the logging, since ASP.NET Core 2.0 the configuration is hidden in the default configuration of the WebHostBuilder and is no longer part of the Startup.cs. This is done for the same reason: to keep the Startup clean and simple:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

Fortunately you are also able to override the default settings to customize the configuration in the way you need it.

When you create a new ASP.NET Core project, you already have an appsettings.json and an appsettings.Development.json configured. You can and should use these configuration files to configure your app. You should because this is the pre-configured way and most ASP.NET Core developers will look for an appsettings.json to configure the application. This is absolutely fine and works pretty well.

But maybe you already have an existing XML configuration, or want to share a YAML configuration file across different kinds of applications. This could also make sense. Sometimes it also makes sense to read configuration values out of a database.

The next snippet shows the hidden default configuration that reads the appsettings.json files:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

This configuration also sets the base path of the application and adds the configuration via environment variables. The method ConfigureAppConfiguration accepts a lambda that gets a WebHostBuilderContext and an IConfigurationBuilder passed in.

Whenever you customize the application configuration, you should add the configuration via environment variables as the last step. The order of the configuration matters: configuration providers added later override the values of previously added ones. Make sure the environment variables are always added last, so that they override the configuration from files. This way you ensure that settings configured for Azure web apps via the Application Settings UI on Azure, which are passed to the application as environment variables, take effect.

The IConfigurationBuilder has a lot of extension methods to add more configurations, like XML or INI configuration files, in-memory configurations and so on. You can also find many more configuration providers provided by the community, to read YAML files, database values and a lot more. In this demo I'm going to show you how to read INI files in.
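
As a quick example of one of these built-in extension methods, this is how an in-memory collection could be added inside ConfigureAppConfiguration (the keys and values are just placeholders):

// a minimal sketch: an in-memory source, e.g. for defaults that
// files or environment variables may override later
config.AddInMemoryCollection(new Dictionary<string, string>
{
    { "AppSettings:Foo", "123" },
    { "AppSettings:Bar", "Bar" }
});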

Typed configurations

Before trying to read the INI files, it makes sense to show how to use typed configuration instead of reading the configuration via the IConfiguration key by key.
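
For comparison, reading key by key looks like this, assuming the injected IConfiguration is stored in a Configuration property as in the default Startup:

// section and key are separated by a colon
var foo = Configuration["AppSettings:Foo"];
var bar = Configuration.GetSection("AppSettings")["Bar"];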

To read a typed configuration you need to define the type to configure. I usually create a class called AppSettings like this:

public class AppSettings
{
    public int Foo { get; set; }
    public string Bar { get; set; }
}

This class can then be filled with a specific configuration section inside the method ConfigureServices in the Startup.cs:

services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));

This way the typed configuration also gets registered as a service in the dependency injection container and can be used everywhere in the application. You are able to create different configuration types per configuration section. In most cases one section should be fine, but maybe it makes sense to divide the settings into different sections.

This configuration can then be used via dependency injection in every part of your application. The next snippet shows how to use the configuration in an MVC controller:

public class HomeController : Controller
{
    private readonly AppSettings _options;

    public HomeController(IOptions<AppSettings> options)
    {
        _options = options.Value;
    }

    // ... actions that use _options ...
}

The IOptions<AppSettings> is a wrapper around our AppSettings type and the property Value contains the actual instance of the AppSettings including the values from the configuration file.

To try that out, the appsettings.json needs to have the AppSettings section configured, otherwise the values are null or not set:

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning"
    }
  },
  "AllowedHosts": "*",
  "AppSettings": {
      "Foo": 123,
      "Bar": "Bar"
  }
}

Configuration using INI files

To also use INI files to configure the application, we need to add the INI configuration inside the method ConfigureAppConfiguration in the Program.cs:

config.AddIniFile("appsettings.ini", optional: false, reloadOnChange: true);
config.AddIniFile($"appsettings.{env.EnvironmentName}.ini", optional: true, reloadOnChange: true);

This code loads the INI files the same way as the JSON configuration files. The first line adds a required configuration file, the second one an optional file depending on the current runtime environment.

The INI file could look like this:

[AppSettings]
Bar="FooBar"

This file also contains a section called AppSettings and a property called Bar. As I wrote earlier, the order of the configuration matters. If you add the two lines to configure via INI files after the configuration via JSON files, the INI files will override the settings from the JSON files. The property Bar gets overridden with "FooBar" and the property Foo stays the same. The values from the INI file are also available via the previously created AppSettings class.

Every other configuration provider works the same way. Let's see what a configuration provider looks like.

Configuration Providers

A configuration provider is an implementation of IConfigurationProvider that gets created by a configuration source, which is an implementation of IConfigurationSource. The configuration provider then reads the data in from somewhere and provides it via a dictionary.
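
A minimal sketch of such a pair could look like this; the type names, properties and hard-coded values are hypothetical placeholders:

// a minimal sketch of a custom configuration source and provider;
// all names and values here are hypothetical
public class MyCustomConfigurationSource : IConfigurationSource
{
    public string SourceConfig { get; set; }
    public bool Optional { get; set; }
    public bool ReloadOnChange { get; set; }

    public IConfigurationProvider Build(IConfigurationBuilder builder)
        => new MyCustomConfigurationProvider(this);
}

public class MyCustomConfigurationProvider : ConfigurationProvider
{
    private readonly MyCustomConfigurationSource _source;

    public MyCustomConfigurationProvider(MyCustomConfigurationSource source)
    {
        _source = source;
    }

    // Load() fills the Data dictionary provided by the ConfigurationProvider base class
    public override void Load()
    {
        Data = new Dictionary<string, string>
        {
            { "AppSettings:Foo", "456" }
        };
    }
}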

To add a custom or third party configuration provider to ASP.NET Core you need to call the method Add on the configuration builder and put the configuration source in:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureAppConfiguration((builderContext, config) =>
    {
        var env = builderContext.HostingEnvironment;

        config.SetBasePath(env.ContentRootPath);
        config.AddJsonFile("appsettings.json", optional: false, reloadOnChange: true);
        config.AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);
        
        // add new configuration source
        config.Add(new MyCustomConfigurationSource
        {
            SourceConfig = "source", // configure whatever the source needs; placeholder value
            Optional = false,
            ReloadOnChange = true
        });
        
        config.AddEnvironmentVariables();
    })
    .UseStartup<Startup>();

Usually you would create an extension method to make adding the configuration source easier:

config.AddMyCustomSource("source", optional: false, reloadOnChange: true);
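
Such an extension method could be shaped like this, matching the hypothetical MyCustomConfigurationSource sketched above:

// a sketch of a possible extension method; everything here is hypothetical
public static class MyCustomSourceExtensions
{
    public static IConfigurationBuilder AddMyCustomSource(
        this IConfigurationBuilder builder,
        string source,
        bool optional = false,
        bool reloadOnChange = false)
    {
        return builder.Add(new MyCustomConfigurationSource
        {
            SourceConfig = source,
            Optional = optional,
            ReloadOnChange = reloadOnChange
        });
    }
}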

A really detailed, concrete example of how to create a custom configuration provider was written by fellow MVP Andrew Lock.

Conclusion

In most cases it is not necessary to add a different configuration provider or to create your own, but it's good to know how to change it in case you need it. Also, using typed configuration is a nice way to read the settings. In classic ASP.NET we used a manually created façade to read the application settings in a typed way. Now this is automatically done by just providing a class. This class gets automatically filled and provided via dependency injection.

To learn more about ASP.NET Core dependency injection, have a look at the next part of the series: Customizing ASP.NET Core Part 03: Dependency Injection (not yet done).

Jürgen Gutsch: Customizing ASP.​NET Core Part 01: Logging

In this first part of the new blog series about customizing ASP.NET Core, I will show you how to customize the logging. The default logging only writes to the console or to the debug window. This is quite good for most cases, but maybe you need to log to a sink like a file or a database, or maybe you want to extend the logger with additional information. In those cases you need to know how to change the default logging.

The series topics

  • Customizing ASP.NET Core Part 01: Logging
  • Customizing ASP.NET Core Part 02: Configuration
  • Customizing ASP.NET Core Part 03: Dependency Injection
  • Customizing ASP.NET Core Part 04: HTTPS
  • Customizing ASP.NET Core Part 05: HostedServices
  • Customizing ASP.NET Core Part 06: MiddleWares
  • Customizing ASP.NET Core Part 07: OutputFormatter
  • Customizing ASP.NET Core Part 08: ModelBinder
  • Customizing ASP.NET Core Part 09: ActionFilter
  • Customizing ASP.NET Core Part 10: TagHelpers

Configure logging

In previous versions of ASP.NET Core (pre 2.0) the logging was configured in the Startup.cs. Since 2.0 the Startup.cs was simplified and a lot of configurations were moved to a default WebHostBuilder, which is called in the Program.cs. The logging was also moved to the default WebHostBuilder:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)                
            .UseStartup<Startup>();
}

In ASP.NET Core you are able to override and customize almost everything, and that includes the logging. The IWebHostBuilder has a lot of extension methods to override the default behavior. To override the default settings for the logging we need to use the ConfigureLogging method. The next snippet shows exactly the same logging that is configured inside the CreateDefaultBuilder() method:

WebHost.CreateDefaultBuilder(args)	
    .ConfigureLogging((hostingContext, logging) =>
    {
        logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
        logging.AddConsole();
        logging.AddDebug();
    })                
    .UseStartup<Startup>();

This method needs a lambda that gets a WebHostBuilderContext containing the hosting context and an ILoggingBuilder to configure the logging.

Create a custom logger

To demonstrate a custom logger, I created a small useless logger that is able to colorize log entries with a specific log level in the console. This so-called ColoredConsoleLogger will be added and created using a LoggerProvider we also need to write on our own. To specify the color and the log level to colorize, we need to add a configuration class. The next snippet shows all three parts (logger, logger provider and configuration):

public class ColoredConsoleLoggerConfiguration
{
    public LogLevel LogLevel { get; set; } = LogLevel.Warning;
    public int EventId { get; set; } = 0;
    public ConsoleColor Color { get; set; } = ConsoleColor.Yellow;
}

public class ColoredConsoleLoggerProvider : ILoggerProvider
{
    private readonly ColoredConsoleLoggerConfiguration _config;
    private readonly ConcurrentDictionary<string, ColoredConsoleLogger> _loggers = new ConcurrentDictionary<string, ColoredConsoleLogger>();

    public ColoredConsoleLoggerProvider(ColoredConsoleLoggerConfiguration config)
    {
        _config = config;
    }

    public ILogger CreateLogger(string categoryName)
    {
        return _loggers.GetOrAdd(categoryName, name => new ColoredConsoleLogger(name, _config));
    }

    public void Dispose()
    {
        _loggers.Clear();
    }
}

public class ColoredConsoleLogger : ILogger
{
    private static object _lock = new object();
    private readonly string _name;
    private readonly ColoredConsoleLoggerConfiguration _config;

    public ColoredConsoleLogger(string name, ColoredConsoleLoggerConfiguration config)
    {
        _name = name;
        _config = config;
    }

    public IDisposable BeginScope<TState>(TState state)
    {
        return null;
    }

    public bool IsEnabled(LogLevel logLevel)
    {
        return logLevel == _config.LogLevel;
    }

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }

        lock (_lock)
        {
            if (_config.EventId == 0 || _config.EventId == eventId.Id)
            {
                var color = Console.ForegroundColor;
                Console.ForegroundColor = _config.Color;
                Console.WriteLine($"{logLevel.ToString()} - {eventId.Id} - {_name} - {formatter(state, exception)}");
                Console.ForegroundColor = color;
            }
        }
    }
}

We need to lock the actual console output, because otherwise we would get race conditions where log entries get colored with the wrong color, since the console itself is not really thread-safe.

If this is done we can start to plug in the new logger to the configuration:

logging.ClearProviders();

var config = new ColoredConsoleLoggerConfiguration
{
    LogLevel = LogLevel.Information,
    Color = ConsoleColor.Red
};
logging.AddProvider(new ColoredConsoleLoggerProvider(config));

If needed you are able to clear all the previously added logger providers. Then we call AddProvider to add a new instance of our ColoredConsoleLoggerProvider with the specific settings. We could also add more instances of the provider with different settings, as shown below.
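
For example, a second instance that colors errors differently; the values here are arbitrary:

// another provider instance with its own level and color
logging.AddProvider(new ColoredConsoleLoggerProvider(
    new ColoredConsoleLoggerConfiguration
    {
        LogLevel = LogLevel.Error,
        Color = ConsoleColor.DarkRed
    }));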

This shows how to handle different log levels in different ways. You can use this to send an email on hard errors, to log debug messages to a different log sink than regular informational messages, and so on.

In many cases it doesn't make sense to write a custom logger, because there are already many good third-party loggers like elmah, log4net and NLog. In the next section I'm going to show you how to use NLog in ASP.NET Core.

Plug-in an existing Third-Party logger provider

NLog was one of the very first loggers, which was available as a .NET Standard library and usable in ASP.NET Core. NLog also already provides a Logger Provider to easily plug it into ASP.NET Core.

The next snippet shows a typical NLog.Config that defines two different sinks to log all messages in one log file and custom messages only into another file:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Warn"
      internalLogFile="C:\git\dotnetconf\001-logging\internal-nlog.txt">

  <!-- Load the ASP.NET Core plugin -->
  <extensions>
    <add assembly="NLog.Web.AspNetCore"/>
  </extensions>

  <!-- the targets to write to -->
  <targets>
     <!-- write logs to file -->
     <target xsi:type="File" name="allfile" fileName="C:\git\dotnetconf\001-logging\nlog-all-${shortdate}.log"
                 layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${message} ${exception}" />

   <!-- another file log, only own logs. Uses some ASP.NET core renderers -->
     <target xsi:type="File" name="ownFile-web" fileName="C:\git\dotnetconf\001-logging\nlog-own-${shortdate}.log"
             layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|  ${message} ${exception}|url: ${aspnet-request-url}|action: ${aspnet-mvc-action}" />

     <!-- write to the void aka just remove -->
    <target xsi:type="Null" name="blackhole" />
  </targets>

  <!-- rules to map from logger name to target -->
  <rules>
    <!--All logs, including from Microsoft-->
    <logger name="*" minlevel="Trace" writeTo="allfile" />

    <!--Skip Microsoft logs and so log only own logs-->
    <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" />
    <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
  </rules>
</nlog>

We then need to add the NLog ASP.NET Core package from NuGet:

dotnet add package NLog.Web.AspNetCore

(Be sure you are in the project directory before you execute that command)

Now you only need to add NLog in the ConfigureLogging method in the Program.cs:

hostingContext.HostingEnvironment.ConfigureNLog("NLog.Config");
logging.AddProvider(new NLogLoggerProvider());

The first line configures NLog to use the previously created NLog.Config and the second line adds the NLogLoggerProvider to the list of logging providers. Here you can add as many logger providers as you need.

Conclusion

The good thing about hiding the basic configuration is that it cleans up the newly scaffolded projects and keeps the actual start as simple as possible. The developer is able to focus on the actual features. But the more the application grows, the more important logging becomes. The default logging configuration is easy and works like a charm, but in production you need a persisted log to see errors from the past. So you need to add custom logging or a more flexible logger like NLog or log4net.

To learn more about ASP.NET Core configuration, have a look at the next part of the series: Customizing ASP.NET Core Part 02: Configuration.

Jürgen Gutsch: New Blog Series: Customizing ASP.​NET Core

With this post I want to introduce a new blog series about things you can, or maybe need to, customize in ASP.NET Core. Initially this series will contain ten different topics. Maybe I'll write some more posts about it later.

The initial topics are based on my talk about Customizing ASP.NET Core. I have given this talk several times in German and English, including at the .NET Conf 2018.

Unfortunately, at the .NET Conf the talk started with pretty bad audio for some reason. The first five minutes can go straight to the trash, IMHO. I could also only show 7 out of 10 demos, even though I had managed to fit all the demos into 45 minutes the day before. I'm almost sure the audio problem wasn't on my side: via the router I disconnected almost all devices from the internet during the hour I was presenting, and everything went well before the presentation when we did the latest tech check.

Anyway, after five minutes the audio got a lot better and the audience was able to follow the rest of the presentation.

For this series I'm going to follow the same order as in that presentation, which is the order from bottom to top: from the server configuration parts, over Web API, up to the MVC topics.

The series topics

  • Customizing ASP.NET Core Part 01: Logging
  • Customizing ASP.NET Core Part 02: Configuration
  • Customizing ASP.NET Core Part 03: Dependency Injection
  • Customizing ASP.NET Core Part 04: HTTPS
  • Customizing ASP.NET Core Part 05: HostedServices
  • Customizing ASP.NET Core Part 06: MiddleWares
  • Customizing ASP.NET Core Part 07: OutputFormatter
  • Customizing ASP.NET Core Part 08: ModelBinder
  • Customizing ASP.NET Core Part 09: ActionFilter
  • Customizing ASP.NET Core Part 10: TagHelpers

Do you want to see that talk?

If you are interested in this talk about Customizing ASP.NET Core, feel free to drop me a comment, a message via Twitter or an email. I'm able to do it remotely via Skype or Skype for Business, or on site if the travel costs are covered somehow. For free at community events like meetups or user group meetings, and fairly paid at commercial events.

Discover more possible talks on Sessionize: https://sessionize.com/juergengutsch

Golo Roden: 25 Days Later …

Node.js in the 10.x series up to version 10.9.0 contains a bug that makes the functions setTimeout and setInterval stop working after 25 days. The cause is a conversion error. The remedy is updating to a newer Node.js version.

Stefan Henneken: IEC 61131-3: The 'State' Pattern

State machines are used regularly, especially in automation technology. The State Pattern provides an object-oriented approach that offers important advantages, particularly for larger state machines.

Most developers have already implemented state machines in IEC 61131-3, some consciously, others perhaps unconsciously. In the following, a simple example introduces three different approaches:

  1. CASE statement
  2. State transitions in methods
  3. The 'State' Pattern

Our example describes a vending machine that dispenses a product after a coin has been inserted and a button has been pressed. The number of products is limited. If a coin is inserted and the button is pressed although the machine is empty, the coin is returned.

The machine is represented by the function block FB_Machine. Inputs accept the events, while outputs expose the current state and the number of products still available. The maximum number of products is specified when the FB is declared.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton           : BOOL;
  bInsertCoin       : BOOL;
  bTakeProduct      : BOOL;
  bTakeCoin         : BOOL;    
END_VAR
VAR_OUTPUT
  eState            : E_States;
  nProducts         : UINT;    
END_VAR

UML state diagram

State machines can be represented very well as UML state diagrams.

Picture01

A UML state diagram describes an automaton that is in exactly one state out of a finite set of states at any point in time.

The states in a UML state diagram are represented by rectangles with rounded corners (vertices); in other diagram types they are often drawn as circles. States can perform activities that are executed, for example, when the state is entered (entry) or left (exit). With entry / n = n - 1, the variable n is decremented upon entering the state.

The arrows between the states symbolize possible state transitions. They are labeled with the events that lead to the respective state transition. A state transition takes place when the event occurs and an optional condition (guard) is fulfilled. Conditions are specified in square brackets. This allows decision trees to be implemented.

First variant: CASE statement

CASE statements are frequently used to implement state machines. The CASE statement checks each possible state. Within the respective branches for the individual states, the conditions are evaluated. If a condition is fulfilled, the action is executed and the state variable is updated. To improve readability, the state variable is often modeled as an ENUM.

TYPE E_States :
(
    eWaiting := 0,
    eHasCoin,
    eProductEjected,
    eCoinEjected
);
END_TYPE

The first variant of the state machine thus looks like this:

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

CASE eState OF
  E_States.eWaiting:
    IF (rtrigButton.Q) THEN
      ; // keep in the state
    END_IF
    IF (rtrigInsertCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
      eState := E_States.eHasCoin;
    END_IF

  E_States.eHasCoin:
    IF (rtrigButton.Q) THEN
      IF (nProducts > 0) THEN
        nProducts := nProducts - 1;
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
        eState := E_States.eProductEjected;
      ELSE
        ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
        eState := E_States.eCoinEjected;
      END_IF
    END_IF

  E_States.eProductEjected:
    IF (rtrigTakeProduct.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
      eState := E_States.eWaiting;
    END_IF

  E_States.eCoinEjected:
    IF (rtrigTakeCoin.Q) THEN
      ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
      eState := E_States.eWaiting;
    END_IF

  ELSE
    ADSLOGSTR(ADSLOG_MSGTYPE_ERROR, 'Invalid state', '');
    eState := E_States.eWaiting;
END_CASE

A short test shows that the FB does what it is supposed to do:

Picture02

But it also quickly becomes clear that larger applications cannot be implemented this way. Readability is lost completely after just a few states.

Example 1 (TwinCAT 3.1.4022) on GitHub

Second variant: State transitions in methods

The problem can be reduced by implementing all state transitions as methods.

Picture03

When a particular event occurs, the corresponding method is called.

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton             : BOOL;
  bInsertCoin         : BOOL;
  bTakeProduct        : BOOL;
  bTakeCoin           : BOOL;
END_VAR
VAR_OUTPUT
  eState              : E_States;
  nProducts           : UINT;
END_VAR
VAR
  rtrigButton         : R_TRIG;
  rtrigInsertCoin     : R_TRIG;
  rtrigTakeProduct    : R_TRIG;
  rtrigTakeCoin       : R_TRIG;
END_VAR
rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

IF (rtrigButton.Q) THEN
  THIS^.PressButton();
END_IF
IF (rtrigInsertCoin.Q) THEN
  THIS^.InsertCoin();
END_IF
IF (rtrigTakeProduct.Q) THEN
  THIS^.CustomerTakesProduct();
END_IF
IF (rtrigTakeCoin.Q) THEN
  THIS^.CustomerTakesCoin();
END_IF

Depending on the current state, the desired state transition is executed in the methods and the state variable is updated:

METHOD INTERNAL CustomerTakesCoin : BOOL
IF (THIS^.eState = E_States.eCoinEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the coin.', '');
  eState := E_States.eWaiting;
END_IF

METHOD INTERNAL CustomerTakesProduct : BOOL
IF (THIS^.eState = E_States.eProductEjected) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has taken the product.', '');
  eState := E_States.eWaiting;
END_IF

METHOD INTERNAL InsertCoin : BOOL
IF (THIS^.eState = E_States.eWaiting) THEN
  ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has insert a coin.', '');
  THIS^.eState := E_States.eHasCoin;
END_IF

METHOD INTERNAL PressButton : BOOL
IF (THIS^.eState = E_States.eHasCoin) THEN
  IF (THIS^.nProducts > 0) THEN
    THIS^.nProducts := THIS^.nProducts - 1;
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. Output product.', '');
    THIS^.eState := E_States.eProductEjected;
  ELSE                
    ADSLOGSTR(ADSLOG_MSGTYPE_HINT, 'Customer has pressed the button. No more products. Return coin.', '');
    THIS^.eState := E_States.eCoinEjected;
  END_IF
END_IF

This approach also works flawlessly. However, the state machine still resides in a single function block. Although the state transitions are moved out into methods, this remains a structured-programming approach that ignores the possibilities of object orientation. The result is source code that is still hard to extend and hard to read.

Example 2 (TwinCAT 3.1.4022) on GitHub

Third variant: The 'State' Pattern

Some OO design principles are helpful for implementing the State Pattern:

Cohesion (= the degree to which a class has a single, focused purpose) and delegation

Encapsulate each responsibility in its own object and delegate calls to these objects. One class, one responsibility!

Identify the aspects that change and separate them from those that remain constant

How should the objects be divided so that extensions to the state machine require changes in as few places as possible? So far, FB_Machine had to be modified for every extension. Especially with large state machines that several developers work on, this is a major drawback.

Let's take another look at the methods CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton(). They all have a similar structure. If statements query the current state, and the desired actions are executed. If necessary, the current state is also updated. However, this approach does not scale: every time a new state is added, several methods have to be adapted.

The State Pattern distributes the state across several objects. Each possible state is represented by an FB. These state FBs contain the entire behavior for the respective state. This way, a new state can be introduced without having to change the source code of the original function blocks.

Every action (CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin() and PressButton()) can be executed on every state. Thus all state FBs have the same interface. For this reason, an interface is introduced for all state FBs:

Picture04

FB_Machine aggregates this interface (line 9) and delegates the method calls to the respective state FBs (lines 30, 34, 38 and 42).

FUNCTION_BLOCK PUBLIC FB_Machine
VAR_INPUT
  bButton            : BOOL;
  bInsertCoin        : BOOL;
  bTakeProduct       : BOOL;
  bTakeCoin          : BOOL;
END_VAR
VAR_OUTPUT
  ipState            : I_State := fbWaitingState;
  nProducts          : UINT;
END_VAR
VAR
  fbCoinEjectedState    : FB_CoinEjectedState(THIS);
  fbHasCoinState        : FB_HasCoinState(THIS);
  fbProductEjectedState : FB_ProductEjectedState(THIS);
  fbWaitingState        : FB_WaitingState(THIS);

  rtrigButton           : R_TRIG;
  rtrigInsertCoin       : R_TRIG;
  rtrigTakeProduct      : R_TRIG;
  rtrigTakeCoin         : R_TRIG;
END_VAR

rtrigButton(CLK := bButton);
rtrigInsertCoin(CLK := bInsertCoin);
rtrigTakeProduct(CLK := bTakeProduct);
rtrigTakeCoin(CLK := bTakeCoin);

IF (rtrigButton.Q) THEN
  ipState.PressButton();
END_IF

IF (rtrigInsertCoin.Q) THEN
  ipState.InsertCoin();
END_IF

IF (rtrigTakeProduct.Q) THEN
  ipState.CustomerTakesProduct();
END_IF

IF (rtrigTakeCoin.Q) THEN
  ipState.CustomerTakesCoin();
END_IF

But how can the state be changed within the individual methods of the respective state FBs?

First, an instance of each state FB is declared within FB_Machine. Via FB_init(), a pointer to FB_Machine is passed to each state FB (lines 13 - 16).

Each of these instances can be read via a property of FB_Machine. An interface pointer to I_State is returned in each case.

Picture05

Furthermore, FB_Machine gets a method for setting the state,

METHOD INTERNAL SetState : BOOL
VAR_INPUT
  newState : I_State;
END_VAR
THIS^.ipState := newState;

as well as a method for changing the current number of products:

METHOD INTERNAL SetProducts : BOOL
VAR_INPUT
  newProducts : UINT;
END_VAR
THIS^.nProducts := newProducts;

FB_init() receives an additional input variable so that the maximum number of products can be specified at declaration.

Since the user of the state machine only needs FB_Machine and I_State, the four properties (CoinEjectedState, HasCoinState, ProductEjectedState and WaitingState), the two methods (SetState() and SetProducts()) and the four state FBs (FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState and FB_WaitingState) are declared as INTERNAL. If the FBs of the state machine are located in a compiled library, they are not visible from the outside, and they do not show up in the library repository either. The same applies to elements declared as PRIVATE. FBs, interfaces, methods and properties that are only used within a library can thus be hidden from the user of the library.

The test of the state machine is the same in all three variants:

PROGRAM MAIN
VAR
  fbMachine      : FB_Machine(3);
  sState         : STRING;
  bButton        : BOOL;
  bInsertCoin    : BOOL;
  bTakeProduct   : BOOL;
  bTakeCoin      : BOOL;
END_VAR

fbMachine(bButton := bButton,
          bInsertCoin := bInsertCoin,
          bTakeProduct := bTakeProduct,
          bTakeCoin := bTakeCoin);
sState := fbMachine.ipState.Description;

bButton := FALSE;
bInsertCoin := FALSE;
bTakeProduct := FALSE;
bTakeCoin := FALSE;

The statement in line 15 is intended to simplify testing, since a readable text is displayed for each state.

Example 3 (TwinCAT 3.1.4022) on GitHub

At first glance this variant looks quite elaborate, since considerably more FBs are required. But distributing the responsibilities across individual FBs makes this approach very flexible and much more robust for extensions.

This becomes clear when the individual state FBs grow large. A state machine could, for instance, control a complex process in which each state FB contains further sub-processes. Splitting it across several FBs is what makes such a program maintainable in the first place, especially when several developers are involved.

For very small state machines, applying the State Pattern is not necessarily the optimal choice. Personally, I also like to fall back on the solution with the CASE statement.

Alternatively, IEC 61131-3 offers Sequential Function Chart (SFC) as another way to implement state machines. But that is another story.

Definition

In the book "Design Patterns: Elements of Reusable Object-Oriented Software" by Gamma, Helm, Johnson and Vlissides, this is expressed as follows:

"Allow an object to alter its behavior when its internal state changes. The object will appear to change its class."

Implementation

A common interface (State) is defined that contains a method for each state transition (transition). For each state, a class is created that implements this interface (State1, State2, ...). Since all states thus share the same interface, they are interchangeable.

The object whose behavior should change depending on its state (Context) aggregates (encapsulates) such a state object. This object represents the current internal state (currentState) and encapsulates the state-dependent behavior. The context delegates calls to the currently set state object.

The state changes can be performed by the concrete state objects themselves. To do so, each state object needs a reference to the context (context). Furthermore, the context must offer a method for changing the state (setState()). The new state is passed to setState() as a parameter. For this purpose, the context exposes all possible states as properties.
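
Translated into a class-based language, the described structure can be sketched in a few lines of C#. This is only an illustration of the pattern, using the names from the UML diagram below:

public interface IState
{
    void Handle(Context context);
}

public class Context
{
    // currentState holds the current internal state object
    private IState currentState;

    public Context(IState initialState)
    {
        currentState = initialState;
    }

    // state objects call this to switch the context to the next state
    public void SetState(IState newState)
    {
        currentState = newState;
    }

    // the context delegates all calls to the current state object
    public void Request()
    {
        currentState.Handle(this);
    }
}

public class State1 : IState
{
    public void Handle(Context context)
    {
        // state-specific behavior would go here, then the state is switched
        context.SetState(new State2());
    }
}

public class State2 : IState
{
    public void Handle(Context context)
    {
        context.SetState(new State1());
    }
}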

UML diagram

Picture06

Applied to the example above, the mapping is as follows:

Context                   FB_Machine
State                     I_State
State1, State2, …         FB_CoinEjectedState, FB_HasCoinState, FB_ProductEjectedState, FB_WaitingState
Handle()                  CustomerTakesCoin(), CustomerTakesProduct(), InsertCoin(), PressButton()
GetState1, GetState2, …   CoinEjectedState, HasCoinState, ProductEjectedState, WaitingState
currentState              ipState
setState()                SetState()
context                   pMachine

Application examples

A TCP communication stack is a good example of the use of the State Pattern. Each state of a connection socket can be represented by a corresponding state class (TCPOpen, TCPClosed, TCPListen, …). Each of these classes implements the same interface (TCPState). The context (TCPConnection) holds the current state object. All actions are passed via this state object to the respective state class, which processes them and switches to a new state if necessary.

Text parsers are also state-based: the meaning of a character usually depends on the characters read before it.

Christian Binder [MS]: New role at Microsoft

After refreshing and reforming the team of the Microsoft Technology Center in Munich, which caused some silence on this blog 🙁, I decided to move to a more engineering-focused unit at Microsoft, the Commercial Software Engineering group in EMEA, and I will continue working on Azure DevOps 🙂

Golo Roden: Comparisons in JavaScript: == or ===?

JavaScript has two comparison operators, == and ===. The first is not type-safe, the second very much is, which is why you should always use the second one.

Code-Inside Blog: Migrate a .NET library to .NET Core / .NET Standard 2.0

I have a small spare time project called Sloader and I recently moved the code base to .NET Standard 2.0. This blogpost covers how I moved this library to .NET Standard.

Uhmmm… wait… what is .NET Standard?

If you have been living under a rock in the past year: .NET Standard is a kind of “contract” that allows the library to run under all .NET implementations like the full .NET Framework or .NET Core. But hold on: The library might also run under Unity, Xamarin and Mono (and future .NET implementations that support this contract - that’s why it is called “Standard”). So - in general: This is a great thing!

Sloader - before .NET Standard

Back to my spare time project:

Sloader consists of three projects (Config/Result/Engine) and targeted the full .NET Framework. All projects were typical library projects. All components were tested with xUnit and built via Cake. The configuration uses YAML and the main work is done via the HttpClient.

To summarize: the library is a not-too-trivial example, but in general it has pretty low requirements.

Sloader - moving to .NET Standard 2.0

The blogpost from Daniel Crabtree, “Upgrading to .NET Core and .NET Standard Made Easy”, was a great resource, and if you want to migrate you should check out his blogpost.

The best advice from the blogpost: Just create new .NET Standard projects and xcopy your files to the new projects.

To migrate the projects to .NET Standard I really just needed to delete the old .csproj files and copy everything into new .NET Standard library projects.

After some fine-tuning and NuGet package reference updates, everything compiled.

This GitHub PR shows the result of the migration.

Problems & Aftermath

In my library I still used the old way to access configuration via the ConfigurationManager class (referenced via the official NuGet package). This API is not supported on every platform (e.g. Azure Functions), so I needed to tweak those code parts to use environment variables via System.Environment (this is OK in my example, but there are other options as well).
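
Such a change roughly looks like this; the key name is just an example:

// before: reading settings from app.config via the ConfigurationManager NuGet package
var setting = ConfigurationManager.AppSettings["MySetting"];

// after: reading the same setting from an environment variable
var setting = Environment.GetEnvironmentVariable("MySetting");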

Everything else “just worked” and it was a great experience. I tried the same thing with .NET Core 1.0 and it failed horribly, but this time the migration was more or less painless.

.NET Portability Analyzer

If you are not sure if your code works under .NET Standard or Core just install the .NET Portability Analyzer.

This handy tool will give you an overview of which parts might run without problems under .NET Standard or .NET Core.

.NET Standard 2.0 and .NET Framework

If you are still targeting the full Framework, make sure you use at least .NET Framework version 4.7.2. In theory .NET Standard 2.0 was supposed to work under .NET 4.6.1, but it seems that this didn't end too well.

Hope this helps and encourages you to try a migration to a more modern stack!

Holger Schwichtenberg: Performance improvements in ASP.NET Core

ASP.NET Core MVC and ASP.NET Core Web API are 12 to 24 percent more performant than their ASP.NET predecessors, as measurements by the Dotnet-Doktor show.

Code-Inside Blog: Improving Code


TL;DR;

Things I learned:

  • long one-liners are hard to read and understand
  • split up your code into small, easy to understand functions
  • less “plumbing” (read: infrastructure code) is better
  • get indentation right
  • “Make it correct, make it clear, make it concise, make it fast. In that order.” Wes Dyer

Why should I bother?

Readable code is:

  • easier to debug
  • fast to fix
  • easier to maintain

The problem

Recently I wanted to implement an algorithm for a project we are doing. The goal was to create a so-called “Balanced Latin Square”; we used it to prevent ordering effects in user studies. You can find a little bit of background here and a nice description of the algorithm here.

It’s fairly simple, although it is not obvious how it works, just by looking at the code. The function takes an integer as an argument and returns a Balanced Latin Square. For example, a “4” would return this matrix of numbers:

1 2 4 3 
2 3 1 4 
3 4 2 1 
4 1 3 2 

And there is a little twist: if your number is odd, then you need to reverse every row and append the reversed rows to your result.

After I created my implementation, I had an idea on how to simplify it. At least I thought it was simpler ;)

First attempt - Loops

Based on the description and a Python version of that algorithm, I created a classical (read “imperative”) implementation.

So this is the C# Code:

public List<List<String>> BalancedLatinSquares(int n)
{
    var result = new List<List<String>>() { };
    for (int i = 0; i < n; i++)
    {
        var row = new List<String>();
        for (int j = 0; j < n; j++)
        {
            var cell = ((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n;
            cell++; // start counting from 1
            row.Add(cell.ToString());
        }
        result.Add(row);
    }
    if (n % 2 == 1)
    {
        var reversedResult = result.Select(x => x.AsQueryable().Reverse().ToList()).ToList();                
        result.AddRange(reversedResult);
    }
    return result;
}

I also wrote some simple unit tests to ensure this works. But in the end, I really didn't like this code. It contains two nested loops and a lot of plumbing code. There are four lines alone just to create the result object (a list) and to add the values to it. Recently I looked into functional programming, and since C# also has some functional-inspired features, I tried to improve this code with some functional goodness :)

Second attempt - Lambda Expressions

public List<List<String>> BalancedLatinSquares(int n)
{
    var result = Enumerable.Range(0, n)
        .Select(i =>
                Enumerable.Range(0, n).Select(j => ((((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n) + 1).ToString()).ToList()
            )
        .ToList();

    if (n % 2 == 1)
    {
        var reversedResult = result.Select(x => x.AsQueryable().Reverse().ToList()).ToList();
        result.AddRange(reversedResult);
    }

    return result;
}

This is the result of my attempt to use some functional features. And hey, it is much shorter, therefore it must be better, right? Well, I posted a screenshot of both versions on Twitter and asked which one people prefer. As it turned out, a lot of folks actually preferred the loop version. But why? Looking back at my code, I saw two problems with this line:

Enumerable.Range(0, n).Select(j => ((((j % 2 == 1 ? j / 2 + 1 : n - j / 2) + i) % n)+1).ToString()).ToList()

  • I squeezed a lot of code into this one-liner. This makes it harder to read and therefore harder to understand.
  • Another issue is that I omitted descriptive variable names since they are not needed anymore. Oh, and I removed the only comment I wrote, since the comment would not fit in the one line of code :)

So, shorter is not always better.

Third attempt - better Lambda Expressions

The smart folks on Twitter had some great ideas about how to improve my code.

The first step was to get rid of the unholy one-liner. You can, and should, always split up your code into smaller, meaningful code blocks. I pulled out the calculateCell function, and out of that I also extracted an isEven function. The nice thing is that the function names also work as a kind of documentation about what's going on.

By returning IEnumerable instead of lists, I was able to remove some .ToList() calls. Also, I was able to shorten the code that creates the reversedResult.

Another simple step to improve readability is to get line indentation right. Personally, I don’t care which indentation style people are using, as long as it’s used consistently.

public static IEnumerable<IEnumerable<int>> GenerateBalancedLatinSquares(int n)
{
    bool isEven (int i) => i % 2 == 0;        
    int calculateCell(int j, int i) =>((isEven(j) ? n - j / 2 : j / 2 + 1) + i) % n + 1;
    
    var result = Enumerable
                    .Range(0, n)
                    .Select(row =>
                        Enumerable
                            .Range(0, n)
                            .Select(col =>calculateCell(col,row))
                    );     
    
    if (!isEven(n))
    {
        var reversedResult = result.Select(x => x.Reverse());
        result = result.Concat(reversedResult);
    }
    return result;
}

I think there is room for further improvement. For the calculateCell function I am using the ?: conditional operator; it allows you to write very compact code, but on the other hand it's also harder to read. If you replaced it with an if statement you would need more lines of code, but you would also have more space to add comments, as sketched below. Functional languages like Scala, F#, and Haskell provide a neat match expression that could help here.
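
For illustration, calculateCell rewritten with an if statement could look like this:

int calculateCell(int j, int i)
{
    int baseValue;
    if (isEven(j))
    {
        // even columns count down from n
        baseValue = n - j / 2;
    }
    else
    {
        // odd columns count up from 1
        baseValue = j / 2 + 1;
    }
    // shift by the row index and start counting from 1
    return (baseValue + i) % n + 1;
}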

Extra: How does this algorithm look in other languages:

Python

def balanced_latin_squares(n):
    # note: integer division (//) so this also works under Python 3
    l = [[((j // 2 + 1 if j % 2 else n - j // 2) + i) % n + 1 for j in range(n)] for i in range(n)]
    if n % 2:  # Repeat reversed for odd n
        l += [seq[::-1] for seq in l]
    return l

I took this sample from Paul Grau.

Haskell

Thank you Carsten

Holger Schwichtenberg: Welche Dateisystem- und Druckerfreigaben gibt es und wer kann sie nutzen?

Eine Dateisystem- oder Druckerfreigabe ist schnell angelegt und auch schnell vergessen. Um sicher zu gehen, dass es keine verwaisten Freigaben oder Freigaben mit zu viel Zugriffsrechten gibt, hat der Dotnet-Doktor ein PowerShell-Skript geschrieben.

Code-Inside Blog: Easy way to copy a SQL database with Microsoft SQL Server Management Studio (SSMS)

How to copy a database on the same SQL server

The scenario is pretty simple: We just want a copy of our database, with all the data and the complete scheme and permissions.

1. step: Make a backup of your source database

Click on the desired database and choose “Backup” under tasks.


2. step: Use copy-only or a full backup

In the dialog you may choose a “copy-only” backup. With this option the regular backup chain will not be disturbed.


3. step: Use “Restore” to create a new database

This is the most important point here: to avoid fighting against database file namings, use the “restore” option. Don't create a database manually; this is part of the restore operation.


4. step: Choose the copy-only backup and choose a new name

In this dialog you can name the “copy” database and choose the copy-only backup from the source database.


Now click ok and you are done!

Behind the scenes

This restore operation works way better for copying a database than overwriting an existing database, because the restore operation will adjust the filenames.


Further information

I’m not a DBA, but when I follow these steps I normally have nothing to worry about if I want a 1:1 copy of a database. This can also be scripted, but then you may need to worry about filenames.

This stackoverflow question is full of great answers!

Hope this helps!

Golo Roden: An enum for JavaScript

JavaScript has no enum data type. An object whose properties are assigned numeric values provides an easy workaround. In fact, however, this is hardly better than using hard-coded strings.

Golo Roden: No bind for lambda expressions

Lambda expressions simplify the handling of this in JavaScript, but come with some peculiarities that may be unexpected. One of them is the missing ability to rebind the this value of a lambda expression.

Jürgen Gutsch: Live streaming ideas

With this post, I'd like to share some ideas about two live streaming shows with you. It would be cool to get some feedback, especially from the German-speaking readers. The first idea is about a German-speaking .NET Developer Community Standup, and the second one is about a live coding stream (English or German), both hosted on Google Hangouts.

A German speaking .NET Developer Community Standup

Since the beginning of the ASP.NET Community Standup, I have watched this show more or less regularly. I think I missed only two or three shows. Because of the different time zone it is almost impossible to watch the live stream. Anyway, I really like the format of that show.

Also, for a few years now the number of user group attendees has been decreasing. In my user group sometimes only two or three attendees show up, even if we have a lot more registrations via Meetup. We (Olivier Giss and I) have kinda fun hosting the user group, but it is also hard to put much effort into it for just a handful of loyal attendees. For a while now we have been recording the sessions using Skype for Business or Google Hangouts and pushing them to YouTube. This gives some more folks the chance to see the talks. We thought a lot about the reasons and tried to change some things to get more attendees, but that didn't really work.

This is the reason why I have been thinking out loud about a .NET Developer Community Standup for the German-speaking region (Germany, Austria and Switzerland) for months.

I'd like to find two more people to join the team to host the show. It would be cool to have a person from Austria as well as from Switzerland. Since I'm a Swiss MVP, I could also take over the Swiss part on their behalf ;-) In that case I would like to have another person from Germany. One host per country would be cool.

Three hosts is a nice number, and it wouldn't be necessary for all hosts to be available every time we do a live stream. Anyone interested in joining the team?

To keep it simple I'd also use Google Hangouts to stream the show, and it is not necessary to have high-end streaming equipment. A good headset and a good internet connection should be enough.

In the show I would like to go through some interesting community and technology news, talk about some random stuff, and I'd also like to invite special guests who can show us things they did or who would like to talk about special topics. This should be a laid-back show about interesting technology and community stuff. I'd also like to give community leads the chance to talk about their work and their events.

What do you think about that? Are you interested?

If yes, I would set up a GitHub repo to collect ideas and topics to talk about.

Live Coding via Live Stream on Google Hangouts

Another idea is inspired by Jeff Fritz's live stream on Twitch called "Fritz and Friends". The recorded streams are published to YouTube afterwards. I really like this live stream, even if it's a completely different kind of video to watch. Jeff is permanently in discussion with the users in the chat while working on his projects. This is kinda weird and makes the show a little nervous, but it is also really interesting. The really cool thing is that he accepts pull requests from his audience and discusses their changes with the audience while working on his project.

I would do such a live stream as well; there are a few projects I would like to work on:

  • LightCore 2.0
    • An alternative DI container for .NET and .NET Core projects
    • Almost done, but needs to be finalized.
    • Maybe you folks want to add more features or some optimizations
  • Working on the GraphQL middleware for ASP.NET Core
  • Working on health checks for ASP.NET and ASP.NET Core
    • Including a health check application provided in the same way IdentityServer is provided to ASP.NET projects: mainly as a single but extendable library and an optional UI to visualize the health of the connected services.
  • Working on a developer community platform like the portal Microsoft planned to release last year?
    • Unfortunately Microsoft retired that project. It would make more sense anyway if this project were built and hosted by the community itself.
    • So this would be a great way to create such a developer community platform

Maybe it also makes sense to invite a special guest to talk about specific topics while working on the project, e.g. inviting Dominick Baier to implement authentication in the developer community platform.

What if I do the same thing? Are you interested? What would be the best language for that kind of live stream?

If you are interested, I would also set up a GitHub repo to collect ideas and topics to talk about, and I would set up additional repos per project.

What do you think?

Do you like these ideas? Do you have any other idea? Please drop me a comment and share your thoughts :-)

Golo Roden: Enumerating objects in JavaScript

JavaScript has no forEach loop for objects. However, using modern language features such as the Object.entries function and the for-of loop, such a loop can be rebuilt easily and elegantly.

Uli Armbruster: Microservices don't like thinking in classic entities

I recently had a very special aha moment in a workshop with Udi Dahan, CEO of Particular. His example revolved around the classic entity of a customer.

Implementing microservices means cutting business concerns cleanly and packing them into self-contained silos (or pillars). Each silo must have sovereignty over its own data, on which it implements the associated business processes. So far, so good. But how can this be accomplished in the case of a customer that is classically modeled as shown in the screenshot? Different properties are needed or changed by different microservices.

If the same entity is used in all silos, there has to be corresponding synchronization between the microservices. This has considerable effects on scalability and performance. In an application with frequent parallel changes to an entity, failing business processes will increase, or in the worst case lead to inconsistencies.

Classic customer entity

Udi suggests the following modeling:

New modeling of a customer

The customer is modeled through independent entities

To identify which data belongs together, Udi suggests an interesting approach:

Ask the business department whether changing one property affects another property.

Would changing the last name have an influence on the price calculation? Or on the type of marketing?
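
A rough sketch of the resulting entities; all type and property names here are illustrative:

// Each service owns only the data it needs; the services share nothing but the ID.
public class CustomerMasterData        // owned by the customer master service
{
    public Guid CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerMarketing         // owned by the marketing service
{
    public Guid CustomerId { get; set; }
    public string PreferredChannel { get; set; }
}

public class CustomerPriceCalculation  // owned by the price calculation service
{
    public Guid CustomerId { get; set; }
    public decimal Discount { get; set; }
}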

Now the problem of aggregation still has to be solved, i.e. when I want to display different data from different microservices in one view. Classically, there would now be a table with the columns

ID_Kunde ID_Kundenstamm ID_Bestandskundenmarketing ID_Preiskalkulation

besitzt. Das führt aber zu 2 Problemen:

  1. Die Tabelle muss immer erweitert werden, wenn ein neuer Microservices hinzugefügt wird.
  2. Sofern ein Microservices die gleiche Funktionalität in Form unterschiedlicher Daten abdeckt, müssten pro Microservices mehrere Spalten hinzugefügt und NULL Werte zugelassen werden.

Ein Beispiel für Punkt 2 wäre ein Microservices, der das Thema Bezahlmethoden abdeckt. Anfangs gab es beispielsweise nur Kreditkarte und Kontoeinzug. Dann folgte Paypal. Und kurze Zeit später dann Bitcoin. Der Microservices hätte hierzu mehrere Tabellen, wo er die individuelle Daten für die jeweilige Bezahlmethode halten würde. In oben gezeigter Aggregationstabelle müsste aber für jede Bezahlmethode, die der Kunde nutzt, eine Spalte gefüllt werden. Wenn er sie nicht benutzt, würde NULL geschrieben werden. Man merkt schon: Das stinkt.

A different approach is much better suited here. Which one that is, and how it can be realized technically, you can look up in the GitHub repository of Particular.

Golo Roden: An asynchronous 'map' for JavaScript

The 'map' function in JavaScript always works synchronously and has no asynchronous counterpart. However, since 'async' functions are transformed by the compiler into synchronous functions that return promises, 'map' can be combined with 'Promise.all' to achieve the desired effect.

Jürgen Gutsch: Configuring HTTPS in ASP.NET Core 2.1

Finally HTTPS gets into ASP.NET Core. It was already there back in 1.1, but it was kinda tricky to configure. It was available in 2.0, but not configured by default. Now it is part of the default configuration and pretty visible and present to every developer who creates a new ASP.NET Core 2.1 project.

So the title of this blog post is pretty much misleading, because you don't need to configure HTTPS: it already is configured. So let's have a look at how it is configured and how it can be customized. First, create a new ASP.NET Core 2.1 web application.

Did you already install the latest .NET Core SDK? If not, go to https://dot.net/ to download and install the latest version for your platform.

Open a console and cd to your favorite location to play around with new projects. In my case it is C:\git\aspnet\.

mkdir HttpSecureWeb && cd HttpSecureWeb
dotnet new mvc -n HttpSecureWeb -o HttpSecureWeb
dotnet run

These commands will create and run a new application called HttpSecureWeb. And you will see HTTPS for the first time in the console output of a newly created ASP.NET Core 2.1 application:

There are two different URLs Kestrel is listening on: https://localhost:5001 and http://localhost:5000

If you go to the Configure method in the Startup.cs, there are some new middlewares used to prepare this web app to use HTTPS:

In the Production and Staging environments, this middleware is added:

app.UseHsts();

This enables HSTS (HTTP Strict Transport Security), a security mechanism that helps to avoid man-in-the-middle attacks. It tells the browser to access the specific host only via HTTPS for a specific time range. If the browser is asked to use plain HTTP for that host before the time range ends, something is wrong with the page. (More about HSTS)

The next new middleware redirects all requests without HTTPS to use the HTTPS version:

app.UseHttpsRedirection();

If you call http://localhost:5000, you get redirected immediately to https://localhost:5001. This makes sense if you want to enforce HTTPS.

So from the ASP.NET Core perspective, everything is done to run the web app using HTTPS. Unfortunately, the certificate is missing. For production you need to buy a valid trusted certificate and install it in the Windows certificate store. For development, you are able to create a development certificate using Visual Studio 2017 or the .NET CLI. VS 2017 creates a certificate for you automatically.

Using the .NET CLI tool "dev-certs" you are able to manage your development certificates, like exporting them, cleaning all development certificates, trusting the current one and so on. Just type the following command to get more detailed information:

dotnet dev-certs https --help

On my machine, I trusted the development certificate so that I don't get the ugly error screen in the browser about an untrusted certificate and an insecure connection every time I want to debug an ASP.NET Core application. This works quite well:

dotnet dev-certs https --trust

This command trusts the development certificate by adding it to the certificate store on Windows or to the keychain on Mac.

On Windows you should use the certificate store to register HTTPS certificates. This is the most secure way on Windows machines. But I also like the idea of storing the password-protected certificate directly in the web folder or somewhere on the web server. This makes it pretty easy to deploy the application to different platforms, because Linux and Mac use different ways to store certificates. Fortunately, there is a way in ASP.NET Core to create an HTTPS connection using a certificate file stored on the hard drive. ASP.NET Core is completely customizable. If you want to replace the default certificate handling, feel free to do it.

To change the default handling, open the Program.cs and take a quick look at the code, especially to the method CreateWebHostBuilder:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
    .UseStartup<Startup>();

This method creates the default WebHostBuilder. This has a lot of stuff preconfigured, which works great in most scenarios. But it is possible to override all of the default settings here and to replace them with custom configurations. We need to tell the Kestrel web server which host and port it needs to listen on, and we are able to configure the ListenOptions for specific ports. In these ListenOptions we can use HTTPS and pass in the certificate file and a password for that file:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(options =>
        {
            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps("certificate.pfx", "topsecret");
            });
        })
        .UseStartup<Startup>();

Usually we would read these values from a configuration file or from environment variables instead of hardcoding them.
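
As a rough sketch, we could use the UseKestrel overload that also passes in the WebHostBuilderContext, which gives access to the current configuration. Note that the configuration section HttpsSettings and its keys are made up for this example:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel((builderContext, options) =>
        {
            // hypothetical configuration keys, e.g. filled via user secrets
            // or environment variables instead of a settings file
            var certificateFile = builderContext.Configuration["HttpsSettings:CertificateFile"];
            var certificatePassword = builderContext.Configuration["HttpsSettings:CertificatePassword"];

            options.Listen(IPAddress.Loopback, 5000);
            options.Listen(IPAddress.Loopback, 5001, listenOptions =>
            {
                listenOptions.UseHttps(certificateFile, certificatePassword);
            });
        })
        .UseStartup<Startup>();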

Be sure the certificate is password protected using a long password or, even better, a passphrase. Also be sure not to store the password or the passphrase in a configuration file. In development you should use the user secrets to store such secret data, and in production the Azure Key Vault could be an option.

Conclusion

I hope this helps to give you a rough overview of the usage of HTTPS in ASP.NET Core. This is not really a deep dive, but it tries to explain what the new middlewares are good for and how to configure HTTPS for different platforms.

BTW: In the blog post about the HTTPS improvements and HSTS in ASP.NET Core, I just saw that there is a way to store the HTTPS configuration in the launchSettings.json. This is an easy way to pass environment variables to the application on startup. The samples also show how to add the certificate password to this settings file. Please never ever do this! A file like this is easily shared to a source code repository or in some other way, and the password inside is then shared as well. Please use different mechanisms to set passwords in an application, like the already mentioned user secrets or the Azure Key Vault.

Uli Armbruster: Sources for Defensive Design and Separation of Concerns are now online

The source code for my talks at the Karlsruher Entwicklertage and the DWX is now online:

  • The Super Mario kata with a focus on defensive design can be found here.
  • The checksum kata with a focus on Separation of Concerns can be found here.

On both pages you will also find the links to the PowerPoint slides. In July 2018 I will additionally publish the code of the checksum kata in the form of iterations and release both talks as YouTube videos.

Defensive design talk at the DWX

If you want to use parts of it in your own talks, if you want a training on the topic, or if you would like me to give a talk about it in your community, contact me via the channels listed on GitHub.

Stefan Henneken: IEC 61131-3: The generic data type T_Arg

In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be effectively used. In the example described, a function compares two variables to determine whether the data type, data length and content are exactly the same. Instead of implementing a separate function for each data type, the same requirements can be implemented much more elegantly with only one function using data type ANY.

Some time ago, I had a similar task. A method should be developed that accepts any number of parameters. Both the data type and the number of the parameters were arbitrary.

During my first attempt to find a solution, I tried to use a variable-length array of type ARRAY [*] OF ANY. However, variable-length arrays can only be used as VAR_IN_OUT and the data type ANY only as VAR_INPUT (see also IEC 61131-3: Arrays with variable length). This approach was therefore ruled out.

As an alternative to data type ANY, structure T_Arg is also available. T_Arg is declared in the TwinCAT library Tc2_Utilities and, in contrast to ANY, is also available at TwinCAT 2. The structure of T_Arg is similar to the structure used for the data type ANY (see also The wonders of ANY).

TYPE T_Arg :
STRUCT
  eType   : E_ArgType   := ARGTYPE_UNKNOWN; (* Argument data type *)
  cbLen   : UDINT       := 0;               (* Argument data byte length *)
  pData   : UDINT       := 0;               (* Pointer to argument data *)
END_STRUCT
END_TYPE

T_Arg can be used anywhere, including in the VAR_IN_OUT section.

The following function adds any number of values whose data types can also be arbitrary. The result is returned as LREAL.

FUNCTION F_AddMulti : LREAL
VAR_IN_OUT
  aArgs : ARRAY [*] OF T_Arg;
END_VAR
VAR
  nIndex : DINT;
  aUSINT : USINT;
  aUINT  : UINT;
  aINT   : INT;
  aDINT  : DINT;
  aREAL  : REAL;
  aLREAL : LREAL;
END_VAR

F_AddMulti := 0.0;
FOR nIndex := LOWER_BOUND(aArgs, 1) TO UPPER_BOUND(aArgs, 1) DO
  CASE (aArgs[nIndex].eType) OF
    E_ArgType.ARGTYPE_USINT:
      MEMCPY(ADR(aUSINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUSINT;
    E_ArgType.ARGTYPE_UINT:
      MEMCPY(ADR(aUINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aUINT;
    E_ArgType.ARGTYPE_INT:
      MEMCPY(ADR(aINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aINT;
    E_ArgType.ARGTYPE_DINT:
      MEMCPY(ADR(aDINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aDINT;
    E_ArgType.ARGTYPE_REAL:
      MEMCPY(ADR(aREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aREAL;
    E_ArgType.ARGTYPE_LREAL:
      MEMCPY(ADR(aLREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
      F_AddMulti := F_AddMulti + aLREAL;
  END_CASE
END_FOR

However, calling the function is somewhat more complicated than with the data type ANY.

PROGRAM MAIN
VAR
  sum    : LREAL;
  args   : ARRAY [1..4] OF T_Arg;
  a      : INT := 4567;
  b      : REAL := 3.1415;
  c      : DINT := 7032345;
  d      : USINT := 13;
END_VAR

args[1] := F_INT(a);
args[2] := F_REAL(b);
args[3] := F_DINT(c);
args[4] := F_USINT(d);
sum := F_AddMulti(args);

The array passed to the function must be initialized first. The library Tc2_Utilities contains helper functions that convert a variable into a structure of type T_Arg (F_INT(), F_REAL(), F_DINT(), …). The function for adding the values has only one input variable of type ARRAY [*] OF T_Arg.

The data type T_Arg is used, for example, in the function block FB_FormatString() or in the function F_FormatArgToStr() of TwinCAT. The function block FB_FormatString() can replace up to 10 placeholders in a string with values of PLC variables of type T_Arg (similar to fprintf in C).

An advantage of ANY is the fact that the data type is defined by the IEC 61131-3 standard.

Even if the generic data types ANY and T_Arg do not correspond to the generics in C# or the templates in C++, they still support the development of generic functions in IEC 61131-3. These can now be designed in such a way that the same function can be used for different data types and data structures.

David Tielke: #DWX2018 - Contents of my sessions, workshops and TV and radio interviews

From June 25 to 28, the Developer Week 2018 took place in Nuremberg again this year. For four days, the NCC Ost of the Nuremberg exhibition center opened its doors to welcome thousands of knowledge-hungry developers, who could educate themselves in sessions and workshops on everything around software development. As every year, I was responsible as track chair for the content of the two tracks software quality and software architectures. Besides shaping the program, I was also allowed to become active myself and pass on my knowledge to the attendees in a total of four sessions, one evening event and one workshop.

Here are my contributions at a glance:

  • Session: Effective architectures with workflows
  • Session: Architecture for practice 2.0 (substitute talk)
  • Session: Metrics - how good is your software?
  • Session: Testing Everything
  • Evening event: SmartHome - the house of the future!
  • Workshop: Architecture for practice 2.0

TV interview BR / ARD:

In connection with my talk on the topic "Smarthome" on Monday evening, I was interviewed about it in advance by various media such as BR, ARD, Nürnberger Nachrichten and Radio Gong. The BR's report is still available in their media library.

Contents of my workshops and sessions

As discussed with the attendees in my sessions, I am now providing all relevant materials of my sessions here. These contents include my code samples from Visual Studio, my notes from OneNote as PDF and, above all, my articles from my dotnetpro column "Davids Deep Dive" on this topic:

The password for both areas was announced at the conference and can alternatively be requested from me by email.

See you at the Developer Week 2019!

Next year, the Developer Week will open its doors again and I am already looking forward to it! I would like to thank all attendees of my sessions for the great atmosphere and the interesting discussions; once again it was great fun and, as always, an honor. A huge thank you also goes to the organizer Developer Media, who once again put together an even better event than the year before. See you next year!

Golo Roden: Shorthand syntax for the console

The shorthand syntax of ES2015 allows the simplified definition of objects whose values correspond to variables of the same name. This syntax can be used for console output to get more readable and traceable output.

Jürgen Gutsch: Four times in a row

One year later, it is July 1st again and I got the email from the Global MVP Administrator: I received the MVP award for the fourth time in a row :)

I'm pretty proud and honored about that, and I'm really happy to be part of the great MVP community for one more year. I'm also looking forward to the Global MVP Summit next year to meet all the other MVPs from around the world.

Still not really a fan-boy...!?

I'm also proud of being an MVP, because I never called myself a Microsoft fan-boy. And sometimes I also criticize some tools and platforms built by Microsoft (I feel like a bad boy). But I like most of the development tools built by Microsoft, I like to use those tools and frameworks, and I really like the new and open Microsoft: the way Microsoft now supports more than its own technologies and platforms. I like using VSCode, TypeScript and Webpack to create NodeJS applications. I like VSCode and .NET Core on Linux to build applications on a different platform than Windows. I also like to play around with UWP apps on Windows IoT on a Raspberry Pi.

There are many more possibilities, many more platforms and many more customers to reach using the current Microsoft development stack. And it is really fun to play with it, to use it in real projects, to write about it in .NET magazines and in this blog, and to talk about it in user groups and at conferences.

In the last year of being an MVP, I also learned that it is kinda fun to contribute to Microsoft's open source projects, to be a part of those projects and to see my own work in them. If you like open source as well, contribute to the open source projects. Make the projects better, make the documentation better.

I also need to say Thanks

But I wouldn't have been honored again without this great development community. I wouldn't continue to contribute to the community without the positive feedback and without all the great people. This is why the biggest "Thank You" goes to the development community :)

And like last year, I also need to say "Thank You" to my great family (my lovely wife and my three kids), who support me in spending so much time contributing to the community. I also need to say thanks to the YooApplications AG, my colleagues and my boss for supporting me and allowing me to use part of my working time to contribute to the community.

Golo Roden: How to pad numbers in JavaScript

Since ES2017, JavaScript has the two new functions padStart and padEnd for padding strings from the left and from the right, respectively. To make them work with numbers, those have to be converted into strings first using toString.

Holger Schwichtenberg: Microsoft announces plans for ASP.NET Core and Entity Framework Core 2.2

Yesterday on GitHub, Microsoft announced both the schedule and the content plans for version 2.2 of ASP.NET Core and Entity Framework Core.

Golo Roden: How to parse content types

The npm module content-type provides a parse function that analyzes and decomposes Content-Type headers in an RFC-compliant way.

Uli Armbruster: Causal chains: reasons for the failure of software projects

On this GitHub page I have started to analyze the problems in software projects that we encounter every day in more detail, in order to be able to avoid or fix them.

In conversations with attendees of my workshops, I am regularly told about symptoms where, in my opinion, the cause, as so often, lies deeper.

I have titled this "causal chains" and follow this scheme:

  • What is the perceived problem, i.e. the symptom
  • How did it come about, i.e. the course of events
  • Why did it come about, i.e. what is the cause

In addition, I am looking for comprehensible examples to back up the theory.

I see the page as an incentive to reflect. All "theses" are meant as the starting point for a lively discussion.

Feel free to give me feedback in the form of pull requests.

Uli Armbruster: Date for nossued 2019

This year, #nossued took place before the start of the summer holidays in most German states. The feedback from recent years was that some people could not come because of the school holidays.

Besides taking the staggered holidays in the individual states into account, established events such as the dotnet Cologne of course also have to be considered.

Some people struggle with June/July because of the football tournaments taking place every two years, or because they prefer to use the warm summer months for leisure activities. Others, in turn, appreciate exactly that: the bright sun and the possibility to use the roof terrace.

Therefore, we as the organizers would like to know what you think about the possible dates for 2019. Earlier dates would also be conceivable, such as the periods from March 8 to April 4 and from April 27 to May 31. Where does the community stand on this?

We would appreciate your feedback, e.g. in the form of naming good periods or less suitable months.

Christian Dennig [MS]: Open Service Broker for Azure (OSBA) with Azure Kubernetes Service (AKS)

In case you missed it, the Azure Managed Kubernetes Service (AKS) has been released today (June 13th, 2018, hoooooray 🙂 see the official announcement from Brendan Burns here) and it is now possible to run production workloads on a fully Microsoft-managed Kubernetes cluster in the Azure cloud. “Fully managed” means that the K8s control plane and the worker nodes (infrastructure) are managed by Microsoft (API server, Docker runtime, scheduler, etcd server…), security patches are applied on a daily basis to the underlying OS, you get Azure Active Directory integration (currently in preview) etc. And what’s really nice: you only pay for the worker nodes, the control plane is completely free!

The integration of Kubernetes into the Azure infrastructure is really impressive, but when it comes to service integration and provisioning on the cloud platform there is still room for improvement…but it’s on its way! The Open Service Broker for Azure (update: version 1.0 reached) closes the gap between Kubernetes workloads that require certain Azure services to run and the provisioning of these services, as it makes it possible e.g. to create a SQL server instance in Azure via a Kubernetes YAML file on the fly during the deployment of other Kubernetes objects. Sounds good? Let’s see how this works.

Creating a Kubernetes Demo Cluster

First of all, we need a Kubernetes cluster to be able to test the Open Service Broker for Azure – we are going to use Azure CLI, therefore please make sure you have installed the latest version of it.

Okay, so let’s create an Azure resource group where we can deploy the AKS cluster to afterwards:

# resource group
az group create --name osba-demo-rg --location westeurope

# AKS cluster - version must be above 1.9 (!)
az aks create `
        --resource-group osba-demo-rg `
        --name osba-k8sdemo-cluster `
        --generate-ssh-keys `
        --kubernetes-version 1.9.6

When the deployment of the cluster has finished, download the corresponding kubeconfig file:

az aks get-credentials `
        --resource-group osba-demo-rg `
        --name osba-k8sdemo-cluster

Now we are ready to use kubectl to work with the newly created cluster. Test the connection by querying the available worker nodes of the cluster:

kubectl get nodes

You should see something like this:

nodes

Before we can install the Open Service Broker, we also need a service principal in Azure that is able to interact with the Azure Resource Manager and create resources on our behalf (think of it as a “service account” in Linux / Windows).

az ad sp create-for-rbac --name osba-demo-principal -o table

Important: Remember “Tenant”, “Application ID” and “Password”, as you will need these values when installing OSBA now.

Installing OSBA

Cluster Installation

We are using Helm to install OSBA on our cluster, so we first need to prepare our local machine for Helm (FYI: your AKS cluster is by default ready to use Helm, so there's no need to install anything on it; you only need to install the Helm client on your workstation):

helm init

Next, we need to deploy the Service Catalog on the cluster:

helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

helm install svc-cat/catalog --name catalog --namespace catalog \
   --set rbacEnable=false \
   --set apiserver.storage.etcd.persistence.enabled=true

Please note: if you have installed your cluster with the option “RBAC enabled” (which has become the default for AKS deployments, unless you disable it), you will have to set rbacEnable=true! Otherwise the service catalog apiserver container will fail to start.

Now we are ready to deploy OSBA to the cluster:

# add the Azure charts repository
helm repo add azure https://kubernetescharts.blob.core.windows.net/azure

# finally, add the service broker for Azure
helm install azure/open-service-broker-azure --name osba --namespace osba `
  --set azure.subscriptionId=<Your Subscription ID> `
  --set azure.tenantId=<Tenant> `
  --set azure.clientId=<Service Principal Application ID> `
  --set azure.clientSecret=<Service Principal Password>

Info: In case you don’t know your Azure subscription Id, run…

az account show

…and use the value of property “id”.

You can check the status of the deployments (catalog & service broker) by querying the running pods in the namespaces catalog and osba.

pods

Service Catalog Client Tools

Service Catalog comes with its own command line interface. So you need to install it on your machine (installation instructions).

Using the OSBA for Service Provisioning

Now, we are prepared to provision / create so-called “ServiceInstances” (Azure resources) and “bind” them via “ServiceBindings” in order to be able to use them as resources/endpoints/services etc. in our pods.

In the current example, we want to provision an Azure SQL DB. So first of all, we need to create a service instance of the database. Therefore, use the following YAML definition:
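
The embedded file from the original post is not reproduced here; a minimal sketch of such a service-instance.yaml against the Service Catalog API could look like this (the name osba-demo-db and the parameter values are just examples):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: osba-demo-db              # made-up name for this sketch
  namespace: default
spec:
  clusterServiceClassExternalName: azure-sql-12-0
  clusterServicePlanExternalName: standard-s1
  parameters:
    location: westeurope          # assumed parameters for this module
    resourceGroup: osba-demo-rg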

As you can see, there are some values you have to provide to OSBA:

  • clusterServiceClassExternalName – in our case, we want to create an Azure SQL DB. You can query the available service classes by using the following command: svcat get classes. We will be using azure-sql-12-0.
  • clusterServicePlanExternalName – the service plan name, which represents the service tier in Azure. Use svcat describe classes azure-sql-12-0 to show the available service plans for class azure-sql-12-0. We will be using standard-s1.
  • resourceGroup – the Azure resource group for the server and database

classes

Show available service classes via “svcat get classes”

azure-sql-12-0

Show available service plans for class “azure-sql-12-0” via “svcat describe classes azure-sql-12-0”

Now, create the service via kubectl:

kubectl create -f .\service-instance.yaml

Query the service instances by using the Service Catalog CLI:

svcat get instances

The result should be (after a short amount of time) something like this:

instances

In the Azure portal, you should also see these newly created resources:

portal_resources

Now that we have created the service instance, let's bind the instance in order to be able to use it. Here's the YAML file for it:
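
Again, the embedded file is not reproduced here; a minimal sketch of the service-binding.yaml could look like this (the names are made up; instanceRef must point to the service instance created above):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: osba-demo-db-binding      # made-up name for this sketch
  namespace: default
spec:
  instanceRef:
    name: osba-demo-db            # the service instance created above
  secretName: osba-demo-db-secret # secret that will receive the credentials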

kubectl create -f service-binding.yaml

As seen with the service instance, the service binding also needs some parameters in order to work. Of course, the binding needs a reference to the service instance it wants to use (instanceRef). The more interesting property is secretName. While creating the binding, the service broker also creates a secret in the current namespace, to which important values (like passwords, server name, database name, URIs etc.) are added. You can reference these secret values afterwards in your K8s deployments and add them e.g. as environment variables to your pods.

Now let's see via svcat whether the binding has been created:

bindings

That looks good. Over to the Kubernetes dashboard to see whether the secret has been created in the default namespace.

secret

Kubernetes secret

It seems like everything was “bound” for usage as expected and we are now ready to use the Azure SQL DB in our containers/pods!

Wrap Up

As you have seen in this example, with the Open Service Broker for Azure it is very easy to create Azure resources via Kubernetes object definitions. You simply need to install OSBA on your cluster with Helm! Afterwards, you can create and bind Azure services like Azure SQL DB. If you are curious which resource providers are supported: there are currently three stable services available (see the up-to-date list linked below)…

…and some experimental services:

  • Azure CosmosDB
  • Azure KeyVault
  • Azure Redis Cache
  • Azure Event Hubs
  • Azure Service Bus
  • Azure Storage
  • Azure Container Instances
  • Azure Search

The up-to-date list can always be found here: https://github.com/Azure/open-service-broker-azure/tree/master/docs/modules

Have fun with it 🙂

Holger Schwichtenberg: ASP.NET Blazor 0.4 released

The fourth preview version of Microsoft's .NET-based framework for WebAssembly programming offers several improvements.

Jürgen Gutsch: Creating a signature pad using Canvas and ASP.​NET Core Razor Pages

In one of our projects, we needed the possibility to add signatures to PDF documents. A technician fills out a checklist online, and afterwards the technician and a responsible person need to sign the checklist. The signatures then get embedded into a generated PDF document together with the results of the checklist. The signatures must be created on a web UI running on an iPad Pro.

It was pretty clear that we needed to use the HTML5 canvas element and to capture the pointer movements. Fortunately, we stumbled upon a pretty cool library on GitHub, created by Szymon Nowak from Poland: the super awesome Signature Pad, written in TypeScript and available as an NPM and Yarn package. It is also possible to use a CDN to load Signature Pad.

Use Signature Pad

Using Signature Pad is really easy and works well without any configuration. Let me show you in a quick way how it works:

To play around with it, I created a new ASP.NET Core Razor Pages web using the dotnet CLI:

dotnet new razor -n SignaturePad -o SignaturePad

I added a new razor page called Signature and added it to the menu in the _Layout.cshtml. I created a simple form and placed some elements in it:

<form method="POST">
    <p>
        <canvas width="500" height="400" id="signature" 
                style="border:1px solid black"></canvas><br>
        <button type="button" id="accept" 
                class="btn btn-primary">Accept signature</button>
        <button type="submit" id="save" 
                class="btn btn-primary">Save</button><br>
        <img width="500" height="400" id="savetarget" 
             style="border:1px solid black"><br>
        <input type="text" asp-for="@Model.SignatureDataUrl"> 
    </p>
</form>

The form posts the content to the current URL, which is the same Razor page but a different HTTP method handler. We will have a look at it later on.

The canvas is the most important element. This is the area where the signature gets drawn. I added a border to make the pad boundaries visible on the screen. I added a button to accept the signature. This means we lock the canvas and write the image data into the input field added as the last element. I also added a second button to submit the form. The image is just there to validate the signature and is not really needed, but I was curious how it looks in an image tag.

This is not the nicest HTML code but works for a quick test.

Right after the form, I added a script section to render the JavaScript at the end of the page. To get it running quickly, I use jQuery to access the HTML elements. I also copied the signature_pad.min.js into the project instead of using the CDN version:

@section Scripts{
    <script src="~/js/signature_pad.min.js"></script>
    <script>
        $(function () {

            var canvas = document.querySelector('#signature');
            var pad = new SignaturePad(canvas);

            $('#accept').click(function(){

                var data = pad.toDataURL();

                $('#savetarget').attr('src', data);
                $('#SignatureDataUrl').val(data);
                pad.off();
            
            });
                    
        });
    </script>
}

As you can see, creating the Signature Pad is simply done by creating a new instance of SignaturePad and passing the canvas as an argument. On click of the accept button, I start working with the pad. The function toDataURL() generates an image data URL that can be directly used as an image source, like I do in the next line. After that, I store the result as the value of the input field to send it to the server. In production this should be a hidden field. At the end, I switch the Signature Pad off to lock the canvas, so the user cannot manipulate the signature anymore.

Handling the image data URL with C#

The image data URL looks like this:

data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAAGQCAYA...

So after the comma, the image is a base64-encoded string. The data before the comma describes the image type and the encoding. I now send the complete data URL to the server, where we need to decode the string.

public void OnPost()
{
    if (String.IsNullOrWhiteSpace(SignatureDataUrl)) return;

    var base64Signature = SignatureDataUrl.Split(",")[1];            
    var binarySignature = Convert.FromBase64String(base64Signature);

    System.IO.File.WriteAllBytes("Signature.png", binarySignature);
}

In the page model we need to create a new method OnPost() to handle the HTTP POST method. Inside it, we first check whether the bound property has a value or not. Then we split the string at the comma and convert the base64 string into a byte array.

With this byte array we can do whatever we need to do. In the current project I store the image directly in the PDF and in this demo I just store the data in an image on the hard drive.

Conclusion

As mentioned, this is just a quick demo with some ugly code. But the rough idea could be used to build something better in Angular or React. To learn more about the Signature Pad, visit the repository: https://github.com/szimek/signature_pad

This example also shows what is possible with HTML5 these days. I really like the possibilities of HTML5 and the HTML5 APIs used with JavaScript.

Hope this helps :-)

Code-Inside Blog: DbProviderFactories & ODP.NET: When even Oracle can be tamed

Oracle and .NET: Tales from the dark ages

Every time I tried to load data from an Oracle database, it was a pretty terrible experience.

I remember that I struggled to find the right Oracle driver, and even when everything was installed, the strange TNS ora config file popped up and nothing worked.

It can be simple…

Two weeks ago I had the pleasure to load some data from an Oracle database and discovered something beautiful: actually, it can be pretty simple today.

The way to success:

1. Just ignore the System.Data.OracleClient namespace

The implementation is pretty old, and if you go this route you will end up in the terrible “Oracle driver/tns.ora” chaos mentioned above.

2. Use the Oracle.ManagedDataAccess:

Just install the official NuGet package and you are done. The single .dll contains all the bits to connect to an Oracle database. No driver installation or additional software is needed. Yay!

The NuGet package will add some config entries in your web.config or app.config. I will cover this in the section below.

3. Use sane ConnectionStrings:

Instead of the wild Oracle TNS config stuff, just use a (more or less) sane ConnectionString.

You can either just use the same configuration you would normally do in the TNS file, like this:

Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=MyHost)(PORT=MyPort)))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=MyOracleSID)));User Id=myUsername;Password=myPassword;

Or use the even simpler “easy connect name schema” like this:

Data Source=username/password@myserver//instancename;

DbProviderFactories & ODP.NET

As I mentioned earlier, after the installation your web.config or app.config might look different.

The most interesting addition is the registration in the DbProviderFactories-section:

...
<system.data>
    <DbProviderFactories>
      <remove invariant="Oracle.ManagedDataAccess.Client"/>
      <add name="ODP.NET, Managed Driver" invariant="Oracle.ManagedDataAccess.Client" description="Oracle Data Provider for .NET, Managed Driver"
          type="Oracle.ManagedDataAccess.Client.OracleClientFactory, Oracle.ManagedDataAccess, Version=4.122.1.0, Culture=neutral, PublicKeyToken=89b483f429c47342"/>
    </DbProviderFactories>
  </system.data>
...

I covered this topic a while ago in an older blogpost, but to keep it simple: It also works for Oracle!

private static void OracleTest()
{
    string constr = "Data Source=localhost;User Id=...;Password=...;";

    DbProviderFactory factory = DbProviderFactories.GetFactory("Oracle.ManagedDataAccess.Client");

    using (DbConnection conn = factory.CreateConnection())
    {
        try
        {
            conn.ConnectionString = constr;
            conn.Open();

            using (DbCommand dbcmd = conn.CreateCommand())
            {
                dbcmd.CommandType = CommandType.Text;
                dbcmd.CommandText = "select name, address from contacts WHERE UPPER(name) Like UPPER('%' || :name || '%') ";

                var dbParam = dbcmd.CreateParameter();
                // prefixing with : is possible, but @ will result in an error
                dbParam.ParameterName = "name";
                dbParam.Value = "foobar";

                dbcmd.Parameters.Add(dbParam);

                using (DbDataReader dbrdr = dbcmd.ExecuteReader())
                {
                    while (dbrdr.Read())
                    {
                        Console.WriteLine(dbrdr[0]);
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            Console.WriteLine(ex.StackTrace);
        }
    }
}

MSSQL, MySql and Oracle - via DbProviderFactories

The above code is a snippet from my larger sample demo covering MSSQL, MySQL and Oracle. If you are interested just check this demo on GitHub.

Each SQL dialect treats parameters a bit differently, so make sure you use the correct syntax for your target database.

Bottom line

Accessing an Oracle database from .NET doesn’t need to be a pain nowadays.

Be aware that the ODP.NET provider might surface higher-level APIs to work with Oracle databases. The DbProviderFactories approach worked well for our simple “just load some data” scenario.

Hope this helps.

MSDN Team Blog AT [MS]: Get started with Artificial Intelligence and Machine Learning

  • Are you interested in Cognitive Services and custom machine learning?
  • Do you want to work with neural networks and frameworks like TensorFlow and CNTK?
  • Do you want to dive into a real technical discussion with machine learning specialists?
  • Do you perhaps want to have your planned project examined in advance, or are you looking for tips for the implementation?

We bring you together with other software engineers in a three-day event to sharpen your machine learning skills through a series of structured challenges, and then to solve problems in the computer vision area together with you.

At the end of June, we are offering a free OpenHack on Artificial Intelligence and Machine Learning as a readiness measure.

At the OpenHacks we give the participants, i.e. you, tasks that you have to solve yourselves. This results in an enormously high learning efficiency and a remarkable knowledge transfer. For all developers dealing with these topics it is really a must.

So if you are actively working on Artificial Intelligence & Machine Learning, or want to start, come yourself or send other developers. Together with the software engineers of the CSE you can really dig into the topic.

Oh yes, very important: bring your own laptop and "come prepared to hack!!". No marketing, no sales. Pure hacking!!

The prerequisites:

  • Bring your own development machine
  • At least basic knowledge of Python and data structures would be good.
    Small tip: work through "Intro to Python for Data Science" as an introduction or refresher.
  • Not essential, but certainly helpful: first experience with machine learning
  • Target audience: everyone who can develop: developers, architects, data scientists, …

As CSE (Commercial Software Engineering) we can subsequently also support you in implementing challenging cloud projects. The goal, of course, is to motivate you to think about or start your own AI & ML projects. As a small inspiration, here are a few projects that were created in cooperation with the CSE: https://www.microsoft.com/developerblog/category/machine-learning/

Register at: https://aka.ms/openhackberlin

We look forward to seeing you!!

Code-Inside Blog: CultureInfo.GetCultureInfo() vs. new CultureInfo() - what's the difference?

The problem

The problem started with a simple piece of code:

double.TryParse("1'000", NumberStyles.Any, culture, out _)

Be aware that the given culture was “de-CH”, and the Swiss use the ' as the group separator for numbers.

Unfortunately, the Swiss authorities have abandoned the ' for currencies, but it is widely used in the industry, so such numbers should be parsed and displayed.

Now Microsoft steps in, and they use a very similar char in the “de-CH” region setting:

  • The baked-in char to separate numbers: ’ (CharCode: 8217)
  • The obvious choice would be: ' (CharCode: 39)

The result of this configuration hell:

If you don’t change the region settings in Windows you can’t parse doubles with this fancy group separator.

Stranger things:

My work machine is running the EN-US version of Windows and my tests were failing because of this madness. But it was even stranger: some other tests (quite similar to what I did) were OK on our company de-CH machines.

But… why?

After some crazy time, I discovered that our company de-CH machines (and the machines of our customer) were using the “sane” group separator, but my code still didn’t work as expected.

Root cause

The root problem (besides the stupid char choice) was this: I used the “wrong” method to get the “DE-CH” culture in my code.

Let’s try out this demo code:

class Program
{
    static void Main(string[] args)
    {
        var culture = new CultureInfo("de-CH");

        Console.WriteLine("de-CH Group Separator");
        Console.WriteLine(
            $"{culture.NumberFormat.CurrencyGroupSeparator} - CharCode: {(int)char.Parse(culture.NumberFormat.CurrencyGroupSeparator)}");
        Console.WriteLine(
            $"{culture.NumberFormat.NumberGroupSeparator} - CharCode: {(int)char.Parse(culture.NumberFormat.NumberGroupSeparator)}");

        var cultureFromFramework = CultureInfo.GetCultureInfo("de-CH");

        Console.WriteLine("de-CH Group Separator from Framework");
        Console.WriteLine(
            $"{cultureFromFramework.NumberFormat.CurrencyGroupSeparator} - CharCode: {(int)char.Parse(cultureFromFramework.NumberFormat.CurrencyGroupSeparator)}");
        Console.WriteLine(
            $"{cultureFromFramework.NumberFormat.NumberGroupSeparator} - CharCode: {(int)char.Parse(cultureFromFramework.NumberFormat.NumberGroupSeparator)}");
    }
}

The result should be something like this:

de-CH Group Separator
' - CharCode: 8217
' - CharCode: 8217
de-CH Group Separator from Framework
' - CharCode: 8217
' - CharCode: 8217

Now change the region setting for de-CH and see what happens:


de-CH Group Separator
' - CharCode: 8217
X - CharCode: 88
de-CH Group Separator from Framework
' - CharCode: 8217
' - CharCode: 8217

Only the CultureInfo from the first instance got the change!

Modified vs. read-only

The problem can be summarized with: RTFM!

From the MSDN for GetCultureInfo: Retrieves a cached, read-only instance of a culture.

The “new CultureInfo” constructor will pick up the changed settings from Windows.

TL;DR:

  • CultureInfo.GetCultureInfo will return a cached, “baked-in” culture, which might be very fast, but doesn’t respect user changes.
  • If you need to use the modified values from Windows: use the normal CultureInfo constructor.
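
Applied to the parsing problem from the beginning, a minimal sketch could look like this (assuming the de-CH region settings in Windows were changed to the plain apostrophe, char code 39):

// new CultureInfo("de-CH") picks up the user overrides from the Windows
// region settings; CultureInfo.GetCultureInfo("de-CH") would return the
// cached, read-only defaults instead.
var culture = new CultureInfo("de-CH");

if (double.TryParse("1'000", NumberStyles.Any, culture, out var result))
{
    Console.WriteLine(result); // prints 1000
}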

Hope this helps!

Stefan Henneken: IEC 61131-3: The ‘Observer’ Pattern

The Observer pattern is suitable for applications that require one or more function blocks to be notified when the state of a particular function block changes. The assignment of the communication participants can be changed at runtime.

In almost every IEC 61131-3 program, function blocks exchange states with each other. In the simplest case, one input of one FB is assigned the output of another FB.

Pic01

This makes it very easy to exchange states between function blocks. But this simplicity has its price:

Inflexibility. The assignment between fbSensor and the three instances of FB_Actuator is hard-coded in the program. Dynamic assignment between the FBs during runtime is not possible.

Fixed dependencies. The data type of the output variable of FB_Sensor must be compatible with the input variable of FB_Actuator. If there is a new sensor component whose output variable is incompatible with the previous data type, this necessarily results in an adjustment of the data type of the actuators.

Problem Definition

The following example shows how, with the help of the observer pattern, the fixed assignment between the communication participants can be dispensed with. The sensor reads a measured value (e.g. a temperature) from a data source, while the actuator performs actions depending on a measured value (e.g. temperature control). The communication between the participants should be changeable at runtime. To eliminate the disadvantages mentioned above, two basic OO design principles are helpful:

  • Identify those areas that remain constant and separate them from those that change.
  • Never program directly to implementations, but always to interfaces. The assignment between input and output variables must therefore no longer be permanently implemented.

This can be realized elegantly with the help of interfaces that define the communication between the FBs. There is no longer a fixed assignment of input and output variables. This results in a loose coupling between the participants. Software design based on loose coupling makes it possible to build flexible software systems that cope better with changes, since the dependencies between the participants are minimized.

Definition of the Observer Pattern

The observer pattern provides an efficient communication mechanism between several participants, whereby one or more participants depend on the state of one participant. The participant providing the state is called the Subject (FB_Sensor). The participants which depend on this state are called Observers (FB_Actuator).

The Observer pattern is often compared to a newspaper subscription service. The publisher is the subject, while the subscribers are the observers. A subscriber must register with the publisher and, when registering, may also specify which information he would like to receive. The publisher maintains a list in which all subscribers are stored. As soon as a new publication is available, the publisher sends the desired information to all subscribers in the list.

This is expressed more formally by Gamma, Helm, Johnson and Vlissides in the book "Design Patterns. Elements of Reusable Object-Oriented Software":

The Observer pattern defines a 1-to-n dependency between objects, so that changing the state of one object causes all dependent objects to be notified and automatically updated.

Implementation

How the subject receives the data and how the observer processes the data is not discussed here in more detail.

Observer

The method Update() notifies the observer when the subject's value changes. Since this behaviour is the same for all observers, the interface I_Observer is defined, which is implemented by all observers.

The function block FB_Observer also defines a property that returns the current actual value.

Pic02 Pic03

Since the data is exchanged via a method, no further inputs or outputs are required.

FUNCTION_BLOCK PUBLIC FB_Observer IMPLEMENTS I_Observer
VAR
  fValue : LREAL;
END_VAR

Here is the implementation of the method Update():

METHOD PUBLIC Update
VAR_INPUT
  fValue : LREAL;
END_VAR
THIS^.fValue := fValue;

And the property fActualValue:

PROPERTY PUBLIC fActualValue : LREAL
fActualValue := THIS^.fValue;

Subject

The subject manages a list of observers. Using the methods Attach() and Detach(), the individual observers can register and unregister.

Pic04 Pic05

Since all observers implement the interface I_Observer, the list is of type ARRAY[1..Param.cMaxObservers] OF I_Observer. The exact implementation of the observers does not have to be known at this point. Further variants of observers can be created; as long as they implement the interface I_Observer, the subject can communicate with them.

The method Attach() receives the interface pointer to the observer as a parameter. Before it is stored in the list, the method checks whether it is valid and not already contained in the list.

METHOD PUBLIC Attach : BOOL
VAR_INPUT
  ipObserver            : I_Observer;
END_VAR
VAR
  nIndex                : INT := 0;
END_VAR

Attach := FALSE;
IF (ipObserver = 0) THEN
  RETURN;
END_IF
// is the observer already registered?
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = ipObserver) THEN
    RETURN;
  END_IF
END_FOR

// save the observer object into the array of observers and send the actual value
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = 0) THEN
    THIS^.aObservers[nIndex] := ipObserver;
    THIS^.aObservers[nIndex].Update(THIS^.fValue);
    Attach := TRUE;
    EXIT;
  END_IF
END_FOR

The method Detach() also receives the interface pointer to the observer as a parameter. If the interface pointer is valid, the observer is searched for in the list and the corresponding position is cleared.

METHOD PUBLIC Detach : BOOL
VAR_INPUT
  ipObserver             : I_Observer;
END_VAR
VAR
  nIndex                 : INT := 0;
END_VAR

Detach := FALSE;
IF (ipObserver = 0) THEN
  RETURN;
END_IF
FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] = ipObserver) THEN
    THIS^.aObservers[nIndex] := 0;
    Detach := TRUE;
  END_IF
END_FOR

If the state of the subject changes, the method Update() is called on all valid interface pointers in the list. This functionality can be found in the private method Notify().

METHOD PRIVATE Notify
VAR
  nIndex : INT := 0;
END_VAR

FOR nIndex := 1 TO Param.cMaxObservers DO
  IF (THIS^.aObservers[nIndex] <> 0) THEN
    THIS^.aObservers[nIndex].Update(THIS^.fActualValue);
  END_IF
END_FOR

In this example, the subject generates a random value every second and then notifies the observers using the Notify() method.

FUNCTION_BLOCK PUBLIC FB_Subject IMPLEMENTS I_Subject
VAR
  fbDelay : TON;
  fbDrand : DRAND;
  fValue : LREAL;
  aObservers : ARRAY [1..Param.cMaxObservers] OF I_Observer;
END_VAR

// creates a random value every second and invokes the update method
fbDelay(IN := TRUE, PT := T#1S);
IF (fbDelay.Q) THEN
  fbDelay(IN := FALSE);
  fbDrand(SEED := 0);
  fValue := fbDrand.Num * 1234.5;
  Notify();
END_IF

There is no statement in the subject that accesses FB_Observer directly. Access always takes place indirectly via the interface I_Observer. An application can be extended with any observer. As long as it implements the interface I_Observer, no adjustments to the subject are necessary.

Pic06

Application

The following module should help to test the example program. A subject and two observers are created in it. By setting appropriate auxiliary variables, the two observers can be both connected to the subject and disconnected again at runtime.

PROGRAM MAIN
VAR
  fbSubject         : FB_Subject;
  fbObserver1       : FB_Observer;
  fbObserver2       : FB_Observer;
  bAttachObserver1  : BOOL;
  bAttachObserver2  : BOOL;
  bDetachObserver1  : BOOL;
  bDetachObserver2  : BOOL;
END_VAR

fbSubject();

IF (bAttachObserver1) THEN
  fbSubject.Attach(fbObserver1);
  bAttachObserver1 := FALSE;
END_IF
IF (bAttachObserver2) THEN
  fbSubject.Attach(fbObserver2);
  bAttachObserver2 := FALSE;
END_IF
IF (bDetachObserver1) THEN
  fbSubject.Detach(fbObserver1);
  bDetachObserver1 := FALSE;
END_IF
IF (bDetachObserver2) THEN
  fbSubject.Detach(fbObserver2);
  bDetachObserver2 := FALSE;
END_IF

Sample 1 (TwinCAT 3.1.4022) on GitHub

    Improvements

    Subject: Interface or base class?

    The necessity of the interface I_Observer is obvious in this implementation. Access to an observer is decoupled from implementation by the interface.

    However, the interface I_Subject does not appear necessary here. And in fact, the interface I_Subject could be omitted. However, I have planned it anyway, because it keeps the option open to create special variants of FB_Subject. For example, there might be a function block that does not organize the observer list in an array. The methods for logging on and off the different Observers could then be accessed generically using the interface I_Subject.

    The disadvantage of the interface, however, is that the code for logging in and out must be implemented each time, even if the application does not require it. Instead, a base class (FB_SubjectBase) seems to be more useful for the subject. The management code for the methods Attach() and Detach() could be moved to this base class. If it is necessary to create a special subject (FB_SubjectNew), it can be inherited from this base class (FB_SubjectBase).

    But what if this special function block (FB_SubjectNew) already inherits from another base class (FB_Base)? Multiple inheritance is not possible (however, several interfaces can be implemented).

    Here, it makes sense to embed the base class in the new function block, i.e. to create a local instance of FB_SubjectBase.

    FUNCTION_BLOCK PUBLIC FB_SubjectNew EXTENDS FB_Base IMPLEMENTS I_Subject
    VAR
      fValue               : LREAL;
      fbSubjectBase        : FB_SubjectBase;
    END_VAR
    

    The methods Attach() and Detach() can then access this local instance.

    Method Attach():

    METHOD PUBLIC Attach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR
    
    Attach := FALSE;
    IF (THIS^.fbSubjectBase.Attach(ipObserver)) THEN
      ipObserver.Update(THIS^.fValue);
      Attach := TRUE;
    END_IF
    

    Method Detach():

    METHOD PUBLIC Detach : BOOL
    VAR_INPUT
      ipObserver : I_Observer;
    END_VAR
    Detach := THIS^.fbSubjectBase.Detach(ipObserver);
    

    Method Notify():

    METHOD PRIVATE Notify
    VAR
      nIndex : INT := 0;
    END_VAR
    
    FOR nIndex := 1 TO Param.cMaxObservers DO
      IF (THIS^.fbSubjectBase.aObservers[nIndex] 0) THEN
        THIS^.fbSubjectBase.aObservers[nIndex].Update(THIS^.fActualValue);
      END_IF
    END_FOR
    

    Thus, the new subject implements the interface I_Subject, inherits from the function block FB_Base and can access the functionalities of FB_SubjectBase via the embedded instance.

    Pic07

    Sample 2 (TwinCAT 3.1.4022) on GitHub

    Update: Push or pull method?

    There are two ways in which the observer receives the desired information from the subject:

    With the push method, all information is passed to the observer via the update method. Only one method call is required for the entire information exchange. In the example, only one variable of the data type LREAL has ever passed the subject. But depending on the application, it can be considerably more data. However, not every observer always needs all the information that is passed to it. Furthermore, extensions are made more difficult: What if the method Update() is extended by further data? All observers must be customized. This can be remedied by using a special function block as a parameter. This function block encapsulates all necessary information in properties. If additional properties are added, it is not necessary to adjust the update method.

If the pull method is implemented, the observer receives only a minimal notification. It then fetches all the information it needs from the subject itself. For this, two conditions must be met: first, the subject should make all data available as properties; second, the observer must be given a reference to the subject so that it can access those properties. One solution is for the Update() method to pass a reference to the subject (i.e., to itself) as a parameter.

Both variants can certainly be combined. The subject provides all relevant data as properties; at the same time, the Update() method can pass a reference to the subject along with the most important information in a function block. This is the classic approach of numerous GUI libraries. A small sketch contrasting the two variants follows after the tip below.

    Tip: If the subject knows little about its observers, the pull method is preferable. If the subject knows its observers (since there are only a few different types of observers), the push method should be used.
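
Since the pattern itself is language-agnostic, here is a minimal C# sketch contrasting the two variants. All names are illustrative and not part of the original TwinCAT sample:

using System;

// push: the subject hands the data directly to the observer
// pull: the subject hands the observer a reference to itself
public interface ISubject
{
    double Value { get; }              // pull: observers read what they need
}

public interface IObserver
{
    void UpdatePush(double value);     // push variant
    void UpdatePull(ISubject subject); // pull variant
}

public class LoggingObserver : IObserver
{
    public void UpdatePush(double value) =>
        Console.WriteLine($"pushed: {value}");

    public void UpdatePull(ISubject subject) =>
        Console.WriteLine($"pulled: {subject.Value}");
}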

  • Holger Schwichtenberg: GroupBy finally works in Entity Framework Core 2.1 Release Candidate 1

Tests with Entity Framework Core 2.1 Release Candidate 1 show that the translation of the LINQ GroupBy operator into SQL now actually works. Finally!
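
An illustrative query of the kind that EF Core 2.1 can now translate into a SQL GROUP BY instead of evaluating it in memory (the context and entity names are hypothetical):

// assuming a typical EF Core DbContext with an Orders DbSet
var ordersPerCustomer = context.Orders
    .GroupBy(o => o.CustomerId)
    .Select(g => new { CustomerId = g.Key, Count = g.Count() })
    .ToList();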

Holger Schwichtenberg: The highlights of Build 2018

The Dotnet-Doktor summarizes the essential news from Microsoft's Build 2018 conference.

Holger Schwichtenberg: Microsoft Build 2018: What can we expect?

Microsoft's developer conference "Build 2018" starts on Monday, May 7, 2018, at 5 p.m. German time with the first keynote. Microsoft will likely once again announce news about .NET, .NET Core, Visual Studio, Azure, and Windows.

Martin Richter: Getting notified when the Symantec Endpoint Protection Manager (SEPM) no longer updates its virus definitions

We have been using Symantec Endpoint Protection in my company for years, currently version 14.0.1.
Basically, the thing does what it is supposed to. But... there is one case for which Symantec's toolbox contains no tool to solve the problem.

What happens?

Ideally, you don't want to hear or see anything from an antivirus system. It should just work, and that's it.
Especially in a company with 5, 10, 20 or more clients.
The SEPM (Symantec Endpoint Protection Manager) dutifully notifies me when stations still have old virus definitions after several days, or when a certain number of PCs with old virus definitions has been reached. For us, these are often machines that are on the road or have not been switched on for a long time.

But there is one case in which the SEPM fails completely: namely when the SEPM itself no longer receives new virus definitions. For whatever reason!

Over the past years I have had the case several times that the SEPM did not download new virus definitions from Symantec. The reasons were manifold. Sometimes the SEPM had no internet access due to a configuration error; sometimes the SEPM did not start at all after a Windows update.
But in most cases the SEPM was simply unable to download the new signatures, even though it indicated that some were available for download.

The last case is particularly annoying. I have already had two support cases open on the subject, but the genuinely dedicated supporters still could not find anything.
After restarting the service or the server, it almost always worked again. So apparently something had just "jammed" internally!

But this case is dangerous. You notice nothing at all until, after a few days, a certain number of PCs simply have old virus definitions. In our configuration that is 10% of the machines after 4 days. You can lower these thresholds, but then the warnings mostly just annoy you without a compelling reason.
And in this case you cannot even restart the SEPM just as a quick test.
Actually I don't want any notification at all; the system should first try to fix detected problems itself.
Above all, I have no desire to task somebody with starting that stupid console once a day to check what is going on. I get emails about everything else anyway.

I find such a long latency, during which nobody notices that the AV signatures are outdated, simply dangerous.
But there are no built-in tools to warn about it!
Moreover, this case kept recurring roughly every 6 to 9 weeks.
And that is annoying.

So I went searching and wrote two small jobs, described below, for the SQL Server that holds our data.
These jobs have now been running for a few months and have already fixed this problem "on their own" several times...

    Job 1: Symantec Virus Signature Check

This job runs once every hour.
The code simply does the following:

• If the signatures have changed within the last 32 hours (see the value of @delta), everything is OK.
• If there was no signature update, two steps are initiated.
• A warning email is sent to the admin via the internal SQL Server mail service.
• Then a second job named Symantec Restart is started.

The 32 hours are a value derived from experience. In 98% of all cases the signatures are updated within 24 hours, but there are a few exceptions.

If the email shows up more than twice, I probably have to take action and check manually.

    DECLARE @delta INT
    -- number of hours
    SET @delta = 32 
    DECLARE @d DATETIME 
    DECLARE @t VARCHAR(MAX)
    IF NOT EXISTS(SELECT * FROM PATTERN WHERE INSERTDATETIME>DATEADD(hh,-@delta,GETDATE()) AND PATTERN_TYPE='VIRUS_DEFS')
    BEGIN
          SET @d = (SELECT TOP 1 INSERTDATETIME FROM PATTERN WHERE PATTERN_TYPE='VIRUS_DEFS' ORDER BY INSERTDATETIME DESC)
          SET @t = 'Hallo Admin!
    
    Die letzten Antivirus-Signaturen wurden am ' + CONVERT(VARCHAR, @d, 120)+' aktualisiert!
    Es wird versucht den SEPM Dienst neu zu starten!
    
    Liebe Grüße Ihr
    SQLServerAgent'
          EXEC msdb.dbo.sp_send_dbmail @profile_name='Administrator',
    				   @recipients='administrator@mydomain.de',
    				   @subject='Symantec Virus Definitionen sind nicht aktuell',
    				   @body=@t
          PRINT 'Virus Signaturen sind veraltet! Letztes Update: ' + CONVERT(VARCHAR, @d, 120)
          EXEC msdb.dbo.sp_start_job @job_name='Symantec Restart'
          PRINT 'Restart SEPM server!!!'
END
ELSE
BEGIN
      SET @d = (SELECT TOP 1 INSERTDATETIME FROM PATTERN WHERE PATTERN_TYPE='VIRUS_DEFS' ORDER BY INSERTDATETIME DESC)
      PRINT 'Virus Signaturen sind OK! Letztes Update: ' + CONVERT(VARCHAR, @d, 120)
END
    

Job 2: Symantec Restart

This job is only started by job 1 and is extremely trivial.
It simply executes two commands that stop and then restart the SEPM.

    NET STOP SEMSRV
    NET START SEMSRV
    

PS: The sad part was that support offered no help either after I suggested this kind of solution. They did not want to give me any information about the table structures. In the end, the search engines were kind enough to deliver all the necessary information, because I was not the only one with this problem.



David Tielke: dotnet Cologne 2018 - slides of my talk on service-oriented architectures

Today I, too, finally kicked off my conference year at dotnet Cologne 2018. At the conference, organized annually by the .NET Usergroup Köln/Bonn e.V., I was able to take part as a speaker for the first time on behalf of my long-time partner Developer Media. While many new and hip topics were on the agenda, I deliberately focused on the tried and tested and cleared up numerous prejudices and points of criticism concerning one of the most misunderstood architectural patterns of all: service-oriented architectures. Besides the theory of architectures and how SOA works, the 60-minute level 300 session was above all about one thing: What can we learn from this ingenious architectural style for other architectures? How can a monolithic system architecture be transformed into a flexible and maintainable architecture using aspects of SOA? After the very well attended talk I received a lot of feedback from the attendees, especially from those who could not get a seat anymore. That is why I recorded the topic again as a webcast and published it on my YouTube channel. In addition, as always, the slides are available here as a PDF. Once again I would like to thank all attendees, of course the organizer, and my partner Developer Media for this great conference day. See you next year!

Webcast


Slides

Links
Slides
YouTube channel

Martin Richter: Well, well: SetFilePointer and SetFilePointerEx are actually superfluous if you use ReadFile and WriteFile...

You never stop learning; or rather, you have probably never read the documentation completely and correctly.

If you don't read a file sequentially, it is normal to use Seek followed by Read/Write. Or, in Win32 terms, SetFilePointer followed by ReadFile/WriteFile.

In a StackOverflow answer I stumbled upon this statement:

    you not need use SetFilePointerEx – this is extra call. use explicit offset in WriteFile / ReadFile instead

(Spelling not corrected.)

But the content was new to me. Well, well: even if you don't use FILE_FLAG_OVERLAPPED, you can still use the OVERLAPPED structure and the offsets it contains.
These are even kindly updated after the read/write has completed.

Quote from the MSDN (the text is identical for WriteFile):

    Considerations for working with synchronous file handles:

    • If lpOverlapped is NULL, the read operation starts at the current file position and ReadFile does not return until the operation is complete, and the system updates the file pointer before ReadFile returns.
    • If lpOverlapped is not NULL, the read operation starts at the offset that is specified in the OVERLAPPED structure and ReadFile does not return until the read operation is complete. The system updates the OVERLAPPED offset before ReadFile returns.
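
The original post discusses the native Win32 API; purely as an illustration, here is a hypothetical C# P/Invoke sketch of the same technique: reading at an explicit offset via the OVERLAPPED structure on a synchronous file handle, without calling SetFilePointer(Ex). File path and offset are made up:

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Threading;
using Microsoft.Win32.SafeHandles;

static class OffsetRead
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool ReadFile(
        SafeFileHandle hFile, byte[] buffer, uint bytesToRead,
        out uint bytesRead, ref NativeOverlapped overlapped);

    static void Main()
    {
        using var fs = new FileStream(@"C:\temp\test.bin", FileMode.Open, FileAccess.Read);

        long offset = 4096; // read starting at this absolute file position
        var overlapped = new NativeOverlapped
        {
            OffsetLow = (int)(offset & 0xFFFFFFFF),
            OffsetHigh = (int)(offset >> 32)
        };

        var buffer = new byte[512];
        if (ReadFile(fs.SafeFileHandle, buffer, (uint)buffer.Length,
                     out uint read, ref overlapped))
        {
            Console.WriteLine($"Read {read} bytes at offset {offset}.");
        }
    }
}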


Holger Schwichtenberg: C# 8.0 detects more programming errors

Reference types will no longer be "nullable" automatically; developers will have to declare explicitly that the value null may be assigned.
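
An illustrative snippet (C# 8 with the nullable feature enabled):

#nullable enable

string name = null;   // compiler warning: null assigned to a non-nullable reference
string? alias = null; // OK: explicitly declared as nullable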

    Code-Inside Blog: .editorconfig: Sharing a common coding style in a team

    Sharing Coding Styles & Conventions

In a team it is really important to set coding conventions and to use a specific coding style, because it helps a lot to maintain the code. Of course each developer has his or her own "style", but some rules should be set, otherwise it will end in a mess.

Typical examples of such rules are "Should I use var or not?" or "Are _ prefixes still OK for private fields?". Those questions shouldn't be answered in a wiki - the rules should be part of daily developer life and should show up in your IDE!

Be aware that coding conventions are highly debated. In our team it was important to set a common ruleset, even if not everyone is 100% happy with every setting.

    Embrace & enforce the conventions

    In the past this was the most “difficult” aspect: How do we enforce these rules?

    Rules in a Wiki are not really helpful, because if you are in your favorite IDE you might not notice rule violations.

Stylecop was once a thing in the Visual Studio world, but I'm not sure if it is still alive.

Resharper, a pretty useful Visual Studio plugin, comes with its own file format for sharing code conventions, but you will need Resharper to enforce and embrace those conventions.

    Introducing: .editorconfig

    Last year Microsoft decided to support the .EditorConfig file format in Visual Studio.

The .editorconfig file defines a set of common coding styles (think of tabs vs. spaces) in a very simple format. Different text editors and IDEs support this file, which makes it a good choice if you are using multiple IDEs or working with different setups.

    Additionally Microsoft added a couple of C# related options for the editorconfig file to support the C# language features.

    Each rule can be marked as “Information”, “Warning” or “Error” - which will light up in your IDE.

    Sample

This was a tough choice, but I ended up with the .editorconfig of the CoreCLR. It is more or less the "normal" .NET style guide. I'm not sure if I love the "var"-setting and the "static private field naming (like s_foobar)", but I can live with them and it was a good starting point for us (and still is).
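
For a first impression, a few typical lines in the .editorconfig format (a shortened, illustrative excerpt, not the actual CoreCLR file):

root = true

[*.cs]
indent_style = space
indent_size = 4

# prefer var for built-in types, surfaced as a suggestion in the IDE
csharp_style_var_for_built_in_types = true:suggestion

# do not prefix field access with this.; flag violations as a warning
dotnet_style_qualification_for_field = false:warning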

The .editorconfig file can be saved at the same level as the .sln file, but you can also use multiple .editorconfig files based on the folder structure. Visual Studio should detect the file and apply the rules.

    Benefits

When everything is ready, Visual Studio should surface the results and show that nice light bulb in the editor.


Be aware that I have Resharper installed, and Resharper has its own ruleset, which might conflict with the .editorconfig settings. You need to adjust those settings in Resharper. I'm still not 100% sure how good the .editorconfig support is; sometimes I need to overwrite the baked-in Resharper settings and sometimes it just works. Maybe this page gives a hint.

    Getting started?

Just search for a .editorconfig file (or use one from the Microsoft GitHub repositories) and play with the settings. The setup is easy and it's just a small text file right next to your code. Read more about the customization here.

    Related topic

    If you are looking for a more powerful option to embrace coding standards, you might want to take a look at Roslyn Analysers:

    With live, project-based code analyzers in Visual Studio, API authors can ship domain-specific code analysis as part of their NuGet packages. Because these analyzers are powered by the .NET Compiler Platform (code-named “Roslyn”), they can produce warnings in your code as you type even before you’ve finished the line (no more waiting to build your code to discover issues). Analyzers can also surface an automatic code fix through the Visual Studio light bulb prompt to let you clean up your code immediately

MSDN Team Blog AT [MS]: OpenHack IoT & Data – May 28-30

As CSE (Commercial Software Engineering) we can support customers in implementing challenging cloud projects. At the end of May we are offering a three-day OpenHack on IoT & Data as a readiness measure. For all developers dealing with these topics it is practically a must.

May 28-30, OpenHack IoT & Data: at the OpenHacks we give the participants, i.e. you, tasks that you have to solve yourselves. This results in enormously high learning efficiency and a remarkable transfer of knowledge.

Furthermore, in this case you don't need your own project to keep working on, so you don't have to give us any of your project information either. That can be an important point with some bosses. :-)

Target audience: everyone who can develop: developers, architects, data scientists, ...

So if you are actively engaged with the topics of IoT & Data, or want to be, come yourself or send other developers. Together with the software engineers of the CSE you can really dig into the subject.

Oh yes, very important: bring your own laptop and "come prepare to Hack!!". No marketing, no sales. Pure hacking!!

The goal, of course, is to motivate you to think about or start your own IoT & Data projects. As a little inspiration, here are a few projects created in cooperation with the CSE: https://www.microsoft.com/developerblog/tag/IoT

Register at: http://www.aka.ms/zurichopenhack

I am looking forward to seeing you!!

    MSDN Team Blog AT [MS]: Build 2018 Public Viewing with BBQ & Beer


The Microsoft Build conference is THE event for all software developers working with Microsoft technologies. The keynote always gives a great overview of the latest developments in .NET, Azure, Windows, Visual Studio, AI, IoT, big data and more.

Some are lucky enough to be there live on site. A few, however, have to stay at home.

Is that a reason to be sad? It depends. ;-)
At least in Graz, the community is meeting for the Build 2018 Public Viewing with BBQ & Beer.

To get in the mood for the keynote, the Microsoft Developer User Group Graz will start somewhat earlier this year with BBQ and beer.

BBQ and beer from 16:00, then
the keynote at 17:30, followed by a relaxed get-together.

The Microsoft Developer User Group Graz is looking forward to your coming, good food and an exciting keynote!


Stefan Henneken: IEC 61131-3: The generic data type T_Arg

In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be put to good use. In the example described, a function compares two variables to determine whether the data type, the data length, and the content are exactly the same. Instead of implementing a separate function for each data type, the same requirements can be implemented much more elegantly with just one function using the data type ANY.

Some time ago I had a similar task: a method was to be developed that accepts an arbitrary number of parameters. Both the data type and the number of parameters were arbitrary.

In my first approach I tried to use an array of variable length of type ARRAY [*] OF ANY. However, arrays of variable length can only be used as VAR_IN_OUT, and the data type ANY only as VAR_INPUT (see also IEC 61131-3: Arrays of variable length). So this approach was ruled out.

As an alternative to the data type ANY, the structure T_Arg is available. T_Arg is declared in the TwinCAT library Tc2_Utilities and, in contrast to ANY, is also available under TwinCAT 2. The layout of T_Arg is comparable to the structure used for the data type ANY (see also The wonders of ANY).

    TYPE T_Arg :
    STRUCT
      eType   : E_ArgType    := ARGTYPE_UNKNOWN;     (* Argument data type *)
      cbLen   : UDINT        := 0;                   (* Argument data byte length *)
      pData   : UDINT        := 0;                   (* Pointer to argument data *)
    END_STRUCT
    END_TYPE
    

T_Arg can be used anywhere, including in the VAR_IN_OUT section.

The following function adds an arbitrary number of values whose data types can also be arbitrary. The result is returned as LREAL.

    FUNCTION F_AddMulti : LREAL
    VAR_IN_OUT
      aArgs        : ARRAY [*] OF T_Arg;
    END_VAR
    VAR
  nIndex    : DINT;
  aUSINT    : USINT;
  aUINT     : UINT;
  aINT      : INT;
  aDINT     : DINT;
  aREAL     : REAL;
  aLREAL    : LREAL;
    END_VAR
    
    F_AddMulti := 0.0;
    FOR nIndex := LOWER_BOUND(aArgs, 1) TO UPPER_BOUND(aArgs, 1) DO
      CASE (aArgs[nIndex].eType) OF
        E_ArgType.ARGTYPE_USINT:
          MEMCPY(ADR(aUSINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aUSINT;
        E_ArgType.ARGTYPE_UINT:
          MEMCPY(ADR(aUINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aUINT;
        E_ArgType.ARGTYPE_INT:
          MEMCPY(ADR(aINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aINT;
        E_ArgType.ARGTYPE_DINT:
          MEMCPY(ADR(aDINT), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aDINT;
        E_ArgType.ARGTYPE_REAL:
          MEMCPY(ADR(aREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aREAL;
        E_ArgType.ARGTYPE_LREAL:
          MEMCPY(ADR(aLREAL), aArgs[nIndex].pData, aArgs[nIndex].cbLen);
          F_AddMulti := F_AddMulti + aLREAL;
      END_CASE
    END_FOR
    

Calling the function is, however, somewhat more cumbersome than with the data type ANY.

    PROGRAM MAIN
    VAR
      sum          : LREAL;
      args         : ARRAY [1..4] OF T_Arg;
      a            : INT := 4567;
      b            : REAL := 3.1415;
      c            : DINT := 7032345;
      d            : USINT := 13;
    END_VAR
    
    args[1] := F_INT(a);
    args[2] := F_REAL(b);
    args[3] := F_DINT(c);
    args[4] := F_USINT(d);
    sum := F_AddMulti(args);
    

The array passed to the function must be initialized beforehand. The library Tc2_Utilities provides helper functions that convert a variable into a structure of type T_Arg (F_INT(), F_REAL(), F_DINT(), ...). The function for adding the values has only one input variable of type ARRAY [*] OF T_Arg.

The data type T_Arg is used, for example, in the function block FB_FormatString() or in the function F_FormatArgToStr() of TwinCAT. With the function block FB_FormatString(), up to 10 placeholders in a string can be replaced by the values of PLC variables of type T_Arg (similar to fprintf in C).

One advantage of ANY is the fact that the data type is defined by the IEC 61131-3 standard.

Even though the generic data types ANY and T_Arg do not match the capabilities of generics in C# or templates in C++, they nevertheless support the development of generic functions in IEC 61131-3. Functions can now be designed so that the same function can be used for different data types and data structures.
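
To make the comparison concrete, here is a small, purely illustrative C# counterpart (not part of the original article): what F_AddMulti solves with T_Arg and a CASE over E_ArgType can be expressed in C# through the runtime type system.

using System;
using System.Linq;

static class Calc
{
    // Accepts any number of arguments of any numeric type, like F_AddMulti above.
    // Convert.ToDouble dispatches on the runtime type, replacing the CASE block.
    public static double AddMulti(params object[] args) =>
        args.Sum(Convert.ToDouble);
}

// usage, mirroring the MAIN program above:
// double sum = Calc.AddMulti(4567, 3.1415f, 7032345, (byte)13);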

    Jürgen Gutsch: A generic logger factory facade for classic ASP.NET

ASP.NET Core already has this feature. There is an ILoggerFactory to create a logger. You are able to inject the ILoggerFactory into your component (controller, service, etc.) and create a named logger from it. During testing you are able to replace this factory with a mock, so you don't test the logger as well and don't have an additional dependency to set up.

Recently we had the same requirement in a classic ASP.NET project, where we use Ninject to enable dependency injection and log4net to log everything we do, including all exceptions. One important requirement is a named logger per component.

    Creating named loggers

Usually the log4net logger gets created inside the components as a private static instance:

    private static readonly ILog _logger = LogManager.GetLogger(typeof(HomeController));
    

There already is a static factory method to create a named logger. Unfortunately this isn't really testable, so we needed a different solution.

We could create a bunch of named loggers in advance and register them with Ninject, which obviously is not the right solution. We needed something more generic. We figured out two different solutions:

    // would work well
    public MyComponent(ILoggerFactory loggerFactory)
    {
        _loggerA = loggerFactory.GetLogger(typeof(MyComponent));
        _loggerB = loggerFactory.GetLogger("MyComponent");
        _loggerC = loggerFactory.GetLogger<MyComponent>();
    }
    // even more elegant
    public MyComponent(
    ILoggerFactory<MyComponent> loggerFactoryA,
        ILoggerFactory<MyComponent> loggerFactoryB)
    {
        _loggerA = loggerFactoryA.GetLogger();
        _loggerB = loggerFactoryB.GetLogger();
    }
    

We decided to go with the second approach, which is the simpler solution. It needs a dependency injection container that supports open generics, like Ninject, Autofac or LightCore.

    Implementing the LoggerFactory

    Using Ninject the binding of open generics looks like this:

    Bind(typeof(ILoggerFactory<>)).To(typeof(LoggerFactory<>)).InSingletonScope();
    

This binding creates an instance of LoggerFactory&lt;T&gt; using the requested generic argument. If I request an ILoggerFactory&lt;HomeController&gt;, Ninject creates an instance of LoggerFactory&lt;HomeController&gt;.

We register this as a singleton to reuse the ILog instances, just as the usual approach of creating the ILog instance in a private static variable does.
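
A usage sketch based on the interfaces above (the controller name is illustrative): a component simply asks for its own factory and Ninject supplies the matching closed generic.

public class HomeController : Controller
{
    private readonly ILog _logger;

    // Ninject resolves ILoggerFactory<HomeController> to LoggerFactory<HomeController>
    public HomeController(ILoggerFactory<HomeController> loggerFactory)
    {
        _logger = loggerFactory.GetLogger();
    }
}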

    The implementation of the LoggerFactory is pretty easy. We use the generic argument to create the log4net ILog instance:

public interface ILoggerFactory<T>
{
    ILog GetLogger();
}

public class LoggerFactory<T> : ILoggerFactory<T>
{
    private ILog _logger;

    public ILog GetLogger()
    {
        if (_logger == null)
        {
            // create and cache the logger named after the generic argument
            _logger = LogManager.GetLogger(typeof(T));
        }
        return _logger;
    }
}
    

The factory checks whether the logger was already created before creating a new one. Because Ninject creates a separate LoggerFactory instance per generic argument, the LoggerFactory doesn't need to care about different loggers. It just stores a single specific logger.

    Conclusion

    Now we are able to create one or more named loggers per component.

What we cannot do with this approach is create individually named loggers using an arbitrary string as the name. A type is needed to pass as the generic argument, so every time we need an individually named logger, we need to create a specific type. In our case this is not a big problem.

If you don't like creating types just to get individually named loggers, feel free to implement a non-generic LoggerFactory with a generic GetLogger method as well as a GetLogger method that accepts strings as logger names. A sketch of this variant follows below.
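
Such a non-generic variant could look roughly like this (a sketch, not from the original post; the cache mirrors the singleton behavior of the registration above):

using System.Collections.Concurrent;
using log4net;

public interface ILoggerFactory
{
    ILog GetLogger<T>();
    ILog GetLogger(string name);
}

public class LoggerFactory : ILoggerFactory
{
    // cache created loggers per name, so each named logger exists only once
    private readonly ConcurrentDictionary<string, ILog> _loggers =
        new ConcurrentDictionary<string, ILog>();

    public ILog GetLogger<T>() => GetLogger(typeof(T).FullName);

    public ILog GetLogger(string name) =>
        _loggers.GetOrAdd(name, LogManager.GetLogger);
}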

    Jürgen Gutsch: Creating Dummy Data Using GenFu

Two years ago I already wrote about playing around with GenFu, and I still use it now, as mentioned in that post. When I do a demo, or when I write blog posts and articles, I often need dummy data, and I use GenFu to create it. But every time I use it in a talk or a demo, somebody still asks me a question about it.

Actually, I had completely forgotten about that blog post and decided to write about it again this morning because of the questions I got. Almost accidentally I stumbled upon this "old" post.

I won't create a new one. No worries ;-) Because of the questions I just want to push this topic to the top a little bit:

    Playing around with GenFu

    GenFu on GitHub

    PM> Install-Package GenFu
    

    Read about it, grab it and use it!
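
For a quick impression, a minimal usage sketch (the Person class and its properties are illustrative; GenFu fills properties with plausible values based on their names):

using System;
using System.Collections.Generic;
using GenFu;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        Person one = A.New<Person>();             // a single person with plausible values
        List<Person> many = A.ListOf<Person>(25); // 25 dummy persons at once

        Console.WriteLine($"{one.FirstName} {one.LastName} <{one.Email}>");
    }
}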

It is one of the most time-saving tools ever :)

Holger Schwichtenberg: The Windows update endless loop and Microsoft support

Windows 10 Update 1709 won't install, and Microsoft support has no solution either, or rather doesn't put much effort into finding one.
