Category Archives: UWP

Machine Learning with ML.NET in UWP: Feature Correlation Analysis

In this article we show how to perform Feature Correlation Analysis and display the results in a heat map, in the context of Machine Learning in UWP. It's the fourth in a series that started here, on implementing Machine Learning scenarios in UWP using Open Source frameworks and components such as ML.NET, OxyPlot, and Math.NET.

All articles in the series revolve around a single UWP sample app that lives here on GitHub. Here's what the Feature Correlation Analysis page looks like:

HeatMap

It displays the correlation between different properties in the popular Titanic passengers dataset: age, fare, ticket class, whether the passenger was accompanied by siblings, spouses, parents or children, and whether he or she survived the trip.

The darker red or blue squares on the heat map indicate that the corresponding properties on the X and Y axes have a higher correlation with each other. Higher correlation is a warning sign for a possible negative impact on the classification model if both features are added to the training data.

Feature Correlation Analysis

Feature Correlation Analysis in Machine Learning

The topic of this article is Feature Correlation Analysis. Just like in the previous article (Feature Distribution Analysis), we are in the "data preparation" phase of a Machine Learning scenario. We're not training or even defining models yet; we're selecting the features to train them with. An ideal feature set contains features that are highly correlated with the classification (in ML.NET terminology: the Label Column), yet uncorrelated to each other.

Identifying the right feature set has a major impact on the quality and performance of the subsequent learning and generalization steps. Here are two important reasons not to keep on adding feature columns to a training data set:

  • while the predictive power of a classifier first increases with the number of dimensions/features used, there comes a break point where it decreases again (the so-called Curse of Dimensionality). The training data set is always a finite set of samples with discrete values, while the prediction space may be infinite and continuous in all dimensions. So the more features a data set with a fixed number of samples has, the less representative it may become.
  • there's also the cost incurred by adding features. Two features that are highly correlated with each other don't add much value to a classifier, but they sure add cost at training, persisting and/or inference time. Machine Learning activities can be pretty resource intensive on CPU, memory, and elapsed time, so it makes sense to limit the number of features.

The process of converting a set of observations of possibly correlated variables into a smaller set of values of linearly uncorrelated variables is called Principal Component Analysis. This technique was invented by Karl Pearson, the same person that defined its main instrument: the Pearson correlation coefficient – a measure of the linear correlation between two variables X and Y. Pearson’s correlation coefficient is the covariance of the two variables divided by the product of their standard deviations. It has a value between +1 and -1, where 1 is total positive linear correlation, 0 is no linear correlation, and -1 is total negative linear correlation.

During Principal Component Analysis a matrix is calculated with the correlation between each pair of features. Highly correlated feature pairs are then ‘sanitized’ by removing one or combining both (e.g. by creating a new feature by multiplying the variables’ values). At the same time you also calculate the correlation between each attribute and the output variable (the Label), to select only those attributes that have a moderate-to-high positive or negative correlation (close to -1 or 1) and drop those attributes with a low correlation (value close to zero).
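To make the combination step concrete, here's a minimal sketch of such a feature combination, with purely hypothetical values and column names (this is not code from the sample app):

using System.Collections.Generic;
using System.Linq;

// Hypothetical values for two highly correlated columns.
var fare = new List<double> { 7.25, 71.28, 7.92, 53.10 };
var pclass = new List<double> { 3, 1, 3, 1 };

// Combine them into a single derived feature and drop the originals afterwards.
var combined = fare.Zip(pclass, (f, p) => f * p).ToList();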

Feature Correlation Analysis with ML.NET and Math.NET

Data Preparation is outside the core business of ML.NET itself, but for retrieving and manipulating the candidate training data we can count on one of its most important spin-off components: the DataView API.

DataViewCentric

We can fetch the samples, optionally filter them and fill in missing data, and then pivot the result into arrays of feature values (exactly what we did in the previous article). All we need is a TextLoader to create an IDataView from the data set, get the column names from its Schema, call GetColumn() to get each array, and 'upgrade' the data type to double:

var reader = new TextLoader(_mlContext,
                            new TextLoader.Arguments()
                            {
                                Separator = ",",
                                HasHeader = true,
                                Column = new[]
                                    {
                                    new TextLoader.Column("Survived", DataKind.R4, 1),
                                    new TextLoader.Column("PClass", DataKind.R4, 2),
                                    new TextLoader.Column("Age", DataKind.R4, 5),
                                    new TextLoader.Column("SibSp", DataKind.R4, 6),
                                    new TextLoader.Column("Parch", DataKind.R4, 7),
                                    new TextLoader.Column("Fare", DataKind.R4, 9)
                                    }
                            });
var dataView = reader.Read(src);
var result = new List<List<double>>();
for (int i = 0; i < dataView.Schema.ColumnCount; i++)
{
    var columnName = dataView.Schema.GetColumnName(i);
    result.Add(dataView.GetColumn<float>(_mlContext, columnName).Select(f => (double)f).ToList());
}

return result;
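For context: the _mlContext (in v0.6 still a LocalEnvironment rather than an MLContext) and the src file handle used above are created the same way as in the other articles of this series. Roughly like this (the csv file name is just a placeholder):

var _mlContext = new LocalEnvironment(seed: null); // the v0.6 predecessor of MLContext

// Copy the packaged data set to local storage and open it through the ML.NET environment.
var trainingDataPath = await MlDotNet.FilePath(@"ms-appx:///Data/titanic.csv"); // placeholder file name
var file = _mlContext.OpenInputFile(trainingDataPath);
var src = new FileHandleSource(file);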

Now that we have all the feature values in arrays, it's time to calculate the correlations. The MathNet.Numerics.Statistics.Correlation class from Math.NET hosts implementations of Pearson, Spearman, and other correlation calculations.

We decided to make a copy of the code that calculates the Pearson correlation between two IEnumerable<double> instances:

/// <summary>
/// Computes the Pearson Product-Moment Correlation coefficient.
/// </summary>
/// <param name="dataA">Sample data A.</param>
/// <param name="dataB">Sample data B.</param>
/// <returns>The Pearson product-moment correlation coefficient.</returns>
/// <remarks>Original Source: https://github.com/mathnet/mathnet-numerics/blob/master/src/Numerics/Statistics/Correlation.cs </remarks>
public static double Pearson(IEnumerable<double> dataA, IEnumerable<double> dataB)
{
    var n = 0;
    var r = 0.0;

    var meanA = 0d;
    var meanB = 0d;
    var varA = 0d;
    var varB = 0d;

    using (IEnumerator<double> ieA = dataA.GetEnumerator())
    using (IEnumerator<double> ieB = dataB.GetEnumerator())
    {
        while (ieA.MoveNext())
        {
            if (!ieB.MoveNext())
            {
                throw new ArgumentOutOfRangeException(nameof(dataB), "Array too short.");
            }

            var currentA = ieA.Current;
            var currentB = ieB.Current;

            var deltaA = currentA - meanA;
            var scaleDeltaA = deltaA / ++n;

            var deltaB = currentB - meanB;
            var scaleDeltaB = deltaB / n;

            meanA += scaleDeltaA;
            meanB += scaleDeltaB;

            varA += scaleDeltaA * deltaA * (n - 1);
            varB += scaleDeltaB * deltaB * (n - 1);
            r += (deltaA * deltaB * (n - 1)) / n;
        }

        if (ieB.MoveNext())
        {
            throw new ArgumentOutOfRangeException(nameof(dataA), "Array too short.");
        }
    }

    return r / Math.Sqrt(varA * varB);
}

For the sake of completeness: that same Math.NET class also hosts code to calculate the whole matrix. For the sample app this would require importing a lot more code (linear algebra classes such as Matrix) or adding the whole NuGet package to the project.
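For those who do add the full Math.NET Numerics package, the whole matrix becomes a one-liner. A sketch, assuming the Correlation.PearsonMatrix overload that takes a set of arrays:

using MathNet.Numerics.Statistics;
using System.Linq;

// 'columns' is the List<List<double>> of feature values produced by the DataView code above.
var correlationMatrix = Correlation.PearsonMatrix(columns.Select(c => c.ToArray()).ToArray());
// correlationMatrix is a symmetric Matrix<double> with the Pearson correlation of every column pair.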

Here’s how the sample app calculates the whole correlation matrix:

// Read data
var matrix = await ViewModel.LoadCorrelationData();

// Populate diagram
var data = new double[6, 6];
for (int x = 0; x < 6; ++x)
{
    for (int y = 0; y < 5 - x; ++y)
    {
        var seriesA = matrix[x];
        var seriesB = matrix[5 - y];

        // Pearson correlation for this feature pair.
        var value = Statistics.Pearson(seriesA, seriesB);

        // Correlation is commutative, so write the value to both halves of the matrix.
        data[x, y] = value;
        data[5 - y, 5 - x] = value;
    }

    // The visual diagonal: the full positive correlation between each feature and itself.
    data[x, 5 - x] = 1;
}

All we now need is a way to properly visualize it.

Correlation Heat Maps

A heat map is a representation of data in which the values are represented by colors. Heat maps are ideal for highlighting patterns and extreme values in rectangular data such as matrices.

Correlation Heat Maps in Machine Learning

In Machine Learning, heat maps are used to display correlations between feature values. The typical (“Pearson”) color scheme is a gradient that goes from

  • red for high positive correlation (value +1), over
  • white for no correlation (value 0), to
  • blue for high negative correlation (value –1).

Sometimes the values are normalized (brought to a range from 0 to 1), like in the next image. When there are a lot of features, the value labels are omitted from the diagram. Since correlation is commutative (the correlation between A and B is the same as the correlation between B and A), it suffices to display only half of the matrix, like this:

TriangularHeatMap

This diagram also omits the correlations on the diagonal: the red squares that would indicate the full positive correlation between each feature and itself.

Here's an example of a diagram (from here) showing fewer features. It's common to display the whole matrix and the value labels:

heatmap_2

The above diagram was created with Python (Pandas and Seaborn) and shows the correlation between all the numerical values in the already mentioned Titanic Passengers dataset.

Here’s the UWP sample app version of the very same diagram, calculated with ML.NET and Math.NET and visualized with OxyPlot:

HeatMapZoom

The small differences in correlations for the Age feature are caused by the sample app not compensating for missing values. Now that we have the matrix, let's plot the diagram.

Correlation Heat Maps with OxyPlot

To draw an OxyPlot diagram, you start by placing a PlotView element in your XAML:

<oxy:PlotView x:Name="Diagram"
                Background="Transparent"
                BorderThickness="0" />

Then you can declaratively or programmatically decorate it with a PlotModel and different Axis instances. A correlation heat map uses a CategoryAxis in both dimensions:

plotModel.Axes.Add(new CategoryAxis
{
    Position = AxisPosition.Bottom,
    Key = "HorizontalAxis",
    ItemsSource = new[]
    {
        "Survived",
        "Class",
        "Age",
        "Sib / Sp",
        "Par / Chi",
        "Fare"
    },
    TextColor = foreground,
    TicklineColor = foreground,
    TitleColor = foreground
});

plotModel.Axes.Add(new CategoryAxis
{
    Position = AxisPosition.Left,
    Key = "VerticalAxis",
    ItemsSource = new[]
    {
        "Fare",
        "Parents / Children",
        "Siblings / Spouses",
        "Age",
        "Class",
        "Survived"
    },
    TextColor = foreground,
    TicklineColor = foreground,
    TitleColor = foreground
});

The legend on top of the diagram is an extra LinearColorAxis in the appropriate OxyPalette:

plotModel.Axes.Add(new LinearColorAxis
{
    // Pearson color scheme from blue over white to red.
    Palette = OxyPalettes.BlueWhiteRed31,
    Position = AxisPosition.Top,
    Minimum = -1,
    Maximum = 1,
    TicklineColor = OxyColors.Transparent
});

If you’re not entirely satisfied with the color scheme, feel free to create your own custom OxyPalette instance: it’s just a 3-color gradient.
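Such a custom palette could look roughly like this (a sketch; the colors and the number of interpolation steps are arbitrary):

// A custom three-color gradient from blue over white to red,
// similar to what the built-in BlueWhiteRed31 palette does.
var customPalette = OxyPalette.Interpolate(31, OxyColors.MidnightBlue, OxyColors.White, OxyColors.Firebrick);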

The matrix itself is a HeatMapSeries with 6 values in each dimension, rendered as rectangles:

var heatMapSeries = new HeatMapSeries
{
    X0 = 0,
    X1 = 5,
    Y0 = 0,
    Y1 = 5,
    XAxisKey = "HorizontalAxis",
    YAxisKey = "VerticalAxis",
    RenderMethod = HeatMapRenderMethod.Rectangles,
    LabelFontSize = 0.12,
    LabelFormatString = ".00"
};

plotModel.Series.Add(heatMapSeries);

Diagram.Model = plotModel;

To display the label in each square you need to set the LabelFormatString. It will only be applied if you also set a value for LabelFontSize.

OxyPlot does not support the triangular version of the heat map. Missing values always get the default value and you can’t make them transparent:

HalfMap

To populate the diagram, assign the Data to the series, and refresh the plot:

(plotModel.Series[0] as HeatMapSeries).Data = data;

// Update diagram
Diagram.InvalidatePlot();

Again, here’s the resulting heat map in the sample app:

HeatMap

Interpretation

The dark blue square on the diagram reveals a relatively high negative correlation between Passenger Class and Ticket Fare. This means that the value of one can largely be derived from the other – think "first class tickets cost more than second class tickets". Adding both as features would not add much more value than adding just one of them.

Data scientists would probably extract a new feature from these two (something like "Luxury") or would break up "Passenger Class" and "Ticket Fare" into more basic components, like locations on the ship that passengers had access to. Anyway, the heat map clearly highlights the feature combinations that need further analysis.

Source

In this article we used components from ML.NET, Math.NET and OxyPlot to calculate and visualize the correlation heat map on candidate training data for a classification model.

The UWP sample app hosts more Machine Learning scenarios. It lives here on GitHub.

Enjoy!


Machine Learning with ML.NET in UWP: Feature Distribution Analysis

Welcome to the third article in this series on implementing Machine Learning scenarios with Open Source technologies in UWP apps. We were using ML.NET for modeling and OxyPlot for data visualization, and we will continue to do so. For this article we added Math.NET to our shopping basket, to calculate some statistics that are important when analyzing the input data. We will analyze the distribution of values for columns in the candidate model training data to detect whether these columns would be useful as features, whether they need filtering, or whether they should be ignored altogether.

Feature Analysis in Machine Learning

When it comes to the 'Garbage in – Garbage out' principle, Machine Learning is no different from any other process. Before training a model, data scientists perform Feature Analysis to identify the features that are most useful in solving the problem. This includes steps such as:

  • analyzing the distribution of values,
  • looking for missing data and deciding whether to ignore, replace, or reject it,
  • analyzing correlation between columns, and
  • combining columns into new candidate features (so-called Feature Engineering).

To illustrate why Feature Analysis is important, just take a look at the following diagram that shows the predicted NBA player salary (red) versus the real salary (blue) in the Regression page of our sample app:

Regression

It's pretty clear that the trained algorithm is not very useful: it does not perform significantly better than a random prediction. Next to the vertical axis you can even see the model predicting negative salaries. This poor performance is partly because we did not do any analysis before feeding the model raw data.

Feature Analysis in ML.NET

Although the data preparation step is not the core business of ML.NET, we still have good news. The announcement of ML.NET 0.10.0 revealed that IDataView will become a shared type across libraries in the .NET ecosystem. If you dive into the IDataView Design Principles you'll observe that the DataView can be a core component in analyzing raw data sets, since it allows

  • cursoring,
  • large data,
  • caching,
  • peeking,
  • and so on.

In this article we will use IDataView properties and (extension) methods to stay as close as possible to the following schema:

DataViewCentric

Introducing the Box Plot

The box plot (a.k.a. box and whisker diagram) is a standardized way of displaying the distribution of data. It is based on the five number summary:

  • minimum,
  • first quartile,
  • median,
  • third quartile, and
  • maximum.

In the simplest box plot the central rectangle spans the first quartile to the third quartile: the interquartile range or IQR – the likely range of variation. A segment inside the rectangle shows the median (the typical value) and "whiskers" above and below the box show the locations of the minimum and maximum. The so-called Tukey boxplot uses the lowest value still within 1.5 IQR of the lower quartile and the highest value still within 1.5 IQR of the upper quartile as minimum and maximum respectively. It represents the values outside that boundary (the extreme values) as 'outlier' dots:

BoxPlotParts

The box plot can tell you about your outliers and what their values are. It can also tell you if your data is symmetrical, how tightly your data is grouped, and if and how your data is skewed.

Box plots may reveal whether or not you should use all of the values for training the model, and which algorithm you should prefer (some algorithms assume a symmetrical distribution). Here’s an article with more details on how to interpret box plots.

Box plot in OxyPlot

Most of the data visualization frameworks for UWP support box plots, and OxyPlot is no exception. All you need to do is insert a PlotView control in your XAML:

<oxy:PlotView 
	x:Name="RegressionDiagram"
	Background="Transparent"
	BorderThickness="0" />

The control's PlotModel is populated with a BoxPlotSeries that is displayed against a CategoryAxis for the property names and a LinearAxis for the values. Check the previous articles in this blog series on how to define a model and its axes in XAML and C#.
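As a quick reminder, the programmatic version could look roughly like this (a sketch; the category names shown here are the columns of the Clustering data set loaded further below):

var plotModel = new PlotModel();

// One category per analyzed column.
plotModel.Axes.Add(new CategoryAxis
{
    Position = AxisPosition.Left,
    ItemsSource = new[] { "Age", "AnnualIncome", "SpendingScore" }
});

// A linear axis for the raw values.
plotModel.Axes.Add(new LinearAxis { Position = AxisPosition.Bottom });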

In our sample app we wanted to highlight the distributions having outliers. We added two series to the model – each with a different color:

var cleanSeries = new BoxPlotSeries
{
    Stroke = foreground,
    Fill = OxyColors.DarkOrange
};
plotModel.Series.Add(cleanSeries);

var outlierSeries = new BoxPlotSeries
{
    Stroke = foreground,
    Fill = OxyColors.Firebrick,
    OutlierSize = 4
};
plotModel.Series.Add(outlierSeries);

The next step is to provide BoxPlotItem instances for the series, but first we need to calculate these.

Enter Math.NET.

Boxplot in Math.NET

Math.NET is an Open Source C# library covering fundamental mathematics. Its code is distributed over several NuGet packages covering domains such as numerical computing, algebra, signal processing, and geometry. Math.NET Numerics is the package that contains the functions from descriptive statistics that we're interested in. It targets .NET 4.0 and higher, including Mono and .NET Standard 1.3 and higher. However, the sample app does not use the NuGet package. Because the source code is clean and very well structured -what else did you expect from mathematicians- it was easy to identify and grab just the calculations that we wanted, so that's what we did.

The SortedArrayStatistics class contains all the functions we need (Median, Quartiles, Quantiles) and more.

Boxplot in the Sample App

To draw a box plot, we first need to get the data. ML.NET uses a TextLoader for this:

var trainingDataPath = await MlDotNet.FilePath(@"ms-appx:///Data/Mall_Customers.csv");
var reader = new TextLoader(_mlContext,
                            new TextLoader.Arguments()
                            {
                                Separator = ",",
                                HasHeader = true,
                                Column = new[]
                                    {
                                    new TextLoader.Column("Age", DataKind.R4, 2),
                                    new TextLoader.Column("AnnualIncome", DataKind.R4, 3),
                                    new TextLoader.Column("SpendingScore", DataKind.R4, 4),
                                    }
                            });

var file = _mlContext.OpenInputFile(trainingDataPath);
var src = new FileHandleSource(file);
var dataView = reader.Read(src);

The result of the Read() method is an IDataView tabular structure. We can query its Schema to find out column names, and with the GetColumn() extension method we can fetch all values for the specified column.

This is how the sample app basically pivots the data view from a list of rows to a list of columns:

var result = new List<List<double>>();
for (int i = 0; i < dataView.Schema.ColumnCount; i++)
{
    var columnName = dataView.Schema.GetColumnName(i);
    result.Add(dataView
	.GetColumn<float>(_mlContext, columnName)
	.Select(f => (double)f)
	.ToList());
}

return result;

Notice that we switched from float (the low-memory favorite type in ML.NET) to double (the high-precision favorite type in Math.NET).

The array of column values is used to build a BoxPlotItem to be added to the PlotModel:

// Read data
var regressionData = await ViewModel.LoadRegressionData();

// Populate diagram
for (int i = 0; i < regressionData.Count; i++)
{
    AddItem(plotModel, regressionData[i], i);
}

Here's the code to calculate all the box plot constituents. Remember to sort the array first, since we rely on Math.NET's Sorted Array Statistics here:

values.Sort();
var sorted = values.ToArray();

// Box: Q1, Q2, Q3
var median = sorted.Median();
var firstQuartile = sorted.LowerQuartile();
var thirdQuartile = sorted.UpperQuartile();

// Whiskers
var interQuartileRange = thirdQuartile - firstQuartile;
var step = interQuartileRange * 1.5;
var upperWhisker = thirdQuartile + step;
upperWhisker = sorted.Where(v => v <= upperWhisker).Max();
var lowerWhisker = firstQuartile - step;
lowerWhisker = sorted.Where(v => v >= lowerWhisker).Min();

// Outliers
var outliers = sorted.Where(v => v < lowerWhisker || v > upperWhisker).ToList();

Here’s the creation of the OxyPlot box plot item itself. The first parameter refers to the category index:

var item = new BoxPlotItem(
    x: slot,
    lowerWhisker: lowerWhisker,
    boxBottom: firstQuartile,
    median: median,
    boxTop: thirdQuartile,
    upperWhisker: upperWhisker)
{
    Outliers = outliers
};

In the following code snippet we assign the new item to one of the two series (with and without outliers) to obtain the wanted color scheme:

if (outliers.Any())
{
    (plotModel.Series[1] as BoxPlotSeries).Items.Add(item);
}
else
{
    (plotModel.Series[0] as BoxPlotSeries).Items.Add(item);
}

This is what the final result looks like. The diagram on the left shows the raw data for the Regression sample in the app. Notice that all properties are distributed asymmetrically and come with lots of outliers. That's not a solid base for creating a prediction model:

BoxPlot

The diagram on the right shows the data for the Clustering sample. This was our very first ML.NET project so we decided to use a proven and cleaned data set, and this shows in the box plot. For the sake of completeness, here’s that Clustering sample again. Its prediction model works very well:

Clustering

One more thing about the box plot. When you right-click a shape on the diagram, OxyPlot shows its tracker with the details:

FeatureDistributionAnalysis_Tracker

When outliers are identified in the analysis, you may decide to skip these when training the model, using FilterByColumn(). Check this sample code for more details.
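In recent ML.NET versions that kind of row filter is part of the standard API under the name FilterRowsByColumn. A rough sketch with the newer API (so not the v0.6 API used in the rest of this article):

// Keep only the rows whose Age lies between the whiskers calculated above.
var mlContext = new MLContext();
var filtered = mlContext.Data.FilterRowsByColumn(dataView, "Age",
    lowerBound: lowerWhisker, upperBound: upperWhisker);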

Source

In this article we demonstrated how to build box plot diagrams in UWP, using ML.NET, Math.NET and OxyPlot. Even though ML.NET does not target Feature Analysis, its IDataView API is very helpful in getting the column data.

The sample app lives here on GitHub.

Enjoy!

Machine Learning with ML.NET in UWP: Multiclass Classification

This is the second in a series of articles on implementing Machine Learning scenarios with ML.NET and OxyPlot in UWP apps. If you’re looking for an introduction to these technologies, please check part one of this series. In this article we will build, train, evaluate, and consume a multiclass classification model to detect the language of a piece of text.

All blog posts in this series are based on a single sample app that lives here on GitHub.

Classification

Classification in Machine Learning

Classification is a technique from supervised learning to categorize data into a desired number of labeled classes. In binary classification the prediction yields one of two possible outcomes (basically solving 'true or false' problems). This article however focuses on multiclass classification, where the prediction model has two or more possible outcomes.

Here are some real-world classification scenarios:

  • road sign detection in self-driving cars,
  • spoken language understanding,
  • market segmentation (predict if a customer will respond to a marketing campaign), and
  • classification of proteins according to their function.

There’s a wide range of multiclass classification algorithms available. Here are the most used ones:

  • k-Nearest Neighbors learns by example. The model is a so-called lazy one: it just stores the training data, all computation is deferred. At prediction time it looks up the k closest training samples. It’s very effective on small training sets, like in the face recognition on your mobile phone.
  • Naive Bayes is a family of algorithms that use principles from the field of probability theory and statistics. It is popular in text categorization and medical diagnosis.
  • Regression involves fitting a curve to numeric data. When used for classification, the resulting numerical value must be transformed back into a label. Regression algorithms have been used to identify future risks for patients, and to predict voting intent.
  • Classification Trees and Forests use flowchart-like structures to make decisions. This family of algorithms is particularly useful when transparency is needed, e.g. in loan approval or fraud detection.
  • A set of Binary Classification algorithms can be made to work together to form a multiclass classifier using a technique called ‘One-versus-All’ (OVA).

If you want to know more about classification then check this straightforward article. It is richly illustrated with Chris Albon’s awesome flash cards like this one:

FlashCard

Classification in ML.NET

ML.NET covers all the major algorithm families and more with multiclass classification learners such as the StochasticDualCoordinateAscentClassifier, the LogisticRegressionClassifier, and the NaiveBayesClassifier that appear later in this article, plus One-versus-All wrappers around its binary classifiers.

The API allows you to implement the following flow:

MachineLearningSteps

When we dive into the code, you’ll recognize that same pattern.

Building a Language Recognizer

The case

In this article we’ll build and use a model to detect the language of a piece of text from a set of languages. The model will be trained to recognize English, German, French, Italian, Spanish, and Romanian. The training and evaluation datasets (and a lot of the code) are borrowed from this project by Dirk Bahle.

Safety Instructions

In the previous article, we explained why we’re using v0.6 of the ML.NET API instead of the current one (v0.9). There is some work to be done by different Microsoft teams to adjust the UWP/.NET Core/ML.NET components to one another. 

The sample app works pretty well, as long as you comply with the following safety instructions:

  • don’t upgrade the ML.NET NuGet package,
  • don’t run the app in Release mode, and
  • always bend your knees not your back when lifting heavy stuff.

In the last couple of iterations the ML.NET team has been upgrading its API from the original Microsoft internal .NET 1.0 code to one that is on par with other Machine Learning frameworks. The difference is huge! A lot of the v0.6 classes that you encounter in this sample are now living in the Legacy namespace or were even removed from the package.

As far as possible we'll try to point the hyperlinks in this article to the corresponding members in the newer API. The documentation on older versions is continuously being cleaned up, and we don't want you to end up on this page:

DeletedDocs

If you want to know what multiclass classification looks like in the newest API, then check this official sample.

Alternative Strategy

We can imagine that some of you don't want to wait for all pieces of the technical puzzle to come together, or are reluctant to use ML.NET in UWP. Allow us to promote an alternative approach. WinML is an inference engine to use trained local ONNX machine learning models in your Windows apps. Not all end user (UWP) apps are interested in model training – they only want to use a model for running predictions. You can build, train, and evaluate a Machine Learning model in a C# console app with ML.NET, then save it as ONNX with this converter, then load and consume it in a UWP app with WinML:

onnx-diagram-v03

The ML.NET console app can be packaged, deployed and executed as part of your UWP app by including it as a full trust desktop extension. In this configuration the whole solution can even be shipped to the store.

The Code

A Lottie-driven busy indicator

Depending on the algorithm family, training and using a machine learning model can be CPU intensive and time consuming. To entertain the end user during these processes and to verify that they do not block the UI, we added an extra element to the page. A UWP Lottie animation will play the role of a busy indicator:

<lottie:LottieAnimationView 
	x:Name="BusyIndicator"
	FileName="Assets/loading.json"
	Visibility="Collapsed" />

When the load-build-train-test-save-consume scenario starts, the image will become visible and we start the animation:

BusyIndicator.Visibility = Windows.UI.Xaml.Visibility.Visible;
BusyIndicator.PlayAnimation();

Here's what this looks like:

Lottie

When the action stops, we hide the control and pause the animation:

BusyIndicator.Visibility = Windows.UI.Xaml.Visibility.Collapsed;
BusyIndicator.PauseAnimation();

As explained in the previous article, we moved all machine learning model processing off the main UI thread by making it awaitable:

public Task Train()
{
    return Task.Run(() =>
    {
        _model.Train();
    });
}

Load data

The training dataset is a TAB separated value file with the labeled input data: an integer corresponding to the language, and some text:

RawData

The input data is modeled through a small class. We use the Column attribute to indicate the column sequence number in the file, and to assign the special names for the algorithm. Supervised learning algorithms always expect a "Label" column in the input:

public class MulticlassClassificationData
{
    [Column(ordinal: "0", name: "Label")]
    public float LanguageClass;

    [Column(ordinal: "1")]
    public string Text;

    public MulticlassClassificationData(string text)
    {
        Text = text;
    }
}

The output of the classification model is a prediction that contains the predicted language (as a float – just like the input) and the confidence percentages for all languages. We used the ColumnName attribute to link the class members to these output columns:

public class MulticlassClassificationPrediction
{
    private readonly string[] classNames = { "German", "English", "French", "Italian", "Romanian", "Spanish" };

    [ColumnName("PredictedLabel")]
    public float Class;

    [ColumnName("Score")]
    public float[] Distances;

    public string PredictedLanguage => classNames[(int)Class];

    public int Confidence => (int)(Distances[(int)Class] * 100);
}

The MVVM Model has properties to store the untrained model and the trained model, respectively a LearningPipeline and a PredictionModel:

public LearningPipeline Pipeline { get; private set; }

public PredictionModel<MulticlassClassificationData, MulticlassClassificationPrediction> Model { get; private set; }

We used the ‘classic’ text loader from the Legacy namespace to load the data sets, so watch the using statement:

using TextLoader = Microsoft.ML.Legacy.Data.TextLoader;

The first step in the learning pipeline is loading the raw data:

Pipeline = new LearningPipeline();
Pipeline.Add(new TextLoader(trainingDataPath).CreateFrom<MulticlassClassificationData>());

Extract features

To prepare the data for the classifier, we need to manipulate both incoming fields. The label does not represent a numerical series but a language. So with a Dictionarizer we create a ‘bucket’ for each language to hold the texts. The TextFeaturizer populates the Features column with a numeric vector that represents the text:

// Create a dictionary for the languages. (no pun intended)
Pipeline.Add(new Dictionarizer("Label"));

// Transform the text into a feature vector.
Pipeline.Add(new TextFeaturizer("Features", "Text"));

Train model

Now that the data is prepared, we can hook the classifier into the pipeline. As already mentioned, there are multiple candidate algorithms here:

// Main algorithm
Pipeline.Add(new StochasticDualCoordinateAscentClassifier());
// or
// Pipeline.Add(new LogisticRegressionClassifier());
// or
// Pipeline.Add(new NaiveBayesClassifier()); // yields weird metrics...

The predicted label is a vector, but we want one of our original input labels back – to map it to a language. The PredictedLabelColumnOriginalValueConverter does this:

// Convert the predicted value back into a language.
Pipeline.Add(new PredictedLabelColumnOriginalValueConverter()
    {
        PredictedLabelColumn = "PredictedLabel"
    }
);

The learning pipeline is complete now. We can train the model:

public void Train()
{
    Model = Pipeline.Train<MulticlassClassificationData, MulticlassClassificationPrediction>();
}

The trained machine learning model can be saved now:

public void Save(string modelName)
{
    var storageFolder = ApplicationData.Current.LocalFolder;
    using (var fs = new FileStream(
        Path.Combine(storageFolder.Path, modelName),
        FileMode.Create,
        FileAccess.Write,
        FileShare.Write))
    {
        // Wait for the write to complete before the stream is disposed.
        Model.WriteAsync(fs).Wait();
    }
}

Evaluate model

In supervised learning you can evaluate a trained model by providing a labeled input test data set and see how the predictions compare against it. This gives you an idea of the accuracy of the model and indicates whether you need to retrain it with other parameters or another algorithm.

We create a ClassificationEvaluator for this, and inspect the ClassificationMetrics that return from the Evaluate() call:

public ClassificationMetrics Evaluate(string testDataPath)
{
    var testData = new TextLoader(testDataPath).CreateFrom<MulticlassClassificationData>();

    var evaluator = new ClassificationEvaluator();
    return evaluator.Evaluate(Model, testData);
}

Some of the returned metrics apply to the whole model, some are calculated per label (language). The following diagram presents the Logarithmic Loss of the classifier per language (the PerClassLogLoss field). Loss represents a degree of uncertainty, so lower values are better:

MulticlassClassificationStart

Observe that some languages are harder to detect than others.
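As a reminder of what the metric means: the log loss is the negative average of the logarithm of the probability that the model assigned to the correct class of each test sample. A minimal sketch of that formula in code:

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: log loss over the probabilities the model assigned
// to the *correct* class of each test sample. Perfect predictions (p = 1) yield 0.
static double LogLoss(IEnumerable<double> correctClassProbabilities) =>
    -correctClassProbabilities.Average(p => Math.Log(p));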

Model consumption

The Predict() call takes a piece of text and returns a prediction:

public MulticlassClassificationPrediction Predict(string text)
{
    return Model.Predict(new MulticlassClassificationData(text));
}
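A hypothetical call, just to show the shape of the result (the actual confidence depends on the trained model):

var prediction = Predict("Quel est le sens de la vie?");
// prediction.PredictedLanguage would be "French",
// prediction.Confidence the corresponding score as a percentage.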

The prediction contains the predicted language and a set of scores for each language. Here’s what we do with this information in the sample app:

MulticlassClassification

We are pretty impressed to see how easy it is to build a reliable detector for 6 languages. The trained model would definitely make sense in a lot of .NET applications that we developed in the last couple of years.

Visualizing the results

We decided to use OxyPlot for visualizing the data in the sample app, because it's lightweight and it draws all the graphs we need. In the previous article in this series we created all the elements programmatically. So this time we'll focus on the XAML.

Axes and Series

Here’s the declaration of the PlotView with its PlotModel. The model has a CategoryAxis for the languages and a LinearAxis for the log-loss values. The values are represented in a BarSeries:

<oxy:PlotView x:Name="Diagram"
                Background="Transparent"
                BorderThickness="0"
                Margin="0 0 40 60"
                Grid.Column="1">
    <oxy:PlotView.Model>
        <oxyplot:PlotModel Subtitle="Model Quality"
                            PlotAreaBorderColor="{x:Bind OxyForeground}"
                            TextColor="{x:Bind OxyForeground}"
                            TitleColor="{x:Bind OxyForeground}"
                            SubtitleColor="{x:Bind OxyForeground}">
            <oxyplot:PlotModel.Axes>
                <axes:CategoryAxis Position="Left"
                                    ItemsSource="{x:Bind Languages}"
                                    TextColor="{x:Bind OxyForeground}"
                                    TicklineColor="{x:Bind OxyForeground}"
                                    TitleColor="{x:Bind OxyForeground}" />
                <axes:LinearAxis Position="Bottom"
                                    Title="Logarithmic loss per class (lower is better)"
                                    TextColor="{x:Bind OxyForeground}"
                                    TicklineColor="{x:Bind OxyForeground}"
                                    TitleColor="{x:Bind OxyForeground}" />
            </oxyplot:PlotModel.Axes>
            <oxyplot:PlotModel.Series>
                <series:BarSeries LabelPlacement="Inside"
                                    LabelFormatString="{}{0:0.00}"
                                    TextColor="{x:Bind OxyText}"
                                    FillColor="{x:Bind OxyFill}" />
            </oxyplot:PlotModel.Series>
        </oxyplot:PlotModel>
    </oxy:PlotView.Model>
</oxy:PlotView>

Apart from the OxyColor and OxyThickness values we were able to define the whole diagram in XAML. That's not too bad for a prerelease NuGet package…

When the page is loaded in the sample app, we fill out the missing declarations, and update the diagram’s UI:

var plotModel = Diagram.Model;
plotModel.PlotAreaBorderThickness = new OxyThickness(1, 0, 0, 1);
Diagram.InvalidatePlot();

Adding the data

After the evaluation of the classification model, we iterate through the quality metrics. We create a BarItem for each language. All items are then added to the series:

var bars = new List<BarItem>();
foreach (var logloss in metrics.PerClassLogLoss)
{
    bars.Add(new BarItem { Value = logloss });
}

(plotModel.Series[0] as BarSeries).ItemsSource = bars;
plotModel.InvalidatePlot(true);

The sample app

The sample app lives here on GitHub. We take the opportunity here to proudly mention that it is featured in the ML.NET Machine Learning Community gallery.

Enjoy!

Machine Learning with ML.NET in UWP: Clustering

This is the first in a series of articles on implementing Machine Learning scenarios in UWP apps. We will use

  • ML.NET for defining, training, evaluating and running Machine Learning models,
  • OxyPlot for visualizing the data, and
  • we’re planning to bring in Math.NET for number crunching – if needed.

All of these are cross platform Open Source technologies, all of these are written in C#, all of these are free, and all of these can be used on the UWP platform, albeit with some -hopefully temporary- restrictions.

Currently the large majority of the online samples on ML.NET are straightforward console apps. That's fine if you want to learn the API, but we want to figure out how ML.NET behaves in a more hostile enterprise-ish environment – where calculations should not block the UI, data should be visualized in sexy graphs, and architectural constraints may apply. With that in mind, we created a sample UWP MVVM app on GitHub, with pages covering different Machine Learning use cases, like

  • building and using models for clustering, classification, and regression, and
  • analyzing the input data that is used to train these models (that’s what data scientists call feature engineering).

Here’s a screenshot from that app:

MulticlassClassification

Machine Learning

Machine learning is a data science technique that allows computers to use existing data to forecast future behaviors, outcomes, and trends. Using machine learning, computers learn without being explicitly programmed. The forecasts and predictions are produced by so-called 'models' that are defined upfront, trained with test data, evaluated, persisted, and then called upon in client apps.

Typical Machine Learning implementations include (source: Princeton):

  • optical character recognition: categorize images of handwritten characters by the letters represented
  • face detection: find faces in images (or indicate if a face is present)
  • spam filtering: identify email messages as spam or non-spam
  • topic spotting: categorize news articles (say) as to whether they are about politics, sports, entertainment, etc.
  • spoken language understanding: within the context of a limited domain, determine the meaning of something uttered by a speaker to the extent that it can be classified into one of a fixed set of categories
  • medical diagnosis: diagnose a patient as a sufferer or non-sufferer of some disease
  • customer segmentation: predict, for instance, which customers will respond to a particular promotion
  • fraud detection: identify credit card transactions (for instance) which may be fraudulent in nature
  • weather prediction: predict, for instance, whether or not it will rain tomorrow

Well, this is exactly what ML.NET covers, and here are the steps to build all of these scenarios:

MachineLearningSteps

The domain of machine learning (and data science in general) is currently dominated by two programming languages: Python (with tools like scikit-learn) and R. These are great environments for data scientists, but are not targeted to developers. Enter ML.NET.

ML.NET

ML.NET is not the first Machine Learning environment from Microsoft (think of the Data Mining components of SQL Server Analysis Services, Cognitive Toolkit, and Azure Machine Learning services), but it is the first framework that targets application developers. ML.NET brings a large set of model-based Machine Learning analytic and prediction capabilities into the .NET world. The framework is built upon .NET Core to run cross-platform on Linux, Windows and MacOS. Developers can define and train a Machine Learning model or reuse an existing model from a 3rd party, and run it in any environment offline.

ML.NET is mature …

The origins of the ML.NET library go back many years. Shortly after the introduction of the Microsoft .NET Framework in 2002, Microsoft Research began a project called TMSN (“text mining search and navigation”) to enable  developers to include ML code in Microsoft products and technologies. The project was very successful, and over the years grew in size and usage internally at Microsoft. Somewhere around 2011 the library was renamed to TLC (“the learning code”). TLC is widely used within Microsoft.

The ML.NET library started as a descendant of TLC, with the Microsoft-specific features removed.

… but not finished yet

ML.NET 0.1 was announced at //Build 2018. The core functionality has existed since 2002, so the first iterations probably focused on modernizing the C# code and hiding/removing the Microsoft-specific stuff. In version 0.5 a large portion of the original pipeline API with concepts such as Estimators, Transforms and DataView was moved to the Legacy namespace and replaced by one that is more consistent with well-known frameworks like Scikit-Learn, TensorFlow and Spark. Version 0.7 removed the framework's dependency on x64 devices. ML.NET is evolving rapidly and is currently at v0.9, which was extended with feature engineering, further reducing the functionality gap with Python and R environments.

You can find the ML.NET source code here on GitHub.

What about UWP?

UWP apps are safe to run and easy to install and uninstall in a clean way. To achieve this, they run in a sandbox

  • with a reduced API surface where some Win32 and COM calls are not accessible or even available, and
  • within a security context that restricts access to the external environment (file system, processes, external devices).

When experimenting with ML.NET in UWP we observed the following handful of issues:

  • ML.NET uses Reflection.Emit, which is not allowed in the native compilation step that occurs when compiling UWP apps in release mode,
  • ML.NET uses the MEF composition container from v0.7 onwards, and not all MEF classes are exposed to UWP yet (a known issue), and
  • ML.NET does file and process manipulations through classic APIs that are not available in UWP because of the sandbox, or because they live in a different namespace.

These issues are known by the different teams at Microsoft and we believe that they will be tackled sooner or later (the issues, not the teams). In the meantime we'll stick to version v0.6. Don't worry: the v0.6 NuGet package has most of the core functionality (load and transform training data into memory, create a model, train the model, evaluate and save the model, use the model) and algorithms, but not the latest API.

Clustering

Clustering is a set of Machine Learning algorithms that help to identify the meaningful groups in a larger population. It is used by social networks and search engines for targeting ads and determining relevance rates. Clustering is a subdomain of unsupervised learning. Its algorithms learn from test data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data.

These diagrams show what clustering does:

ClusteringDiagrams

Most courses and handbooks on Machine Learning start with clustering, because it's easy compared to the other algorithms (no labeling, no evaluation, easy visualization if you stick to less than 4D).

So it makes sense to start this ML.NET-OxyPlot-UWP series with an article on clustering.

ML.NET algorithms

There are a few clustering algorithms available, but the K-Means family is the most used, and that's what ML.NET provides. K-Means is often referred to as Lloyd's algorithm. It takes the number of wanted clusters ('k') as its primary input. The algorithm has three steps. It executes the first step and then starts looping between the other two:

  • The first step chooses the initial centroids, with the most basic method being to choose k samples from the dataset.
  • The second step assigns each sample to its nearest centroid.
  • The third step creates new centroids by taking the mean value of all of the samples assigned to each previous centroid.

The algorithm iterates through steps 2 and 3 until the position of the centroids stabilizes.
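To make those three steps concrete, here's a deliberately naive two-dimensional K-Means sketch. It is purely illustrative and has nothing to do with ML.NET's optimized implementation:

using System;
using System.Linq;

public static class NaiveKMeans
{
    // points: the (x, y) samples; k: the number of wanted clusters.
    // Returns the cluster index assigned to each point.
    public static int[] Cluster((double X, double Y)[] points, int k, int maxIterations = 100)
    {
        // Step 1: choose the initial centroids; the most basic method is to take k samples.
        var centroids = points.Take(k).ToArray();
        var assignments = new int[points.Length];

        for (var iteration = 0; iteration < maxIterations; iteration++)
        {
            // Step 2: assign each sample to its nearest centroid.
            for (var i = 0; i < points.Length; i++)
            {
                assignments[i] = Enumerable.Range(0, k)
                    .OrderBy(c => Distance(points[i], centroids[c]))
                    .First();
            }

            // Step 3: move each centroid to the mean of the samples assigned to it.
            var moved = false;
            for (var c = 0; c < k; c++)
            {
                var members = points.Where((p, i) => assignments[i] == c).ToArray();
                if (members.Length == 0) continue;
                var newCentroid = (members.Average(p => p.X), members.Average(p => p.Y));
                moved |= Distance(newCentroid, centroids[c]) > 1e-9;
                centroids[c] = newCentroid;
            }

            // Stop when the centroids no longer move.
            if (!moved) break;
        }

        return assignments;
    }

    private static double Distance((double X, double Y) a, (double X, double Y) b) =>
        Math.Sqrt((a.X - b.X) * (a.X - b.X) + (a.Y - b.Y) * (a.Y - b.Y));
}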

Enough talking, let’s get our hands dirty

Prepare the Solution

The sample app started as a new UWP app, with the following NuGet packages:

  • ML.NET v0.6 (because higher versions don’t work for UWP)
  • OxyPlot latest UWP prerelease
  • .NET Core latest prerelease (because we want to see if and when bugs are fixed)
  • LottieUWP (because we want to show a fancy animation while calculating)

ProjectConfig

As already mentioned we stay in debug mode to avoid Reflection issues, and we target the x64 architecture only since we use a pre-v0.7 version of ML.NET.

Solution Architecture

The Visual Studio solution uses a lightweight MVVM-architecture where the

  • Models contain all business logic, the
  • ViewModels make the models accessible to the Views, the
  • Views focus on UI only, and the
  • Services take care of everything that doesn’t fit the previous categories.

As a result, the MVVM Model classes contain code that is pretty close to the existing ML.NET Console app samples. Here’s for example how a Machine Learning Model is trained:

public void Train(IDataView trainingDataView)
{
    Model = Pipeline.Fit(trainingDataView);
}

In general, MVVM ViewModels make the models accessible to the UI by enabling data binding and change propagation. Machine Learning scenarios typically work with large data files and they run CPU-intensive processes. In the sample app the ViewModels are responsible for pushing all that heavy lifting off the UI thread to keep the app responsive while calculating.

Here's the pattern: all the model's CPU-bound actions are made awaitable by wrapping them in a Task. Here's what such a call looks like:

public Task Train(IDataView trainingDataView)
{
    return Task.Run(() =>
    {
        _model.Train(trainingDataView);
    });
}

This allows the Views to remain responsive: they can update controls and start animations while calculations are in progress. Here's how a XAML page updates its UI and then starts training a model:

TrainingBox.IsChecked = true;
await ViewModel.Train(trainingDataView);

Nothing fancy, right? Well: if you trained the model synchronously, the checkbox would only update after the calculation.

Let’s start with a hack

UWP has its own file API with limited access to the drives and its own logical URI schemes to refer to the files. ML.NET is based on the .NET Standard specification, and uses the classic APIs with physical file paths. So we decided to add a helper method that takes a UWP file reference (think 'ms-appx:///something'), copies the file to local app storage, and then returns the physical path that classic APIs know and love. Here's that helper:

public static async Task<string> FilePath(string uwpPath)
{
    var originalFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri(uwpPath));
    var storageFolder = ApplicationData.Current.LocalFolder;
    var localFile = await originalFile.CopyAsync(storageFolder, originalFile.Name, NameCollisionOption.ReplaceExisting);
    return localFile.Path;
}

Let’s now dive into ML.NET.

Load the data

In this first scenario we will divide a group of shopping mall customers (borrowed from this Python sample) into 5 clusters. The raw data contains records with a customer id, gender, age, annual income, and a spending score. Notice that we’re in an unsupervised learning environment, so the data is ‘unlabeled’ – there is no cluster id column to learn from:

Clustering_data

The MLContext class is central in the recent ML.NET APIs (sort of like the DbContext in Entity Framework). In v0.6 it was still called LocalEnvironment. We need an instance of it to share across components:

private LocalEnvironment _mlContext = new LocalEnvironment(seed: null); // v0.6;

The following code snippet from the MVVM Model shows how a TextLoader is used to describe the content of the file, read it, and return its schema as an IDataView:

public IDataView Load(string trainingDataPath)
{
    var reader = new TextLoader(_mlContext,
                                new TextLoader.Arguments()
                                {
                                    Separator = ",",
                                    HasHeader = true,
                                    Column = new[]
                                        {
                                            new TextLoader.Column("CustomerId", DataKind.I4, 0),
                                            new TextLoader.Column("Gender", DataKind.Text, 1),
                                            new TextLoader.Column("Age", DataKind.I4, 2),
                                            new TextLoader.Column("AnnualIncome", DataKind.R4, 3),
                                            new TextLoader.Column("SpendingScore", DataKind.R4, 4),
                                        }
                                });

    var file = _mlContext.OpenInputFile(trainingDataPath);
    var src = new FileHandleSource(file);
    return reader.Read(src);
}

The ML.NET algorithms only support the data types that are listed in the DataKind enumeration. The properties that we're interested in (AnnualIncome and SpendingScore) are defined as float. In general, Machine Learning doesn't like double because its values take more memory and the extra precision does not compensate for that.

Here’s the call from the MVVM View. We read a file from the app’s Data folder, make a copy of it to the local storage, and pass the full path to the ML.NET code above:

// Prepare the files.
var trainingDataPath = await MlDotNet.FilePath(@"ms-appx:///Data/Mall_Customers.csv");

// Read training data.
var trainingDataView = await ViewModel.Load(trainingDataPath);

The returned IDataView does deferred execution, similar to IEnumerable in LINQ. The data is only physically read when it is consumed – in this case when training the model.

Here’s the raw data in a diagram:

Clustering_Raw_Plot

Extract the features

In the second step of this Machine Learning scenario, we prepare the input data and define the algorithm that will be used. These steps are combined into a ‘Learning Pipeline’ that represents the untrained model.

First we plug in a ConcatEstimator to 'featurize' the data. We enrich the input data with a column of the name and the type that the algorithm expects. The Clustering model looks for a single column called "Features" with an expected data type. We create this from the columns that we want to cluster on: AnnualIncome and SpendingScore.

The last step in the pipeline represents the algorithm and defines the type of pipeline – the T in EstimatorChain<T>. In a Clustering scenario we use a KMeansPlusPlusTrainer. Its most important parameter is the number of clusters.

Here’s how the pipeline is created:

public void Build()
{
    Pipeline = new ConcatEstimator(_mlContext, "Features", "AnnualIncome", "SpendingScore")
        .Append(new KMeansPlusPlusTrainer(
            env: _mlContext,
            featureColumn: "Features",
            clustersCount: 5,
            advancedSettings: (a) =>
                {
                    // a.AccelMemBudgetMb = 1;
                    // a.MaxIterations = 1;
                    // a. ...
                }
            ));
}

The advanced settings parameter allows you to fine-tune the algorithm, e.g. to optimize it for speed or memory consumption. Of course this comes with a price: the quality of the model will decrease. Here’s what happens when you restrict the number of iterations to just one:

Clustering_bad_config

The initial centroids were used and never moved, so we ended up with green dots within the red cluster, and a dark blue cluster with only two dots.

In a production environment you would iterate through different algorithms and different configurations to create a model that fits your need and does not kill the hardware. It’s good to see that ML.NET supports this.

Train the model

The Machine Learning model is created and trained by feeding the pipeline with the training data using a Fit() call. In the ML.NET API that’s just a one-liner, but behind it is a very complex and CPU-intensive task:

public void Train(IDataView trainingDataView)
{
    Model = Pipeline.Fit(trainingDataView);
}

The resulting model -a TransformerChain<T>- can be used immediately for prediction, or it can be serialized for later use.

Evaluate the model

In the common Machine Learning scenarios we would use two data sets: one to train the model, and another for evaluation. Since clustering belongs to unsupervised learning, there’s no reference data set to evaluate the model.

That's why we have used a data set that allows easy visual inspection. When looking at the diagram, it is pretty easy to figure out where the five clusters should be. Observe that the K-Means algorithm with standard settings did a good job:

Clustering

Persist the model

The trained model can be persisted to a .zip file for later use or for use on another device with a simple call to the (oddly synchronous) SaveTo() method:

public void Save(string modelName)
{
    var storageFolder = ApplicationData.Current.LocalFolder;
    using (var fs = new FileStream(
            Path.Combine(storageFolder.Path, modelName), 
            FileMode.Create, 
            FileAccess.Write, 
            FileShare.Write))
    {
        Model.SaveTo(_mlContext, fs);
    }
}

Use the model

A trained model's main purpose is to create predictions based on its input, so we need to define some classes to describe that input and output.

The class that describes the predicted result may contain all input fields (in our sample “AnnualIncome” and “SpendingScore”) and the algorithm-specific output. The KMeans algorithm spawns records with a “PredictedLabel” holding the most relevant cluster id and ”Score”, an array with the distances to the respective centroids.

You can use the ColumnName attribute to shape your own data types to these expectations:

    public class ClusteringPrediction
    {
        [ColumnName("PredictedLabel")]
        public uint PredictedCluster;

        [ColumnName("Score")]
        public float[] Distances;

        public float AnnualIncome;

        public float SpendingScore;
    }

Here’s the class that we use to store a single input point:

public class ClusteringData
{
    public float AnnualIncome;

    public float SpendingScore;
}

There are at least two ways to use the model for assigning clusters to input data:

  • the Transform(IDataView) method, which applies the transformation to a whole set of data, and
  • the MakePredictionFunction<Tsrc, Tdst> extension method (since renamed to CreatePredictionEngine<Tsrc, Tdst>), which creates a strongly typed prediction from a single input.

Here’s how both calls are used in the sample app:

public IEnumerable<ClusteringPrediction> Predict(IDataView dataView)
{
    var result = Model.Transform(dataView).AsEnumerable<ClusteringPrediction>(_mlContext, false);
    return result;
}

public ClusteringPrediction Predict(ClusteringData clusteringData)
{
    var predictionFunc = Model.MakePredictionFunction<ClusteringData, ClusteringPrediction>(_mlContext);
    return predictionFunc.Predict(clusteringData);
}
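
For reference, newer ML.NET releases create the single-item prediction through the model catalog instead. A minimal sketch, assuming the 1.x API rather than the preview bits used here:

// Hedged sketch for ML.NET 1.x: CreatePredictionEngine replaces MakePredictionFunction.
var predictionEngine = _mlContext.Model.CreatePredictionEngine<ClusteringData, ClusteringPrediction>(Model);
var singlePrediction = predictionEngine.Predict(clusteringData);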

To color the graph in the sample, we asked the model to assign a cluster to each record in the training dataset:

var predictions = await ViewModel.Predict(trainingDataView);

This creates a list of predictions with the input fields and the assigned cluster. That’s what we used to populate the diagram.

The sample page also comes with two textboxes where you can fill out values for annual income and spending score. The trained model will assign a cluster to that input. Here’s how that call looks:

int.TryParse(AnnualIncomeInput.Text, out int annualIncome);
int.TryParse(SpendingScoreInput.Text, out int spendingScore);
var output = await ViewModel.Predict(
	new ClusteringData 
	{ 
		AnnualIncome = annualIncome, 
		SpendingScore = spendingScore 
	});

The result is a single prediction.

At this point in the scenario, most of the ML.NET sample apps do a Console.WriteLine of the predictions and then stop. But we’re in a UWP app, so we can have a lot more fun. Let’s add some sexy diagrams!

Visualizing the results

Introducing OxyPlot

OxyPlot is an open source plot generation library. It started in 2010 as a simple WPF plotting component, focusing on simplicity, performance and visual appearance. Today it targets multiple .NET platforms through a portable library, making it easy to re-use plotting code on different platforms. The library has implementations for WPF, Windows 8, Windows Phone, Windows Phone Silverlight, Windows Forms, Silverlight, GTK#, Xwt, Xamarin.iOS, Xamarin.Android, Xamarin.Forms, Xamarin.Mac … aaaand … UWP.

OxyPlot is primarily focused on two-dimensional coordinate systems – that’s the reason for the ‘xy’ in the name! Some of its restrictions are:

  • lack of support for 3D plots,
  • no animations, and
  • data binding is supported, but you must manually refresh the plot when your data changes.

The documentation is a work in progress that stalled somewhere back in 2015, but since then the OxyPlot team has published more than enough example source code for every diagram type, and there are ‘getting started’ guides for all targeted technology stacks, including UWP and Xamarin.

OxyPlot’s API comes with all the types of axes (linear, logarithmic, radial, date, time, category, color), series (area, bar, boxplot, candlestick, function, heat map, contour and many more) and annotations (point, rectangle, image, line, arrow, text, etc.) that you can think of. On top of that, it handles the Batman Equation correctly:

FunctionSeries

PlotView

The only UI control in the package is PlotView. It’s not really documented, but apart from the PlotModel which hosts the content, it doesn’t have many properties after all – just look at the source code.

Since we will create everything in the sample app programmatically, the XAML is rather lean:

<oxy:PlotView x:Name="Diagram"
                Background="Transparent"
                BorderThickness="0" />

Axes

To represent the clusters, we will use two linear axes: a horizontal for the SpendingScore and a vertical for the AnnualIncome. Here’s how these are defined:

var foreground = OxyColors.SteelBlue;
var plotModel = new PlotModel
{
    PlotAreaBorderThickness = new OxyThickness(1, 0, 0, 1),
    PlotAreaBorderColor = foreground
};

var linearAxisX = new LinearAxis
{
    Position = AxisPosition.Bottom,
    Title = "Spending Score",
    TextColor = foreground,
    TicklineColor = foreground,
    TitleColor = foreground
};

plotModel.Axes.Add(linearAxisX);
var linearAxisY = new LinearAxis
{
    Maximum = 140,
    Title = "Annual Income",
    TextColor = foreground,
    TicklineColor = foreground,
    TitleColor = foreground
};
plotModel.Axes.Add(linearAxisY);

Please note that we could have defined this in XAML too.

Series

The dots that represent the training data are assigned to 5 ScatterSeries instances (one per cluster). Each series comes with its own color:

for (int i = 0; i < 5; i++)
{
    var series = new ScatterSeries
    {
        MarkerType = MarkerType.Circle,
        MarkerFill = _colors[i]
    };

    plotModel.Series.Add(series);
}
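
The _colors field used above is not part of the snippet; it’s simply an array of five OxyColor values, one per cluster. A minimal sketch – the actual palette in the sample app may differ:

// One marker color per cluster; any five OxyColor values will do.
private readonly OxyColor[] _colors =
{
    OxyColors.IndianRed,
    OxyColors.SeaGreen,
    OxyColors.SteelBlue,
    OxyColors.Goldenrod,
    OxyColors.MediumPurple
};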

All we need to do now to prepare the diagram is hook the PlotModel into the PlotView:

Diagram.Model = plotModel;

To (re)plot the diagram, we loop through the list of predictions on the training dataset, create a new ScatterPoint for each prediction, and add it to the appropriate cluster series:

foreach (var prediction in predictions)
{
    (Diagram.Model.Series[(int)prediction.PredictedCluster - 1] as ScatterSeries).Points.Add(
        new ScatterPoint
        (
            prediction.SpendingScore,
            prediction.AnnualIncome
        ));
}

When this code is executed you’ll see … nothing. You need to tell the diagram that its content was updated:

Diagram.InvalidatePlot();

Now you end up with the 5 colored clusters that we’ve seen before:

Clustering

Annotations

The extra dot for the single prediction is visualized as a PointAnnotation. We gave it the color of the assigned cluster, but a different shape and an extra label to easily spot it in the diagram:

var annotation = new PointAnnotation
    {
        Shape = MarkerType.Diamond,
        X = output.SpendingScore,
        Y = output.AnnualIncome,
        Fill = _colors[(int)output.PredictedCluster - 1],
        TextColor = OxyColors.SteelBlue,
        Text = "Here"
    };
Diagram.Model.Annotations.Add(annotation);
Diagram.InvalidatePlot();

Here’s how this looks:

Cluster_Annotation

To finish this article, let’s promote some extra OxyPlot features.

Tracker

OxyPlot comes with a templatable tracker. By default it appears when you left-click on an item in a series, but you can also make it permanently visible. It shows a crosshair and the details of the underlying item, very convenient:

Clustering_bad_config_tracker
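
Inside the series loop shown earlier, the tracker text can be tweaked per series through the TrackerFormatString property. A minimal sketch; the placeholder conventions below are OxyPlot’s defaults for scatter series, so double-check them against the documentation for your version:

// Hedged sketch, to be added inside the series loop:
// {0} = series title, {1} and {3} = axis titles, {2} and {4} = the point's X and Y values.
series.Title = $"Cluster {i + 1}";
series.TrackerFormatString = "{0}\n{1}: {2:0}\n{3}: {4:0}";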

Zooming and Panning

By scrolling the mouse wheel you can zoom in and out, and with the right mouse button down you can pan the diagram:

Clustering_MouseActions

You want more?

This was just the first in a series of articles on ML.NET in UWP. So far, we are very impressed!

If you want to play with the sample app, it lives here on GitHub.

Enjoy!

Fun with Bindable Squares in UWP

In the previous article we introduced some bindable Composition-drawn Path controls for UWP. In this article we will extend the list of controls with Rectangle and Square shapes, and introduce some optimizations in the framework. But above all: we’re going to have fun recreating some famous optical illusions with squares.

Bindable Rectangle

For starters, we created a Rectangle control, inspired by its SVG definition. Our base class already exposed a start point, and the necessary stroke and fill properties, so our new Rectangle subclass only required a height and a width. Semantically this new Rectangle control refers to ‘a panel that contains a rectangle figure somewhere in it’, so it did not make sense to reuse the inherited Height and Width properties. Instead we created SideX and SideY dependency properties.
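
These new properties follow the same dependency property pattern as the rest of the framework: a registration with a Render callback, plus a CLR wrapper. A minimal sketch for SideX, assuming double as the property type (SideY is analogous):

// Hedged sketch of one of the two new dependency properties.
public static readonly DependencyProperty SideXProperty =
    DependencyProperty.Register(
        nameof(SideX),
        typeof(double),
        typeof(Rectangle),
        new PropertyMetadata(0d, Render));

public double SideX
{
    get { return (double)GetValue(SideXProperty); }
    set { SetValue(SideXProperty, value); }
}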

Rendering a rectangle with a CanvasPathBuilder is quite obvious:

  • move to the start point with BeginFigure,
  • draw three lines with AddLine, and
  • close the figure with EndFigure (it will add the missing fourth line).

Here’s the full Render code:

protected override void Render()
{
    var canvasPathBuilder = GetCanvasPathBuilder();

    // Figure
    canvasPathBuilder.BeginFigure(new Vector2((float)StartPointX, (float)StartPointY));
    canvasPathBuilder.AddLine(
        new Vector2((float)(SideX + StartPointX), (float)(StartPointY)));
    canvasPathBuilder.AddLine(
        new Vector2((float)(SideX + StartPointX), (float)(SideY + StartPointY)));
    canvasPathBuilder.AddLine(
        new Vector2((float)(StartPointX), (float)(SideY + StartPointY)));
    canvasPathBuilder.EndFigure(CanvasFigureLoop.Closed);

    Render(canvasPathBuilder);
}

Observe that most of the rendering (creating the CanvasPathBuilder, creating and manipulating the CanvasGeometry, the SpriteShape, and the ShapeVisual) is common to all shapes, so we moved that to the base class.

Of course we created a small page to test the new control and its binding capabilities. Here’s how it looks:

Rectangle

Bindable Square

Next in this series of bindable path controls comes a Square with a center point and a side – all implemented as dependency properties. Here’s the obvious rendering code:

protected override void Render()
{
    var canvasPathBuilder = GetCanvasPathBuilder();

    // Move starting point.
    StartPointX = CenterPointX - (Side / 2);
    StartPointY = CenterPointY - (Side / 2);

    // Figure
    canvasPathBuilder.BeginFigure(new Vector2((float)StartPointX, (float)StartPointY));
    canvasPathBuilder.AddLine(
        new Vector2((float)(Side + StartPointX), (float)(StartPointY)));
    canvasPathBuilder.AddLine(
        new Vector2((float)(Side + StartPointX), (float)(Side + StartPointY)));
    canvasPathBuilder.AddLine(
        new Vector2((float)(StartPointX), (float)(Side + StartPointY)));
    canvasPathBuilder.EndFigure(CanvasFigureLoop.Closed);

    Render(canvasPathBuilder);
}

Here’s the sample page:

Square

It already contains the elements to test some properties that we’ll define later in this article.

Here’s the XAML for the test square:

<controls:Square x:Name="Square"
                    Stroke="LightSlateGray"
                    CenterPointX="150"
                    CenterPointY="250"
                    Side="100"
                    Height="400"
                    Width="400" />

Let’s start the fun

To discover new requirements and optimizations for the framework, we decided to reenact some famous optical illusions involving only squares.

Most of the pages start with just an empty canvas like this:

<Viewbox>
    <Canvas x:Name="Canvas"
            Height="1000"
            Width="1500"
            Background="Transparent" />
</Viewbox>

We’ll draw most of the squares programmatically.

The first one is the Café Wall Illusion (a.k.a. the shifted chessboard), a 2D geometrical-optical illusion in which parallel lines appear to be sloped. Here’s how our bindable shape implementation looks:

CafeWall

Believe it or not, all these horizontal lines are parallel.

Here’s how a square is added to the canvas:

var blackSquare = new Square
{
    Side = side,
    Fill = Colors.LightSlateGray,
    Height = Canvas.Height,
    Width = Canvas.Width,
    CenterPointX = (240 * i),
    CenterPointY = 120 + (240 * j)
};
Canvas.Children.Add(blackSquare);

This first test case did not reveal new requirements, but allowed us to demonstrate the programmatic instantiation of our new controls – and we had a lot of fun.

Optimizing for quantity

Our second test case is the Bulging Checkerboard Illusion, a 3D geometrical-optical illusion that makes a checkerboard look like it is bulging out of the screen, just by adding small squares (or circles) to it:

Checkerboard

When we rendered the image, we noticed that it took an unacceptably long time. Our first attempt to solve this was to drastically reduce the number of calls to the core rendering code (a technique that we borrowed from some of the WinRT XAML Toolkit Controls). Every change of every dependency property triggers rendering, so that’s a lot of calls while the control is loading. So we added a DelayRendering property to the base class and made sure not to call the core logic as long as the property is set to true:

public CompositionPathHost(bool delayRendering) : this()
{
    _delayRendering = delayRendering;
}
protected static void Render(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    var path = (CompositionPathHost)d;
    if (path._delayRendering || path.Container.ActualWidth == 0)
    {
        return;
    }

    path.Render();
}
public bool DelayRendering
{
    get
    {
        return _delayRendering;
    }

    set
    {
        _delayRendering = value;
        if (!_delayRendering)
        {
            Render();
        }
    }
}

Of course this has an impact on the drawing code on the client side. We have to make sure that the property is assigned before any other (in a constructor), and have to reset it after all the other properties have their values:

var blackSquare = new Square(true)
{
    Side = side,
    Fill = Colors.LightSlateGray,
    Height = Canvas.Height,
    Width = Canvas.Width,
    CenterPointX = i * 100 + 50,
    CenterPointY = j * 100 + 50
};
Canvas.Children.Add(blackSquare);
blackSquare.DelayRendering = false;

Unfortunately we were underwhelmed by the impact of this code on the rendering time of all squares. So we started looking into other ways to speed up the process, by looking at opportunities to share ‘expensive’ objects – probably in the Win2D space. From the home page of CompositionProToolkit we learned that a CanvasDevice can be shared between CanvasPathBuilder instances, so we created a static device and reused it:

protected static CanvasDevice CanvasDevice = new CanvasDevice();

protected CanvasPathBuilder GetCanvasPathBuilder()
{
    var canvasPathBuilder = new CanvasPathBuilder(CanvasDevice);
    // ...

    return canvasPathBuilder;
}

The difference in rendering time is quite significant, and there is no impact on the client code. So it’s another successful test case!

Adding bindable Rotation properties

Our third test case (and personal favorite) is the hypnotizing Square Circle Spiral Illusion, an intertwining illusion where a set of concentric circles made of squares appears to spiral and intersect:

Intertwining

The effect is caused by selecting appropriate colors and by applying a rotation scheme to the squares. The latter is missing in our current API. Since it’s quite common to apply a rotation to shapes, we added the necessary dependency properties to the base class: RotationCenterX, RotationCenterY, and RotationAngle. We then only had to update the shared part of the rendering code to use these new properties:

shapeVisual.CenterPoint = new Vector3((float)RotationCenterX, (float)RotationCenterY, 0f);
shapeVisual.RotationAxis = new Vector3(0f, 0f, 1f);
shapeVisual.RotationAngle = (float)RotationAngle;

Et voilà: free rotation capabilities for all bindable shapes. Here’s a code snippet from the sample page:

var square = new Square
{
    Side = side,
    Stroke = i % 2 == 0 ? Colors.WhiteSmoke : Colors.LightSlateGray,
    StrokeThickness = 6,
    Height = Canvas.Height,
    Width = Canvas.Width,
    CenterPointX = (size / 2) + Math.Sin(Math.PI * 2 * i / squares) * radius,
    CenterPointY = (size / 2) + Math.Cos(Math.PI * 2 * i / squares) * radius,
    RotationCenterX = (size / 2) + Math.Sin(Math.PI * 2 * i / squares) * radius,
    RotationCenterY = (size / 2) + Math.Cos(Math.PI * 2 * i / squares) * radius,
    RotationAngle = (90d - i * (360d / squares) + rot * 15d) * Math.PI / 180
};
Canvas.Children.Add(square);

Adding animation

The fourth test case required animation: in the Breathing Square Illusion a spinning rectangle seems to grow and shrink, but it’s actually just rotating without any size change.

BreathingSquare

There is no way to expose animation behavior by just adding dependency properties. Instead we decided to expose the rendered ShapeVisual to the client of the control – not as a dependency property however, since the visual is recreated on each Render call. The shape visual is exposed via an event on the control, so we first declared an event arguments class:

public class RenderedEventArgs : EventArgs
{
    public ShapeVisual ShapeVisual { get; set; }
}

And then used it in a Rendered event:

public event EventHandler<RenderedEventArgs> Rendered;

protected virtual void OnRendered(RenderedEventArgs e)
{
    Rendered?.Invoke(this, e);
}

protected void Render(CanvasPathBuilder canvasPathBuilder)
{
    // ...
    root.Children.InsertAtTop(shapeVisual);
    OnRendered(new RenderedEventArgs { ShapeVisual = shapeVisual });
}

Here’s the definition of the square in the back:

var backSquare = new Square
{
    Side = 600,
    Fill = Colors.LightSteelBlue,
    Height = Canvas.Height,
    Width = Canvas.Width,
    CenterPointX = 500,
    CenterPointY = 500,
    RotationCenterX = 500,
    RotationCenterY = 500
};
backSquare.Rendered += BackSquare_Rendered;
Canvas.Children.Add(backSquare);

And here’s the handler for the Rendered event. It starts a ScalarKeyFrameAnimation with a LinearEasingFunction on the RotationAngleInDegrees:

private void BackSquare_Rendered(object sender, RenderedEventArgs e)
{
    // Start animation
    var shape = e.ShapeVisual;
    var compositor = Window.Current.Compositor;
    var animation = compositor.CreateScalarKeyFrameAnimation();
    var linear = compositor.CreateLinearEasingFunction();
    animation.InsertKeyFrame(1f, 360f, linear);
    animation.Duration = TimeSpan.FromSeconds(8);
    animation.Direction = AnimationDirection.Normal;
    animation.IterationBehavior = AnimationIterationBehavior.Forever;
    shape.StartAnimation(nameof(shape.RotationAngleInDegrees), animation);
}

Each time the Square control is rendered (i.e. when a dependency property changes), the shape visual is exposed via the event, and the rotation is started.

With these few – but fun – test cases, we were able to extend the base class of the Bindable Path API with nice features and improvements. This is how it now looks:

ClassDiagram1

Note: there are more subclasses than just Square …

Look Mom, all XAML

Most of the test cases created the ‘bindable’ path controls programmatically, so we decided to add one entirely in XAML, to finish it off. Here’s how the Motion Binding Illusion looks:

MotionBinding

At first sight there is nothing in common in the movement of the four bars, except that they seem to move in pairs. Again this illusion is entirely made up of squares. The four bars are the sides of a 45° rotated square whose corners are hidden by four smaller (and also rotated) squares. The center square makes a circular motion.

Here’s how these five squares are defined in XAML – we now surely wish we had added a RotationInDegrees property:

<Canvas x:Name="Canvas"
        Height="1000"
        Width="1000"
        Background="LightSteelBlue">
    <controls:Square x:Name="Square"
                        Stroke="LightSlateGray"
                        StrokeThickness="16"
                        CenterPointX="500"
                        CenterPointY="500"
                        Side="540"
                        Height="1000"
                        Width="1000"
                        RotationCenterX="500"
                        RotationCenterY="500"
                        RotationAngle=".785"
                        Canvas.Left="-50"
                        Canvas.Top="-50">

    </controls:Square>
    <controls:Square Fill="LightSteelBlue"
                        StrokeThickness="0"
                        CenterPointX="500"
                        CenterPointY="142"
                        Side="200"
                        Height="1000"
                        Width="1000"
                        RotationCenterX="500"
                        RotationCenterY="142"
                        RotationAngle=".785" />
    <controls:Square Fill="LightSteelBlue"
                        StrokeThickness="0"
                        CenterPointX="500"
                        CenterPointY="858"
                        Side="200"
                        Height="1000"
                        Width="1000"
                        RotationCenterX="500"
                        RotationCenterY="858"
                        RotationAngle=".785" />
    <controls:Square Fill="LightSteelBlue"
                        StrokeThickness="0"
                        CenterPointX="142"
                        CenterPointY="500"
                        Side="200"
                        Height="1000"
                        Width="1000"
                        RotationCenterX="142"
                        RotationCenterY="500"
                        RotationAngle=".785" />
    <controls:Square Fill="LightSteelBlue"
                        StrokeThickness="0"
                        CenterPointX="858"
                        CenterPointY="500"
                        Side="200"
                        Height="1000"
                        Width="1000"
                        RotationCenterX="858"
                        RotationCenterY="500"
                        RotationAngle=".785" />
</Canvas>

The circular motion of the center square is done through a storyboarded animation on the Canvas.Top and Canvas.Left attached properties – both DoubleAnimations with a SineEase easing function in ‘EaseInOut’ mode (slow-fast-slow). To end up with a circular motion, one animation should follow a cosine function while the other follows a sine function. This is solved by delaying the start of one animation by one second (half the duration of a single 2-second leg, i.e. a 90° phase shift).

Here’s the storyboard XAML:

<Storyboard x:Name="myStoryboard"
            RepeatBehavior="Forever">
    <DoubleAnimation Storyboard.TargetName="Square"
                        Storyboard.TargetProperty="(Canvas.Left)"
                        From="-50"
                        To="50"
                        Duration="0:0:2"
                        AutoReverse="True"
                        RepeatBehavior="Forever">
        <DoubleAnimation.EasingFunction>
            <SineEase EasingMode="EaseInOut" />
        </DoubleAnimation.EasingFunction>
    </DoubleAnimation>
    <DoubleAnimation Storyboard.TargetName="Square"
                        Storyboard.TargetProperty="(Canvas.Top)"
                        From="-50"
                        To="50"
                        BeginTime="0:0:1"
                        Duration="0:0:2"
                        AutoReverse="True"
                        RepeatBehavior="Forever">
        <DoubleAnimation.EasingFunction>
            <SineEase EasingMode="EaseInOut" />
        </DoubleAnimation.EasingFunction>
    </DoubleAnimation>
</Storyboard>
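
A quick sanity check on why this yields a true circle (assuming the standard SineEase curve, whose EaseInOut variant over normalized time u equals (1 − cos(πu))/2): the 2-second From=-50 To=50 leg with AutoReverse works out, over the full 4-second cycle, to

Left(t) = −50 · cos(π·t / 2)
Top(t) = Left(t − 1) = −50 · sin(π·t / 2)

so once both animations are running, the (Canvas.Left, Canvas.Top) offset moves along a circle of radius 50 – exactly the circular motion we were after.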

Want some more?

There are a lot more optical illusions right here. Our sample project lives on GitHub.

Enjoy!

Creating Bindable Path Controls in UWP

In this article we’ll show some ways to create a bindable arc segment for UWP. We’ve been using elliptical arc elements relatively often. In most cases these were parts inside user controls such as the percentage ring, the radial range indicator, and the radial gauge. We noticed that every time we used elliptical arcs, the hosting control’s code rapidly became cluttered with path calculation and drawing routines, making its core functionality unnecessarily hard to debug and extend. So we started looking for ways to abstract the calculation and drawing routines out of the user controls and into some reusable pattern.

Here’s a screenshot of the sample app that was created during the process:
CircleSegment

We aimed to implement the full Elliptical Arc SVG definition, including start and end point, radii, rotation angle, sweep direction, and a large arc indicator. Not only does this provide a mathematical context, it could also allow us to achieve compatibility with the SVG path syntax itself (e.g. for serialization). [Spoiler: we did not implement this in the sample app, because such a parser already exists in the CompositionProToolkit.]

For starters, here’s how a non-bindable elliptical arc path looks in XAML (straight from the excellent documentation):

<Path Stroke="Black" StrokeThickness="1"  
  Data="M 10,100 A 100,50 45 1 0 200,100" />

XAML solution

The ‘classic’ way to draw an elliptical arc in UWP goes back to WPF, and is also implemented in the user controls that we just mentioned: in your XAML, place an ArcSegment as part of a PathFigure in a Path control. The Path provides the Stroke properties (color, thickness, pattern), the PathFigure provides the start position, and the ArcSegment provides the rest.

Here’s how the XAML for a fixed arc segment looks. It’s the exact same arc as the one in the previous code snippet, but without using the SVG mini-language:

<Path Stroke="Black" StrokeThickness="1">
  <Path.Data>
    <PathGeometry>
      <PathGeometry.Figures>
        <PathFigureCollection>
          <PathFigure StartPoint="10,100">
            <PathFigure.Segments>
              <PathSegmentCollection>
                <ArcSegment Size="100,50" RotationAngle="45" IsLargeArc="True" SweepDirection="CounterClockwise" Point="200,100" />
              </PathSegmentCollection>
            </PathFigure.Segments>
          </PathFigure>
        </PathFigureCollection>
      </PathGeometry.Figures>
    </PathGeometry>
  </Path.Data>
</Path>

This XAML structure looks a lot more binding-friendly than the Data string of the first Path definition. All we need to do is create an appropriate ViewModel for it, preferably one that implements INotifyPropertyChanged and exposes the properties through data types that are easy to bind to. Instead of Point or Vector instances for start position, end position, and radius, the ViewModel exposes numeric properties with the individual coordinates:

public class EllipticalArcViewModel : BindableBase
{
    public int RadiusX
    {
        get { return _radiusX; }
        set
        {
            SetProperty(ref _radiusX, value);
            OnPropertyChanged(nameof(Radius));
        }
    }

    public int RadiusY
    {
        get { return _radiusY; }
        set
        {
            SetProperty(ref _radiusY, value);
            OnPropertyChanged(nameof(Radius));
        }
    }

    public Size Radius => new Size(_radiusX, _radiusY);

    // ...
}

The enumerations for PenLineCap and SweepDirection are accessible through Boolean properties:

public bool IsClockwise
{
    get { return _isClockwise; }
    set
    {
        SetProperty(ref _isClockwise, value);
        OnPropertyChanged(nameof(SweepDirection));
    }
}

public SweepDirection SweepDirection => 
	_isClockwise ? SweepDirection.Clockwise : SweepDirection.Counterclockwise;

Here’s the class diagram for the first iteration of our ‘bindable arc’. The new ViewModel is on the left, and the XAML elements to which we will bind, are on the right:

EllipticalArcViewModel

The hosting page exposes an instance of the ViewModel in its code behind:

private EllipticalArcViewModel _viewModel = new EllipticalArcViewModel
{
    StartPointX = 10,
    StartPointY = 100,
    RadiusX = 100,
    RadiusY = 50,
    RotationAngle = 45,
    EndPointX = 200,
    EndPointY = 100
};

public EllipticalArcViewModel ViewModel => _viewModel;

And in the XAML we added {x:Bind} bindings to the elements:

<Path Stroke="{x:Bind ViewModel.Stroke, Mode=OneWay}"
        Height="400"
        Width="400"
        StrokeThickness="{x:Bind ViewModel.StrokeThickness, Mode=OneWay}"
        StrokeStartLineCap="{x:Bind ViewModel.StrokeLineCap, Mode=OneWay}"
        StrokeEndLineCap="{x:Bind ViewModel.StrokeLineCap, Mode=OneWay}">
    <Path.Data>
        <PathGeometry>
            <PathGeometry.Figures>
                <PathFigureCollection>
                    <PathFigure StartPoint="{x:Bind ViewModel.StartPoint, Mode=OneWay}">
                        <PathFigure.Segments>
                            <PathSegmentCollection>
                                <ArcSegment RotationAngle="{x:Bind ViewModel.RotationAngle, Mode=OneWay}"
                                            IsLargeArc="{x:Bind ViewModel.IsLargeArc, Mode=OneWay}"
                                            SweepDirection="{x:Bind ViewModel.SweepDirection, Mode=OneWay}"
                                            Point="{x:Bind ViewModel.EndPoint, Mode=OneWay}"
                                            Size="{x:Bind ViewModel.Radius, Mode=OneWay}" />
                            </PathSegmentCollection>
                        </PathFigure.Segments>
                    </PathFigure>
                </PathFigureCollection>
            </PathGeometry.Figures>
        </PathGeometry>
    </Path.Data>
</Path>

The ViewModel implements change propagation, so we can use Sliders (or other input controls) to modify its properties, and this will update the XAML elements:

<Slider Header="Rotation Angle"
        Value="{x:Bind ViewModel.RotationAngle, Mode=TwoWay}"
        Maximum="360" />
<ToggleSwitch Header="Sweep Direction"
                IsOn="{x:Bind ViewModel.IsClockwise, Mode=TwoWay}"
                OnContent="Clockwise"
                OffContent="Counterclockwise" />

Here’s how the corresponding page in the sample app looks. It draws an ArcSegment with bindings to all of its SVG properties. Slider and ToggleSwitch controls allow you to modify the elliptical arc through the ViewModel:
ArcSegment

It’s relatively easy to make this solution more reusable by turning it into a user control. If you’re looking for an example of this, just check the code for the PieSlice and RingSlice controls from WinRT XAML Toolkit.

Composition solution

Over the last couple of years, we’ve seen an exciting increase in the functionality and ease of use of the UWP Composition API. It has come to a point where apps can now relatively easily draw (and/or animate) anything directly in the Visual layer, a layer that is much closer to the fast DirectX drivers than the traditional XAML layer:
layers-win-ui-composition

The Visual Layer allows for fast drawing off the UI thread. Its API was recently enhanced with capabilities to draw paths. This makes it a logical next step in our attempt to create bindable paths.

In this second iteration, we packaged a ‘bindable arc’ as a user control. Its only XAML element is an empty Grid, there just to establish the link between the Visual (Composition) layer and the Framework (XAML) layer. Here’s that XAML:

<UserControl x:Class="XamlBrewer.Uwp.Controls.EllipticalArc">
    <Grid x:Name="Container" />
</UserControl>

Unsurprisingly, the new EllipticalArc control exposes the same list of properties as the EllipticalArcViewModel. Only this time they’re implemented as Dependency Properties:

public sealed partial class EllipticalArc : UserControl
{
    public static readonly DependencyProperty StartPointXProperty =
        DependencyProperty.Register(nameof(StartPointX), 
                                    typeof(int), 
                                    typeof(EllipticalArc), 
                                    new PropertyMetadata(0, new PropertyChangedCallback(Render)));

    public static readonly DependencyProperty StartPointYProperty =
        DependencyProperty.Register(nameof(StartPointY), 
                                    typeof(int), 
                                    typeof(EllipticalArc), 
                                    new PropertyMetadata(0, Render));

    public int EndPointX
    {
        get { return (int)GetValue(EndPointXProperty); }
        set { SetValue(EndPointXProperty, value); }
    }

    public int EndPointY
    {
        get { return (int)GetValue(EndPointYProperty); }
        set { SetValue(EndPointYProperty, value); }
    }

    // ...

}

Here’s the full API:

EllipticalArcControl

Every time a property changes, the Render method is called to redraw the elliptical arc. It uses classes from the Composition API and Win2D (available through a NuGet package). Here are the key classes and their responsibility in that method:

  • Compositor: creates all Composition objects
  • Win2D CanvasPathBuilder: holds the path figure definitions
  • Win2D CanvasGeometry: draws geometric shapes in DirectX
  • CompositionPath: links the geometric shape to the Visual Layer
  • CompositionSpriteShape: holds the stroke and fill definitions
  • ShapeVisual: links the Visual Layer to the Framework Layer

Here’s the full Render method:

private static void Render(DependencyObject d, DependencyPropertyChangedEventArgs args)
{
    var arc = (EllipticalArc)d;
    if (arc.Container.ActualWidth == 0)
    {
        return;
    }

    var root = arc.Container.GetVisual();
    var compositor = Window.Current.Compositor;
    var canvasPathBuilder = new CanvasPathBuilder(new CanvasDevice());

    // Figure
    canvasPathBuilder.BeginFigure(new Vector2(arc.StartPointX, arc.StartPointY));
    canvasPathBuilder.AddArc(
        new Vector2(arc.EndPointX, arc.EndPointY),
        arc.RadiusX,
        arc.RadiusY,
        (float)(arc.RotationAngle * Math.PI / 180),
        arc.IsClockwise ? CanvasSweepDirection.Clockwise : CanvasSweepDirection.CounterClockwise,
        arc.IsLargeArc ? CanvasArcSize.Large : CanvasArcSize.Small);
    canvasPathBuilder.EndFigure(arc.IsClosed ? CanvasFigureLoop.Closed : CanvasFigureLoop.Open);

    // Path
    var canvasGeometry = CanvasGeometry.CreatePath(canvasPathBuilder);
    var compositionPath = new CompositionPath(canvasGeometry);
    var pathGeometry = compositor.CreatePathGeometry();
    pathGeometry.Path = compositionPath;
    var spriteShape = compositor.CreateSpriteShape(pathGeometry);
    spriteShape.FillBrush = compositor.CreateColorBrush(arc.Fill);
    spriteShape.StrokeThickness = (float)arc.StrokeThickness;
    spriteShape.StrokeBrush = compositor.CreateColorBrush(arc.Stroke);
    spriteShape.StrokeStartCap = arc.IsStrokeRounded ? CompositionStrokeCap.Round : CompositionStrokeCap.Flat;
    spriteShape.StrokeEndCap = arc.IsStrokeRounded ? CompositionStrokeCap.Round : CompositionStrokeCap.Flat;

    // Visual
    var shapeVisual = compositor.CreateShapeVisual();
    shapeVisual.Size = new Vector2((float)arc.Container.ActualWidth, (float)arc.Container.ActualHeight);
    shapeVisual.Shapes.Add(spriteShape);
    root.Children.RemoveAll();
    root.Children.InsertAtTop(shapeVisual);
}

Here’s how to use the control on a XAML page:

<controls:EllipticalArc x:Name="EllipticalArc"
                        StartPointX="10"
                        StartPointY="100"
                        RotationAngle="45"
                        RadiusX="100"
                        RadiusY="50"
                        EndPointX="200"
                        EndPointY="100"
                        Height="400"
                        Width="400" />

It supports two-way bindings, so you can update its properties through other controls – or through ViewModels if you prefer an MVVM approach:

<Slider Header="Rotation Angle"
        Value="{Binding RotationAngle, ElementName=EllipticalArc, Mode=TwoWay}"
        Maximum="360" />
<ToggleSwitch Header="Sweep Direction"
                IsOn="{Binding IsClockwise, ElementName=EllipticalArc, Mode=TwoWay}"
                OnContent="Clockwise"
                OffContent="Counterclockwise" />

Here’s how the corresponding page looks in the sample app – it’s identical to the XAML version:
Composition

Reusable solution

The experiment with the lightweight user control inspired us to move one step further, and create an easy-to-use circle segment control as part of a ‘reusable generic bindable path drawing framework for UWP’ (a.k.a. an abstract base class).

This circle segment control is intended for use inside (the template of) other controls – think of percentage ring, radial gauge, pie chart, and doughnut chart. All the common ‘bindable path’ members (such as stroke properties, start position, and the closed figure indicator) are factored out into an abstract base class, while the CircleSegment itself only comes with a center, a radius, and start and sweep angles. We also added a Boolean property to specify whether to add extra lines to the center and give it a pie slice shape. [We know: mathematically this property should have been called IsSector instead of IsPie.]

Here’s the class diagram for the last ‘bindable arc’ iteration:

CircleSegmentClassDiagram

As in the previous EllipticalArc, the user control’s only XAML element is a Grid to hook the composition drawings to the Visual Tree:

<UserControl
    x:Class="XamlBrewer.Uwp.Controls.CompositionPathHost">
    <Grid x:Name="Host" />
</UserControl>

On top of the property definitions, the control comes with an abstract Render method which is triggered on load and on property changes:

public static readonly DependencyProperty StartPointXProperty =
	DependencyProperty.Register(
		nameof(StartPointX), 
		typeof(double), 
		typeof(CompositionPathHost), 
		new PropertyMetadata(0d, new PropertyChangedCallback(Render)));

public double StartPointX
{
    get { return (double)GetValue(StartPointXProperty); }
    set { SetValue(StartPointXProperty, value); }
}

// ...

private void CompositionPathHost_Loaded(object sender, RoutedEventArgs e)
{
    Render(this, null);
}

protected static void Render(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    var path = (CompositionPathHost)d;
    if (path.Container.ActualWidth == 0)
    {
        return;
    }

    path.Render();
}

protected abstract void Render();

The new CircleSegment control inherits from this abstract base class and adds its own properties:

public class CircleSegment : CompositionPathHost
{
  public static readonly DependencyProperty CenterPointYProperty =
      DependencyProperty.Register(
		nameof(CenterPointY), 
		typeof(double), 
		typeof(CircleSegment), 
		new PropertyMetadata(0d, Render));

  public double CenterPointX
  {
    get { return (double)GetValue(CenterPointXProperty); }
    set { SetValue(CenterPointXProperty, value); }
  }

  // ...

}

And of course it provides its own implementation of the Render method. It’s a specialized version of the one in EllipticalArc, based on a more convenient AddArc overload that takes a center point and start and sweep angles. The start point coordinates of the underlying figure are determined by the IsPie property: the figure starts at the circle’s center for a pie slice shape (sector), or at the beginning of the arc for a simple segment:

protected override void Render()
{
    var root = Container.GetVisual();
    var compositor = Window.Current.Compositor;
    var canvasPathBuilder = new CanvasPathBuilder(new CanvasDevice());
    if (IsStrokeRounded)
    {
        canvasPathBuilder.SetSegmentOptions(CanvasFigureSegmentOptions.ForceRoundLineJoin);
    }
    else
    {
        canvasPathBuilder.SetSegmentOptions(CanvasFigureSegmentOptions.None);
    }

    // Figure
    if (IsPie)
    {
        StartPointX = CenterPointX;
        StartPointY = CenterPointY;
    }
    else
    {
        StartPointX = Radius * Math.Cos(StartAngle * Degrees2Radians) + CenterPointX;
        StartPointY = Radius * Math.Sin(StartAngle * Degrees2Radians) + CenterPointY;
    }

    canvasPathBuilder.BeginFigure(new Vector2((float)StartPointX, (float)StartPointY));

    canvasPathBuilder.AddArc(
        new Vector2((float)CenterPointX, (float)CenterPointY),
        (float)Radius,
        (float)Radius,
        (float)(StartAngle * Degrees2Radians),
        (float)(SweepAngle * Degrees2Radians));

    canvasPathBuilder.EndFigure(IsClosed || IsPie ? CanvasFigureLoop.Closed : CanvasFigureLoop.Open);

    // Path
    var canvasGeometry = CanvasGeometry.CreatePath(canvasPathBuilder);
    var compositionPath = new CompositionPath(canvasGeometry);
    var pathGeometry = compositor.CreatePathGeometry();
    pathGeometry.Path = compositionPath;
    var spriteShape = compositor.CreateSpriteShape(pathGeometry);
    spriteShape.FillBrush = compositor.CreateColorBrush(Fill);
    spriteShape.StrokeThickness = (float)StrokeThickness;
    spriteShape.StrokeBrush = compositor.CreateColorBrush(Stroke);
    spriteShape.StrokeStartCap = IsStrokeRounded ? CompositionStrokeCap.Round : CompositionStrokeCap.Flat;
    spriteShape.StrokeEndCap = IsStrokeRounded ? CompositionStrokeCap.Round : CompositionStrokeCap.Flat;

    // Visual
    var shapeVisual = compositor.CreateShapeVisual();
    shapeVisual.Size = new Vector2((float)Container.ActualWidth, (float)Container.ActualHeight);
    shapeVisual.Shapes.Add(spriteShape);
    root.Children.RemoveAll();
    root.Children.InsertAtTop(shapeVisual);
}

Please observe that we switched from integers to doubles for all coordinate properties: the calculation of the starting point revealed rounding errors when using integers.
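
To make the rounding issue concrete, here is a tiny illustration (the numbers are just an example, not taken from the sample app):

// With integer coordinates, the computed arc start point gets truncated:
double radius = 50d;
double startAngle = 30 * Math.PI / 180;          // 30° in radians
double exactX = radius * Math.Cos(startAngle);   // 43.30...
int truncatedX = (int)exactX;                    // 43 – the arc no longer starts exactly on the circle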

For a screenshot of the corresponding page in the sample app, scroll back to the beginning of this article. It’s basically the same as all the previous ones, only with better code behind. We also brought up the old SquareOfSquares control to host some CircleSegment instances with different sizes and random properties:

SquareOfSquares

Last but not least, we added an inspiring gallery page to demonstrate some potential usages of the control:

Gallery

All the code

The sample app lives here on GitHub.

Enjoy!

Improving the accessibility of a UWP control

In this article we’ll show some techniques to improve the accessibility of a custom control in UWP. We’ll focus on three aspects:

  • Support for High Contrast themes,
  • Narrator compatibility, and
  • Keyboard input capability.

We’ll use the Radial Gauge control from Windows Community Toolkit as an example – the High Contrast and Narrator code in this article was recently merged into the source.

Supporting High Contrast themes

Adding Theme Dictionaries

With Settings –> Ease of access –> High Contrast, a Windows user can switch his color scheme to a small palette of contrasting colors. There may be medical reasons for this (like cataract or diabetic retinopathy), but it could also be set to deal with working conditions (like direct sunlight on the screen, or insufficient lighting).

In a UWP app, all of the built-in native controls (the “XAML Common Controls”) respect this user setting and will update their UI. As an app developer you have to make sure

  • not to override that behavior of the native controls that you use, and
  • to provide a similar behavior for the XAML elements that you write yourself: pages and custom controls.

Basically this boils down to never hard coding colors.

When building a custom control, you should create a ThemeDictionary that includes an entry for ‘High Contrast’. Here’s a XAML snippet from the Radial Gauge control’s previous style definition, which assigned colors directly:

<Style TargetType="local:RadialGauge"> 
   <Setter Property="NeedleBrush" 
           Value="{ThemeResource SystemControlBackgroundAccentBrush}" /> 
   <Setter Property="TrailBrush" 
           Value="{ThemeResource SystemControlBackgroundAccentBrush}" />  
   <Setter Property="TickBrush" 
           Value="{ThemeResource SystemControlForegroundBaseHighBrush}" />  
   <Setter Property="ScaleTickBrush" 
           Value="Transparent" /> 
   <!-- ... -->
</Style>

The brushes were pulled from theme resources to support Light and Dark themes. That’s a good start, but it doesn’t cover the High Contrast scenario. You should stick to a restricted palette of only 8 system colors in the High Contrast section of the theme dictionary:

The keys and their initial default values are:

  • SystemColorButtonFaceColor: #FFF0F0F0
  • SystemColorButtonTextColor: #FF000000
  • SystemColorGrayTextColor: #FF6D6D6D
  • SystemColorHighlightColor: #FF3399FF
  • SystemColorHighlightTextColor: #FFFFFFFF
  • SystemColorHotlightColor: #FF0066CC
  • SystemColorWindowColor: #FFFFFFFF
  • SystemColorWindowTextColor: #FF000000

To support High Contrast in the Radial Gauge, an extra layer of abstraction was added in the color brush assignments by means of a resource dictionary:

<ResourceDictionary.ThemeDictionaries>
    <ResourceDictionary x:Key="Default">
        <SolidColorBrush x:Key="RadialGaugeNeedleBrush"
                         Color="{ThemeResource SystemChromeHighColor}" />
        <SolidColorBrush x:Key="RadialGaugeTrailBrush"
                         Color="{ThemeResource SystemChromeHighColor}" />
        <SolidColorBrush x:Key="RadialGaugeScaleBrush"
                         Color="{ThemeResource SystemBaseMediumLowColor}" />
        <SolidColorBrush x:Key="RadialGaugeScaleTickBrush"
                         Color="{ThemeResource SystemBaseMediumLowColor}" />
        <SolidColorBrush x:Key="RadialGaugeTickBrush"
                         Color="{ThemeResource SystemBaseHighColor}" />
        <SolidColorBrush x:Key="RadialGaugeForeground"
                         Color="{ThemeResource SystemBaseHighColor}" />
    </ResourceDictionary>
    <ResourceDictionary x:Key="Light">
        <!-- ... -->
    </ResourceDictionary>
    <ResourceDictionary x:Key="Dark">
        <!-- ... -->
    </ResourceDictionary>
    <ResourceDictionary x:Key="HighContrast">
        <!-- ... -->
    </ResourceDictionary>
</ResourceDictionary.ThemeDictionaries>

The default style of the control was updated to refer to resources from that dictionary:

<Style TargetType="local:RadialGauge">
    <Setter Property="NeedleBrush"
            Value="{ThemeResource RadialGaugeNeedleBrush}" />
    <Setter Property="TrailBrush"
            Value="{ThemeResource RadialGaugeTrailBrush}" />
    <Setter Property="ScaleBrush"
            Value="{ThemeResource RadialGaugeScaleBrush}" />
    <Setter Property="ScaleTickBrush"
            Value="{ThemeResource RadialGaugeScaleTickBrush}" />
    <Setter Property="TickBrush"
            Value="{ThemeResource RadialGaugeTickBrush}" />
    <Setter Property="Foreground"
            Value="{ThemeResource RadialGaugeForeground}" />
    <!-- ... -->
</Style>

The resource dictionary has an entry for ‘High Contrast’ with only brushes out of the limited set of system colors:

<ResourceDictionary x:Key="HighContrast">
    <SolidColorBrush x:Key="RadialGaugeNeedleBrush"
                     Color="{ThemeResource SystemColorHotlightColor}" />
    <SolidColorBrush x:Key="RadialGaugeTrailBrush"
                     Color="{ThemeResource SystemColorHotlightColor}" />
    <SolidColorBrush x:Key="RadialGaugeScaleBrush"
                     Color="{ThemeResource SystemColorWindowColor}" />
    <SolidColorBrush x:Key="RadialGaugeScaleTickBrush"
                     Color="{ThemeResource SystemColorWindowColor}" />
    <SolidColorBrush x:Key="RadialGaugeTickBrush"
                     Color="{ThemeResource SystemColorWindowTextColor}" />
    <SolidColorBrush x:Key="RadialGaugeForeground"
                     Color="{ThemeResource SystemColorWindowTextColor}" />
</ResourceDictionary>

The control now supports High contrast themes, and we didn’t even need to change the code behind.

Ignoring local properties

When a developer places our control on a page, he can locally assign brushes to the control’s color settings – after all that’s why we decorated it with properties. It makes sense for the control to ignore these assignments when a High Contrast theme is applied. For that we need to detect whether we’re in High contrast mode or not, and find a way to get notified when the user enables or disables the feature.

This is where the AccessibilitySettings class enters the picture, with a HighContrast property and a HighContrastChanged event. When the app starts, and whenever the event fires, we update the color scheme if necessary. We first create an instance of the class:

private static readonly AccessibilitySettings ThemeListener = new AccessibilitySettings();

And then register an event handler. That handler may force a redraw of the control – to update the colors – so the control’s Visual Tree should be assembled (but not yet displayed) when we register it. That makes the OnApplyTemplate override the best place to host this code:

ThemeListener.HighContrastChanged += ThemeListener_HighContrastChanged;
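
The handler itself just reapplies the color scheme. A minimal sketch – the actual toolkit handler may do more, and depending on which thread raises the event you may have to dispatch back to the UI thread:

private void ThemeListener_HighContrastChanged(AccessibilitySettings sender, object args)
{
    // Reapply or restore the brushes whenever the user toggles High Contrast.
    OnColorsChanged();
}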

Let’s take a look at the code to change the color scheme. When the app starts, we create a cache to remember the local values for the different colors. All the brushes are defined as dependency properties. To detect whether a dependency property is locally overridden (in XAML or C#, but not by a theme dictionary), we can use ReadLocalValue. It returns the local value, or the so-called UnsetValue that indicates that no value was assigned:

private SolidColorBrush _needleBrush;
private Brush _trailBrush;
private Brush _scaleBrush;
// More brushes ...
_needleBrush = ReadLocalValue(NeedleBrushProperty) as SolidColorBrush;
_trailBrush = ReadLocalValue(TrailBrushProperty) as SolidColorBrush;
_scaleBrush = ReadLocalValue(ScaleBrushProperty) as SolidColorBrush;
// More assignments ...

When we enter a High Contrast theme, we unbind the local values from the dependency properties so that the entries of the theme dictionary are used. And when we enter a non High Contrast theme, we reassign the local values from the cache to reinstall the original color scheme:

private void OnColorsChanged()
{
    if (ThemeListener.HighContrast)
    {
        // Apply High Contrast Theme.
        ClearBrush(_needleBrush, NeedleBrushProperty);
        ClearBrush(_trailBrush, TrailBrushProperty);
        ClearBrush(_scaleBrush, ScaleBrushProperty);
        // More of these ...
    }
    else
    {
        // Apply User Defined or Default Theme.
        RestoreBrush(_needleBrush, NeedleBrushProperty);
        RestoreBrush(_trailBrush, TrailBrushProperty);
        RestoreBrush(_scaleBrush, ScaleBrushProperty);
        // More of these ...
    }

    // ...
}

To clear and restore the brushes, we use ClearValue and SetValue respectively:

private void ClearBrush(Brush brush, DependencyProperty prop)
{
    if (brush != null)
    {
        ClearValue(prop);
    }
}

private void RestoreBrush(Brush source, DependencyProperty prop)
{
    if (source != null)
    {
        SetValue(prop, source);
    }
}

For a deeper dive into Dependency Properties, check this documentation.

In the small sample app that I wrote, I encapsulated all accessibility related code behind (there’s more to come further in this article) in a separate partial class file:

FileStructure

To get these files nested under the xaml.cs file in Visual Studio’s Solution Explorer, I tweaked the project file:

ProjectFile
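
The tweak itself is the classic DependentUpon trick. A hedged sketch, assuming the old-style csproj format with explicit Compile items (the file names are illustrative, not the actual ones from the sample):

<!-- DependentUpon controls the nesting in Solution Explorer. -->
<Compile Include="MainPage.Accessibility.cs">
  <DependentUpon>MainPage.xaml.cs</DependentUpon>
</Compile>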

That sample app has a page with three radial gauge controls. The one in the middle has a local assignment for some colors:

<controls:RadialGauge Unit="Mississippi"
                      Value="20"
                      NeedleBrush="MediumOrchid"
                      TrailBrush="Indigo"
/>

Here’s how that page looks in a non-High Contrast theme. On the left side of the screen there’s the Settings app; on the right side is the main page of the sample app:

HighContrast_Off

Here’s the app in High Contrast mode. Observe the limited color palette, and the fact that we successfully ignored the local colors for the middle gauge:

HighContrast

The UI Automation API

Another way to improve the accessibility of a custom UWP control is to provide screen reader support. Narrator lets you use your device to complete common tasks without mouse or touch input. It reads and interacts with elements on the screen, like text and buttons. Of course that only works if you stick to a protocol. That protocol is Microsoft UI Automation, an API in Windows that enables your apps to provide (and consume) programmatic information about its user interface. Providing programmatic access to most UI elements on the desktop enables assistive technology products, such as Narrator and Magnifier, to provide this information to the end users, e.g. by speech.

One of the most important components of Windows Automation is the so-called UI Automation Tree. This is the hierarchical representation of your desktop window, its child elements (open windows) and their UI components (menus, buttons, lists, …). The UI Automation Tree makes a difference between Control elements (interactive) and Content elements, but it makes no difference in technology: it works against all stacks: WinForms, Web, XAML, …

Supporting Narrator

Creating and registering an Automation Peer

An Automation Peer is a class that helps expose the content of a UI element class to Windows UI Automation. It ensures that the UI Automation Tree does not have to rely on high-level assumptions, so it improves the service of components like Narrator and Magnifier.

To create an automation peer for a class, you just create a descendant from AutomationPeer – FrameworkElementAutomationPeer is a good parent for a XAML control – and specify the described class (the ‘owner’) in the constructor:

public class RadialGaugeAutomationPeer : FrameworkElementAutomationPeer
{
    public RadialGaugeAutomationPeer(RadialGauge owner)
        : base(owner)
    { }

    // ...
}

To activate the automation peer, you also have to override the OnCreateAutomationPeer method in the owner class (RadialGauge in our case):

protected override AutomationPeer OnCreateAutomationPeer()
{
    return new RadialGaugeAutomationPeer(this);
}

Basic overrides

There are two methods that you would probably override in every automation peer. The first one is GetChildrenCore(). It returns the list of child elements that you want to expose to UI Automation. In the case of the RadialGauge, we want to expose the control as a single element and hide its details. So the override returns no children:

protected override IList<AutomationPeer> GetChildrenCore()
{
    return null;
}

The second important automation peer method is GetNameCore() which returns the main description of the component. If you want more than the fully qualified class name, then you should override it. Here’s what we did in the RadialGauge to make it return the class name and unit measure:

protected override string GetNameCore()
{
    var gauge = (RadialGauge)Owner;
    return "radial gauge. " + (string.IsNullOrWhiteSpace(gauge.Unit) ? "no unit specified. " : "unit " + gauge.Unit + ". ");
}

Tip: provide punctuation and pauses (blanks) in the result string to create a natural description. When Narrator reads the screen, it respects all of these.

After the name, Narrator announces the type of control. Custom is the default return value for GetAutomationControlTypeCore(), so this particular override is actually redundant:

protected override AutomationControlType GetAutomationControlTypeCore()
{
    return AutomationControlType.Custom;
}

I just want to point out that the type comes from the AutomationControlType enumeration. So you can not add your own here, but you can still expose a lot more details of your control.

Refining the Automation Peer

To find out more details about your control, Windows Automation will call GetPatternCore() a few times to see if it supports some protocols. One of these protocols is RangeValue which describes the controls that have a Value that is between a Minimum and a Maximum, controls such as … the RadialGauge.

To fulfill the protocol we have to implement the IRangeValueProvider interface:

public class RadialGaugeAutomationPeer :
    FrameworkElementAutomationPeer,
    IRangeValueProvider
{
    // ...
}
public double Value => ((RadialGauge)Owner).Value;
public double Minimum => ((RadialGauge)Owner).Minimum;
public double SmallChange => ((RadialGauge)Owner).StepSize;
public bool IsReadOnly => !((RadialGauge)Owner).IsInteractive;
// ...
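
For completeness, the remaining IRangeValueProvider members map to the gauge in the obvious way. A minimal sketch – not the toolkit’s exact code, and the LargeChange mapping is an assumption:

public double Maximum => ((RadialGauge)Owner).Maximum;

// Assumption: a 'large' change equals five small steps.
public double LargeChange => 5 * ((RadialGauge)Owner).StepSize;

public void SetValue(double value)
{
    ((RadialGauge)Owner).Value = value;
}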

Then we have to override GetPatternCore() to return a reference for each protocol that we implemented:

protected override object GetPatternCore(PatternInterface patternInterface)
{
    if (patternInterface == PatternInterface.RangeValue)
    {
        // Expose RangeValue properties.
        return this;
    }

    return base.GetPatternCore(patternInterface);
}

That’s it! Narrator will now say the description of our control, followed by the RangeValue information (Value, Minimum and Maximum). If you activate Narrator (in Settings) and TAB to the different gauges in the sample app, they will be nicely described. Here’s a (silent) screenshot:

Narrator

You can use the Inspect tool from the Windows SDK to discover and visualize all the information that Windows Automation is able to extract from your control:

Inspect_Narrator

Supporting keyboard input

A lot of users don’t use the mouse or a touch screen to interact with their computer, but only the keyboard. So it makes sense for a UWP control to accept keyboard input when it has the focus. The RadialGauge has a handler for the KeyDown event. Its value can be increased and decreased with the right and left arrow keys respectively. The Control key makes the difference between a large change of 5 units and a small change of 1:

private void RadialGauge_KeyDown(object sender, KeyRoutedEventArgs e)
{
    var step = 1;
    var ctrl = Window.Current.CoreWindow.GetKeyState(VirtualKey.Control);
    if (ctrl.HasFlag(CoreVirtualKeyStates.Down))
    {
        step = 5;
    }

    if (e.Key == VirtualKey.Left)
    {
        Value = Math.Max(Minimum, Value - step);
        e.Handled = true;
        return;
    }

    if (e.Key == VirtualKey.Right)
    {
        Value = Math.Min(Maximum, Value + step);
        e.Handled = true;
    }
}
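
For this handler to fire, the control must be focusable and must actually subscribe to the event. A minimal sketch, assuming the wiring happens in OnApplyTemplate; the shipped control may hook this up elsewhere:

protected override void OnApplyTemplate()
{
    base.OnApplyTemplate();

    // Let the gauge receive keyboard focus and react to key presses.
    IsTabStop = true;
    KeyDown -= RadialGauge_KeyDown;
    KeyDown += RadialGauge_KeyDown;

    // ... remaining template work ...
}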

This article described three techniques to improve the accessibility of a UWP Custom Control. If you want to know more on this topic, check this excellent starting page on the DevCenter.  You’ll quickly learn that spending some time and effort on accessibility makes your controls and apps better for everyone.

If you want to play with the code, the small sample app with the accessible RadialGauge lives here on GitHub.

Enjoy!