
Machine Learning with ML.NET in UWP: Field-Aware Factorization Machine

In this article we demonstrate how to use a Field-Aware Factorization Machine to recommend hotels on the Las Vegas Strip, based on the traveler type (solo, family, business, …) and the season. We’ll use ML.NET for the Machine Learning stuff, OxyPlot for visualization, and a UWP app to host it all. This is what the sample page looks like:

FfmRecommendation

This page looks pretty much like the one in our previous article on recommendation in Machine Learning. That article also recommended hotels – but since we were using Matrix Factorization, we could base the recommendation on only one feature (i.e. traveler type). In the previous article, the prediction model was defined by preparing the two feature columns (TravelerType and Hotel) and passing these to a recommendation algorithm, like this:

var pipeline = _mlContext.Transforms.Conversion.MapValueToKey("Hotel")
                .Append(_mlContext.Transforms.Conversion.MapValueToKey("TravelerType"))
                .Append(_mlContext.Recommendation().Trainers.MatrixFactorization(
                                    labelColumnName: "Label",
                                    matrixColumnIndexColumnName: "Hotel",
                                    matrixRowIndexColumnName: "TravelerType"))
                .Append(_mlContext.Transforms.Conversion.MapKeyToValue("Hotel"))
                .Append(_mlContext.Transforms.Conversion.MapKeyToValue("TravelerType"));

The corresponding pipeline in this article would look very similar:

var pipeline = _mlContext.Transforms.Categorical.OneHotEncoding("TravelerTypeOneHot", "TravelerType")
                .Append(_mlContext.Transforms.Categorical.OneHotEncoding("HotelOneHot", "Hotel"))
                .Append(_mlContext.Transforms.Concatenate("Features", "TravelerTypeOneHot", "HotelOneHot"))
                .Append(_mlContext.BinaryClassification.Trainers.FieldAwareFactorizationMachine(new string[] { "Features" }));

The advantage of a Field-Aware Factorization Machine over Matrix Factorization is that you’re not limited to two features. This allows you to provide much better recommendations. Hotels could be recommended on a combination of traveler type, season, country of origin, etc.

Field-Aware Factorization in Machine Learning

A Field-Aware Factorization Machine (FFM) is a recommendation algorithm that is specialized in deriving knowledge from large and sparse datasets. It recognizes feature conjunctions in feature vectors. This is particularly useful in Click-Through Rate prediction (CTR). Check this article for an introduction and a comparison to other algorithms.
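For reference, the core of the model is the pairwise interaction term from the original FFM formulation by Juan et al. – a sketch of the general math, not of ML.NET’s exact implementation:

$$\phi_{\text{FFM}}(\mathbf{w}, \mathbf{x}) = \sum_{j_1=1}^{n} \sum_{j_2=j_1+1}^{n} \langle \mathbf{w}_{j_1, f_{j_2}}, \mathbf{w}_{j_2, f_{j_1}} \rangle \, x_{j_1} x_{j_2}$$

Each feature keeps a separate latent vector per field, so the interaction between features $j_1$ and $j_2$ uses the vectors tied to each other’s fields ($f_{j_2}$ and $f_{j_1}$).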

The FFM algorithm takes a vector of numerical features as input. When we’re dealing with categorical data (countries, months, traveler types, …) we need to transform these into numbers. In the context of FFM, the right approach is One-Hot Encoding, which boils down to pivoting feature values into separate columns. There’s an excellent beginner’s guide to one-hot encoding right here, but this illustration from Chris Albon’s Machine Learning Flashcards says it all:

 

One-Hot_Encoding

The one-hot encoding transformation extends the schema with a lot of new features, sparsely filled. That’s exactly how FFM likes it…
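To make the transformation tangible, here’s a minimal, self-contained sketch (not taken from the sample app) that one-hot encodes a tiny in-memory Season column with ML.NET and reads the resulting vectors back:

using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;

public class SeasonSample
{
    public string Season;
}

public class SeasonEncoded
{
    // A vector column maps to a float array: one slot per distinct season value.
    public float[] SeasonOneHot;
}

public static class OneHotEncodingDemo
{
    public static void Run()
    {
        var mlContext = new MLContext(seed: null);

        var samples = new List<SeasonSample>
        {
            new SeasonSample { Season = "Winter" },
            new SeasonSample { Season = "Summer" },
            new SeasonSample { Season = "Summer" }
        };
        var data = mlContext.Data.LoadFromEnumerable(samples);

        // Pivot the categorical Season column into a sparse binary vector.
        var encoded = mlContext.Transforms.Categorical
            .OneHotEncoding("SeasonOneHot", "Season")
            .Fit(data)
            .Transform(data);

        // E.g. [1, 0] for Winter and [0, 1] for Summer in this tiny dataset.
        var vectors = mlContext.Data
            .CreateEnumerable<SeasonEncoded>(encoded, reuseRowObject: false)
            .Select(e => e.SeasonOneHot)
            .ToList();
    }
}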

Field-Aware Factorization in ML.NET

With the FieldAwareFactorizationMachineTrainer and the OneHotEncodingTransformer, ML.NET has all the ingredients to implement a Field-Aware Factorization Machine scenario.

Let’s write some code

Here’s what a typical Machine Learning use case looks like:

MachineLearningModel

Raw data is collected, then cleaned up, completed, and transformed to fit the algorithm. Part of the data is used to train the model, another part is used to evaluate it. When the model passes all tests, it is persisted for consumption.

The sample app uses a lightweight MVVM architecture. The beef of the ML.NET code is in this Model class.

Getting the Raw Data

The raw data in the sample app comes from a comma-separated flat file with Trip Advisor reviews from 2015 for hotels on the Las Vegas Strip:

RecommendationDataSet

Preparing the Data

While we read the data, we already start the preparation. FFM is a binary classification algorithm, so we need to transform the rating (0-5) to a Boolean value (“recommended or not”). Here’s the input structure:

public class FfmRecommendationData
{
    public bool Label;

    public string TravelerType;

    public string Season;

    public string Hotel;
}

The predicted label will also be a Boolean, and the algorithm also provides the corresponding probability. Here’s the model’s output structure:

public class FfmRecommendationPrediction
{
    public bool PredictedLabel;

    public float Probability;

    public string TravelerType;

    public string Season;

    public string Hotel;
}

Before we read all the data into an IDataView, we need to create an MLContext instance:

private MLContext _mlContext = new MLContext(seed: null);

While we read the data, we transform the score (0 to 5) into a Boolean by comparing it to a threshold value. We used 3 as the threshold, since the ratings in the dataset are generally pretty high – which is probably why they were allowed to be made public.
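That threshold sits in a simple field on the Model class. A minimal sketch (the exact declaration in the sample may differ):

// Rating threshold that turns a 0-5 score into a recommended/not-recommended Boolean.
// The value 3 comes from the paragraph above; the field is used in the Load() snippet below.
private readonly double _ratingThreshold = 3;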

With the LoadFromEnumerable() method we transform it into an IDataView:

private IDataView _allData;
private ITransformer _model;

public IEnumerable<FfmRecommendationData> Load(string trainingDataPath)
{
    // Populating an IDataView from an IEnumerable.
    var data = File.ReadAllLines(trainingDataPath)
        .Skip(1)
        .Select(x => x.Split(';'))
        .Select(x => new FfmRecommendationData
        {
            Label = double.Parse(x[4]) > _ratingThreshold,
            Season = x[5],
            TravelerType = x[6],
            Hotel = x[13]
        });

    _allData = _mlContext.Data.LoadFromEnumerable(data);

    // Just 'return data;' would also do the trick...
    return _mlContext.Data.CreateEnumerable<FfmRecommendationData>(_allData, reuseRowObject: false);
}

For the next step in preparing the data we will rely on some ML.NET transformations. First we apply a OneHotEncoding transformation to all the features, to transform the categorical data into numbers. Then all features are combined into a vector with a call to Concatenate(). The model building pipeline is then completed with a FieldAwareFactorizationMachine:

var pipeline = _mlContext.Transforms.Categorical.OneHotEncoding("TravelerTypeOneHot", "TravelerType")
    .Append(_mlContext.Transforms.Categorical.OneHotEncoding("SeasonOneHot", "Season"))
    .Append(_mlContext.Transforms.Categorical.OneHotEncoding("HotelOneHot", "Hotel"))
    .Append(_mlContext.Transforms.Concatenate("Features", "TravelerTypeOneHot", "SeasonOneHot", "HotelOneHot"))
    .Append(_mlContext.BinaryClassification.Trainers.FieldAwareFactorizationMachine(new string[] { "Features" }));
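As a side note – this is a sketch based on our reading of the trainer’s signature, not what the sample app does – the FieldAwareFactorizationMachine trainer accepts multiple feature columns and treats each of them as a separate field, so the one-hot columns could also be passed in without concatenating them:

// Sketch: one field per one-hot encoded column instead of a single concatenated "Features" vector.
var fieldAwarePipeline = _mlContext.Transforms.Categorical.OneHotEncoding("TravelerTypeOneHot", "TravelerType")
    .Append(_mlContext.Transforms.Categorical.OneHotEncoding("SeasonOneHot", "Season"))
    .Append(_mlContext.Transforms.Categorical.OneHotEncoding("HotelOneHot", "Hotel"))
    .Append(_mlContext.BinaryClassification.Trainers.FieldAwareFactorizationMachine(
        new[] { "TravelerTypeOneHot", "SeasonOneHot", "HotelOneHot" }));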

Training the Model

To train the model, we send 450 randomly chosen rows from the dataset to it. The rows are selected with some methods from the DataOperationsCatalog: first we rearrange the dataset with ShuffleRows(), then we pick some rows with TakeRows():

var trainingData = _mlContext.Data.ShuffleRows(_allData);
trainingData = _mlContext.Data.TakeRows(trainingData, 450);
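Alternatively – a sketch, assuming this ML.NET preview already exposes TrainTestSplit() on the DataOperationsCatalog – you could split the data once and keep both halves, one for training and one for evaluation:

// Sketch: a single split that yields both a training and a test set.
var split = _mlContext.Data.TrainTestSplit(_allData, testFraction: 0.2);
var trainingData = split.TrainSet;
var testData = split.TestSet;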

After training the model, we’ll create a strongly typed PredictionEngine from it for individual recommendations. So we declare a field to host it:

private PredictionEngine<FfmRecommendationData, FfmRecommendationPrediction> _predictionEngine;

The model is created and trained with a call to Fit(), and then we create the prediction engine from it with CreatePredictionEngine():

_model = pipeline.Fit(trainingData);
_predictionEngine = _mlContext.Model.CreatePredictionEngine<FfmRecommendationData, FfmRecommendationPrediction>(_model);

Testing the Model

To test the model, we send another 100 random rows from the dataset to it and call Transform() to generate the predictions. A call to the Evaluate() method in the BinaryClassificationCatalog will compare the predicted labels to the original ones:

public CalibratedBinaryClassificationMetrics Evaluate(string testDataPath)
{
    var testData = _mlContext.Data.ShuffleRows(_allData);
    testData = _mlContext.Data.TakeRows(testData, 100);

    var scoredData = _model.Transform(testData);
    var metrics = _mlContext.BinaryClassification.Evaluate(
        data: scoredData, 
        labelColumnName: "Label", 
        scoreColumnName: "Probability", 
        predictedLabelColumnName: "PredictedLabel");

    // Place a breakpoint here to inspect the quality metrics.
    return metrics;
}

The result of the evaluation is an instance of CalibratedBinaryClassificationMetrics with useful statistics such as accuracy, entropy, recall, and F1-score:

CalibratedMetrics
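Here’s a quick sketch of how some of these could be written to the debug output (the property names come from CalibratedBinaryClassificationMetrics; the sample app just inspects them in the debugger):

// Sketch: dump a few quality metrics to the debug output.
System.Diagnostics.Debug.WriteLine($"Accuracy: {metrics.Accuracy:P2}");
System.Diagnostics.Debug.WriteLine($"F1 score: {metrics.F1Score:P2}");
System.Diagnostics.Debug.WriteLine($"Log-loss: {metrics.LogLoss:F4}");
System.Diagnostics.Debug.WriteLine($"Entropy:  {metrics.Entropy:F4}");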

Persisting the Model

When you’re happy with the model’s quality, you can serialize and persist it for later use with a call to the Save() method from the ModelOperationsCatalog:

public void Save(string modelName)
{
    var storageFolder = ApplicationData.Current.LocalFolder;
    string modelPath = Path.Combine(storageFolder.Path, modelName);

    _mlContext.Model.Save(_model, inputSchema: null, filePath: modelPath);
}
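For completeness, the counterpart that reads the model back in later could look like this (a sketch, assuming the matching Load() overload in this ML.NET preview):

public void Load(string modelName)
{
    var storageFolder = ApplicationData.Current.LocalFolder;
    string modelPath = Path.Combine(storageFolder.Path, modelName);

    // Deserialize the model; the out parameter returns the input schema it was saved with.
    _model = _mlContext.Model.Load(modelPath, out DataViewSchema inputSchema);
}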

Consuming the Model

There are two ways to consume the model. With a call to Transform() you can generate predictions or recommendations for a group of input records. The resulting IDataView can be transformed to a list of prediction records with CreateEnumerable():

public IEnumerable<FfmRecommendationPrediction> Predict(IEnumerable<FfmRecommendationData> recommendationData)
{
    // Group prediction
    var data = _mlContext.Data.LoadFromEnumerable(recommendationData);
    var predictions = _model.Transform(data);
    return _mlContext.Data.CreateEnumerable<FfmRecommendationPrediction>(predictions, reuseRowObject: false);
}

The strongly typed PredictionEngine that we created after training the model can be used for single recommendations. Its Predict() method runs the prediction pipeline for a single input record:

public FfmRecommendationPrediction Predict(FfmRecommendationData recommendationData)
{
    // Single prediction
    return _predictionEngine.Predict(recommendationData);
}

When the predicted label is false, the model does not recommend the hotel/traveler type/season combination. In that case we negate the displayed probability (the score) so that the values span a range from –1 (strongly discouraged) to +1 (strongly recommended). This is done in the MVVM View (the code-behind of the page):

var result = await ViewModel.Predict(recommendationData);
if (!result.PredictedLabel)
{
    // Bring to a range from -1 (highly discouraged) to +1 (highly recommended).
    result.Probability = -result.Probability;
}

 

The Model in Action

Here’s the FFM Recommendation page from the sample app again:

FfmRecommendation

When the model is ready for operation, the combo boxes are populated and unlocked. When you change the traveler type or the season, a group prediction is made for all the hotels in the data set. Its result is displayed on the left in a horizontal bar chart. We won’t dive into its details, since it’s basically the same diagram as the one in the previous article (we’re just displaying the probability (-1 to +1) instead of the predicted rating (0 to 5)).

When you select a hotel in the combo box in the bottom left corner, a single prediction is made, and the result is displayed next to it. The diagram for the group predictions only displays recommended hotels, but in the single prediction you can pick your own hotel. That’s why we decided to display a negative probability for negative advice.

If you want to run this scenario yourself, feel free to download the sample app. Its source lives here on GitHub.

Enjoy!

Machine Learning with ML.NET in UWP: Recommendation

In this article we describe how to define, train, evaluate, persist, and use an ML.NET Recommendation model in a UWP app. The blog post is part of a series on implementing different Machine Learning scenarios with .NET Open Source frameworks and components such as ML.NET and OxyPlot.

All articles in the series are supported by the same UWP sample app that lives here on GitHub. Since the previous article was published, this sample app was upgraded to the latest prereleases of ML.NET thanks to Pull Requests from the Microsoft ML.NET Team itself (thanks Eric!). This means that the syntax in the code snippets is quite different from the previous articles, but much closer to the imminent official release.

Here’s what the Recommendation page in the sample app looks like:

Recommendation

It builds a model to generate recommendations for hotels on the Las Vegas Strip for a selected traveler type (single, family, business, …):

  • when you select a traveler type in the combo box, the top 10 recommended hotels appear in the diagram, and
  • when you select a hotel in the second combo box, a predicted rating will appear next to it.

Recommendation in Machine Learning

Machine Learning recommender systems are highly popular in e-commerce and social networks. They’re used for recommending books, TV series, music, events, products, friends, dating profiles, and a lot more.

There are two approaches for generating recommendations:

  • Content-Based Filtering recommends items to a user that are similar to items the same user previously rated highly. The advantage of this is transparency (the model can explain why it recommends an item). Unfortunately this approach does not scale well with large data.
  • Collaborative Filtering recommends items to a user that were highly rated by other – but similar – users. In most real-world scenarios not every user has rated every item, so the base data can be very sparse. This makes the approach unsuitable in some scenarios.

Matrix Factorization in Machine Learning

Matrix Factorization is a common technique to solve the sparsity problem with Collaborative Filtering that we just mentioned. In a nutshell, its goal is to mass-predict the missing ratings. Matrix Factorization is entirely based on linear algebra, which is something that your CPUs, GPUs, and/or AI Accelerators are pretty good at. If you want to dive into the mathematical details, allow me to recommend (pun intended) the article with the very appropriate name A Gentle Introduction to Matrix Factorization for Machine Learning.
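In formula form – a sketch of the standard textbook formulation, not of ML.NET’s internals – the sparse rating matrix is approximated by the product of two much smaller matrices, and a missing rating is predicted from the matching latent vectors:

$$R \approx P^{\top} Q, \qquad \hat{r}_{ui} = \mathbf{p}_u \cdot \mathbf{q}_i$$

where $\mathbf{p}_u$ is the latent vector of ‘user’ $u$ (the traveler type in our sample) and $\mathbf{q}_i$ the latent vector of item $i$ (the hotel).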

Major advantages of this algorithm are that it scales very well with large data and that it is very fast. You don’t have to take my word for it, but there must be a reason why Amazon and Netflix rely on it. The algorithm has the disadvantage that it cannot always easily explain why it recommends an item. You must have stumbled upon recommendations like this before:

netflix

Before we dive into the code, allow us to clarify something: Matrix Factorization does NOT answer the question “What items would you recommend for this user?”. Instead it solves the “Here’s a list of products and a (list of) user(s), please predict their ratings” problem. So when you use it in your apps, there is some preprocessing (selecting the products to evaluate) and some postprocessing (filtering the relevant recommendations) to do. Basically the algorithm always has to deal with too much data. Don’t worry about that: Matrix Factorization is a real Weapon of Mass Prediction.

Matrix Factorization in ML.NET

For Matrix Factorization in ML.NET you’ll need the MatrixFactorizationTrainer class. It comes in a separate NuGet package (Microsoft.ML.Recommender):

RecommenderNuGet

Model Input and Output

For training and testing the model, we’ll use a 2015 dataset with 510 Las Vegas hotel ratings from TripAdvisor. Here’s what it looks like:

RecommendationDataSet

Matrix Factorization predicts the rating (“Label”) based on only two fields (“Features”). If you have to deal with more fields, you’ll need the FieldAwareFactorizationMachine instead.

In the sample app we choose TravelerType and Hotel as features to respectively play the roles of ‘similar user’ and ‘recommended item’. The Score column contains the rating and will play the role of ‘label’ (the thing to predict). Since the prediction engine’s output column is also called Score, we renamed it to Label for the input.

Here’s the structure of input samples that we will feed the model with:

public class RecommendationData
{
    public float Label;

    public string TravelerType;

    public string Hotel;
}

The prediction looks like this:

public class RecommendationPrediction
{
    public float Score;

    public string TravelerType;

    public string Hotel;
}

Observe the lack of LoadColumn and ColumnName attributes on top of the fields – we had these in all the previous posts in this article series. We don’t need the attributes here because we’re not using a TextLoader to read the training and testing data sets. Instead we’ll create our IDataView with a call to the LoadFromEnumerable() method. This same method allows you to populate the model with records from a database:

private IDataView trainingData;

public IEnumerable<RecommendationData> Load(string trainingDataPath)
{
    var data = File.ReadAllLines(trainingDataPath)
        .Skip(1)
        .Select(x => x.Split(';'))
        .Select(x => new RecommendationData
        {
            Label = uint.Parse(x[4]),
            TravelerType = x[6],
            Hotel = x[13]
        })
        .OrderBy(x => (x.GetHashCode())) // Cheap Randomization.
        .Take(400);

    // Populating an IDataView from an IEnumerable.
    trainingData = _mlContext.Data.LoadFromEnumerable(data);

    // Keep DataView in memory.
    trainingData = _mlContext.Data.Cache(trainingData);

    // Populating an IEnumerable from an IDataView.
    return _mlContext.Data.CreateEnumerable<RecommendationData>(trainingData, reuseRowObject: false);
}

Part of the data set will be used for training, and another for evaluating the model. Since the original data set is sorted on Hotel name, we applied cheap randomization logic to the rows by sorting them on their GetHashCode() value.

The Cache() method keeps the selected columns (in our case: all columns) in memory after they’re accessed for the first time. For iterative algorithms this really is a time saver – at least if the data fits into memory.
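Side note: the DataOperationsCatalog methods used in the FFM article above would achieve the same selection without the GetHashCode() trick. A sketch (allRatings is a hypothetical variable standing in for the full parsed list):

// Sketch: let ML.NET shuffle and select the training rows.
// allRatings: the full IEnumerable<RecommendationData> parsed from the file.
var allData = _mlContext.Data.LoadFromEnumerable(allRatings);
var shuffled = _mlContext.Data.ShuffleRows(allData);
trainingData = _mlContext.Data.TakeRows(shuffled, 400);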

Defining and Building the Model

The recommendation model is an ITransformer that is created from an EstimatorChain with a MatrixFactorization at its heart. You have to specify the label (labelColumn), the two features (matrixRowIndexColumnName and matrixColumnIndexColumnName), and some options to fine-tune the algorithm. Before sending the feature values to the transformer, they’re added to a dictionary with MapValueToKey(). The reverse function of that is MapKeyToValue(): it ensures that the original values are returned with the predicted score.

Here’s the whole pipeline:

private ITransformer _model;

public void Build()
{
    var pipeline = _mlContext.Transforms.Conversion.MapValueToKey("Hotel")
                    .Append(_mlContext.Transforms.Conversion.MapValueToKey("TravelerType"))
                    .Append(_mlContext.Recommendation().Trainers.MatrixFactorization(
                                        labelColumn: DefaultColumnNames.Label,
                                        matrixColumnIndexColumnName: "Hotel",
                                        matrixRowIndexColumnName: "TravelerType",
                                        // Optional fine tuning:
                                        numberOfIterations: 20,
                                        approximationRank: 8,
                                        learningRate: 0.4))
                    .Append(_mlContext.Transforms.Conversion.MapKeyToValue("Hotel"))
                    .Append(_mlContext.Transforms.Conversion.MapKeyToValue("TravelerType"));

    // Place a breakpoint here to peek the training data.
    var preview = pipeline.Preview(trainingData, maxRows: 10);

    _model = pipeline.Fit(trainingData);
}

The extremely useful Preview() method was recently added to the API. It allows you to inspect the content and schema of the pipeline while debugging – it feels a bit like the old SSIS Data Viewer:

PreviewSchema

PreviewRowContent

The prediction model is trained with a Fit() call.

Evaluating the Model

It’s always a good idea to evaluate your freshly trained model. Typically this is done by sending it a set of previously unseen – but labeled – data rows. The Transform() call generates the predictions, while Evaluate() compares these with the original labels:

public RegressionMetrics Evaluate(string testDataPath)
{
    //var testData = _mlContext.Data.LoadFromTextFile<RecommendationData>(testDataPath);
    var data = File.ReadAllLines(testDataPath)
        .Skip(1)
        .Select(x => x.Split(';'))
        .Select(x => new RecommendationData
        {
            Label = uint.Parse(x[4]),
            TravelerType = x[6],
            Hotel = x[13]
        })
        .OrderBy(x => (x.GetHashCode())) // Cheap Randomization.
        .TakeLast(200);

    var testData = _mlContext.Data.LoadFromEnumerable(data);
    var scoredData = _model.Transform(testData);
    var metrics = _mlContext.Recommendation().Evaluate(scoredData);

    // Place a breakpoint here to inspect the quality metrics.
    return metrics;
}

The evaluation returns a RegressionMetrics instance with useful information on the quality of the model – such as the coefficient of determination, and the relative squared error:

RegressionMetrics

If you notice that your model lacks accuracy, then you need to fine-tune its parameters and/or provide more representative training data and/or select another algorithm.
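A small sketch of how such a quality gate could look in code before persisting the model (the thresholds and the file name are arbitrary example values, not taken from the sample app):

// Sketch: only keep the model when it clears a minimal quality bar.
var metrics = Evaluate(testDataPath);
if (metrics.RSquared > 0.2 && metrics.RootMeanSquaredError < 1.0)
{
    Save("recommendationModel.zip"); // hypothetical file name
}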

Persisting the Model

The model can be serialized and persisted with a call to Save():

public void Save(string modelName)
{
    var storageFolder = ApplicationData.Current.LocalFolder;
    string modelPath = Path.Combine(storageFolder.Path, modelName);

    _mlContext.Model.Save(_model, inputSchema: null, filePath: modelPath);
}

Inferencing with the Model

There are two ways to create recommendation scores. The first one generates a prediction for a single feature combination: a score for one specific traveler type/hotel combination. The API for this scenario couldn’t be more straightforward: you create a prediction engine with CreatePredictionEngine() and then you call Predict() to … predict:

public RecommendationPrediction Predict(RecommendationData recommendationData)
{
    // Single prediction
    var predictionEngine = _model.CreatePredictionEngine<RecommendationData, RecommendationPrediction>(_mlContext);
    return predictionEngine.Predict(recommendationData);
}

This code is triggered when you select a hotel from the lower left combo box on the page:

Recommendation

The second way to generate recommendations takes a list of feature pairs instead of a single one. When you select an entry in the traveler type combo box, we first create a list of RecommendationData records – one for each hotel in the original data set. Then we call the Predict() method in the ViewModel – the sample app uses a lightweight MVVM architecture:

// Group Prediction
var recommendations = new List<RecommendationData>();
foreach (var hotel in ViewModel.Hotels)
{
    recommendations.Add(new RecommendationData
    {
        Hotel = hotel,
        TravelerType = TravelerTypesCombo.SelectedValue.ToString()
    });
}
var predictions = await ViewModel.Predict(recommendations);

This list is changed into an IDataView with the same LoadFromEnumerable() call that we encountered when loading the training data. The recommendation model transforms it into an IDataView that adheres to the output schema through the Transform() method. Finally, with the CreateEnumerable() method this structure is translated to a list of prediction entities:

public IEnumerable<RecommendationPrediction> Predict(IEnumerable<RecommendationData> recommendationData)
{
    // Group prediction
    var data = _mlContext.Data.LoadFromEnumerable(recommendationData);
    var predictions = _model.Transform(data);
    return _mlContext.Data.CreateEnumerable<RecommendationPrediction>(predictions, reuseRowObject: false);
}

There are 21 hotels in the data set, so this method returns 21 ratings. The end user is of course not interested in all of these. With a little LINQ query you can get the 10 most appropriate recommendations:

var recommendationsResult = predictions
        .OrderByDescending(p => p.Score)
        .Take(10)
        .Reverse();

[Note: The Reverse() is only there because we build up the bar chart from bottom to top.]

A word of warning

The current NuGet package for Microsoft.ML carries the v1.0.0-preview tag, so we may be close to an official release. This is not the case for Microsoft.ML.Recommender, which seems to need some extra stabilization sprints. In its current version, Matrix Factorization yields different types of exceptions when you’re running in x86 mode. With a little luck you only get weird results like these:

Recommendation_x86

Don’t worry, it’s a known issue, the team is working on it…

Let there be XAML

Let’s jump to the visualization of the predictions. For the horizontal bar chart on the sample page, we borrowed the diagram from the MultiClass Classification sample. XAML-wise we declared a PlotView with its PlotModel. The model has a CategoryAxis for the hotel names and a LinearAxis for the predicted score (0-5). The values are represented in a BarSeries:

<oxy:PlotView x:Name="Diagram"
                Background="Transparent"
                BorderThickness="0"
                Margin="0 0 40 60"
                Grid.Column="1">
    <oxy:PlotView.Model>
        <oxyplot:PlotModel Subtitle="Recommended Hotels"
                            PlotAreaBorderColor="{x:Bind OxyForeground}"
                            TextColor="{x:Bind OxyForeground}"
                            TitleColor="{x:Bind OxyForeground}"
                            SubtitleColor="{x:Bind OxyForeground}">
            <oxyplot:PlotModel.Axes>
                <axes:CategoryAxis Position="Left"
                                    TextColor="{x:Bind OxyForeground}"
                                    TicklineColor="{x:Bind OxyForeground}"
                                    TitleColor="{x:Bind OxyForeground}" />
                <axes:LinearAxis Position="Bottom"
                                    Title="Predicted Score (higher is better)"
                                    TextColor="{x:Bind OxyForeground}"
                                    TicklineColor="{x:Bind OxyForeground}"
                                    TitleColor="{x:Bind OxyForeground}" />
            </oxyplot:PlotModel.Axes>
            <oxyplot:PlotModel.Series>
                <series:BarSeries LabelPlacement="Inside"
                                    LabelFormatString="{}{0:0.00}"
                                    TextColor="{x:Bind OxyText}"
                                    FillColor="{x:Bind OxyFill}" />
            </oxyplot:PlotModel.Series>
        </oxyplot:PlotModel>
    </oxy:PlotView.Model>
</oxy:PlotView>

When the predictions come in, we add a category (with the hotel name) and a BarItem (with the score) for each of the at most 10 hotels. These are added to their respective series, and the plot is refreshed:

// Update diagram
var categories = new List<string>();
var bars = new List<BarItem>();
foreach (var prediction in recommendationsResult)
{
    categories.Add(prediction.Hotel);
    bars.Add(new BarItem { Value = prediction.Score });
}

var plotModel = Diagram.Model;

(plotModel.Axes[0] as CategoryAxis).ItemsSource = categories;
(plotModel.Series[0] as BarSeries).ItemsSource = bars;
plotModel.InvalidatePlot(true);

That’s it for today. The UWP sample app – which is featured on the ML.NET Community Samples page – lives here on GitHub.

Enjoy!