How does the recommendations engine work?
- Updated on 19 Oct 2022
- 10 minutes to read
The Totara recommendations engine recommends content from both Totara Engage and Totara Learn. Content includes resources, playlists, workspaces and courses.
The Totara recommendations engine uses the database to build a model. The model is then used to determine and present recommended content for the current user, all in real time.
Building the model is performed on a regular schedule and is called 'training'. Training is when your data is used to create a 'point-in-time' instance. We call this instance the model. The model is based on the data available at the time training takes place. This is important to note if you are adding content, as activity associated with the content will not be evident in the model until training occurs.
How are recommendations determined?
The recommendations engine provides two types of content recommendations:
- User recommendations: Content recommended to the current user
- Related content: Similar content to the specified content
Both of these recommendation types are based on one of the following:
- The user’s past interactions with the content. In this case, similar content to a given content item is based only on how the content is being interacted with.
- The user’s past interactions with the content, and metadata of the content and users.
Interactions are the actions users take on content items. An interaction is registered by the system when a user enrols in a course, or views, likes or comments on a resource, playlist or workspace. The strength of an interaction is represented on a binary scale. User actions set the interaction value as shown:
- A single view on a resource, playlist or workspace sets the interaction value to 0.
- A subsequent view OR a single view with a like and/or comment sets the interaction value to 1. Subsequent activities do not change the interaction value after it has been set to 1.
- When a user enrols in a course with self-enrolment enabled, the interaction value is set to 1. Viewing a course is not recorded as an interaction.
An interaction with a value of 0 is termed a leisurely interaction, while an interaction with a value of 1 is termed a positive interaction.
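The rules above can be sketched in Python (a minimal illustration; the function and event names are hypothetical, not part of Totara's actual API):

```python
def interaction_value(events):
    """Compute the binary interaction value for a user/content pair
    from a list of action events, per the rules above.

    events: list of strings such as "view", "like", "comment", "enrol".
    Returns 0 (leisurely) or 1 (positive), or None if no interaction yet.
    """
    views = sum(1 for e in events if e == "view")
    liked_or_commented = any(e in ("like", "comment") for e in events)
    if "enrol" in events:
        return 1       # enrolling in a self-enrolment course is positive
    if views == 0 and not liked_or_commented:
        return None    # no interaction registered yet
    if views >= 2 or liked_or_commented:
        return 1       # repeat view, or a view with a like/comment
    return 0           # a single view only


print(interaction_value(["view"]))          # 0
print(interaction_value(["view", "view"]))  # 1
print(interaction_value(["view", "like"]))  # 1
```

Note that once the value reaches 1 it stays at 1, matching the rule that subsequent activities do not change it.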
The recommendations model utilises one of three machine learning algorithms (or modes):
- Matrix factorisation
- Partial hybrid
- Full hybrid (default mode in Totara 15 and above)
The features of the three modes are described in more detail below. The default mode for Totara 15 and above is full hybrid, but the mode can be changed at any time. The new mode will not be operational until training to build the new model is complete.
The fundamental logic used for all modes involves identifying content that the current user has not interacted with, but is likely to be interested in. This content is identified from similar users' interactions with content that the current user has not yet viewed.
The method used to identify similar users is different for each mode. Similarly, the data that is utilised by the engine to provide content recommendations is dependent on the mode. Each mode is described in greater detail below.
Matrix factorisation
In this mode, the recommendations engine does not use content or user metadata; it relies solely on the interaction data of users with content. The logic of user recommendations can be summarised in two steps:
- Identify similar users who, in general, share the same content interaction patterns as the current user.
- Assign a recommendation score to each available content item for the current user, based on how much those similar users interacted with it.
After sorting content items by their recommendation scores, the current user is then recommended the top N content items (N is configurable by the Totara Site Administrator). Items that the user has already interacted with are not recommended.
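As a sketch, the top-N selection with already-interacted items excluded might look like this (the scores and the value of N here are hypothetical; in Totara, N is the administrator-configured value):

```python
import numpy as np

# Hypothetical recommendation scores for five content items (IDs 0-4)
scores = np.array([0.9, 0.2, 0.75, 0.4, 0.6])
already_interacted = {0, 3}   # items the current user has already seen
N = 2

# Mask out items the user has interacted with, then take the top N.
masked = scores.copy()
masked[list(already_interacted)] = -np.inf
top_n = np.argsort(masked)[::-1][:N]
print(top_n.tolist())   # [2, 4] — the highest-scoring unseen items
```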
Related content is provided using similarity scores. The similarity scores are calculated for each content item with every other content item based on the pattern of user interaction. An example will help illustrate this. In this example, content items x and y are interacted with by the same set of users. Content item z is interacted with by a different set of users. Therefore, x will have a high similarity score with y, but x will have a low similarity score with z.
All content items are then sorted by their similarity score with the current content item. The top N items are recommended to the user as similar items.
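The x/y/z example can be reproduced numerically: each content item is represented by a vector of user interactions, and cosine similarity compares those vectors. A minimal NumPy sketch (real interaction data would be far larger and stored sparsely):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two interaction vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Entries: users 0-3. A 1 means that user interacted with the item.
x = np.array([1, 1, 0, 0])   # interacted with by users 0 and 1
y = np.array([1, 1, 0, 0])   # the same set of users as x
z = np.array([0, 0, 1, 1])   # a different set of users

print(cosine(x, y))   # ≈ 1.0 — identical interaction pattern
print(cosine(x, z))   # 0.0  — no users in common
```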
One of the disadvantages of this mode is that it is unable to recommend any content to unknown users, i.e. users who have not interacted with any content yet.
Partial hybrid
In partial hybrid mode, the machine learning engine utilises the metadata of both users and content. The table below shows the metadata fields utilised for each (if they are populated).
|User metadata fields|Content metadata fields|
|---|---|
|Language|Type (i.e. course, workspace, resource, playlist)|
|Country|Topic (for playlists, resources)|
|Current competencies scale| |
Partial hybrid mode provides recommendations to known users (i.e. users who have interacted with content) in the same way as matrix factorisation mode, but it overcomes that mode's inability to deal with unknown users (i.e. users who have not interacted with any content). Recommendations for unknown users are based on metadata similarity with known users: content interacted with by known users whose metadata is similar to the unknown user's is used to surface recommendations.
Related content is identified using a similarity score. This is a score indicating how similar two pieces of content are based on:
- The interactions from users
- The similarity of the content metadata
Full hybrid
This is the default mode for Totara 15 and above. Full hybrid works best for a fresh install of the machine learning service.
Full hybrid mode is similar to partial hybrid mode. However, in full hybrid mode free text fields from the user profile (city and profile description) and content profile (course description, workspace description, playlist description, and resource content) are utilised. To accomplish this, the free text is converted to a numerical matrix. The additional data utilised in full hybrid mode provides a better method to determine the similarity between content and users.
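The conversion of free text to a numerical matrix can be illustrated with scikit-learn's TfidfVectorizer (an assumption for illustration; the service's actual NLP pipeline is not shown here, and the description strings are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text description fields from two content items
descriptions = [
    "Introductory course on workplace health and safety",
    "Playlist of resources about leadership and management",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(descriptions)  # sparse matrix of TF-IDF features

print(tfidf.shape)   # (2, 13): one row per item, one column per distinct term
```

Each row of the resulting matrix is a numerical fingerprint of one item's free text, which the engine can then compare between items and users.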
Pros and cons of each mode
The matrix factorisation mode requires the least amount of computational overhead, and hence is the fastest to train and provide recommendations. It is, however, unable to provide meaningful recommendations to unknown users (users who have not yet interacted with any content).
The full hybrid mode requires increased computational resources and, as a result, is relatively slower to train and provide recommendations. Since the metadata of users and content (including free text) is utilised in full hybrid mode, it provides good recommendations to unknown users. These recommendations are based solely on their metadata, which is then used to find similar known users.
The partial hybrid mode lies in the middle of the two other modes in terms of utilisation of computational resources. It uses all the metadata of users and content except free text metadata. As a result this mode is able to provide recommendations to unknown users while still being computationally efficient.
A new or small Totara site should be configured to run in full hybrid mode. Only if the full hybrid algorithm takes too long to train should partial hybrid mode be enabled instead. Matrix factorisation mode should only be used on very large Totara sites, where a high volume of users, content and activity results in an extended training time (over one hour) for partial hybrid mode.
Recommendations engine process
The process used by the recommendations engine is outlined in the flowchart below.
Each block of this flowchart is described in detail below.
Data fetcher
The data fetcher block fetches data from Totara. The data fetched includes user-to-content interactions, user metadata, and content metadata for each tenant.
The user and content data files include metadata of the users and the content respectively, while the interactions data is a record of whether a user has interacted positively with the content or interacted leisurely with the content, and the time when the interaction happened.
The user metadata consists of:
- ID in the database
- City/town (free text)
- Aspiring position
- Current competencies scale
- Profile description (free text)
The content metadata consists of:
- Content type (one of course, workspace, article, microlearning article or playlist)
- Text description (free text)
The interactions data consists of:
- User ID
- Content ID
- Interaction value (0 or 1)
- Time of interaction
This block reads the data files fetched from Totara into a Python data structure, one tenant at a time, and pipes the data on for further processing.
This is a decision block that is set when the service is initially started. You can read more about it in the README.md file of the Machine Learning Service. The user can choose one mode from the following:
- Matrix factorisation
- Partial hybrid
- Full hybrid
Depending on the recommendation mode selected, one of the data processors transforms the data into a form that can be consumed by the subsequent process. The output of each data processor is in sparse matrix form, so that memory is used efficiently.
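As an illustration of the sparse representation, interaction records of the form (user ID, content ID, interaction value) can be assembled into a user-by-content matrix with SciPy (a sketch; the record values are hypothetical):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical interaction records: (user ID, content ID, interaction value)
records = [(0, 1, 1), (0, 3, 0), (1, 1, 1), (2, 0, 1)]
users, items, values = zip(*records)

# Users as rows, content items as columns; pairs with no recorded
# interaction stay as implicit zeros and consume no memory.
interactions = coo_matrix((values, (users, items)), shape=(3, 4), dtype=np.int8)

print(interactions.toarray())
```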
Collaborative data processor
This block ignores the user and content data and transforms the interactions data into a format that the subsequent modules can consume.
Partial data processor
This block uses the user and content metadata, as well as the interactions data, and transforms it for consumption in the subsequent process. This block ignores only the free text fields of user (city/town and profile description) and content (text description) data.
Full data processor
This utilises all of the data fields in the user, content, and interactions data, including the free text fields of the user and content data. The free text fields are passed through the Natural Language Processing pipeline where the text is cleaned and then converted to a matrix of TF-IDF features. The data sets are then transformed into a compatible form so that these can be consumed for subsequent processing.
Depending on the recommendation mode, hyper-parameters are optimised for best performance on a random unseen data set held aside from training. The hyper-parameters that are tuned are the latent dimension and the number of epochs.
Collaborative hyper-parameters optimiser
These are tuned using the past interactions data of the users with the content if the selected mode is Matrix factorisation.
Content based hyper-parameters optimiser
If the Machine Learning Service is started in Partial hybrid or Full hybrid, the hyper-parameters are tuned using the past interactions of the users with the content and the provided metadata of the users and the content.
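The tuning described above can be sketched as a simple grid search over the two hyper-parameters. Here `evaluate` is a stand-in with a toy scoring surface, not the service's actual implementation, which would train a model and measure ranking quality on the held-out data:

```python
import itertools

def evaluate(latent_dim, epochs):
    """Placeholder for training a model with these hyper-parameters and
    scoring it on the held-out data set; higher is better."""
    return -abs(latent_dim - 32) - abs(epochs - 20) * 0.1  # toy surface

latent_dims = [16, 32, 64]
epoch_counts = [10, 20, 40]

# Try every (latent dimension, epochs) combination; keep the best scorer.
best = max(
    itertools.product(latent_dims, epoch_counts),
    key=lambda params: evaluate(*params),
)
print(best)   # (32, 20) for this toy scoring function
```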
Train final model
Depending on the recommendation mode selected, either the Matrix factorisation (which is a subclass of the collaborative filtering approach) or the content-based filtering approach is used for building the machine learning model for recommendations. The class of the modelling algorithm used is implemented via the LightFM library, and is described in Maciej Kula, 2015.
Collaborative filtering model
If the service is running in the Matrix factorisation mode, the final model trained will be a collaborative filtering model using the tuned hyper-parameters and forwarded to the next stage.
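As a schematic of what matrix factorisation does (the real model is trained via LightFM; this toy NumPy sketch only illustrates the idea): learn low-dimensional user and item vectors whose dot products approximate the observed interaction matrix, then use those dot products as prediction scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed binary interaction matrix: 4 users x 5 content items.
R = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 1, 1]], dtype=float)

k = 2                                      # latent dimension
U = rng.normal(scale=0.1, size=(4, k))     # user factors
V = rng.normal(scale=0.1, size=(5, k))     # item factors

lr = 0.02
for _ in range(2000):                      # gradient descent on squared error
    err = R - U @ V.T
    dU = err @ V
    dV = err.T @ U
    U += lr * dU
    V += lr * dV

predictions = U @ V.T                      # scores used to rank items per user
```

Note how user 1, who shares items 0 and 1 with user 0, ends up with a higher prediction score for item 4 (which user 0 interacted with) than for items 2 and 3 (which only dissimilar users interacted with).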
Content-based filtering model
If the service is running in the Partial hybrid or the Full hybrid mode, the content-based filtering algorithm is used to build the model. The data input for this algorithm includes user and item metadata. The final model is built using the tuned hyper-parameters from the previous stage. Note that this algorithm accepts data from either the Partial data processor or the Full data processor block, which means it can also consume the processed free text (Natural Language Processing) features.
Cache final model in memory
The final trained model is passed to this block to be cached in memory, so that it is readily available for queries from Totara. Apart from caching the model in memory, it is also saved in the file system to be quickly loaded in case the service is restarted for any reason.
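Persisting the trained model to the file system so it can be reloaded after a restart can be sketched with Python's pickle module (a generic illustration; the service's actual persistence format is not documented here):

```python
import os
import pickle
import tempfile

# Stand-in for a trained model object; any picklable object
# is saved and restored the same way.
model = {"mode": "full_hybrid", "latent_dim": 32}

path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)          # persist alongside the in-memory cache

with open(path, "rb") as f:        # e.g. after a service restart
    restored = pickle.load(f)

print(restored == model)   # True
```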
This block forwards the requests from Totara to the model to recommend content to a user or find similar content to a given content item.
The returned similar content is sorted in descending order by the cosine similarity score of each content item with the given content.
The recommended content for each user is sorted in descending order by the prediction score. The prediction scores (or rankings) themselves are not interpretable; they are simply a means of ranking the items. The order of the prediction scores is important though - content with higher prediction scores is more likely to be of interest to the user than content with lower prediction scores.
© Copyright 2022 Totara Learning Solutions. All rights reserved.