Evaluate trained models
This article provides an overview of how to evaluate a trained model version before deployment. Use the following sections as a guide to view model status and the potential strength of a model's predictions before you deploy it.
The quality of any machine learning model created in Tealium Predict or any other machine learning tool varies depending on several factors, including (but not limited to) the following:
- The attributes that comprise the dataset and how complete a picture the combined attributes provide.
- The daily volume of visitors and visits in the dataset. A larger volume means more data is available for the model to learn from.
- The training date range selected for the model. A longer training period generally means more data is available for the model to learn from.
To judge whether a given model is of high enough quality for real-world use, machine learning experts typically use a combination of interrelated metrics. The specific combination varies with the type of model being evaluated.
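As a concrete illustration of why metrics are combined rather than read in isolation, the sketch below computes two interrelated metrics, precision and recall, for a hypothetical binary model (for example, one that predicts whether a visitor will convert). This is not Tealium Predict's implementation; the labels, scores, and 0.5 threshold are all illustrative assumptions.

```python
# A minimal sketch (not Tealium's implementation) of combining two
# interrelated metrics -- precision and recall -- to judge a binary
# classifier. All labels and scores below are purely illustrative.

def precision(y_true, y_pred):
    """Of the visitors the model flagged, how many actually converted?"""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(y_true, y_pred):
    """Of the visitors who actually converted, how many did the model flag?"""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical ground truth (1 = visitor converted) and model scores.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.6, 0.7, 0.4, 0.4, 0.1, 0.8, 0.3]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # threshold at 0.5

print(f"precision: {precision(y_true, y_pred):.2f}")  # prints 0.75
print(f"recall:    {recall(y_true, y_pred):.2f}")     # prints 0.75
```

Neither number alone tells the whole story: a high-precision model can still miss many converters, and a high-recall model can flag many non-converters, which is why practitioners read such metrics together.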
The quality of any model is a relative judgment, not an absolute fact. Teams have different needs and goals for their models, different levels of sophistication in their modeling and testing abilities, and input datasets of varying quality. For these reasons, treat model strength ratings not as absolute measures but as a general guideline for quality.
This page was last updated: January 7, 2023