The best Predictive Intelligence features in the ServiceNow Paris Release


The Predictive Intelligence features in ServiceNow are constantly improved and expanded. In the Paris release, ServiceNow introduced several valuable new features that advance the platform's Machine Learning capabilities, which it has been developing since the Kingston release. In this blog, we'd like to tell you more about these new and exciting features!

Predictive Intelligence in ServiceNow

ServiceNow’s Predictive Intelligence features are based on various Machine Learning frameworks that enable you to get more out of your ServiceNow data, complete everyday tasks more quickly and efficiently, and decrease resolution times. These frameworks are often relatively easy to implement and add value to your ServiceNow instance.

Predict a numerical variable – Regression

A new framework is available from the Paris release: Regression. This framework allows you to predict a continuous quantity, where the outcome variable is numerical. Examples are predicting the resolution time of an incident or predicting the cost of solving a problem. Predicting the resolution time of an incident enables you to optimize your resource planning. Which incidents can a fulfiller solve that day? How many people are needed to resolve all open incidents? If you want to plan your resources, or predict any other numerical variable, the new Predictive Intelligence regression framework gives you the opportunity to do so.
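Outside ServiceNow, the underlying idea looks roughly like the sketch below: a regression model trained on exported incident data to predict a numerical outcome such as resolution time. The file name, the column names (short_description, resolution_time_hours) and the choice of algorithm are hypothetical illustrations, not the ServiceNow implementation.

```python
# Conceptual sketch (not the ServiceNow API): predicting incident resolution
# time from an exported incident data set with scikit-learn.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

incidents = pd.read_csv("incident_export.csv")        # hypothetical export
X = incidents["short_description"]                    # input feature (text)
y = incidents["resolution_time_hours"]                # numerical outcome to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A simple text-based regressor: TF-IDF features followed by ridge regression.
model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(X_train, y_train)

print("R^2 on held-out incidents:", model.score(X_test, y_test))
```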

 

Choose from multiple algorithms to train your model – Classification/Regression/Clustering

Several new algorithms have been added in the Paris release to train Machine Learning models. For example, multiple algorithms can be used for the clustering framework to determine which incidents belong to which cluster. Multiple algorithms can be suitable for a specific Machine Learning problem, but how do you know which one will give the best results? You can’t. It is not always possible to know which algorithm will perform best until the model is trained and the results are evaluated. The most common method is to train your model with a specific algorithm and determine whether you are satisfied with the results; if not, try another algorithm. In the new release, you can choose from multiple algorithms for the classification, regression, and clustering frameworks in order to get the best results from a Predictive Intelligence project.
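Conceptually, choosing between algorithms comes down to training each candidate on the same data and comparing how well it generalizes. The sketch below illustrates this with cross-validation outside ServiceNow; the file, the column names and the candidate algorithms are hypothetical.

```python
# Conceptual sketch (not the ServiceNow API): training several candidate
# algorithms on the same data and keeping the one that scores best.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

incidents = pd.read_csv("incident_export.csv")         # hypothetical export
X, y = incidents["short_description"], incidents["assignment_group"]

candidates = {
    "logistic_regression": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    "random_forest": make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=200)),
}

# Cross-validate each candidate and pick whichever generalizes best.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> best algorithm:", best)
```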

Visualize your data clusters with the new Tree Map format – Clustering

A new visualization has been introduced to represent the clusters in a data set. The scatter plot visualization is replaced by the new Tree Map chart, which better indicates the size of each cluster. As we saw in previous releases, the clustering framework is very useful for grouping similar records in your data and identifying automation candidates. Get more out of the clustering framework with the new visualization and identify the next steps to improve your ServiceNow instance and reduce costs.
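To get a feel for what the Tree Map represents, the sketch below clusters exported incident descriptions and draws a tree map in which each tile's area corresponds to the cluster size. It assumes the scikit-learn, matplotlib and squarify packages and hypothetical column names; it is an illustration of the idea, not the ServiceNow implementation.

```python
# Conceptual sketch: cluster incident descriptions, then draw a tree map
# whose tile areas reflect cluster sizes (squarify + matplotlib assumed installed).
import matplotlib.pyplot as plt
import pandas as pd
import squarify
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

incidents = pd.read_csv("incident_export.csv")          # hypothetical export
features = TfidfVectorizer(stop_words="english").fit_transform(incidents["short_description"])

incidents["cluster"] = KMeans(n_clusters=8, n_init=10, random_state=42).fit_predict(features)

# Tile area = number of records in the cluster, mirroring the Tree Map idea.
sizes = incidents["cluster"].value_counts().sort_index()
squarify.plot(sizes=sizes.values,
              label=[f"Cluster {i}\n{n} records" for i, n in sizes.items()])
plt.axis("off")
plt.show()
```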

 

New Clustering feature: Tree Map

Submit multiple solutions for training based on a Group By field – Classification

From Paris onwards, it is no longer necessary to define multiple solution models based on the location of the ‘Requested for’ or another ‘Choice’ field. In earlier releases, you had to define a solution model for each value found in the selected Choice field. It is now possible to define one solution model and use the ‘Group By’ field to train parallel solutions. In the backend, multiple solutions are created to separate the results. This new feature simplifies solution management, saves time, and increases overall solution quality.
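Under the hood, ‘Group By’ training boils down to fitting one parallel model per value of the chosen field while you manage a single solution definition. The sketch below illustrates the idea on exported data with a hypothetical location field; ServiceNow handles the equivalent bookkeeping in the backend.

```python
# Conceptual sketch: one parallel model per 'Group By' value, kept under a
# single solution definition. Column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

incidents = pd.read_csv("incident_export.csv")        # hypothetical export

solutions = {}
for location, subset in incidents.groupby("location"):   # the 'Group By' field
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(subset["short_description"], subset["assignment_group"])
    solutions[location] = model                           # one trained solution per location

# At prediction time, each new record is routed to the model for its location.
```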

General prerequisites for Predictive Intelligence projects

Machine learning can be applied to any data set, but in order to predict the outcome fields with high coverage, there are some prerequisites for using your instance data for Machine Learning (a small sketch after this list shows how you might check them on exported data):

  • High data volume
    You need a good pool of data to train the model. But how much is enough? The amount of data needed for a machine learning model depends on the complexity of the problem and the complexity of the learning algorithm. ServiceNow has set a minimum number of records for each framework:

    • Regression: 10,000 records
    • Classification: 10,000 records
    • Clustering: 100 records
    • Similarity: 100 records

 

  • High-quality data
    Model performance is directly linked to data quality. The saying “garbage in, garbage out” really applies here. This sounds obvious, but it is a common problem. When the training set contains records with incorrect or missing output fields, the model can predict the output field incorrectly. Again, the impact depends on the number of records with incorrect or missing output fields in the training set.

 

  • Wide variety
    Model coverage of the categories depends on data variety and is negatively affected by skewed distributions between categories. For example, if Assignment Group X is a possible output value but is not part of the training data, the algorithm will not be able to predict Assignment Group X. Data variety is therefore an important prerequisite for using Machine Learning.
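As a rough illustration, the sketch below runs the three checks on an exported training set: record volume against the minimums listed above, the share of missing output fields, and the distribution across categories. The file and column names are hypothetical.

```python
# Conceptual sketch: quick prerequisite checks on an exported training set
# before building a Predictive Intelligence solution.
import pandas as pd

MIN_RECORDS = {"regression": 10_000, "classification": 10_000,
               "clustering": 100, "similarity": 100}

incidents = pd.read_csv("incident_export.csv")     # hypothetical export
output_field = "assignment_group"                  # hypothetical output field

# 1. Volume: enough records for the chosen framework?
print("records:", len(incidents),
      "- classification minimum met:", len(incidents) >= MIN_RECORDS["classification"])

# 2. Quality: how many records are missing the output field ("garbage in, garbage out")?
missing = incidents[output_field].isna().mean()
print(f"missing {output_field}: {missing:.1%}")

# 3. Variety: is the distribution across categories heavily skewed?
print(incidents[output_field].value_counts(normalize=True).head(10))
```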

 
