
Introduction

Intermittent demand occurs when products or services have irregular purchase patterns with frequent zero-value periods. This is common in retail, spare parts inventory, and specialty products where demand is irregular rather than continuous. Forecasting these patterns accurately is essential for optimizing stock levels, reducing costs, and preventing stockouts. TimeGPT excels at intermittent demand forecasting by capturing complex patterns that traditional statistical methods miss. This tutorial demonstrates TimeGPT’s capabilities using the M5 dataset of food sales, including exogenous variables like pricing and promotional events that influence purchasing behavior.

What You’ll Learn

  • How to prepare and analyze intermittent demand data
  • How to leverage exogenous variables for better predictions
  • How to use log transforms to ensure realistic forecasts
  • How TimeGPT compares to specialized intermittent demand models
The methods shown here apply broadly to inventory management and retail forecasting challenges. For getting started with TimeGPT, see our quickstart guide.

How to Use TimeGPT to Forecast Intermittent Demand


Step 1: Environment Setup

Start by importing the required packages for this tutorial and creating an instance of NixtlaClient.
import pandas as pd
import numpy as np

from nixtla import NixtlaClient
from utilsforecast.losses import mae
from utilsforecast.evaluation import evaluate

nixtla_client = NixtlaClient(api_key='my_api_key_provided_by_nixtla')
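If you prefer not to hard-code the key, the client can also read it from the NIXTLA_API_KEY environment variable; the sketch below is an alternative to the line above:
import os

# Typically set in your shell rather than in code; shown inline here for illustration.
os.environ['NIXTLA_API_KEY'] = 'my_api_key_provided_by_nixtla'
nixtla_client = NixtlaClient()  # picks up the key from the environment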

Step 2: Load and Visualize the Dataset

Load the M5 sales dataset and convert the ds column to a datetime object:
df = pd.read_csv("https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/m5_sales_exog_small.csv")
df['ds'] = pd.to_datetime(df['ds'])
df.head()
| | unique_id | ds | y | sell_price | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting |
|---|---|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2011-01-29 | 3 | 2.0 | 0 | 0 | 0 | 0 |
| 1 | FOODS_1_001 | 2011-01-30 | 0 | 2.0 | 0 | 0 | 0 | 0 |
| 2 | FOODS_1_001 | 2011-01-31 | 0 | 2.0 | 0 | 0 | 0 | 0 |
| 3 | FOODS_1_001 | 2011-02-01 | 1 | 2.0 | 0 | 0 | 0 | 0 |
| 4 | FOODS_1_001 | 2011-02-02 | 4 | 2.0 | 0 | 0 | 0 | 0 |
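Before plotting, it can help to quantify how intermittent the series actually are. This quick check is not part of the original tutorial, but it uses only pandas:
# Share of zero-demand observations per series: values near 1 indicate highly intermittent demand.
zero_share = df.groupby('unique_id')['y'].apply(lambda y: y.eq(0).mean())
print(zero_share.describe())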
Visualize the dataset using the plot method:
nixtla_client.plot(
    df,
    max_insample_length=365,
)
Figure 1: Visualization of intermittent demand data

In the figure above, we can see the intermittent nature of this dataset, with many periods of zero demand. Now, let’s use TimeGPT to forecast the demand for each product.

Step 3: Transform the Data

To avoid negative predictions from the model, we apply a log transformation to the data; after the inverse transform, the forecasts stay nonnegative in practice. Note that because the dataset contains zeros, we add one to every point before taking the log.
df_transformed = df.copy()
df_transformed['y'] = np.log(df_transformed['y'] + 1)
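Equivalently, NumPy provides log1p, which computes log(1 + y) in a single, numerically stable call; it is a drop-in alternative to the line above:
# Same transform as np.log(y + 1), applied to the untransformed column.
df_transformed['y'] = np.log1p(df['y'])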
Now, let’s keep the last 28 time steps for the test set and use the rest as input to the model.
test_df = df_transformed.groupby('unique_id').tail(28)
input_df = df_transformed.drop(test_df.index).reset_index(drop=True)
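A quick sanity check, not in the original tutorial, confirms that the split leaves exactly 28 test observations per series:
# Every series should contribute exactly 28 rows to the test set.
assert test_df.groupby('unique_id').size().eq(28).all()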

Step 4: Forecast with TimeGPT

Forecast with TimeGPT using the forecast method:
fcst_df = nixtla_client.forecast(
    df=input_df,
    h=28,
    level=[80],
    finetune_steps=10,               # Learn more about fine-tuning: /forecasting/fine-tuning/steps
    finetune_loss='mae',
    model='timegpt-1-long-horizon',  # For long-horizon forecasting: /forecasting/model-version/longhorizon_model
    time_col='ds',
    target_col='y',
    id_col='unique_id'
)
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: D
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
Great! We now have predictions. However, they are on the log scale, so we need to invert the transformation to return to the original scale: take the exponential of each data point and subtract one.
cols = [col for col in fcst_df.columns if col not in ['ds', 'unique_id']]
fcst_df[cols] = np.exp(fcst_df[cols]) - 1
fcst_df.head()
| | unique_id | ds | TimeGPT | TimeGPT-lo-80 | TimeGPT-hi-80 |
|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 0.286841 | -0.267101 | 1.259465 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.320482 | -0.241236 | 1.298046 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.287392 | -0.362250 | 1.598791 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.295326 | -0.145489 | 0.963542 |
| 4 | FOODS_1_001 | 2016-05-27 | 0.315868 | -0.166516 | 1.077437 |
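Notice that the lower bound of the 80% interval can still be slightly negative after the inverse transform. Since demand is nonnegative, an optional post-processing step, not part of the original tutorial, is to clip it at zero:
# Optional: clip the lower prediction bound, since demand cannot be negative.
fcst_df['TimeGPT-lo-80'] = fcst_df['TimeGPT-lo-80'].clip(lower=0)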

Step 5: Evaluate the Forecasts

Before computing the performance metric, let’s plot the predictions against the actual values.
nixtla_client.plot(
    test_df,
    fcst_df,
    models=['TimeGPT'],
    level=[80],
    time_col='ds',
    target_col='y'
)
Figure 2: Visualization of the predictions against the actual values

Finally, we can measure the mean absolute error (MAE) of the model. Learn more about evaluation metrics in our documentation.
# Compute MAE
test_df = pd.merge(test_df, fcst_df, how='left', on=['unique_id', 'ds'])
evaluation = evaluate(
    test_df,
    metrics=[mae],
    models=['TimeGPT'],
    target_col='y',
    id_col='unique_id'
)
average_metrics = evaluation.groupby('metric')['TimeGPT'].mean()
average_metrics
metric
mae    0.492559
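The average hides per-series variation. To see which products are hardest to forecast, a supplementary step not in the original tutorial, sort the per-series results:
# Per-series MAE, worst first.
evaluation.sort_values('TimeGPT', ascending=False).head()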

Step 6: Compare with Statistical Models

The statsforecast library by Nixtla provides a suite of statistical models built specifically for intermittent demand forecasting, such as Croston, IMAPA, and TSB. Let’s fit these models and see how they perform against TimeGPT.
from statsforecast import StatsForecast
from statsforecast.models import CrostonClassic, CrostonOptimized, IMAPA, TSB

sf = StatsForecast(
    models=[CrostonClassic(), CrostonOptimized(), IMAPA(), TSB(0.1, 0.1)],
    freq='D',
    n_jobs=-1
)
Then, we can fit the models on our data.
sf.fit(df=input_df)
sf_preds = sf.predict(h=28)
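As a side note, StatsForecast also offers a single forecast call that combines fitting and predicting; the sketch below should be equivalent to the two lines above:
# One-call alternative to fit() + predict().
sf_preds = sf.forecast(df=input_df, h=28)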
Again, we need to invert the transformation, since the training data was log-transformed.
cols = [col for col in sf_preds.columns if col not in ['ds', 'unique_id']]
sf_preds[cols] = np.exp(sf_preds[cols]) - 1
sf_preds.head()
| | unique_id | ds | CrostonClassic | CrostonOptimized | IMAPA | TSB |
|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 4 | FOODS_1_001 | 2016-05-27 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
Note that these Croston-type methods output a constant forecast across the entire horizon, which is why each model’s column repeats the same value in the table above. Now, let’s combine the predictions from all methods and see which performs best.
test_df = pd.merge(test_df, sf_preds, how='left', on=['unique_id', 'ds'])
test_df.head()
| | unique_id | ds | y | sell_price | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting | TimeGPT | TimeGPT-lo-80 | TimeGPT-hi-80 | CrostonClassic | CrostonOptimized | IMAPA | TSB |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 1.386294 | 2.240000 | 0 | 0 | 0 | 0 | 0.286841 | -0.267101 | 1.259465 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.000000 | 2.240000 | 0 | 0 | 0 | 0 | 0.320482 | -0.241236 | 1.298046 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.000000 | 2.240000 | 0 | 0 | 0 | 0 | 0.287392 | -0.362250 | 1.598791 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.000000 | 2.240000 | 0 | 0 | 0 | 0 | 0.295326 | -0.145489 | 0.963542 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
| 4 | FOODS_1_001 | 2016-05-27 | 1.945910 | 2.240000 | 0 | 0 | 0 | 0 | 0.315868 | -0.166516 | 1.077437 | 0.599093 | 0.599093 | 0.445779 | 0.396258 |
statistical_models = ["CrostonClassic", "CrostonOptimized", "IMAPA", "TSB"]
evaluation = evaluate(
    test_df,
    metrics=[mae],
    models=["TimeGPT"] + statistical_models,
    target_col="y",
    id_col='unique_id'
)

average_metrics = evaluation.groupby('metric')[["TimeGPT"] + statistical_models].mean()
average_metrics
| metric | TimeGPT | CrostonClassic | CrostonOptimized | IMAPA | TSB |
|---|---|---|---|---|---|
| mae | 0.492559 | 0.564563 | 0.580922 | 0.571943 | 0.567178 |
In the table above, we can see that TimeGPT achieves the lowest MAE, a 12.8% improvement over the best-performing statistical model. These results demonstrate TimeGPT’s strong performance without additional features. We can further improve accuracy by incorporating exogenous variables, a capability TimeGPT supports but these statistical models do not.
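As a quick check, the 12.8% figure follows directly from the table:
# (0.564563 - 0.492559) / 0.564563 ≈ 0.128, i.e. a ~12.8% lower MAE.
best_stat = average_metrics[statistical_models].min(axis=1)
improvement = 1 - average_metrics['TimeGPT'] / best_stat
print(improvement)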

Step 7: Use Exogenous Variables

To forecast with exogenous variables, we need to supply their future values over the forecast horizon. Here we use the event-type columns, since those dates are known in advance. You can also explore using date features and holidays as exogenous variables.
# Include holiday/event data as exogenous features
exog_cols = ['event_type_Cultural', 'event_type_National', 'event_type_Religious', 'event_type_Sporting']
futr_exog_df = test_df[['unique_id', 'ds'] + exog_cols]
futr_exog_df.head()
| | unique_id | ds | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting |
|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 0 | 0 | 0 | 0 |
| 1 | FOODS_1_001 | 2016-05-24 | 0 | 0 | 0 | 0 |
| 2 | FOODS_1_001 | 2016-05-25 | 0 | 0 | 0 | 0 |
| 3 | FOODS_1_001 | 2016-05-26 | 0 | 0 | 0 | 0 |
| 4 | FOODS_1_001 | 2016-05-27 | 0 | 0 | 0 | 0 |
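X_df must contain one row per series for every future time step, with the same id and time columns as the training data. A quick shape check, not in the original tutorial, can catch mismatches early:
# Each series needs exogenous values for all 28 forecast steps.
assert futr_exog_df.groupby('unique_id')['ds'].count().eq(28).all()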
Then, we call the forecast method again, passing futr_exog_df through the X_df parameter.
fcst_df = nixtla_client.forecast(
    df=input_df,
    X_df=futr_exog_df,
    h=28,
    level=[80],                        # Generate an 80% prediction interval
    finetune_steps=10,                 # Specify the number of steps for fine-tuning
    finetune_loss='mae',               # Use the MAE as the loss function for fine-tuning
    model='timegpt-1-long-horizon',    # Use the model for long-horizon forecasting
    time_col='ds',
    target_col='y',
    id_col='unique_id'
)
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: D
INFO:nixtla.nixtla_client:Using the following exogenous variables: event_type_Cultural, event_type_National, event_type_Religious, event_type_Sporting
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
Great! Remember that these predictions are on the log scale, so we have to invert the transformation again.
fcst_df.rename(columns={'TimeGPT': 'TimeGPT_ex'}, inplace=True)

cols = [col for col in fcst_df.columns if col not in ['ds', 'unique_id']]
fcst_df[cols] = np.exp(fcst_df[cols]) - 1

fcst_df.head()
| | unique_id | ds | TimeGPT_ex | TimeGPT-lo-80 | TimeGPT-hi-80 |
|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 0.281922 | -0.269902 | 1.250828 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.313774 | -0.245091 | 1.286372 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.285639 | -0.363119 | 1.595252 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.295037 | -0.145679 | 0.963104 |
| 4 | FOODS_1_001 | 2016-05-27 | 0.315484 | -0.166760 | 1.076830 |
Finally, let’s evaluate the performance of TimeGPT with exogenous features.
test_df['TimeGPT_ex'] = fcst_df['TimeGPT_ex'].values
test_df.head()
| | unique_id | ds | y | sell_price | event_type_Cultural | event_type_National | event_type_Religious | event_type_Sporting | TimeGPT | TimeGPT-lo-80 | TimeGPT-hi-80 | CrostonClassic | CrostonOptimized | IMAPA | TSB | TimeGPT_ex |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | FOODS_1_001 | 2016-05-23 | 1.386294 | 2.240000 | 0 | 0 | 0 | 0 | 0.286841 | -0.267101 | 1.259465 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.281922 |
| 1 | FOODS_1_001 | 2016-05-24 | 0.000000 | 2.240000 | 0 | 0 | 0 | 0 | 0.320482 | -0.241236 | 1.298046 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.313774 |
| 2 | FOODS_1_001 | 2016-05-25 | 0.000000 | 2.240000 | 0 | 0 | 0 | 0 | 0.287392 | -0.362250 | 1.598791 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.285639 |
| 3 | FOODS_1_001 | 2016-05-26 | 0.000000 | 2.240000 | 0 | 0 | 0 | 0 | 0.295326 | -0.145489 | 0.963542 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.295037 |
| 4 | FOODS_1_001 | 2016-05-27 | 1.945910 | 2.240000 | 0 | 0 | 0 | 0 | 0.315868 | -0.166516 | 1.077437 | 0.599093 | 0.599093 | 0.445779 | 0.396258 | 0.315484 |
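Note that the assignment above relies on test_df and fcst_df sharing the same row order. A key-based merge, the same pattern used earlier for the statistical models, is a more robust alternative:
# Run instead of, not in addition to, the positional assignment above.
test_df = test_df.merge(fcst_df[['unique_id', 'ds', 'TimeGPT_ex']], how='left', on=['unique_id', 'ds'])
With TimeGPT_ex in place, we can evaluate all models together: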
evaluation = evaluate(
    test_df,
    metrics=[mae],
    models=["TimeGPT"] + statistical_models + ["TimeGPT_ex"],
    target_col="y",
    id_col='unique_id'
)

average_metrics = evaluation.groupby('metric')[["TimeGPT"] + statistical_models + ["TimeGPT_ex"]].mean()
average_metrics
| metric | TimeGPT | CrostonClassic | CrostonOptimized | IMAPA | TSB | TimeGPT_ex |
|---|---|---|---|---|---|---|
| mae | 0.492559 | 0.564563 | 0.580922 | 0.571943 | 0.567178 | 0.485352 |
From the table above, we can see that using exogenous features improved the performance of TimeGPT: it now represents a 14% improvement over the best statistical model.

Conclusion

TimeGPT provides a robust solution for forecasting intermittent demand:
  • ~14% MAE improvement over specialized models
  • Supports exogenous features for enhanced accuracy
By leveraging TimeGPT to combine internal series patterns with external factors, organizations can achieve more reliable forecasts, even for challenging intermittent demand patterns.

Next Steps