Before running this notebook, please visit our dashboard to obtain your TimeGPT api_key.
Why TimeGPT?
TimeGPT is a powerful, general-purpose time series forecasting solution. Throughout this notebook, we compare TimeGPT’s performance against three popular forecasting approaches:
- Classical model (ARIMA)
- Machine learning model (LightGBM)
- Deep learning model (N-HiTS)
Accuracy
TimeGPT consistently outperforms traditional models by accurately capturing complex patterns.
Speed
Quickly generates forecasts with minimal training and tuning requirements per series.
Ease of Use
Minimal setup and no complex preprocessing make TimeGPT immediately accessible for use.
TimeGPT Advantage
TimeGPT delivers superior results with minimal effort compared to traditional approaches. In head-to-head testing against ARIMA, LightGBM, and N-HiTS models on M5 competition data, TimeGPT consistently achieves better accuracy metrics (lowest RMSE at 592.6 and SMAPE at 4.94%). Unlike the other models, TimeGPT does not require:
- Extensive preprocessing
- Parameter tuning
- Significant computational resources
1. Data Introduction
This notebook uses an aggregated subset from the M5 Forecasting Accuracy competition. The dataset:
- Consists of 7 daily time series
- Has 1,941 observations per series
- Reserves the last 28 observations for evaluation on unseen data
Data Loading and Stats Preview
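A sketch of how the data could be loaded and summarized (the file name is a placeholder; the dataframe follows Nixtla's `unique_id`/`ds`/`y` layout):

```python
import pandas as pd

# Placeholder file name: substitute the aggregated M5 subset provided with the notebook.
df = pd.read_csv("m5_aggregated.csv", parse_dates=["ds"])

# Per-series summary statistics, matching the preview table below.
stats = df.groupby("unique_id").agg(
    min_date=("ds", "min"),
    max_date=("ds", "max"),
    count=("y", "count"),
    min_y=("y", "min"),
    mean_y=("y", "mean"),
    median_y=("y", "median"),
    max_y=("y", "max"),
)
print(stats)
```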
Below is a preview of the aggregated statistics for each of the 7 time series.
| unique_id | min date | max date | count | min y | mean y | median y | max y |
|---|---|---|---|---|---|---|---|
| FOODS_1 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 2674.086 | 2665.0 | 5493.0 |
| FOODS_2 | 2011-01-29 | 2016-05-22 | 1941 | 0.0 | 4015.984 | 3894.0 | 9069.0 |
| … | … | … | … | … | … | … | … |
Next, we split our dataset into training and test sets. Here, we use data up to “2016-04-24” for training and the remaining data for testing.
Train-Test Split Example
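A minimal sketch of the split, using the `df` loaded above:

```python
# Training data: everything up to and including 2016-04-24.
train = df[df["ds"] <= "2016-04-24"]

# Test data: the final 28 observations per series, held out for evaluation.
test = df[df["ds"] > "2016-04-24"]
```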
2. Model Fitting (TimeGPT, ARIMA, LightGBM, N-HiTS)
TimeGPT is compared against three other modeling approaches. Each approach forecasts the final 28 days of our dataset, and we compare results using Root Mean Squared Error (RMSE) and Symmetric Mean Absolute Percentage Error (SMAPE).
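For reference, over a forecast horizon of H = 28 days, these metrics can be written as follows (this is the common SMAPE variant with the mean of |y| and |ŷ| in the denominator; the exact scaling used by a given evaluation library may differ):

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{H}\sum_{t=1}^{H}\left(y_t - \hat{y}_t\right)^2}
\qquad
\mathrm{SMAPE} = \frac{100\%}{H}\sum_{t=1}^{H}\frac{\lvert y_t - \hat{y}_t\rvert}{\left(\lvert y_t\rvert + \lvert\hat{y}_t\rvert\right)/2}
```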
2.1 TimeGPT
TimeGPT offers a streamlined solution for time series forecasting with minimal setup.
TimeGPT Forecasting with NixtlaClient
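A minimal sketch of this step, assuming the `train` dataframe built in the split above and Nixtla's standard `unique_id`/`ds`/`y` column layout:

```python
from nixtla import NixtlaClient

# Authenticate with the api_key obtained from the Nixtla dashboard (placeholder value).
nixtla_client = NixtlaClient(api_key="YOUR_API_KEY")

# Forecast the 28-day test window for all 7 series at once.
fcst_timegpt = nixtla_client.forecast(
    df=train,
    h=28,          # forecast horizon (days)
    freq="D",      # daily frequency
    time_col="ds",
    target_col="y",
)
```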
2.2 Classical Models (ARIMA)
ARIMA is a common baseline for time series, though it often requires more data preprocessing and does not handle multiple series as efficiently.
ARIMA Forecasting Using StatsForecast
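A sketch of this baseline with StatsForecast, using AutoARIMA to select the ARIMA order per series (the weekly season_length is an assumption for daily retail data):

```python
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

# AutoARIMA searches for a suitable ARIMA configuration for each series.
sf = StatsForecast(
    models=[AutoARIMA(season_length=7)],  # weekly seasonality
    freq="D",
)

# Produce 28-day-ahead forecasts for every series in the training set.
fcst_arima = sf.forecast(df=train, h=28)
```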
2.3 Machine Learning Models (LightGBM)
LightGBM is a popular gradient-boosted tree approach. However, careful feature engineering is typically required for optimal results.
LightGBM Modeling with AutoMLForecast
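A sketch using mlforecast's AutoMLForecast, which tunes LightGBM hyperparameters and feature settings via cross-validation (the n_windows and num_samples values are illustrative, not the exact settings used here):

```python
from mlforecast.auto import AutoMLForecast, AutoLightGBM

auto_mlf = AutoMLForecast(
    models=[AutoLightGBM()],
    freq="D",
    season_length=7,  # weekly seasonality assumption for daily data
)

# Tune on 2 cross-validation windows, trying 10 hyperparameter configurations.
auto_mlf.fit(train, n_windows=2, h=28, num_samples=10)
fcst_lgbm = auto_mlf.predict(28)
```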
2.4 N-HiTS
N-HiTS is a deep learning architecture for time series. While powerful, it often requires GPU resources and more hyperparameter tuning.
N-HiTS Deep Learning Forecast
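A sketch with neuralforecast's NHITS model (input_size and max_steps are illustrative hyperparameters, not tuned values):

```python
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

nf = NeuralForecast(
    models=[NHITS(h=28, input_size=56, max_steps=500)],  # 56-day lookback, short training run
    freq="D",
)

nf.fit(df=train)
fcst_nhits = nf.predict()
```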
3. Performance Comparison and Results
Below is a summary of the performance metrics (RMSE and SMAPE) on the test dataset. TimeGPT consistently delivers superior forecasting accuracy:


| Model | RMSE | SMAPE |
|---|---|---|
| ARIMA | 724.9 | 5.50% |
| LightGBM | 687.8 | 5.14% |
| N-HiTS | 605.0 | 5.34% |
| TimeGPT | 592.6 | 4.94% |
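The metrics above can be reproduced with utilsforecast once all forecasts are merged with the test set; a sketch, assuming a `results` dataframe holding the 28-day test window with columns `unique_id`, `ds`, `y`, and one column per model (note that the library's SMAPE scaling may differ from the percentages shown in the table):

```python
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import rmse, smape

metrics = evaluate(
    results,
    metrics=[rmse, smape],
    models=["ARIMA", "LightGBM", "NHITS", "TimeGPT"],
)

# Average each metric across the 7 series to get one score per model.
print(metrics.groupby("metric").mean(numeric_only=True))
```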

Comparative Performance Visualization

Benchmark Results
4. Conclusion
TimeGPT stands out with its accuracy, speed, and ease of use. Get started today by visiting the
Nixtla dashboard to generate your
api_key
and access advanced forecasting with minimal overhead.