TimeGPT delivers highly accurate, fast forecasts. In this tutorial, you’ll learn how to reproduce a 4-day forecast for in-zone energy demand and see how TimeGPT compares to the deep-learning model N-HiTS.

This tutorial uses a subset of the PJM Hourly Energy Consumption dataset, focusing on in-zone consumption where electricity is generated and used within the same transmission zone. It contains hourly observations from October 1, 2023, to September 30, 2024, for five representative regions and is an excellent dataset to demonstrate TimeGPT’s capabilities.

With just a few lines of code, TimeGPT can achieve:
• 18.6% lower MAE compared to N-HiTS
• 31.1% lower sMAPE compared to N-HiTS
• 90% faster prediction times

Overview

Step 1: Initial Setup

Install and import required packages, then create a NixtlaClient instance to interact with TimeGPT.
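If the packages are not already installed, all of them are available from PyPI (neuralforecast is only needed for the N-HiTS comparison in Step 5; versions are intentionally left unpinned here):

```shell
pip install nixtla utilsforecast pandas requests neuralforecast
```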

Setup
import time
import requests
import pandas as pd
from nixtla import NixtlaClient
from utilsforecast.losses import mae, smape
from utilsforecast.evaluation import evaluate

nixtla_client = NixtlaClient(
    api_key='my_api_key_provided_by_nixtla'  # Defaults to os.environ.get("NIXTLA_API_KEY")
)

If you want to connect to Azure AI instead of Nixtla’s API, specify both base_url and api_key.

Azure AI Initialization
nixtla_client = NixtlaClient(
    base_url='your_azure_ai_endpoint',
    api_key='your_azure_api_key'
)

Step 2: Read the Data

Load the energy consumption dataset and convert datetime strings to timestamps.

Load and Preview Data
df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/refs/heads/main/datasets/pjm_in_zone.csv')
df['ds'] = pd.to_datetime(df['ds'])

# Examine the dataset
df.groupby('unique_id').head(2)

Sample Data

  unique_id                        ds         y
0     AP-AP 2023-10-01 04:00:00+00:00  4042.513
1     AP-AP 2023-10-01 05:00:00+00:00  3850.067

Plot the data series to visualize seasonal patterns.

Plot Seasonal Patterns
nixtla_client.plot(
    df, 
    max_insample_length=365
)

Seasonal patterns in energy consumption.


Step 3: Forecast with TimeGPT

We’ll split our dataset into:
• A training/input set for model calibration
• A testing set (last 4 days) to validate performance

TimeGPT Forecasting
# Hold out the last 4 days (96 hourly observations) of each series as the test set
test_df = df.groupby('unique_id').tail(96)

# Use the preceding 1,008 hours (42 days) of each series as the forecast input
input_df = df.groupby('unique_id').apply(lambda group: group.iloc[-1104:-96]).reset_index(drop=True)

# Make forecasts
start = time.time()

fcst_df = nixtla_client.forecast(
    df=input_df,
    h=96,
    level=[90],
    finetune_steps=10,
    finetune_loss='mae',
    model='timegpt-1-long-horizon',
    time_col='ds',
    target_col='y',
    id_col='unique_id'
)

end = time.time()
timegpt_duration = end - start

print(f"Time (TimeGPT): {timegpt_duration:.2f} seconds")

# Visualize forecasts against actual values
nixtla_client.plot(
    test_df,
    fcst_df,
    models=['TimeGPT'],
    level=[90],
    time_col='ds',
    target_col='y'
)

TimeGPT forecast compared to actual values.


Step 4: Evaluate TimeGPT

Compute accuracy metrics (MAE and sMAPE) for TimeGPT.

Evaluate TimeGPT Performance
fcst_df['ds'] = pd.to_datetime(fcst_df['ds'])
test_df = pd.merge(test_df, fcst_df, how='left', on=['unique_id', 'ds'])

evaluation = evaluate(
    test_df,
    metrics=[mae, smape],
    models=["TimeGPT"],
    target_col="y",
    id_col="unique_id"
)
average_metrics = evaluation.groupby('metric')['TimeGPT'].mean()
average_metrics
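As a quick sanity check on what these metrics measure, MAE and sMAPE can be computed by hand on a toy example with plain NumPy. Note this sketch uses the common sMAPE definition with a factor of 2; utilsforecast's implementation may scale it differently, so treat the formulas, not the absolute values, as the takeaway:

```python
import numpy as np

# Toy actuals and forecasts
y = np.array([100.0, 200.0, 300.0])
y_hat = np.array([110.0, 190.0, 330.0])

# MAE: average absolute error, in the units of the target
mae_val = np.mean(np.abs(y - y_hat))

# sMAPE: scale-free symmetric percentage error
smape_val = np.mean(2 * np.abs(y - y_hat) / (np.abs(y) + np.abs(y_hat)))

print(mae_val)    # 16.666...
print(smape_val)  # ~0.0806
```

MAE stays in the original units (megawatts here), while sMAPE allows comparison across series of different scales.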

Step 5: Forecast with N-HiTS

For comparison, we train and forecast using the deep-learning model N-HiTS.


Step 6: Evaluate N-HiTS

Compute accuracy metrics (MAE and sMAPE) for N-HiTS.

Evaluate N-HiTS Performance
preds_df = pd.merge(test_df, nhits_preds, how='left', on=['unique_id', 'ds'])

evaluation = evaluate(
    preds_df,
    metrics=[mae, smape],
    models=["NHITS"],
    target_col="y",
    id_col="unique_id"
)
average_metrics = evaluation.groupby('metric')['NHITS'].mean()
print(average_metrics)

Conclusion

Key Results

  • TimeGPT achieves an MAE of 882.6, compared to 1084.7 from N-HiTS (18.6% improvement).

  • TimeGPT’s sMAPE is 31.1% lower than N-HiTS.

  • TimeGPT generates predictions in roughly 4.3 seconds, which is 90% faster than N-HiTS’s 44 seconds.
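The percentage improvements quoted above follow directly from the raw numbers; a quick check in plain Python:

```python
# Relative improvement of TimeGPT over N-HiTS, from the raw metrics above
mae_timegpt, mae_nhits = 882.6, 1084.7
mae_gain = (mae_nhits - mae_timegpt) / mae_nhits

time_timegpt, time_nhits = 4.3, 44.0
speed_gain = (time_nhits - time_timegpt) / time_nhits

print(f"MAE improvement:   {mae_gain:.1%}")    # 18.6%
print(f"Speed improvement: {speed_gain:.1%}")  # 90.2%
```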

TimeGPT offers substantial benefits in accuracy and speed, making it a powerful tool for forecasting energy consumption and other time-series tasks. Experiment with the parameters to further optimize performance for your specific use case.