EcoCast: Multi-Task Global Temperature Forecasting via Autoregressive Transformers

Student: Yang Han
Table: ENV3
Experimentation location: School, Home
Regulated Research (Form 1c): No
Project continuation (Form 7): No


Abstract:

Bibliography/Citations:

[1] Hewage, Pradeep, et al. "Long-short term memory for an effective short-term weather forecasting model using surface weather data." IFIP International Conference on Artificial Intelligence Applications and Innovations. Cham: Springer International Publishing, 2019.

[2] Hunt, Kieran M. R., et al. "Using a long short-term memory (LSTM) neural network to boost river streamflow forecasts over the western United States." Hydrology and Earth System Sciences 26.21 (2022): 5449-5472.

[3] El-Habil, Basel Y., and Samy S. Abu-Naser. "Global climate prediction using deep learning." J Theor Appl Inf Technol 100 (2022): 24.

[4] Hanoon, Marwah Sattar, et al. "Developing machine learning algorithms for meteorological temperature and humidity forecasting at Terengganu state in Malaysia." Scientific Reports 11.1 (2021): 18935.

[5] Alerskans, Emy, et al. "A transformer neural network for predicting near‐surface temperature." Meteorological Applications 29.5 (2022): e2098.

[6] Bi, Kaifeng, et al. "Accurate medium-range global weather forecasting with 3D neural networks." Nature 619.7970 (2023): 533-538.

[7] Nandi, Arpan, et al. "Attention based long-term air temperature forecasting network: ALTF Net." Knowledge-Based Systems 252 (2022): 109442.

[8] https://arxiv.org/pdf/2205.00133.pdf
[9] https://climatedataguide.ucar.edu/climate-data/global-temperature-data-sets-overview-comparison-table
[10] https://data.giss.nasa.gov/gistemp/
[11] https://www.ncei.noaa.gov/products/land-based-station/noaa-global-tem
[12] https://github.com/owid/co2-data 


Additional Project Information

Project website: -- No project website --
Research paper:
Additional Resources: -- No resources provided --
Project files:

Research Plan:

Rationale:

Background:

Climate change is visible, from rising sea levels to natural disasters striking more often than ever, but the most obvious signal is the drastic rise in global temperatures over the past century. Since 1880, Earth's temperature has increased at an average pace of 0.14 °F (0.08 °C) per decade; the rate of warming since 1981, however, is more than double that, at 0.32 °F (0.18 °C) per decade. Based on NOAA's temperature data, 2021 was the sixth-hottest year on record. Warmer temperatures can also set off a chain reaction of other global changes.

Impact:

Earth has just recorded its hottest year yet, by a wide margin. Being able to predict the global climate and temperature serves as a crucial reminder of the crisis we are in. Addressing climate change through mitigation and adaptation requires understanding the issue. Whether through green energy, political intervention such as carbon taxation, or reforestation, knowing where we currently stand and where we are headed is a good place to start.

Plan:

Because temperature forecasting naturally combines sequential and spatial data, many sequence-to-sequence model architectures have been employed in this field: recurrent neural networks, convolutional neural networks, long short-term memory (LSTM) networks, and transformers have been the predominant solutions for temperature and time-series prediction. In this study, I aim to use historical time-series temperature data together with large language models (LLMs) to predict global temperature changes. I will use transfer learning to adapt LLMs to process the temperature data, conducting two different transfer-learning approaches and comparing their performance.

Research Questions:

1) What are the state-of-the-art global temperature time-series forecasting techniques? This includes surveying the available data types and the different approaches researchers have taken. What has been their peak achievable performance? What challenges or gaps remain in the field?

2) Can I use larger, more complex, transformer-based models to further improve time-series prediction performance? What are the main technical challenges, and how can this work be extended?

 

3) What metrics have others used, and which should I use, to quantify the results and characterize prediction performance? Why are these metrics important, and in what ways do they fall short?

 

4) Using large transformer-based sequence-to-sequence models, what performance is achievable compared to previous work? What is the upper limit, and why?

 

5) What is the impact of data preprocessing on the performance of transformer-based models? Will overlapping the data segments affect the model's performance, and how large will the effect on the final result be?

 

6) How will adding new data modalities (precipitation, wind, greenhouse gas emissions) and training a multi-task model differ from the single-task approach? Will it improve prediction performance?

 

7) Will I be able to distill the large, complex model into a smaller one that can run on mobile devices, making it more accessible?

Procedures:

 

1) Literature Review:

The first step is a thorough literature review to understand the landscape of the research area. This includes reading and taking notes on research papers covering time-series and sequence-to-sequence prediction. These papers will help me understand existing work on global temperature prediction using machine learning techniques such as recurrent neural networks, convolutional neural networks, long short-term memory networks, and transformers. I will assess their methodologies, performance, and drawbacks, and look for ways to improve on them. Example references are listed in the bibliography section.

 

2) Analyze and Compare Datasets

The second step is to identify accessible data that provides a comprehensive record of global temperature and weather. This includes comparing the source of the data (NOAA, NASA, etc.), its format, and its coverage. Factors such as spatial resolution, years of record, and timestep will all affect the model's performance. For instance, I will explore the datasets available at the websites listed in the bibliography.

 

3) Data Processing

Once I have a good understanding of the state-of-the-art methodologies and available datasets for the global temperature prediction task, the next step is to preprocess the data and build my model. For data processing, I will combine the raw data from my different sources into a single, uniform format ready for machine learning. This preprocessing phase will include (1) segmenting the data into uniform timesteps to make training more consistent, (2) normalizing the data to zero mean and unit standard deviation, and (3) creating a custom PyTorch Dataset and DataLoader to produce the input/target sequences and split the data into training, validation, and test sets. This ensures a uniform value range that facilitates efficient model training.
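
As a concrete illustration, the sketch below shows one way steps (1)-(3) could fit together, assuming the merged series is already a 1-D NumPy array of monthly values; the class name, window lengths, stride, and file path are illustrative, not the final implementation.

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset, random_split

class WindowDataset(Dataset):
    """Slices a normalized series into (input, target) windows."""
    def __init__(self, series, input_len=120, target_len=12, stride=1):
        # (2) Normalize to zero mean, unit variance. In practice the statistics
        # should come from the training split only, to avoid leakage.
        series = (series - series.mean()) / series.std()
        self.x = torch.tensor(series, dtype=torch.float32)
        self.input_len, self.target_len, self.stride = input_len, target_len, stride

    def __len__(self):
        return (len(self.x) - self.input_len - self.target_len) // self.stride + 1

    def __getitem__(self, i):
        s = i * self.stride                                  # (1) window start
        src = self.x[s : s + self.input_len]
        tgt = self.x[s + self.input_len : s + self.input_len + self.target_len]
        return src.unsqueeze(-1), tgt.unsqueeze(-1)          # add a feature dimension

series = np.loadtxt("merged_temperature.csv", delimiter=",")  # placeholder path
ds = WindowDataset(series)
n = len(ds)
n_train, n_val = int(0.8 * n), int(0.1 * n)
# (3) A random split is shown for brevity; a chronological split is often
# preferable for time series, keeping the test period strictly in the future.
train_ds, val_ds, test_ds = random_split(ds, [n_train, n_val, n - n_train - n_val])
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)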

 

4) Model Architecture

This step will take inspiration from the literature review and produce a model that leverages the strengths of state-of-the-art transformers for time-series prediction: self/cross-attention and parallel computation. The model will be adapted to handle sequence-to-sequence prediction in the context of global temperature time series.

The modeling effort includes (1) feature selection, (2) encoder design, (3) decoder design, (4) loss-function design, (5) one-shot or autoregressive output heads, and (6) evaluation metrics. I will also explore adding dropout and normalization layers.
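
A minimal sketch of such a model, built on PyTorch's nn.Transformer, might look like the following; the dimensions, the learned positional embedding, and the class name are assumptions rather than the final design, and the regression head pairs naturally with an MSE loss (step 4).

import torch
import torch.nn as nn

class TempTransformer(nn.Module):
    """Sketch of a seq2seq transformer for univariate temperature windows."""
    def __init__(self, d_model=64, nhead=4, num_layers=2, max_len=512, dropout=0.1):
        super().__init__()
        self.in_proj = nn.Linear(1, d_model)          # scalar feature -> d_model
        self.pos = nn.Embedding(max_len, d_model)     # learned positional encoding
        self.core = nn.Transformer(d_model=d_model, nhead=nhead,
                                   num_encoder_layers=num_layers,
                                   num_decoder_layers=num_layers,
                                   dropout=dropout, batch_first=True)
        self.head = nn.Linear(d_model, 1)             # regression head

    def embed(self, x):
        pos = torch.arange(x.size(1), device=x.device)
        return self.in_proj(x) + self.pos(pos)

    def forward(self, src, tgt):
        # Causal mask keeps each decoder position from seeing future values;
        # during training, tgt is the target sequence shifted right (teacher forcing).
        mask = self.core.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        out = self.core(self.embed(src), self.embed(tgt), tgt_mask=mask)
        return self.head(out)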

 

5) Implementation

Once the design is settled, the next step is to implement the data processing (e.g., segmentation), model construction, and the train/validation/test pipeline using PyTorch and Python.

I plan to write individual Python modules for the data processor, the model class, and the train/validation/test utilities. I will then use a Jupyter notebook to assemble the full pipeline for easy experimentation and debugging.

I will also implement evaluation metrics, including mean squared error (MSE), mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R²), to evaluate my model.
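
These metrics can be computed directly from the predictions once they are denormalized; a minimal sketch, with an illustrative function name:

import numpy as np

def evaluate(y_true, y_pred):
    """MSE, MAE, RMSE, and R^2 on denormalized NumPy arrays."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(mse))
    # R^2 = 1 - SS_res / SS_tot (coefficient of determination)
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "R2": float(r2)}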

 

6) Visualization

In this step, I will create tools and functions for visualizing my predictions, graphing the true and predicted values side by side and charting each model's performance. This will help me assess a model's validity while making it easier to compare models against one another.
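
A minimal plotting helper along these lines, assuming matplotlib and denormalized series (function name and labels are illustrative):

import matplotlib.pyplot as plt

def plot_forecast(t, y_true, y_pred, model_name="model"):
    """Overlay observed values and a model's forecast on one time axis."""
    plt.figure(figsize=(8, 3))
    plt.plot(t, y_true, color="black", label="observed")
    plt.plot(t, y_pred, linestyle="--", label=f"{model_name} forecast")
    plt.xlabel("Year")
    plt.ylabel("Temperature anomaly (°C)")
    plt.legend()
    plt.tight_layout()
    plt.show()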

 

7) Experimentation

This stage focuses on extensive experimentation with the model. It includes tuning hyperparameters such as the model size, learning rate, and number of attention heads; comparing the performance of single-task and multi-task models; and contrasting a single-shot with an autoregressive training loop. I will also explore how different data modalities used in unison, such as precipitation levels or greenhouse gas emissions, affect the model's performance.
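
To make the single-shot vs. autoregressive comparison concrete, the sketch below shows an autoregressive inference loop for a model with the seq2seq interface sketched in step 4; seeding the decoder with the last observed value is an assumption, not the only option.

import torch

@torch.no_grad()
def forecast_autoregressive(model, src, steps):
    """Generate `steps` predictions, feeding each one back into the decoder."""
    model.eval()
    tgt = src[:, -1:, :]                          # seed: last observed value
    for _ in range(steps):
        out = model(src, tgt)                     # (batch, len(tgt), 1)
        tgt = torch.cat([tgt, out[:, -1:, :]], dim=1)
    return tgt[:, 1:, :]                          # drop the seed

# A single-shot head would instead emit all `steps` values in one forward
# pass, trading error accumulation for a harder one-pass prediction problem.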

 

8) Distillation Implementation

In this step, I aim to create a smaller model that can be used on mobile devices (phones, tablets, etc.). This involves a student model trained to mimic the larger (teacher) model, in the hope of achieving similar or even superior performance. I aim to quantify the cost of creating the smaller model and to balance the trade-off between size and performance.
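
Since this is a regression task, a simple distillation objective is to have the student match both the ground truth and the teacher's outputs; the sketch below is one such formulation, with the weighting `alpha` and the function signature as assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student, teacher, src, tgt_in, y_true, alpha=0.5):
    """Blend the supervised loss with a match-the-teacher loss."""
    with torch.no_grad():
        y_teacher = teacher(src, tgt_in)          # soft targets from the large model
    y_student = student(src, tgt_in)
    supervised = F.mse_loss(y_student, y_true)
    mimic = F.mse_loss(y_student, y_teacher)
    return (1 - alpha) * supervised + alpha * mimic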

 

9) Report

In the final step, I will aggregate my results, reporting the training, validation, and test metrics for both single-task and multi-task learning. Additionally, I will compare the following in specific detail:

  • Single-task learning (global temperature only):
    • Performance of my transformer-based model against a baseline model
    • Non-overlapping vs. overlapping data segments
  • Multi-task learning (global temperature, precipitation, and greenhouse gas emissions data):
    • Performance of the multi-task model against the single-task model on the global temperature prediction task
    • Investigation of why multi-task learning boosts the temperature prediction task, if it does at all; one potential reason is that the shared encoder extracts more useful information from the data


Risk and Safety:

The safety concerns surrounding this project are minimal, since it requires no human operators or lab experiments. There is, however, another way of approaching risk: in machine learning, implicit bias in the reported data could compromise the fairness of the results. For example, since global climate data is usually collected through stations located around the world, I need to ensure that the data gives a complete picture of global temperatures.

Questions and Answers

Initial project questions

1. What was the major objective of your project, and what was your plan to achieve it? 

 

The primary objective was to enhance environmental forecasting by making it more accurate and reliable, so that we can better prepare to adapt to and mitigate climate change. I decided to use machine learning, specifically the transformer, a powerful sequence-to-sequence architecture, to analyze the time-series data.

 

a. Was that goal the result of any specific situation, experience, or problem you encountered?  

 

With rapid changes in global temperatures making massive headlines, and 2023 being the hottest year on record, I felt something had to be done. I realized that people needed to understand the urgency of the situation and the impacts it could bring. By improving the accuracy of predictions of global temperatures and greenhouse gas emissions, I hoped to give the public and governments a basis for informed decisions on climate change adaptation and mitigation strategies.

 

b. Were you trying to solve a problem, answer a question, or test a hypothesis?

 

The project addressed a problem by creating a highly accurate forecasting system for environmental change. Experimentation showed that the autoregressive, multi-task transformer model outperformed the baseline and traditional forecasting models.

 

 

2. What were the major tasks you had to perform to complete your project?

 

To start the project, I focused on the literature review, building a solid background in the recent work on time-series forecasting, of which there is surprisingly little. I studied previous methods in depth, including non-machine-learning methods, approaches based on long short-term memory (LSTM) models, and transformer-based models. In addition, I familiarized myself with the stakes of environmental forecasting and climate change, and with how mitigation and adaptation efforts depend on accurate predictions of environmental variables such as temperature and greenhouse gas emissions.

 

After the literature review, I searched for promising data sources. I ultimately decided to use the NOAAGlobalTemp dataset compiled by the National Centers for Environmental Information and the CO2 and Greenhouse Gas Emissions dataset compiled by Our World in Data. I then downloaded the raw data and worked through a meticulous preprocessing stage, including segmentation, normalization, and the creation of a custom PyTorch Dataset.

 

After settling on the transformer architecture, I implemented a custom transformer from scratch in Python. Because I understood every part of the model, it was easy to debug. After training, I compared my results against the baseline and carried out further ablation studies.

 

a. For teams, describe what each member worked on.

 

This was an individual project, and I managed all the tasks.

 

3. What is new or novel about your project?

 

a. Is there some aspect of your project's objective, or how you achieved it, that you haven't done before?

b. Is your project's objective or how you implemented it different from anything you have seen?

 

The novelty of the project stems from the custom transformer model architecture and the use of a multi-task learning framework that leverages correlations between multiple environmental factors to improve prediction accuracy. The supporting comparisons, between single-shot and autoregressive models, single-task and multi-task learning, and overlapping and non-overlapping data, are also novel in this context.

 

c. If you believe your work to be unique in some way, what research have you done to confirm that it is?

 

A thorough literature review ensured that my proposed forecasting strategy is novel compared to previous work. In addition, a comparison against a baseline model (Informer) confirmed the effectiveness and significant improvement of this project's approach.

 

4. What was the most challenging part of completing your project?

 

The most challenging part was implementing and optimizing the custom transformer model. Diagnosing tensor-dimension mismatches and other errors that prevented the model from training took a significant amount of time.

 

a. What problems did you encounter, and how did you overcome them?

 

Many problems stemmed from the scarcity of data. Although I had roughly two hundred years of records to work with, in the grand scheme of things that is not a lot of data. Overlapping the training data usually led to overfitting, and at a coarser temporal resolution there was even less data to go around. However, by tuning the model carefully and finding the optimal size, I got it to learn the dependencies within the sequences even with the limited dataset.

 

Another issue was integrating multi-task learning. It was hard to find data that paired well with the temperature data, and at first I wasn't sure how to reconcile the differences in temporal resolution and dimensions of the input data. In the end, I matched the two input dimensions by aggregating the monthly temperature data into annual data and padding the global emissions data.
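
A sketch of that alignment in pandas; the NOAA file layout and column names here are assumptions, while the OWID file (reference [12]) does provide country, year, and co2 columns.

import pandas as pd

# Assumed layout: one row per month with "date" and "anomaly" columns.
temp = pd.read_csv("noaa_global_temp_monthly.csv", parse_dates=["date"])
temp["year"] = temp["date"].dt.year
annual_temp = temp.groupby("year")["anomaly"].mean()      # monthly -> annual

co2 = pd.read_csv("owid-co2-data.csv")                    # reference [12]
world_co2 = co2.loc[co2["country"] == "World"].set_index("year")["co2"]

# Align emissions to the temperature years; years with no emissions
# record are zero-padded, as described above.
merged = pd.DataFrame({
    "temp": annual_temp,
    "co2": world_co2.reindex(annual_temp.index).fillna(0.0),
})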

 

b. What did you learn from overcoming these problems?

 

The most significant lesson from these challenges was the importance of care in data handling and model design. Data preprocessing proved tedious and slow but necessary for good results; ensuring the data was clean, normalized, and appropriately structured was essential to the project. Similarly, tuning the transformer's hyperparameters taught me how to balance a complex, high-performing model against overfitting.

 

Beyond the technical skills gained, these challenges have significantly shaped me as an individual and as a researcher, instilling resilience and meticulous attention to detail. Navigating the complexities of a machine learning project taught me the value of perseverance in the face of daunting problems, and it refined my ability to identify and resolve subtle issues. As I progress in my academic and professional journey, these traits will anchor my approach to research, encouraging me to produce technically excellent work and to embrace whatever challenges come along the way.

 

These challenges emphasized that every detail counts in machine learning, especially with complex models like transformers. 

 

5. If you were going to do this project again, would there be any things you would do differently the next time?

 

If I were to do this project again, I would spend more time searching for and preprocessing data. Beyond the temperature and emissions data used for training, I would add further environmental factors to strengthen the multi-task learning framework.

 

In addition, I would spend more time investigating the effects of overlapping data, especially with different degrees of overlap. Instead of overlapping at every data point, I could increase the stride between windows, providing the model with more data at a lower risk of redundancy.

 

 

6. Did working on this project give you any ideas for other projects?

 

Working on this project has inspired me to go beyond temperature forecasting. The model is currently trained on only temperature and greenhouse gas emissions data; I would like to see whether it can be generalized and trained on a wider variety of data inputs.

 

7. How did COVID-19 affect the completion of your project?

 

Since communication between my mentor and me was entirely online, COVID-19 had little impact on the project.