February 18, 2024

Author: Yangfeng Ji @ UVA ILP

Part 1. Environment Setup

The instructions are customized for two servers that are widely used at UVA and in the CS department. However, they can easily be adapted to any Linux server. For servers that do not use SLURM for job scheduling or Environment Modules for package management, the setup is even simpler.
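On a SLURM-managed server, a GPU job is typically described in a batch script and submitted with sbatch. The sketch below is illustrative only: the partition name, account, and module names are assumptions and will differ on Rivanna or the CS servers.

```
#!/bin/bash
#SBATCH --job-name=finetune          # job name shown in the queue
#SBATCH --partition=gpu              # partition name is an assumption
#SBATCH --gres=gpu:1                 # request one GPU
#SBATCH --time=04:00:00              # wall-clock limit
#SBATCH --mem=32G                    # CPU memory

# Load packages via Environment Modules (module names are assumptions)
module load anaconda
module load cuda

# Run the fine-tuning script (hypothetical entry point)
python finetune.py
```

Submit it with `sbatch job.slurm` and check its status with `squeue -u $USER`. On servers without SLURM, the `python finetune.py` line can simply be run directly.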

On Rivanna

JupyterLab on Rivanna

On UVA CS GPU Servers

Part 2. Models

The base model provides unified training and test APIs, while each model-specific module handles the model-specific setup (e.g., the tokenizer) and any other details as needed.
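The division of labor described above can be sketched as a base class plus thin subclasses. The class and method names here are illustrative, not the package's actual API; in practice a subclass such as a Llama 2 wrapper would load its own Hugging Face tokenizer and model.

```python
from abc import ABC, abstractmethod

class BaseModel(ABC):
    """Unified fine-tuning and prediction API (names are illustrative)."""

    def finetune(self, dataset):
        # Unified training loop: tokenize each instance with the
        # model-specific tokenizer, then (in a real implementation)
        # update the model parameters.
        return [self.tokenize(text) for text in dataset]

    def predict(self, text):
        # Unified prediction entry point; a real implementation would
        # run the model forward and decode the output.
        return self.tokenize(text)

    @abstractmethod
    def tokenize(self, text):
        """Model-specific setup lives in the subclasses."""

class WhitespaceModel(BaseModel):
    # Stand-in for a model-specific module (e.g., a Llama 2 or
    # Flan-T5 wrapper that would load its own tokenizer).
    def tokenize(self, text):
        return text.split()

model = WhitespaceModel()
tokens = model.predict("fine-tune a model")
```

With this pattern, adding support for a new model (Mistral, Falcon, ...) only requires a new subclass; the training and test code is shared.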

A Simple Example

Base Functions for Fine-tuning and Predictions

Llama 2

Mistral

Falcon

Flan-T5

Part 3. Data Preparation

By default, this package uses an instruction-based tuning framework. This means every instance has three fields: instruction, input, and output. For a dataset that contains only one task, this framework is slightly redundant. However, it gives us a generic setup that works for both single-task and multi-task fine-tuning.
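Before fine-tuning, the three fields are typically rendered into a single training string. The template below follows the common Alpaca-style layout; the exact wording is an assumption and may differ from what this package uses.

```python
def format_instance(instruction, input_text, output):
    """Render an (instruction, input, output) instance as one string.

    The section headers below are an assumed Alpaca-style template,
    not necessarily the one used by this package.
    """
    prompt = f"### Instruction:\n{instruction}\n\n"
    # For a single-task dataset the input field may be empty, in which
    # case the Input section is simply omitted.
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    prompt += f"### Response:\n{output}"
    return prompt

example = format_instance(
    instruction="Translate the sentence to French.",
    input_text="Good morning.",
    output="Bonjour.",
)
```

Because every instance carries its own instruction, mixing several tasks in one training set requires no change to this formatting code.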