LLMs as Regressor - Guide
1. Objective
This guide provides step-by-step instructions on fine-tuning a model for regression tasks on Emissary using our novel regression approach. In this approach, we add a regression head on top of the base LLM that returns predicted values.
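Conceptually (this is an illustrative sketch, not Emissary's actual implementation), a regression head is a small learned linear map from the base LLM's pooled hidden state to a single scalar prediction:

```python
# Illustrative sketch: a linear regression head over a pooled hidden state.
# In practice the hidden state comes from the base LLM and the weights are
# learned during fine-tuning; the numbers below are just examples.
def regression_head(hidden_state, weights, bias):
    """Map a pooled hidden-state vector to one scalar prediction."""
    return sum(h * w for h, w in zip(hidden_state, weights)) + bias

pooled = [0.3, -1.2, 0.8]  # stand-in for the LLM's pooled hidden state
prediction = regression_head(pooled, weights=[0.5, 0.1, -0.2], bias=0.05)
```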
2. Dataset Preparation
Prepare your dataset in the appropriate format for the Regression task.
Regression Data Format
Each entry should contain:
- prompt: The input text for the regression task.
- completion: The ground-truth numeric target value (a JSON number, typically a float) that the model should predict for the given prompt. This represents a continuous label (e.g., a score), and the valid range depends on your task definition.
{
"prompt": "This is a sample text for regression task.",
"completion": 0.231
}
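For example, a training file can be assembled with one JSON object per line (JSONL is a common convention; confirm the expected file format against your workspace's upload requirements). The rows below are made-up examples:

```python
import json

# Made-up example rows in the prompt/completion format shown above.
rows = [
    {"prompt": "This is a sample text for regression task.", "completion": 0.231},
    {"prompt": "Another example input.", "completion": 0.874},
]

# Write one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```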
Note: For large target values or wide-ranging scales, normalize targets before training for better stability.
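A minimal normalization sketch, assuming min-max scaling to [0, 1]; keep the bounds so predictions can be mapped back to the original scale after inference:

```python
# Min-max normalize targets to [0, 1] before training, and keep the
# bounds so model predictions can be mapped back to the original scale.
targets = [12.5, 480.0, 1032.7, 75.3]
lo, hi = min(targets), max(targets)
normalized = [(t - lo) / (hi - lo) for t in targets]

def denormalize(y):
    """Map a normalized prediction back to the original target scale."""
    return y * (hi - lo) + lo
```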
3. Finetuning Preparation
Please refer to the in-depth guide on Finetuning on Emissary here - Quickstart Guide.
Create Training Project
- Navigate to the Dashboard; Training is the default page on the Emissary platform.
- Click + NEW PROJECT in the dashboard.
- In the pop-up, enter a new training project name and click CREATE.
Uploading Dataset
- A tile is created for your task. Click Manage to enter the task workspace.
- Click Manage Datasets in the Datasets Available tile.
- Click + UPLOAD DATASET and select your training and test datasets.
- Name the dataset and upload the file.
4. Model Finetuning
Now, go back one panel by clicking OVERVIEW and then click Manage Training Jobs in the Training Jobs tile.

Click the + NEW TRAINING JOB button and fill in the configuration.


Required Fields
- Name: The name of your training job (fine-tuned model).
- Base Model: Choose the backbone pre-trained or fine-tuned model from the drop-down list.
- Training Technique: Choose the training technique to use; regression is only supported with SFT.
- Task Type: Select the task type Regression.
- Train Dataset: Select the dataset you would like to train the backbone model on.
(Optional)
- Test Dataset: You can provide a test dataset, which will then be used in the testing (evaluation) phase. If none is selected, the testing phase will be skipped.
  - Split Train/Test Dataset: Use a ratio of the train dataset as the test set.
  - Select existing dataset: Upload a separate dataset for testing.
- Hyper Parameters: All hyperparameters are set to good default values, but you can adjust them if you want.
- Test Functions: When you select any Test Dataset option, you can also provide your own test functions, which give you aggregate results. We recommend trying our regression mae function.
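As a sketch of what such an aggregate test function computes (the exact interface Emissary expects for custom test functions may differ), mean absolute error over prediction/target pairs looks like:

```python
# Sketch of an aggregate metric like "regression mae": mean absolute
# error over prediction/target pairs.
def regression_mae(predictions, targets):
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)
```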
After initiating the training job, you will see it in the training job list.

Clicking a row takes you to the training job detail page.

You can check the Status and Progress in the summary, and view the live logs and loss graph by clicking the tabs on the side.


Go to the Artifacts tab to check checkpoints and test results (if a test dataset and test functions were provided).

5. Deployment
From the Artifacts tab, you can deploy any checkpoint from the training job by clicking the DEPLOY button.

(Optional) You can also configure resource management when creating a deployment. Setting an inactivity timeout will shut down your deployment (inference engine) after a period of inactivity. You can also schedule your deployment to run at a specific date and time.
Once you initiate your deployment, go to the Inference dashboard, where you will see your recent and previous deployments.

By clicking the card, you can see the details of your deployment (inference engine).

Once your deployment status becomes Deployed, your inference server is ready to use. You can test your deployment in the Testing tab (UI), or call it via API by referring to the API Examples tab.
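A minimal sketch of calling a deployed model from Python. The endpoint URL and payload/response schema below are placeholders, not Emissary's actual API; copy the real values from your deployment's API Examples tab:

```python
import json
import urllib.request

# Placeholder values: take the real URL, headers, and request schema
# from the API Examples tab of your deployment.
DEPLOYMENT_URL = "https://<your-deployment-endpoint>"  # hypothetical
payload = {"prompt": "This is a sample text for regression task."}
body = json.dumps(payload).encode("utf-8")

# Uncomment once DEPLOYMENT_URL is filled in with your real endpoint:
# req = urllib.request.Request(
#     DEPLOYMENT_URL, data=body, headers={"Content-Type": "application/json"}
# )
# with urllib.request.urlopen(req) as resp:
#     prediction = json.loads(resp.read())  # a numeric predicted value
```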

