Text Generation
1. Objective
This guide provides step-by-step instructions on fine-tuning a model for Text Generation tasks on Emissary.
2. Dataset Preparation
Prepare your dataset in the appropriate format for the text generation task.
Text Generation Data Format
For text generation, your dataset can be in one of two formats.
Completion:
- Prompt: The input text for generation
- Completion: The output text that the model should generate
{ "prompt": "input", "completion": "output" }
Chat:
- Messages: A list of messages, each containing a role and its corresponding content.
{
"messages": [
{ "role": "user", "content": "input" },
{ "role": "assistant", "content": "response" }
]
}
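The records above can be written out programmatically. This sketch assumes datasets are uploaded as JSONL (one JSON object per line), which is a common convention for fine-tuning data — confirm the exact file format Emissary expects before uploading.

```python
import json

# Example records in the two supported formats.
completion_records = [
    {"prompt": "Translate to French: Hello", "completion": "Bonjour"},
]
chat_records = [
    {
        "messages": [
            {"role": "user", "content": "Translate to French: Hello"},
            {"role": "assistant", "content": "Bonjour"},
        ]
    },
]

def write_jsonl(records, path):
    """Write records as JSONL: one JSON object per line."""
    with open(path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

write_jsonl(completion_records, "train_completion.jsonl")
write_jsonl(chat_records, "train_chat.jsonl")
```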
3. Finetuning Preparation
Please refer to the in-depth guide on Finetuning on Emissary here - Quickstart Guide.
Create Training Project
Navigate to the Dashboard; Training is the default page on the Emissary platform.
- Click + NEW PROJECT in the dashboard.
- In the pop-up, enter a name for the new training project and click CREATE.

Upload Dataset
A tile is created for your task. Click Manage to enter the task workspace.

- Click Manage Datasets in the Datasets Available tile.
- Click + UPLOAD DATASET and select your training and test datasets.
- Name the dataset and upload the file.
4. Model Finetuning
Now, go back one panel by clicking OVERVIEW, then click Manage Training Jobs in the Training Jobs tile.

Click the + NEW TRAINING JOB button and fill in the configuration.

SFT Hyper Parameters

GRPO Hyper Parameters

Required Fields
- Name: The name of your training job (fine-tuned model)
- Base Model: Choose the backbone pre-trained / fine-tuned model from the drop-down list
- Training Technique: Choose the training technique to use
- Task Type: Select the task type (Text Generation)
- Train Dataset: Select the dataset to train the backbone model on
- Reward Function (GRPO ONLY): Select or add the reward function used in GRPO training. The total reward must sum to 1.
(Optional)
- Test Dataset: You can provide a test dataset, which will be used in the testing (evaluation) phase. If none is selected, the testing phase will be skipped.
- Split Train/Test Dataset: Use a ratio of the train dataset as the test set
- Select existing dataset: Upload a separate dataset for testing
- Hyper Parameters: All hyperparameters come preset with good default values, but you can adjust them if needed.
- Test Functions: When you select any Test Dataset option, you can also provide your own test functions, which produce aggregate results. We recommend trying our test similarity function.
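A custom test function can report an aggregate score over the test set. The interface Emissary expects is an assumption here; this sketch shows one plausible shape — an average string-similarity score between model outputs and reference completions, using Python's standard-library `difflib`.

```python
import difflib

# Hypothetical test function: averages character-level similarity between
# each model prediction and its reference completion.

def similarity_score(predictions: list[str], references: list[str]) -> float:
    """Return an aggregate similarity in [0.0, 1.0] over the test set."""
    ratios = [
        difflib.SequenceMatcher(None, pred, ref).ratio()
        for pred, ref in zip(predictions, references)
    ]
    return sum(ratios) / len(ratios)
```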

After initiating the training job, you will see it in the list of training jobs.

If you click a row, you will be taken to the training job detail page.

You can check the status and progress from the summary, and view the live logs and loss graph by clicking the tabs on the side.


Go to Artifacts tab to check checkpoints and test results (if test dataset and functions provided).

5. Deployment
From the Artifacts tab, you can deploy any checkpoint from the training job by clicking the DEPLOY button.

(Optional) You can also configure resource management when creating a deployment. Setting an inactivity timeout will shut down your deployment (inference engine) after a period of inactivity. You can also schedule your deployment to run at a specific date and time.
Once you initiate your deployment, go to the Inference dashboard to see your recent and previous deployments.

By clicking the card you can see the details of your deployment (inference engine).

Once your deployment status becomes Deployed, your inference server is ready to use. You can test your deployment in the Testing tab (UI), or call it via the API by following the examples in the API Examples tab.
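Calling the deployed model over HTTP might look like the sketch below. The endpoint URL, path, payload shape, and response structure are all assumptions (modeled on a common chat-completions convention) — copy the real values from your deployment's API Examples tab.

```python
import requests

def build_request(api_key: str, model: str, user_message: str) -> dict:
    """Build headers and a chat-style JSON payload for an inference call.

    The payload shape is an assumption; use the API Examples tab as the
    source of truth for your deployment.
    """
    return {
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        },
    }

def call_deployment(endpoint: str, api_key: str, model: str,
                    user_message: str) -> str:
    """POST the request and return the generated text."""
    req = build_request(api_key, model, user_message)
    resp = requests.post(endpoint, timeout=30, **req)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Replace the endpoint, API key, and model name with the values shown for your own deployment.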

