You’ve probably heard of ChatGPT, the natural language processing tool that can generate human-like responses and save time when writing code. However, the data used to generate ChatGPT’s responses is gleaned from all over the internet, making it difficult to influence what sources the model will use to produce a response. This can be an issue when using ChatGPT for a specific task, such as in a CLI application built for a specific purpose.
We have a solution! In this article, you will build a CLI tool to respond with an example code block that we will define. Once a model is fine-tuned, it can work anywhere the ChatGPT API is used. You will tune the model to use Bitovi’s Docker to AWS EC2 action when asking ChatGPT for an example of a GitHub Action. Here’s the code block:
name: Basic deploy
on:
  push:
    branches: [ main ]

jobs:
  EC2-Deploy:
    runs-on: ubuntu-latest
    steps:
      - id: deploy
        uses: bitovi/github-actions-deploy-docker-to-ec2@v0.5.0
        with:
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_default_region: us-east-1
          dot_env: ${{ secrets.DOT_ENV }}
Now let’s review the steps needed to train ChatGPT with this code block.
Creating the Data
To fine-tune a model, you must first upload training data in a JSONL file. JSONL is similar to regular JSON, but it uses the newline character (\n) instead of commas to separate each record. Call your file data.jsonl. It will contain multiple prompt-completion objects, which have the following shape:
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
Since you want a block of code as a response, you’ll need to replace newlines and tab characters with their escaped values. So, the example YAML code in the previous section would look like this:
name: Basic deploy\non:\n\tpush:\n\t\tbranches: [ main ]\n\njobs:\n\tEC2-Deploy:\n\t\truns-on: ubuntu-latest\n\t\tsteps:\n\t\t\t- id: deploy\n\t\t\t\tuses: bitovi/github-actions-deploy-docker-to-ec2@v0.5.0\n\t\t\t\twith:\n\t\t\t\t\taws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}\n\t\t\t\t\taws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n\t\t\t\t\taws_default_region: us-east-1\n\t\t\t\t\tdot_env: ${{ secrets.DOT_ENV }}
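Escaping every newline and tab by hand is tedious and error-prone. One way to avoid it: JSON.stringify already escapes those characters, so you can keep the completion as a readable multi-line string. The helper below is a sketch for illustration; it is not part of the article’s repo, and the function name is made up.

```javascript
// Hypothetical helper: JSON.stringify escapes real newlines and tabs
// into \n and \t automatically, so the completion can stay readable.
function buildRecord(prompt, completion) {
  return JSON.stringify({ prompt, completion });
}

const completion = `name: Basic deploy
on:
  push:
    branches: [ main ]`;

// Prints one JSONL-ready line, with \n sequences instead of real line breaks.
console.log(buildRecord("bitovi github action", completion));
```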
Now that you have your expected response, you need to come up with prompts to fine-tune the model. We were able to get a successful fine-tuned model with as few as 40 different prompts, such as:
- bitovi github action
- deploy a docker image to ec2
Combining both the prompt and completion, the file should contain all your prompt objects like this:
{"prompt": "bitovi github actions ", "completion":"name: Basic deploy\non:\n\tpush:\n\t\tbranches: [ main ]\n\njobs:\n\tEC2-Deploy:\n\t\truns-on: ubuntu-latest\n\t\tsteps:\n\t\t\t- id: deploy\n\t\t\t\tuses: bitovi/github-actions-deploy-docker-to-ec2@v0.5.0\n\t\t\t\twith:\n\t\t\t\t\taws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}\n\t\t\t\t\taws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}\n\t\t\t\t\taws_default_region: us-east-1\n\t\t\t\t\tdot_env: ${{ secrets.DOT_ENV }}" }
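Since every record shares the same completion, you could also generate data.jsonl from a list of prompts instead of editing it by hand. This generator script is a hypothetical sketch; the filename, output path, and prompt list are assumptions, not part of the article’s repo.

```javascript
// Hypothetical generator: writes one JSONL record per prompt, all with
// the same completion. Extend `prompts` to the ~40 variations you need.
import fs from "fs";

const completion = `name: Basic deploy
on:
  push:
    branches: [ main ]`; // shortened here; use the full workflow from above

const prompts = [
  "bitovi github action",
  "deploy a docker image to ec2",
];

const lines = prompts.map((prompt) => JSON.stringify({ prompt, completion }));

// Each record goes on its own line, per the JSONL format.
fs.writeFileSync("data.jsonl", lines.join("\n") + "\n");
```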
Writing the Code
Next, you’ll implement the code to tune a ChatGPT model. In this section, you’ll connect to the CLI with an API Key, write some configuration functions, upload the data file, and finally perform a few tasks for fine-tuning.
API Key and Setting Up the CLI
In order to create an API key, you’ll need an account with OpenAI. Set up an account on the ChatGPT website. After setting up an account, you can access the API keys. Create a new secret key, then copy and save it as an environment variable called OPENAI_API_KEY.
If this is your first time setting up an environment variable, look for a tutorial for your operating system and shell environment. On macOS, for example, you might use a .bash_profile, a .profile, or a .zprofile depending on your setup.
As this is not a CLI tutorial, you can find the complete code for this app here. You can see that you will be using the gpt keyword to call the app’s functions.
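As a rough idea of how the gpt entry point could map its arguments onto the functions below, here is a minimal argument parser using only Node built-ins. The --model flag name and this parsing approach are assumptions for illustration, not necessarily how the repo does it.

```javascript
// Hypothetical argument parsing for the `gpt` CLI: the first word is the
// subcommand, the remaining words form the prompt, and --model opts in
// to the fine-tuned model.
function parseArgs(argv) {
  const [command, ...rest] = argv.slice(2); // skip `node` and the script path
  return {
    command,
    useModel: rest.includes("--model"),
    prompt: rest.filter((arg) => arg !== "--model"),
  };
}

// e.g. `gpt completion bitovi github action --model`
console.log(parseArgs(["node", "gpt", "completion", "bitovi", "github", "action", "--model"]));
```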
Writing the ChatGPT functions
The next step is to write the ChatGPT functions. These import the API and create a configuration you can fine-tune. To begin, import Configuration and OpenAIApi from the openai package. Then you’ll set the Configuration using the OPENAI_API_KEY (which you set in a .env file). Next, create a new instance of Conf to store the file and model names. Finally, create a new instance of the OpenAI SDK with the configuration object.
When done correctly, your code will look like this:
import { Configuration, OpenAIApi } from "openai";
import Conf from "conf";
import fs from "fs";
import * as dotenv from "dotenv";

dotenv.config();

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const conf = new Conf({ projectName: "ChatGPT-CLI" });
const openai = new OpenAIApi(configuration);
Upload the Data File
Now that you have a configuration to edit, upload the data file you created. Call the createFile() function and pass in the contents of data.jsonl, then save the file ID to memory. Your code should look like this:
export async function upload() {
  try {
    const response = await openai.createFile(
      fs.createReadStream("src/data.jsonl"),
      "fine-tune"
    );
    conf.set("fileId", response.data.id)
    console.log(`The file with ID: ${response.data.id} has been uploaded`)
    return response.data.id
  } catch (err) {
    console.log("err: ", err)
  }
}
Initiate a Fine-Tuning of the Model
Once the data file has been uploaded, you can create the fine-tuned model by calling createFineTune() and passing in the file ID.
export async function createFineTuneModel() {
  const fileId = conf.get("fileId")
  try {
    const response = await openai.createFineTune({training_file: fileId});
    console.log(`The model with file ${fileId} is being created`)
  } catch (err) {
    console.log("err: ", err)
  }
}
Get a List of All Fine-Tuned Models & Check the Status of the Last Model Created
It can take some time for your model to be fine-tuned. You can check whether your model is ready by listing all available fine-tuned models and then choosing the last model created. Do this by invoking listFineTunes().
You also get the name of the fine-tuned model here, which you can then use in your completions.
export async function getModelList() {
  try {
    const response = await openai.listFineTunes();
    return response.data.data
  } catch (err) {
    console.log("err: ", err)
  }
}

export async function checkIfModelIsComplete() {
  const models = await getModelList()
  try {
    // Pick the most recently created fine-tune job
    const currentModel = models.reduce((prev, current) => (prev.created_at > current.created_at) ? prev : current)
    if (currentModel.status === "succeeded") {
      conf.set("modelName", currentModel.fine_tuned_model)
      console.log(`This model has been fine-tuned with the name ${currentModel.fine_tuned_model}`)
      return
    }
    if (currentModel.status === "failed") {
      console.log("This model has failed to fine-tune")
      return
    }
    console.log("This model has not been fine-tuned yet")
  } catch (err) {
    console.log("err: ", err)
  }
}
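Fine-tuning can take several minutes, so instead of re-running the status check by hand, you could poll until the job settles. The helper below is a sketch, not part of the article’s code; `check` stands for any async function that reports the latest model’s status, such as a thin wrapper around the logic in checkIfModelIsComplete().

```javascript
// Hypothetical polling helper: calls `check` until it reports a terminal
// status ("succeeded" or "failed"), waiting `intervalMs` between tries.
async function pollUntilDone(check, { intervalMs = 10000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await check();
    if (status === "succeeded" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return "timed out";
}
```

You could then wrap the "pick the latest model" logic from checkIfModelIsComplete() in a function that returns the model's status, and await pollUntilDone(check) from your CLI command.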
Use the Fine-Tuned Model in Your Requests
Finally, you’ll call createCompletion() with the following parameters:
- model: Use your new fine-tuned model or the default curie model
- prompt: The actual prompt that will be sent to ChatGPT
- max_tokens: The maximum length of the response
export async function completion(prompt, useModel) {
  const model = (useModel && conf.get("modelName")) ? conf.get("modelName") : "curie"
  console.log("Model used: ", model)
  prompt = prompt.join(" ")
  try {
    const response = await openai.createCompletion({
      model: model,
      prompt: prompt,
      max_tokens: 250,
    })
    console.log(response.data.choices[0].text)
  } catch (err) {
    console.log("err: ", err)
  }
}
Results
When it’s all put together, you will have successfully fine-tuned your model with custom code. This article covered the most important parts of the task; if you’d like to see how those parts fit together, you can review the complete code here. Once everything’s in place, you should be able to run the app with a relevant prompt and see output like this screenshot.

Conclusion
ChatGPT has gained recognition as a versatile natural language processing tool, capable of crafting human-like responses and streamlining the code-writing process. In this article, we addressed the main issue keeping you from fully utilizing ChatGPT: its responses draw on content from across the internet. By building a custom CLI tool, you can harness fine-tuning to make ChatGPT provide specific code responses. Armed with this approach, your fine-tuned model integrates seamlessly with the ChatGPT API, empowering you to deploy it wherever needed.
Now you can dive in and refine your ChatGPT experience! With the CLI tool and fine-tuned model at your disposal, you can confidently prompt ChatGPT for tailored code responses, bridging the gap between the tool's vast capabilities and your specific coding requirements. Unleash the power of ChatGPT with precision and purpose, and explore the endless horizons of code generation.
What do you think?
Are you using ChatGPT to help you write code? Want to stay on the cutting edge of tech? Join our Community Discord and share your feedback with us on the #Backend channel. Don't hesitate to schedule a free consultation call if you need any assistance. Our team is always here to support you.