
πŸ”§ LLM Tuner: On-Device Fine-Tuner for Niche Domains

LLM Tuner is a lightweight Gradio-based application that lets you fine-tune open-source language models (such as GPT-2) on niche datasets or your own .txt files, directly on-device using LoRA (Low-Rank Adaptation). It has no cloud dependency, so your data stays private and under your control, and it provides a simple interface for running fine-tuning and chatting with the adapted model.


πŸš€ Features

  • βœ… Niche Domain Fine-Tuning
    Fine-tune using predefined datasets in domains like Finance, Legal, Medical, and Education.

  • πŸ“‚ Custom Dataset Upload
    Upload your own .txt file for custom domain fine-tuning.

  • πŸ” LoRA Training
    Parameter-efficient fine-tuning handled by the backend lora_train.py script (see the sketch after this list).

  • πŸ’¬ Chat Interface
    Chat with the fine-tuned model via an interactive chat window.

  • ⬇️ Download LoRA Adapters
    Download the LoRA adapter weights as a .zip after training, for offline use.
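
To give a sense of what the LoRA step involves, here is a minimal sketch of parameter-efficient fine-tuning of a causal LM on a plain .txt file. It illustrates the general approach, not the contents of lora_train.py; it assumes the peft and datasets libraries (which are not listed in the Requirements section below), and the hyperparameters are placeholders.

```python
# Minimal LoRA fine-tuning sketch (assumes `peft` and `datasets`;
# the real lora_train.py may be structured differently).
import shutil
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

def train_lora(txt_path, base_model="gpt2", out_dir="lora_adapter"):
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    tokenizer.pad_token = tokenizer.eos_token        # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(base_model)

    # Wrap the base model with low-rank adapters; only these weights are trained.
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                             task_type="CAUSAL_LM"))

    # Read the raw text file and tokenize it line by line.
    ds = load_dataset("text", data_files=txt_path)["train"]
    ds = ds.filter(lambda ex: ex["text"].strip() != "")
    ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="tmp_train",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

    # Save only the adapter weights and zip them for the download button.
    model.save_pretrained(out_dir)
    shutil.make_archive(out_dir, "zip", out_dir)     # -> lora_adapter.zip
```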


πŸ“ Project Structure

β”œβ”€β”€ app.py            # Main Gradio UI and logic
β”œβ”€β”€ lora_train.py     # LoRA training script
β”œβ”€β”€ datasets/
β”‚   β”œβ”€β”€ finance.txt
β”‚   β”œβ”€β”€ legal.txt
β”‚   β”œβ”€β”€ medical.txt
β”‚   └── education.txt
β”œβ”€β”€ lora_adapter/     # Created during training
└── lora_adapter.zip  # Generated after training

βš™οΈ Requirements

Create a requirements.txt file with the third-party dependencies and install them:

transformers
gradio
torch

pip install -r requirements.txt

Note: shutil, time, and logging are part of the Python standard library and do not need to be listed or installed. If lora_train.py uses the Hugging Face peft library for LoRA, add peft to the list as well.

πŸ§ͺ How to Run

1. Clone or download the project folder.

2. Ensure the datasets/ folder contains the niche-domain .txt files.

3. Run the app:

   python app.py

4. Open the link printed in the terminal (usually http://127.0.0.1:7860) in your browser. The sketch below shows roughly how app.py launches the Gradio server.
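
Gradio serves the interface at http://127.0.0.1:7860 by default, which is where that link comes from. For orientation, the entry point in app.py is wired roughly like the sketch below; the component layout and the train_lora helper are illustrative assumptions, not the project's actual code.

```python
# Rough sketch of the Gradio wiring in app.py (names are illustrative).
import gradio as gr
from lora_train import train_lora  # hypothetical helper from lora_train.py

DOMAINS = {"Finance": "datasets/finance.txt", "Legal": "datasets/legal.txt",
           "Medical": "datasets/medical.txt", "Education": "datasets/education.txt"}

def run_training(domain, base_model):
    train_lora(DOMAINS[domain], base_model=base_model)
    return "lora_adapter.zip"                 # handed back to the UI for download

with gr.Blocks(title="LLM Tuner") as demo:
    domain = gr.Dropdown(list(DOMAINS), label="Select Niche Domain Dataset")
    base_model = gr.Dropdown(["gpt2"], value="gpt2", label="Base Model")
    train_btn = gr.Button("Train")
    adapter_zip = gr.File(label="Download LoRA Adapter")
    train_btn.click(run_training, inputs=[domain, base_model], outputs=adapter_zip)

demo.launch()  # serves the UI at http://127.0.0.1:7860 by default
```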

πŸ–₯️ Usage Guide

Option 1: Niche Domain Fine-Tuning

1. Click "Select Niche Domain Dataset".

2. Choose a domain and a base model (e.g., GPT-2).

3. Click "Train".

4. Wait a few moments for fine-tuning to complete.

5. Download the LoRA adapter for offline use.

Option 2: Upload Custom Dataset

1. Click "Upload Custom Dataset (.txt)".

2. Upload your .txt file and choose a base model.

3. Click "Train".

4. Wait a few moments for fine-tuning to complete.

5. Download the LoRA adapter (the sketch below shows how the uploaded file reaches the training step).
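
For the custom path, the uploaded .txt file is simply handed to the same training step as the built-in domains. A hypothetical sketch of that handler (again assuming a train_lora helper; the real app.py may differ):

```python
# Sketch of the custom-dataset handler (component names are illustrative).
import gradio as gr
from lora_train import train_lora  # hypothetical helper, as above

def run_custom_training(txt_path, base_model):
    train_lora(txt_path, base_model=base_model)   # txt_path is the uploaded file
    return "lora_adapter.zip"

with gr.Blocks() as demo:
    upload = gr.File(label="Upload Custom Dataset (.txt)",
                     file_types=[".txt"], type="filepath")
    base_model = gr.Dropdown(["gpt2"], value="gpt2", label="Base Model")
    train_btn = gr.Button("Train")
    adapter_zip = gr.File(label="Download LoRA Adapter")
    train_btn.click(run_custom_training, inputs=[upload, base_model],
                    outputs=adapter_zip)

demo.launch()
```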

Chat with the Fine-Tuned Model

1. Go to the Chat Interface screen.

2. Click "Load Fine-Tuned Model".

3. Start chatting with the fine-tuned model!
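
Behind "Load Fine-Tuned Model", the app re-attaches the saved LoRA adapter to the base model before generating replies. A minimal sketch, assuming the adapter in lora_adapter/ was saved with the peft library:

```python
# Minimal sketch of loading the LoRA adapter for chat (assumes `peft`).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
model = PeftModel.from_pretrained(model, "lora_adapter")   # attach the adapter
model.eval()

def chat(prompt, max_new_tokens=100):
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                do_sample=True, top_p=0.9,
                                pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(chat("What is a balance sheet?"))
```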

πŸ“š Acknowledgements

  • Hugging Face Transformers
  • Gradio UI Framework
  • LoRA: Low-Rank Adaptation of Large Language Models

~by Debangan Sarkar, 23117043 :)
