Working directory layout:

```
Working directory
├───competitive-llms
└───talkative-llms
```
Change into the `competitive-llms` directory now and install the requirements:

```bash
pip install -r requirements.txt
```
Note that in the `sys.path.append` call you should specify the path (e.g. under your home directory) where `competitive-llms` is located. Now everything should be runnable.
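A minimal sketch of what that path setup might look like; the exact file and location of the append call are assumptions, so adjust the path to wherever you cloned the repository:

```python
import os
import sys

# Make the competitive-llms code importable from anywhere.
# Replace the path below with the directory where competitive-llms is located.
sys.path.append(os.path.join(os.path.expanduser("~"), "competitive-llms"))
```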
To replicate the results you can utilize the provided aggregated responses in the `n15_responses` folder.
To evaluate your own language model, you can add a config for your model in the `configs` folder under the `competitive-llms` directory.
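The exact config format is not shown here; as an illustration only, a model config might carry fields like the model identifier and generation settings. All names below are hypothetical, so match the format of the existing files in `configs`:

```python
# configs/my_model.py  (hypothetical file name and fields)
MY_MODEL_CONFIG = {
    "model_name": "my-model",   # identifier used by the evaluation scripts
    "max_tokens": 512,          # generation length limit
    "temperature": 0.0,         # deterministic decoding for benchmarking
}
```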
To benchmark your model on each bias module (a sketch of these edits follows below):
- Add your model to `evaluations/model_configs.py` with the path to your model's config file.
- Add your model to the `evaluators` array in `evaluate.py`.
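A rough sketch of what those two additions could look like; the variable names and layout below are assumptions rather than the repository's actual code, so mirror the existing entries in each file:

```python
# evaluations/model_configs.py: map a model name to its config file path
# (hypothetical structure; follow the entries already in this file)
MODEL_CONFIGS = {
    # ...existing models...
    "my-model": "configs/my_model.py",
}

# evaluations/evaluate.py: register the model so it gets benchmarked
# (hypothetical structure; follow the existing evaluators entries)
evaluators = [
    # ...existing models...
    "my-model",
]
```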
Run each script from the `competitive-llms` directory, for example:

```bash
python3 evaluations/evaluate.py 1 order
```

which runs the first batch of the defined list of models on the order benchmark.
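For context, the two positional arguments are the batch index and the benchmark name. The following is an illustration of how such arguments could be parsed, not the actual contents of `evaluate.py`:

```python
import argparse

# Illustration only: reading the batch index and benchmark name used in the
# command above (e.g. "1 order") inside an evaluation script.
parser = argparse.ArgumentParser(description="Run a batch of models on a bias benchmark.")
parser.add_argument("batch", type=int, help="1-based index of the model batch to run")
parser.add_argument("benchmark", help="benchmark name, e.g. 'order'")
args = parser.parse_args()

print(f"Running batch {args.batch} on the '{args.benchmark}' benchmark")
```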