This guide is outdated. Please check https://www.reddit.com/r/GameUpscale/ for more up-to-date info.



Note: On my machine, ESRGAN training is around 5x slower on Windows than on Linux. I don’t know the cause of this.

If you haven’t gotten ESRGAN set up for testing, please read this first: https://kingdomakrillic.tumblr.com/post/178254875891/i-figured-out-how-to-get-esrgan-and-sftgan. Everything needed to test ESRGAN is also needed to train it - Python, CUDA, etc.

If you’ve already done all that, go to Step 1.

1: Download and install Microsoft Build Tools 2015. It’s needed for one of BasicSR’s dependencies.

Then go to the command line and paste in this: pip install numpy opencv-python lmdb

2. Download BasicSR and the ESRGAN pretrained models.

https://github.com/xinntao/BasicSR

https://github.com/xinntao/BasicSR#pretrained-models

Place the models in (BasicSR directory)/experiments/pretrained_models

3. Download a dataset. The BasicSR creator uploaded several datasets to use here, but there are plenty of other datasets available online. 1,000 tiles (see step 5) is the absolute minimum for getting good results, but the more, the better.

https://github.com/xinntao/BasicSR#datasets

Make absolutely sure that none of the images are greyscale or indexed, or have alpha channels. RGB only, or else you will get a “Sizes of tensors must match” error. You can use IrfanView or BIMP (see below) to convert the images to RGB.

4. You will need to split your images into “training” and “validation” sets. Take about 5-10% of your images and put them in a separate folder; these will be your validation images.
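If you’d rather script the split, here’s a minimal sketch that moves a random 10% of the files into a validation folder (both paths in the usage comment are placeholders):

```python
import os
import random
import shutil

def split_validation(train_dir, val_dir, fraction=0.1, seed=0):
    """Move a random fraction of the images into a validation folder."""
    os.makedirs(val_dir, exist_ok=True)
    images = sorted(os.listdir(train_dir))
    random.Random(seed).shuffle(images)  # fixed seed so reruns are repeatable
    n_val = max(1, int(len(images) * fraction))
    for name in images[:n_val]:
        shutil.move(os.path.join(train_dir, name), os.path.join(val_dir, name))

# Example usage (placeholder paths):
# split_validation("C:/path/to/dataset", "C:/path/to/dataset_val")
```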

5. You will also need to convert your dataset into fixed tiles. Open up codes/scripts/extract_subimgs_single.py.

Change crop_sz to 192 or 128 (I’d stick to the latter unless you have a beefy graphics card), input_folder to the full path of your image folder, and save_folder to where you want the tiles saved. If you’re using Windows, replace all the backslashes ("\") in the paths with double backslashes ("\\"), as "\" is an escape character in Python.

Example:

input_folder = 'C:\\Users\\Username\\BasicSR-master\\General100'

save_folder = 'C:\\Users\\Username\\BasicSR-master\\General100_tiles'

Double-click the script to run it. Repeat this process for the validation images.

6. You will need to batch-convert these HR tiles into 4x-downscaled LR versions. Download and open IrfanView (https://www.irfanview.com/), press B to open the batch-convert dialog, check “Use advanced options”, then click the “Advanced” button to access the resize settings (a 4x downscale means resizing to 25% on both dimensions). If you’re training specifically for low-quality images, you may also want to check “Change Color Depth” or add some JPG compression, noise or dithering; this also helps make the results smoother overall. Make sure each LR image has the same format and filename as its HR counterpart.

If you have GIMP installed, you can also download a batch manipulation plugin called BIMP and process the images that way.

7. Go to codes/options/train/train_ESRGAN.json and make the following changes:

name: change this to whatever you want, removing “debug” from the name

scale: Default is 4, meaning that the HR images are 4x larger than the LR images. I’d recommend leaving it as is, but if you do set it to something else, you will need to alter pretrain_model_G under path (see below).



train : { dataroot_HR: location of the training HR images

train : { dataroot_LR: location of the training LR images

val : { dataroot_HR: location of the HR validation images

val : { dataroot_LR: location of the LR validation images

path : { root: the location of the BasicSR directory

train : { HR_size: the size of the HR tiles. Leave at 128 if you’re getting “out of memory” errors.

train : { batch_size: You could lower this number if you’re getting “out of memory” errors, but that produces other errors on my Windows installation; lowering “n_workers” may be an alternative.

train : { val_freq: How often the model will be validated. Defaults to 5,000 iterations (5e3), so feel free to lower it.

path: {pretrain_model_G: The model ESRGAN will initialize from. If using a scale other than 4, set it to null (no quotes).



logger: { print_freq: How often the program will update you on how many iterations have passed. Set this to as low as 1 if you’d like.

logger: { save_checkpoint_freq: How often a new model will be saved. Set it to the same number as val_freq.

Again, make sure to use double backslashes in all paths.
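Putting the edits above together, the relevant parts of train_ESRGAN.json might look something like this. The name and paths are placeholders, and this is abbreviated - keep all the other keys the file already has and only change the values shown:

```json
{
  "name": "my_first_model",
  "scale": 4,
  "datasets": {
    "train": {
      "dataroot_HR": "C:\\Users\\Username\\BasicSR-master\\General100_tiles",
      "dataroot_LR": "C:\\Users\\Username\\BasicSR-master\\General100_tiles_LR",
      "HR_size": 128,
      "batch_size": 16
    },
    "val": {
      "dataroot_HR": "C:\\Users\\Username\\BasicSR-master\\val_tiles",
      "dataroot_LR": "C:\\Users\\Username\\BasicSR-master\\val_tiles_LR"
    }
  },
  "path": {
    "root": "C:\\Users\\Username\\BasicSR-master",
    "pretrain_model_G": "C:\\Users\\Username\\BasicSR-master\\experiments\\pretrained_models\\RRDB_PSNR_x4.pth"
  },
  "train": {
    "val_freq": 5e3
  },
  "logger": {
    "print_freq": 200,
    "save_checkpoint_freq": 5e3
  }
}
```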

8. Use the command line to navigate to the codes folder and run this command: python train.py -opt options/train/train_ESRGAN.json

You could also create a .bat file so you can just double click, though that does make it harder to find errors (as an error will close the command prompt instantly).
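For example, a minimal train.bat might look like this (the path is a placeholder for your own BasicSR location). Ending it with pause keeps the window open after an error, so you can still read the message:

```bat
rem train.bat - double-click to start training
rem Placeholder path: adjust to your BasicSR checkout
cd /d C:\Users\Username\BasicSR-master\codes
python train.py -opt options/train/train_ESRGAN.json
pause
```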

9. You can check the model’s progress by going into the “experiments” folder. Your older sessions will have “archived” in their names, while the latest session will not. Inside each folder is a “models” folder, where new models are saved, and the upscaled validation images appear in “val_images”. Once you’re satisfied, hit Ctrl-C in the terminal to stop training and copy one of the “G.pth” files to ESRGAN’s models folder.

The “training_state” folder contains .state files that let you resume progress after you’ve stopped. Just add -resume_state next time you run train.py, plus the path to the file.
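For example (the experiment name and iteration number here are placeholders for whatever is in your own training_state folder):

```shell
python train.py -opt options/train/train_ESRGAN.json -resume_state ../experiments/my_first_model/training_state/5000.state
```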

If you’re feeling brave, you can mess with the GAN weight, feature weight and pixel weight in train_ESRGAN.json, or initialize from a different model instead of RRDB_PSNR_x4.pth.