Motivation
Here is a simple example of the pattern: https://github.com/ProGamerGov/pytorch-old-tensorflow-models

It loads pretrained weights inside the model's constructor with:

```python
if pretrained:
    self.load_state_dict(torch.hub.load_state_dict_from_url(model_urls['inceptionv1'], progress=progress))
```

The official blog post about how to use this is here.
Hosting Weights
The major challenge is to publish the weights online. For that you need a public file hosting
service, such as Google Drive or OneDrive.
The second requirement is a link that points directly to the file, rather than to a preview webpage. For that you can search for "direct download link".
Currently my solution is:

- OneDrive for Business doesn't seem to have the embed option, so it doesn't work.
- Google Drive cannot share large files directly, though you can download them through links.
  - You can use this website to turn the shareable link into a direct download link, but large files are still not directly downloadable.
- So I use OneDrive Personal for this:
  - Put the files in the drive.
  - View the file online.
  - Select the **Embed** option in the right-click menu.
  - Copy the embed code into this website and follow the 3 steps.
  - Then you get the direct download link to put in your code.
After hosting the weights,

```python
self.load_state_dict(torch.hub.load_state_dict_from_url(model_urls['inceptionv1'], progress=progress))
```

will work perfectly!
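To make the pattern concrete, here is a minimal sketch of a model class wired up this way. The tiny two-layer network, the class name, and the URL in `model_urls` are all placeholders for illustration, not the real InceptionV1 or a real endpoint:

```python
import torch
import torch.nn as nn

# Placeholder URL -- replace with your own direct download link.
model_urls = {
    "inceptionv1": "https://example.com/direct/download/inceptionv1.pth",
}

class InceptionV1(nn.Module):
    """Toy stand-in for a real model that loads hosted weights."""
    def __init__(self, pretrained=False, progress=True):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(inplace=True),
        )
        if pretrained:
            # Fetches the file (cached under the torch hub dir) and loads it.
            self.load_state_dict(
                torch.hub.load_state_dict_from_url(
                    model_urls["inceptionv1"], progress=progress
                )
            )

    def forward(self, x):
        return self.features(x)

model = InceptionV1(pretrained=False)  # pretrained=True would hit the URL
```

With `pretrained=True`, `load_state_dict_from_url` downloads the file once into `$TORCH_HOME/hub/checkpoints` and reuses it on later calls.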
Loading Weights
To avoid downloading the weights multiple times, we can build a local cache system like this:

```python
import os
from os.path import join
import torch

def load_statedict_from_online(name="fc6"):
    torchhome = torch.hub._get_torch_home()
    ckpthome = join(torchhome, "checkpoints")
    os.makedirs(ckpthome, exist_ok=True)
    filepath = join(ckpthome, "upconvGAN_%s.pt" % name)
    # Download only if the checkpoint is not already cached on disk.
    if not os.path.exists(filepath):
        torch.hub.download_url_to_file(model_urls[name], filepath,
                                       hash_prefix=None, progress=True)
    SD = torch.load(filepath)
    return SD
```
A more advanced cache system could keep a dict in the module's global scope, so that once a model has been loaded it can be served from memory directly instead of being read from disk again.
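A minimal sketch of that idea, kept generic: `load_cached` memoizes any loader function in a module-level dict. The names `_cache`, `load_cached`, and `loader` are illustrative, not from the post; in practice `loader` would be the `load_statedict_from_online` function above:

```python
# Module-level in-memory cache: name -> already-loaded state dict.
_cache = {}

def load_cached(name, loader):
    """Return the object for `name`, calling `loader` only on the first request.

    `loader` is any function taking `name` and returning the loaded object
    (e.g. a state dict read from disk).
    """
    if name not in _cache:
        _cache[name] = loader(name)
    return _cache[name]
```

After the first call for a given `name`, subsequent calls return the same in-memory object without touching the disk or the network.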