My rule of thumb when using models from these public model zoos is to always inspect the source code in a separate container instance (or in Colab) first. Then either convert the model to a standardized format like ONNX, which might not be possible for some state-of-the-art models with custom ops afaik.
Or you could just define the model class in your own code first and load only the state dict into it. Avoid calling torch.load() on untrusted files (or at least pass weights_only=True), since it uses the pickle module implicitly, and it is possible to construct malicious pickle data that executes arbitrary code during unpickling.
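To see why unpickling untrusted data is dangerous, here is a minimal stdlib-only sketch (the class name and payload string are just illustrative): pickle calls whatever callable an object's __reduce__ returns, so an attacker can make deserialization itself run their code.

```python
import pickle
import builtins

class Malicious:
    # pickle serializes the (callable, args) pair returned by __reduce__
    # and calls it during unpickling -- here exec() runs attacker-chosen code.
    def __reduce__(self):
        return (exec, ("import builtins; builtins.PWNED = True",))

blob = pickle.dumps(Malicious())

# The victim only needs to unpickle the blob; the Malicious class does not
# even have to exist on their side for the payload to run.
pickle.loads(blob)
print(getattr(builtins, "PWNED", False))
```

The same mechanism applies to a .pt/.pth checkpoint: it is a pickle stream, so loading it can execute whatever the file's author embedded, not just restore tensors.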