The following issues were found:

torch/lib/libshm/socket.h
4 issues
strcpy - Does not check for buffer overflows when copying to destination [MS-banned]
Security

Line: 39 Column: 5 CWE codes: 120
Suggestion: Consider using snprintf, strcpy_s, or strlcpy (warning: strncpy easily misused)

  struct sockaddr_un prepare_address(const char *path) {
    struct sockaddr_un address;
    address.sun_family = AF_UNIX;
    strcpy(address.sun_path, path);
    return address;
  }

  // Implemented based on https://man7.org/linux/man-pages/man7/unix.7.html
  size_t address_length(struct sockaddr_un address) {


Reported by FlawFinder.

char - Statically-sized arrays can be improperly restricted, leading to potential overflows or other issues
Security

Line: 152 Column: 5 CWE codes: 119 120
Suggestion: Perform bounds checking, use functions that limit length, or ensure that the size is larger than the maximum possible length

  }

  void register_allocation(AllocInfo &info) {
    char buffer[3] = {0, 0, 0};
    send(&info, sizeof(info));
    recv(buffer, 2);
    if (strcmp(buffer, "OK") != 0)
      throw std::runtime_error("Shared memory manager didn't respond with an OK");
  }


Reported by FlawFinder.
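The snippet already limits the receive to 2 bytes and pre-zeroes the 3-byte buffer; a defensive variant of the check can avoid relying on NUL termination entirely (helper name and signature are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Sketch: compare a fixed-size wire response with memcmp and an explicit
// length, so correctness does not depend on the buffer being NUL-terminated.
inline bool is_ok_response(const char *buffer, size_t len) {
  return len == 2 && std::memcmp(buffer, "OK", 2) == 0;
}
```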

strlen - Does not handle strings that are not \0-terminated; if given one it may perform an over-read (it could cause a crash if unprotected)
Security

Line: 45 Column: 46 CWE codes: 126

  // Implemented based on https://man7.org/linux/man-pages/man7/unix.7.html
  size_t address_length(struct sockaddr_un address) {
    return offsetof(sockaddr_un, sun_path) + strlen(address.sun_path) + 1;
  }

  void recv(void *_buffer, size_t num_bytes) {
    char *buffer = (char*)_buffer;
    size_t bytes_received = 0;


Reported by FlawFinder.
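A bounded variant caps the scan at the size of `sun_path`, so a missing terminator cannot cause an over-read (a sketch; the `_bounded` name is hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <sys/un.h>

// Sketch: strnlen never reads past sizeof(sun_path), unlike strlen, so an
// unterminated sun_path cannot trigger an over-read.
inline size_t address_length_bounded(const struct sockaddr_un &address) {
  return offsetof(struct sockaddr_un, sun_path)
      + strnlen(address.sun_path, sizeof(address.sun_path)) + 1;
}
```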

read - Check buffer boundaries if used in a loop including recursive loops
Security

Line: 58 Column: 52 CWE codes: 120 20

    while (bytes_received < num_bytes) {
      SYSCHECK_ERR_RETURN_NEG1(poll(&pfd, 1, 1000));
      if (pfd.revents & POLLIN) {
        SYSCHECK_ERR_RETURN_NEG1(step_received = ::read(socket_fd, buffer, num_bytes - bytes_received));
        if (step_received == 0)
          throw std::runtime_error("Other end has closed the connection");
        bytes_received += step_received;
        buffer += step_received;
      } else if (pfd.revents & (POLLERR | POLLHUP)) {


Reported by FlawFinder.
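The loop above is already bounded; the pattern it follows can be shown in isolation (a sketch with an illustrative helper name, without the poll/SYSCHECK machinery):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <unistd.h>

// Sketch of the bounded-read pattern: every read() is capped by the space
// left in the caller's buffer, so even many partial reads cannot write past
// num_bytes.
inline bool read_exact(int fd, char *buffer, size_t num_bytes) {
  size_t received = 0;
  while (received < num_bytes) {
    ssize_t step = ::read(fd, buffer + received, num_bytes - received);
    if (step <= 0)
      return false;  // error, or peer closed before num_bytes arrived
    received += static_cast<size_t>(step);
  }
  return true;
}
```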

torch/lib/libshm/core.cpp
4 issues
execl - This causes a new program to execute and is difficult to use safely
Security

Line: 36 Column: 5 CWE codes: 78
Suggestion: try using a library call that implements the same functionality if available

    SYSCHECK_ERR_RETURN_NEG1(close(pipe_ends[0]));
    SYSCHECK_ERR_RETURN_NEG1(dup2(pipe_ends[1], 1)); // Replace stdout
    SYSCHECK_ERR_RETURN_NEG1(close(pipe_ends[1]));
    execl(manager_executable_path.c_str(), "torch_shm_manager", NULL);

    std::string msg("ERROR: execl failed: ");
    msg += std::strerror(errno);
    msg += '\n';
    write(1, msg.c_str(), msg.size());


Reported by FlawFinder.
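A hedged sketch of the suggestion: `posix_spawn()` is a library call that replaces a fork+execl pair and reports exec failure through its return value. The helper and its use of `/bin/true` below are illustrative, not how `torch_shm_manager` is actually launched:

```cpp
#include <cassert>
#include <spawn.h>
#include <sys/wait.h>

extern char **environ;

// Sketch: spawn a program and wait for it, surfacing spawn failure as a
// return code instead of writing to a redirected stdout after execl fails.
inline int spawn_and_wait(const char *path, char *const argv[]) {
  pid_t pid;
  int rc = posix_spawn(&pid, path, nullptr, nullptr, argv, environ);
  if (rc != 0)
    return -1;  // spawn failed; rc holds an errno value
  int status = 0;
  if (waitpid(pid, &status, 0) < 0)
    return -1;
  return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```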

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 21 Column: 3 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

  if (len >= sizeof(info.filename)) {
    throw std::runtime_error("MapAllocatorContext_filename too long");
  }
  memcpy(info.filename, filename, len + 1);
  return info;
}

void start_manager() {
  std::array<int, 2> pipe_ends;


Reported by FlawFinder.

strlen - Does not handle strings that are not \0-terminated; if given one it may perform an over-read (it could cause a crash if unprotected)
Security

Line: 17 Column: 16 CWE codes: 126

  AllocInfo info = {0};
  info.pid = getpid();
  info.free = false;
  size_t len = strlen(filename);
  if (len >= sizeof(info.filename)) {
    throw std::runtime_error("MapAllocatorContext_filename too long");
  }
  memcpy(info.filename, filename, len + 1);
  return info;


Reported by FlawFinder.
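The copy above is already length-checked; switching the length computation itself to `strnlen` also bounds the scan, covering both this finding and the `memcpy` one (a sketch; names and sizes are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <stdexcept>

// Sketch: strnlen caps the scan at the destination size, so the check and
// the copy both stay within dst.
template <size_t N>
void set_filename(char (&dst)[N], const char *filename) {
  size_t len = strnlen(filename, N);
  if (len >= N)
    throw std::runtime_error("MapAllocatorContext_filename too long");
  std::memcpy(dst, filename, len + 1);  // len + 1 <= N, NUL included
}
```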

read - Check buffer boundaries if used in a loop including recursive loops
Security

Line: 51 Column: 29 CWE codes: 120 20

  std::array<char, MAX_BUFFER_SIZE> buffer;
  std::string handle;
  while(handle.empty() || handle.back() != '\n') {
    const auto bytes_read = read(pipe_ends[0], buffer.data(), buffer.size());
    SYSCHECK_ERR_RETURN_NEG1(bytes_read);
    if (bytes_read == 0) {
      break;
    }
    handle.append(buffer.data(), bytes_read);


Reported by FlawFinder.
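Each `read()` above is bounded by `buffer.size()`; a variant can additionally cap the accumulated string so a peer that never sends `'\n'` cannot grow `handle` without limit (the cap value and helper name are illustrative):

```cpp
#include <cassert>
#include <string>
#include <unistd.h>

// Sketch: bounded reads plus a cap on total accumulated bytes.
inline bool read_handle_line(int fd, std::string &handle,
                             size_t max_total = 4096) {
  char buffer[256];
  while (handle.empty() || handle.back() != '\n') {
    if (handle.size() >= max_total)
      return false;  // refuse unbounded growth
    ssize_t bytes_read = ::read(fd, buffer, sizeof(buffer));
    if (bytes_read <= 0)
      return false;  // error or EOF before a full line arrived
    handle.append(buffer, static_cast<size_t>(bytes_read));
  }
  return true;
}
```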

torch/utils/data/sharding.py
4 issues
Missing module docstring
Error

Line: 1 Column: 1

import torch.utils.data.graph


def apply_sharding(datapipe, num_of_instances, instance_id):
    graph = torch.utils.data.graph.traverse(datapipe)

    def traverse_graph(graph):
        results = set()
        for datapipe, sub_graph in graph.items():


Reported by Pylint.

Missing function or method docstring
Error

Line: 4 Column: 1

import torch.utils.data.graph


def apply_sharding(datapipe, num_of_instances, instance_id):
    graph = torch.utils.data.graph.traverse(datapipe)

    def traverse_graph(graph):
        results = set()
        for datapipe, sub_graph in graph.items():


Reported by Pylint.
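Both docstring findings are cleared by a one-line docstring at the top of the module and on the function; the wording below is illustrative and the body is stubbed out:

```python
"""Utilities for applying sharding across a DataPipe graph (illustrative wording)."""


def apply_sharding(datapipe, num_of_instances, instance_id):
    """Shard ``datapipe`` across ``num_of_instances`` workers (illustrative)."""
    return (datapipe, num_of_instances, instance_id)  # real body omitted
```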

Line too long (132/100)
Error

Line: 23 Column: 1

            if pipe.is_shardable():
                if hasattr(pipe, 'apply_sharding'):
                    if already_applied_to is not None:
                        raise RuntimeError('This implementation of sharding can be only applied once per instance of DataPipeline.',
                                           'Already applied to', already_applied_to, 'while trying to apply to', pipe)
                    pipe.apply_sharding(num_of_instances, instance_id)
                    already_applied_to = pipe


Reported by Pylint.

Line too long (118/100)
Error

Line: 24 Column: 1

                if hasattr(pipe, 'apply_sharding'):
                    if already_applied_to is not None:
                        raise RuntimeError('This implementation of sharding can be only applied once per instance of DataPipeline.',
                                           'Already applied to', already_applied_to, 'while trying to apply to', pipe)
                    pipe.apply_sharding(num_of_instances, instance_id)
                    already_applied_to = pipe


Reported by Pylint.

torch/utils/data/backward_compatibility.py
4 issues
Instance of 'WorkerInfo' has no 'num_workers' member
Error

Line: 6 Column: 19

def worker_init_fn(worker_id):
    info = torch.utils.data.get_worker_info()
    num_workers = info.num_workers
    datapipe = info.dataset
    torch.utils.data.sharding.apply_sharding(datapipe, num_workers, worker_id)


Reported by Pylint.

Instance of 'WorkerInfo' has no 'dataset' member
Error

Line: 7 Column: 16

def worker_init_fn(worker_id):
    info = torch.utils.data.get_worker_info()
    num_workers = info.num_workers
    datapipe = info.dataset
    torch.utils.data.sharding.apply_sharding(datapipe, num_workers, worker_id)


Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

import torch.utils.data.sharding


def worker_init_fn(worker_id):
    info = torch.utils.data.get_worker_info()
    num_workers = info.num_workers
    datapipe = info.dataset
    torch.utils.data.sharding.apply_sharding(datapipe, num_workers, worker_id)


Reported by Pylint.

Missing function or method docstring
Error

Line: 4 Column: 1

import torch.utils.data.sharding


def worker_init_fn(worker_id):
    info = torch.utils.data.get_worker_info()
    num_workers = info.num_workers
    datapipe = info.dataset
    torch.utils.data.sharding.apply_sharding(datapipe, num_workers, worker_id)


Reported by Pylint.

torch/nn/intrinsic/quantized/modules/__init__.py
4 issues
Unable to import '__init__.linear_relu'
Error

Line: 1 Column: 1

from .linear_relu import LinearReLU
from .conv_relu import ConvReLU1d, ConvReLU2d, ConvReLU3d
from .bn_relu import BNReLU2d, BNReLU3d

__all__ = [
    'LinearReLU',
    'ConvReLU1d',
    'ConvReLU2d',
    'ConvReLU3d',


Reported by Pylint.

Unable to import '__init__.conv_relu'
Error

Line: 2 Column: 1

from .linear_relu import LinearReLU
from .conv_relu import ConvReLU1d, ConvReLU2d, ConvReLU3d
from .bn_relu import BNReLU2d, BNReLU3d

__all__ = [
    'LinearReLU',
    'ConvReLU1d',
    'ConvReLU2d',
    'ConvReLU3d',


Reported by Pylint.

Unable to import '__init__.bn_relu'
Error

Line: 3 Column: 1

from .linear_relu import LinearReLU
from .conv_relu import ConvReLU1d, ConvReLU2d, ConvReLU3d
from .bn_relu import BNReLU2d, BNReLU3d

__all__ = [
    'LinearReLU',
    'ConvReLU1d',
    'ConvReLU2d',
    'ConvReLU3d',


Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

from .linear_relu import LinearReLU
from .conv_relu import ConvReLU1d, ConvReLU2d, ConvReLU3d
from .bn_relu import BNReLU2d, BNReLU3d

__all__ = [
    'LinearReLU',
    'ConvReLU1d',
    'ConvReLU2d',
    'ConvReLU3d',


Reported by Pylint.

torch/nn/qat/modules/__init__.py
4 issues
Unable to import '__init__.linear'
Error

Line: 1 Column: 1

from .linear import Linear
from .conv import Conv2d
from .conv import Conv3d

__all__ = [
    "Linear",
    "Conv2d",
    "Conv3d",
]


Reported by Pylint.

Unable to import '__init__.conv'
Error

Line: 2 Column: 1

from .linear import Linear
from .conv import Conv2d
from .conv import Conv3d

__all__ = [
    "Linear",
    "Conv2d",
    "Conv3d",
]


Reported by Pylint.

Unable to import '__init__.conv'
Error

Line: 3 Column: 1

from .linear import Linear
from .conv import Conv2d
from .conv import Conv3d

__all__ = [
    "Linear",
    "Conv2d",
    "Conv3d",
]


Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

from .linear import Linear
from .conv import Conv2d
from .conv import Conv3d

__all__ = [
    "Linear",
    "Conv2d",
    "Conv3d",
]


Reported by Pylint.

torch/quantization/fx/__init__.py
4 issues
Unable to import '__init__.prepare'
Error

Line: 1 Column: 1

from .prepare import prepare
from .convert import convert
from .fuse import Fuser


Reported by Pylint.

Unable to import '__init__.convert'
Error

Line: 2 Column: 1

from .prepare import prepare
from .convert import convert
from .fuse import Fuser


Reported by Pylint.

Unable to import '__init__.fuse'
Error

Line: 3 Column: 1

from .prepare import prepare
from .convert import convert
from .fuse import Fuser


Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

from .prepare import prepare
from .convert import convert
from .fuse import Fuser


Reported by Pylint.

torch/fx/experimental/unification/multipledispatch/core.py
4 issues
Attempted relative import beyond top-level package
Error

Line: 4 Column: 1

import inspect
import sys

from .dispatcher import Dispatcher, MethodDispatcher

global_namespace = dict()  # type: ignore[var-annotated]


def dispatch(*types, **kwargs):


Reported by Pylint.

Using deprecated method getargspec()
Error

Line: 73 Column: 20

        return signature.parameters.get('self', None) is not None
    else:
        if sys.version_info.major < 3:
            spec = inspect.getargspec(func)
        else:
            spec = inspect.getfullargspec(func)  # type: ignore[union-attr, assignment]
        return spec and spec.args and spec.args[0] == 'self'


Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

import inspect
import sys

from .dispatcher import Dispatcher, MethodDispatcher

global_namespace = dict()  # type: ignore[var-annotated]


def dispatch(*types, **kwargs):


Reported by Pylint.

Unnecessary "else" after "return"
Error

Line: 68 Column: 5

    Note that this has to work as the method is defined but before the class is
    defined.  At this stage methods look like functions.
    """
    if hasattr(inspect, "signature"):
        signature = inspect.signature(func)
        return signature.parameters.get('self', None) is not None
    else:
        if sys.version_info.major < 3:
            spec = inspect.getargspec(func)


Reported by Pylint.
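A sketch combining both fixes for this function: `inspect.getfullargspec` replaces the deprecated `getargspec` (removed in Python 3.11), and dropping the `else` after `return` flattens the branch Pylint flags:

```python
import inspect


def ismethod(func):
    """Return True if func's first positional parameter is named 'self'."""
    if hasattr(inspect, "signature"):
        signature = inspect.signature(func)
        return signature.parameters.get('self', None) is not None
    # Reached only on interpreters without inspect.signature.
    spec = inspect.getfullargspec(func)
    return bool(spec and spec.args and spec.args[0] == 'self')
```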

torch/package/analyze/is_from_package.py
4 issues
Attempted relative import beyond top-level package
Error

Line: 4 Column: 1

from types import ModuleType
from typing import Any

from .._mangling import is_mangled


def is_from_package(obj: Any) -> bool:
    """
    Return whether an object was loaded from a package.


Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

from types import ModuleType
from typing import Any

from .._mangling import is_mangled


def is_from_package(obj: Any) -> bool:
    """
    Return whether an object was loaded from a package.


Reported by Pylint.

Unnecessary "else" after "return"
Error

Line: 13 Column: 5

    Note: packaged objects from externed modules will return ``False``.
    """
    if type(obj) == ModuleType:
        return is_mangled(obj.__name__)
    else:
        return is_mangled(type(obj).__module__)


Reported by Pylint.

Using type() instead of isinstance() for a typecheck.
Error

Line: 13 Column: 8

    Note: packaged objects from externed modules will return ``False``.
    """
    if type(obj) == ModuleType:
        return is_mangled(obj.__name__)
    else:
        return is_mangled(type(obj).__module__)


Reported by Pylint.
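A sketch of both fixes on this function: `isinstance()` instead of a `type()` comparison, and no `else` after `return`. The `is_mangled` below is a stand-in for the real helper in `torch.package._mangling`:

```python
from types import ModuleType


def is_mangled(name):
    # Stand-in for torch.package._mangling.is_mangled; prefix is illustrative.
    return name.startswith('<torch_package_')


def is_from_package(obj):
    if isinstance(obj, ModuleType):
        return is_mangled(obj.__name__)
    return is_mangled(type(obj).__module__)
```

One behavioral note: `isinstance()` also accepts subclasses of `ModuleType`, which the `type() ==` check rejects; for module objects that is usually the intended behavior.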

torch/utils/tensorboard/_convert_np.py
4 issues
Argument name "x" doesn't conform to snake_case naming style
Error

Line: 8 Column: 1

import torch


def make_np(x):
    """
    Args:
      x: An instance of torch tensor or caffe blob name

    Returns:


Reported by Pylint.

Argument name "x" doesn't conform to snake_case naming style
Error

Line: 28 Column: 1

        'Got {}, but numpy array, torch tensor, or caffe2 blob name are expected.'.format(type(x)))


def _prepare_pytorch(x):
    if isinstance(x, torch.autograd.Variable):
        x = x.data
    x = x.cpu().numpy()
    return x



Reported by Pylint.

Argument name "x" doesn't conform to snake_case naming style
Error

Line: 35 Column: 1

    return x


def _prepare_caffe2(x):
    from caffe2.python import workspace
    x = workspace.FetchBlob(x)
    return x


Reported by Pylint.

Import outside toplevel (caffe2.python.workspace)
Error

Line: 36 Column: 5


def _prepare_caffe2(x):
    from caffe2.python import workspace
    x = workspace.FetchBlob(x)
    return x


Reported by Pylint.