The following issues were found:

tools/autograd/gen_python_functions.py
73 issues
Attempted relative import beyond top-level package
Error

Line: 38 Column: 1

              import re
import yaml

from .gen_trace_type import should_trace

from tools.codegen.code_template import CodeTemplate
from tools.codegen.api import cpp
from tools.codegen.api.types import CppSignatureGroup
from tools.codegen.api.python import (PythonArgument, PythonSignature,

            

Reported by Pylint.

Redefining name 'signature' from outer scope (line 43)
Error

Line: 121 Column: 5

                      if skip_regex.match(name):
            return False

    signature = str(f.func)
    for pattern in SKIP_PYTHON_BINDINGS_SIGNATURES:
        if pattern == signature:
            return False

    return True

            

Reported by Pylint.
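The local `signature` shadows a module-level name of the same spelling defined earlier in the analyzed file. Renaming the local is the usual fix; a minimal sketch with hypothetical names (`should_generate_py_binding`, the skip entry) standing in for the real ones:

```python
# hypothetical skip list standing in for SKIP_PYTHON_BINDINGS_SIGNATURES
SKIP_PYTHON_BINDINGS_SIGNATURES = {"add(Tensor self, Tensor other)"}


def should_generate_py_binding(func_signature: str) -> bool:
    # a distinct local name (func_signature) avoids shadowing a
    # module-level `signature` from the outer scope
    return func_signature not in SKIP_PYTHON_BINDINGS_SIGNATURES
```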

Lambda may not be necessary
Error

Line: 202 Column: 44

                      if pred(pair.function):
            grouped[pair.function.func.name.name].append(pair)

    for name in sorted(grouped.keys(), key=lambda x: str(x)):
        overloads = grouped[name]
        py_methods.append(method_impl(name, module, overloads, method=method))
        py_method_defs.append(method_def(name, module, overloads, method=method))
        py_forwards.extend(forward_decls(name, overloads, method=method))


            

Reported by Pylint.
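`key=lambda x: str(x)` just forwards its argument to `str`, so the function itself can be passed directly. A minimal sketch:

```python
grouped = {"zeros": [2], "add": [1], "mul": [3]}

# `key=str` is equivalent to `key=lambda x: str(x)` and satisfies
# Pylint's unnecessary-lambda check
ordered = sorted(grouped.keys(), key=str)
```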

Redefining built-in 'type'
Error

Line: 269 Column: 13

                      for param in params:
            if param == '*':
                continue
            type, name = param.split(' ')
            types[name] = type
        # if the name in the call is not in the parameter list, assume it's
        # a literal Scalar
        rearranged_types = ', '.join(types.get(arg, 'Scalar') for arg in call_args)
        return f'{opname}({rearranged_types})'

            

Reported by Pylint.
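Renaming the loop variable clears the warning without changing behavior; a sketch assuming the same `"Type name"` parameter format used in the snippet:

```python
params = ["Tensor self", "Scalar alpha"]
types = {}
for param in params:
    if param == "*":
        continue
    # `param_type` instead of `type` leaves the built-in untouched
    param_type, name = param.split(" ")
    types[name] = param_type
```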

Use of unsafe yaml load. Allows instantiation of arbitrary objects. Consider yaml.safe_load().
Security cryptography

Line: 285
Suggestion: https://bandit.readthedocs.io/en/latest/plugins/b506_yaml_load.html

                  results: List[PythonSignatureNativeFunctionPair] = []

    with open(deprecated_yaml_path, 'r') as f:
        deprecated_defs = yaml.load(f, Loader=YamlLoader)

    for deprecated in deprecated_defs:
        _, params = split_name_params(deprecated['name'])
        aten_name, call_args = split_name_params(deprecated['aten'])


            

Reported by Bandit.
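`yaml.safe_load` builds only plain Python objects (dicts, lists, strings, numbers), whereas `yaml.load` with an unsafe loader can instantiate arbitrary Python objects from tagged nodes. Since this deprecated-signatures file is plain mappings, the swap should be a drop-in replacement; a sketch with made-up data:

```python
import yaml  # PyYAML

deprecated_yaml = """
- name: add(Tensor self, Tensor other)
  aten: add(self, other)
"""

# safe_load rejects python/object tags instead of executing them
deprecated_defs = yaml.safe_load(deprecated_yaml)
```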

Redefining name 'signature' from outer scope (line 43)
Error

Line: 501 Column: 9

                  signatures: List[str] = []
    dispatch: List[str] = []
    for overload_index, overload in enumerate(grouped_overloads):
        signature = overload.signature.signature_str()
        signatures.append(f'{cpp_string(str(signature))},')
        dispatch_body = emit_dispatch_case(overload, namedtuple_typenames)
        dispatch.append(
            PY_VARIABLE_CASE.substitute(overload_index=overload_index, body=dispatch_body)
            if not is_singleton else dispatch_body)

            

Reported by Pylint.

TODO: should use some canonical form instead of 'str(arg.type)' - see comments
Error

Line: 778 Column: 3

                      args1, args2 = s1.arguments(skip_outputs=True), s2.arguments(skip_outputs=True)
        if len(args1) != len(args2):
            return False
        # TODO: should use some canonical form instead of 'str(arg.type)' - see comments
        # above. The old codegen used the deprecated 'dynamic_type(arg.type)', which
        # ignores the optional annotation, i.e. 'Scalar' and 'Scalar?'.
        equal = all(arg1.type == arg2.type for arg1, arg2 in zip(args1, args2))
        smaller_or_equal = all(str(arg1.type) == str(arg2.type)
                               or is_arg_smaller(arg1.type, arg2.type)

            

Reported by Pylint.

TODO: Checking `ps.method and ('requires_grad' in parser_outputs)` is a hacky
Error

Line: 851 Column: 3

                      lambda_args = ', '.join(lambda_arg_exprs.exprs)

        # scatter fields
        # TODO: Checking `ps.method and ('requires_grad' in parser_outputs)` is a hacky
        #       solution for enabling the 'requires_grad' argument for tensor methods
        #       new_full, new_empty, and new_zeros. A much better but more difficult to
        #       implement solution involves refactoring according to Ed's description here:
        #       https://github.com/pytorch/pytorch/issues/36455#issuecomment-614767589
        need_set_requires_grad = ps.tensor_options_args and (not has_tensor_options(f) or (

            

Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

              # Generates Python bindings for ATen functions
#
# The bindings are generated as methods on python_variable or functions on the
# torch._C._nn. torch._C._fft, torch._C._linalg or torch._C._special objects.
#

# Code tries to stick to the following rules:
#
# - templates should be colocated with the functions that use them.

            

Reported by Pylint.

third party import "from tools.codegen.code_template import CodeTemplate" should be placed before "from .gen_trace_type import should_trace"
Error

Line: 40 Column: 1

              
from .gen_trace_type import should_trace

from tools.codegen.code_template import CodeTemplate
from tools.codegen.api import cpp
from tools.codegen.api.types import CppSignatureGroup
from tools.codegen.api.python import (PythonArgument, PythonSignature,
                                      PythonSignatureDeprecated,
                                      PythonSignatureGroup,

            

Reported by Pylint.
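Pylint's `wrong-import-order` expects standard-library imports first, then third-party, then first-party and relative imports, with a blank line between groups. A sketch of the expected grouping for the imports shown above:

```python
# standard library first
import re

# third party next (PyYAML)
import yaml

# first-party and relative imports come last, e.g.:
#   from tools.codegen.code_template import CodeTemplate
#   from .gen_trace_type import should_trace
```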

tools/nightly.py
73 issues
Value 'tempfile.TemporaryDirectory' is unsubscriptable
Error

Line: 327 Column: 34

              

@timed("Installing pytorch nightly binaries")
def pytorch_install(url: str) -> tempfile.TemporaryDirectory[str]:
    """"Install pytorch into a temporary directory"""
    pytdir = tempfile.TemporaryDirectory()
    cmd = ["conda", "create", "--yes", "--no-deps", "--prefix", pytdir.name, url]
    p = subprocess.run(cmd, check=True)
    return pytdir

            

Reported by Pylint.
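`tempfile.TemporaryDirectory` only became subscriptable at runtime in Python 3.9, which is why Pylint (and older interpreters) reject the annotation. Postponed annotation evaluation keeps the type hint without evaluating it; a sketch:

```python
from __future__ import annotations  # annotations stay strings, never evaluated at runtime

import tempfile


def make_scratch_dir() -> tempfile.TemporaryDirectory[str]:
    # the subscripted annotation is now safe even where
    # TemporaryDirectory is not subscriptable at runtime
    return tempfile.TemporaryDirectory()


scratch = make_scratch_dir()
scratch.cleanup()
```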

Catching too general exception Exception
Error

Line: 167 Column: 12

                      logging_rotate()
        print(f"log file: {log_file}")
        yield root_logger
    except Exception as e:
        logging.exception("Fatal exception")
        logging_record_exception(e)
        print(f"log file: {log_file}")
        sys.exit(1)
    except BaseException as e:

            

Reported by Pylint.
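Catching `Exception` hides programming errors, and `BaseException` additionally swallows `KeyboardInterrupt` and `SystemExit`. Narrowing the clause to the failures the handler can actually deal with clears the check; a generic sketch around a subprocess call (not the script's actual handler):

```python
import subprocess


def try_run(cmd):
    try:
        return subprocess.run(cmd, check=True)
    except (subprocess.CalledProcessError, FileNotFoundError):
        # only the failure modes expected from launching a command;
        # anything else propagates with a full traceback
        return None
```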

Catching too general exception BaseException
Error

Line: 172 Column: 12

                      logging_record_exception(e)
        print(f"log file: {log_file}")
        sys.exit(1)
    except BaseException as e:
        # You could logging.debug here to suppress the backtrace
        # entirely, but there is no reason to hide it from technically
        # savvy users.
        logging.info("", exc_info=True)
        logging_record_exception(e)

            

Reported by Pylint.

Using the global statement
Error

Line: 230 Column: 13

                  def dec(f: F) -> F:
        @functools.wraps(f)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            global LOGGER
            logger = cast(logging.Logger, LOGGER)
            logger.info(prefix)
            with timer(logger, prefix):
                return f(*args, **kwargs)


            

Reported by Pylint.
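The `global LOGGER` lookup inside the decorator can be avoided by passing the logger in explicitly, or resolving it at call time with `logging.getLogger`. A sketch of the parameter-injection variant (simplified from the decorator in the snippet):

```python
import functools
import logging


def timed(prefix, logger=None):
    # taking the logger as an argument removes the need for `global LOGGER`
    logger = logger or logging.getLogger(__name__)

    def dec(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            logger.info(prefix)
            return f(*args, **kwargs)
        return wrapper
    return dec


@timed("Adding numbers")
def add(a, b):
    return a + b
```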

Unused variable 'p'
Error

Line: 319 Column: 9

                  if not existing_env:
        # first remove previous pytorch-deps env
        cmd = ["conda", "env", "remove", "--yes"] + env_opts
        p = subprocess.run(cmd, check=True)
    # install new deps
    inst_opt = "install" if existing_env else "create"
    cmd = ["conda", inst_opt, "--yes", "--no-deps"] + env_opts + deps
    p = subprocess.run(cmd, check=True)


            

Reported by Pylint.
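With `check=True`, `subprocess.run` already raises `CalledProcessError` on failure, so binding the result to an unused `p` adds nothing. Calling it purely for its side effect satisfies Pylint; a sketch:

```python
import subprocess
import sys

# no `p = ...` binding: the call is made for its side effect, and
# check=True raises CalledProcessError if the command fails
subprocess.run([sys.executable, "-c", "print('deps installed')"], check=True)
```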

Unused variable 'p'
Error

Line: 331 Column: 5

                  """"Install pytorch into a temporary directory"""
    pytdir = tempfile.TemporaryDirectory()
    cmd = ["conda", "create", "--yes", "--no-deps", "--prefix", pytdir.name, url]
    p = subprocess.run(cmd, check=True)
    return pytdir


def _site_packages(dirname: str, platform: str) -> str:
    if platform.startswith("win"):

            

Reported by Pylint.

Unused variable 'p'
Error

Line: 390 Column: 5

                  """Get's the nightly version and then checks it out."""
    nightly_version = _nightly_version(spdir)
    cmd = ["git", "checkout", "-b", branch, nightly_version]
    p = subprocess.run(cmd, check=True)


@timed("Pulling nightly PyTorch")
def pull_nightly_version(spdir: str) -> None:
    """Fetches the nightly version and then merges it ."""

            

Reported by Pylint.

Unused variable 'p'
Error

Line: 398 Column: 5

                  """Fetches the nightly version and then merges it ."""
    nightly_version = _nightly_version(spdir)
    cmd = ["git", "merge", nightly_version]
    p = subprocess.run(cmd, check=True)


def _get_listing_linux(source_dir: str) -> List[str]:
    listing = glob.glob(os.path.join(source_dir, "*.so"))
    listing.extend(glob.glob(os.path.join(source_dir, "lib", "*.so")))

            

Reported by Pylint.

Catching too general exception Exception
Error

Line: 513 Column: 16

                  else:
        try:
            _link_files(listing, source_dir, target_dir)
        except Exception:
            _copy_files(listing, source_dir, target_dir)


def _available_envs() -> Dict[str, str]:
    cmd = ["conda", "env", "list"]

            

Reported by Pylint.

Using the global statement
Error

Line: 649 Column: 5

              
def main(args: Optional[Sequence[str]] = None) -> None:
    """Main entry point"""
    global LOGGER
    p = make_parser()
    ns = p.parse_args(args)
    ns.branch = getattr(ns, "branch", None)
    status = check_in_repo()
    status = status or check_branch(ns.subcmd, ns.branch)

            

Reported by Pylint.

aten/src/ATen/native/vulkan/ops/Tensor.cpp
73 issues
access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 43 Column: 24 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                };
}

vTensor::Access::Flags access(
    const VkAccessFlags vk_access) {
  vTensor::Access::Flags access = 0u;

  constexpr VkAccessFlags kRead =
      VK_ACCESS_HOST_READ_BIT |

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 61 Column: 5 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                    VK_ACCESS_TRANSFER_WRITE_BIT;

  if (vk_access & kRead) {
    access |= vTensor::Access::Read;
  }

  if (vk_access & kWrite) {
    access |= vTensor::Access::Write;
  }

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 65 Column: 5 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                }

  if (vk_access & kWrite) {
    access |= vTensor::Access::Write;
  }

  return access;
}


            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 68 Column: 10 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                  access |= vTensor::Access::Write;
  }

  return access;
}

VkAccessFlags vk_access(
    const vTensor::Stage::Flags stage,
    const vTensor::Access::Flags access) {

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 73 Column: 34 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

              
VkAccessFlags vk_access(
    const vTensor::Stage::Flags stage,
    const vTensor::Access::Flags access) {
  VkAccessFlags vk_access = 0u;

  if (access & vTensor::Access::Read) {
    if (stage & vTensor::Stage::Compute) {
      vk_access |= VK_ACCESS_SHADER_READ_BIT;

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 76 Column: 7 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                  const vTensor::Access::Flags access) {
  VkAccessFlags vk_access = 0u;

  if (access & vTensor::Access::Read) {
    if (stage & vTensor::Stage::Compute) {
      vk_access |= VK_ACCESS_SHADER_READ_BIT;
    }

    if (stage & vTensor::Stage::Host) {

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 90 Column: 7 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                  }
  }

  if (access & vTensor::Access::Write) {
    if (stage & vTensor::Stage::Compute) {
      vk_access |= VK_ACCESS_SHADER_WRITE_BIT;
    }

    if (stage & vTensor::Stage::Host) {

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 109 Column: 34 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

              
VkImageLayout vk_layout(
    const vTensor::Stage::Flags stage,
    const vTensor::Access::Flags access) {
  switch (stage) {
    case vTensor::Stage::Compute:
      switch (access) {
        case vTensor::Access::Read:
          return VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 112 Column: 15 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                  const vTensor::Access::Flags access) {
  switch (stage) {
    case vTensor::Stage::Compute:
      switch (access) {
        case vTensor::Access::Read:
          return VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;

        default:
          return VK_IMAGE_LAYOUT_GENERAL;

            

Reported by FlawFinder.

access - This usually indicates a security flaw. If an attacker can change anything along the path between the call to access() and the file's actual use (e.g., by moving files), the attacker can exploit the race condition
Security

Line: 121 Column: 15 CWE codes: 362/367!
Suggestion: Set up the correct permissions (e.g., using setuid()) and try to open the file directly

                    } break;

    case vTensor::Stage::Transfer:
      switch (access) {
        case vTensor::Access::Read:
          return VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;

        case vTensor::Access::Write:
          return VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;

            

Reported by FlawFinder.

torch/fx/operator_schemas.py
73 issues
function already defined line 22
Error

Line: 26 Column: 5

                      pass
    signatures.append(inspect.signature(nonzero))

    def nonzero(self, *, as_tuple : bool):  # type: ignore[no-redef]
        pass
    signatures.append(inspect.signature(nonzero))

    return signatures


            

Reported by Pylint.

Module 'torch' has no 'nonzero' member
Error

Line: 32 Column: 19

              
    return signatures

_manual_overrides[torch.nonzero] = _nonzero_schemas()

class _FakeGlobalNamespace:
    def __getattr__(self, name):
        if name == 'torch':
            return torch

            

Reported by Pylint.

Module 'torch' has no 'device' member
Error

Line: 40 Column: 59

                          return torch
        raise RuntimeError('Expected a torch namespace lookup')

_type_eval_globals = {'Tensor' : torch.Tensor, 'Device' : torch.device, 'Layout' : torch.layout,
                      'number' : numbers.Number, 'Future' : torch.jit.Future,
                      'AnyEnumType' : enum.Enum, 'QScheme' : torch.qscheme,
                      '__torch__': _FakeGlobalNamespace(), 'NoneType': type(None),
                      't': typing.TypeVar('t')}
for k in dir(typing):

            

Reported by Pylint.

Module 'torch' has no 'layout' member
Error

Line: 40 Column: 84

                          return torch
        raise RuntimeError('Expected a torch namespace lookup')

_type_eval_globals = {'Tensor' : torch.Tensor, 'Device' : torch.device, 'Layout' : torch.layout,
                      'number' : numbers.Number, 'Future' : torch.jit.Future,
                      'AnyEnumType' : enum.Enum, 'QScheme' : torch.qscheme,
                      '__torch__': _FakeGlobalNamespace(), 'NoneType': type(None),
                      't': typing.TypeVar('t')}
for k in dir(typing):

            

Reported by Pylint.

Module 'torch' has no 'qscheme' member
Error

Line: 42 Column: 62

              
_type_eval_globals = {'Tensor' : torch.Tensor, 'Device' : torch.device, 'Layout' : torch.layout,
                      'number' : numbers.Number, 'Future' : torch.jit.Future,
                      'AnyEnumType' : enum.Enum, 'QScheme' : torch.qscheme,
                      '__torch__': _FakeGlobalNamespace(), 'NoneType': type(None),
                      't': typing.TypeVar('t')}
for k in dir(typing):
    _type_eval_globals[k] = getattr(typing, k)


            

Reported by Pylint.

Module 'torch' has no 'dtype' member
Error

Line: 170 Column: 51

                      return is_homogeneous_tuple(argument_type)

    # Dtype is an int in schemas
    if signature_type is int and argument_type is torch.dtype:
        return True

    if signature_type is numbers.Number and argument_type in {int, float}:
        return True
    if inspect.isclass(argument_type) and inspect.isclass(signature_type):

            

Reported by Pylint.

Unused argument 'self'
Error

Line: 22 Column: 17

              def _nonzero_schemas():
    signatures = []

    def nonzero(self):
        pass
    signatures.append(inspect.signature(nonzero))

    def nonzero(self, *, as_tuple : bool):  # type: ignore[no-redef]
        pass

            

Reported by Pylint.

Unused argument 'as_tuple'
Error

Line: 26 Column: 26

                      pass
    signatures.append(inspect.signature(nonzero))

    def nonzero(self, *, as_tuple : bool):  # type: ignore[no-redef]
        pass
    signatures.append(inspect.signature(nonzero))

    return signatures


            

Reported by Pylint.

Unused argument 'self'
Error

Line: 26 Column: 17

                      pass
    signatures.append(inspect.signature(nonzero))

    def nonzero(self, *, as_tuple : bool):  # type: ignore[no-redef]
        pass
    signatures.append(inspect.signature(nonzero))

    return signatures


            

Reported by Pylint.

Use of possibly insecure function - consider using safer ast.literal_eval.
Security blacklist

Line: 54
Suggestion: https://bandit.readthedocs.io/en/latest/blacklists/blacklist_calls.html#b307-eval

                  eval'ing the annotation_str. _type_eval_globals sets up expressions
    like "List" and "Future" to map to actual types (typing.List and jit.Future)
    """
    return eval(ts_type.annotation_str, _type_eval_globals)

def _torchscript_schema_to_signature(ts_schema : torch._C.FunctionSchema) -> inspect.Signature:
    parameters : List[inspect.Parameter] = []
    for arg in ts_schema.arguments:
        arg_type = _torchscript_type_to_python_type(arg.type)

            

Reported by Bandit.
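Bandit's suggestion only applies when the string holds a Python literal; `ast.literal_eval` cannot evaluate type expressions like the `annotation_str` here, which is presumably why the code keeps `eval` over a restricted globals dict. For untrusted input that is plain data, though, the safer call looks like:

```python
import ast

# literal_eval parses literals only (numbers, strings, tuples, lists,
# dicts, sets, booleans, None) and raises ValueError on anything else,
# so it cannot execute arbitrary code the way eval() can
value = ast.literal_eval("[1, 2, {'kind': 'Tensor'}]")
```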

test/distributed/test_pg_wrapper.py
73 issues
Unable to import 'torch'
Error

Line: 5 Column: 1

              import sys
from datetime import timedelta

import torch
import torch.distributed as c10d

if not c10d.is_available():
    print("c10d not available, skipping tests", file=sys.stderr)
    sys.exit(0)

            

Reported by Pylint.

Unable to import 'torch.distributed'
Error

Line: 6 Column: 1

              from datetime import timedelta

import torch
import torch.distributed as c10d

if not c10d.is_available():
    print("c10d not available, skipping tests", file=sys.stderr)
    sys.exit(0)


            

Reported by Pylint.

Unable to import 'torch.testing._internal.common_distributed'
Error

Line: 13 Column: 1

                  sys.exit(0)

from test_c10d_common import LOOPBACK
from torch.testing._internal.common_distributed import (
    MultiProcessTestCase,
    requires_nccl,
    requires_gloo,
    skip_if_lt_x_gpu,
    with_dist_debug_levels,

            

Reported by Pylint.

Unable to import 'torch.testing._internal.common_utils'
Error

Line: 21 Column: 1

                  with_dist_debug_levels,
    create_device,
)
from torch.testing._internal.common_utils import (
    run_tests,
    TEST_WITH_TSAN,
    TEST_WITH_DEV_DBG_ASAN,
)


            

Reported by Pylint.

Bad first argument 'AbstractProcessGroupWrapperTest' given to super()
Error

Line: 221 Column: 13

                  @requires_nccl()
    class ProcessGroupNCCLWrapperTest(AbstractProcessGroupWrapperTest):
        def setUp(self):
            super(AbstractProcessGroupWrapperTest, self).setUp()
            self._spawn_processes()
            # NCCL_BLOCKING_WAIT overrides NCCL_ASYNC_ERROR_HANDLING hence tests
            # that use NCCL_BLOCKING_WAIT will test it as expected.
            os.environ["NCCL_ASYNC_ERROR_HANDLING"] = "1"


            

Reported by Pylint.
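`super(AbstractProcessGroupWrapperTest, self)` starts the method-resolution-order search *after* that class, so `AbstractProcessGroupWrapperTest.setUp` itself is skipped. Naming the current class, or using zero-argument `super()` on Python 3, runs the whole chain; a minimal sketch with stand-in class names:

```python
class Base:
    def setUp(self):
        self.calls = ["Base"]


class Wrapper(Base):
    def setUp(self):
        # super() == super(Wrapper, self): the MRO search starts after
        # Wrapper, so Base.setUp runs. Passing Base as the first
        # argument would start the search *after* Base and skip it.
        super().setUp()
        self.calls.append("Wrapper")


w = Wrapper()
w.setUp()
```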

Unused argument 'rank'
Error

Line: 37 Column: 51

                      else:
            self._fork_processes()

    def _validate_error(self, exception, op_type, rank, tensor):
        err = str(exception)
        self.assertTrue(
            op_type in err, f"Got {err} but expected {op_type} to be in error."
        )
        # User doesn't call barrier with tensor.

            

Reported by Pylint.

Redefining built-in 'input'
Error

Line: 185 Column: 9

                      )

        # Check shape errors with scatter
        input = [
            torch.tensor(
                [self.rank] if self.rank == 0 else [self.rank, self.rank],
                device=self.rank if use_cuda else "cpu",
            )
            for _ in range(self.world_size)

            

Reported by Pylint.

Access to a protected member _create_process_group_wrapper of a client class
Error

Line: 246 Column: 22

                              _pg = c10d.ProcessGroupNCCL(
                    store, self.rank, self.world_size, timeout=timedelta(seconds=timeout)
                )
                pg = c10d._create_process_group_wrapper(
                    _pg,
                    "unused",
                    store,
                    self.rank,
                    self.world_size,

            

Reported by Pylint.

Useless super delegation in method 'setUp'
Error

Line: 298 Column: 9

              if not TEST_WITH_TSAN:
    @requires_gloo()
    class ProcessGroupGlooWrapperTest(AbstractProcessGroupWrapperTest):
        def setUp(self):
            super(ProcessGroupGlooWrapperTest, self).setUp()

        def opts(self, threads=2, timeout=10.0):
            opts = c10d.ProcessGroupGloo._Options()
            opts._timeout = timeout

            

Reported by Pylint.
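A `setUp` that only forwards to `super().setUp()` with the same arguments can be deleted outright: attribute lookup finds the inherited method automatically. Sketch with stand-in class names:

```python
class Base:
    def setUp(self):
        self.ready = True


class GlooWrapper(Base):
    # no setUp override: useless-super-delegation is cleared by
    # simply inheriting Base.setUp
    pass


case = GlooWrapper()
case.setUp()
```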

Access to a protected member _Options of a client class
Error

Line: 302 Column: 20

                          super(ProcessGroupGlooWrapperTest, self).setUp()

        def opts(self, threads=2, timeout=10.0):
            opts = c10d.ProcessGroupGloo._Options()
            opts._timeout = timeout
            opts._devices = [create_device(interface=LOOPBACK)]
            opts._threads = threads
            return opts


            

Reported by Pylint.

torch/backends/cudnn/__init__.py
73 issues
Module 'torch' has no 'half' member
Error

Line: 56 Column: 5

              

CUDNN_TENSOR_DTYPES = {
    torch.half,
    torch.float,
    torch.double,
}



            

Reported by Pylint.

Module 'torch' has no 'float' member
Error

Line: 57 Column: 5

              
CUDNN_TENSOR_DTYPES = {
    torch.half,
    torch.float,
    torch.double,
}


def is_available():

            

Reported by Pylint.

Module 'torch' has no 'double' member
Error

Line: 58 Column: 5

              CUDNN_TENSOR_DTYPES = {
    torch.half,
    torch.float,
    torch.double,
}


def is_available():
    r"""Returns a bool indicating if CUDNN is currently available."""

            

Reported by Pylint.

Using the global statement
Error

Line: 22 Column: 9

              
if _cudnn is not None:
    def _init():
        global __cudnn_version
        if __cudnn_version is None:
            __cudnn_version = _cudnn.getVersionInt()
            runtime_version = _cudnn.getRuntimeVersion()
            compile_version = _cudnn.getCompileVersion()
            runtime_major, runtime_minor, _ = runtime_version

            

Reported by Pylint.

Access to a protected member _C of a client class
Error

Line: 64 Column: 12

              
def is_available():
    r"""Returns a bool indicating if CUDNN is currently available."""
    return torch._C.has_cudnn


def is_acceptable(tensor):
    if not torch._C._get_cudnn_enabled():
        return False

            

Reported by Pylint.

Access to a protected member _C of a client class
Error

Line: 68 Column: 12

              

def is_acceptable(tensor):
    if not torch._C._get_cudnn_enabled():
        return False
    if tensor.device.type != 'cuda' or tensor.dtype not in CUDNN_TENSOR_DTYPES:
        return False
    if not is_available():
        warnings.warn(

            

Reported by Pylint.

Access to a protected member _get_cudnn_enabled of a client class
Error

Line: 68 Column: 12

              

def is_acceptable(tensor):
    if not torch._C._get_cudnn_enabled():
        return False
    if tensor.device.type != 'cuda' or tensor.dtype not in CUDNN_TENSOR_DTYPES:
        return False
    if not is_available():
        warnings.warn(

            

Reported by Pylint.

Access to a protected member _get_cudnn_enabled of a client class
Error

Line: 88 Column: 19

              

def set_flags(_enabled=None, _benchmark=None, _deterministic=None, _allow_tf32=None):
    orig_flags = (torch._C._get_cudnn_enabled(),
                  torch._C._get_cudnn_benchmark(),
                  torch._C._get_cudnn_deterministic(),
                  torch._C._get_cudnn_allow_tf32())
    if _enabled is not None:
        torch._C._set_cudnn_enabled(_enabled)

            

Reported by Pylint.

Access to a protected member _C of a client class
Error

Line: 88 Column: 19

              

def set_flags(_enabled=None, _benchmark=None, _deterministic=None, _allow_tf32=None):
    orig_flags = (torch._C._get_cudnn_enabled(),
                  torch._C._get_cudnn_benchmark(),
                  torch._C._get_cudnn_deterministic(),
                  torch._C._get_cudnn_allow_tf32())
    if _enabled is not None:
        torch._C._set_cudnn_enabled(_enabled)

            

Reported by Pylint.

Access to a protected member _C of a client class
Error

Line: 89 Column: 19

              
def set_flags(_enabled=None, _benchmark=None, _deterministic=None, _allow_tf32=None):
    orig_flags = (torch._C._get_cudnn_enabled(),
                  torch._C._get_cudnn_benchmark(),
                  torch._C._get_cudnn_deterministic(),
                  torch._C._get_cudnn_allow_tf32())
    if _enabled is not None:
        torch._C._set_cudnn_enabled(_enabled)
    if _benchmark is not None:

            

Reported by Pylint.

caffe2/python/operator_test/conv_transpose_test.py
73 issues
Unable to import 'hypothesis'
Error

Line: 6 Column: 1

              

import numpy as np
from hypothesis import assume, given, settings
import hypothesis.strategies as st

from caffe2.proto import caffe2_pb2
from caffe2.python import core, utils
import caffe2.python.hypothesis_test_util as hu

            

Reported by Pylint.

Unable to import 'hypothesis.strategies'
Error

Line: 7 Column: 1

              
import numpy as np
from hypothesis import assume, given, settings
import hypothesis.strategies as st

from caffe2.proto import caffe2_pb2
from caffe2.python import core, utils
import caffe2.python.hypothesis_test_util as hu
import caffe2.python.hip_test_util as hiputl

            

Reported by Pylint.

TODO: Group conv_transpose in NHWC not implemented for GPU yet.
Error

Line: 384 Column: 3

                          output_channels, batch_size, group, order, engine, shared_buffer,
            use_bias, gc, dc):
        assume(adj < stride)
        # TODO: Group conv_transpose in NHWC not implemented for GPU yet.
        assume(group == 1 or order == "NCHW" or
               gc.device_type == caffe2_pb2.CPU)
        if group != 1 and order == "NHWC":
            dc = [d for d in dc if d.device_type == caffe2_pb2.CPU]


            

Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

              



import numpy as np
from hypothesis import assume, given, settings
import hypothesis.strategies as st

from caffe2.proto import caffe2_pb2

            

Reported by Pylint.

Missing class docstring
Error

Line: 15 Column: 1

              import caffe2.python.hip_test_util as hiputl


class TestConvolutionTranspose(hu.HypothesisTestCase):
    @given(stride=st.integers(1, 3),
           pad=st.integers(0, 3),
           kernel=st.integers(1, 5),
           adj=st.integers(0, 2),
           size=st.integers(7, 10),

            

Reported by Pylint.

Missing function or method docstring
Error

Line: 28 Column: 5

                         shared_buffer=st.booleans(),
           use_bias=st.booleans(),
           **hu.gcs)
    def test_convolution_transpose_layout_legacy_args(
            self, stride, pad, kernel, adj,
            size, input_channels,
            output_channels, batch_size,
            engine, shared_buffer, use_bias, gc, dc):
        assume(adj < stride)

            

Reported by Pylint.

Too many local variables (24/15)
Error

Line: 28 Column: 5

                         shared_buffer=st.booleans(),
           use_bias=st.booleans(),
           **hu.gcs)
    def test_convolution_transpose_layout_legacy_args(
            self, stride, pad, kernel, adj,
            size, input_channels,
            output_channels, batch_size,
            engine, shared_buffer, use_bias, gc, dc):
        assume(adj < stride)

            

Reported by Pylint.

Argument name "gc" doesn't conform to snake_case naming style
Error

Line: 28 Column: 5

                         shared_buffer=st.booleans(),
           use_bias=st.booleans(),
           **hu.gcs)
    def test_convolution_transpose_layout_legacy_args(
            self, stride, pad, kernel, adj,
            size, input_channels,
            output_channels, batch_size,
            engine, shared_buffer, use_bias, gc, dc):
        assume(adj < stride)

            

Reported by Pylint.

Argument name "dc" doesn't conform to snake_case naming style
Convention

Line: 28 Column: 5

                         shared_buffer=st.booleans(),
           use_bias=st.booleans(),
           **hu.gcs)
    def test_convolution_transpose_layout_legacy_args(
            self, stride, pad, kernel, adj,
            size, input_channels,
            output_channels, batch_size,
            engine, shared_buffer, use_bias, gc, dc):
        assume(adj < stride)

            

Reported by Pylint.

Too many arguments (14/5)
Refactor

Line: 28 Column: 5

                         shared_buffer=st.booleans(),
           use_bias=st.booleans(),
           **hu.gcs)
    def test_convolution_transpose_layout_legacy_args(
            self, stride, pad, kernel, adj,
            size, input_channels,
            output_channels, batch_size,
            engine, shared_buffer, use_bias, gc, dc):
        assume(adj < stride)

            

Reported by Pylint.
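The too-many-arguments and too-many-locals findings on the same method usually yield to grouping related parameters into one value object; a hedged sketch (the ConvTransposeSpec name is invented, not part of caffe2):

```python
from dataclasses import dataclass


@dataclass
class ConvTransposeSpec:
    # Bundling the geometry arguments shrinks the test signature from a
    # dozen scalars down to a single parameter object.
    stride: int
    pad: int
    kernel: int
    adj: int

    def is_valid(self) -> bool:
        # Mirrors the test's `assume(adj < stride)` precondition.
        return self.adj < self.stride


print(ConvTransposeSpec(stride=2, pad=0, kernel=3, adj=1).is_valid())
```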

torch/fx/experimental/optimization.py
73 issues
Module 'torch' has no 'transpose' member
Error

Line: 110 Column: 17

              
mkldnn_supported = [
    nn.Conv2d, nn.Linear, nn.BatchNorm2d, nn.ReLU, nn.MaxPool2d, nn.AvgPool2d, nn.AdaptiveAvgPool2d,
    torch.relu, torch.transpose, torch.sigmoid,
    F.relu, F.avg_pool2d, F.adaptive_avg_pool2d
]
# These are operators that may not be convertible into MKLDNN ops (e.g. the
# args are scalar values). Thus, we only include them in the subgraph if their
# arguments are already in MKLDNN.

            

Reported by Pylint.

Module 'torch' has no 'relu' member
Error

Line: 110 Column: 5

              
mkldnn_supported = [
    nn.Conv2d, nn.Linear, nn.BatchNorm2d, nn.ReLU, nn.MaxPool2d, nn.AvgPool2d, nn.AdaptiveAvgPool2d,
    torch.relu, torch.transpose, torch.sigmoid,
    F.relu, F.avg_pool2d, F.adaptive_avg_pool2d
]
# These are operators that may not be convertible into MKLDNN ops (e.g. the
# args are scalar values). Thus, we only include them in the subgraph if their
# arguments are already in MKLDNN.

            

Reported by Pylint.

Module 'torch' has no 'sigmoid' member
Error

Line: 110 Column: 34

              
mkldnn_supported = [
    nn.Conv2d, nn.Linear, nn.BatchNorm2d, nn.ReLU, nn.MaxPool2d, nn.AvgPool2d, nn.AdaptiveAvgPool2d,
    torch.relu, torch.transpose, torch.sigmoid,
    F.relu, F.avg_pool2d, F.adaptive_avg_pool2d
]
# These are operators that may not be convertible into MKLDNN ops (e.g. the
# args are scalar values). Thus, we only include them in the subgraph if their
# arguments are already in MKLDNN.

            

Reported by Pylint.
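The no-member findings against torch.relu, torch.transpose, and torch.sigmoid are almost certainly false positives: torch builds much of its surface at C-extension load time, where Pylint's static analysis cannot see it. One common mitigation is the generated-members option (a project-config sketch, adjust the pattern to taste):

```ini
# .pylintrc
[TYPECHECK]
# Treat dynamically created torch attributes as known members (E1101).
generated-members=torch.*
```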

Module 'torch' has no 'float' member
Error

Line: 137 Column: 71

                          assert(isinstance(node.target, str))
            cur_module = modules[node.target]
            if type(cur_module) in mkldnn_map:
                new_module = mkldnn_map[type(cur_module)](cur_module, torch.float)
                assert(isinstance(new_module, nn.Module))
                old_modules[new_module] = copy.deepcopy(cur_module)
                replace_node_module(node, modules, new_module)
    return old_modules


            

Reported by Pylint.

Module 'torch' has no 'randn' member
Error

Line: 181 Column: 26

                          fx_model = graph.fx_graph.owning_module
            old_modules = graph.fx_graph.old_modules  # type: ignore[attr-defined]
            ShapeProp(fx_model).propagate(example_inputs)
        sample_inputs = [torch.randn(node.shape) for node in input_nodes]  # type: ignore[attr-defined]
        output_args = cast(List[fx.Node], [node.args[0] for node in graph.end_nodes])
        submodule = extract_subgraph(fx_model, graph.nodes, input_nodes, output_args)

        def benchmark(f):
            for _ in range(warmup):

            

Reported by Pylint.

Module 'torch' has no 'float' member
Error

Line: 295 Column: 54

                              supports_mkldnn = MklSupport.YES
                sample_parameter = next(cur_module.parameters(), None)
                if sample_parameter is not None:
                    assert(sample_parameter.dtype == torch.float), "this pass is only for torch.float modules"
                    assert(sample_parameter.device == torch.device('cpu')), "this pass is only for CPU modules"
        elif node.op == 'call_function':
            if node.target in mkldnn_supported:
                supports_mkldnn = MklSupport.YES
            elif node.target in mkldnn_supported_unknown:

            

Reported by Pylint.

Module 'torch' has no 'device' member
Error

Line: 296 Column: 55

                              sample_parameter = next(cur_module.parameters(), None)
                if sample_parameter is not None:
                    assert(sample_parameter.dtype == torch.float), "this pass is only for torch.float modules"
                    assert(sample_parameter.device == torch.device('cpu')), "this pass is only for CPU modules"
        elif node.op == 'call_function':
            if node.target in mkldnn_supported:
                supports_mkldnn = MklSupport.YES
            elif node.target in mkldnn_supported_unknown:
                supports_mkldnn = MklSupport.UNKNOWN

            

Reported by Pylint.

Redefining built-in 'input'
Warning

Line: 98 Column: 9

                  """
    new_graph = fx.Graph()
    env: Dict[fx.Node, fx.Node] = {}
    for input in inputs:
        new_node = new_graph.placeholder(input.name)
        env[input] = new_node
    for node in nodes:
        new_node = new_graph.node_copy(node, lambda x: env[x])
        env[node] = new_node

            

Reported by Pylint.
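Redefining input shadows the builtin for the rest of the scope; renaming the loop variable is enough. A sketch of the fix applied to the placeholder loop (make_env and the placeholder string are illustrative stand-ins for the fx calls):

```python
def make_env(inputs):
    # `inp` instead of `input` leaves the builtin untouched (W0622).
    env = {}
    for inp in inputs:          # was: for input in inputs:
        env[inp] = f"placeholder_{inp}"
    return env


print(make_env(["x", "y"]))
```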

TODO: Determine whether this can be removed after type inference.
Warning

Line: 116 Column: 3

              # These are operators that may not be convertible into MKLDNN ops (e.g. the
# args are scalar values). Thus, we only include them in the subgraph if their
# arguments are already in MKLDNN.
# TODO: Determine whether this can be removed after type inference.
mkldnn_supported_unknown = [operator.add, operator.mul]
mkldnn_map = {
    nn.Conv2d: th_mkldnn.MkldnnConv2d,
    nn.Linear: th_mkldnn.MkldnnLinear,
    nn.BatchNorm2d: lambda a, _: th_mkldnn.MkldnnBatchNorm(a)

            

Reported by Pylint.

Unused variable 'out'
Warning

Line: 190 Column: 17

                              f()
            begin = time.time()
            for _ in range(iters):
                out = f()
            return time.time() - begin

        mkl_time = benchmark(lambda: [i.to_dense() for i in submodule(*[i.to_mkldnn() for i in sample_inputs])])

        reset_modules(submodule.graph.nodes, dict(submodule.named_modules()), old_modules)

            

Reported by Pylint.
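The unused out binding exists only to drive the timed loop; dropping the assignment removes the warning without changing the benchmark. A self-contained sketch of the corrected helper:

```python
import time


def benchmark(f, warmup=2, iters=5):
    for _ in range(warmup):
        f()
    begin = time.time()
    for _ in range(iters):
        f()  # was: out = f()  -- the result was never read (W0612)
    return time.time() - begin


print(benchmark(lambda: sum(range(100))) >= 0.0)
```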

torch/quantization/quantize_jit.py
73 issues
Attempted relative import beyond top-level package
Error

Line: 3 Column: 1

              
import torch
from .qconfig import QConfig
from .quant_type import QuantType
from torch.jit._recursive import wrap_cpp_module

def _check_is_script_module(model):
    if not isinstance(model, torch.jit.ScriptModule):
        raise ValueError('input must be a script module, got: ' + str(type(model)))

            

Reported by Pylint.

Attempted relative import beyond top-level package
Error

Line: 4 Column: 1

              
import torch
from .qconfig import QConfig
from .quant_type import QuantType
from torch.jit._recursive import wrap_cpp_module

def _check_is_script_module(model):
    if not isinstance(model, torch.jit.ScriptModule):
        raise ValueError('input must be a script module, got: ' + str(type(model)))

            

Reported by Pylint.
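E0402 means the leading-dot import (from .qconfig import QConfig) resolves above the package root as Pylint sees it; the conventional fix is an absolute import (from torch.quantization.qconfig import QConfig). The mechanics can be shown with a throwaway package (mypkg, helpers, and VALUE are invented for the demo):

```python
import os
import sys
import tempfile

# Build a tiny package on disk so the absolute import below is resolvable.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(pkg, "helpers.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg, "main.py"), "w") as f:
    # Absolute form: works however the module is executed, and Pylint
    # can resolve it without guessing where the package root is.
    f.write("from mypkg.helpers import VALUE\n")

sys.path.insert(0, root)
from mypkg.main import VALUE

print(VALUE)  # 42
```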

Access to a protected member _c of a client class
Warning

Line: 12 Column: 12

                      raise ValueError('input must be a script module, got: ' + str(type(model)))

def _check_forward_method(model):
    if not model._c._has_method('forward'):
        raise ValueError('input script module does not have forward method')

def script_qconfig(qconfig):
    r"""Instantiate the activation and weight observer modules and script
    them, these observer module instances will be deepcopied during

            

Reported by Pylint.

Access to a protected member _has_method of a client class
Warning

Line: 12 Column: 12

                      raise ValueError('input must be a script module, got: ' + str(type(model)))

def _check_forward_method(model):
    if not model._c._has_method('forward'):
        raise ValueError('input script module does not have forward method')

def script_qconfig(qconfig):
    r"""Instantiate the activation and weight observer modules and script
    them, these observer module instances will be deepcopied during

            

Reported by Pylint.

Access to a protected member _c of a client class
Warning

Line: 21 Column: 20

                  prepare_jit step.
    """
    return QConfig(
        activation=torch.jit.script(qconfig.activation())._c,
        weight=torch.jit.script(qconfig.weight())._c)

def script_qconfig_dict(qconfig_dict):
    r"""Helper function used by `prepare_jit`.
    Apply `script_qconfig` for all entries in `qconfig_dict` that is

            

Reported by Pylint.

Access to a protected member _c of a client class
Warning

Line: 22 Column: 16

                  """
    return QConfig(
        activation=torch.jit.script(qconfig.activation())._c,
        weight=torch.jit.script(qconfig.weight())._c)

def script_qconfig_dict(qconfig_dict):
    r"""Helper function used by `prepare_jit`.
    Apply `script_qconfig` for all entries in `qconfig_dict` that is
    not None.

            

Reported by Pylint.

Access to a protected member _C of a client class
Warning

Line: 38 Column: 5

                  Args:
        model: TorchScript model from scripting or tracing
    """
    torch._C._log_api_usage_once("quantization_api.quantize_jit.fuse_conv_bn_jit")
    model_c = model._c
    model_c = torch._C._jit_pass_fold_convbn(model_c)
    if inplace:
        model._reconstruct(model_c)
    else:

            

Reported by Pylint.

Access to a protected member _log_api_usage_once of a client class
Warning

Line: 38 Column: 5

                  Args:
        model: TorchScript model from scripting or tracing
    """
    torch._C._log_api_usage_once("quantization_api.quantize_jit.fuse_conv_bn_jit")
    model_c = model._c
    model_c = torch._C._jit_pass_fold_convbn(model_c)
    if inplace:
        model._reconstruct(model_c)
    else:

            

Reported by Pylint.

Access to a protected member _c of a client class
Warning

Line: 39 Column: 15

                      model: TorchScript model from scripting or tracing
    """
    torch._C._log_api_usage_once("quantization_api.quantize_jit.fuse_conv_bn_jit")
    model_c = model._c
    model_c = torch._C._jit_pass_fold_convbn(model_c)
    if inplace:
        model._reconstruct(model_c)
    else:
        model = wrap_cpp_module(model_c)

            

Reported by Pylint.

Access to a protected member _jit_pass_fold_convbn of a client class
Warning

Line: 40 Column: 15

                  """
    torch._C._log_api_usage_once("quantization_api.quantize_jit.fuse_conv_bn_jit")
    model_c = model._c
    model_c = torch._C._jit_pass_fold_convbn(model_c)
    if inplace:
        model._reconstruct(model_c)
    else:
        model = wrap_cpp_module(model_c)
    return model

            

Reported by Pylint.
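The cluster of protected-access warnings (W0212) comes from reaching into model._c and torch._C; inside PyTorch's own quantization code this is deliberate, so the warnings are typically suppressed rather than fixed. Where a real fix is wanted, the owning class can expose a public accessor; a generic sketch (ScriptModuleLike and cpp_handle are invented names):

```python
class ScriptModuleLike:
    """Toy stand-in for a wrapper that keeps its C++ handle in `_c`."""

    def __init__(self):
        self._c = object()

    @property
    def cpp_handle(self):
        # Public accessor: callers no longer touch `_c` directly,
        # so client code stops triggering W0212.
        return self._c


m = ScriptModuleLike()
print(m.cpp_handle is m._c)  # True
```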

test/distributed/pipeline/sync/test_stream.py
72 issues
Unable to import 'pytest'
Error

Line: 7 Column: 1

              #
# This source code is licensed under the BSD license found in the
# LICENSE file in the root directory of this source tree.
import pytest
import torch

from torch.distributed.pipeline.sync.stream import (
    CPUStream,
    current_stream,

            

Reported by Pylint.

Unable to import 'torch'
Error

Line: 8 Column: 1

              # This source code is licensed under the BSD license found in the
# LICENSE file in the root directory of this source tree.
import pytest
import torch

from torch.distributed.pipeline.sync.stream import (
    CPUStream,
    current_stream,
    default_stream,

            

Reported by Pylint.

Unable to import 'torch.distributed.pipeline.sync.stream'
Error

Line: 10 Column: 1

              import pytest
import torch

from torch.distributed.pipeline.sync.stream import (
    CPUStream,
    current_stream,
    default_stream,
    get_device,
    is_cuda,

            

Reported by Pylint.

Missing module docstring
Convention

Line: 1 Column: 1

              # Copyright 2019 Kakao Brain
#
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
#
# This source code is licensed under the BSD license found in the
# LICENSE file in the root directory of this source tree.
import pytest
import torch


            

Reported by Pylint.

Missing class docstring
Convention

Line: 26 Column: 1

              skip_if_no_cuda = pytest.mark.skipif(not torch.cuda.is_available(), reason="cuda required")


class TestNewStream:
    def test_new_stream_cpu(self):
        stream = new_stream(torch.device("cpu"))
        assert stream is CPUStream

    @skip_if_no_cuda

            

Reported by Pylint.

Missing function or method docstring
Convention

Line: 27 Column: 5

              

class TestNewStream:
    def test_new_stream_cpu(self):
        stream = new_stream(torch.device("cpu"))
        assert stream is CPUStream

    @skip_if_no_cuda
    def test_new_stream_cuda(self):

            

Reported by Pylint.

Method could be a function
Refactor

Line: 27 Column: 5

              

class TestNewStream:
    def test_new_stream_cpu(self):
        stream = new_stream(torch.device("cpu"))
        assert stream is CPUStream

    @skip_if_no_cuda
    def test_new_stream_cuda(self):

            

Reported by Pylint.

Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.
Security

Line: 29
Suggestion: https://bandit.readthedocs.io/en/latest/plugins/b101_assert_used.html

              class TestNewStream:
    def test_new_stream_cpu(self):
        stream = new_stream(torch.device("cpu"))
        assert stream is CPUStream

    @skip_if_no_cuda
    def test_new_stream_cuda(self):
        stream = new_stream(torch.device("cuda"))
        assert isinstance(stream, torch.cuda.Stream)

            

Reported by Bandit.
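Bandit's B101 fires because python -O strips assert statements; inside pytest test bodies asserts are idiomatic, so the finding is normally suppressed for test directories. In library code the robust form is an explicit raise; a small sketch (check_stream is an invented helper, not part of the pipeline API):

```python
def check_stream(stream, expected):
    # An explicit exception survives `python -O`, unlike `assert`.
    if stream != expected:
        raise ValueError(f"expected {expected!r}, got {stream!r}")
    return stream


print(check_stream("cpu", "cpu"))
```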

Method could be a function
Refactor

Line: 32 Column: 5

                      assert stream is CPUStream

    @skip_if_no_cuda
    def test_new_stream_cuda(self):
        stream = new_stream(torch.device("cuda"))
        assert isinstance(stream, torch.cuda.Stream)
        assert stream != torch.cuda.default_stream()



            

Reported by Pylint.

Missing function or method docstring
Convention

Line: 32 Column: 5

                      assert stream is CPUStream

    @skip_if_no_cuda
    def test_new_stream_cuda(self):
        stream = new_stream(torch.device("cuda"))
        assert isinstance(stream, torch.cuda.Stream)
        assert stream != torch.cuda.default_stream()



            

Reported by Pylint.