The following issues were found:

torch/distributed/algorithms/model_averaging/averagers.py
9 issues
Missing module docstring
Error

Line: 1 Column: 1

              import warnings
from abc import ABC, abstractmethod

import torch.distributed as dist
import torch.distributed.algorithms.model_averaging.utils as utils


class ModelAverager(ABC):
    r"""Base class for all model averagers.

            

Reported by Pylint.
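A minimal sketch of the fix: make a docstring the first statement of `averagers.py`. Since the real file cannot be edited from this report, the mechanism is demonstrated below with `ast.get_docstring` on a hypothetical fixed header (the docstring wording is illustrative, not the project's).

```python
import ast

# Hypothetical corrected header for averagers.py: a docstring placed before
# any import becomes the module docstring, which satisfies Pylint C0114.
FIXED_SOURCE = '"""Model averaging components for post-local SGD."""\nimport warnings\n'

tree = ast.parse(FIXED_SOURCE)
print(ast.get_docstring(tree))  # the parser now reports a module docstring
```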

Too few public methods (1/2)
Error

Line: 8 Column: 1

              import torch.distributed.algorithms.model_averaging.utils as utils


class ModelAverager(ABC):
    r"""Base class for all model averagers.

    Args:
        process_group: The process group to be used for all-reduce.
                       If ``None``, the default process group, which

            

Reported by Pylint.
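For an abstract base class that deliberately exposes a single public method, the usual remedy for `too-few-public-methods` is a targeted disable rather than padding the class with filler methods. A sketch under that assumption (the class body below paraphrases the excerpt, it is not the full original):

```python
from abc import ABC, abstractmethod

class ModelAverager(ABC):  # pylint: disable=too-few-public-methods
    """Base class for all model averagers."""

    def __init__(self, process_group=None):
        self.process_group = process_group
        self.step = 0

    @abstractmethod
    def average_parameters(self, params):
        """Average ``params`` across the process group."""

class NoOpAverager(ModelAverager):
    """Trivial subclass used only to check that the sketch instantiates."""
    def average_parameters(self, params):
        return list(params)

print(NoOpAverager().step)
```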

Missing function or method docstring
Error

Line: 25 Column: 5

                      self.step = 0

    @abstractmethod
    def average_parameters(self, params):
        raise NotImplementedError


class PeriodicModelAverager(ModelAverager):
    r"""

            

Reported by Pylint.
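A docstring body fixes the missing-docstring warning here, and with `@abstractmethod` the explicit `raise NotImplementedError` becomes redundant: instantiating the class already fails. A minimal sketch:

```python
from abc import ABC, abstractmethod

class Averager(ABC):  # pylint: disable=too-few-public-methods
    """Sketch of the abstract base from averagers.py."""

    @abstractmethod
    def average_parameters(self, params):
        """Average the given parameters in place across workers."""
        # A docstring alone is a valid body; @abstractmethod already makes
        # direct instantiation raise TypeError, so no explicit raise is needed.

try:
    Averager()
except TypeError as exc:
    print("cannot instantiate:", exc)
```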

Too few public methods (1/2)
Error

Line: 29 Column: 1

                      raise NotImplementedError


class PeriodicModelAverager(ModelAverager):
    r"""
    Averages parameters periodically after the warm-up stage.

    This can be used for running `post-local SGD <https://arxiv.org/abs/1808.07217>`_,
    by running :class:`~torch.nn.DistributedDataParallel` (DDP)

            

Reported by Pylint.

Line too long (103/100)
Error

Line: 39 Column: 1

              
    Args:
        period (int): The number of steps per model averaging.
                      Usually the period should be greater than ``1`` to reduce the communication cost.
                      Otherwise, only DDP needs to be used.
        warmup_steps (int): The number of warm-up steps. During this stage,
                            model averaging is skipped.
        process_group: The process group to be used for all-reduce.
                       If ``None``, the default process group, which

            

Reported by Pylint.
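Long docstring lines are fixed by rewrapping; for long string literals in code, implicit concatenation inside parentheses keeps each physical line under the 100-character limit while preserving a single logical string. A sketch using the flagged docstring text:

```python
# Implicit string concatenation: adjacent literals inside parentheses are
# joined at compile time, so no physical line needs to exceed 100 characters.
MESSAGE = (
    "Usually the period should be greater than 1 to reduce the communication "
    "cost. Otherwise, only DDP needs to be used."
)
print(MESSAGE)
```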

Line too long (116/100)
Error

Line: 69 Column: 1

                      >>>
        >>>  # In the first 100 steps, run global gradient averaging like normal DDP at every step.
        >>>  # After 100 steps, run model averaging every 4 steps.
        >>>  # Note that ``warmup_steps`` must be the same as ``start_localSGD_iter`` used in ``PostLocalSGDState``.
        >>>  averager = averagers.PeriodicModelAverager(period=4, warmup_steps=100)
        >>>  for step in range(0, 20):
        >>>     optimizer.zero_grad()
        >>>     loss = loss_fn(output, labels)
        >>>     loss.backward()

            

Reported by Pylint.

Line too long (101/100)
Error

Line: 77 Column: 1

                      >>>     loss.backward()
        >>>     optimizer.step()
        >>>     # Average parameters globally after ``optimizer.step()``.
        >>>     # Thus, the inter-node communication only occurs periodically after ``warmup_steps``.
        >>>     averager.average_parameters(model.parameters())

    .. warning ::
        `PeriodicModelAverager` is experimental and subject to change.
    """

            

Reported by Pylint.

Unnecessary "elif" after "raise"
Error

Line: 94 Column: 9

                      if warmup_steps < 0:
            raise ValueError("Arg ``warmup_steps`` must be a non-negative number.")
        self.warmup_steps = warmup_steps
        if period < 1:
            raise ValueError("Arg ``period`` must be a positive value.")
        elif period == 1:
            warnings.warn(
                "When period is 1, no need to use model averaging because the communication cost "
                "of all-reducing parameters will be no less than the cost of all-reducing gradients "

            

Reported by Pylint.
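Because the first branch raises, the `elif` can be flattened to a plain `if` (Pylint R1720). A sketch of the restructured validation, with the warning text abbreviated:

```python
import warnings

def check_period(period):
    """Constructor validation from PeriodicModelAverager, restructured."""
    if period < 1:
        raise ValueError("Arg ``period`` must be a positive value.")
    # The branch above raises, so a plain ``if`` suffices: control only
    # reaches this point when period >= 1.
    if period == 1:
        warnings.warn(
            "When period is 1, only DistributedDataParallel should be used.")
    return period

print(check_period(4))
```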

Line too long (101/100)
Error

Line: 99 Column: 1

                      elif period == 1:
            warnings.warn(
                "When period is 1, no need to use model averaging because the communication cost "
                "of all-reducing parameters will be no less than the cost of all-reducing gradients "
                "by DistributedDataParall in the backward pass. Therefore, only "
                "DistributedDataParallel should be used for this case."
            )
        self.period = period


            

Reported by Pylint.

caffe2/python/operator_test/cudnn_recurrent_test.py
9 issues
Missing module docstring
Error

Line: 1 Column: 1

              




from caffe2.python import model_helper, workspace, core, rnn_cell
from future.utils import viewitems
import numpy as np


            

Reported by Pylint.

standard import "import unittest" should be placed before "from caffe2.python import model_helper, workspace, core, rnn_cell"
Error

Line: 10 Column: 1

              from future.utils import viewitems
import numpy as np

import unittest


@unittest.skipIf(not workspace.has_gpu_support, "No gpu support.")
class TestLSTMs(unittest.TestCase):


            

Reported by Pylint.
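Pylint C0411 expects standard-library imports first, then third-party, then local imports, each group separated by a blank line. A sketch of the corrected ordering; the third-party and caffe2 lines are shown as comments since those packages may not be importable here:

```python
# Corrected grouping for the flagged file: standard library first.
import unittest  # moved above the caffe2 and numpy imports

# import numpy as np                                  # third-party group
# from caffe2.python import model_helper, workspace   # local/project group

print(unittest.TestCase.__name__)
```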

Missing class docstring
Error

Line: 14 Column: 1

              

@unittest.skipIf(not workspace.has_gpu_support, "No gpu support.")
class TestLSTMs(unittest.TestCase):

    def testEqualToCudnn(self):
        with core.DeviceScope(core.DeviceOption(workspace.GpuDeviceType)):
            T = 8
            batch_size = 4

            

Reported by Pylint.

Method name "testEqualToCudnn" doesn't conform to snake_case naming style
Error

Line: 16 Column: 5

              @unittest.skipIf(not workspace.has_gpu_support, "No gpu support.")
class TestLSTMs(unittest.TestCase):

    def testEqualToCudnn(self):
        with core.DeviceScope(core.DeviceOption(workspace.GpuDeviceType)):
            T = 8
            batch_size = 4
            input_dim = 8
            hidden_dim = 31

            

Reported by Pylint.
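unittest discovers any method whose name starts with `test`, so renaming `testEqualToCudnn` to snake_case keeps it collected. A sketch with a placeholder body (the real test requires GPU support):

```python
import unittest

class TestLSTMs(unittest.TestCase):
    """Sketch of the rename; discovery matches any ``test*`` method."""

    def test_equal_to_cudnn(self):  # was: testEqualToCudnn
        self.assertEqual(2 + 2, 4)  # placeholder body for this sketch

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLSTMs)
print(suite.countTestCases())
```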

Too many local variables (28/15)
Error

Line: 16 Column: 5

              @unittest.skipIf(not workspace.has_gpu_support, "No gpu support.")
class TestLSTMs(unittest.TestCase):

    def testEqualToCudnn(self):
        with core.DeviceScope(core.DeviceOption(workspace.GpuDeviceType)):
            T = 8
            batch_size = 4
            input_dim = 8
            hidden_dim = 31

            

Reported by Pylint.
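One common remedy for `too-many-locals` (R0914) is to bundle related values into a small structure and extract helpers. A hypothetical sketch using the sizes from the excerpt; the helper and its computation are placeholders, not the test's real logic:

```python
from dataclasses import dataclass

@dataclass
class LSTMSpec:
    """Bundles the four sizes that were separate locals in the test."""
    seq_len: int = 8      # was the flagged single-letter local ``T``
    batch_size: int = 4
    input_dim: int = 8
    hidden_dim: int = 31

def input_elements(spec: LSTMSpec) -> int:
    """Placeholder helper: one config object replaces several locals."""
    return spec.seq_len * spec.batch_size * spec.input_dim

print(input_elements(LSTMSpec()))
```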

Missing function or method docstring
Error

Line: 16 Column: 5

              @unittest.skipIf(not workspace.has_gpu_support, "No gpu support.")
class TestLSTMs(unittest.TestCase):

    def testEqualToCudnn(self):
        with core.DeviceScope(core.DeviceOption(workspace.GpuDeviceType)):
            T = 8
            batch_size = 4
            input_dim = 8
            hidden_dim = 31

            

Reported by Pylint.

Variable name "T" doesn't conform to snake_case naming style
Error

Line: 18 Column: 13

              
    def testEqualToCudnn(self):
        with core.DeviceScope(core.DeviceOption(workspace.GpuDeviceType)):
            T = 8
            batch_size = 4
            input_dim = 8
            hidden_dim = 31

            workspace.FeedBlob(

            

Reported by Pylint.

Variable name "LR" doesn't conform to snake_case naming style
Error

Line: 85 Column: 13

                          own_model.AddGradientOperators([own_loss])

            # Add parameter updates
            LR = cudnn_model.param_init_net.ConstantFill(
                [], shape=[1], value=0.01
            )
            ONE = cudnn_model.param_init_net.ConstantFill(
                [], shape=[1], value=1.0
            )

            

Reported by Pylint.

Variable name "ONE" doesn't conform to snake_case naming style
Error

Line: 88 Column: 13

                          LR = cudnn_model.param_init_net.ConstantFill(
                [], shape=[1], value=0.01
            )
            ONE = cudnn_model.param_init_net.ConstantFill(
                [], shape=[1], value=1.0
            )
            for param in cudnn_model.GetParams():
                cudnn_model.WeightedSum(
                    [param, ONE, cudnn_model.param_to_grad[param], LR], param

            

Reported by Pylint.

caffe2/python/operator_test/erf_op_test.py
9 issues
Unable to import 'hypothesis'
Error

Line: 9 Column: 1

              import math

from caffe2.python import core
from hypothesis import given, settings
import caffe2.python.hypothesis_test_util as hu
import caffe2.python.serialized_test.serialized_test_util as serial

import numpy as np
import unittest

            

Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

              




import math

from caffe2.python import core
from hypothesis import given, settings

            

Reported by Pylint.

standard import "import unittest" should be placed before "from caffe2.python import core"
Error

Line: 14 Column: 1

              import caffe2.python.serialized_test.serialized_test_util as serial

import numpy as np
import unittest


class TestErfOp(serial.SerializedTestCase):
    @given(
        X=hu.tensor(elements=hu.floats(min_value=-0.7, max_value=0.7)),

            

Reported by Pylint.

Missing class docstring
Error

Line: 17 Column: 1

              import unittest


class TestErfOp(serial.SerializedTestCase):
    @given(
        X=hu.tensor(elements=hu.floats(min_value=-0.7, max_value=0.7)),
        **hu.gcs)
    @settings(deadline=10000)
    def test_erf(self, X, gc, dc):

            

Reported by Pylint.

Argument name "X" doesn't conform to snake_case naming style
Error

Line: 22 Column: 5

                      X=hu.tensor(elements=hu.floats(min_value=-0.7, max_value=0.7)),
        **hu.gcs)
    @settings(deadline=10000)
    def test_erf(self, X, gc, dc):
        op = core.CreateOperator('Erf', ["X"], ["Y"])
        self.assertReferenceChecks(gc, op, [X], lambda x: (np.vectorize(math.erf)(X),))
        self.assertDeviceChecks(dc, op, [X], [0])
        self.assertGradientChecks(gc, op, [X], 0, [0])


            

Reported by Pylint.

Argument name "gc" doesn't conform to snake_case naming style
Error

Line: 22 Column: 5

                      X=hu.tensor(elements=hu.floats(min_value=-0.7, max_value=0.7)),
        **hu.gcs)
    @settings(deadline=10000)
    def test_erf(self, X, gc, dc):
        op = core.CreateOperator('Erf', ["X"], ["Y"])
        self.assertReferenceChecks(gc, op, [X], lambda x: (np.vectorize(math.erf)(X),))
        self.assertDeviceChecks(dc, op, [X], [0])
        self.assertGradientChecks(gc, op, [X], 0, [0])


            

Reported by Pylint.

Argument name "dc" doesn't conform to snake_case naming style
Error

Line: 22 Column: 5

                      X=hu.tensor(elements=hu.floats(min_value=-0.7, max_value=0.7)),
        **hu.gcs)
    @settings(deadline=10000)
    def test_erf(self, X, gc, dc):
        op = core.CreateOperator('Erf', ["X"], ["Y"])
        self.assertReferenceChecks(gc, op, [X], lambda x: (np.vectorize(math.erf)(X),))
        self.assertDeviceChecks(dc, op, [X], [0])
        self.assertGradientChecks(gc, op, [X], 0, [0])


            

Reported by Pylint.

Missing function or method docstring
Error

Line: 22 Column: 5

                      X=hu.tensor(elements=hu.floats(min_value=-0.7, max_value=0.7)),
        **hu.gcs)
    @settings(deadline=10000)
    def test_erf(self, X, gc, dc):
        op = core.CreateOperator('Erf', ["X"], ["Y"])
        self.assertReferenceChecks(gc, op, [X], lambda x: (np.vectorize(math.erf)(X),))
        self.assertDeviceChecks(dc, op, [X], [0])
        self.assertGradientChecks(gc, op, [X], 0, [0])


            

Reported by Pylint.

Variable name "op" doesn't conform to snake_case naming style
Error

Line: 23 Column: 9

                      **hu.gcs)
    @settings(deadline=10000)
    def test_erf(self, X, gc, dc):
        op = core.CreateOperator('Erf', ["X"], ["Y"])
        self.assertReferenceChecks(gc, op, [X], lambda x: (np.vectorize(math.erf)(X),))
        self.assertDeviceChecks(dc, op, [X], [0])
        self.assertGradientChecks(gc, op, [X], 0, [0])



            

Reported by Pylint.

test/backward_compatibility/dump_all_function_schemas.py
9 issues
Unable to import 'torch'
Error

Line: 3 Column: 1

              
import argparse
import torch


def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:

            

Reported by Pylint.

Access to a protected member _jit_get_all_schemas of a client class
Error

Line: 7 Column: 15

              

def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:
        for s in schemas:
            f.write(str(s))
            f.write('\n')

            

Reported by Pylint.
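This dump script reaches into `torch._C` on purpose, so the idiomatic fix is a scoped disable comment rather than a code change. Demonstrated below with a stand-in object, since `torch` may not be importable in this sketch:

```python
class _FakeC:
    """Stand-in for ``torch._C`` so the sketch runs without torch installed."""
    @staticmethod
    def _jit_get_all_schemas():
        return ["aten::add.Tensor(Tensor self, Tensor other) -> Tensor"]

torch_c = _FakeC()
# A trailing disable documents that the private access is intentional:
schemas = torch_c._jit_get_all_schemas()  # pylint: disable=protected-access
print(len(schemas))
```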

Access to a protected member _C of a client class
Error

Line: 7 Column: 15

              

def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:
        for s in schemas:
            f.write(str(s))
            f.write('\n')

            

Reported by Pylint.

Access to a protected member _C of a client class
Error

Line: 8 Column: 16

              
def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:
        for s in schemas:
            f.write(str(s))
            f.write('\n')


            

Reported by Pylint.

Access to a protected member _jit_get_custom_class_schemas of a client class
Error

Line: 8 Column: 16

              
def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:
        for s in schemas:
            f.write(str(s))
            f.write('\n')


            

Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

              
import argparse
import torch


def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:

            

Reported by Pylint.

Missing function or method docstring
Error

Line: 6 Column: 1

              import torch


def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:
        for s in schemas:
            f.write(str(s))

            

Reported by Pylint.

Variable name "f" doesn't conform to snake_case naming style
Error

Line: 9 Column: 33

              def dump(filename):
    schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:
        for s in schemas:
            f.write(str(s))
            f.write('\n')



            

Reported by Pylint.

Variable name "s" doesn't conform to snake_case naming style
Error

Line: 10 Column: 13

                  schemas = torch._C._jit_get_all_schemas()
    schemas += torch._C._jit_get_custom_class_schemas()
    with open(filename, 'w') as f:
        for s in schemas:
            f.write(str(s))
            f.write('\n')


if __name__ == '__main__':

            

Reported by Pylint.

test/cpp_api_parity/parity_table_parser.py
9 issues
Redefining built-in 'str'
Error

Line: 33 Column: 29

              ```
'''
def parse_parity_tracker_table(file_path):
    def parse_parity_choice(str):
        if str in ['Yes', 'No']:
            return str == 'Yes'
        else:
            raise RuntimeError(
                '{} is not a supported parity choice. The valid choices are "Yes" and "No".'.format(str))

            

Reported by Pylint.
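Renaming the parameter from `str` to, say, `choice` removes the shadowing of the built-in; the unnecessary `else` after `return` flagged below in the same function can be dropped at the same time. A sketch:

```python
def parse_parity_choice(choice):
    """Map 'Yes'/'No' to a boolean; parameter renamed from ``str``."""
    if choice in ('Yes', 'No'):
        return choice == 'Yes'
    # No ``else`` needed after ``return`` (Pylint R1705).
    raise RuntimeError(
        '{} is not a supported parity choice. '
        'The valid choices are "Yes" and "No".'.format(choice))

print(parse_parity_choice('Yes'), parse_parity_choice('No'))
```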

Missing module docstring
Error

Line: 1 Column: 1

              from collections import namedtuple

ParityStatus = namedtuple('ParityStatus', ['has_impl_parity', 'has_doc_parity'])

'''
This function expects the parity tracker Markdown file to have the following format:

```
## package1_name

            

Reported by Pylint.

Missing function or method docstring
Error

Line: 32 Column: 1

                      -> ParityStatus
```
'''
def parse_parity_tracker_table(file_path):
    def parse_parity_choice(str):
        if str in ['Yes', 'No']:
            return str == 'Yes'
        else:
            raise RuntimeError(

            

Reported by Pylint.

Unnecessary "else" after "return"
Error

Line: 34 Column: 9

              '''
def parse_parity_tracker_table(file_path):
    def parse_parity_choice(str):
        if str in ['Yes', 'No']:
            return str == 'Yes'
        else:
            raise RuntimeError(
                '{} is not a supported parity choice. The valid choices are "Yes" and "No".'.format(str))


            

Reported by Pylint.

Line too long (105/100)
Error

Line: 38 Column: 1

                          return str == 'Yes'
        else:
            raise RuntimeError(
                '{} is not a supported parity choice. The valid choices are "Yes" and "No".'.format(str))

    parity_tracker_dict = {}

    with open(file_path, 'r') as f:
        all_text = f.read()

            

Reported by Pylint.

Variable name "f" doesn't conform to snake_case naming style
Error

Line: 42 Column: 34

              
    parity_tracker_dict = {}

    with open(file_path, 'r') as f:
        all_text = f.read()
        packages = all_text.split('##')
        for package in packages[1:]:
            lines = [line.strip() for line in package.split('\n') if line.strip() != '']
            package_name = lines[0]

            

Reported by Pylint.

Unnecessary "else" after "raise"
Error

Line: 48 Column: 13

                      for package in packages[1:]:
            lines = [line.strip() for line in package.split('\n') if line.strip() != '']
            package_name = lines[0]
            if package_name in parity_tracker_dict:
                raise RuntimeError("Duplicated package name `{}` found in {}".format(package_name, file_path))
            else:
                parity_tracker_dict[package_name] = {}
            for api_status in lines[3:]:
                api_name, has_impl_parity_str, has_doc_parity_str = [x.strip() for x in api_status.split('|')]

            

Reported by Pylint.
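As with `no-else-return`, the `else` after `raise` can be flattened: execution only continues past the `raise` when no exception occurred. A sketch of the de-nested duplicate-package check, extracted into a hypothetical helper for illustration:

```python
def register_package(tracker, package_name, file_path):
    """De-nested duplicate check from parse_parity_tracker_table (sketch)."""
    if package_name in tracker:
        raise RuntimeError(
            "Duplicated package name `{}` found in {}".format(
                package_name, file_path))
    # No ``else`` branch needed: this line only runs when nothing was raised.
    tracker[package_name] = {}

tracker = {}
register_package(tracker, "torch.nn", "parity-tracker.md")
print(tracker)
```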

Line too long (110/100)
Error

Line: 49 Column: 1

                          lines = [line.strip() for line in package.split('\n') if line.strip() != '']
            package_name = lines[0]
            if package_name in parity_tracker_dict:
                raise RuntimeError("Duplicated package name `{}` found in {}".format(package_name, file_path))
            else:
                parity_tracker_dict[package_name] = {}
            for api_status in lines[3:]:
                api_name, has_impl_parity_str, has_doc_parity_str = [x.strip() for x in api_status.split('|')]
                parity_tracker_dict[package_name][api_name] = ParityStatus(

            

Reported by Pylint.

Line too long (110/100)
Error

Line: 53 Column: 1

                          else:
                parity_tracker_dict[package_name] = {}
            for api_status in lines[3:]:
                api_name, has_impl_parity_str, has_doc_parity_str = [x.strip() for x in api_status.split('|')]
                parity_tracker_dict[package_name][api_name] = ParityStatus(
                    has_impl_parity=parse_parity_choice(has_impl_parity_str),
                    has_doc_parity=parse_parity_choice(has_doc_parity_str))

    return parity_tracker_dict

            

Reported by Pylint.

test/cpp_api_parity/sample_module.py
9 issues
Unable to import 'torch'
Error

Line: 1 Column: 1

              import torch

'''
`SampleModule` is used by `test_cpp_api_parity.py` to test that Python / C++ API
parity test harness works for `torch.nn.Module` subclasses.

When `SampleModule.has_parity` is true, behavior of `forward` / `backward`
is the same as the C++ equivalent.


            

Reported by Pylint.

String statement has no effect
Error

Line: 3 Column: 1

              import torch

'''
`SampleModule` is used by `test_cpp_api_parity.py` to test that Python / C++ API
parity test harness works for `torch.nn.Module` subclasses.

When `SampleModule.has_parity` is true, behavior of `forward` / `backward`
is the same as the C++ equivalent.


            

Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

              import torch

'''
`SampleModule` is used by `test_cpp_api_parity.py` to test that Python / C++ API
parity test harness works for `torch.nn.Module` subclasses.

When `SampleModule.has_parity` is true, behavior of `forward` / `backward`
is the same as the C++ equivalent.


            

Reported by Pylint.

Missing class docstring
Error

Line: 14 Column: 1

              is different from the C++ equivalent.
'''

class SampleModule(torch.nn.Module):
    def __init__(self, has_parity, has_submodule):
        super(SampleModule, self).__init__()
        self.has_parity = has_parity
        if has_submodule:
            self.submodule = SampleModule(self.has_parity, False)

            

Reported by Pylint.

Consider using Python 3 style super() without arguments
Error

Line: 16 Column: 9

              
class SampleModule(torch.nn.Module):
    def __init__(self, has_parity, has_submodule):
        super(SampleModule, self).__init__()
        self.has_parity = has_parity
        if has_submodule:
            self.submodule = SampleModule(self.has_parity, False)

        self.has_submodule = has_submodule

            

Reported by Pylint.
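On Python 3 the zero-argument form of `super()` is equivalent and preferred. A sketch with a stand-in parent class, since `torch.nn.Module` may not be available here:

```python
class Base:
    """Stand-in for torch.nn.Module in this sketch."""
    def __init__(self):
        self.base_ready = True

class SampleModule(Base):
    """Uses the zero-argument ``super()`` form preferred on Python 3."""
    def __init__(self, has_parity):
        super().__init__()  # was: super(SampleModule, self).__init__()
        self.has_parity = has_parity

module = SampleModule(True)
print(module.base_ready, module.has_parity)
```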

Missing function or method docstring
Error

Line: 26 Column: 5

              
        self.reset_parameters()

    def reset_parameters(self):
        with torch.no_grad():
            self.param.fill_(1)

    def forward(self, x):
        submodule_forward_result = self.submodule(x) if hasattr(self, 'submodule') else 0

            

Reported by Pylint.

Argument name "x" doesn't conform to snake_case naming style
Error

Line: 30 Column: 5

                      with torch.no_grad():
            self.param.fill_(1)

    def forward(self, x):
        submodule_forward_result = self.submodule(x) if hasattr(self, 'submodule') else 0
        if self.has_parity:
            return x + self.param * 2 + submodule_forward_result
        else:
            return x + self.param * 4 + submodule_forward_result + 3

            

Reported by Pylint.

Missing function or method docstring
Error

Line: 30 Column: 5

                      with torch.no_grad():
            self.param.fill_(1)

    def forward(self, x):
        submodule_forward_result = self.submodule(x) if hasattr(self, 'submodule') else 0
        if self.has_parity:
            return x + self.param * 2 + submodule_forward_result
        else:
            return x + self.param * 4 + submodule_forward_result + 3

            

Reported by Pylint.

Unnecessary "else" after "return"
Error

Line: 32 Column: 9

              
    def forward(self, x):
        submodule_forward_result = self.submodule(x) if hasattr(self, 'submodule') else 0
        if self.has_parity:
            return x + self.param * 2 + submodule_forward_result
        else:
            return x + self.param * 4 + submodule_forward_result + 3

torch.nn.SampleModule = SampleModule

            

Reported by Pylint.

caffe2/python/onnx/backend_rep.py
9 issues
Unable to import 'onnx.backend.base'
Error

Line: 10 Column: 1

              
from caffe2.python import core
from caffe2.proto import caffe2_pb2
from onnx.backend.base import BackendRep, namedtupledict

class Caffe2Rep(BackendRep):
    def __init__(self, init_net, predict_net, workspace, uninitialized):
        super(Caffe2Rep, self).__init__()
        self.init_net = init_net

            

Reported by Pylint.

Catching too general exception Exception
Error

Line: 62 Column: 20

                      for name in self.predict_net.external_output:
            try:
                output_values.append(self.workspace.FetchBlob(name))
            except Exception:
                output_values.append(self.workspace.FetchInt8Blob(name))
        return namedtupledict('Outputs',
                              self.predict_net.external_output)(*output_values)

            

Reported by Pylint.
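Catching a concrete exception type keeps unrelated failures from being silently swallowed by the int8 fallback. A sketch with a fake workspace (the class, its methods' failure mode, and the chosen `RuntimeError` are all assumptions; the real caffe2 workspace may raise a different type):

```python
class FakeWorkspace:
    """Stand-in for the caffe2 workspace in this sketch."""
    def FetchBlob(self, name):
        if name.startswith("int8_"):
            raise RuntimeError("blob is int8")
        return name.upper()

    def FetchInt8Blob(self, name):
        return "int8:" + name

def fetch_output(workspace, name):
    """Fallback logic from Caffe2Rep.run with a narrower except clause."""
    try:
        return workspace.FetchBlob(name)
    except RuntimeError:  # was: except Exception
        return workspace.FetchInt8Blob(name)

print(fetch_output(FakeWorkspace(), "y"), fetch_output(FakeWorkspace(), "int8_y"))
```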

Missing module docstring
Error

Line: 1 Column: 1

              # @package onnx
# Module caffe2.python.onnx.backend_rep





from caffe2.python import core
from caffe2.proto import caffe2_pb2

            

Reported by Pylint.

Too few public methods (1/2)
Error

Line: 12 Column: 1

              from caffe2.proto import caffe2_pb2
from onnx.backend.base import BackendRep, namedtupledict

class Caffe2Rep(BackendRep):
    def __init__(self, init_net, predict_net, workspace, uninitialized):
        super(Caffe2Rep, self).__init__()
        self.init_net = init_net
        self.predict_net = predict_net
        self.workspace = workspace

            

Reported by Pylint.

Missing class docstring
Error

Line: 12 Column: 1

              from caffe2.proto import caffe2_pb2
from onnx.backend.base import BackendRep, namedtupledict

class Caffe2Rep(BackendRep):
    def __init__(self, init_net, predict_net, workspace, uninitialized):
        super(Caffe2Rep, self).__init__()
        self.init_net = init_net
        self.predict_net = predict_net
        self.workspace = workspace

            

Reported by Pylint.

Consider using Python 3 style super() without arguments
Error

Line: 14 Column: 9

              
class Caffe2Rep(BackendRep):
    def __init__(self, init_net, predict_net, workspace, uninitialized):
        super(Caffe2Rep, self).__init__()
        self.init_net = init_net
        self.predict_net = predict_net
        self.workspace = workspace
        # The list of uninitialized external_inputs in workspace, we need this to
        # pair the name with given sequence inputs.

            

Reported by Pylint.

Missing function or method docstring
Error

Line: 30 Column: 5

                          return 'gpu_{}'.format(self.predict_net.device_option.device_id)
        return ''

    def run(self, inputs, **kwargs):
        super(Caffe2Rep, self).run(inputs, **kwargs)
        with core.DeviceScope(self.predict_net.device_option):
            if isinstance(inputs, dict):
                with core.NameScope(self._name_scope):
                    for key, value in inputs.items():

            

Reported by Pylint.

Consider using Python 3 style super() without arguments
Error

Line: 31 Column: 9

                      return ''

    def run(self, inputs, **kwargs):
        super(Caffe2Rep, self).run(inputs, **kwargs)
        with core.DeviceScope(self.predict_net.device_option):
            if isinstance(inputs, dict):
                with core.NameScope(self._name_scope):
                    for key, value in inputs.items():
                        self.workspace.FeedBlob(key, value)

            

Reported by Pylint.

Consider merging these isinstance calls to isinstance(inputs, (list, tuple))
Error

Line: 37 Column: 18

                              with core.NameScope(self._name_scope):
                    for key, value in inputs.items():
                        self.workspace.FeedBlob(key, value)
            elif isinstance(inputs, list) or isinstance(inputs, tuple):
                if len(self.uninitialized) != len(inputs):
                    raise RuntimeError('Expected {} values for uninitialized '
                                       'graph inputs ({}), but got {}.'.format(
                                           len(self.uninitialized),
                                           ', '.join(self.uninitialized),

            

Reported by Pylint.
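`isinstance` accepts a tuple of types, so the two calls joined by `or` collapse into one. A sketch of the merged dispatch:

```python
def feed_kind(inputs):
    """Merged form of the ``isinstance`` pair flagged in Caffe2Rep.run."""
    if isinstance(inputs, dict):
        return "named"
    if isinstance(inputs, (list, tuple)):  # one call instead of two via ``or``
        return "positional"
    return "unsupported"

print(feed_kind({"a": 1}), feed_kind([1]), feed_kind((1,)), feed_kind(3))
```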

test/distributed/elastic/rendezvous/etcd_server_test.py
9 issues
Unable to import 'etcd'
Error

Line: 9 Column: 1

              import os
import unittest

import etcd
from torch.distributed.elastic.rendezvous.etcd_rendezvous import (
    EtcdRendezvous,
    EtcdRendezvousHandler,
)
from torch.distributed.elastic.rendezvous.etcd_server import EtcdServer

            

Reported by Pylint.

Unable to import 'torch.distributed.elastic.rendezvous.etcd_rendezvous'
Error

Line: 10 Column: 1

              import unittest

import etcd
from torch.distributed.elastic.rendezvous.etcd_rendezvous import (
    EtcdRendezvous,
    EtcdRendezvousHandler,
)
from torch.distributed.elastic.rendezvous.etcd_server import EtcdServer


            

Reported by Pylint.

Unable to import 'torch.distributed.elastic.rendezvous.etcd_server'
Error

Line: 14 Column: 1

                  EtcdRendezvous,
    EtcdRendezvousHandler,
)
from torch.distributed.elastic.rendezvous.etcd_server import EtcdServer

if os.getenv("CIRCLECI"):
    print("T85992919 temporarily disabling in circle ci", file=sys.stderr)
    sys.exit(0)


            

Reported by Pylint.

Undefined variable 'sys'
Error

Line: 17 Column: 64

              from torch.distributed.elastic.rendezvous.etcd_server import EtcdServer

if os.getenv("CIRCLECI"):
    print("T85992919 temporarily disabling in circle ci", file=sys.stderr)
    sys.exit(0)

class EtcdServerTest(unittest.TestCase):
    def test_etcd_server_start_stop(self):
        server = EtcdServer()

            

Reported by Pylint.
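The flagged file uses `sys.stderr` and `sys.exit` without importing `sys`; the fix is a one-line standard-library import at the top of the module. A sketch (the environment-variable name below is illustrative only, and the `sys.exit` call is left commented so the sketch runs through):

```python
import os
import sys  # the missing import: the module references sys.stderr and sys.exit

if os.getenv("HYPOTHETICAL_CI_FLAG"):
    print("temporarily disabled", file=sys.stderr)
    # sys.exit(0) would stop the test module here

print(sys.__name__)
```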

Undefined variable 'sys'
Error

Line: 18 Column: 5

              
if os.getenv("CIRCLECI"):
    print("T85992919 temporarily disabling in circle ci", file=sys.stderr)
    sys.exit(0)

class EtcdServerTest(unittest.TestCase):
    def test_etcd_server_start_stop(self):
        server = EtcdServer()
        server.start()

            

Reported by Pylint.

Missing module docstring
Error

Line: 1 Column: 1

              # Copyright (c) Facebook, Inc. and its affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import os
import unittest

import etcd

            

Reported by Pylint.

Missing class docstring
Error

Line: 20 Column: 1

                  print("T85992919 temporarily disabling in circle ci", file=sys.stderr)
    sys.exit(0)

class EtcdServerTest(unittest.TestCase):
    def test_etcd_server_start_stop(self):
        server = EtcdServer()
        server.start()

        try:

            

Reported by Pylint.

Missing function or method docstring
Error

Line: 21 Column: 5

                  sys.exit(0)

class EtcdServerTest(unittest.TestCase):
    def test_etcd_server_start_stop(self):
        server = EtcdServer()
        server.start()

        try:
            port = server.get_port()

            

Reported by Pylint.

Missing function or method docstring
Error

Line: 36 Column: 5

                      finally:
            server.stop()

    def test_etcd_server_with_rendezvous(self):
        server = EtcdServer()
        server.start()

        client = etcd.Client(server.get_host(), server.get_port())


            

Reported by Pylint.

caffe2/utils/math_cpu.cc
9 issues
memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 1541 Column: 7 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                  if (copy) {
      copy(static_cast<const char*>(A), static_cast<char*>(B), N * M);
    } else {
      memcpy(
          static_cast<char*>(B), static_cast<const char*>(A), itemsize * N * M);
    }
    return;
  }


            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 1554 Column: 7 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                        static_cast<char*>(B) + ldb * i * itemsize,
          N);
    } else {
      memcpy(
          static_cast<char*>(B) + ldb * i * itemsize,
          static_cast<const char*>(A) + lda * i * itemsize,
          itemsize * N);
    }
  }

            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 1621 Column: 14 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                  }                                                                    \
    if (lda == N) {                                                      \
      if (ldb == N) {                                                    \
        std::memcpy(B, A, sizeof(T) * M * N);                            \
      } else {                                                           \
        EigenOuterStridedMatrixMap<T>(B, N, M, EigenOuterStride(ldb)) =  \
            ConstEigenMatrixMap<T>(A, N, M);                             \
      }                                                                  \
    } else {                                                             \

            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 1924 Column: 13 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                        const auto iy = y * stride_h + kh;
          const auto ix = kw;
          if (stride_w == 1) {
            memcpy(
                dst + (t * output_hw_size + y * output_w),
                src + (it * channel_hw_size + iy * width + ix),
                sizeof(T) * output_w);
          } else {
            for (auto x = 0; x < output_w; x++) {

            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 1930 Column: 15 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                              sizeof(T) * output_w);
          } else {
            for (auto x = 0; x < output_w; x++) {
              memcpy(
                  dst + (t * output_hw_size + y * output_w + x),
                  src + (it * channel_hw_size + iy * width + ix + x * stride_w),
                  sizeof(T));
            }
          }

            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 2235 Column: 20 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                        }
          for (int iw = w_pad; iw < w_pad + dkernel_w; iw += dilation_w) {
            if (utils::IsAGeZeroAndALtB(iw, W)) {
              std::memcpy(
                  col_data, img_data + (ih * W + iw) * C, sizeof(float) * C);
            } else {
              std::memset(col_data, 0, sizeof(float) * C);
            }
            col_data += C;

            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 2265 Column: 22 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                          if (utils::IsAGeZeroAndALtB(ih, H) &&
                utils::IsAGeZeroAndALtB(iw, W)) {
              for (int g = 0; g < groups; ++g) {
                std::memcpy(
                    col_data + ((g * kernel_h + r) * kernel_w + s) * C_per_G,
                    img_data + (ih * W + iw) * C + g * C_per_G,
                    sizeof(float) * C_per_G);
              }
            } else {

            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 2341 Column: 24 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                                utils::IsAGeZeroAndALtB(ih, H) &&
                  utils::IsAGeZeroAndALtB(iw, W)) {
                for (int g = 0; g < groups; ++g) {
                  std::memcpy(
                      col_data +
                          (((g * kernel_t + q) * kernel_h + r) * kernel_w + s) *
                              C_per_G,
                      img_data + ((it * H + ih) * W + iw) * C + g * C_per_G,
                      sizeof(TData) * C_per_G);

            

Reported by FlawFinder.

memcpy - Does not check for buffer overflows when copying to destination
Security

Line: 2794 Column: 7 CWE codes: 120
Suggestion: Make sure destination can always hold the source data

                C10_EXPORT void CopyVector<T, CPUContext>(                        \
      const int N, const T* src, T* dst, CPUContext* /*context*/) { \
    if (src != dst && N > 0) {                                      \
      memcpy(dst, src, sizeof(T) * N);                              \
    }                                                               \
  }
CAFFE2_SPECIALIZED_COPYVECTOR(float)
CAFFE2_SPECIALIZED_COPYVECTOR(int32_t)
#undef CAFFE2_SPECIALIZED_COPYVECTOR

            

Reported by FlawFinder.

test/cpp/jit/test_backend.cpp
9 issues
syntax error
Error

Line: 83

                AT_ASSERT(res[1].toTensor().equal(ref[1].toTensor()));
}

TEST(BackendTest, ToBackendNotAvailable) {
  Module m("m");
  m.define(R"(
    def forward(self, x, h):
        return self.accum(x, h), self.sub_accum(x, h)


            

Reported by Cppcheck.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 79 Column: 31 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

              
   */
  auto res = lm.forward(inputs).toTuple()->elements();
  AT_ASSERT(res[0].toTensor().equal(ref[0].toTensor()));
  AT_ASSERT(res[1].toTensor().equal(ref[1].toTensor()));
}

TEST(BackendTest, ToBackendNotAvailable) {
  Module m("m");

            

Reported by FlawFinder.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 80 Column: 31 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

                 */
  auto res = lm.forward(inputs).toTuple()->elements();
  AT_ASSERT(res[0].toTensor().equal(ref[0].toTensor()));
  AT_ASSERT(res[1].toTensor().equal(ref[1].toTensor()));
}

TEST(BackendTest, ToBackendNotAvailable) {
  Module m("m");
  m.define(R"(

            

Reported by FlawFinder.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 137 Column: 28 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

                auto lm = torch::jit::detail::codegen_backend_module(
      "backend_with_compiler_demo", m, compile_spec, any_dict_ty);
  auto res = lm.forward(inputs);
  AT_ASSERT(res.toTensor().equal(ref.toTensor()));

  std::stringstream ss;
  lm._save_for_mobile(ss);
  auto mlm = _load_for_mobile(ss);
  auto mres = mlm.forward(inputs);

            

Reported by FlawFinder.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 143 Column: 29 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

                lm._save_for_mobile(ss);
  auto mlm = _load_for_mobile(ss);
  auto mres = mlm.forward(inputs);
  AT_ASSERT(mres.toTensor().equal(ref.toTensor()));
}

TEST(BackendTest, TestComposite) {
  c10::Dict<IValue, IValue> compile_spec(StringType::get(), AnyType::get());
  c10::Dict<IValue, IValue> fake_dict(StringType::get(), AnyType::get());

            

Reported by FlawFinder.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 187 Column: 32 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

                auto mc = _load_for_mobile(ss);
  auto res_mobile = mc.forward(inputs);

  AT_ASSERT(res_jit.toTensor().equal(res_mobile.toTensor()));
}

Module getCompositeModuleWithSameNameSubModules() {
  // Two submodules with same module name but different forward and other
  // functions should be serialized and loaded correctly.

            

Reported by FlawFinder.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 243 Column: 32 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

                c._save_for_mobile(ss);
  auto mc = _load_for_mobile(ss);
  auto res_mobile = mc.forward(inputs);
  AT_ASSERT(res_jit.toTensor().equal(res_mobile.toTensor()));
}

TEST(BackendTest, TestConsistencyOfCompositeWithSetStates) {
  Module c = getCompositeModuleWithSameNameSubModules();


            

Reported by FlawFinder.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 269 Column: 42 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

                auto mc_reload = _load_for_mobile(ss_resave);
  auto res_mobile_reload = mc_reload.forward(inputs);

  AT_ASSERT(res_mobile_reload.toTensor().equal(res_mobile.toTensor()));

  auto mc_methods = mc.get_methods();
  auto mc_reload_methods = mc_reload.get_methods();

  std::vector<std::string> mc_method_qns, mc_reload_method_qns;

            

Reported by FlawFinder.

equal - Function does not check the second iterator for over-read conditions
Security

Line: 292 Column: 18 CWE codes: 126
Suggestion: This function is often discouraged by most C++ coding standards in favor of its safer alternatives provided since C++14. Consider using a form of this function that checks the second iterator before potentially overflowing it

                    std::back_inserter(mc_reload_method_qns),
      get_qual_name);

  AT_ASSERT(std::equal(
      mc_method_qns.begin(),
      mc_method_qns.end(),
      mc_reload_method_qns.begin()));
}


            

Reported by FlawFinder.