Module rulebook-pylint.rulebook_pylint.trailing_comma
Functions
def register(linter: PyLinter)
-
def register(linter: 'PyLinter') -> None:
    linter.register_checker(TrailingCommaChecker(linter))
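Since this module exposes a `register` function, it can presumably be loaded as a pylint plugin. A sketch of the configuration, assuming the module is importable as `rulebook_pylint.trailing_comma` (the exact plugin entry point may differ; check the project's README):

```
[MAIN]
load-plugins = rulebook_pylint.trailing_comma
```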
Classes
class TrailingCommaChecker (linter: PyLinter)
-
class TrailingCommaChecker(RulebookTokenChecker):
    """See wiki: https://github.com/hanggrian/rulebook/wiki/Rules/#trailing-comma"""
    MSG_SINGLE: str = 'trailing-comma-single'
    MSG_MULTI: str = 'trailing-comma-multi'

    name: str = 'trailing-comma'
    msgs: dict[str, MessageDefinitionTuple] = Messages.of(MSG_SINGLE, MSG_MULTI)

    def process_tokens(self, tokens: list[TokenInfo]) -> None:
        # filter out comments
        tokens = [t for t in tokens if t.type != COMMENT]

        token: TokenInfo
        for i, token in enumerate(tokens):
            # find closing parenthesis
            if token.type != OP or \
                    token.string not in [')', ']', '}']:
                continue

            # checks for violation
            prev_token: TokenInfo = tokens[i - 1]
            prev_token2: TokenInfo = tokens[i - 2]
            if prev_token.type == OP and prev_token.string == ',':
                self.add_message(
                    self.MSG_SINGLE,
                    line=prev_token.start[0],
                    col_offset=prev_token.end[1],
                )
                continue
            if prev_token.type != NL:
                continue
            if prev_token2.type == OP and prev_token2.string == ',':
                continue
            self.add_message(
                self.MSG_MULTI,
                line=prev_token2.start[0],
                col_offset=prev_token2.end[1],
            )
See wiki: https://github.com/hanggrian/rulebook/wiki/Rules/#trailing-comma
Checker instances should have the linter as argument.
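Judging from the two message IDs and the token scan in the source, the rule distinguishes two cases: a trailing comma before a closer on the same line (flagged), and a multiline closer whose last element lacks a trailing comma (flagged). A minimal illustration; the `point` helper is hypothetical and exists only to make the snippet runnable:

```python
def point(x, y):
    """Hypothetical helper used only for illustration."""
    return (x, y)

# trailing-comma-single: a trailing comma in a single-line call is flagged.
a = point(1, 2,)

# trailing-comma-multi: multiline arguments *without* a trailing comma are flagged.
b = point(
    1,
    2
)

# Preferred style: multiline arguments end with a trailing comma; no message.
c = point(
    1,
    2,
)
```

All three forms are equivalent at runtime; the checker only enforces the style of the last.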
Ancestors
- rulebook_pylint.checkers.RulebookTokenChecker
- pylint.checkers.base_checker.BaseTokenChecker
- pylint.checkers.base_checker.BaseChecker
- pylint.config.arguments_provider._ArgumentsProvider
- abc.ABC
Class variables
var MSG_MULTI : str
-
Message ID emitted when a multiline collection or argument list is missing a trailing comma: 'trailing-comma-multi'.
var MSG_SINGLE : str
-
Message ID emitted when a single-line collection or argument list ends with a trailing comma: 'trailing-comma-single'.
var msgs : dict[str, tuple[str, str, str] | tuple[str, str, str, pylint.typing.ExtraMessageOptions]]
-
Message definitions for this checker, built from MSG_SINGLE and MSG_MULTI via Messages.of().
var name : str
-
Checker name: 'trailing-comma'.
Methods
def process_tokens(self, tokens: list[tokenize.TokenInfo]) ‑> None
-
def process_tokens(self, tokens: list[TokenInfo]) -> None:
    # filter out comments
    tokens = [t for t in tokens if t.type != COMMENT]

    token: TokenInfo
    for i, token in enumerate(tokens):
        # find closing parenthesis
        if token.type != OP or \
                token.string not in [')', ']', '}']:
            continue

        # checks for violation
        prev_token: TokenInfo = tokens[i - 1]
        prev_token2: TokenInfo = tokens[i - 2]
        if prev_token.type == OP and prev_token.string == ',':
            self.add_message(
                self.MSG_SINGLE,
                line=prev_token.start[0],
                col_offset=prev_token.end[1],
            )
            continue
        if prev_token.type != NL:
            continue
        if prev_token2.type == OP and prev_token2.string == ',':
            continue
        self.add_message(
            self.MSG_MULTI,
            line=prev_token2.start[0],
            col_offset=prev_token2.end[1],
        )
Should be overridden by subclasses.
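To see what the scan does outside of pylint, here is a self-contained sketch that applies the same token walk to a source string using the standard-library tokenize module. The function name `find_trailing_comma_issues` is hypothetical, and pylint's `add_message` call is replaced by collecting `(message_id, line, col)` tuples:

```python
import io
from tokenize import COMMENT, NL, OP, generate_tokens


def find_trailing_comma_issues(source: str) -> list[tuple[str, int, int]]:
    """Hypothetical standalone mirror of TrailingCommaChecker.process_tokens."""
    # filter out comments, as the checker does
    tokens = [t for t in generate_tokens(io.StringIO(source).readline)
              if t.type != COMMENT]
    issues: list[tuple[str, int, int]] = []
    for i, token in enumerate(tokens):
        # only closing brackets are of interest
        if token.type != OP or token.string not in (')', ']', '}'):
            continue
        prev_token = tokens[i - 1]
        prev_token2 = tokens[i - 2]
        # comma directly before the closer on the same line
        if prev_token.type == OP and prev_token.string == ',':
            issues.append(
                ('trailing-comma-single', prev_token.start[0], prev_token.end[1]))
            continue
        # closer on its own line: the last element must end with a comma
        if prev_token.type != NL:
            continue
        if prev_token2.type == OP and prev_token2.string == ',':
            continue
        issues.append(
            ('trailing-comma-multi', prev_token2.start[0], prev_token2.end[1]))
    return issues
```

Note that inside brackets, tokenize emits NL (non-logical newline) tokens for line breaks, which is how the scan tells a multiline closer from a single-line one.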