openEuler:Mainline / python-ruamel-yaml
Changes of Revision 10
View file
_service:tar_scm:python-ruamel-yaml.spec
Changed
@@ -1,11 +1,11 @@
 %global _empty_manifest_terminate_build 0
 Name:           python-ruamel-yaml
-Version:        0.17.21
+Version:        0.17.32
 Release:        1
 Summary:        ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order
 License:        MIT
 URL:            https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree
-Source0:        https://files.pythonhosted.org/packages/46/a9/6ed24832095b692a8cecc323230ce2ec3480015fbfa4b79941bd41b23a3c/ruamel.yaml-0.17.21.tar.gz
+Source0:        https://files.pythonhosted.org/packages/63/dd/b4719a290e49015536bd0ab06ab13e3b468d8697bec6c2f668ac48b05661/ruamel.yaml-0.17.32.tar.gz
 BuildArch:      noarch
 Patch0001:      0000-fix-big-endian-issues.patch
 %description
@@ -80,6 +80,9 @@
 %doc README.rst CHANGES
 %changelog
+* Fri Jul 14 2023 chenzixuan <chenzixuan@kylinos.cn> - 0.17.32-1
+- Upgrade python3-ruamel-yaml to version 0.17.32
+
 * Sat Jun 04 2022 OpenStack_SIG <openstack@openeuler.org> - 0.17.21-1
 - Upgrade python3-ruamel-yaml to version 0.17.21
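A quick sanity check after the rebuild is to import the module and compare the reported version against the bump in the spec above. This is a minimal sketch and assumes the binary package (python3-ruamel-yaml) is installed for the default python3 interpreter:

# Post-upgrade check: the module should report the version the spec now builds.
import ruamel.yaml

print(ruamel.yaml.__version__)                  # expected: '0.17.32'
assert ruamel.yaml.version_info >= (0, 17, 32)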
View file
_service:tar_scm:0000-fix-big-endian-issues.patch
Changed
@@ -5,21 +5,24 @@
 As the cpython code has an endianness bug
 https://sourceforge.net/p/ruamel-yaml/tickets/360/
 Thanks to Rebecca N. Palmer for the tip about sys.byteorder!
+---
+ main.py | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
 
-Index: ruamel.yaml/main.py
-===================================================================
---- ruamel.yaml.orig/main.py	2021-10-14 00:10:27.265523204 +0200
-+++ ruamel.yaml/main.py	2021-10-14 00:11:02.469504291 +0200
-@@ -51,7 +51,7 @@
- 
- 
- class YAML:
--    def __init__(self, *, typ=None, pure=False, output=None, plug_ins=None):  # input=None,
-+    def __init__(self, *, typ=None, pure=None, output=None, plug_ins=None):  # input=None,
-         # type: (Any, Optional[Text], Any, Any, Any) -> None
-         """
-         typ: 'rt'/None -> RoundTripLoader/RoundTripDumper,  (default)
-@@ -64,6 +64,11 @@
+diff --git a/main.py b/main.py
+index 9068282..49e70a4 100644
+--- a/main.py
++++ b/main.py
+@@ -55,7 +55,7 @@ class YAML:
+         self: Any,
+         *,
+         typ: Optional[Union[List[Text], Text]] = None,
+-        pure: Any = False,
++        pure: Any = None,
+         output: Any = None,
+         plug_ins: Any = None,
+     ) -> None:  # input=None,
+@@ -70,6 +70,11 @@ class YAML:
          """
 
          self.typ = ['rt'] if typ is None else (typ if isinstance(typ, list) else [typ])
@@ -31,3 +34,6 @@
          self.pure = pure
          # self._input = input
+-- 
+2.39.1
+
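The hunk that actually adds the fallback logic (the lines inserted after `self.typ = ...`) is elided in the view above. Conceptually, changing the default from `pure=False` to `pure=None` lets the constructor decide at runtime whether to use the C extension. A rough sketch of that idea, not the literal patch body:

# Sketch of the patch's intent (assumed behaviour, not the elided added lines):
# the C extension misbehaves on big-endian machines (ruamel-yaml ticket 360), so
# fall back to the pure-Python implementation unless the caller chose explicitly.
import sys

def _resolve_pure(pure):
    if pure is None:                    # new default introduced by the patch
        return sys.byteorder == 'big'   # assumption: force pure Python on big-endian
    return pure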
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/.ruamel
Deleted
-(directory)
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/.ruamel/__init__.py
Deleted
@@ -1,2 +0,0 @@
-import pkg_resources
-pkg_resources.declare_namespace(__name__)
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/ruamel.yaml.egg-info/namespace_packages.txt
Deleted
@@ -1,1 +0,0 @@
-ruamel
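Both deleted files existed only to declare `ruamel` as a pkg_resources-style namespace package; dropping them presumably leaves the namespace to Python's implicit (PEP 420) handling. A minimal check that the namespace still resolves after the rebuild:

# Verify the ruamel namespace resolves without the pkg_resources declaration
# that the two deleted files used to provide.
import ruamel
import ruamel.yaml

print(list(ruamel.__path__))    # namespace path entries
print(ruamel.yaml.__file__)     # concrete package location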
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/CHANGES -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/CHANGES
Changed
@@ -1,3 +1,93 @@
+0, 17, 32: 2023-06-17
+  - fix issue with scanner getting stuck in infinite loop
+
+0, 17, 31: 2023-05-31
+  - added tag.setter on `ScalarEvent` and on `Node`, that takes either
+    a `Tag` instance, or a str
+    (reported by `Sorin Sbarnea <https://sourceforge.net/u/ssbarnea/profile/>`__)
+
+0, 17, 30: 2023-05-30
+  - fix issue 467, caused by Tag instances not being hashable (reported by
+    `Douglas Raillard
+    <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__)
+
+0, 17, 29: 2023-05-30
+  - changed the internals of the tag property from a string to a class which allows
+    for preservation of the original handle and suffix. This should
+    result in better results using documents with %TAG directives, as well
+    as preserving URI escapes in tag suffixes.
+
+0, 17, 28: 2023-05-26
+  - fix for issue 464: documents ending with document end marker without final newline
+    fail to load (reported by `Mariusz Rusiniak <https://sourceforge.net/u/r2dan/profile/>`__)
+
+0, 17, 27: 2023-05-25
+  - fix issue with inline mappings as value for merge keys
+    (reported by Sirish on `StackOverflow <https://stackoverflow.com/q/76331049/1307905>`__)
+  - fix for 468, error inserting after accessing merge attribute on ``CommentedMap``
+    (reported by `Bastien gerard <https://sourceforge.net/u/bagerard/>`__)
+  - fix for issue 461 pop + insert on same `CommentedMap` key throwing error
+    (reported by `John Thorvald Wodder II <https://sourceforge.net/u/jwodder/profile/>`__)
+
+0, 17, 26: 2023-05-09
+  - Fix for error on edge cage for issue 459
+
+0, 17, 25: 2023-05-09
+  - fix for regression while dumping wrapped strings with too many backslashes removed
+    (issue 459, reported by `Lele Gaifax <https://sourceforge.net/u/lele/profile/>`__)
+
+0, 17, 24: 2023-05-06
+  - rewrite of ``CommentedMap.insert()``. If you have a merge key in
+    the YAML document for the mapping you insert to, the position value should
+    be the one as you look at the YAML input.
+    This fixes issue 453 where other
+    keys of a merged in mapping would show up after an insert (reported by
+    `Alex Miller <https://sourceforge.net/u/millerdevel/profile/>`__). It
+    also fixes a call to `.insert()` resulting into the merge key to move
+    to be the first key if it wasn't already and it is also now possible
+    to insert a key before a merge key (even if the fist key in the mapping).
+  - fix (in the pure Python implementation including default) for issue 447.
+    (reported by `Jack Cherng <https://sourceforge.net/u/jfcherng/profile/>`__,
+    also brought up by brent on
+    `StackOverflow <https://stackoverflow.com/q/40072485/1307905>`__)
+
+0, 17, 23: 2023-05-05
+  - fix 458, error on plain scalars starting with word longer than width.
+    (reported by `Kyle Larose <https://sourceforge.net/u/klarose/profile/>`__)
+  - fix for ``.update()`` no longer correctly handling keyword arguments
+    (reported by John Lin on <StackOverflow
+    `<https://stackoverflow.com/q/76089100/1307905>`__)
+  - fix issue 454: high Unicode (emojis) in quoted strings always
+    escaped (reported by `Michal Čihař <https://sourceforge.net/u/nijel/profile/>`__
+    based on a question on StackOverflow).
+  - fix issue with emitter conservatively inserting extra backslashes in wrapped
+    quoted strings (reported by thebenman on `StackOverflow
+    <https://stackoverflow.com/q/75631454/1307905>`__)
+
+0, 17, 22: 2023-05-02
+
+  - fix issue 449 where the second exclamation marks got URL encoded (reported
+    and fixing PR provided by `John Stark <https://sourceforge.net/u/jods/profile/>`__)
+  - fix issue with indent != 2 and literal scalars with empty first line
+    (reported by wrdis on `StackOverflow <https://stackoverflow.com/q/75584262/1307905>`__)
+  - updated __repr__ of CommentedMap, now that Python's dict is ordered -> no more
+    ordereddict(list-of-tuples)
+  - merge MR 4, handling OctalInt in YAML 1.1
+    (provided by `Jacob Floyd <https://sourceforge.net/u/cognifloyd/profile/>`_)
+  - fix loading of `!!float 42` (reported by Eric on
+    `Stack overflow <https://stackoverflow.com/a/71555107/1307905>`_)
+  - line numbers are now set on `CommentedKeySeq` and `CommentedKeyMap` (which
+    are created if you have a sequence resp. mapping as the key in a mapping)
+  - plain scalars: put single words longer than width on a line of their own, instead
+    of after the previous line (issue 427, reported by `Antoine Cotten
+    <https://sourceforge.net/u/antoineco/profile/>`_). Caveat: this currently results in a
+    space ending the previous line.
+  - fix for folded scalar part of 421: comments after ">" on first line of folded
+    scalars are now preserved (as were those in the same position on literal scalars).
+    Issue reported by Jacob Floyd.
+  - added stacklevel to warnings
+  - typing changed from Py2 compatible comments to Py3, removed various Py2-isms
+
 0, 17, 21: 2022-02-12
   - fix bug in calling `.compose()` method with `pathlib.Path` instance.
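As the upstream notes above (and the README further down) stress, the 0.17 series is the last one that still carries the legacy PyYAML-style module functions; new code is expected to go through a `YAML()` instance. A minimal round-trip example of that API, for orientation:

# Instance-based API that replaces ruamel.yaml.load()/safe_load() and friends.
import sys
from ruamel.yaml import YAML

yaml = YAML()                 # 'rt' (round-trip) mode by default
doc = yaml.load("a: 1  # keep this comment\nb: [2, 3]\n")
doc['b'].append(4)
yaml.dump(doc, sys.stdout)    # comments, flow style and key order are preserved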
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/LICENSE -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/LICENSE
Changed
@@ -1,6 +1,6 @@
 The MIT License (MIT)
 
-Copyright (c) 2014-2022 Anthon van der Neut, Ruamel bvba
+Copyright (c) 2014-2023 Anthon van der Neut, Ruamel bvba
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/PKG-INFO -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/PKG-INFO
Changed
@@ -1,13 +1,12 @@ Metadata-Version: 2.1 Name: ruamel.yaml -Version: 0.17.21 +Version: 0.17.32 Summary: ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order Home-page: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree Author: Anthon van der Neut Author-email: a.van.der.neut@ruamel.eu License: MIT license Keywords: yaml 1.2 parser round-trip preserve quotes order config -Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License @@ -15,8 +14,7 @@ Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.10 -Classifier: Programming Language :: Python :: 3.5 -Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 @@ -36,32 +34,26 @@ ``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python. -:version: 0.17.21 -:updated: 2022-02-12 +:version: 0.17.32 +:updated: 2023-06-17 :documentation: http://yaml.readthedocs.io :repository: https://sourceforge.net/projects/ruamel-yaml/ :pypi: https://pypi.org/project/ruamel.yaml/ -*The 0.16.13 release was the last that was tested to be working on Python 2.7. -The 0.17.21 is the last one tested to be working on Python 3.5, -that is also the last release supporting old PyYAML functions, you'll have to create a -`YAML()` instance and use its `.load()` and `.dump()` methods.* +*Starting with 0.17.22 only Python 3.7+ is supported. +The 0.17 series is also the last to support old PyYAML functions, replace it by +creating a `YAML()` instance and use its `.load()` and `.dump()` methods.* +New(er) functionality is usually only available via the new API. -*Please adjust your dependencies accordingly if necessary. (`ruamel.yaml<0.18`)* +The 0.17.21 was the last one tested to be working on Python 3.5 and 3.6 (the +latter was not tested, because +tox/virtualenv stopped supporting that EOL version). +The 0.16.13 release was the last that was tested to be working on Python 2.7. -Starting with version 0.15.0 the way YAML files are loaded and dumped -has been changing, see the API doc for details. Currently existing -functionality will throw a warning before being changed/removed. -**For production systems already using a pre 0.16 version, you should -pin the version being used with ``ruamel.yaml<=0.15``** if you cannot -fully test upgrading to a newer version. For new usage -pin to the minor version tested ( ``ruamel.yaml<=0.17``) or even to the -exact version used. +*Please adjust/pin your dependencies accordingly if necessary. (`ruamel.yaml<0.18`)* -New functionality is usually only available via the new API, so -make sure you use it and stop using the `ruamel.yaml.safe_load()`, -`ruamel.yaml.round_trip_load()` and `ruamel.yaml.load()` functions -(and their `....dump()` counterparts). +There are now two extra plug-in packages (`ruamel.yaml.bytes` and `ruamel.yaml.string`) +for those not wanting to do the streaming to a `io.BytesIO/StringIO` buffer themselves. If your package uses ``ruamel.yaml`` and is not listed on PyPI, drop me an email, preferably with some information on how you use the @@ -99,6 +91,96 @@ .. 
should insert NEXT: at the beginning of line for next key (with empty line) +0.17.32 (2023-06-17): + - fix issue with scanner getting stuck in infinite loop + +0.17.31 (2023-05-31): + - added tag.setter on `ScalarEvent` and on `Node`, that takes either + a `Tag` instance, or a str + (reported by `Sorin Sbarnea <https://sourceforge.net/u/ssbarnea/profile/>`__) + +0.17.30 (2023-05-30): + - fix issue 467, caused by Tag instances not being hashable (reported by + `Douglas Raillard + <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__) + +0.17.29 (2023-05-30): + - changed the internals of the tag property from a string to a class which allows + for preservation of the original handle and suffix. This should + result in better results using documents with %TAG directives, as well + as preserving URI escapes in tag suffixes. + +0.17.28 (2023-05-26): + - fix for issue 464: documents ending with document end marker without final newline + fail to load (reported by `Mariusz Rusiniak <https://sourceforge.net/u/r2dan/profile/>`__) + +0.17.27 (2023-05-25): + - fix issue with inline mappings as value for merge keys + (reported by Sirish on `StackOverflow <https://stackoverflow.com/q/76331049/1307905>`__) + - fix for 468, error inserting after accessing merge attribute on ``CommentedMap`` + (reported by `Bastien gerard <https://sourceforge.net/u/bagerard/>`__) + - fix for issue 461 pop + insert on same `CommentedMap` key throwing error + (reported by `John Thorvald Wodder II <https://sourceforge.net/u/jwodder/profile/>`__) + +0.17.26 (2023-05-09): + - fix for error on edge cage for issue 459 + +0.17.25 (2023-05-09): + - fix for regression while dumping wrapped strings with too many backslashes removed + (issue 459, reported by `Lele Gaifax <https://sourceforge.net/u/lele/profile/>`__) + +0.17.24 (2023-05-06): + - rewrite of ``CommentedMap.insert()``. If you have a merge key in + the YAML document for the mapping you insert to, the position value should + be the one as you look at the YAML input. + This fixes issue 453 where other + keys of a merged in mapping would show up after an insert (reported by + `Alex Miller <https://sourceforge.net/u/millerdevel/profile/>`__). It + also fixes a call to `.insert()` resulting into the merge key to move + to be the first key if it wasn't already and it is also now possible + to insert a key before a merge key (even if the fist key in the mapping). + - fix (in the pure Python implementation including default) for issue 447. + (reported by `Jack Cherng <https://sourceforge.net/u/jfcherng/profile/>`__, + also brought up by brent on + `StackOverflow <https://stackoverflow.com/q/40072485/1307905>`__) + +0.17.23 (2023-05-05): + - fix 458, error on plain scalars starting with word longer than width. + (reported by `Kyle Larose <https://sourceforge.net/u/klarose/profile/>`__) + - fix for ``.update()`` no longer correctly handling keyword arguments + (reported by John Lin on <StackOverflow + `<https://stackoverflow.com/q/76089100/1307905>`__) + - fix issue 454: high Unicode (emojis) in quoted strings always + escaped (reported by `Michal Čihař <https://sourceforge.net/u/nijel/profile/>`__ + based on a question on StackOverflow). 
+ - fix issue with emitter conservatively inserting extra backslashes in wrapped + quoted strings (reported by thebenman on `StackOverflow + <https://stackoverflow.com/q/75631454/1307905>`__) + +0.17.22 (2023-05-02): + + - fix issue 449 where the second exclamation marks got URL encoded (reported + and fixing PR provided by `John Stark <https://sourceforge.net/u/jods/profile/>`__) + - fix issue with indent != 2 and literal scalars with empty first line + (reported by wrdis on `StackOverflow <https://stackoverflow.com/q/75584262/1307905>`__) + - updated __repr__ of CommentedMap, now that Python's dict is ordered -> no more + ordereddict(list-of-tuples) + - merge MR 4, handling OctalInt in YAML 1.1 + (provided by `Jacob Floyd <https://sourceforge.net/u/cognifloyd/profile/>`_) + - fix loading of `!!float 42` (reported by Eric on + `Stack overflow <https://stackoverflow.com/a/71555107/1307905>`_) + - line numbers are now set on `CommentedKeySeq` and `CommentedKeyMap` (which + are created if you have a sequence resp. mapping as the key in a mapping) + - plain scalars: put single words longer than width on a line of their own, instead + of after the previous line (issue 427, reported by `Antoine Cotten + <https://sourceforge.net/u/antoineco/profile/>`_). Caveat: this currently results in a + space ending the previous line. + - fix for folded scalar part of 421: comments after ">" on first line of folded + scalars are now preserved (as were those in the same position on literal scalars). + Issue reported by Jacob Floyd. + - added stacklevel to warnings + - typing changed from Py2 compatible comments to Py3, removed various Py2-isms + 0.17.21 (2022-02-12): - fix bug in calling `.compose()` method with `pathlib.Path` instance. @@ -137,7 +219,7 @@ attrs with `@attr.s()` (both reported by `ssph <https://sourceforge.net/u/sph/>`__) 0.17.11 (2021-08-19): - - fix error baseclass for ``DuplicateKeyErorr`` (reported by `Łukasz Rogalski + - fix error baseclass for ``DuplicateKeyError`` (reported by `Łukasz Rogalski <https://sourceforge.net/u/lrogalski/>`__) - fix typo in reader error message, causing `KeyError` during reader error (reported by `MTU <https://sourceforge.net/u/mtu/>`__) @@ -288,5 +370,3 @@ For older changes see the file `CHANGES <https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/CHANGES>`_ - -
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/README.rst -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/README.rst
Changed
@@ -4,32 +4,26 @@ ``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python. -:version: 0.17.21 -:updated: 2022-02-12 +:version: 0.17.32 +:updated: 2023-06-17 :documentation: http://yaml.readthedocs.io :repository: https://sourceforge.net/projects/ruamel-yaml/ :pypi: https://pypi.org/project/ruamel.yaml/ -*The 0.16.13 release was the last that was tested to be working on Python 2.7. -The 0.17.21 is the last one tested to be working on Python 3.5, -that is also the last release supporting old PyYAML functions, you'll have to create a -`YAML()` instance and use its `.load()` and `.dump()` methods.* +*Starting with 0.17.22 only Python 3.7+ is supported. +The 0.17 series is also the last to support old PyYAML functions, replace it by +creating a `YAML()` instance and use its `.load()` and `.dump()` methods.* +New(er) functionality is usually only available via the new API. -*Please adjust your dependencies accordingly if necessary. (`ruamel.yaml<0.18`)* +The 0.17.21 was the last one tested to be working on Python 3.5 and 3.6 (the +latter was not tested, because +tox/virtualenv stopped supporting that EOL version). +The 0.16.13 release was the last that was tested to be working on Python 2.7. -Starting with version 0.15.0 the way YAML files are loaded and dumped -has been changing, see the API doc for details. Currently existing -functionality will throw a warning before being changed/removed. -**For production systems already using a pre 0.16 version, you should -pin the version being used with ``ruamel.yaml<=0.15``** if you cannot -fully test upgrading to a newer version. For new usage -pin to the minor version tested ( ``ruamel.yaml<=0.17``) or even to the -exact version used. +*Please adjust/pin your dependencies accordingly if necessary. (`ruamel.yaml<0.18`)* -New functionality is usually only available via the new API, so -make sure you use it and stop using the `ruamel.yaml.safe_load()`, -`ruamel.yaml.round_trip_load()` and `ruamel.yaml.load()` functions -(and their `....dump()` counterparts). +There are now two extra plug-in packages (`ruamel.yaml.bytes` and `ruamel.yaml.string`) +for those not wanting to do the streaming to a `io.BytesIO/StringIO` buffer themselves. If your package uses ``ruamel.yaml`` and is not listed on PyPI, drop me an email, preferably with some information on how you use the @@ -67,6 +61,96 @@ .. should insert NEXT: at the beginning of line for next key (with empty line) +0.17.32 (2023-06-17): + - fix issue with scanner getting stuck in infinite loop + +0.17.31 (2023-05-31): + - added tag.setter on `ScalarEvent` and on `Node`, that takes either + a `Tag` instance, or a str + (reported by `Sorin Sbarnea <https://sourceforge.net/u/ssbarnea/profile/>`__) + +0.17.30 (2023-05-30): + - fix issue 467, caused by Tag instances not being hashable (reported by + `Douglas Raillard + <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__) + +0.17.29 (2023-05-30): + - changed the internals of the tag property from a string to a class which allows + for preservation of the original handle and suffix. This should + result in better results using documents with %TAG directives, as well + as preserving URI escapes in tag suffixes. 
+ +0.17.28 (2023-05-26): + - fix for issue 464: documents ending with document end marker without final newline + fail to load (reported by `Mariusz Rusiniak <https://sourceforge.net/u/r2dan/profile/>`__) + +0.17.27 (2023-05-25): + - fix issue with inline mappings as value for merge keys + (reported by Sirish on `StackOverflow <https://stackoverflow.com/q/76331049/1307905>`__) + - fix for 468, error inserting after accessing merge attribute on ``CommentedMap`` + (reported by `Bastien gerard <https://sourceforge.net/u/bagerard/>`__) + - fix for issue 461 pop + insert on same `CommentedMap` key throwing error + (reported by `John Thorvald Wodder II <https://sourceforge.net/u/jwodder/profile/>`__) + +0.17.26 (2023-05-09): + - fix for error on edge cage for issue 459 + +0.17.25 (2023-05-09): + - fix for regression while dumping wrapped strings with too many backslashes removed + (issue 459, reported by `Lele Gaifax <https://sourceforge.net/u/lele/profile/>`__) + +0.17.24 (2023-05-06): + - rewrite of ``CommentedMap.insert()``. If you have a merge key in + the YAML document for the mapping you insert to, the position value should + be the one as you look at the YAML input. + This fixes issue 453 where other + keys of a merged in mapping would show up after an insert (reported by + `Alex Miller <https://sourceforge.net/u/millerdevel/profile/>`__). It + also fixes a call to `.insert()` resulting into the merge key to move + to be the first key if it wasn't already and it is also now possible + to insert a key before a merge key (even if the fist key in the mapping). + - fix (in the pure Python implementation including default) for issue 447. + (reported by `Jack Cherng <https://sourceforge.net/u/jfcherng/profile/>`__, + also brought up by brent on + `StackOverflow <https://stackoverflow.com/q/40072485/1307905>`__) + +0.17.23 (2023-05-05): + - fix 458, error on plain scalars starting with word longer than width. + (reported by `Kyle Larose <https://sourceforge.net/u/klarose/profile/>`__) + - fix for ``.update()`` no longer correctly handling keyword arguments + (reported by John Lin on <StackOverflow + `<https://stackoverflow.com/q/76089100/1307905>`__) + - fix issue 454: high Unicode (emojis) in quoted strings always + escaped (reported by `Michal Čihař <https://sourceforge.net/u/nijel/profile/>`__ + based on a question on StackOverflow). + - fix issue with emitter conservatively inserting extra backslashes in wrapped + quoted strings (reported by thebenman on `StackOverflow + <https://stackoverflow.com/q/75631454/1307905>`__) + +0.17.22 (2023-05-02): + + - fix issue 449 where the second exclamation marks got URL encoded (reported + and fixing PR provided by `John Stark <https://sourceforge.net/u/jods/profile/>`__) + - fix issue with indent != 2 and literal scalars with empty first line + (reported by wrdis on `StackOverflow <https://stackoverflow.com/q/75584262/1307905>`__) + - updated __repr__ of CommentedMap, now that Python's dict is ordered -> no more + ordereddict(list-of-tuples) + - merge MR 4, handling OctalInt in YAML 1.1 + (provided by `Jacob Floyd <https://sourceforge.net/u/cognifloyd/profile/>`_) + - fix loading of `!!float 42` (reported by Eric on + `Stack overflow <https://stackoverflow.com/a/71555107/1307905>`_) + - line numbers are now set on `CommentedKeySeq` and `CommentedKeyMap` (which + are created if you have a sequence resp. 
mapping as the key in a mapping) + - plain scalars: put single words longer than width on a line of their own, instead + of after the previous line (issue 427, reported by `Antoine Cotten + <https://sourceforge.net/u/antoineco/profile/>`_). Caveat: this currently results in a + space ending the previous line. + - fix for folded scalar part of 421: comments after ">" on first line of folded + scalars are now preserved (as were those in the same position on literal scalars). + Issue reported by Jacob Floyd. + - added stacklevel to warnings + - typing changed from Py2 compatible comments to Py3, removed various Py2-isms + 0.17.21 (2022-02-12): - fix bug in calling `.compose()` method with `pathlib.Path` instance. @@ -105,7 +189,7 @@ attrs with `@attr.s()` (both reported by `ssph <https://sourceforge.net/u/sph/>`__) 0.17.11 (2021-08-19): - - fix error baseclass for ``DuplicateKeyErorr`` (reported by `Łukasz Rogalski + - fix error baseclass for ``DuplicateKeyError`` (reported by `Łukasz Rogalski <https://sourceforge.net/u/lrogalski/>`__) - fix typo in reader error message, causing `KeyError` during reader error (reported by `MTU <https://sourceforge.net/u/mtu/>`__)
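The rewritten README introduction mentions the new `ruamel.yaml.string` and `ruamel.yaml.bytes` plug-in packages (shipped separately on PyPI) as a convenience for getting a str/bytes result directly. Without them, the approach the README alludes to is to stream into a buffer yourself; a small sketch:

# Dumping to a string without the optional ruamel.yaml.string plug-in:
# stream into an io.StringIO buffer and read it back.
import io
from ruamel.yaml import YAML

yaml = YAML()
buf = io.StringIO()
yaml.dump({'name': 'python-ruamel-yaml', 'version': '0.17.32'}, buf)
print(buf.getvalue())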
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/__init__.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/__init__.py
Changed
@@ -5,27 +5,26 @@
 _package_data = dict(
     full_package_name='ruamel.yaml',
-    version_info=(0, 17, 21),
-    __version__='0.17.21',
-    version_timestamp='2022-02-12 09:49:22',
+    version_info=(0, 17, 32),
+    __version__='0.17.32',
+    version_timestamp='2023-06-17 07:55:58',
     author='Anthon van der Neut',
     author_email='a.van.der.neut@ruamel.eu',
     description='ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order',  # NOQA
     entry_points=None,
     since=2014,
     extras_require={
-        ':platform_python_implementation=="CPython" and python_version<"3.11"': 'ruamel.yaml.clib>=0.2.6',  # NOQA
+        ':platform_python_implementation=="CPython" and python_version<"3.12"': 'ruamel.yaml.clib>=0.2.7',  # NOQA
         'jinja2': 'ruamel.yaml.jinja2>=0.2',
         'docs': 'ryd',
     },
     classifiers=[
         'Programming Language :: Python :: 3 :: Only',
-        'Programming Language :: Python :: 3.5',
-        'Programming Language :: Python :: 3.6',
         'Programming Language :: Python :: 3.7',
         'Programming Language :: Python :: 3.8',
         'Programming Language :: Python :: 3.9',
         'Programming Language :: Python :: 3.10',
+        'Programming Language :: Python :: 3.11',
         'Programming Language :: Python :: Implementation :: CPython',
         'Topic :: Software Development :: Libraries :: Python Modules',
         'Topic :: Text Processing :: Markup',
@@ -33,10 +32,10 @@
     ],
     keywords='yaml 1.2 parser round-trip preserve quotes order config',
     read_the_docs='yaml',
-    supported=(3, 5),  # minimum
+    supported=(3, 7),  # minimum
     tox=dict(
-        env='*f',  # f for 3.5
-        fl8excl='_test/lib',
+        env='*',
+        fl8excl='_test/lib,branch_default',
     ),
     # universal=True,
     python_requires='>=3',
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/anchor.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/anchor.py
Changed
@@ -1,6 +1,6 @@
 # coding: utf-8
 
-if False:  # MYPY
-    from typing import Any, Dict, Optional, List, Union, Optional, Iterator  # NOQA
+
+from typing import Any, Dict, Optional, List, Union, Optional, Iterator  # NOQA
 
 anchor_attrib = '_yaml_anchor'
 
@@ -9,12 +9,10 @@
     __slots__ = 'value', 'always_dump'
     attrib = anchor_attrib
 
-    def __init__(self):
-        # type: () -> None
+    def __init__(self) -> None:
         self.value = None
         self.always_dump = False
 
-    def __repr__(self):
-        # type: () -> Any
+    def __repr__(self) -> Any:
         ad = ', (always dump)' if self.always_dump else ""
-        return 'Anchor({!r}{})'.format(self.value, ad)
+        return f'Anchor({self.value!r}{ad})'
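Beyond the typing cleanup, `Anchor` is the object that backs anchor/alias round-tripping. A short example of inspecting a preserved anchor; the attribute names follow the public `CommentedBase` API, so this should hold for the 0.17.x series, though it is only an illustrative sketch:

# Anchors survive a round-trip; the Anchor object from the diff above is
# reachable through the .anchor property of round-tripped nodes.
import sys
from ruamel.yaml import YAML

yaml = YAML()
data = yaml.load("defaults: &defaults {retries: 3}\njob: *defaults\n")
print(data['defaults'].anchor.value)   # -> 'defaults'
yaml.dump(data, sys.stdout)            # the anchor and alias are re-emitted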
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/comments.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/comments.py
Changed
@@ -11,14 +11,14 @@ from ruamel.yaml.compat import ordereddict -from ruamel.yaml.compat import MutableSliceableSequence, _F, nprintf # NOQA +from ruamel.yaml.compat import MutableSliceableSequence, nprintf # NOQA from ruamel.yaml.scalarstring import ScalarString from ruamel.yaml.anchor import Anchor +from ruamel.yaml.tag import Tag from collections.abc import MutableSet, Sized, Set, Mapping -if False: # MYPY - from typing import Any, Dict, Optional, List, Union, Optional, Iterator # NOQA +from typing import Any, Dict, Optional, List, Union, Optional, Iterator # NOQA # fmt: off __all__ = 'CommentedSeq', 'CommentedKeySeq', @@ -46,18 +46,15 @@ class IDX: # temporary auto increment, so rearranging is easier - def __init__(self): - # type: () -> None + def __init__(self) -> None: self._idx = 0 - def __call__(self): - # type: () -> Any + def __call__(self) -> Any: x = self._idx self._idx += 1 return x - def __str__(self): - # type: () -> Any + def __str__(self) -> Any: return str(self._idx) @@ -83,7 +80,6 @@ format_attrib = '_yaml_format' line_col_attrib = '_yaml_line_col' merge_attrib = '_yaml_merge' -tag_attrib = '_yaml_tag' class Comment: @@ -92,27 +88,24 @@ __slots__ = 'comment', '_items', '_post', '_pre' attrib = comment_attrib - def __init__(self, old=True): - # type: (bool) -> None + def __init__(self, old: bool = True) -> None: self._pre = None if old else # type: ignore self.comment = None # post, pre # map key (mapping/omap/dict) or index (sequence/list) to a list of # dict: post_key, pre_key, post_value, pre_value # list: pre item, post item - self._items = {} # type: DictAny, Any + self._items: DictAny, Any = {} # self._start = # should not put these on first item - self._post = # type: ListAny # end of document comments + self._post: ListAny = # end of document comments - def __str__(self): - # type: () -> str + def __str__(self) -> str: if bool(self._post): end = ',\n end=' + str(self._post) else: end = "" - return 'Comment(comment={0},\n items={1}{2})'.format(self.comment, self._items, end) + return f'Comment(comment={self.comment},\n items={self._items}{end})' - def _old__repr__(self): - # type: () -> str + def _old__repr__(self) -> str: if bool(self._post): end = ',\n end=' + str(self._post) else: @@ -121,15 +114,12 @@ ln = max(len(str(k)) for k in self._items) + 1 except ValueError: ln = '' # type: ignore - it = ' '.join( - '{:{}} {}\n'.format(str(k) + ':', ln, v) for k, v in self._items.items() - ) + it = ' '.join(f'{str(k) + ":":{ln}} {v}\n' for k, v in self._items.items()) if it: it = '\n ' + it + ' ' - return 'Comment(\n start={},\n items={{{}}}{})'.format(self.comment, it, end) + return f'Comment(\n start={self.comment},\n items={{{it}}}{end})' - def __repr__(self): - # type: () -> str + def __repr__(self) -> str: if self._pre is None: return self._old__repr__() if bool(self._post): @@ -140,47 +130,38 @@ ln = max(len(str(k)) for k in self._items) + 1 except ValueError: ln = '' # type: ignore - it = ' '.join( - '{:{}} {}\n'.format(str(k) + ':', ln, v) for k, v in self._items.items() - ) + it = ' '.join(f'{str(k) + ":":{ln}} {v}\n' for k, v in self._items.items()) if it: it = '\n ' + it + ' ' - return 'Comment(\n pre={},\n items={{{}}}{})'.format(self.pre, it, end) + return f'Comment(\n pre={self.pre},\n items={{{it}}}{end})' @property - def items(self): - # type: () -> Any + def items(self) -> Any: return self._items @property - def end(self): - # type: () -> Any + def end(self) -> Any: return self._post @end.setter - def end(self, value): - # type: (Any) -> None + def 
end(self, value: Any) -> None: self._post = value @property - def pre(self): - # type: () -> Any + def pre(self) -> Any: return self._pre @pre.setter - def pre(self, value): - # type: (Any) -> None + def pre(self, value: Any) -> None: self._pre = value - def get(self, item, pos): - # type: (Any, Any) -> Any + def get(self, item: Any, pos: Any) -> Any: x = self._items.get(item) if x is None or len(x) < pos: return None return xpos # can be None - def set(self, item, pos, value): - # type: (Any, Any, Any) -> Any + def set(self, item: Any, pos: Any, value: Any) -> Any: x = self._items.get(item) if x is None: self._itemsitem = x = None * (pos + 1) @@ -190,8 +171,7 @@ assert xpos is None xpos = value - def __contains__(self, x): - # type: (Any) -> Any + def __contains__(self, x: Any) -> Any: # test if a substring is in any of the attached comments if self.comment: if self.comment0 and x in self.comment0.value: @@ -214,29 +194,24 @@ # to distinguish key from None -def NoComment(): - # type: () -> None - pass +class NotNone: + pass # NOQA class Format: __slots__ = ('_flow_style',) attrib = format_attrib - def __init__(self): - # type: () -> None - self._flow_style = None # type: Any + def __init__(self) -> None: + self._flow_style: Any = None - def set_flow_style(self): - # type: () -> None + def set_flow_style(self) -> None: self._flow_style = True - def set_block_style(self): - # type: () -> None + def set_block_style(self) -> None: self._flow_style = False - def flow_style(self, default=None): - # type: (OptionalAny) -> Any + def flow_style(self, default: OptionalAny = None) -> Any: """if default (the flow_style) is None, the flow style tacked on to the object explicitly will be taken. If that is None as well the default flow style rules the format down the line, or the type @@ -253,63 +228,40 @@ attrib = line_col_attrib - def __init__(self): - # type: () -> None + def __init__(self) -> None: self.line = None self.col = None - self.data = None # type: OptionalDictAny, Any + self.data: OptionalDictAny, Any = None - def add_kv_line_col(self, key, data): - # type: (Any, Any) -> None + def add_kv_line_col(self, key: Any, data: Any) -> None: if self.data is None: self.data = {} self.datakey = data - def key(self, k): - # type: (Any) -> Any + def key(self, k: Any) -> Any: return self._kv(k, 0, 1) - def value(self, k): - # type: (Any) -> Any + def value(self, k: Any) -> Any: return self._kv(k, 2, 3) - def _kv(self, k, x0, x1): - # type: (Any, Any, Any) -> Any + def _kv(self, k: Any, x0: Any, x1: Any) -> Any: if self.data is None: return None data = self.datak return datax0, datax1 - def item(self, idx): - # type: (Any) -> Any + def item(self, idx: Any) -> Any: if self.data is None: return None return self.dataidx0, self.dataidx1 - def add_idx_line_col(self, key, data): - # type: (Any, Any) -> None + def add_idx_line_col(self, key: Any, data: Any) -> None: if self.data is None: self.data = {} self.datakey = data - def __repr__(self): - # type: () -> str - return _F('LineCol({line}, {col})', line=self.line, col=self.col) # type: ignore - - -class Tag: - """store tag information for roundtripping""" - - __slots__ = ('value',) - attrib = tag_attrib - - def __init__(self): - # type: () -> None - self.value = None - - def __repr__(self): - # type: () -> Any - return '{0.__class__.__name__}({0.value!r})'.format(self) + def __repr__(self) -> str: + return f'LineCol({self.line}, {self.col})' class CommentedBase: @@ -320,16 +272,14 @@ setattr(self, Comment.attrib, Comment()) return getattr(self, 
Comment.attrib) - def yaml_end_comment_extend(self, comment, clear=False): - # type: (Any, bool) -> None + def yaml_end_comment_extend(self, comment: Any, clear: bool = False) -> None: if comment is None: return if clear or self.ca.end is None: self.ca.end = self.ca.end.extend(comment) - def yaml_key_comment_extend(self, key, comment, clear=False): - # type: (Any, Any, bool) -> None + def yaml_key_comment_extend(self, key: Any, comment: Any, clear: bool = False) -> None: r = self.ca._items.setdefault(key, None, None, None, None) if clear or r1 is None: if comment1 is not None: @@ -339,8 +289,7 @@ r1.extend(comment0) r0 = comment0 - def yaml_value_comment_extend(self, key, comment, clear=False): - # type: (Any, Any, bool) -> None + def yaml_value_comment_extend(self, key: Any, comment: Any, clear: bool = False) -> None: r = self.ca._items.setdefault(key, None, None, None, None) if clear or r3 is None: if comment1 is not None: @@ -350,8 +299,7 @@ r3.extend(comment0) r2 = comment0 - def yaml_set_start_comment(self, comment, indent=0): - # type: (Any, Any) -> None + def yaml_set_start_comment(self, comment: Any, indent: Any = 0) -> None: """overwrites any preceding comment lines on an object expects comment to be without `#` and possible have multiple lines """ @@ -369,17 +317,20 @@ pre_comments.append(CommentToken(com + '\n', start_mark)) def yaml_set_comment_before_after_key( - self, key, before=None, indent=0, after=None, after_indent=None - ): - # type: (Any, Any, Any, Any, Any) -> None + self, + key: Any, + before: Any = None, + indent: Any = 0, + after: Any = None, + after_indent: Any = None, + ) -> None: """ expects comment (before/after) to be without `#` and possible have multiple lines """ from ruamel.yaml.error import CommentMark from ruamel.yaml.tokens import CommentToken - def comment_token(s, mark): - # type: (Any, Any) -> Any + def comment_token(s: Any, mark: Any) -> Any: # handle empty lines as having no comment return CommentToken(('# ' if s else "") + s + '\n', mark) @@ -407,8 +358,7 @@ c3.append(comment_token(com, start_mark)) # type: ignore @property - def fa(self): - # type: () -> Any + def fa(self) -> Any: """format attribute set_flow_style()/set_block_style()""" @@ -416,8 +366,9 @@ setattr(self, Format.attrib, Format()) return getattr(self, Format.attrib) - def yaml_add_eol_comment(self, comment, key=NoComment, column=None): - # type: (Any, OptionalAny, OptionalAny) -> None + def yaml_add_eol_comment( + self, comment: Any, key: OptionalAny = NotNone, column: OptionalAny = None + ) -> None: """ there is a problem as eol comments should start with ' #' (but at the beginning of the line the space doesn't have to be before @@ -442,56 +393,46 @@ self._yaml_add_eol_comment(ct, key=key) @property - def lc(self): - # type: () -> Any + def lc(self) -> Any: if not hasattr(self, LineCol.attrib): setattr(self, LineCol.attrib, LineCol()) return getattr(self, LineCol.attrib) - def _yaml_set_line_col(self, line, col): - # type: (Any, Any) -> None + def _yaml_set_line_col(self, line: Any, col: Any) -> None: self.lc.line = line self.lc.col = col - def _yaml_set_kv_line_col(self, key, data): - # type: (Any, Any) -> None + def _yaml_set_kv_line_col(self, key: Any, data: Any) -> None: self.lc.add_kv_line_col(key, data) - def _yaml_set_idx_line_col(self, key, data): - # type: (Any, Any) -> None + def _yaml_set_idx_line_col(self, key: Any, data: Any) -> None: self.lc.add_idx_line_col(key, data) @property - def anchor(self): - # type: () -> Any + def anchor(self) -> Any: if not hasattr(self, 
Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) - def yaml_anchor(self): - # type: () -> Any + def yaml_anchor(self) -> Any: if not hasattr(self, Anchor.attrib): return None return self.anchor - def yaml_set_anchor(self, value, always_dump=False): - # type: (Any, bool) -> None + def yaml_set_anchor(self, value: Any, always_dump: bool = False) -> None: self.anchor.value = value self.anchor.always_dump = always_dump @property - def tag(self): - # type: () -> Any + def tag(self) -> Any: if not hasattr(self, Tag.attrib): setattr(self, Tag.attrib, Tag()) return getattr(self, Tag.attrib) - def yaml_set_tag(self, value): - # type: (Any) -> None - self.tag.value = value + def yaml_set_ctag(self, value: Tag) -> None: + setattr(self, Tag.attrib, value) - def copy_attributes(self, t, memo=None): - # type: (Any, Any) -> None + def copy_attributes(self, t: Any, memo: Any = None) -> None: # fmt: off for a in Comment.attrib, Format.attrib, LineCol.attrib, Anchor.attrib, Tag.attrib, merge_attrib: @@ -502,32 +443,26 @@ setattr(t, a, getattr(self, a)) # fmt: on - def _yaml_add_eol_comment(self, comment, key): - # type: (Any, Any) -> None + def _yaml_add_eol_comment(self, comment: Any, key: Any) -> None: raise NotImplementedError - def _yaml_get_pre_comment(self): - # type: () -> Any + def _yaml_get_pre_comment(self) -> Any: raise NotImplementedError - def _yaml_get_column(self, key): - # type: (Any) -> Any + def _yaml_get_column(self, key: Any) -> Any: raise NotImplementedError class CommentedSeq(MutableSliceableSequence, list, CommentedBase): # type: ignore __slots__ = (Comment.attrib, '_lst') - def __init__(self, *args, **kw): - # type: (Any, Any) -> None + def __init__(self, *args: Any, **kw: Any) -> None: list.__init__(self, *args, **kw) - def __getsingleitem__(self, idx): - # type: (Any) -> Any + def __getsingleitem__(self, idx: Any) -> Any: return list.__getitem__(self, idx) - def __setsingleitem__(self, idx, value): - # type: (Any, Any) -> None + def __setsingleitem__(self, idx: Any, value: Any) -> None: # try to preserve the scalarstring type if setting an existing key to a new value if idx < len(self): if ( @@ -538,8 +473,7 @@ value = type(selfidx)(value) list.__setitem__(self, idx, value) - def __delsingleitem__(self, idx=None): - # type: (Any) -> Any + def __delsingleitem__(self, idx: Any = None) -> Any: list.__delitem__(self, idx) self.ca.items.pop(idx, None) # might not be there -> default value for list_index in sorted(self.ca.items): @@ -547,12 +481,10 @@ continue self.ca.itemslist_index - 1 = self.ca.items.pop(list_index) - def __len__(self): - # type: () -> int + def __len__(self) -> int: return list.__len__(self) - def insert(self, idx, val): - # type: (Any, Any) -> None + def insert(self, idx: Any, val: Any) -> None: """the comments after the insertion have to move forward""" list.insert(self, idx, val) for list_index in sorted(self.ca.items, reverse=True): @@ -560,31 +492,25 @@ break self.ca.itemslist_index + 1 = self.ca.items.pop(list_index) - def extend(self, val): - # type: (Any) -> None + def extend(self, val: Any) -> None: list.extend(self, val) - def __eq__(self, other): - # type: (Any) -> bool + def __eq__(self, other: Any) -> bool: return list.__eq__(self, other) - def _yaml_add_comment(self, comment, key=NoComment): - # type: (Any, OptionalAny) -> None - if key is not NoComment: + def _yaml_add_comment(self, comment: Any, key: OptionalAny = NotNone) -> None: + if key is not NotNone: self.yaml_key_comment_extend(key, comment) else: 
self.ca.comment = comment - def _yaml_add_eol_comment(self, comment, key): - # type: (Any, Any) -> None + def _yaml_add_eol_comment(self, comment: Any, key: Any) -> None: self._yaml_add_comment(comment, key=key) - def _yaml_get_columnX(self, key): - # type: (Any) -> Any + def _yaml_get_columnX(self, key: Any) -> Any: return self.ca.itemskey0.start_mark.column - def _yaml_get_column(self, key): - # type: (Any) -> Any + def _yaml_get_column(self, key: Any) -> Any: column = None sel_idx = None pre, post = key - 1, key + 1 @@ -604,26 +530,23 @@ column = self._yaml_get_columnX(sel_idx) return column - def _yaml_get_pre_comment(self): - # type: () -> Any - pre_comments = # type: ListAny + def _yaml_get_pre_comment(self) -> Any: + pre_comments: ListAny = if self.ca.comment is None: self.ca.comment = None, pre_comments else: pre_comments = self.ca.comment1 return pre_comments - def _yaml_clear_pre_comment(self): - # type: () -> Any - pre_comments = # type: ListAny + def _yaml_clear_pre_comment(self) -> Any: + pre_comments: ListAny = if self.ca.comment is None: self.ca.comment = None, pre_comments else: self.ca.comment1 = pre_comments return pre_comments - def __deepcopy__(self, memo): - # type: (Any) -> Any + def __deepcopy__(self, memo: Any) -> Any: res = self.__class__() memoid(self) = res for k in self: @@ -631,12 +554,10 @@ self.copy_attributes(res, memo=memo) return res - def __add__(self, other): - # type: (Any) -> Any + def __add__(self, other: Any) -> Any: return list.__add__(self, other) - def sort(self, key=None, reverse=False): - # type: (Any, bool) -> None + def sort(self, key: Any = None, reverse: bool = False) -> None: if key is None: tmp_lst = sorted(zip(self, range(len(self))), reverse=reverse) list.__init__(self, x0 for x in tmp_lst) @@ -652,31 +573,26 @@ if old_index in itm: self.ca.itemsidx = itmold_index - def __repr__(self): - # type: () -> Any + def __repr__(self) -> Any: return list.__repr__(self) class CommentedKeySeq(tuple, CommentedBase): # type: ignore """This primarily exists to be able to roundtrip keys that are sequences""" - def _yaml_add_comment(self, comment, key=NoComment): - # type: (Any, OptionalAny) -> None - if key is not NoComment: + def _yaml_add_comment(self, comment: Any, key: OptionalAny = NotNone) -> None: + if key is not NotNone: self.yaml_key_comment_extend(key, comment) else: self.ca.comment = comment - def _yaml_add_eol_comment(self, comment, key): - # type: (Any, Any) -> None + def _yaml_add_eol_comment(self, comment: Any, key: Any) -> None: self._yaml_add_comment(comment, key=key) - def _yaml_get_columnX(self, key): - # type: (Any) -> Any + def _yaml_get_columnX(self, key: Any) -> Any: return self.ca.itemskey0.start_mark.column - def _yaml_get_column(self, key): - # type: (Any) -> Any + def _yaml_get_column(self, key: Any) -> Any: column = None sel_idx = None pre, post = key - 1, key + 1 @@ -696,18 +612,16 @@ column = self._yaml_get_columnX(sel_idx) return column - def _yaml_get_pre_comment(self): - # type: () -> Any - pre_comments = # type: ListAny + def _yaml_get_pre_comment(self) -> Any: + pre_comments: ListAny = if self.ca.comment is None: self.ca.comment = None, pre_comments else: pre_comments = self.ca.comment1 return pre_comments - def _yaml_clear_pre_comment(self): - # type: () -> Any - pre_comments = # type: ListAny + def _yaml_clear_pre_comment(self) -> Any: + pre_comments: ListAny = if self.ca.comment is None: self.ca.comment = None, pre_comments else: @@ -718,12 +632,10 @@ class CommentedMapView(Sized): __slots__ = ('_mapping',) - def 
__init__(self, mapping): - # type: (Any) -> None + def __init__(self, mapping: Any) -> None: self._mapping = mapping - def __len__(self): - # type: () -> int + def __len__(self) -> int: count = len(self._mapping) return count @@ -732,16 +644,14 @@ __slots__ = () @classmethod - def _from_iterable(self, it): - # type: (Any) -> Any + def _from_iterable(self, it: Any) -> Any: return set(it) - def __contains__(self, key): - # type: (Any) -> Any + def __contains__(self, key: Any) -> Any: return key in self._mapping - def __iter__(self): - # type: () -> Any # yield from self._mapping # not in py27, pypy + def __iter__(self) -> Any: + # yield from self._mapping # not in py27, pypy # for x in self._mapping._keys(): for x in self._mapping: yield x @@ -751,12 +661,10 @@ __slots__ = () @classmethod - def _from_iterable(self, it): - # type: (Any) -> Any + def _from_iterable(self, it: Any) -> Any: return set(it) - def __contains__(self, item): - # type: (Any) -> Any + def __contains__(self, item: Any) -> Any: key, value = item try: v = self._mappingkey @@ -765,8 +673,7 @@ else: return v == value - def __iter__(self): - # type: () -> Any + def __iter__(self) -> Any: for key in self._mapping._keys(): yield (key, self._mappingkey) @@ -774,15 +681,13 @@ class CommentedMapValuesView(CommentedMapView): __slots__ = () - def __contains__(self, value): - # type: (Any) -> Any + def __contains__(self, value: Any) -> Any: for key in self._mapping: if value == self._mappingkey: return True return False - def __iter__(self): - # type: () -> Any + def __iter__(self) -> Any: for key in self._mapping._keys(): yield self._mappingkey @@ -790,34 +695,31 @@ class CommentedMap(ordereddict, CommentedBase): __slots__ = (Comment.attrib, '_ok', '_ref') - def __init__(self, *args, **kw): - # type: (Any, Any) -> None - self._ok = set() # type: MutableSetAny # own keys - self._ref = # type: ListCommentedMap + def __init__(self, *args: Any, **kw: Any) -> None: + self._ok: MutableSetAny = set() # own keys + self._ref: ListCommentedMap = ordereddict.__init__(self, *args, **kw) - def _yaml_add_comment(self, comment, key=NoComment, value=NoComment): - # type: (Any, OptionalAny, OptionalAny) -> None + def _yaml_add_comment( + self, comment: Any, key: OptionalAny = NotNone, value: OptionalAny = NotNone + ) -> None: """values is set to key to indicate a value attachment of comment""" - if key is not NoComment: + if key is not NotNone: self.yaml_key_comment_extend(key, comment) return - if value is not NoComment: + if value is not NotNone: self.yaml_value_comment_extend(value, comment) else: self.ca.comment = comment - def _yaml_add_eol_comment(self, comment, key): - # type: (Any, Any) -> None + def _yaml_add_eol_comment(self, comment: Any, key: Any) -> None: """add on the value line, with value specified by the key""" self._yaml_add_comment(comment, value=key) - def _yaml_get_columnX(self, key): - # type: (Any) -> Any + def _yaml_get_columnX(self, key: Any) -> Any: return self.ca.itemskey2.start_mark.column - def _yaml_get_column(self, key): - # type: (Any) -> Any + def _yaml_get_column(self, key: Any) -> Any: column = None sel_idx = None pre, post, last = None, None, None @@ -844,26 +746,23 @@ column = self._yaml_get_columnX(sel_idx) return column - def _yaml_get_pre_comment(self): - # type: () -> Any - pre_comments = # type: ListAny + def _yaml_get_pre_comment(self) -> Any: + pre_comments: ListAny = if self.ca.comment is None: self.ca.comment = None, pre_comments else: pre_comments = self.ca.comment1 return pre_comments - def 
_yaml_clear_pre_comment(self): - # type: () -> Any - pre_comments = # type: ListAny + def _yaml_clear_pre_comment(self) -> Any: + pre_comments: ListAny = if self.ca.comment is None: self.ca.comment = None, pre_comments else: self.ca.comment1 = pre_comments return pre_comments - def update(self, *vals, **kw): - # type: (Any, Any) -> None + def update(self, *vals: Any, **kw: Any) -> None: try: ordereddict.update(self, *vals, **kw) except TypeError: @@ -878,32 +777,49 @@ for x in vals0: self._ok.add(x0) if kw: - self._ok.add(*kw.keys()) + self._ok.update(*kw.keys()) # type: ignore - def insert(self, pos, key, value, comment=None): - # type: (Any, Any, Any, OptionalAny) -> None - """insert key value into given position + def insert(self, pos: Any, key: Any, value: Any, comment: OptionalAny = None) -> None: + """insert key value into given position, as defined by source YAML attach comment if provided """ - keys = list(self.keys()) + key - ordereddict.insert(self, pos, key, value) - for keytmp in keys: - self._ok.add(keytmp) - for referer in self._ref: - for keytmp in keys: - referer.update_key_value(keytmp) + if key in self._ok: + del selfkey + keys = k for k in self.keys() if k in self._ok + try: + ma0 = getattr(self, merge_attrib, -1)0 + merge_pos = ma00 + except IndexError: + merge_pos = -1 + if merge_pos >= 0: + if merge_pos >= pos: + getattr(self, merge_attrib)0 = (merge_pos + 1, ma01) + idx_min = pos + idx_max = len(self._ok) + else: + idx_min = pos - 1 + idx_max = len(self._ok) + else: + idx_min = pos + idx_max = len(self._ok) + selfkey = value # at the end + # print(f'{idx_min=} {idx_max=}') + for idx in range(idx_min, idx_max): + self.move_to_end(keysidx) + self._ok.add(key) + # for referer in self._ref: + # for keytmp in keys: + # referer.update_key_value(keytmp) if comment is not None: self.yaml_add_eol_comment(comment, key=key) - def mlget(self, key, default=None, list_ok=False): - # type: (Any, Any, Any) -> Any + def mlget(self, key: Any, default: Any = None, list_ok: Any = False) -> Any: """multi-level get that expects dicts within dicts""" if not isinstance(key, list): return self.get(key, default) # assume that the key is a list of recursively accessible dicts - def get_one_level(key_list, level, d): - # type: (Any, Any, Any) -> Any + def get_one_level(key_list: Any, level: Any, d: Any) -> Any: if not list_ok: assert isinstance(d, dict) if level >= len(key_list): @@ -921,8 +837,7 @@ raise return default - def __getitem__(self, key): - # type: (Any) -> Any + def __getitem__(self, key: Any) -> Any: try: return ordereddict.__getitem__(self, key) except KeyError: @@ -931,8 +846,7 @@ return merged1key raise - def __setitem__(self, key, value): - # type: (Any, Any) -> None + def __setitem__(self, key: Any, value: Any) -> None: # try to preserve the scalarstring type if setting an existing key to a new value if key in self: if ( @@ -944,35 +858,36 @@ ordereddict.__setitem__(self, key, value) self._ok.add(key) - def _unmerged_contains(self, key): - # type: (Any) -> Any + def _unmerged_contains(self, key: Any) -> Any: if key in self._ok: return True return None - def __contains__(self, key): - # type: (Any) -> bool + def __contains__(self, key: Any) -> bool: return bool(ordereddict.__contains__(self, key)) - def get(self, key, default=None): - # type: (Any, Any) -> Any + def get(self, key: Any, default: Any = None) -> Any: try: return self.__getitem__(key) except: # NOQA return default - def __repr__(self): - # type: () -> Any - return ordereddict.__repr__(self).replace('CommentedMap', 
'ordereddict') + def __repr__(self) -> Any: + res = '{' + sep = '' + for k, v in self.items(): + res += f'{sep}{k!r}: {v!r}' + if not sep: + sep = ', ' + res += '}' + return res - def non_merged_items(self): - # type: () -> Any + def non_merged_items(self) -> Any: for x in ordereddict.__iter__(self): if x in self._ok: yield x, ordereddict.__getitem__(self, x) - def __delitem__(self, key): - # type: (Any) -> None + def __delitem__(self, key: Any) -> None: # for merged in getattr(self, merge_attrib, ): # if key in merged1: # value = merged1key @@ -991,73 +906,70 @@ for referer in self._ref: referer.update_key_value(key) - def __iter__(self): - # type: () -> Any + def __iter__(self) -> Any: for x in ordereddict.__iter__(self): yield x - def _keys(self): - # type: () -> Any + def pop(self, key: Any, default: Any = NotNone) -> Any: + try: + result = selfkey + except KeyError: + if default is NotNone: + raise + return default + del selfkey + return result + + def _keys(self) -> Any: for x in ordereddict.__iter__(self): yield x - def __len__(self): - # type: () -> int + def __len__(self) -> int: return int(ordereddict.__len__(self)) - def __eq__(self, other): - # type: (Any) -> bool + def __eq__(self, other: Any) -> bool: return bool(dict(self) == other) - def keys(self): - # type: () -> Any + def keys(self) -> Any: return CommentedMapKeysView(self) - def values(self): - # type: () -> Any + def values(self) -> Any: return CommentedMapValuesView(self) - def _items(self): - # type: () -> Any + def _items(self) -> Any: for x in ordereddict.__iter__(self): yield x, ordereddict.__getitem__(self, x) - def items(self): - # type: () -> Any + def items(self) -> Any: return CommentedMapItemsView(self) @property - def merge(self): - # type: () -> Any + def merge(self) -> Any: if not hasattr(self, merge_attrib): setattr(self, merge_attrib, ) return getattr(self, merge_attrib) - def copy(self): - # type: () -> Any + def copy(self) -> Any: x = type(self)() # update doesn't work for k, v in self._items(): xk = v self.copy_attributes(x) return x - def add_referent(self, cm): - # type: (Any) -> None + def add_referent(self, cm: Any) -> None: if cm not in self._ref: self._ref.append(cm) - def add_yaml_merge(self, value): - # type: (Any) -> None + def add_yaml_merge(self, value: Any) -> None: for v in value: v1.add_referent(self) - for k, v in v1.items(): - if ordereddict.__contains__(self, k): + for k1, v1 in v1.items(): + if ordereddict.__contains__(self, k1): continue - ordereddict.__setitem__(self, k, v) + ordereddict.__setitem__(self, k1, v1) self.merge.extend(value) - def update_key_value(self, key): - # type: (Any) -> None + def update_key_value(self, key: Any) -> None: if key in self._ok: return for v in self.merge: @@ -1066,8 +978,7 @@ return ordereddict.__delitem__(self, key) - def __deepcopy__(self, memo): - # type: (Any) -> Any + def __deepcopy__(self, memo: Any) -> Any: res = self.__class__() memoid(self) = res for k in self: @@ -1078,17 +989,15 @@ # based on brownie mappings @classmethod # type: ignore -def raise_immutable(cls, *args, **kwargs): - # type: (Any, *Any, **Any) -> None - raise TypeError('{} objects are immutable'.format(cls.__name__)) +def raise_immutable(cls: Any, *args: Any, **kwargs: Any) -> None: + raise TypeError(f'{cls.__name__} objects are immutable') class CommentedKeyMap(CommentedBase, Mapping): # type: ignore __slots__ = Comment.attrib, '_od' """This primarily exists to be able to roundtrip keys that are mappings""" - def __init__(self, *args, **kw): - # type: (Any, Any) -> None 
+ def __init__(self, *args: Any, **kw: Any) -> None: if hasattr(self, '_od'): raise_immutable(self) try: @@ -1099,51 +1008,41 @@ __delitem__ = __setitem__ = clear = pop = popitem = setdefault = update = raise_immutable # need to implement __getitem__, __iter__ and __len__ - def __getitem__(self, index): - # type: (Any) -> Any + def __getitem__(self, index: Any) -> Any: return self._odindex - def __iter__(self): - # type: () -> IteratorAny + def __iter__(self) -> IteratorAny: for x in self._od.__iter__(): yield x - def __len__(self): - # type: () -> int + def __len__(self) -> int: return len(self._od) - def __hash__(self): - # type: () -> Any + def __hash__(self) -> Any: return hash(tuple(self.items())) - def __repr__(self): - # type: () -> Any + def __repr__(self) -> Any: if not hasattr(self, merge_attrib): return self._od.__repr__() return 'ordereddict(' + repr(list(self._od.items())) + ')' @classmethod - def fromkeys(keys, v=None): - # type: (Any, Any) -> Any + def fromkeys(keys: Any, v: Any = None) -> Any: return CommentedKeyMap(dict.fromkeys(keys, v)) - def _yaml_add_comment(self, comment, key=NoComment): - # type: (Any, OptionalAny) -> None - if key is not NoComment: + def _yaml_add_comment(self, comment: Any, key: OptionalAny = NotNone) -> None: + if key is not NotNone: self.yaml_key_comment_extend(key, comment) else: self.ca.comment = comment - def _yaml_add_eol_comment(self, comment, key): - # type: (Any, Any) -> None + def _yaml_add_eol_comment(self, comment: Any, key: Any) -> None: self._yaml_add_comment(comment, key=key) - def _yaml_get_columnX(self, key): - # type: (Any) -> Any + def _yaml_get_columnX(self, key: Any) -> Any: return self.ca.itemskey0.start_mark.column - def _yaml_get_column(self, key): - # type: (Any) -> Any + def _yaml_get_column(self, key: Any) -> Any: column = None sel_idx = None pre, post = key - 1, key + 1 @@ -1163,9 +1062,8 @@ column = self._yaml_get_columnX(sel_idx) return column - def _yaml_get_pre_comment(self): - # type: () -> Any - pre_comments = # type: ListAny + def _yaml_get_pre_comment(self) -> Any: + pre_comments: ListAny = if self.ca.comment is None: self.ca.comment = None, pre_comments else: @@ -1180,87 +1078,85 @@ class CommentedSet(MutableSet, CommentedBase): # type: ignore # NOQA __slots__ = Comment.attrib, 'odict' - def __init__(self, values=None): - # type: (Any) -> None + def __init__(self, values: Any = None) -> None: self.odict = ordereddict() MutableSet.__init__(self) if values is not None: - self |= values # type: ignore + self |= values - def _yaml_add_comment(self, comment, key=NoComment, value=NoComment): - # type: (Any, OptionalAny, OptionalAny) -> None + def _yaml_add_comment( + self, comment: Any, key: OptionalAny = NotNone, value: OptionalAny = NotNone + ) -> None: """values is set to key to indicate a value attachment of comment""" - if key is not NoComment: + if key is not NotNone: self.yaml_key_comment_extend(key, comment) return - if value is not NoComment: + if value is not NotNone: self.yaml_value_comment_extend(value, comment) else: self.ca.comment = comment - def _yaml_add_eol_comment(self, comment, key): - # type: (Any, Any) -> None + def _yaml_add_eol_comment(self, comment: Any, key: Any) -> None: """add on the value line, with value specified by the key""" self._yaml_add_comment(comment, value=key) - def add(self, value): - # type: (Any) -> None + def add(self, value: Any) -> None: """Add an element.""" self.odictvalue = None - def discard(self, value): - # type: (Any) -> None + def discard(self, value: Any) -> None: 
"""Remove an element. Do not raise an exception if absent.""" del self.odictvalue - def __contains__(self, x): - # type: (Any) -> Any + def __contains__(self, x: Any) -> Any: return x in self.odict - def __iter__(self): - # type: () -> Any + def __iter__(self) -> Any: for x in self.odict: yield x - def __len__(self): - # type: () -> int + def __len__(self) -> int: return len(self.odict) - def __repr__(self): - # type: () -> str - return 'set({0!r})'.format(self.odict.keys()) + def __repr__(self) -> str: + return f'set({self.odict.keys()!r})' class TaggedScalar(CommentedBase): # the value and style attributes are set during roundtrip construction - def __init__(self, value=None, style=None, tag=None): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any = None, style: Any = None, tag: Any = None) -> None: self.value = value self.style = style if tag is not None: - self.yaml_set_tag(tag) + if isinstance(tag, str): + tag = Tag(suffix=tag) + self.yaml_set_ctag(tag) - def __str__(self): - # type: () -> Any + def __str__(self) -> Any: return self.value + def count(self, s: str, start: Optionalint = None, end: Optionalint = None) -> Any: + return self.value.count(s, start, end) + + def __getitem__(self, pos: int) -> Any: + return self.valuepos + -def dump_comments(d, name="", sep='.', out=sys.stdout): - # type: (Any, str, str, Any) -> None +def dump_comments(d: Any, name: str = "", sep: str = '.', out: Any = sys.stdout) -> None: """ recursively dump comments, all but the toplevel preceded by the path in dotted form x.0.a """ if isinstance(d, dict) and hasattr(d, 'ca'): if name: - out.write('{} {}\n'.format(name, type(d))) - out.write('{!r}\n'.format(d.ca)) # type: ignore + out.write(f'{name} {type(d)}\n') + out.write(f'{d.ca!r}\n') for k in d: dump_comments(dk, name=(name + sep + str(k)) if name else k, sep=sep, out=out) elif isinstance(d, list) and hasattr(d, 'ca'): if name: - out.write('{} {}\n'.format(name, type(d))) - out.write('{!r}\n'.format(d.ca)) # type: ignore + out.write(f'{name} {type(d)}\n') + out.write(f'{d.ca!r}\n') for idx, k in enumerate(d): dump_comments( k, name=(name + sep + str(idx)) if name else str(idx), sep=sep, out=out
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/compat.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/compat.py
Changed
@@ -11,11 +11,15 @@ # fmt: off -if False: # MYPY - from typing import Any, Dict, Optional, List, Union, BinaryIO, IO, Text, Tuple # NOQA - from typing import Optional # NOQA +from typing import Any, Dict, Optional, List, Union, BinaryIO, IO, Text, Tuple # NOQA +from typing import Optional # NOQA +try: + from typing import SupportsIndex as SupportsIndex # in order to reexport for mypy +except ImportError: + SupportsIndex = int # type: ignore # fmt: on + _DEFAULT_YAML_VERSION = (1, 2) try: @@ -29,8 +33,7 @@ class ordereddict(OrderedDict): # type: ignore if not hasattr(OrderedDict, 'insert'): - def insert(self, pos, key, value): - # type: (int, Any, Any) -> None + def insert(self, pos: int, key: Any, value: Any) -> None: if pos >= len(self): selfkey = value return @@ -47,34 +50,20 @@ PY2 = sys.version_info0 == 2 PY3 = sys.version_info0 == 3 - -# replace with f-strings when 3.5 support is dropped -# ft = '42' -# assert _F('abc {ft!r}', ft=ft) == 'abc %r' % ft -# 'abc %r' % ft -> _F('abc {ft!r}' -> f'abc {ft!r}' -def _F(s, *superfluous, **kw): - # type: (Any, Any, Any) -> Any - if superfluous: - raise TypeError - return s.format(**kw) - - StringIO = io.StringIO BytesIO = io.BytesIO -if False: # MYPY - # StreamType = UnionBinaryIO, IOstr, IOunicode, StringIO - # StreamType = UnionBinaryIO, IOstr, StringIO # type: ignore - StreamType = Any +# StreamType = UnionBinaryIO, IOstr, IOunicode, StringIO +# StreamType = UnionBinaryIO, IOstr, StringIO # type: ignore +StreamType = Any - StreamTextType = StreamType # UnionText, StreamType - VersionType = UnionListint, str, Tupleint, int +StreamTextType = StreamType # UnionText, StreamType +VersionType = UnionListint, str, Tupleint, int builtins_module = 'builtins' -def with_metaclass(meta, *bases): - # type: (Any, Any) -> Any +def with_metaclass(meta: Any, *bases: Any) -> Any: """Create a base class with a metaclass.""" return meta('NewBase', bases, {}) @@ -84,7 +73,7 @@ DBG_NODE = 4 -_debug = None # type: Optionalint +_debug: Optionalint = None if 'RUAMELDEBUG' in os.environ: _debugx = os.environ.get('RUAMELDEBUG') if _debugx is None: @@ -96,25 +85,21 @@ if bool(_debug): class ObjectCounter: - def __init__(self): - # type: () -> None - self.map = {} # type: DictAny, Any + def __init__(self) -> None: + self.map: DictAny, Any = {} - def __call__(self, k): - # type: (Any) -> None + def __call__(self, k: Any) -> None: self.mapk = self.map.get(k, 0) + 1 - def dump(self): - # type: () -> None + def dump(self) -> None: for k in sorted(self.map): - sys.stdout.write('{} -> {}'.format(k, self.mapk)) + sys.stdout.write(f'{k} -> {self.mapk}') object_counter = ObjectCounter() # used from yaml util when testing -def dbg(val=None): - # type: (Any) -> Any +def dbg(val: Any = None) -> Any: global _debug if _debug is None: # set to true or false @@ -129,14 +114,12 @@ class Nprint: - def __init__(self, file_name=None): - # type: (Any) -> None - self._max_print = None # type: Any - self._count = None # type: Any + def __init__(self, file_name: Any = None) -> None: + self._max_print: Any = None + self._count: Any = None self._file_name = file_name - def __call__(self, *args, **kw): - # type: (Any, Any) -> None + def __call__(self, *args: Any, **kw: Any) -> None: if not bool(_debug): return out = sys.stdout if self._file_name is None else open(self._file_name, 'a') @@ -157,13 +140,11 @@ if self._file_name: out.close() - def set_max_print(self, i): - # type: (int) -> None + def set_max_print(self, i: int) -> None: self._max_print = i self._count = None - def fp(self, mode='a'): 
- # type: (str) -> Any + def fp(self, mode: str = 'a') -> Any: out = sys.stdout if self._file_name is None else open(self._file_name, mode) return out @@ -174,8 +155,7 @@ # char checkers following production rules -def check_namespace_char(ch): - # type: (Any) -> bool +def check_namespace_char(ch: Any) -> bool: if '\x21' <= ch <= '\x7E': # ! to ~ return True if '\xA0' <= ch <= '\uD7FF': @@ -187,15 +167,13 @@ return False -def check_anchorname_char(ch): - # type: (Any) -> bool +def check_anchorname_char(ch: Any) -> bool: if ch in ',{}': return False return check_namespace_char(ch) -def version_tnf(t1, t2=None): - # type: (Any, Any) -> Any +def version_tnf(t1: Any, t2: Any = None) -> Any: """ return True if ruamel.yaml version_info < t1, None if t2 is specified and bigger else False """ @@ -211,14 +189,12 @@ class MutableSliceableSequence(collections.abc.MutableSequence): # type: ignore __slots__ = () - def __getitem__(self, index): - # type: (Any) -> Any + def __getitem__(self, index: Any) -> Any: if not isinstance(index, slice): return self.__getsingleitem__(index) return type(self)(selfi for i in range(*index.indices(len(self)))) # type: ignore - def __setitem__(self, index, value): - # type: (Any, Any) -> None + def __setitem__(self, index: Any, value: Any) -> None: if not isinstance(index, slice): return self.__setsingleitem__(index, value) assert iter(value) @@ -233,19 +209,16 @@ # need to test before changing, in case TypeError is caught if nr_assigned_items < len(value): raise TypeError( - 'too many elements in value {} < {}'.format(nr_assigned_items, len(value)) + f'too many elements in value {nr_assigned_items} < {len(value)}' ) elif nr_assigned_items > len(value): raise TypeError( - 'not enough elements in value {} > {}'.format( - nr_assigned_items, len(value) - ) + f'not enough elements in value {nr_assigned_items} > {len(value)}' ) for idx, i in enumerate(range(*range_parms)): selfi = valueidx - def __delitem__(self, index): - # type: (Any) -> None + def __delitem__(self, index: Any) -> None: if not isinstance(index, slice): return self.__delsingleitem__(index) # nprint(index.start, index.stop, index.step, index.indices(len(self))) @@ -253,16 +226,13 @@ del selfi @abstractmethod - def __getsingleitem__(self, index): - # type: (Any) -> Any + def __getsingleitem__(self, index: Any) -> Any: raise IndexError @abstractmethod - def __setsingleitem__(self, index, value): - # type: (Any, Any) -> None + def __setsingleitem__(self, index: Any, value: Any) -> None: raise IndexError @abstractmethod - def __delsingleitem__(self, index): - # type: (Any) -> None + def __delsingleitem__(self, index: Any) -> None: raise IndexError
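A hedged sketch of what the MutableSliceableSequence hunk above expects from a subclass: only the single-item hooks (plus __len__ and insert required by MutableSequence) need to be supplied, and the base class translates slice reads, writes and deletes into those hooks. The MyList wrapper below is invented for illustration and is not part of ruamel.yaml.

from ruamel.yaml.compat import MutableSliceableSequence

class MyList(MutableSliceableSequence):
    """Toy list wrapper; only the single-item hooks are implemented."""

    def __init__(self, iterable=()):
        self._data = list(iterable)

    def __len__(self):
        return len(self._data)

    def insert(self, index, value):
        self._data.insert(index, value)

    def __getsingleitem__(self, index):
        return self._data[index]

    def __setsingleitem__(self, index, value):
        self._data[index] = value

    def __delsingleitem__(self, index):
        del self._data[index]

s = MyList([0, 1, 2, 3, 4])
print(list(s[1:4]))        # slice read goes through __getsingleitem__: [1, 2, 3]
s[1:4] = [10, 20, 30]      # plain slice assignment may change the length
del s[::2]                 # extended slice delete is handled item by item
print(list(s))             # expected: [10, 30]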
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/composer.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/composer.py
Changed
@@ -3,7 +3,7 @@ import warnings from ruamel.yaml.error import MarkedYAMLError, ReusedAnchorWarning -from ruamel.yaml.compat import _F, nprint, nprintf # NOQA +from ruamel.yaml.compat import nprint, nprintf # NOQA from ruamel.yaml.events import ( StreamStartEvent, @@ -17,8 +17,7 @@ ) from ruamel.yaml.nodes import MappingNode, ScalarNode, SequenceNode -if False: # MYPY - from typing import Any, Dict, Optional, List # NOQA +from typing import Any, Dict, Optional, List # NOQA __all__ = 'Composer', 'ComposerError' @@ -28,30 +27,27 @@ class Composer: - def __init__(self, loader=None): - # type: (Any) -> None + def __init__(self, loader: Any = None) -> None: self.loader = loader if self.loader is not None and getattr(self.loader, '_composer', None) is None: self.loader._composer = self - self.anchors = {} # type: DictAny, Any + self.anchors: DictAny, Any = {} + self.warn_double_anchors = True @property - def parser(self): - # type: () -> Any + def parser(self) -> Any: if hasattr(self.loader, 'typ'): self.loader.parser return self.loader._parser @property - def resolver(self): - # type: () -> Any + def resolver(self) -> Any: # assert self.loader._resolver is not None if hasattr(self.loader, 'typ'): self.loader.resolver return self.loader._resolver - def check_node(self): - # type: () -> Any + def check_node(self) -> Any: # Drop the STREAM-START event. if self.parser.check_event(StreamStartEvent): self.parser.get_event() @@ -59,19 +55,17 @@ # If there are more documents available? return not self.parser.check_event(StreamEndEvent) - def get_node(self): - # type: () -> Any + def get_node(self) -> Any: # Get the root node of the next document. if not self.parser.check_event(StreamEndEvent): return self.compose_document() - def get_single_node(self): - # type: () -> Any + def get_single_node(self) -> Any: # Drop the STREAM-START event. self.parser.get_event() # Compose a document if the stream is not empty. - document = None # type: Any + document: Any = None if not self.parser.check_event(StreamEndEvent): document = self.compose_document() @@ -90,8 +84,7 @@ return document - def compose_document(self): - # type: (Any) -> Any + def compose_document(self: Any) -> Any: # Drop the DOCUMENT-START event. 
self.parser.get_event() @@ -104,36 +97,28 @@ self.anchors = {} return node - def return_alias(self, a): - # type: (Any) -> Any + def return_alias(self, a: Any) -> Any: return a - def compose_node(self, parent, index): - # type: (Any, Any) -> Any + def compose_node(self, parent: Any, index: Any) -> Any: if self.parser.check_event(AliasEvent): event = self.parser.get_event() alias = event.anchor if alias not in self.anchors: raise ComposerError( - None, - None, - _F('found undefined alias {alias!r}', alias=alias), - event.start_mark, + None, None, f'found undefined alias {alias!r}', event.start_mark, ) return self.return_alias(self.anchorsalias) event = self.parser.peek_event() anchor = event.anchor if anchor is not None: # have an anchor - if anchor in self.anchors: - # raise ComposerError( - # "found duplicate anchor %r; first occurrence" - # % (anchor), self.anchorsanchor.start_mark, - # "second occurrence", event.start_mark) + if self.warn_double_anchors and anchor in self.anchors: ws = ( - '\nfound duplicate anchor {!r}\nfirst occurrence {}\nsecond occurrence ' - '{}'.format((anchor), self.anchorsanchor.start_mark, event.start_mark) + f'\nfound duplicate anchor {anchor!r}\n' + f'first occurrence {self.anchorsanchor.start_mark}\n' + f'second occurrence {event.start_mark}' ) - warnings.warn(ws, ReusedAnchorWarning) + warnings.warn(ws, ReusedAnchorWarning, stacklevel=2) self.resolver.descend_resolver(parent, index) if self.parser.check_event(ScalarEvent): node = self.compose_scalar_node(anchor) @@ -144,12 +129,13 @@ self.resolver.ascend_resolver() return node - def compose_scalar_node(self, anchor): - # type: (Any) -> Any + def compose_scalar_node(self, anchor: Any) -> Any: event = self.parser.get_event() - tag = event.tag - if tag is None or tag == '!': + tag = event.ctag + if tag is None or str(tag) == '!': tag = self.resolver.resolve(ScalarNode, event.value, event.implicit) + assert not isinstance(tag, str) + # e.g tag.yaml.org,2002:str node = ScalarNode( tag, event.value, @@ -163,12 +149,12 @@ self.anchorsanchor = node return node - def compose_sequence_node(self, anchor): - # type: (Any) -> Any + def compose_sequence_node(self, anchor: Any) -> Any: start_event = self.parser.get_event() - tag = start_event.tag - if tag is None or tag == '!': + tag = start_event.ctag + if tag is None or str(tag) == '!': tag = self.resolver.resolve(SequenceNode, None, start_event.implicit) + assert not isinstance(tag, str) node = SequenceNode( tag, , @@ -187,21 +173,21 @@ end_event = self.parser.get_event() if node.flow_style is True and end_event.comment is not None: if node.comment is not None: + x = node.flow_style nprint( - 'Warning: unexpected end_event commment in sequence ' - 'node {}'.format(node.flow_style) + f'Warning: unexpected end_event commment in sequence node {x}' ) node.comment = end_event.comment node.end_mark = end_event.end_mark self.check_end_doc_comment(end_event, node) return node - def compose_mapping_node(self, anchor): - # type: (Any) -> Any + def compose_mapping_node(self, anchor: Any) -> Any: start_event = self.parser.get_event() - tag = start_event.tag - if tag is None or tag == '!': + tag = start_event.ctag + if tag is None or str(tag) == '!': tag = self.resolver.resolve(MappingNode, None, start_event.implicit) + assert not isinstance(tag, str) node = MappingNode( tag, , @@ -230,8 +216,7 @@ self.check_end_doc_comment(end_event, node) return node - def check_end_doc_comment(self, end_event, node): - # type: (Any, Any) -> None + def check_end_doc_comment(self, end_event: Any, 
node: Any) -> None: if end_event.comment and end_event.comment1: # pre comments on an end_event, no following to move to if node.comment is None:
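The hunk above adds a warn_double_anchors switch to Composer and keeps reporting duplicate anchors through ReusedAnchorWarning. A hedged sketch of two ways that warning might be silenced; reaching the composer through a YAML() instance assumes the high-level API caches its Composer, which is an assumption of this example rather than something shown in the diff.

import warnings
from ruamel.yaml import YAML
from ruamel.yaml.error import ReusedAnchorWarning

doc = """\
first: &a 1
second: &a 2
third: *a
"""

# option 1: flip the new attribute on the composer cached by this YAML instance
yaml = YAML()
yaml.composer.warn_double_anchors = False
data = yaml.load(doc)
print(data['third'])   # 2 -- the alias resolves to the most recent anchor

# option 2: leave the composer alone and filter the warning category instead
with warnings.catch_warnings():
    warnings.simplefilter('ignore', ReusedAnchorWarning)
    data = YAML().load(doc)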
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/configobjwalker.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/configobjwalker.py
Changed
@@ -4,11 +4,12 @@
 
 from ruamel.yaml.util import configobj_walker as new_configobj_walker
 
-if False:  # MYPY
-    from typing import Any  # NOQA
+from typing import Any
 
 
-def configobj_walker(cfg):
-    # type: (Any) -> Any
-    warnings.warn('configobj_walker has moved to ruamel.yaml.util, please update your code')
+def configobj_walker(cfg: Any) -> Any:
+    warnings.warn(
+        'configobj_walker has moved to ruamel.yaml.util, please update your code',
+        stacklevel=2
+    )
     return new_configobj_walker(cfg)
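A short migration sketch implied by the warning above: import the walker from its new home in ruamel.yaml.util. The cfg object is hypothetical, since the diff does not show how it is built.

# preferred import after this change; the old module still works but warns
from ruamel.yaml.util import configobj_walker

# deprecated path that now triggers the warning shown above:
# from ruamel.yaml.configobjwalker import configobj_walker

# configobj_walker(cfg) expects a ConfigObj-like object ('cfg' is hypothetical):
# for line in configobj_walker(cfg):
#     print(line)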
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/constructor.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/constructor.py
Changed
@@ -13,7 +13,7 @@ MantissaNoDotYAML1_1Warning) from ruamel.yaml.nodes import * # NOQA from ruamel.yaml.nodes import (SequenceNode, MappingNode, ScalarNode) -from ruamel.yaml.compat import (_F, builtins_module, # NOQA +from ruamel.yaml.compat import (builtins_module, # NOQA nprint, nprintf, version_tnf) from ruamel.yaml.compat import ordereddict @@ -33,8 +33,7 @@ from ruamel.yaml.timestamp import TimeStamp from ruamel.yaml.util import timestamp_regexp, create_timestamp -if False: # MYPY - from typing import Any, Dict, List, Set, Generator, Union, Optional # NOQA +from typing import Any, Dict, List, Set, Iterator, Union, Optional # NOQA __all__ = 'BaseConstructor', 'SafeConstructor', 'Constructor', @@ -59,70 +58,62 @@ yaml_constructors = {} # type: DictAny, Any yaml_multi_constructors = {} # type: DictAny, Any - def __init__(self, preserve_quotes=None, loader=None): - # type: (Optionalbool, Any) -> None + def __init__(self, preserve_quotes: Optionalbool = None, loader: Any = None) -> None: self.loader = loader if self.loader is not None and getattr(self.loader, '_constructor', None) is None: self.loader._constructor = self self.loader = loader self.yaml_base_dict_type = dict self.yaml_base_list_type = list - self.constructed_objects = {} # type: DictAny, Any - self.recursive_objects = {} # type: DictAny, Any - self.state_generators = # type: ListAny + self.constructed_objects: DictAny, Any = {} + self.recursive_objects: DictAny, Any = {} + self.state_generators: ListAny = self.deep_construct = False self._preserve_quotes = preserve_quotes self.allow_duplicate_keys = version_tnf((0, 15, 1), (0, 16)) @property - def composer(self): - # type: () -> Any + def composer(self) -> Any: if hasattr(self.loader, 'typ'): return self.loader.composer try: return self.loader._composer except AttributeError: - sys.stdout.write('slt {}\n'.format(type(self))) - sys.stdout.write('slc {}\n'.format(self.loader._composer)) - sys.stdout.write('{}\n'.format(dir(self))) + sys.stdout.write(f'slt {type(self)}\n') + sys.stdout.write(f'slc {self.loader._composer}\n') + sys.stdout.write(f'{dir(self)}\n') raise @property - def resolver(self): - # type: () -> Any + def resolver(self) -> Any: if hasattr(self.loader, 'typ'): return self.loader.resolver return self.loader._resolver @property - def scanner(self): - # type: () -> Any + def scanner(self) -> Any: # needed to get to the expanded comments if hasattr(self.loader, 'typ'): return self.loader.scanner return self.loader._scanner - def check_data(self): - # type: () -> Any + def check_data(self) -> Any: # If there are more documents available? return self.composer.check_node() - def get_data(self): - # type: () -> Any + def get_data(self) -> Any: # Construct and return the next document. if self.composer.check_node(): return self.construct_document(self.composer.get_node()) - def get_single_data(self): - # type: () -> Any + def get_single_data(self) -> Any: # Ensure that the stream contains a single document and construct it. 
node = self.composer.get_single_node() if node is not None: return self.construct_document(node) return None - def construct_document(self, node): - # type: (Any) -> Any + def construct_document(self, node: Any) -> Any: data = self.construct_object(node) while bool(self.state_generators): state_generators = self.state_generators @@ -135,8 +126,7 @@ self.deep_construct = False return data - def construct_object(self, node, deep=False): - # type: (Any, bool) -> Any + def construct_object(self, node: Any, deep: bool = False) -> Any: """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ @@ -159,9 +149,8 @@ self.deep_construct = old_deep return data - def construct_non_recursive_object(self, node, tag=None): - # type: (Any, Optionalstr) -> Any - constructor = None # type: Any + def construct_non_recursive_object(self, node: Any, tag: Optionalstr = None) -> Any: + constructor: Any = None tag_suffix = None if tag is None: tag = node.tag @@ -199,19 +188,14 @@ self.state_generators.append(generator) return data - def construct_scalar(self, node): - # type: (Any) -> Any + def construct_scalar(self, node: Any) -> Any: if not isinstance(node, ScalarNode): raise ConstructorError( - None, - None, - _F('expected a scalar node, but found {node_id!s}', node_id=node.id), - node.start_mark, + None, None, f'expected a scalar node, but found {node.id!s}', node.start_mark, ) return node.value - def construct_sequence(self, node, deep=False): - # type: (Any, bool) -> Any + def construct_sequence(self, node: Any, deep: bool = False) -> Any: """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ @@ -219,22 +203,18 @@ raise ConstructorError( None, None, - _F('expected a sequence node, but found {node_id!s}', node_id=node.id), + f'expected a sequence node, but found {node.id!s}', node.start_mark, ) return self.construct_object(child, deep=deep) for child in node.value - def construct_mapping(self, node, deep=False): - # type: (Any, bool) -> Any + def construct_mapping(self, node: Any, deep: bool = False) -> Any: """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ if not isinstance(node, MappingNode): raise ConstructorError( - None, - None, - _F('expected a mapping node, but found {node_id!s}', node_id=node.id), - node.start_mark, + None, None, f'expected a mapping node, but found {node.id!s}', node.start_mark, ) total_mapping = self.yaml_base_dict_type() if getattr(node, 'merge', None) is not None: @@ -242,7 +222,7 @@ else: todo = (node.value, True) for values, check in todo: - mapping = self.yaml_base_dict_type() # type: DictAny, Any + mapping: DictAny, Any = self.yaml_base_dict_type() for key_node, value_node in values: # keys can be list -> deep key = self.construct_object(key_node, deep=True) @@ -267,8 +247,9 @@ total_mapping.update(mapping) return total_mapping - def check_mapping_key(self, node, key_node, mapping, key, value): - # type: (Any, Any, Any, Any, Any) -> bool + def check_mapping_key( + self, node: Any, key_node: Any, mapping: Any, key: Any, value: Any + ) -> bool: """return True if key is unique""" if key in mapping: if not self.allow_duplicate_keys: @@ -276,8 +257,8 @@ args = 'while constructing a mapping', node.start_mark, - 'found duplicate key "{}" with value "{}" ' - '(original value: "{}")'.format(key, value, mk), + f'found duplicate key "{key}" 
with value "{value}" ' + f'(original value: "{mk}")', key_node.start_mark, """ To suppress this check see: @@ -289,20 +270,19 @@ """, if self.allow_duplicate_keys is None: - warnings.warn(DuplicateKeyFutureWarning(*args)) + warnings.warn(DuplicateKeyFutureWarning(*args), stacklevel=1) else: raise DuplicateKeyError(*args) return False return True - def check_set_key(self, node, key_node, setting, key): - # type: (Any, Any, Any, Any, Any) -> None + def check_set_key(self: Any, node: Any, key_node: Any, setting: Any, key: Any) -> None: if key in setting: if not self.allow_duplicate_keys: args = 'while constructing a set', node.start_mark, - 'found duplicate key "{}"'.format(key), + f'found duplicate key "{key}"', key_node.start_mark, """ To suppress this check see: @@ -314,18 +294,14 @@ """, if self.allow_duplicate_keys is None: - warnings.warn(DuplicateKeyFutureWarning(*args)) + warnings.warn(DuplicateKeyFutureWarning(*args), stacklevel=1) else: raise DuplicateKeyError(*args) - def construct_pairs(self, node, deep=False): - # type: (Any, bool) -> Any + def construct_pairs(self, node: Any, deep: bool = False) -> Any: if not isinstance(node, MappingNode): raise ConstructorError( - None, - None, - _F('expected a mapping node, but found {node_id!s}', node_id=node.id), - node.start_mark, + None, None, f'expected a mapping node, but found {node.id!s}', node.start_mark, ) pairs = for key_node, value_node in node.value: @@ -335,37 +311,33 @@ return pairs @classmethod - def add_constructor(cls, tag, constructor): - # type: (Any, Any) -> None + def add_constructor(cls, tag: Any, constructor: Any) -> None: if 'yaml_constructors' not in cls.__dict__: cls.yaml_constructors = cls.yaml_constructors.copy() cls.yaml_constructorstag = constructor @classmethod - def add_multi_constructor(cls, tag_prefix, multi_constructor): - # type: (Any, Any) -> None + def add_multi_constructor(cls, tag_prefix: Any, multi_constructor: Any) -> None: if 'yaml_multi_constructors' not in cls.__dict__: cls.yaml_multi_constructors = cls.yaml_multi_constructors.copy() cls.yaml_multi_constructorstag_prefix = multi_constructor class SafeConstructor(BaseConstructor): - def construct_scalar(self, node): - # type: (Any) -> Any + def construct_scalar(self, node: Any) -> Any: if isinstance(node, MappingNode): for key_node, value_node in node.value: if key_node.tag == 'tag:yaml.org,2002:value': return self.construct_scalar(value_node) return BaseConstructor.construct_scalar(self, node) - def flatten_mapping(self, node): - # type: (Any) -> Any + def flatten_mapping(self, node: Any) -> Any: """ This implements the merge key feature http://yaml.org/type/merge.html by inserting keys from the merge dict/list of dicts if not yet available in this node """ - merge = # type: ListAny + merge: ListAny = index = 0 while index < len(node.value): key_node, value_node = node.valueindex @@ -378,7 +350,7 @@ args = 'while constructing a mapping', node.start_mark, - 'found duplicate key "{}"'.format(key_node.value), + f'found duplicate key "{key_node.value}"', key_node.start_mark, """ To suppress this check see: @@ -390,7 +362,7 @@ """, if self.allow_duplicate_keys is None: - warnings.warn(DuplicateKeyFutureWarning(*args)) + warnings.warn(DuplicateKeyFutureWarning(*args), stacklevel=1) else: raise DuplicateKeyError(*args) del node.valueindex @@ -404,10 +376,7 @@ raise ConstructorError( 'while constructing a mapping', node.start_mark, - _F( - 'expected a mapping for merging, but found {subnode_id!s}', - subnode_id=subnode.id, - ), + f'expected a mapping for 
merging, but found {subnode.id!s}', subnode.start_mark, ) self.flatten_mapping(subnode) @@ -419,11 +388,8 @@ raise ConstructorError( 'while constructing a mapping', node.start_mark, - _F( - 'expected a mapping or list of mappings for merging, ' - 'but found {value_node_id!s}', - value_node_id=value_node.id, - ), + 'expected a mapping or list of mappings for merging, ' + f'but found {value_node.id!s}', value_node.start_mark, ) elif key_node.tag == 'tag:yaml.org,2002:value': @@ -435,8 +401,7 @@ node.merge = merge # separate merge keys to be able to update without duplicate node.value = merge + node.value - def construct_mapping(self, node, deep=False): - # type: (Any, bool) -> Any + def construct_mapping(self, node: Any, deep: bool = False) -> Any: """deep is True when creating an object/mapping recursively, in that case want the underlying elements available during construction """ @@ -444,8 +409,7 @@ self.flatten_mapping(node) return BaseConstructor.construct_mapping(self, node, deep=deep) - def construct_yaml_null(self, node): - # type: (Any) -> Any + def construct_yaml_null(self, node: Any) -> Any: self.construct_scalar(node) return None @@ -461,13 +425,11 @@ 'off': False, } - def construct_yaml_bool(self, node): - # type: (Any) -> bool + def construct_yaml_bool(self, node: Any) -> bool: value = self.construct_scalar(node) return self.bool_valuesvalue.lower() - def construct_yaml_int(self, node): - # type: (Any) -> int + def construct_yaml_int(self, node: Any) -> int: value_s = self.construct_scalar(node) value_s = value_s.replace('_', "") sign = +1 @@ -502,8 +464,7 @@ inf_value *= inf_value nan_value = -inf_value / inf_value # Trying to make a quiet NaN (like C99). - def construct_yaml_float(self, node): - # type: (Any) -> float + def construct_yaml_float(self, node: Any) -> float: value_so = self.construct_scalar(node) value_s = value_so.replace('_', "").lower() sign = +1 @@ -529,34 +490,29 @@ # value_s is lower case independent of input mantissa, exponent = value_s.split('e') if '.' 
not in mantissa: - warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so)) + warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so), stacklevel=1) return sign * float(value_s) - def construct_yaml_binary(self, node): - # type: (Any) -> Any + def construct_yaml_binary(self, node: Any) -> Any: try: value = self.construct_scalar(node).encode('ascii') except UnicodeEncodeError as exc: raise ConstructorError( None, None, - _F('failed to convert base64 data into ascii: {exc!s}', exc=exc), + f'failed to convert base64 data into ascii: {exc!s}', node.start_mark, ) try: return base64.decodebytes(value) except binascii.Error as exc: raise ConstructorError( - None, - None, - _F('failed to decode base64 data: {exc!s}', exc=exc), - node.start_mark, + None, None, f'failed to decode base64 data: {exc!s}', node.start_mark, ) timestamp_regexp = timestamp_regexp # moved to util 0.17.17 - def construct_yaml_timestamp(self, node, values=None): - # type: (Any, Any) -> Any + def construct_yaml_timestamp(self, node: Any, values: Any = None) -> Any: if values is None: try: match = self.timestamp_regexp.match(node.value) @@ -566,14 +522,13 @@ raise ConstructorError( None, None, - 'failed to construct timestamp from "{}"'.format(node.value), + f'failed to construct timestamp from "{node.value}"', node.start_mark, ) values = match.groupdict() return create_timestamp(**values) - def construct_yaml_omap(self, node): - # type: (Any) -> Any + def construct_yaml_omap(self, node: Any) -> Any: # Note: we do now check for duplicate keys omap = ordereddict() yield omap @@ -581,7 +536,7 @@ raise ConstructorError( 'while constructing an ordered map', node.start_mark, - _F('expected a sequence, but found {node_id!s}', node_id=node.id), + f'expected a sequence, but found {node.id!s}', node.start_mark, ) for subnode in node.value: @@ -589,20 +544,14 @@ raise ConstructorError( 'while constructing an ordered map', node.start_mark, - _F( - 'expected a mapping of length 1, but found {subnode_id!s}', - subnode_id=subnode.id, - ), + f'expected a mapping of length 1, but found {subnode.id!s}', subnode.start_mark, ) if len(subnode.value) != 1: raise ConstructorError( 'while constructing an ordered map', node.start_mark, - _F( - 'expected a single mapping item, but found {len_subnode_val:d} items', - len_subnode_val=len(subnode.value), - ), + f'expected a single mapping item, but found {len(subnode.value):d} items', subnode.start_mark, ) key_node, value_node = subnode.value0 @@ -611,16 +560,15 @@ value = self.construct_object(value_node) omapkey = value - def construct_yaml_pairs(self, node): - # type: (Any) -> Any + def construct_yaml_pairs(self, node: Any) -> Any: # Note: the same code as `construct_yaml_omap`. 
- pairs = # type: ListAny + pairs: ListAny = yield pairs if not isinstance(node, SequenceNode): raise ConstructorError( 'while constructing pairs', node.start_mark, - _F('expected a sequence, but found {node_id!s}', node_id=node.id), + f'expected a sequence, but found {node.id!s}', node.start_mark, ) for subnode in node.value: @@ -628,20 +576,14 @@ raise ConstructorError( 'while constructing pairs', node.start_mark, - _F( - 'expected a mapping of length 1, but found {subnode_id!s}', - subnode_id=subnode.id, - ), + f'expected a mapping of length 1, but found {subnode.id!s}', subnode.start_mark, ) if len(subnode.value) != 1: raise ConstructorError( 'while constructing pairs', node.start_mark, - _F( - 'expected a single mapping item, but found {len_subnode_val:d} items', - len_subnode_val=len(subnode.value), - ), + f'expected a single mapping item, but found {len(subnode.value):d} items', subnode.start_mark, ) key_node, value_node = subnode.value0 @@ -649,33 +591,28 @@ value = self.construct_object(value_node) pairs.append((key, value)) - def construct_yaml_set(self, node): - # type: (Any) -> Any - data = set() # type: SetAny + def construct_yaml_set(self, node: Any) -> Any: + data: SetAny = set() yield data value = self.construct_mapping(node) data.update(value) - def construct_yaml_str(self, node): - # type: (Any) -> Any + def construct_yaml_str(self, node: Any) -> Any: value = self.construct_scalar(node) return value - def construct_yaml_seq(self, node): - # type: (Any) -> Any - data = self.yaml_base_list_type() # type: ListAny + def construct_yaml_seq(self, node: Any) -> Any: + data: ListAny = self.yaml_base_list_type() yield data data.extend(self.construct_sequence(node)) - def construct_yaml_map(self, node): - # type: (Any) -> Any - data = self.yaml_base_dict_type() # type: DictAny, Any + def construct_yaml_map(self, node: Any) -> Any: + data: DictAny, Any = self.yaml_base_dict_type() yield data value = self.construct_mapping(node) data.update(value) - def construct_yaml_object(self, node, cls): - # type: (Any, Any) -> Any + def construct_yaml_object(self, node: Any, cls: Any) -> Any: data = cls.__new__(cls) yield data if hasattr(data, '__setstate__'): @@ -685,14 +622,11 @@ state = self.construct_mapping(node) data.__dict__.update(state) - def construct_undefined(self, node): - # type: (Any) -> None + def construct_undefined(self, node: Any) -> None: raise ConstructorError( None, None, - _F( - 'could not determine a constructor for the tag {node_tag!r}', node_tag=node.tag - ), + f'could not determine a constructor for the tag {node.tag!r}', node.start_mark, ) @@ -733,50 +667,40 @@ class Constructor(SafeConstructor): - def construct_python_str(self, node): - # type: (Any) -> Any + def construct_python_str(self, node: Any) -> Any: return self.construct_scalar(node) - def construct_python_unicode(self, node): - # type: (Any) -> Any + def construct_python_unicode(self, node: Any) -> Any: return self.construct_scalar(node) - def construct_python_bytes(self, node): - # type: (Any) -> Any + def construct_python_bytes(self, node: Any) -> Any: try: value = self.construct_scalar(node).encode('ascii') except UnicodeEncodeError as exc: raise ConstructorError( None, None, - _F('failed to convert base64 data into ascii: {exc!s}', exc=exc), + f'failed to convert base64 data into ascii: {exc!s}', node.start_mark, ) try: return base64.decodebytes(value) except binascii.Error as exc: raise ConstructorError( - None, - None, - _F('failed to decode base64 data: {exc!s}', exc=exc), - node.start_mark, + None, 
None, f'failed to decode base64 data: {exc!s}', node.start_mark, ) - def construct_python_long(self, node): - # type: (Any) -> int + def construct_python_long(self, node: Any) -> int: val = self.construct_yaml_int(node) return val - def construct_python_complex(self, node): - # type: (Any) -> Any + def construct_python_complex(self, node: Any) -> Any: return complex(self.construct_scalar(node)) - def construct_python_tuple(self, node): - # type: (Any) -> Any + def construct_python_tuple(self, node: Any) -> Any: return tuple(self.construct_sequence(node)) - def find_python_module(self, name, mark): - # type: (Any, Any) -> Any + def find_python_module(self, name: Any, mark: Any) -> Any: if not name: raise ConstructorError( 'while constructing a Python module', @@ -790,13 +714,12 @@ raise ConstructorError( 'while constructing a Python module', mark, - _F('cannot find module {name!r} ({exc!s})', name=name, exc=exc), + f'cannot find module {name!r} ({exc!s})', mark, ) return sys.modulesname - def find_python_name(self, name, mark): - # type: (Any, Any) -> Any + def find_python_name(self, name: Any, mark: Any) -> Any: if not name: raise ConstructorError( 'while constructing a Python object', @@ -807,7 +730,7 @@ if '.' in name: lname = name.split('.') lmodule_name = lname - lobject_name = # type: ListAny + lobject_name: ListAny = while len(lmodule_name) > 1: lobject_name.insert(0, lmodule_name.pop()) module_name = '.'.join(lmodule_name) @@ -826,11 +749,7 @@ raise ConstructorError( 'while constructing a Python object', mark, - _F( - 'cannot find module {module_name!r} ({exc!s})', - module_name=module_name, - exc=exc, - ), + f'cannot find module {module_name!r} ({exc!s})', mark, ) module = sys.modulesmodule_name @@ -842,42 +761,37 @@ raise ConstructorError( 'while constructing a Python object', mark, - _F( - 'cannot find {object_name!r} in the module {module_name!r}', - object_name=object_name, - module_name=module.__name__, - ), + f'cannot find {object_name!r} in the module {module.__name__!r}', mark, ) obj = getattr(obj, lobject_name.pop(0)) return obj - def construct_python_name(self, suffix, node): - # type: (Any, Any) -> Any + def construct_python_name(self, suffix: Any, node: Any) -> Any: value = self.construct_scalar(node) if value: raise ConstructorError( 'while constructing a Python name', node.start_mark, - _F('expected the empty value, but found {value!r}', value=value), + f'expected the empty value, but found {value!r}', node.start_mark, ) return self.find_python_name(suffix, node.start_mark) - def construct_python_module(self, suffix, node): - # type: (Any, Any) -> Any + def construct_python_module(self, suffix: Any, node: Any) -> Any: value = self.construct_scalar(node) if value: raise ConstructorError( 'while constructing a Python module', node.start_mark, - _F('expected the empty value, but found {value!r}', value=value), + f'expected the empty value, but found {value!r}', node.start_mark, ) return self.find_python_module(suffix, node.start_mark) - def make_python_instance(self, suffix, node, args=None, kwds=None, newobj=False): - # type: (Any, Any, Any, Any, bool) -> Any + def make_python_instance( + self, suffix: Any, node: Any, args: Any = None, kwds: Any = None, newobj: bool = False + ) -> Any: if not args: args = if not kwds: @@ -888,12 +802,11 @@ else: return cls(*args, **kwds) - def set_python_instance_state(self, instance, state): - # type: (Any, Any) -> None + def set_python_instance_state(self, instance: Any, state: Any) -> None: if hasattr(instance, '__setstate__'): 
instance.__setstate__(state) else: - slotstate = {} # type: DictAny, Any + slotstate: DictAny, Any = {} if isinstance(state, tuple) and len(state) == 2: state, slotstate = state if hasattr(instance, '__dict__'): @@ -903,8 +816,7 @@ for key, value in slotstate.items(): setattr(instance, key, value) - def construct_python_object(self, suffix, node): - # type: (Any, Any) -> Any + def construct_python_object(self, suffix: Any, node: Any) -> Any: # Format: # !!python/object:module.name { ... state ... } instance = self.make_python_instance(suffix, node, newobj=True) @@ -914,8 +826,9 @@ state = self.construct_mapping(node, deep=deep) self.set_python_instance_state(instance, state) - def construct_python_object_apply(self, suffix, node, newobj=False): - # type: (Any, Any, bool) -> Any + def construct_python_object_apply( + self, suffix: Any, node: Any, newobj: bool = False + ) -> Any: # Format: # !!python/object/apply # (or !!python/object/new) # args: ... arguments ... @@ -929,10 +842,10 @@ # is how an object is created, check make_python_instance for details. if isinstance(node, SequenceNode): args = self.construct_sequence(node, deep=True) - kwds = {} # type: DictAny, Any - state = {} # type: DictAny, Any - listitems = # type: ListAny - dictitems = {} # type: DictAny, Any + kwds: DictAny, Any = {} + state: DictAny, Any = {} + listitems: ListAny = + dictitems: DictAny, Any = {} else: value = self.construct_mapping(node, deep=True) args = value.get('args', ) @@ -950,8 +863,7 @@ instancekey = dictitemskey return instance - def construct_python_object_new(self, suffix, node): - # type: (Any, Any) -> Any + def construct_python_object_new(self, suffix: Any, node: Any) -> Any: return self.construct_python_object_apply(suffix, node, newobj=True) @@ -1013,15 +925,13 @@ as well as on the items """ - def comment(self, idx): - # type: (Any) -> Any + def comment(self, idx: Any) -> Any: assert self.loader.comment_handling is not None x = self.scanner.commentsidx x.set_assigned() return x - def comments(self, list_of_comments, idx=None): - # type: (Any, OptionalAny) -> Any + def comments(self, list_of_comments: Any, idx: OptionalAny = None) -> Any: # hand in the comment and optional pre, eol, post segment if list_of_comments is None: return @@ -1032,14 +942,10 @@ for x in list_of_comments: yield self.comment(x) - def construct_scalar(self, node): - # type: (Any) -> Any + def construct_scalar(self, node: Any) -> Any: if not isinstance(node, ScalarNode): raise ConstructorError( - None, - None, - _F('expected a scalar node, but found {node_id!s}', node_id=node.id), - node.start_mark, + None, None, f'expected a scalar node, but found {node.id!s}', node.start_mark, ) if node.style == '|' and isinstance(node.value, str): @@ -1055,7 +961,7 @@ lss.comment = self.comment(node.comment10) # type: ignore return lss if node.style == '>' and isinstance(node.value, str): - fold_positions = # type: Listint + fold_positions: Listint = idx = -1 while True: idx = node.value.find('\a', idx + 1) @@ -1080,17 +986,27 @@ return SingleQuotedScalarString(node.value, anchor=node.anchor) if node.style == '"': return DoubleQuotedScalarString(node.value, anchor=node.anchor) + # if node.ctag: + # data2 = TaggedScalar() + # data2.value = node.value + # data2.style = node.style + # data2.yaml_set_ctag(node.ctag) + # if node.anchor: + # from ruamel.yaml.serializer import templated_id + + # if not templated_id(node.anchor): + # data2.yaml_set_anchor(node.anchor, always_dump=True) + # return data2 if node.anchor: return 
PlainScalarString(node.value, anchor=node.anchor) return node.value - def construct_yaml_int(self, node): - # type: (Any) -> Any - width = None # type: Any + def construct_yaml_int(self, node: Any) -> Any: + width: Any = None value_su = self.construct_scalar(node) try: sx = value_su.rstrip('_') - underscore = len(sx) - sx.rindex('_') - 1, False, False # type: Any + underscore: Any = len(sx) - sx.rindex('_') - 1, False, False except ValueError: underscore = None except IndexError: @@ -1119,7 +1035,7 @@ # default to lower-case if no a-fA-F in string if self.resolver.processing_version > (1, 1) and value_s2 == '0': width = len(value_s2:) - hex_fun = HexInt # type: Any + hex_fun: Any = HexInt for ch in value_s2:: if ch in 'ABCDEF': # first non-digit is capital hex_fun = HexCapsInt @@ -1148,7 +1064,12 @@ anchor=node.anchor, ) elif self.resolver.processing_version != (1, 2) and value_s0 == '0': - return sign * int(value_s, 8) + return OctalInt( + sign * int(value_s, 8), + width=width, + underscore=underscore, + anchor=node.anchor, + ) elif self.resolver.processing_version != (1, 2) and ':' in value_s: digits = int(part) for part in value_s.split(':') digits.reverse() @@ -1175,10 +1096,8 @@ else: return sign * int(value_s) - def construct_yaml_float(self, node): - # type: (Any) -> Any - def leading_zeros(v): - # type: (Any) -> int + def construct_yaml_float(self, node: Any) -> Any: + def leading_zeros(v: Any) -> int: lead0 = 0 idx = 0 while idx < len(v) and vidx in '0.': @@ -1188,7 +1107,7 @@ return lead0 # underscore = None - m_sign = False # type: Any + m_sign: Any = False value_so = self.construct_scalar(node) value_s = value_so.replace('_', "").lower() sign = +1 @@ -1220,7 +1139,7 @@ if self.resolver.processing_version != (1, 2): # value_s is lower case independent of input if '.' 
not in mantissa: - warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so)) + warnings.warn(MantissaNoDotYAML1_1Warning(node, value_so), stacklevel=1) lead0 = leading_zeros(mantissa) width = len(mantissa) prec = mantissa.find('.') @@ -1241,7 +1160,8 @@ anchor=node.anchor, ) width = len(value_so) - prec = value_so.index('.') # you can use index, this would not be float without dot + # you can't use index, !!float 42 would be a float without a dot + prec = value_so.find('.') lead0 = leading_zeros(value_so) return ScalarFloat( sign * float(value_s), @@ -1252,20 +1172,21 @@ anchor=node.anchor, ) - def construct_yaml_str(self, node): - # type: (Any) -> Any - value = self.construct_scalar(node) + def construct_yaml_str(self, node: Any) -> Any: + if node.ctag.handle: + value = self.construct_unknown(node) + else: + value = self.construct_scalar(node) if isinstance(value, ScalarString): return value return value - def construct_rt_sequence(self, node, seqtyp, deep=False): - # type: (Any, Any, bool) -> Any + def construct_rt_sequence(self, node: Any, seqtyp: Any, deep: bool = False) -> Any: if not isinstance(node, SequenceNode): raise ConstructorError( None, None, - _F('expected a sequence node, but found {node_id!s}', node_id=node.id), + f'expected a sequence node, but found {node.id!s}', node.start_mark, ) ret_val = @@ -1296,16 +1217,14 @@ ) return ret_val - def flatten_mapping(self, node): - # type: (Any) -> Any + def flatten_mapping(self, node: Any) -> Any: """ This implements the merge key feature http://yaml.org/type/merge.html by inserting keys from the merge dict/list of dicts if not yet available in this node """ - def constructed(value_node): - # type: (Any) -> Any + def constructed(value_node: Any) -> Any: # If the contents of a merge are defined within the # merge marker, then they won't have been constructed # yet. 
But if they were already constructed, we need to use @@ -1313,11 +1232,11 @@ if value_node in self.constructed_objects: value = self.constructed_objectsvalue_node else: - value = self.construct_object(value_node, deep=False) + value = self.construct_object(value_node, deep=True) return value # merge = - merge_map_list = # type: ListAny + merge_map_list: ListAny = index = 0 while index < len(node.value): key_node, value_node = node.valueindex @@ -1330,7 +1249,7 @@ args = 'while constructing a mapping', node.start_mark, - 'found duplicate key "{}"'.format(key_node.value), + f'found duplicate key "{key_node.value}"', key_node.start_mark, """ To suppress this check see: @@ -1342,7 +1261,7 @@ """, if self.allow_duplicate_keys is None: - warnings.warn(DuplicateKeyFutureWarning(*args)) + warnings.warn(DuplicateKeyFutureWarning(*args), stacklevel=1) else: raise DuplicateKeyError(*args) del node.valueindex @@ -1357,10 +1276,7 @@ raise ConstructorError( 'while constructing a mapping', node.start_mark, - _F( - 'expected a mapping for merging, but found {subnode_id!s}', - subnode_id=subnode.id, - ), + f'expected a mapping for merging, but found {subnode.id!s}', subnode.start_mark, ) merge_map_list.append((index, constructed(subnode))) @@ -1373,11 +1289,8 @@ raise ConstructorError( 'while constructing a mapping', node.start_mark, - _F( - 'expected a mapping or list of mappings for merging, ' - 'but found {value_node_id!s}', - value_node_id=value_node.id, - ), + 'expected a mapping or list of mappings for merging, ' + f'but found {value_node.id!s}', value_node.start_mark, ) elif key_node.tag == 'tag:yaml.org,2002:value': @@ -1389,18 +1302,13 @@ # if merge: # node.value = merge + node.value - def _sentinel(self): - # type: () -> None + def _sentinel(self) -> None: pass - def construct_mapping(self, node, maptyp, deep=False): # type: ignore - # type: (Any, Any, bool) -> Any + def construct_mapping(self, node: Any, maptyp: Any, deep: bool = False) -> Any: # type: ignore # NOQA if not isinstance(node, MappingNode): raise ConstructorError( - None, - None, - _F('expected a mapping node, but found {node_id!s}', node_id=node.id), - node.start_mark, + None, None, f'expected a mapping node, but found {node.id!s}', node.start_mark, ) merge_map = self.flatten_mapping(node) # mapping = {} @@ -1434,6 +1342,7 @@ key_s.fa.set_flow_style() elif key_node.flow_style is False: key_s.fa.set_block_style() + key_s._yaml_set_line_col(key.lc.line, key.lc.col) # type: ignore key = key_s elif isinstance(key, MutableMapping): key_m = CommentedKeyMap(key) @@ -1441,6 +1350,7 @@ key_m.fa.set_flow_style() elif key_node.flow_style is False: key_m.fa.set_block_style() + key_m._yaml_set_line_col(key.lc.line, key.lc.col) # type: ignore key = key_m if not isinstance(key, Hashable): raise ConstructorError( @@ -1498,14 +1408,10 @@ if merge_map: maptyp.add_yaml_merge(merge_map) - def construct_setting(self, node, typ, deep=False): - # type: (Any, Any, bool) -> Any + def construct_setting(self, node: Any, typ: Any, deep: bool = False) -> Any: if not isinstance(node, MappingNode): raise ConstructorError( - None, - None, - _F('expected a mapping node, but found {node_id!s}', node_id=node.id), - node.start_mark, + None, None, f'expected a mapping node, but found {node.id!s}', node.start_mark, ) if self.loader and self.loader.comment_handling is None: if node.comment: @@ -1551,8 +1457,7 @@ nprintf('nc7b', value_node.comment) typ.add(key) - def construct_yaml_seq(self, node): - # type: (Any) -> Any + def construct_yaml_seq(self, node: Any) -> 
IteratorCommentedSeq: data = CommentedSeq() data._yaml_set_line_col(node.start_mark.line, node.start_mark.column) # if node.comment: @@ -1561,16 +1466,14 @@ data.extend(self.construct_rt_sequence(node, data)) self.set_collection_style(data, node) - def construct_yaml_map(self, node): - # type: (Any) -> Any + def construct_yaml_map(self, node: Any) -> IteratorCommentedMap: data = CommentedMap() data._yaml_set_line_col(node.start_mark.line, node.start_mark.column) yield data self.construct_mapping(node, data, deep=True) self.set_collection_style(data, node) - def set_collection_style(self, data, node): - # type: (Any, Any) -> None + def set_collection_style(self, data: Any, node: Any) -> None: if len(data) == 0: return if node.flow_style is True: @@ -1578,8 +1481,7 @@ elif node.flow_style is False: data.fa.set_block_style() - def construct_yaml_object(self, node, cls): - # type: (Any, Any) -> Any + def construct_yaml_object(self, node: Any, cls: Any) -> Any: data = cls.__new__(cls) yield data if hasattr(data, '__setstate__'): @@ -1603,8 +1505,7 @@ a = getattr(data, Anchor.attrib) a.value = node.anchor - def construct_yaml_omap(self, node): - # type: (Any) -> Any + def construct_yaml_omap(self, node: Any) -> IteratorCommentedOrderedMap: # Note: we do now check for duplicate keys omap = CommentedOrderedMap() omap._yaml_set_line_col(node.start_mark.line, node.start_mark.column) @@ -1626,7 +1527,7 @@ raise ConstructorError( 'while constructing an ordered map', node.start_mark, - _F('expected a sequence, but found {node_id!s}', node_id=node.id), + f'expected a sequence, but found {node.id!s}', node.start_mark, ) for subnode in node.value: @@ -1634,20 +1535,14 @@ raise ConstructorError( 'while constructing an ordered map', node.start_mark, - _F( - 'expected a mapping of length 1, but found {subnode_id!s}', - subnode_id=subnode.id, - ), + f'expected a mapping of length 1, but found {subnode.id!s}', subnode.start_mark, ) if len(subnode.value) != 1: raise ConstructorError( 'while constructing an ordered map', node.start_mark, - _F( - 'expected a single mapping item, but found {len_subnode_val:d} items', - len_subnode_val=len(subnode.value), - ), + f'expected a single mapping item, but found {len(subnode.value):d} items', subnode.start_mark, ) key_node, value_node = subnode.value0 @@ -1671,15 +1566,15 @@ nprintf('nc9c', value_node.comment) omapkey = value - def construct_yaml_set(self, node): - # type: (Any) -> Any + def construct_yaml_set(self, node: Any) -> IteratorCommentedSet: data = CommentedSet() data._yaml_set_line_col(node.start_mark.line, node.start_mark.column) yield data self.construct_setting(node, data) - def construct_undefined(self, node): - # type: (Any) -> Any + def construct_unknown( + self, node: Any + ) -> IteratorUnionCommentedMap, TaggedScalar, CommentedSeq: try: if isinstance(node, MappingNode): data = CommentedMap() @@ -1688,7 +1583,7 @@ data.fa.set_flow_style() elif node.flow_style is False: data.fa.set_block_style() - data.yaml_set_tag(node.tag) + data.yaml_set_ctag(node.ctag) yield data if node.anchor: from ruamel.yaml.serializer import templated_id @@ -1701,7 +1596,7 @@ data2 = TaggedScalar() data2.value = self.construct_scalar(node) data2.style = node.style - data2.yaml_set_tag(node.tag) + data2.yaml_set_ctag(node.ctag) yield data2 if node.anchor: from ruamel.yaml.serializer import templated_id @@ -1716,7 +1611,7 @@ data3.fa.set_flow_style() elif node.flow_style is False: data3.fa.set_block_style() - data3.yaml_set_tag(node.tag) + data3.yaml_set_ctag(node.ctag) yield data3 
if node.anchor: from ruamel.yaml.serializer import templated_id @@ -1730,14 +1625,13 @@ raise ConstructorError( None, None, - _F( - 'could not determine a constructor for the tag {node_tag!r}', node_tag=node.tag - ), + f'could not determine a constructor for the tag {node.tag!r}', node.start_mark, ) - def construct_yaml_timestamp(self, node, values=None): - # type: (Any, Any) -> Any + def construct_yaml_timestamp( + self, node: Any, values: Any = None + ) -> Uniondatetime.date, datetime.datetime, TimeStamp: try: match = self.timestamp_regexp.match(node.value) except TypeError: @@ -1746,7 +1640,7 @@ raise ConstructorError( None, None, - 'failed to construct timestamp from "{}"'.format(node.value), + f'failed to construct timestamp from "{node.value}"', node.start_mark, ) values = match.groupdict() @@ -1769,9 +1663,15 @@ if values'tz_sign' == '-': delta = -delta # should check for None and solve issue 366 should be tzinfo=delta) - data = TimeStamp( - dd.year, dd.month, dd.day, dd.hour, dd.minute, dd.second, dd.microsecond - ) + # isinstance(datetime.datetime.now, datetime.date) is true) + if isinstance(dd, datetime.datetime): + data = TimeStamp( + dd.year, dd.month, dd.day, dd.hour, dd.minute, dd.second, dd.microsecond + ) + else: + # ToDo: make this into a DateStamp? + data = TimeStamp(dd.year, dd.month, dd.day, 0, 0, 0, 0) + return data if delta: data._yaml'delta' = delta tz = values'tz_sign' + values'tz_hour' @@ -1781,13 +1681,11 @@ else: if values'tz': # no delta data._yaml'tz' = values'tz' - if values't': data._yaml't' = True return data - def construct_yaml_bool(self, node): - # type: (Any) -> Any + def construct_yaml_sbool(self, node: Any) -> Unionbool, ScalarBoolean: b = SafeConstructor.construct_yaml_bool(self, node) if node.anchor: return ScalarBoolean(b, anchor=node.anchor) @@ -1799,7 +1697,7 @@ ) RoundTripConstructor.add_constructor( - 'tag:yaml.org,2002:bool', RoundTripConstructor.construct_yaml_bool + 'tag:yaml.org,2002:bool', RoundTripConstructor.construct_yaml_sbool ) RoundTripConstructor.add_constructor( @@ -1842,4 +1740,4 @@ 'tag:yaml.org,2002:map', RoundTripConstructor.construct_yaml_map ) -RoundTripConstructor.add_constructor(None, RoundTripConstructor.construct_undefined) +RoundTripConstructor.add_constructor(None, RoundTripConstructor.construct_unknown)
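The flatten_mapping hunk above now constructs merge values with deep=True, which is the change behind the inline-mapping-as-merge-value fix mentioned in the changelog. A hedged sketch of that situation; the document and key names are invented for illustration.

from ruamel.yaml import YAML

doc = """\
job:
  <<: {retries: 3, timeout: 30}   # inline mapping directly as merge value
  name: build
"""

data = YAML().load(doc)
job = data['job']
print(job['retries'], job['timeout'], job['name'])   # expected: 3 30 build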
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/cyaml.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/cyaml.py
Changed
@@ -6,9 +6,9 @@ from ruamel.yaml.representer import Representer, SafeRepresenter, BaseRepresenter from ruamel.yaml.resolver import Resolver, BaseResolver -if False: # MYPY - from typing import Any, Union, Optional # NOQA - from ruamel.yaml.compat import StreamTextType, StreamType, VersionType # NOQA + +from typing import Any, Union, Optional # NOQA +from ruamel.yaml.compat import StreamTextType, StreamType, VersionType # NOQA __all__ = 'CBaseLoader', 'CSafeLoader', 'CLoader', 'CBaseDumper', 'CSafeDumper', 'CDumper' @@ -18,8 +18,12 @@ class CBaseLoader(CParser, BaseConstructor, BaseResolver): # type: ignore - def __init__(self, stream, version=None, preserve_quotes=None): - # type: (StreamTextType, OptionalVersionType, Optionalbool) -> None + def __init__( + self, + stream: StreamTextType, + version: OptionalVersionType = None, + preserve_quotes: Optionalbool = None, + ) -> None: CParser.__init__(self, stream) self._parser = self._composer = self BaseConstructor.__init__(self, loader=self) @@ -30,8 +34,12 @@ class CSafeLoader(CParser, SafeConstructor, Resolver): # type: ignore - def __init__(self, stream, version=None, preserve_quotes=None): - # type: (StreamTextType, OptionalVersionType, Optionalbool) -> None + def __init__( + self, + stream: StreamTextType, + version: OptionalVersionType = None, + preserve_quotes: Optionalbool = None, + ) -> None: CParser.__init__(self, stream) self._parser = self._composer = self SafeConstructor.__init__(self, loader=self) @@ -42,8 +50,12 @@ class CLoader(CParser, Constructor, Resolver): # type: ignore - def __init__(self, stream, version=None, preserve_quotes=None): - # type: (StreamTextType, OptionalVersionType, Optionalbool) -> None + def __init__( + self, + stream: StreamTextType, + version: OptionalVersionType = None, + preserve_quotes: Optionalbool = None, + ) -> None: CParser.__init__(self, stream) self._parser = self._composer = self Constructor.__init__(self, loader=self) @@ -55,25 +67,25 @@ class CBaseDumper(CEmitter, BaseRepresenter, BaseResolver): # type: ignore def __init__( - self, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: (StreamType, Any, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + self: StreamType, + stream: Any, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA CEmitter.__init__( self, stream, @@ -100,25 +112,25 @@ class CSafeDumper(CEmitter, SafeRepresenter, Resolver): # type: ignore def __init__( - self, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: 
(StreamType, Any, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + self: StreamType, + stream: Any, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA self._emitter = self._serializer = self._representer = self CEmitter.__init__( self, @@ -143,25 +155,25 @@ class CDumper(CEmitter, Representer, Resolver): # type: ignore def __init__( - self, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: (StreamType, Any, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + self: StreamType, + stream: Any, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA CEmitter.__init__( self, stream,
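The cyaml.py hunk above only modernises the type annotations of the C-extension wrappers (CBaseLoader, CSafeLoader, CLoader and the matching dumpers); their behaviour is unchanged. A minimal sketch, assuming the optional _ruamel_yaml C extension may or may not be present on the target system, of how these wrappers are reached through the public YAML facade:

import io
from ruamel.yaml import YAML

# Whether the C classes are actually used depends on the optional
# _ruamel_yaml extension being importable; pure=True always bypasses them.
yaml_c = YAML(typ='safe')               # CSafeLoader/CSafeDumper when available
yaml_py = YAML(typ='safe', pure=True)   # force the pure-Python Loader/Dumper

data = yaml_c.load('a: 1\nb: [2, 3]\n')
buf = io.StringIO()
yaml_py.dump(data, buf)
print(buf.getvalue())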
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/dumper.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/dumper.py
Changed
@@ -10,34 +10,33 @@ ) from ruamel.yaml.resolver import Resolver, BaseResolver, VersionedResolver -if False: # MYPY - from typing import Any, Dict, List, Union, Optional # NOQA - from ruamel.yaml.compat import StreamType, VersionType # NOQA +from typing import Any, Dict, List, Union, Optional # NOQA +from ruamel.yaml.compat import StreamType, VersionType # NOQA __all__ = 'BaseDumper', 'SafeDumper', 'Dumper', 'RoundTripDumper' class BaseDumper(Emitter, Serializer, BaseRepresenter, BaseResolver): def __init__( - self, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: (Any, StreamType, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + self: Any, + stream: StreamType, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA Emitter.__init__( self, stream, @@ -70,24 +69,24 @@ class SafeDumper(Emitter, Serializer, SafeRepresenter, Resolver): def __init__( self, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: (StreamType, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + stream: StreamType, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA Emitter.__init__( self, stream, @@ -120,24 +119,24 @@ class Dumper(Emitter, Serializer, Representer, Resolver): def __init__( self, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: (StreamType, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + stream: StreamType, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: 
Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA Emitter.__init__( self, stream, @@ -170,24 +169,24 @@ class RoundTripDumper(Emitter, Serializer, RoundTripRepresenter, VersionedResolver): def __init__( self, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: (StreamType, Any, Optionalbool, Optionalint, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + stream: StreamType, + default_style: Any = None, + default_flow_style: Optionalbool = None, + canonical: Optionalint = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA Emitter.__init__( self, stream,
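The dumper hunks keep the long keyword list (default_style, canonical, explicit_start, block_seq_indent and so on) intact and only move the type comments into the signatures. From user code these settings are normally reached as attributes of a YAML instance rather than as constructor arguments; a hedged sketch under that assumption:

import io
import sys
from ruamel.yaml import YAML

yaml = YAML()                                 # round-trip dumper by default
yaml.explicit_start = True                    # emit a leading '---'
yaml.width = 72                               # wrap width handed to the emitter
yaml.indent(mapping=2, sequence=4, offset=2)  # block indentation settings

buf = io.StringIO()
yaml.dump({'name': 'ruamel.yaml', 'tags': ['yaml', 'parser']}, buf)
sys.stdout.write(buf.getvalue())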
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/emitter.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/emitter.py
Changed
@@ -12,13 +12,13 @@ from ruamel.yaml.events import * # NOQA # fmt: off -from ruamel.yaml.compat import _F, nprint, dbg, DBG_EVENT, \ +from ruamel.yaml.compat import nprint, dbg, DBG_EVENT, \ check_anchorname_char, nprintf # NOQA # fmt: on -if False: # MYPY - from typing import Any, Dict, List, Union, Text, Tuple, Optional # NOQA - from ruamel.yaml.compat import StreamType # NOQA + +from typing import Any, Dict, List, Union, Text, Tuple, Optional # NOQA +from ruamel.yaml.compat import StreamType # NOQA __all__ = 'Emitter', 'EmitterError' @@ -30,16 +30,15 @@ class ScalarAnalysis: def __init__( self, - scalar, - empty, - multiline, - allow_flow_plain, - allow_block_plain, - allow_single_quoted, - allow_double_quoted, - allow_block, - ): - # type: (Any, Any, Any, bool, bool, bool, bool, bool) -> None + scalar: Any, + empty: Any, + multiline: Any, + allow_flow_plain: bool, + allow_block_plain: bool, + allow_single_quoted: bool, + allow_double_quoted: bool, + allow_block: bool, + ) -> None: self.scalar = scalar self.empty = empty self.multiline = multiline @@ -52,20 +51,16 @@ class Indents: # replacement for the list based stack of None/int - def __init__(self): - # type: () -> None - self.values = # type: ListTupleAny, bool + def __init__(self) -> None: + self.values: ListTupleAny, bool = - def append(self, val, seq): - # type: (Any, Any) -> None + def append(self, val: Any, seq: Any) -> None: self.values.append((val, seq)) - def pop(self): - # type: () -> Any + def pop(self) -> Any: return self.values.pop()0 - def last_seq(self): - # type: () -> bool + def last_seq(self) -> bool: # return the seq(uence) value for the element added before the last one # in increase_indent() try: @@ -73,8 +68,9 @@ except IndexError: return False - def seq_flow_align(self, seq_indent, column, pre_comment=False): - # type: (int, int, Optionalbool) -> int + def seq_flow_align( + self, seq_indent: int, column: int, pre_comment: Optionalbool = False + ) -> int: # extra spaces because of dash # nprint('seq_flow_align', self.values, pre_comment) if len(self.values) < 2 or not self.values-11: @@ -87,8 +83,7 @@ # -1 for the dash return base + seq_indent - column - 1 # type: ignore - def __len__(self): - # type: () -> int + def __len__(self) -> int: return len(self.values) @@ -97,6 +92,7 @@ DEFAULT_TAG_PREFIXES = { '!': '!', 'tag:yaml.org,2002:': '!!', + '!!': '!!', } # fmt: on @@ -104,44 +100,44 @@ def __init__( self, - stream, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - brace_single_entry_mapping_in_flow_sequence=None, - dumper=None, - ): - # type: (StreamType, Any, Optionalint, Optionalint, Optionalbool, Any, Optionalint, Optionalbool, Any, Optionalbool, Any) -> None # NOQA + stream: StreamType, + canonical: Any = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + block_seq_indent: Optionalint = None, + top_level_colon_align: Optionalbool = None, + prefix_colon: Any = None, + brace_single_entry_mapping_in_flow_sequence: Optionalbool = None, + dumper: Any = None, + ) -> None: + # NOQA self.dumper = dumper if self.dumper is not None and getattr(self.dumper, '_emitter', None) is None: self.dumper._emitter = self self.stream = stream # Encoding can be overriden by STREAM-START. 
- self.encoding = None # type: OptionalText + self.encoding: OptionalText = None self.allow_space_break = None # Emitter is a state machine with a stack of states to handle nested # structures. - self.states = # type: ListAny - self.state = self.expect_stream_start # type: Any + self.states: ListAny = + self.state: Any = self.expect_stream_start # Current event and the event queue. - self.events = # type: ListAny - self.event = None # type: Any + self.events: ListAny = + self.event: Any = None # The current indentation level and the stack of previous indents. self.indents = Indents() - self.indent = None # type: Optionalint + self.indent: Optionalint = None # flow_context is an expanding/shrinking list consisting of '{' and '' # for each unclosed flow context. If empty list that means block context - self.flow_context = # type: ListText + self.flow_context: ListText = # Contexts. self.root_context = False @@ -161,7 +157,7 @@ self.compact_seq_seq = True # dash after dash self.compact_seq_map = True # key after dash # self.compact_ms = False # dash after key, only when excplicit key with ? - self.no_newline = None # type: Optionalbool # set if directly after `- ` + self.no_newline: Optionalbool = None # set if directly after `- ` # Whether the document requires an explicit document end indicator self.open_ended = False @@ -191,36 +187,34 @@ self.best_width = 80 if width and width > self.best_sequence_indent * 2: self.best_width = width - self.best_line_break = '\n' # type: Any + self.best_line_break: Any = '\n' if line_break in '\r', '\n', '\r\n': self.best_line_break = line_break # Tag prefixes. - self.tag_prefixes = None # type: Any + self.tag_prefixes: Any = None # Prepared anchor and tag. - self.prepared_anchor = None # type: Any - self.prepared_tag = None # type: Any + self.prepared_anchor: Any = None + self.prepared_tag: Any = None # Scalar analysis and style. - self.analysis = None # type: Any - self.style = None # type: Any + self.analysis: Any = None + self.style: Any = None self.scalar_after_indicator = True # write a scalar on the same line as `---` self.alt_null = 'null' @property - def stream(self): - # type: () -> Any + def stream(self) -> Any: try: return self._stream except AttributeError: - raise YAMLStreamError('output stream needs to specified') + raise YAMLStreamError('output stream needs to be specified') @stream.setter - def stream(self, val): - # type: (Any) -> None + def stream(self, val: Any) -> None: if val is None: return if not hasattr(val, 'write'): @@ -228,8 +222,7 @@ self._stream = val @property - def serializer(self): - # type: () -> Any + def serializer(self) -> Any: try: if hasattr(self.dumper, 'typ'): return self.dumper.serializer @@ -238,18 +231,15 @@ return self # cyaml @property - def flow_level(self): - # type: () -> int + def flow_level(self) -> int: return len(self.flow_context) - def dispose(self): - # type: () -> None + def dispose(self) -> None: # Reset the state attributes (to clear self-references) self.states = self.state = None - def emit(self, event): - # type: (Any) -> None + def emit(self, event: Any) -> None: if dbg(DBG_EVENT): nprint(event) self.events.append(event) @@ -260,8 +250,7 @@ # In some cases, we wait for a few next events before emitting. 
- def need_more_events(self): - # type: () -> bool + def need_more_events(self) -> bool: if not self.events: return True event = self.events0 @@ -274,8 +263,7 @@ else: return False - def need_events(self, count): - # type: (int) -> bool + def need_events(self, count: int) -> bool: level = 0 for event in self.events1:: if isinstance(event, (DocumentStartEvent, CollectionStartEvent)): @@ -288,8 +276,9 @@ return False return len(self.events) < count + 1 - def increase_indent(self, flow=False, sequence=None, indentless=False): - # type: (bool, Optionalbool, bool) -> None + def increase_indent( + self, flow: bool = False, sequence: Optionalbool = None, indentless: bool = False + ) -> None: self.indents.append(self.indent, sequence) if self.indent is None: # top level if flow: @@ -315,32 +304,24 @@ # Stream handlers. - def expect_stream_start(self): - # type: () -> None + def expect_stream_start(self) -> None: if isinstance(self.event, StreamStartEvent): if self.event.encoding and not hasattr(self.stream, 'encoding'): self.encoding = self.event.encoding self.write_stream_start() self.state = self.expect_first_document_start else: - raise EmitterError( - _F('expected StreamStartEvent, but got {self_event!s}', self_event=self.event) - ) + raise EmitterError(f'expected StreamStartEvent, but got {self.event!s}') - def expect_nothing(self): - # type: () -> None - raise EmitterError( - _F('expected nothing, but got {self_event!s}', self_event=self.event) - ) + def expect_nothing(self) -> None: + raise EmitterError(f'expected nothing, but got {self.event!s}') # Document handlers. - def expect_first_document_start(self): - # type: () -> Any + def expect_first_document_start(self) -> Any: return self.expect_document_start(first=True) - def expect_document_start(self, first=False): - # type: (bool) -> None + def expect_document_start(self, first: bool = False) -> None: if isinstance(self.event, DocumentStartEvent): if (self.event.version or self.event.tags) and self.open_ended: self.write_indicator('...', True) @@ -378,15 +359,9 @@ self.write_stream_end() self.state = self.expect_nothing else: - raise EmitterError( - _F( - 'expected DocumentStartEvent, but got {self_event!s}', - self_event=self.event, - ) - ) + raise EmitterError(f'expected DocumentStartEvent, but got {self.event!s}') - def expect_document_end(self): - # type: () -> None + def expect_document_end(self) -> None: if isinstance(self.event, DocumentEndEvent): self.write_indent() if self.event.explicit: @@ -395,19 +370,21 @@ self.flush_stream() self.state = self.expect_document_start else: - raise EmitterError( - _F('expected DocumentEndEvent, but got {self_event!s}', self_event=self.event) - ) + raise EmitterError(f'expected DocumentEndEvent, but got {self.event!s}') - def expect_document_root(self): - # type: () -> None + def expect_document_root(self) -> None: self.states.append(self.expect_document_end) self.expect_node(root=True) # Node handlers. 
- def expect_node(self, root=False, sequence=False, mapping=False, simple_key=False): - # type: (bool, bool, bool, bool) -> None + def expect_node( + self, + root: bool = False, + sequence: bool = False, + mapping: bool = False, + simple_key: bool = False, + ) -> None: self.root_context = root self.sequence_context = sequence # not used in PyYAML force_flow_indent = False @@ -472,24 +449,21 @@ or self.event.flow_style or self.check_empty_mapping() ): - self.expect_flow_mapping(single=self.event.nr_items == 1, - force_flow_indent=force_flow_indent) + self.expect_flow_mapping( + single=self.event.nr_items == 1, force_flow_indent=force_flow_indent + ) else: self.expect_block_mapping() else: - raise EmitterError( - _F('expected NodeEvent, but got {self_event!s}', self_event=self.event) - ) + raise EmitterError('expected NodeEvent, but got {self.event!s}') - def expect_alias(self): - # type: () -> None + def expect_alias(self) -> None: if self.event.anchor is None: raise EmitterError('anchor is not specified for alias') self.process_anchor('*') self.state = self.states.pop() - def expect_scalar(self): - # type: () -> None + def expect_scalar(self) -> None: self.increase_indent(flow=True) self.process_scalar() self.indent = self.indents.pop() @@ -497,20 +471,19 @@ # Flow sequence handlers. - def expect_flow_sequence(self, force_flow_indent=False): - # type: (Optionalbool) -> None + def expect_flow_sequence(self, force_flow_indent: Optionalbool = False) -> None: if force_flow_indent: self.increase_indent(flow=True, sequence=True) - ind = self.indents.seq_flow_align(self.best_sequence_indent, self.column, - force_flow_indent) + ind = self.indents.seq_flow_align( + self.best_sequence_indent, self.column, force_flow_indent + ) self.write_indicator(' ' * ind + '', True, whitespace=True) if not force_flow_indent: self.increase_indent(flow=True, sequence=True) self.flow_context.append('') self.state = self.expect_first_flow_sequence_item - def expect_first_flow_sequence_item(self): - # type: () -> None + def expect_first_flow_sequence_item(self) -> None: if isinstance(self.event, SequenceEndEvent): self.indent = self.indents.pop() popped = self.flow_context.pop() @@ -528,8 +501,7 @@ self.states.append(self.expect_flow_sequence_item) self.expect_node(sequence=True) - def expect_flow_sequence_item(self): - # type: () -> None + def expect_flow_sequence_item(self) -> None: if isinstance(self.event, SequenceEndEvent): self.indent = self.indents.pop() popped = self.flow_context.pop() @@ -553,12 +525,14 @@ # Flow mapping handlers. 
- def expect_flow_mapping(self, single=False, force_flow_indent=False): - # type: (Optionalbool, Optionalbool) -> None + def expect_flow_mapping( + self, single: Optionalbool = False, force_flow_indent: Optionalbool = False + ) -> None: if force_flow_indent: self.increase_indent(flow=True, sequence=False) - ind = self.indents.seq_flow_align(self.best_sequence_indent, self.column, - force_flow_indent) + ind = self.indents.seq_flow_align( + self.best_sequence_indent, self.column, force_flow_indent + ) map_init = '{' if ( single @@ -575,8 +549,7 @@ self.increase_indent(flow=True, sequence=False) self.state = self.expect_first_flow_mapping_key - def expect_first_flow_mapping_key(self): - # type: () -> None + def expect_first_flow_mapping_key(self) -> None: if isinstance(self.event, MappingEndEvent): self.indent = self.indents.pop() popped = self.flow_context.pop() @@ -599,8 +572,7 @@ self.states.append(self.expect_flow_mapping_value) self.expect_node(mapping=True) - def expect_flow_mapping_key(self): - # type: () -> None + def expect_flow_mapping_key(self) -> None: if isinstance(self.event, MappingEndEvent): # if self.event.comment and self.event.comment1: # self.write_pre_comment(self.event) @@ -630,14 +602,12 @@ self.states.append(self.expect_flow_mapping_value) self.expect_node(mapping=True) - def expect_flow_mapping_simple_value(self): - # type: () -> None + def expect_flow_mapping_simple_value(self) -> None: self.write_indicator(self.prefixed_colon, False) self.states.append(self.expect_flow_mapping_key) self.expect_node(mapping=True) - def expect_flow_mapping_value(self): - # type: () -> None + def expect_flow_mapping_value(self) -> None: if self.canonical or self.column > self.best_width: self.write_indent() self.write_indicator(self.prefixed_colon, True) @@ -646,8 +616,7 @@ # Block sequence handlers. - def expect_block_sequence(self): - # type: () -> None + def expect_block_sequence(self) -> None: if self.mapping_context: indentless = not self.indention else: @@ -657,12 +626,10 @@ self.increase_indent(flow=False, sequence=True, indentless=indentless) self.state = self.expect_first_block_sequence_item - def expect_first_block_sequence_item(self): - # type: () -> Any + def expect_first_block_sequence_item(self) -> Any: return self.expect_block_sequence_item(first=True) - def expect_block_sequence_item(self, first=False): - # type: (bool) -> None + def expect_block_sequence_item(self, first: bool = False) -> None: if not first and isinstance(self.event, SequenceEndEvent): if self.event.comment and self.event.comment1: # final comments on a block list e.g. empty line @@ -684,19 +651,16 @@ # Block mapping handlers. 
- def expect_block_mapping(self): - # type: () -> None + def expect_block_mapping(self) -> None: if not self.mapping_context and not (self.compact_seq_map or self.column == 0): self.write_line_break() self.increase_indent(flow=False, sequence=False) self.state = self.expect_first_block_mapping_key - def expect_first_block_mapping_key(self): - # type: () -> None + def expect_first_block_mapping_key(self) -> None: return self.expect_block_mapping_key(first=True) - def expect_block_mapping_key(self, first=False): - # type: (Any) -> None + def expect_block_mapping_key(self, first: Any = False) -> None: if not first and isinstance(self.event, MappingEndEvent): if self.event.comment and self.event.comment1: # final comments from a doc @@ -727,8 +691,7 @@ self.states.append(self.expect_block_mapping_value) self.expect_node(mapping=True) - def expect_block_mapping_simple_value(self): - # type: () -> None + def expect_block_mapping_simple_value(self) -> None: if getattr(self.event, 'style', None) != '?': # prefix = '' if self.indent == 0 and self.top_level_colon_align is not None: @@ -740,8 +703,7 @@ self.states.append(self.expect_block_mapping_key) self.expect_node(mapping=True) - def expect_block_mapping_value(self): - # type: () -> None + def expect_block_mapping_value(self) -> None: self.write_indent() self.write_indicator(self.prefixed_colon, True, indention=True) self.states.append(self.expect_block_mapping_key) @@ -749,24 +711,21 @@ # Checkers. - def check_empty_sequence(self): - # type: () -> bool + def check_empty_sequence(self) -> bool: return ( isinstance(self.event, SequenceStartEvent) and bool(self.events) and isinstance(self.events0, SequenceEndEvent) ) - def check_empty_mapping(self): - # type: () -> bool + def check_empty_mapping(self) -> bool: return ( isinstance(self.event, MappingStartEvent) and bool(self.events) and isinstance(self.events0, MappingEndEvent) ) - def check_empty_document(self): - # type: () -> bool + def check_empty_document(self) -> bool: if not isinstance(self.event, DocumentStartEvent) or not self.events: return False event = self.events0 @@ -778,8 +737,7 @@ and event.value == "" ) - def check_simple_key(self): - # type: () -> bool + def check_simple_key(self) -> bool: length = 0 if isinstance(self.event, NodeEvent) and self.event.anchor is not None: if self.prepared_anchor is None: @@ -790,7 +748,7 @@ and self.event.tag is not None ): if self.prepared_tag is None: - self.prepared_tag = self.prepare_tag(self.event.tag) + self.prepared_tag = self.prepare_tag(self.event.ctag) length += len(self.prepared_tag) if isinstance(self.event, ScalarEvent): if self.analysis is None: @@ -812,8 +770,7 @@ # Anchor, Tag, and Scalar processors. 
- def process_anchor(self, indicator): - # type: (Any) -> bool + def process_anchor(self, indicator: Any) -> bool: if self.event.anchor is None: self.prepared_anchor = None return False @@ -826,8 +783,7 @@ self.prepared_anchor = None return True - def process_tag(self): - # type: () -> None + def process_tag(self) -> None: tag = self.event.tag if isinstance(self.event, ScalarEvent): if self.style is None: @@ -857,7 +813,7 @@ if tag is None: raise EmitterError('tag is not specified') if self.prepared_tag is None: - self.prepared_tag = self.prepare_tag(tag) + self.prepared_tag = self.prepare_tag(self.event.ctag) if self.prepared_tag: self.write_indicator(self.prepared_tag, True) if ( @@ -868,8 +824,10 @@ self.no_newline = True self.prepared_tag = None - def choose_scalar_style(self): - # type: () -> Any + def choose_scalar_style(self) -> Any: + # issue 449 needs this otherwise emits single quoted empty string + if self.event.value == '' and self.event.ctag.handle == '!!': + return None if self.analysis is None: self.analysis = self.analyze_scalar(self.event.value) if self.event.style == '"' or self.canonical: @@ -903,8 +861,7 @@ return "'" return '"' - def process_scalar(self): - # type: () -> None + def process_scalar(self) -> None: if self.analysis is None: self.analysis = self.analyze_scalar(self.event.value) if self.style is None: @@ -921,7 +878,11 @@ elif self.style == "'": self.write_single_quoted(self.analysis.scalar, split) elif self.style == '>': - self.write_folded(self.analysis.scalar) + try: + cmx = self.event.comment10 + except (IndexError, TypeError): + cmx = "" + self.write_folded(self.analysis.scalar, cmx) if ( self.event.comment and self.event.comment0 @@ -952,39 +913,26 @@ # Analyzers. - def prepare_version(self, version): - # type: (Any) -> Any + def prepare_version(self, version: Any) -> Any: major, minor = version if major != 1: - raise EmitterError( - _F('unsupported YAML version: {major:d}.{minor:d}', major=major, minor=minor) - ) - return _F('{major:d}.{minor:d}', major=major, minor=minor) + raise EmitterError(f'unsupported YAML version: {major:d}.{minor:d}') + return f'{major:d}.{minor:d}' - def prepare_tag_handle(self, handle): - # type: (Any) -> Any + def prepare_tag_handle(self, handle: Any) -> Any: if not handle: raise EmitterError('tag handle must not be empty') if handle0 != '!' 
or handle-1 != '!': - raise EmitterError( - _F("tag handle must start and end with '!': {handle!r}", handle=handle) - ) + raise EmitterError(f"tag handle must start and end with '!': {handle!r}") for ch in handle1:-1: if not ('0' <= ch <= '9' or 'A' <= ch <= 'Z' or 'a' <= ch <= 'z' or ch in '-_'): - raise EmitterError( - _F( - 'invalid character {ch!r} in the tag handle: {handle!r}', - ch=ch, - handle=handle, - ) - ) + raise EmitterError(f'invalid character {ch!r} in the tag handle: {handle!r}') return handle - def prepare_tag_prefix(self, prefix): - # type: (Any) -> Any + def prepare_tag_prefix(self, prefix: Any) -> Any: if not prefix: raise EmitterError('tag prefix must not be empty') - chunks = # type: ListAny + chunks: ListAny = start = end = 0 if prefix0 == '!': end = 1 @@ -1003,16 +951,16 @@ start = end = end + 1 data = ch for ch in data: - chunks.append(_F('%{ord_ch:02X}', ord_ch=ord(ch))) + chunks.append(f'%{ord(ch):02X}') if start < end: chunks.append(prefixstart:end) return "".join(chunks) - def prepare_tag(self, tag): - # type: (Any) -> Any + def prepare_tag(self, tag: Any) -> Any: if not tag: raise EmitterError('tag must not be empty') - if tag == '!': + tag = str(tag) + if tag == '!' or tag == '!!': return tag handle = None suffix = tag @@ -1021,7 +969,7 @@ if tag.startswith(prefix) and (prefix == '!' or len(prefix) < len(tag)): handle = self.tag_prefixesprefix suffix = taglen(prefix) : - chunks = # type: ListAny + chunks: ListAny = start = end = 0 ch_set = "-;/?:@&=+$,_.~*'()" if self.dumper: @@ -1044,32 +992,24 @@ start = end = end + 1 data = ch for ch in data: - chunks.append(_F('%{ord_ch:02X}', ord_ch=ord(ch))) + chunks.append(f'%{ord(ch):02X}') if start < end: chunks.append(suffixstart:end) suffix_text = "".join(chunks) if handle: - return _F('{handle!s}{suffix_text!s}', handle=handle, suffix_text=suffix_text) + return f'{handle!s}{suffix_text!s}' else: - return _F('!<{suffix_text!s}>', suffix_text=suffix_text) + return f'!<{suffix_text!s}>' - def prepare_anchor(self, anchor): - # type: (Any) -> Any + def prepare_anchor(self, anchor: Any) -> Any: if not anchor: raise EmitterError('anchor must not be empty') for ch in anchor: if not check_anchorname_char(ch): - raise EmitterError( - _F( - 'invalid character {ch!r} in the anchor: {anchor!r}', - ch=ch, - anchor=anchor, - ) - ) + raise EmitterError(f'invalid character {ch!r} in the anchor: {anchor!r}') return anchor - def analyze_scalar(self, scalar): - # type: (Any) -> Any + def analyze_scalar(self, scalar: Any) -> Any: # Empty scalar is a special case. if not scalar: return ScalarAnalysis( @@ -1249,23 +1189,25 @@ # Writers. - def flush_stream(self): - # type: () -> None + def flush_stream(self) -> None: if hasattr(self.stream, 'flush'): self.stream.flush() - def write_stream_start(self): - # type: () -> None + def write_stream_start(self) -> None: # Write BOM if needed. 
if self.encoding and self.encoding.startswith('utf-16'): self.stream.write('\uFEFF'.encode(self.encoding)) - def write_stream_end(self): - # type: () -> None + def write_stream_end(self) -> None: self.flush_stream() - def write_indicator(self, indicator, need_whitespace, whitespace=False, indention=False): - # type: (Any, Any, bool, bool) -> None + def write_indicator( + self, + indicator: Any, + need_whitespace: Any, + whitespace: bool = False, + indention: bool = False, + ) -> None: if self.whitespace or not need_whitespace: data = indicator else: @@ -1278,8 +1220,7 @@ data = data.encode(self.encoding) self.stream.write(data) - def write_indent(self): - # type: () -> None + def write_indent(self) -> None: indent = self.indent or 0 if ( not self.indention @@ -1298,8 +1239,7 @@ data = data.encode(self.encoding) # type: ignore self.stream.write(data) - def write_line_break(self, data=None): - # type: (Any) -> None + def write_line_break(self, data: Any = None) -> None: if data is None: data = self.best_line_break self.whitespace = True @@ -1310,21 +1250,15 @@ data = data.encode(self.encoding) self.stream.write(data) - def write_version_directive(self, version_text): - # type: (Any) -> None - data = _F('%YAML {version_text!s}', version_text=version_text) + def write_version_directive(self, version_text: Any) -> None: + data: Any = f'%YAML {version_text!s}' if self.encoding: data = data.encode(self.encoding) self.stream.write(data) self.write_line_break() - def write_tag_directive(self, handle_text, prefix_text): - # type: (Any, Any) -> None - data = _F( - '%TAG {handle_text!s} {prefix_text!s}', - handle_text=handle_text, - prefix_text=prefix_text, - ) + def write_tag_directive(self, handle_text: Any, prefix_text: Any) -> None: + data: Any = f'%TAG {handle_text!s} {prefix_text!s}' if self.encoding: data = data.encode(self.encoding) self.stream.write(data) @@ -1332,8 +1266,7 @@ # Scalar streams. 
- def write_single_quoted(self, text, split=True): - # type: (Any, Any) -> None + def write_single_quoted(self, text: Any, split: Any = True) -> None: if self.root_context: if self.requested_indent is not None: self.write_line_break() @@ -1415,8 +1348,7 @@ '\u2029': 'P', } - def write_double_quoted(self, text, split=True): - # type: (Any, Any) -> None + def write_double_quoted(self, text: Any, split: Any = True) -> None: if self.root_context: if self.requested_indent is not None: self.write_line_break() @@ -1435,7 +1367,11 @@ '\x20' <= ch <= '\x7E' or ( self.allow_unicode - and ('\xA0' <= ch <= '\uD7FF' or '\uE000' <= ch <= '\uFFFD') + and ( + ('\xA0' <= ch <= '\uD7FF') + or ('\uE000' <= ch <= '\uFFFD') + or ('\U00010000' <= ch <= '\U0010FFFF') + ) ) ) ): @@ -1450,11 +1386,11 @@ if ch in self.ESCAPE_REPLACEMENTS: data = '\\' + self.ESCAPE_REPLACEMENTSch elif ch <= '\xFF': - data = _F('\\x{ord_ch:02X}', ord_ch=ord(ch)) + data = '\\x%02X' % ord(ch) elif ch <= '\uFFFF': - data = _F('\\u{ord_ch:04X}', ord_ch=ord(ch)) + data = '\\u%04X' % ord(ch) else: - data = _F('\\U{ord_ch:08X}', ord_ch=ord(ch)) + data = '\\U%08X' % ord(ch) self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) @@ -1466,7 +1402,22 @@ and self.column + (end - start) > self.best_width and split ): - data = textstart:end + '\\' + # SO https://stackoverflow.com/a/75634614/1307905 + # data = textstart:end + u'\\' # <<< replaced with following six lines + need_backquote = True + if len(text) > end: + try: + space_pos = text.index(' ', end) + if ( + '"' not in textend:space_pos + and "'" not in textend:space_pos + and textspace_pos + 1 != ' ' + and textend - 1 : end + 1 != ' ' + ): + need_backquote = False + except (ValueError, IndexError): + pass + data = textstart:end + ('\\' if need_backquote else '') if start < end: start = end self.column += len(data) @@ -1477,7 +1428,11 @@ self.whitespace = False self.indention = False if textstart == ' ': - data = '\\' + if not need_backquote: + # remove leading space it will load from the newline + start += 1 + # data = u'\\' # <<< replaced with following line + data = '\\' if need_backquote else '' self.column += len(data) if bool(self.encoding): data = data.encode(self.encoding) @@ -1485,14 +1440,13 @@ end += 1 self.write_indicator('"', False) - def determine_block_hints(self, text): - # type: (Any) -> Any + def determine_block_hints(self, text: Any) -> Any: indent = 0 indicator = '' hints = '' if text: if text0 in ' \n\x85\u2028\u2029': - indent = self.best_sequence_indent + indent = 2 hints += str(indent) elif self.root_context: for end in '\n---', '\n...': @@ -1510,7 +1464,7 @@ if pos > -1: break if pos > 0: - indent = self.best_sequence_indent + indent = 2 if text-1 not in '\n\x85\u2028\u2029': indicator = '-' elif len(text) == 1 or text-2 in '\n\x85\u2028\u2029': @@ -1518,10 +1472,11 @@ hints += indicator return hints, indent, indicator - def write_folded(self, text): - # type: (Any) -> None + def write_folded(self, text: Any, comment: Any) -> None: hints, _indent, _indicator = self.determine_block_hints(text) - self.write_indicator('>' + hints, True) + if not isinstance(comment, str): + comment = '' + self.write_indicator('>' + hints + comment, True) if _indicator == '+': self.open_ended = True self.write_line_break() @@ -1584,8 +1539,7 @@ spaces = ch == ' ' end += 1 - def write_literal(self, text, comment=None): - # type: (Any, Any) -> None + def write_literal(self, text: Any, comment: Any = None) -> None: hints, _indent, _indicator = 
self.determine_block_hints(text) # if comment is not None: # try: @@ -1638,8 +1592,7 @@ breaks = ch in '\n\x85\u2028\u2029' end += 1 - def write_plain(self, text, split=True): - # type: (Any, Any) -> None + def write_plain(self, text: Any, split: Any = True) -> None: if self.root_context: if self.requested_indent is not None: self.write_line_break() @@ -1693,6 +1646,13 @@ else: if ch is None or ch in ' \n\x85\u2028\u2029': data = textstart:end + if ( + len(data) > self.best_width + and self.indent is not None + and self.column > self.indent + ): + # words longer than line length get a line of their own + self.write_indent() self.column += len(data) if self.encoding: data = data.encode(self.encoding) # type: ignore @@ -1707,10 +1667,9 @@ breaks = ch in '\n\x85\u2028\u2029' end += 1 - def write_comment(self, comment, pre=False): - # type: (Any, bool) -> None + def write_comment(self, comment: Any, pre: bool = False) -> None: value = comment.value - # nprintf('{:02d} {:02d} {!r}'.format(self.column, comment.start_mark.column, value)) + # nprintf(f'{self.column:02d} {comment.start_mark.column:02d} {value!r}') if not pre and value-1 == '\n': value = value:-1 try: @@ -1743,8 +1702,7 @@ if not pre: self.write_line_break() - def write_pre_comment(self, event): - # type: (Any) -> bool + def write_pre_comment(self, event: Any) -> bool: comments = event.comment1 if comments is None: return False @@ -1759,14 +1717,36 @@ if isinstance(event, start_events): comment.pre_done = True except TypeError: - sys.stdout.write('eventtt {} {}'.format(type(event), event)) + sys.stdout.write(f'eventtt {type(event)} {event}') raise return True - def write_post_comment(self, event): - # type: (Any) -> bool + def write_post_comment(self, event: Any) -> bool: if self.event.comment0 is None: return False comment = event.comment0 self.write_comment(comment) return True + + +class RoundTripEmitter(Emitter): + def prepare_tag(self, ctag: Any) -> Any: + if not ctag: + raise EmitterError('tag must not be empty') + tag = str(ctag) + # print('handling', repr(tag)) + if tag == '!' or tag == '!!': + return tag + handle = ctag.handle + suffix = ctag.suffix + prefixes = sorted(self.tag_prefixes.keys()) + # print('handling', repr(tag), repr(suffix), repr(handle)) + if handle is None: + for prefix in prefixes: + if tag.startswith(prefix) and (prefix == '!' or len(prefix) < len(tag)): + handle = self.tag_prefixesprefix + suffix = suffixlen(prefix) : + if handle: + return f'{handle!s}{suffix!s}' + else: + return f'!<{suffix!s}>'
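Beyond the annotation and f-string rewrite, the emitter hunks carry behavioural fixes visible in the code itself: supplementary-plane characters are no longer force-escaped in double-quoted scalars when allow_unicode is set, a plain scalar word longer than the configured width gets a line of its own, write_folded() now accepts a comment, and a RoundTripEmitter takes over tag preparation for round-trip dumps. A small sketch that exercises the first two code paths; the exact rendering is version dependent and is only assumed here:

import io
from ruamel.yaml import YAML

yaml = YAML()
yaml.allow_unicode = True   # the default, shown for clarity
yaml.width = 24             # small width to reach the long-word code path

doc = {
    'emoji': 'status 👍 ok',
    'token': 'a_single_word_much_longer_than_the_configured_width',
}
buf = io.StringIO()
yaml.dump(doc, buf)         # exercises write_plain width handling and Unicode pass-through
print(buf.getvalue())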
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/error.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/error.py
Changed
@@ -3,10 +3,7 @@ import warnings import textwrap -from ruamel.yaml.compat import _F - -if False: # MYPY - from typing import Any, Dict, Optional, List, Text # NOQA +from typing import Any, Dict, Optional, List, Text # NOQA __all__ = @@ -25,33 +22,24 @@ class StreamMark: __slots__ = 'name', 'index', 'line', 'column' - def __init__(self, name, index, line, column): - # type: (Any, int, int, int) -> None + def __init__(self, name: Any, index: int, line: int, column: int) -> None: self.name = name self.index = index self.line = line self.column = column - def __str__(self): - # type: () -> Any - where = _F( - ' in "{sname!s}", line {sline1:d}, column {scolumn1:d}', - sname=self.name, - sline1=self.line + 1, - scolumn1=self.column + 1, - ) + def __str__(self) -> Any: + where = f' in "{self.name!s}", line {self.line + 1:d}, column {self.column + 1:d}' return where - def __eq__(self, other): - # type: (Any) -> bool + def __eq__(self, other: Any) -> bool: if self.line != other.line or self.column != other.column: return False if self.name != other.name or self.index != other.index: return False return True - def __ne__(self, other): - # type: (Any) -> bool + def __ne__(self, other: Any) -> bool: return not self.__eq__(other) @@ -62,14 +50,14 @@ class StringMark(StreamMark): __slots__ = 'name', 'index', 'line', 'column', 'buffer', 'pointer' - def __init__(self, name, index, line, column, buffer, pointer): - # type: (Any, int, int, int, Any, Any) -> None + def __init__( + self, name: Any, index: int, line: int, column: int, buffer: Any, pointer: Any + ) -> None: StreamMark.__init__(self, name, index, line, column) self.buffer = buffer self.pointer = pointer - def get_snippet(self, indent=4, max_length=75): - # type: (int, int) -> Any + def get_snippet(self, indent: int = 4, max_length: int = 75) -> Any: if self.buffer is None: # always False return None head = "" @@ -90,7 +78,7 @@ break snippet = self.bufferstart:end caret = '^' - caret = '^ (line: {})'.format(self.line + 1) + caret = f'^ (line: {self.line + 1})' return ( ' ' * indent + head @@ -101,28 +89,16 @@ + caret ) - def __str__(self): - # type: () -> Any + def __str__(self) -> Any: snippet = self.get_snippet() - where = _F( - ' in "{sname!s}", line {sline1:d}, column {scolumn1:d}', - sname=self.name, - sline1=self.line + 1, - scolumn1=self.column + 1, - ) + where = f' in "{self.name!s}", line {self.line + 1:d}, column {self.column + 1:d}' if snippet is not None: where += ':\n' + snippet return where - def __repr__(self): - # type: () -> Any + def __repr__(self) -> Any: snippet = self.get_snippet() - where = _F( - ' in "{sname!s}", line {sline1:d}, column {scolumn1:d}', - sname=self.name, - sline1=self.line + 1, - scolumn1=self.column + 1, - ) + where = f' in "{self.name!s}", line {self.line + 1:d}, column {self.column + 1:d}' if snippet is not None: where += ':\n' + snippet return where @@ -131,8 +107,7 @@ class CommentMark: __slots__ = ('column',) - def __init__(self, column): - # type: (Any) -> None + def __init__(self, column: Any) -> None: self.column = column @@ -143,14 +118,13 @@ class MarkedYAMLError(YAMLError): def __init__( self, - context=None, - context_mark=None, - problem=None, - problem_mark=None, - note=None, - warn=None, - ): - # type: (Any, Any, Any, Any, Any, Any) -> None + context: Any = None, + context_mark: Any = None, + problem: Any = None, + problem_mark: Any = None, + note: Any = None, + warn: Any = None, + ) -> None: self.context = context self.context_mark = context_mark self.problem = problem @@ -158,9 +132,8 @@ 
self.note = note # warn is ignored - def __str__(self): - # type: () -> Any - lines = # type: Liststr + def __str__(self) -> Any: + lines: Liststr = if self.context is not None: lines.append(self.context) if self.context_mark is not None and ( @@ -192,14 +165,13 @@ class MarkedYAMLWarning(YAMLWarning): def __init__( self, - context=None, - context_mark=None, - problem=None, - problem_mark=None, - note=None, - warn=None, - ): - # type: (Any, Any, Any, Any, Any, Any) -> None + context: Any = None, + context_mark: Any = None, + problem: Any = None, + problem_mark: Any = None, + note: Any = None, + warn: Any = None, + ) -> None: self.context = context self.context_mark = context_mark self.problem = problem @@ -207,9 +179,8 @@ self.note = note self.warn = warn - def __str__(self): - # type: () -> Any - lines = # type: Liststr + def __str__(self) -> Any: + lines: Liststr = if self.context is not None: lines.append(self.context) if self.context_mark is not None and ( @@ -254,30 +225,26 @@ class MantissaNoDotYAML1_1Warning(YAMLWarning): - def __init__(self, node, flt_str): - # type: (Any, Any) -> None + def __init__(self, node: Any, flt_str: Any) -> None: self.node = node self.flt = flt_str - def __str__(self): - # type: () -> Any + def __str__(self) -> Any: line = self.node.start_mark.line col = self.node.start_mark.column - return """ + return f""" In YAML 1.1 floating point values should have a dot ('.') in their mantissa. See the Floating-Point Language-Independent Type for YAML™ Version 1.1 specification ( http://yaml.org/type/float.html ). This dot is not required for JSON nor for YAML 1.2 -Correct your float: "{}" on line: {}, column: {} +Correct your float: "{self.flt}" on line: {line}, column: {col} or alternatively include the following in your code: import warnings warnings.simplefilter('ignore', ruamel.yaml.error.MantissaNoDotYAML1_1Warning) -""".format( - self.flt, line, col - ) +""" warnings.simplefilter('once', MantissaNoDotYAML1_1Warning) @@ -290,14 +257,13 @@ class MarkedYAMLFutureWarning(YAMLFutureWarning): def __init__( self, - context=None, - context_mark=None, - problem=None, - problem_mark=None, - note=None, - warn=None, - ): - # type: (Any, Any, Any, Any, Any, Any) -> None + context: Any = None, + context_mark: Any = None, + problem: Any = None, + problem_mark: Any = None, + note: Any = None, + warn: Any = None, + ) -> None: self.context = context self.context_mark = context_mark self.problem = problem @@ -305,9 +271,8 @@ self.note = note self.warn = warn - def __str__(self): - # type: () -> Any - lines = # type: Liststr + def __str__(self) -> Any: + lines: Liststr = if self.context is not None: lines.append(self.context)
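error.py is a mechanical conversion of _F() calls and type comments into f-strings and inline annotations; the mark and snippet text that ends up in exception messages is unchanged. A minimal sketch of how these marked errors reach calling code, using only the class names visible in the diff:

from ruamel.yaml import YAML
from ruamel.yaml.error import YAMLError

yaml = YAML(typ='safe')
broken = 'a: [1, 2\nb: 3\n'      # unclosed flow sequence

try:
    yaml.load(broken)
except YAMLError as exc:
    # MarkedYAMLError subclasses build their message via StreamMark.__str__
    # and StringMark.get_snippet(), i.e. the ' in "...", line N, column M' text
    print(f'{type(exc).__name__}: {exc}')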
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/events.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/events.py
Changed
@@ -1,25 +1,24 @@ # coding: utf-8 -from ruamel.yaml.compat import _F - # Abstract classes. -if False: # MYPY - from typing import Any, Dict, Optional, List # NOQA +from typing import Any, Dict, Optional, List # NOQA +from ruamel.yaml.tag import Tag SHOW_LINES = False -def CommentCheck(): - # type: () -> None +def CommentCheck() -> None: pass class Event: __slots__ = 'start_mark', 'end_mark', 'comment' + crepr = 'Unspecified Event' - def __init__(self, start_mark=None, end_mark=None, comment=CommentCheck): - # type: (Any, Any, Any) -> None + def __init__( + self, start_mark: Any = None, end_mark: Any = None, comment: Any = CommentCheck + ) -> None: self.start_mark = start_mark self.end_mark = end_mark # assert comment is not CommentCheck @@ -27,29 +26,24 @@ comment = None self.comment = comment - def __repr__(self): - # type: () -> Any + def __repr__(self) -> Any: if True: arguments = if hasattr(self, 'value'): # if you use repr(getattr(self, 'value')) then flake8 complains about # abuse of getattr with a constant. When you change to self.value # then mypy throws an error - arguments.append(repr(self.value)) # type: ignore + arguments.append(repr(self.value)) for key in 'anchor', 'tag', 'implicit', 'flow_style', 'style': v = getattr(self, key, None) if v is not None: - arguments.append(_F('{key!s}={v!r}', key=key, v=v)) + arguments.append(f'{key!s}={v!r}') if self.comment not in None, CommentCheck: - arguments.append('comment={!r}'.format(self.comment)) + arguments.append(f'comment={self.comment!r}') if SHOW_LINES: arguments.append( - '({}:{}/{}:{})'.format( - self.start_mark.line, - self.start_mark.column, - self.end_mark.line, - self.end_mark.column, - ) + f'({self.start_mark.line}:{self.start_mark.column}/' + f'{self.end_mark.line}:{self.end_mark.column})' ) arguments = ', '.join(arguments) # type: ignore else: @@ -58,48 +52,49 @@ for key in 'anchor', 'tag', 'implicit', 'value', 'flow_style', 'style' if hasattr(self, key) - arguments = ', '.join( - _F('{k!s}={attr!r}', k=key, attr=getattr(self, key)) for key in attributes - ) + arguments = ', '.join(f'{key!s}={getattr(self, key)!r}' for key in attributes) if self.comment not in None, CommentCheck: - arguments += ', comment={!r}'.format(self.comment) - return _F( - '{self_class_name!s}({arguments!s})', - self_class_name=self.__class__.__name__, - arguments=arguments, - ) + arguments += f', comment={self.comment!r}' + return f'{self.__class__.__name__!s}({arguments!s})' + + def compact_repr(self) -> str: + return f'{self.crepr}' class NodeEvent(Event): __slots__ = ('anchor',) - def __init__(self, anchor, start_mark=None, end_mark=None, comment=None): - # type: (Any, Any, Any, Any) -> None + def __init__( + self, anchor: Any, start_mark: Any = None, end_mark: Any = None, comment: Any = None + ) -> None: Event.__init__(self, start_mark, end_mark, comment) self.anchor = anchor class CollectionStartEvent(NodeEvent): - __slots__ = 'tag', 'implicit', 'flow_style', 'nr_items' + __slots__ = 'ctag', 'implicit', 'flow_style', 'nr_items' def __init__( self, - anchor, - tag, - implicit, - start_mark=None, - end_mark=None, - flow_style=None, - comment=None, - nr_items=None, - ): - # type: (Any, Any, Any, Any, Any, Any, Any, Optionalint) -> None + anchor: Any, + tag: Any, + implicit: Any, + start_mark: Any = None, + end_mark: Any = None, + flow_style: Any = None, + comment: Any = None, + nr_items: Optionalint = None, + ) -> None: NodeEvent.__init__(self, anchor, start_mark, end_mark, comment) - self.tag = tag + self.ctag = tag self.implicit = implicit 
self.flow_style = flow_style self.nr_items = nr_items + @property + def tag(self) -> Optionalstr: + return None if self.ctag is None else str(self.ctag) + class CollectionEndEvent(Event): __slots__ = () @@ -110,87 +105,160 @@ class StreamStartEvent(Event): __slots__ = ('encoding',) + crepr = '+STR' - def __init__(self, start_mark=None, end_mark=None, encoding=None, comment=None): - # type: (Any, Any, Any, Any) -> None + def __init__( + self, + start_mark: Any = None, + end_mark: Any = None, + encoding: Any = None, + comment: Any = None, + ) -> None: Event.__init__(self, start_mark, end_mark, comment) self.encoding = encoding class StreamEndEvent(Event): __slots__ = () + crepr = '-STR' class DocumentStartEvent(Event): __slots__ = 'explicit', 'version', 'tags' + crepr = '+DOC' def __init__( self, - start_mark=None, - end_mark=None, - explicit=None, - version=None, - tags=None, - comment=None, - ): - # type: (Any, Any, Any, Any, Any, Any) -> None + start_mark: Any = None, + end_mark: Any = None, + explicit: Any = None, + version: Any = None, + tags: Any = None, + comment: Any = None, + ) -> None: Event.__init__(self, start_mark, end_mark, comment) self.explicit = explicit self.version = version self.tags = tags + def compact_repr(self) -> str: + start = ' ---' if self.explicit else '' + return f'{self.crepr}{start}' + class DocumentEndEvent(Event): __slots__ = ('explicit',) + crepr = '-DOC' - def __init__(self, start_mark=None, end_mark=None, explicit=None, comment=None): - # type: (Any, Any, Any, Any) -> None + def __init__( + self, + start_mark: Any = None, + end_mark: Any = None, + explicit: Any = None, + comment: Any = None, + ) -> None: Event.__init__(self, start_mark, end_mark, comment) self.explicit = explicit + def compact_repr(self) -> str: + end = ' ...' 
if self.explicit else '' + return f'{self.crepr}{end}' + class AliasEvent(NodeEvent): __slots__ = 'style' + crepr = '=ALI' - def __init__(self, anchor, start_mark=None, end_mark=None, style=None, comment=None): - # type: (Any, Any, Any, Any, Any) -> None + def __init__( + self, + anchor: Any, + start_mark: Any = None, + end_mark: Any = None, + style: Any = None, + comment: Any = None, + ) -> None: NodeEvent.__init__(self, anchor, start_mark, end_mark, comment) self.style = style + def compact_repr(self) -> str: + return f'{self.crepr} *{self.anchor}' + class ScalarEvent(NodeEvent): - __slots__ = 'tag', 'implicit', 'value', 'style' + __slots__ = 'ctag', 'implicit', 'value', 'style' + crepr = '=VAL' def __init__( self, - anchor, - tag, - implicit, - value, - start_mark=None, - end_mark=None, - style=None, - comment=None, - ): - # type: (Any, Any, Any, Any, Any, Any, Any, Any) -> None + anchor: Any, + tag: Any, + implicit: Any, + value: Any, + start_mark: Any = None, + end_mark: Any = None, + style: Any = None, + comment: Any = None, + ) -> None: NodeEvent.__init__(self, anchor, start_mark, end_mark, comment) - self.tag = tag + self.ctag = tag self.implicit = implicit self.value = value self.style = style + @property + def tag(self) -> Optionalstr: + return None if self.ctag is None else str(self.ctag) + + @tag.setter + def tag(self, val: Any) -> None: + if isinstance(val, str): + val = Tag(suffix=val) + self.ctag = val + + def compact_repr(self) -> str: + style = ':' if self.style is None else self.style + anchor = f'&{self.anchor} ' if self.anchor else '' + tag = f'<{self.tag!s}> ' if self.tag else '' + value = self.value + for ch, rep in + ('\\', '\\\\'), + ('\t', '\\t'), + ('\n', '\\n'), + ('\a', ''), # remove from folded + ('\r', '\\r'), + ('\b', '\\b'), + : + value = value.replace(ch, rep) + return f'{self.crepr} {anchor}{tag}{style}{value}' + class SequenceStartEvent(CollectionStartEvent): __slots__ = () + crepr = '+SEQ' + + def compact_repr(self) -> str: + flow = ' ' if self.flow_style else '' + anchor = f' &{self.anchor}' if self.anchor else '' + tag = f' <{self.tag!s}>' if self.tag else '' + return f'{self.crepr}{flow}{anchor}{tag}' class SequenceEndEvent(CollectionEndEvent): __slots__ = () + crepr = '-SEQ' class MappingStartEvent(CollectionStartEvent): __slots__ = () + crepr = '+MAP' + + def compact_repr(self) -> str: + flow = ' {}' if self.flow_style else '' + anchor = f' &{self.anchor}' if self.anchor else '' + tag = f' <{self.tag!s}>' if self.tag else '' + return f'{self.crepr}{flow}{anchor}{tag}' class MappingEndEvent(CollectionEndEvent): __slots__ = () + crepr = '-MAP'
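events.py swaps the plain tag attribute for a ctag backed by the new Tag class (with a str-returning tag property and, on ScalarEvent, a tag setter) and adds compact_repr() helpers for event-stream debugging. A hedged sketch of inspecting the event stream via YAML.parse(); the fallback to repr() covers releases without compact_repr():

from ruamel.yaml import YAML

yaml = YAML()
document = 'a: 1\nb: [2, 3]\n'

for event in yaml.parse(document):
    # prefer the compact one-line form added in this release, if present
    line = event.compact_repr() if hasattr(event, 'compact_repr') else repr(event)
    print(line)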
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/loader.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/loader.py
Changed
@@ -12,16 +12,19 @@
)
from ruamel.yaml.resolver import VersionedResolver

-if False:  # MYPY
-    from typing import Any, Dict, List, Union, Optional  # NOQA
-    from ruamel.yaml.compat import StreamTextType, VersionType  # NOQA
+from typing import Any, Dict, List, Union, Optional  # NOQA
+from ruamel.yaml.compat import StreamTextType, VersionType  # NOQA

__all__ = ['BaseLoader', 'SafeLoader', 'Loader', 'RoundTripLoader']


class BaseLoader(Reader, Scanner, Parser, Composer, BaseConstructor, VersionedResolver):
-    def __init__(self, stream, version=None, preserve_quotes=None):
-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+    def __init__(
+        self,
+        stream: StreamTextType,
+        version: Optional[VersionType] = None,
+        preserve_quotes: Optional[bool] = None,
+    ) -> None:
        self.comment_handling = None
        Reader.__init__(self, stream, loader=self)
        Scanner.__init__(self, loader=self)
@@ -32,8 +35,12 @@


class SafeLoader(Reader, Scanner, Parser, Composer, SafeConstructor, VersionedResolver):
-    def __init__(self, stream, version=None, preserve_quotes=None):
-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+    def __init__(
+        self,
+        stream: StreamTextType,
+        version: Optional[VersionType] = None,
+        preserve_quotes: Optional[bool] = None,
+    ) -> None:
        self.comment_handling = None
        Reader.__init__(self, stream, loader=self)
        Scanner.__init__(self, loader=self)
@@ -44,8 +51,12 @@


class Loader(Reader, Scanner, Parser, Composer, Constructor, VersionedResolver):
-    def __init__(self, stream, version=None, preserve_quotes=None):
-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+    def __init__(
+        self,
+        stream: StreamTextType,
+        version: Optional[VersionType] = None,
+        preserve_quotes: Optional[bool] = None,
+    ) -> None:
        self.comment_handling = None
        Reader.__init__(self, stream, loader=self)
        Scanner.__init__(self, loader=self)
@@ -63,8 +74,12 @@
    RoundTripConstructor,
    VersionedResolver,
):
-    def __init__(self, stream, version=None, preserve_quotes=None):
-        # type: (StreamTextType, Optional[VersionType], Optional[bool]) -> None
+    def __init__(
+        self,
+        stream: StreamTextType,
+        version: Optional[VersionType] = None,
+        preserve_quotes: Optional[bool] = None,
+    ) -> None:
        # self.reader = Reader.__init__(self, stream)
        self.comment_handling = None  # issue 385
        Reader.__init__(self, stream, loader=self)
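The four loader flavours keep their mixin composition and the (stream, version, preserve_quotes) signature; only the annotation style changes. In application code they are usually selected through the typ argument of YAML(). A small sketch, assuming the conventional typ values:

from ruamel.yaml import YAML

source = 'answer: 42  # keep this comment\n'

rt = YAML()               # typ='rt': RoundTripLoader, keeps comments and quotes
safe = YAML(typ='safe')   # SafeLoader: plain Python containers, comments dropped

data_rt = rt.load(source)
data_safe = safe.load(source)

print(type(data_rt).__name__)    # CommentedMap
print(type(data_safe).__name__)  # dict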
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/main.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/main.py
Changed
@@ -30,13 +30,13 @@ Constructor, RoundTripConstructor, ) -from ruamel.yaml.loader import Loader as UnsafeLoader +from ruamel.yaml.loader import Loader as UnsafeLoader # NOQA from ruamel.yaml.comments import CommentedMap, CommentedSeq, C_PRE -if False: # MYPY - from typing import List, Set, Dict, Union, Any, Callable, Optional, Text # NOQA - from ruamel.yaml.compat import StreamType, StreamTextType, VersionType # NOQA - from pathlib import Path +from typing import List, Set, Dict, Union, Any, Callable, Optional, Text, Type # NOQA +from types import TracebackType +from ruamel.yaml.compat import StreamType, StreamTextType, VersionType # NOQA +from pathlib import Path # NOQA try: from _ruamel_yaml import CParser, CEmitter # type: ignore @@ -51,8 +51,14 @@ class YAML: - def __init__(self, *, typ=None, pure=False, output=None, plug_ins=None): # input=None, - # type: (Any, OptionalText, Any, Any, Any) -> None + def __init__( + self: Any, + *, + typ: OptionalUnionListText, Text = None, + pure: Any = False, + output: Any = None, + plug_ins: Any = None, + ) -> None: # input=None, """ typ: 'rt'/None -> RoundTripLoader/RoundTripDumper, (default) 'safe' -> SafeLoader/SafeDumper, @@ -68,20 +74,20 @@ # self._input = input self._output = output - self._context_manager = None # type: Any + self._context_manager: Any = None - self.plug_ins = # type: ListAny + self.plug_ins: ListAny = for pu in ( if plug_ins is None else plug_ins) + self.official_plug_ins(): file_name = pu.replace(os.sep, '.') self.plug_ins.append(import_module(file_name)) - self.Resolver = ruamel.yaml.resolver.VersionedResolver # type: Any + self.Resolver: Any = ruamel.yaml.resolver.VersionedResolver self.allow_unicode = True - self.Reader = None # type: Any - self.Representer = None # type: Any - self.Constructor = None # type: Any - self.Scanner = None # type: Any - self.Serializer = None # type: Any - self.default_flow_style = None # type: Any + self.Reader: Any = None + self.Representer: Any = None + self.Constructor: Any = None + self.Scanner: Any = None + self.Serializer: Any = None + self.default_flow_style: Any = None self.comment_handling = None typ_found = 1 setup_rt = False @@ -112,7 +118,7 @@ elif 'rtsc' in self.typ: self.default_flow_style = False # no optimized rt-dumper yet - self.Emitter = ruamel.yaml.emitter.Emitter + self.Emitter = ruamel.yaml.emitter.RoundTripEmitter self.Serializer = ruamel.yaml.serializer.Serializer self.Representer = ruamel.yaml.representer.RoundTripRepresenter self.Scanner = ruamel.yaml.scanner.RoundTripScannerSC @@ -127,7 +133,7 @@ if setup_rt: self.default_flow_style = False # no optimized rt-dumper yet - self.Emitter = ruamel.yaml.emitter.Emitter + self.Emitter = ruamel.yaml.emitter.RoundTripEmitter self.Serializer = ruamel.yaml.serializer.Serializer self.Representer = ruamel.yaml.representer.RoundTripRepresenter self.Scanner = ruamel.yaml.scanner.RoundTripScanner @@ -139,29 +145,29 @@ self.stream = None self.canonical = None self.old_indent = None - self.width = None + self.width: Unionint, None = None self.line_break = None - self.map_indent = None - self.sequence_indent = None - self.sequence_dash_offset = 0 + self.map_indent: Unionint, None = None + self.sequence_indent: Unionint, None = None + self.sequence_dash_offset: int = 0 self.compact_seq_seq = None self.compact_seq_map = None self.sort_base_mapping_type_on_output = None # default: sort self.top_level_colon_align = None self.prefix_colon = None - self.version = None - self.preserve_quotes = None + self._version: OptionalAny = None + 
self.preserve_quotes: Optionalbool = None self.allow_duplicate_keys = False # duplicate keys in map, set self.encoding = 'utf-8' - self.explicit_start = None - self.explicit_end = None + self.explicit_start: Unionbool, None = None + self.explicit_end: Unionbool, None = None self.tags = None self.default_style = None self.top_level_block_style_scalar_no_indent_error_1_1 = False # directives end indicator with single scalar document - self.scalar_after_indicator = None + self.scalar_after_indicator: Optionalbool = None # a, b: 1, c: {d: 2} vs. a, {b: 1}, {c: {d: 2}} self.brace_single_entry_mapping_in_flow_sequence = False for module in self.plug_ins: @@ -171,12 +177,11 @@ break if typ_found == 0: raise NotImplementedError( - 'typ "{}"not recognised (need to install plug-in?)'.format(self.typ) + f'typ "{self.typ}" not recognised (need to install plug-in?)' ) @property - def reader(self): - # type: () -> Any + def reader(self) -> Any: try: return self._reader # type: ignore except AttributeError: @@ -184,8 +189,7 @@ return self._reader @property - def scanner(self): - # type: () -> Any + def scanner(self) -> Any: try: return self._scanner # type: ignore except AttributeError: @@ -193,8 +197,7 @@ return self._scanner @property - def parser(self): - # type: () -> Any + def parser(self) -> Any: attr = '_' + sys._getframe().f_code.co_name if not hasattr(self, attr): if self.Parser is not CParser: @@ -215,16 +218,14 @@ return getattr(self, attr) @property - def composer(self): - # type: () -> Any + def composer(self) -> Any: attr = '_' + sys._getframe().f_code.co_name if not hasattr(self, attr): setattr(self, attr, self.Composer(loader=self)) return getattr(self, attr) @property - def constructor(self): - # type: () -> Any + def constructor(self) -> Any: attr = '_' + sys._getframe().f_code.co_name if not hasattr(self, attr): cnst = self.Constructor(preserve_quotes=self.preserve_quotes, loader=self) @@ -233,16 +234,14 @@ return getattr(self, attr) @property - def resolver(self): - # type: () -> Any + def resolver(self) -> Any: attr = '_' + sys._getframe().f_code.co_name if not hasattr(self, attr): setattr(self, attr, self.Resolver(version=self.version, loader=self)) return getattr(self, attr) @property - def emitter(self): - # type: () -> Any + def emitter(self) -> Any: attr = '_' + sys._getframe().f_code.co_name if not hasattr(self, attr): if self.Emitter is not CEmitter: @@ -277,8 +276,7 @@ return getattr(self, attr) @property - def serializer(self): - # type: () -> Any + def serializer(self) -> Any: attr = '_' + sys._getframe().f_code.co_name if not hasattr(self, attr): setattr( @@ -296,8 +294,7 @@ return getattr(self, attr) @property - def representer(self): - # type: () -> Any + def representer(self) -> Any: attr = '_' + sys._getframe().f_code.co_name if not hasattr(self, attr): repres = self.Representer( @@ -310,8 +307,7 @@ setattr(self, attr, repres) return getattr(self, attr) - def scan(self, stream): - # type: (StreamTextType) -> Any + def scan(self, stream: StreamTextType) -> Any: """ Scan a YAML stream and produce scanning tokens. """ @@ -334,8 +330,7 @@ except AttributeError: pass - def parse(self, stream): - # type: (StreamTextType) -> Any + def parse(self, stream: StreamTextType) -> Any: """ Parse a YAML stream and produce parsing events. 
""" @@ -358,8 +353,7 @@ except AttributeError: pass - def compose(self, stream): - # type: (UnionPath, StreamTextType) -> Any + def compose(self, stream: UnionPath, StreamTextType) -> Any: """ Parse the first YAML document in a stream and produce the corresponding representation tree. @@ -382,8 +376,7 @@ except AttributeError: pass - def compose_all(self, stream): - # type: (UnionPath, StreamTextType) -> Any + def compose_all(self, stream: UnionPath, StreamTextType) -> Any: """ Parse all YAML documents in a stream and produce corresponding representation trees. @@ -416,8 +409,7 @@ # raise TypeError("Need a stream argument when not loading from context manager") # return self.load_one(stream) - def load(self, stream): - # type: (UnionPath, StreamTextType) -> Any + def load(self, stream: UnionPath, StreamTextType) -> Any: """ at this point you either have the non-pure Parser (which has its own reader and scanner) or you have the pure Parser. @@ -443,8 +435,7 @@ except AttributeError: pass - def load_all(self, stream): # *, skip=None): - # type: (UnionPath, StreamTextType) -> Any + def load_all(self, stream: UnionPath, StreamTextType) -> Any: # *, skip=None): if not hasattr(stream, 'read') and hasattr(stream, 'open'): # pathlib.Path() instance with stream.open('r') as fp: @@ -470,8 +461,7 @@ except AttributeError: pass - def get_constructor_parser(self, stream): - # type: (StreamTextType) -> Any + def get_constructor_parser(self, stream: StreamTextType) -> Any: """ the old cyaml needs special setup, and therefore the stream """ @@ -502,8 +492,13 @@ # rslvr = ruamel.yaml.resolver.Resolver class XLoader(self.Parser, self.Constructor, rslvr): # type: ignore - def __init__(selfx, stream, version=self.version, preserve_quotes=None): - # type: (StreamTextType, OptionalVersionType, Optionalbool) -> None # NOQA + def __init__( + selfx, + stream: StreamTextType, + version: OptionalVersionType = self.version, + preserve_quotes: Optionalbool = None, + ) -> None: + # NOQA CParser.__init__(selfx, stream) selfx._parser = selfx._composer = selfx self.Constructor.__init__(selfx, loader=selfx) @@ -515,8 +510,7 @@ return loader, loader return self.constructor, self.parser - def emit(self, events, stream): - # type: (Any, Any) -> None + def emit(self, events: Any, stream: Any) -> None: """ Emit YAML parsing events into a stream. If stream is None, return the produced string instead. @@ -531,16 +525,14 @@ except AttributeError: raise - def serialize(self, node, stream): - # type: (Any, OptionalStreamType) -> Any + def serialize(self, node: Any, stream: OptionalStreamType) -> Any: """ Serialize a representation tree into a YAML stream. If stream is None, return the produced string instead. """ self.serialize_all(node, stream) - def serialize_all(self, nodes, stream): - # type: (Any, OptionalStreamType) -> Any + def serialize_all(self, nodes: Any, stream: OptionalStreamType) -> Any: """ Serialize a sequence of representation trees into a YAML stream. If stream is None, return the produced string instead. 
@@ -557,15 +549,16 @@ except AttributeError: raise - def dump(self, data, stream=None, *, transform=None): - # type: (Any, UnionPath, StreamType, Any, Any) -> Any + def dump( + self: Any, data: UnionPath, StreamType, stream: Any = None, *, transform: Any = None + ) -> Any: if self._context_manager: if not self._output: raise TypeError('Missing output stream while dumping from context manager') if transform is not None: + x = self.__class__.__name__ raise TypeError( - '{}.dump() in the context manager cannot have transform keyword ' - ''.format(self.__class__.__name__) + f'{x}.dump() in the context manager cannot have transform keyword' ) self._context_manager.dump(data) else: # old style @@ -573,8 +566,9 @@ raise TypeError('Need a stream argument when not dumping from context manager') return self.dump_all(data, stream, transform=transform) - def dump_all(self, documents, stream, *, transform=None): - # type: (Any, UnionPath, StreamType, Any) -> Any + def dump_all( + self, documents: Any, stream: UnionPath, StreamType, *, transform: Any = None + ) -> Any: if self._context_manager: raise NotImplementedError self._output = stream @@ -585,8 +579,7 @@ self._output = None self._context_manager = None - def Xdump_all(self, documents, stream, *, transform=None): - # type: (Any, Any, Any) -> Any + def Xdump_all(self, documents: Any, stream: Any, *, transform: Any = None) -> Any: """ Serialize a sequence of Python objects into a YAML stream. """ @@ -596,7 +589,7 @@ return self.dump_all(documents, fp, transform=transform) # The stream should have the methods `write` and possibly `flush`. if self.top_level_colon_align is True: - tlca = max(len(str(x)) for x in documents0) # type: Any + tlca: Any = max(len(str(x)) for x in documents0) else: tlca = self.top_level_colon_align if transform is not None: @@ -635,8 +628,7 @@ fstream.write(transform(val)) return None - def get_serializer_representer_emitter(self, stream, tlca): - # type: (StreamType, Any) -> Any + def get_serializer_representer_emitter(self, stream: StreamType, tlca: Any) -> Any: # we have only .Serializer to deal with (vs .Reader & .Scanner), much simpler if self.Emitter is not CEmitter: if self.Serializer is None: @@ -664,25 +656,25 @@ class XDumper(CEmitter, self.Representer, rslvr): # type: ignore def __init__( - selfx, - stream, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, - ): - # type: (StreamType, Any, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> None # NOQA + selfx: StreamType, + stream: Any, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, + ) -> None: + # NOQA CEmitter.__init__( selfx, stream, @@ -722,23 +714,20 @@ return dumper, dumper, dumper # basic types - def map(self, **kw): - # type: (Any) -> Any + def map(self, **kw: Any) -> Any: if 'rt' in self.typ: return CommentedMap(**kw) else: 
return dict(**kw) - def seq(self, *args): - # type: (Any) -> Any + def seq(self, *args: Any) -> Any: if 'rt' in self.typ: return CommentedSeq(*args) else: return list(*args) # helpers - def official_plug_ins(self): - # type: () -> Any + def official_plug_ins(self) -> Any: """search for list of subdirs that are plug-ins, if __file__ is not available, e.g. single file installers that are not properly emulating a file-system (issue 324) no plug-ins will be found. If any are packaged, you know which file that are @@ -753,10 +742,9 @@ res = x.replace(gpbd, "")1:-3 for x in glob.glob(bd + '/*/__plug_in__.py') return res - def register_class(self, cls): - # type:(Any) -> Any + def register_class(self, cls: Any) -> Any: """ - register a class for dumping loading + register a class for dumping/loading - if it has attribute yaml_tag use that to register, else use class name - if it has methods to_yaml/from_yaml use those to dump/load else dump attributes as mapping @@ -766,8 +754,7 @@ self.representer.add_representer(cls, cls.to_yaml) except AttributeError: - def t_y(representer, data): - # type: (Any, Any) -> Any + def t_y(representer: Any, data: Any) -> Any: return representer.represent_yaml_object( tag, data, cls, flow_style=representer.default_flow_style ) @@ -777,8 +764,7 @@ self.constructor.add_constructor(tag, cls.from_yaml) except AttributeError: - def f_y(constructor, node): - # type: (Any, Any) -> Any + def f_y(constructor: Any, node: Any) -> Any: return constructor.construct_yaml_object(node, cls) self.constructor.add_constructor(tag, f_y) @@ -786,13 +772,16 @@ # ### context manager - def __enter__(self): - # type: () -> Any + def __enter__(self) -> Any: self._context_manager = YAMLContextManager(self) return self - def __exit__(self, typ, value, traceback): - # type: (Any, Any, Any) -> None + def __exit__( + self, + typ: OptionalTypeBaseException, + value: OptionalBaseException, + traceback: OptionalTracebackType, + ) -> None: if typ: nprint('typ', typ) self._context_manager.teardown_output() @@ -800,8 +789,7 @@ self._context_manager = None # ### backwards compatibility - def _indent(self, mapping=None, sequence=None, offset=None): - # type: (Any, Any, Any) -> None + def _indent(self, mapping: Any = None, sequence: Any = None, offset: Any = None) -> None: if mapping is not None: self.map_indent = mapping if sequence is not None: @@ -810,34 +798,47 @@ self.sequence_dash_offset = offset @property - def indent(self): - # type: () -> Any + def version(self) -> OptionalAny: + return self._version + + @version.setter + def version(self, val: OptionalVersionType) -> None: + if val is None: + self._version = val + return + if isinstance(val, str): + sval = tuple(int(x) for x in val.split('.')) + else: + sval = tuple(int(x) for x in val) + assert len(sval) == 2, f'version can only have major.minor, got {val}' + assert sval0 == 1, f'version major part can only be 1, got {val}' + assert sval1 in 1, 2, f'version minor part can only be 2 or 1, got {val}' + self._version = sval + + @property + def indent(self) -> Any: return self._indent @indent.setter - def indent(self, val): - # type: (Any) -> None + def indent(self, val: Any) -> None: self.old_indent = val @property - def block_seq_indent(self): - # type: () -> Any + def block_seq_indent(self) -> Any: return self.sequence_dash_offset @block_seq_indent.setter - def block_seq_indent(self, val): - # type: (Any) -> None + def block_seq_indent(self, val: Any) -> None: self.sequence_dash_offset = val - def compact(self, seq_seq=None, seq_map=None): - # 
type: (Any, Any) -> None + def compact(self, seq_seq: Any = None, seq_map: Any = None) -> None: self.compact_seq_seq = seq_seq self.compact_seq_map = seq_map class YAMLContextManager: - def __init__(self, yaml, transform=None): - # type: (Any, Any) -> None # used to be: (Any, OptionalCallable) -> None + def __init__(self, yaml: Any, transform: Any = None) -> None: + # used to be: (Any, OptionalCallable) -> None self._yaml = yaml self._output_inited = False self._output_path = None @@ -868,8 +869,7 @@ else: self._output = BytesIO() - def teardown_output(self): - # type: () -> None + def teardown_output(self) -> None: if self._output_inited: self._yaml.serializer.close() else: @@ -897,18 +897,16 @@ if self._output_path is not None: self._output.close() - def init_output(self, first_data): - # type: (Any) -> None + def init_output(self, first_data: Any) -> None: if self._yaml.top_level_colon_align is True: - tlca = max(len(str(x)) for x in first_data) # type: Any + tlca: Any = max(len(str(x)) for x in first_data) else: tlca = self._yaml.top_level_colon_align self._yaml.get_serializer_representer_emitter(self._output, tlca) self._yaml.serializer.open() self._output_inited = True - def dump(self, data): - # type: (Any) -> None + def dump(self, data: Any) -> None: if not self._output_inited: self.init_output(data) try: @@ -942,8 +940,7 @@ # pass -def yaml_object(yml): - # type: (Any) -> Any +def yaml_object(yml: Any) -> Any: """ decorator for classes that needs to dump/load objects The tag for such objects is taken from the class attribute yaml_tag (or the class name in lowercase in case unavailable) @@ -951,15 +948,13 @@ loading, default routines (dumping a mapping of the attributes) used otherwise. """ - def yo_deco(cls): - # type: (Any) -> Any + def yo_deco(cls: Any) -> Any: tag = getattr(cls, 'yaml_tag', '!' 
+ cls.__name__) try: yml.representer.add_representer(cls, cls.to_yaml) except AttributeError: - def t_y(representer, data): - # type: (Any, Any) -> Any + def t_y(representer: Any, data: Any) -> Any: return representer.represent_yaml_object( tag, data, cls, flow_style=representer.default_flow_style ) @@ -969,8 +964,7 @@ yml.constructor.add_constructor(tag, cls.from_yaml) except AttributeError: - def f_y(constructor, node): - # type: (Any, Any) -> Any + def f_y(constructor: Any, node: Any) -> Any: return constructor.construct_yaml_object(node, cls) yml.constructor.add_constructor(tag, f_y) @@ -980,27 +974,27 @@ ######################################################################################## -def warn_deprecation(fun, method, arg=''): - # type: (Any, Any, str) -> None - from ruamel.yaml.compat import _F - +def warn_deprecation(fun: Any, method: Any, arg: str = '') -> None: warnings.warn( - _F( - '\n{fun} will be removed, use\n\n yaml=YAML({arg})\n yaml.{method}(...)\n\ninstead', # NOQA - fun=fun, - method=method, - arg=arg, - ), + f'\n{fun} will be removed, use\n\n yaml=YAML({arg})\n yaml.{method}(...)\n\ninstead', # NOQA PendingDeprecationWarning, # this will show when testing with pytest/tox stacklevel=3, ) +def error_deprecation(fun: Any, method: Any, arg: str = '') -> None: + warnings.warn( + f'\n{fun} has been removed, use\n\n yaml=YAML({arg})\n yaml.{method}(...)\n\ninstead', # NOQA + DeprecationWarning, + stacklevel=3, + ) + sys.exit(1) + + ######################################################################################## -def scan(stream, Loader=Loader): - # type: (StreamTextType, Any) -> Any +def scan(stream: StreamTextType, Loader: Any = Loader) -> Any: """ Scan a YAML stream and produce scanning tokens. """ @@ -1013,8 +1007,7 @@ loader._parser.dispose() -def parse(stream, Loader=Loader): - # type: (StreamTextType, Any) -> Any +def parse(stream: StreamTextType, Loader: Any = Loader) -> Any: """ Parse a YAML stream and produce parsing events. """ @@ -1027,8 +1020,7 @@ loader._parser.dispose() -def compose(stream, Loader=Loader): - # type: (StreamTextType, Any) -> Any +def compose(stream: StreamTextType, Loader: Any = Loader) -> Any: """ Parse the first YAML document in a stream and produce the corresponding representation tree. @@ -1041,8 +1033,7 @@ loader.dispose() -def compose_all(stream, Loader=Loader): - # type: (StreamTextType, Any) -> Any +def compose_all(stream: StreamTextType, Loader: Any = Loader) -> Any: """ Parse all YAML documents in a stream and produce corresponding representation trees. @@ -1056,8 +1047,9 @@ loader._parser.dispose() -def load(stream, Loader=None, version=None, preserve_quotes=None): - # type: (Any, Any, Any, Any) -> Any +def load( + stream: Any, Loader: Any = None, version: Any = None, preserve_quotes: Any = None +) -> Any: """ Parse the first YAML document in a stream and produce the corresponding Python object. @@ -1081,8 +1073,10 @@ pass -def load_all(stream, Loader=None, version=None, preserve_quotes=None): - # type: (Any, Any, Any, Any) -> Any # NOQA +def load_all( + stream: Any, Loader: Any = None, version: Any = None, preserve_quotes: Any = None +) -> Any: + # NOQA """ Parse all YAML documents in a stream and produce corresponding Python objects. 
@@ -1107,8 +1101,7 @@ pass -def safe_load(stream, version=None): - # type: (StreamTextType, OptionalVersionType) -> Any +def safe_load(stream: StreamTextType, version: OptionalVersionType = None) -> Any: """ Parse the first YAML document in a stream and produce the corresponding Python object. @@ -1118,8 +1111,7 @@ return load(stream, SafeLoader, version) -def safe_load_all(stream, version=None): - # type: (StreamTextType, OptionalVersionType) -> Any +def safe_load_all(stream: StreamTextType, version: OptionalVersionType = None) -> Any: """ Parse all YAML documents in a stream and produce corresponding Python objects. @@ -1129,8 +1121,11 @@ return load_all(stream, SafeLoader, version) -def round_trip_load(stream, version=None, preserve_quotes=None): - # type: (StreamTextType, OptionalVersionType, Optionalbool) -> Any +def round_trip_load( + stream: StreamTextType, + version: OptionalVersionType = None, + preserve_quotes: Optionalbool = None, +) -> Any: """ Parse the first YAML document in a stream and produce the corresponding Python object. @@ -1140,8 +1135,11 @@ return load(stream, RoundTripLoader, version, preserve_quotes=preserve_quotes) -def round_trip_load_all(stream, version=None, preserve_quotes=None): - # type: (StreamTextType, OptionalVersionType, Optionalbool) -> Any +def round_trip_load_all( + stream: StreamTextType, + version: OptionalVersionType = None, + preserve_quotes: Optionalbool = None, +) -> Any: """ Parse all YAML documents in a stream and produce corresponding Python objects. @@ -1152,16 +1150,16 @@ def emit( - events, - stream=None, - Dumper=Dumper, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, -): - # type: (Any, OptionalStreamType, Any, Optionalbool, Unionint, None, Optionalint, Optionalbool, Any) -> Any # NOQA + events: Any, + stream: OptionalStreamType = None, + Dumper: Any = Dumper, + canonical: Optionalbool = None, + indent: Unionint, None = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, +) -> Any: + # NOQA """ Emit YAML parsing events into a stream. If stream is None, return the produced string instead. @@ -1196,21 +1194,21 @@ def serialize_all( - nodes, - stream=None, - Dumper=Dumper, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=enc, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, -): - # type: (Any, OptionalStreamType, Any, Any, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, OptionalVersionType, Any) -> Any # NOQA + nodes: Any, + stream: OptionalStreamType = None, + Dumper: Any = Dumper, + canonical: Any = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = enc, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: OptionalVersionType = None, + tags: Any = None, +) -> Any: + # NOQA """ Serialize a sequence of representation trees into a YAML stream. If stream is None, return the produced string instead. @@ -1251,8 +1249,9 @@ return getvalue() -def serialize(node, stream=None, Dumper=Dumper, **kwds): - # type: (Any, OptionalStreamType, Any, Any) -> Any +def serialize( + node: Any, stream: OptionalStreamType = None, Dumper: Any = Dumper, **kwds: Any +) -> Any: """ Serialize a representation tree into a YAML stream. If stream is None, return the produced string instead. 
@@ -1262,26 +1261,26 @@ def dump_all( - documents, - stream=None, - Dumper=Dumper, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=enc, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, -): - # type: (Any, OptionalStreamType, Any, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, Any, Any, Any, Any, Any) -> Any # NOQA + documents: Any, + stream: OptionalStreamType = None, + Dumper: Any = Dumper, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = enc, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: Any = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, +) -> Any: + # NOQA """ Serialize a sequence of Python objects into a YAML stream. If stream is None, return the produced string instead. @@ -1335,24 +1334,24 @@ def dump( - data, - stream=None, - Dumper=Dumper, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=enc, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, -): - # type: (Any, OptionalStreamType, Any, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, OptionalVersionType, Any, Any) -> OptionalAny # NOQA + data: Any, + stream: OptionalStreamType = None, + Dumper: Any = Dumper, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = enc, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: OptionalVersionType = None, + tags: Any = None, + block_seq_indent: Any = None, +) -> Any: + # NOQA """ Serialize a Python object into a YAML stream. If stream is None, return the produced string instead. @@ -1381,19 +1380,7 @@ ) -def safe_dump_all(documents, stream=None, **kwds): - # type: (Any, OptionalStreamType, Any) -> OptionalAny - """ - Serialize a sequence of Python objects into a YAML stream. - Produce only basic YAML tags. - If stream is None, return the produced string instead. - """ - warn_deprecation('safe_dump_all', 'dump_all', arg="typ='safe', pure=True") - return dump_all(documents, stream, Dumper=SafeDumper, **kwds) - - -def safe_dump(data, stream=None, **kwds): - # type: (Any, OptionalStreamType, Any) -> OptionalAny +def safe_dump(data: Any, stream: OptionalStreamType = None, **kwds: Any) -> Any: """ Serialize a Python object into a YAML stream. Produce only basic YAML tags. 
@@ -1404,26 +1391,25 @@ def round_trip_dump( - data, - stream=None, - Dumper=RoundTripDumper, - default_style=None, - default_flow_style=None, - canonical=None, - indent=None, - width=None, - allow_unicode=None, - line_break=None, - encoding=enc, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - block_seq_indent=None, - top_level_colon_align=None, - prefix_colon=None, -): - # type: (Any, OptionalStreamType, Any, Any, Any, Optionalbool, Optionalint, Optionalint, Optionalbool, Any, Any, Optionalbool, Optionalbool, OptionalVersionType, Any, Any, Any, Any) -> OptionalAny # NOQA + data: Any, + stream: OptionalStreamType = None, + Dumper: Any = RoundTripDumper, + default_style: Any = None, + default_flow_style: Any = None, + canonical: Optionalbool = None, + indent: Optionalint = None, + width: Optionalint = None, + allow_unicode: Optionalbool = None, + line_break: Any = None, + encoding: Any = enc, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: OptionalVersionType = None, + tags: Any = None, + block_seq_indent: Any = None, + top_level_colon_align: Any = None, + prefix_colon: Any = None, +) -> Any: allow_unicode = True if allow_unicode is None else allow_unicode warn_deprecation('round_trip_dump', 'dump') return dump_all( @@ -1453,9 +1439,13 @@ def add_implicit_resolver( - tag, regexp, first=None, Loader=None, Dumper=None, resolver=Resolver -): - # type: (Any, Any, Any, Any, Any, Any) -> None + tag: Any, + regexp: Any, + first: Any = None, + Loader: Any = None, + Dumper: Any = None, + resolver: Any = Resolver, +) -> None: """ Add an implicit scalar detector. If an implicit scalar value matches the given regexp, @@ -1486,8 +1476,14 @@ # this code currently not tested -def add_path_resolver(tag, path, kind=None, Loader=None, Dumper=None, resolver=Resolver): - # type: (Any, Any, Any, Any, Any, Any) -> None +def add_path_resolver( + tag: Any, + path: Any, + kind: Any = None, + Loader: Any = None, + Dumper: Any = None, + resolver: Any = Resolver, +) -> None: """ Add a path based resolver for the given tag. A path is a list of keys that forms a path @@ -1517,8 +1513,9 @@ raise NotImplementedError -def add_constructor(tag, object_constructor, Loader=None, constructor=Constructor): - # type: (Any, Any, Any, Any) -> None +def add_constructor( + tag: Any, object_constructor: Any, Loader: Any = None, constructor: Any = Constructor +) -> None: """ Add an object constructor for the given tag. object_onstructor is a function that accepts a Loader instance @@ -1542,8 +1539,9 @@ raise NotImplementedError -def add_multi_constructor(tag_prefix, multi_constructor, Loader=None, constructor=Constructor): - # type: (Any, Any, Any, Any) -> None +def add_multi_constructor( + tag_prefix: Any, multi_constructor: Any, Loader: Any = None, constructor: Any = Constructor +) -> None: """ Add a multi-constructor for the given tag prefix. Multi-constructor is called for a node if its tag starts with tag_prefix. @@ -1568,8 +1566,9 @@ raise NotImplementedError -def add_representer(data_type, object_representer, Dumper=None, representer=Representer): - # type: (Any, Any, Any, Any) -> None +def add_representer( + data_type: Any, object_representer: Any, Dumper: Any = None, representer: Any = Representer +) -> None: """ Add a representer for the given type. 
object_representer is a function accepting a Dumper instance @@ -1595,8 +1594,9 @@ # this code currently not tested -def add_multi_representer(data_type, multi_representer, Dumper=None, representer=Representer): - # type: (Any, Any, Any, Any) -> None +def add_multi_representer( + data_type: Any, multi_representer: Any, Dumper: Any = None, representer: Any = Representer +) -> None: """ Add a representer for the given type. multi_representer is a function accepting a Dumper instance @@ -1626,8 +1626,7 @@ The metaclass for YAMLObject. """ - def __init__(cls, name, bases, kwds): - # type: (Any, Any, Any) -> None + def __init__(cls, name: Any, bases: Any, kwds: Any) -> None: super().__init__(name, bases, kwds) if 'yaml_tag' in kwds and kwds'yaml_tag' is not None: cls.yaml_constructor.add_constructor(cls.yaml_tag, cls.from_yaml) # type: ignore @@ -1645,20 +1644,18 @@ yaml_constructor = Constructor yaml_representer = Representer - yaml_tag = None # type: Any - yaml_flow_style = None # type: Any + yaml_tag: Any = None + yaml_flow_style: Any = None @classmethod - def from_yaml(cls, constructor, node): - # type: (Any, Any) -> Any + def from_yaml(cls, constructor: Any, node: Any) -> Any: """ Convert a representation node to a Python object. """ return constructor.construct_yaml_object(node, cls) @classmethod - def to_yaml(cls, representer, data): - # type: (Any, Any) -> Any + def to_yaml(cls, representer: Any, data: Any) -> Any: """ Convert a Python object to a representation node. """
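A note on the main.py rewrite above: besides moving from comment-style type hints to inline annotations and from _F()/format() to f-strings, `version` is now a property whose setter validates and normalizes its value. A minimal sketch of the expected behaviour, based only on the setter shown in the diff (the surrounding usage is illustrative and assumes assertions are enabled, i.e. Python is not run with -O):

from ruamel.yaml import YAML

yaml = YAML()             # round-trip loader/dumper by default
yaml.version = '1.1'      # a str is split on '.' and stored as the tuple (1, 1)
yaml.version = (1, 2)     # any two-item iterable of ints is accepted
print(yaml.version)       # -> (1, 2)

try:
    yaml.version = '2.0'  # the setter asserts major == 1 and minor in (1, 2)
except AssertionError as exc:
    print(exc)            # 'version major part can only be 1, got 2.0'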
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/nodes.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/nodes.py
Changed
@@ -2,26 +2,41 @@
 
 import sys
 
-from ruamel.yaml.compat import _F
-
-if False:  # MYPY
-    from typing import Dict, Any, Text  # NOQA
+from typing import Dict, Any, Text, Optional  # NOQA
+from ruamel.yaml.tag import Tag
 
 
 class Node:
-    __slots__ = 'tag', 'value', 'start_mark', 'end_mark', 'comment', 'anchor'
+    __slots__ = 'ctag', 'value', 'start_mark', 'end_mark', 'comment', 'anchor'
 
-    def __init__(self, tag, value, start_mark, end_mark, comment=None, anchor=None):
-        # type: (Any, Any, Any, Any, Any, Any) -> None
-        self.tag = tag
+    def __init__(
+        self,
+        tag: Any,
+        value: Any,
+        start_mark: Any,
+        end_mark: Any,
+        comment: Any = None,
+        anchor: Any = None,
+    ) -> None:
+        # you can still get a string from the serializer
+        self.ctag = tag if isinstance(tag, Tag) else Tag(suffix=tag)
         self.value = value
         self.start_mark = start_mark
         self.end_mark = end_mark
         self.comment = comment
         self.anchor = anchor
 
-    def __repr__(self):
-        # type: () -> Any
+    @property
+    def tag(self) -> Optional[str]:
+        return None if self.ctag is None else str(self.ctag)
+
+    @tag.setter
+    def tag(self, val: Any) -> None:
+        if isinstance(val, str):
+            val = Tag(suffix=val)
+        self.ctag = val
+
+    def __repr__(self) -> Any:
         value = self.value
         # if isinstance(value, list):
         #     if len(value) == 0:
@@ -36,29 +51,19 @@
         #     else:
         #         value = repr(value)
         value = repr(value)
-        return _F(
-            '{class_name!s}(tag={self_tag!r}, value={value!s})',
-            class_name=self.__class__.__name__,
-            self_tag=self.tag,
-            value=value,
-        )
+        return f'{self.__class__.__name__!s}(tag={self.tag!r}, value={value!s})'
 
-    def dump(self, indent=0):
-        # type: (int) -> None
+    def dump(self, indent: int = 0) -> None:
+        xx = self.__class__.__name__
+        xi = ' ' * indent
         if isinstance(self.value, str):
-            sys.stdout.write(
-                '{}{}(tag={!r}, value={!r})\n'.format(
-                    ' ' * indent, self.__class__.__name__, self.tag, self.value
-                )
-            )
+            sys.stdout.write(f'{xi}{xx}(tag={self.tag!r}, value={self.value!r})\n')
             if self.comment:
-                sys.stdout.write(' {}comment: {})\n'.format(' ' * indent, self.comment))
+                sys.stdout.write(f' {xi}comment: {self.comment})\n')
             return
-        sys.stdout.write(
-            '{}{}(tag={!r})\n'.format(' ' * indent, self.__class__.__name__, self.tag)
-        )
+        sys.stdout.write(f'{xi}{xx}(tag={self.tag!r})\n')
         if self.comment:
-            sys.stdout.write(' {}comment: {})\n'.format(' ' * indent, self.comment))
+            sys.stdout.write(f' {xi}comment: {self.comment})\n')
         for v in self.value:
             if isinstance(v, tuple):
                 for v1 in v:
@@ -66,7 +71,7 @@
             elif isinstance(v, Node):
                 v.dump(indent + 1)
             else:
-                sys.stdout.write('Node value type? {}\n'.format(type(v)))
+                sys.stdout.write(f'Node value type? {type(v)}\n')
 
 
 class ScalarNode(Node):
@@ -83,9 +88,15 @@
     id = 'scalar'
 
     def __init__(
-        self, tag, value, start_mark=None, end_mark=None, style=None, comment=None, anchor=None
-    ):
-        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
+        self,
+        tag: Any,
+        value: Any,
+        start_mark: Any = None,
+        end_mark: Any = None,
+        style: Any = None,
+        comment: Any = None,
+        anchor: Any = None,
+    ) -> None:
         Node.__init__(self, tag, value, start_mark, end_mark, comment=comment, anchor=anchor)
         self.style = style
 
@@ -95,15 +106,14 @@
 
     def __init__(
         self,
-        tag,
-        value,
-        start_mark=None,
-        end_mark=None,
-        flow_style=None,
-        comment=None,
-        anchor=None,
-    ):
-        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
+        tag: Any,
+        value: Any,
+        start_mark: Any = None,
+        end_mark: Any = None,
+        flow_style: Any = None,
+        comment: Any = None,
+        anchor: Any = None,
+    ) -> None:
         Node.__init__(self, tag, value, start_mark, end_mark, comment=comment)
         self.flow_style = flow_style
         self.anchor = anchor
@@ -120,15 +130,14 @@
 
     def __init__(
         self,
-        tag,
-        value,
-        start_mark=None,
-        end_mark=None,
-        flow_style=None,
-        comment=None,
-        anchor=None,
-    ):
-        # type: (Any, Any, Any, Any, Any, Any, Any) -> None
+        tag: Any,
+        value: Any,
+        start_mark: Any = None,
+        end_mark: Any = None,
+        flow_style: Any = None,
+        comment: Any = None,
+        anchor: Any = None,
+    ) -> None:
         CollectionNode.__init__(
             self, tag, value, start_mark, end_mark, flow_style, comment, anchor
         )
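The nodes.py change above stores the tag in a new `ctag` slot as a `Tag` instance, while `tag` stays available as a string-valued property. A short illustrative sketch based on the property and setter in the diff (the tag values are arbitrary examples):

from ruamel.yaml.nodes import ScalarNode
from ruamel.yaml.tag import Tag

node = ScalarNode('tag:yaml.org,2002:str', 'hello')
print(type(node.ctag).__name__)  # 'Tag': a plain str tag is wrapped via Tag(suffix=...)
print(node.tag)                  # the property converts the Tag back to a str

node.tag = '!custom'             # the setter accepts a str ...
node.tag = Tag(suffix='!other')  # ... or a Tag instance directly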
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/parser.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/parser.py
Changed
@@ -80,16 +80,15 @@ from ruamel.yaml.scanner import Scanner, RoundTripScanner, ScannerError # NOQA from ruamel.yaml.scanner import BlankLineComment from ruamel.yaml.comments import C_PRE, C_POST, C_SPLIT_ON_FIRST_BLANK -from ruamel.yaml.compat import _F, nprint, nprintf # NOQA +from ruamel.yaml.compat import nprint, nprintf # NOQA +from ruamel.yaml.tag import Tag -if False: # MYPY - from typing import Any, Dict, Optional, List, Optional # NOQA +from typing import Any, Dict, Optional, List, Optional # NOQA __all__ = 'Parser', 'RoundTripParser', 'ParserError' -def xprintf(*args, **kw): - # type: (Any, Any) -> Any +def xprintf(*args: Any, **kw: Any) -> Any: return nprintf(*args, **kw) pass @@ -104,42 +103,36 @@ DEFAULT_TAGS = {'!': '!', '!!': 'tag:yaml.org,2002:'} - def __init__(self, loader): - # type: (Any) -> None + def __init__(self, loader: Any) -> None: self.loader = loader if self.loader is not None and getattr(self.loader, '_parser', None) is None: self.loader._parser = self self.reset_parser() - def reset_parser(self): - # type: () -> None + def reset_parser(self) -> None: # Reset the state attributes (to clear self-references) self.current_event = self.last_event = None - self.tag_handles = {} # type: DictAny, Any - self.states = # type: ListAny - self.marks = # type: ListAny - self.state = self.parse_stream_start # type: Any + self.tag_handles: DictAny, Any = {} + self.states: ListAny = + self.marks: ListAny = + self.state: Any = self.parse_stream_start - def dispose(self): - # type: () -> None + def dispose(self) -> None: self.reset_parser() @property - def scanner(self): - # type: () -> Any + def scanner(self) -> Any: if hasattr(self.loader, 'typ'): return self.loader.scanner return self.loader._scanner @property - def resolver(self): - # type: () -> Any + def resolver(self) -> Any: if hasattr(self.loader, 'typ'): return self.loader.resolver return self.loader._resolver - def check_event(self, *choices): - # type: (Any) -> bool + def check_event(self, *choices: Any) -> bool: # Check the type of the next event. if self.current_event is None: if self.state: @@ -152,16 +145,14 @@ return True return False - def peek_event(self): - # type: () -> Any + def peek_event(self) -> Any: # Get the next event. if self.current_event is None: if self.state: self.current_event = self.state() return self.current_event - def get_event(self): - # type: () -> Any + def get_event(self) -> Any: # Get the next event and proceed further. if self.current_event is None: if self.state: @@ -178,8 +169,7 @@ # implicit_document ::= block_node DOCUMENT-END* # explicit_document ::= DIRECTIVE* DOCUMENT-START block_node? DOCUMENT-END* - def parse_stream_start(self): - # type: () -> Any + def parse_stream_start(self) -> Any: # Parse the stream start. token = self.scanner.get_token() self.move_token_comment(token) @@ -190,10 +180,10 @@ return event - def parse_implicit_document_start(self): - # type: () -> Any + def parse_implicit_document_start(self) -> Any: # Parse an implicit document. if not self.scanner.check_token(DirectiveToken, DocumentStartToken, StreamEndToken): + # don't need copy, as an implicit tag doesn't add tag_handles self.tag_handles = self.DEFAULT_TAGS token = self.scanner.peek_token() start_mark = end_mark = token.start_mark @@ -208,8 +198,7 @@ else: return self.parse_document_start() - def parse_document_start(self): - # type: () -> Any + def parse_document_start(self) -> Any: # Parse any extra document end indicators. 
while self.scanner.check_token(DocumentEndToken): self.scanner.get_token() @@ -220,10 +209,8 @@ raise ParserError( None, None, - _F( - "expected '<document start>', but found {pt!r}", - pt=self.scanner.peek_token().id, - ), + "expected '<document start>', " + f'but found {self.scanner.peek_token().id,!r}', self.scanner.peek_token().start_mark, ) token = self.scanner.get_token() @@ -232,10 +219,14 @@ # if self.loader is not None and \ # end_mark.line != self.scanner.peek_token().start_mark.line: # self.loader.scalar_after_indicator = False - event = DocumentStartEvent( - start_mark, end_mark, explicit=True, version=version, tags=tags, - comment=token.comment - ) # type: Any + event: Any = DocumentStartEvent( + start_mark, + end_mark, + explicit=True, + version=version, + tags=tags, + comment=token.comment, + ) self.states.append(self.parse_document_end) self.state = self.parse_document_content else: @@ -247,14 +238,25 @@ self.state = None return event - def parse_document_end(self): - # type: () -> Any + def parse_document_end(self) -> Any: # Parse the document end. token = self.scanner.peek_token() start_mark = end_mark = token.start_mark explicit = False if self.scanner.check_token(DocumentEndToken): token = self.scanner.get_token() + # if token.end_mark.line != self.peek_event().start_mark.line: + pt = self.scanner.peek_token() + if not isinstance(pt, StreamEndToken) and ( + token.end_mark.line == pt.start_mark.line + ): + raise ParserError( + None, + None, + 'found non-comment content after document end marker, ' + f'{self.scanner.peek_token().id,!r}', + self.scanner.peek_token().start_mark, + ) end_mark = token.end_mark explicit = True event = DocumentEndEvent(start_mark, end_mark, explicit=explicit) @@ -263,12 +265,15 @@ if self.resolver.processing_version == (1, 1): self.state = self.parse_document_start else: - self.state = self.parse_implicit_document_start + if explicit: + # found a document end marker, can be followed by implicit document + self.state = self.parse_implicit_document_start + else: + self.state = self.parse_document_start return event - def parse_document_content(self): - # type: () -> Any + def parse_document_content(self) -> Any: if self.scanner.check_token( DirectiveToken, DocumentStartToken, DocumentEndToken, StreamEndToken ): @@ -278,8 +283,7 @@ else: return self.parse_block_node() - def process_directives(self): - # type: () -> Any + def process_directives(self) -> Any: yaml_version = None self.tag_handles = {} while self.scanner.check_token(DirectiveToken): @@ -302,14 +306,11 @@ handle, prefix = token.value if handle in self.tag_handles: raise ParserError( - None, - None, - _F('duplicate tag handle {handle!r}', handle=handle), - token.start_mark, + None, None, f'duplicate tag handle {handle!r}', token.start_mark, ) self.tag_handleshandle = prefix if bool(self.tag_handles): - value = yaml_version, self.tag_handles.copy() # type: Any + value: Any = (yaml_version, self.tag_handles.copy()) else: value = yaml_version, None if self.loader is not None and hasattr(self.loader, 'tags'): @@ -339,27 +340,27 @@ # block_collection ::= block_sequence | block_mapping # flow_collection ::= flow_sequence | flow_mapping - def parse_block_node(self): - # type: () -> Any + def parse_block_node(self) -> Any: return self.parse_node(block=True) - def parse_flow_node(self): - # type: () -> Any + def parse_flow_node(self) -> Any: return self.parse_node() - def parse_block_node_or_indentless_sequence(self): - # type: () -> Any + def parse_block_node_or_indentless_sequence(self) -> 
Any: return self.parse_node(block=True, indentless_sequence=True) - def transform_tag(self, handle, suffix): - # type: (Any, Any) -> Any - return self.tag_handleshandle + suffix + # def transform_tag(self, handle: Any, suffix: Any) -> Any: + # return self.tag_handleshandle + suffix - def parse_node(self, block=False, indentless_sequence=False): - # type: (bool, bool) -> Any + def select_tag_transform(self, tag: Tag) -> None: + if tag is None: + return + tag.select_transform(False) + + def parse_node(self, block: bool = False, indentless_sequence: bool = False) -> Any: if self.scanner.check_token(AliasToken): token = self.scanner.get_token() - event = AliasEvent(token.value, token.start_mark, token.end_mark) # type: Any + event: Any = AliasEvent(token.value, token.start_mark, token.end_mark) self.state = self.states.pop() return event @@ -376,39 +377,34 @@ token = self.scanner.get_token() tag_mark = token.start_mark end_mark = token.end_mark - tag = token.value + # tag = token.value + tag = Tag( + handle=token.value0, suffix=token.value1, handles=self.tag_handles, + ) elif self.scanner.check_token(TagToken): token = self.scanner.get_token() start_mark = tag_mark = token.start_mark end_mark = token.end_mark - tag = token.value + # tag = token.value + tag = Tag(handle=token.value0, suffix=token.value1, handles=self.tag_handles) if self.scanner.check_token(AnchorToken): token = self.scanner.get_token() start_mark = tag_mark = token.start_mark end_mark = token.end_mark anchor = token.value if tag is not None: - handle, suffix = tag - if handle is not None: - if handle not in self.tag_handles: - raise ParserError( - 'while parsing a node', - start_mark, - _F('found undefined tag handle {handle!r}', handle=handle), - tag_mark, - ) - tag = self.transform_tag(handle, suffix) - else: - tag = suffix - # if tag == '!': - # raise ParserError("while parsing a node", start_mark, - # "found non-specific tag '!'", tag_mark, - # "Please check 'http://pyyaml.org/wiki/YAMLNonSpecificTag' - # and share your opinion.") + self.select_tag_transform(tag) + if tag.check_handle(): + raise ParserError( + 'while parsing a node', + start_mark, + f'found undefined tag handle {tag.handle!r}', + tag_mark, + ) if start_mark is None: start_mark = end_mark = self.scanner.peek_token().start_mark event = None - implicit = tag is None or tag == '!' + implicit = tag is None or str(tag) == '!' 
if indentless_sequence and self.scanner.check_token(BlockEntryToken): comment = None pt = self.scanner.peek_token() @@ -421,7 +417,7 @@ comment = pt.comment end_mark = self.scanner.peek_token().end_mark event = SequenceStartEvent( - anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment + anchor, tag, implicit, start_mark, end_mark, flow_style=False, comment=comment, ) self.state = self.parse_indentless_sequence_entry return event @@ -430,17 +426,17 @@ token = self.scanner.get_token() # self.scanner.peek_token_same_line_comment(token) end_mark = token.end_mark - if (token.plain and tag is None) or tag == '!': - implicit = (True, False) + if (token.plain and tag is None) or str(tag) == '!': + dimplicit = (True, False) elif tag is None: - implicit = (False, True) + dimplicit = (False, True) else: - implicit = (False, False) + dimplicit = (False, False) # nprint('se', token.value, token.comment) event = ScalarEvent( anchor, tag, - implicit, + dimplicit, token.value, start_mark, end_mark, @@ -507,9 +503,9 @@ node = 'flow' token = self.scanner.peek_token() raise ParserError( - _F('while parsing a {node!s} node', node=node), + f'while parsing a {node!s} node', start_mark, - _F('expected the node content, but found {token_id!r}', token_id=token.id), + f'expected the node content, but found {token.id!r}', token.start_mark, ) return event @@ -517,16 +513,14 @@ # block_sequence ::= BLOCK-SEQUENCE-START (BLOCK-ENTRY block_node?)* # BLOCK-END - def parse_block_sequence_first_entry(self): - # type: () -> Any + def parse_block_sequence_first_entry(self) -> Any: token = self.scanner.get_token() # move any comment from start token # self.move_token_comment(token) self.marks.append(token.start_mark) return self.parse_block_sequence_entry() - def parse_block_sequence_entry(self): - # type: () -> Any + def parse_block_sequence_entry(self) -> Any: if self.scanner.check_token(BlockEntryToken): token = self.scanner.get_token() self.move_token_comment(token) @@ -541,7 +535,7 @@ raise ParserError( 'while parsing a block collection', self.marks-1, - _F('expected <block end>, but found {token_id!r}', token_id=token.id), + f'expected <block end>, but found {token.id!r}', token.start_mark, ) token = self.scanner.get_token() # BlockEndToken @@ -557,8 +551,7 @@ # - entry # - nested - def parse_indentless_sequence_entry(self): - # type: () -> Any + def parse_indentless_sequence_entry(self) -> Any: if self.scanner.check_token(BlockEntryToken): token = self.scanner.get_token() self.move_token_comment(token) @@ -587,14 +580,12 @@ # (VALUE block_node_or_indentless_sequence?)?)* # BLOCK-END - def parse_block_mapping_first_key(self): - # type: () -> Any + def parse_block_mapping_first_key(self) -> Any: token = self.scanner.get_token() self.marks.append(token.start_mark) return self.parse_block_mapping_key() - def parse_block_mapping_key(self): - # type: () -> Any + def parse_block_mapping_key(self) -> Any: if self.scanner.check_token(KeyToken): token = self.scanner.get_token() self.move_token_comment(token) @@ -612,7 +603,7 @@ raise ParserError( 'while parsing a block mapping', self.marks-1, - _F('expected <block end>, but found {token_id!r}', token_id=token.id), + f'expected <block end>, but found {token.id!r}', token.start_mark, ) token = self.scanner.get_token() @@ -622,8 +613,7 @@ self.marks.pop() return event - def parse_block_mapping_value(self): - # type: () -> Any + def parse_block_mapping_value(self) -> Any: if self.scanner.check_token(ValueToken): token = self.scanner.get_token() # value 
token might have post comment move it to e.g. block @@ -662,14 +652,12 @@ # For `flow_sequence_entry`, the part `KEY flow_node? (VALUE flow_node?)?` # generate an inline mapping (set syntax). - def parse_flow_sequence_first_entry(self): - # type: () -> Any + def parse_flow_sequence_first_entry(self) -> Any: token = self.scanner.get_token() self.marks.append(token.start_mark) return self.parse_flow_sequence_entry(first=True) - def parse_flow_sequence_entry(self, first=False): - # type: (bool) -> Any + def parse_flow_sequence_entry(self, first: bool = False) -> Any: if not self.scanner.check_token(FlowSequenceEndToken): if not first: if self.scanner.check_token(FlowEntryToken): @@ -679,15 +667,15 @@ raise ParserError( 'while parsing a flow sequence', self.marks-1, - _F("expected ',' or '', but got {token_id!r}", token_id=token.id), + f"expected ',' or '', but got {token.id!r}", token.start_mark, ) if self.scanner.check_token(KeyToken): token = self.scanner.peek_token() - event = MappingStartEvent( + event: Any = MappingStartEvent( None, None, True, token.start_mark, token.end_mark, flow_style=True - ) # type: Any + ) self.state = self.parse_flow_sequence_entry_mapping_key return event elif not self.scanner.check_token(FlowSequenceEndToken): @@ -699,8 +687,7 @@ self.marks.pop() return event - def parse_flow_sequence_entry_mapping_key(self): - # type: () -> Any + def parse_flow_sequence_entry_mapping_key(self) -> Any: token = self.scanner.get_token() if not self.scanner.check_token(ValueToken, FlowEntryToken, FlowSequenceEndToken): self.states.append(self.parse_flow_sequence_entry_mapping_value) @@ -709,8 +696,7 @@ self.state = self.parse_flow_sequence_entry_mapping_value return self.process_empty_scalar(token.end_mark) - def parse_flow_sequence_entry_mapping_value(self): - # type: () -> Any + def parse_flow_sequence_entry_mapping_value(self) -> Any: if self.scanner.check_token(ValueToken): token = self.scanner.get_token() if not self.scanner.check_token(FlowEntryToken, FlowSequenceEndToken): @@ -724,8 +710,7 @@ token = self.scanner.peek_token() return self.process_empty_scalar(token.start_mark) - def parse_flow_sequence_entry_mapping_end(self): - # type: () -> Any + def parse_flow_sequence_entry_mapping_end(self) -> Any: self.state = self.parse_flow_sequence_entry token = self.scanner.peek_token() return MappingEndEvent(token.start_mark, token.start_mark) @@ -736,14 +721,12 @@ # FLOW-MAPPING-END # flow_mapping_entry ::= flow_node | KEY flow_node? (VALUE flow_node?)? 
- def parse_flow_mapping_first_key(self): - # type: () -> Any + def parse_flow_mapping_first_key(self) -> Any: token = self.scanner.get_token() self.marks.append(token.start_mark) return self.parse_flow_mapping_key(first=True) - def parse_flow_mapping_key(self, first=False): - # type: (Any) -> Any + def parse_flow_mapping_key(self, first: Any = False) -> Any: if not self.scanner.check_token(FlowMappingEndToken): if not first: if self.scanner.check_token(FlowEntryToken): @@ -753,7 +736,7 @@ raise ParserError( 'while parsing a flow mapping', self.marks-1, - _F("expected ',' or '}}', but got {token_id!r}", token_id=token.id), + f"expected ',' or '}}', but got {token.id!r}", token.start_mark, ) if self.scanner.check_token(KeyToken): @@ -780,8 +763,7 @@ self.marks.pop() return event - def parse_flow_mapping_value(self): - # type: () -> Any + def parse_flow_mapping_value(self) -> Any: if self.scanner.check_token(ValueToken): token = self.scanner.get_token() if not self.scanner.check_token(FlowEntryToken, FlowMappingEndToken): @@ -795,45 +777,30 @@ token = self.scanner.peek_token() return self.process_empty_scalar(token.start_mark) - def parse_flow_mapping_empty_value(self): - # type: () -> Any + def parse_flow_mapping_empty_value(self) -> Any: self.state = self.parse_flow_mapping_key return self.process_empty_scalar(self.scanner.peek_token().start_mark) - def process_empty_scalar(self, mark, comment=None): - # type: (Any, Any) -> Any + def process_empty_scalar(self, mark: Any, comment: Any = None) -> Any: return ScalarEvent(None, None, (True, False), "", mark, mark, comment=comment) - def move_token_comment(self, token, nt=None, empty=False): - # type: (Any, OptionalAny, Optionalbool) -> Any + def move_token_comment( + self, token: Any, nt: OptionalAny = None, empty: Optionalbool = False + ) -> Any: pass class RoundTripParser(Parser): """roundtrip is a safe loader, that wants to see the unmangled tag""" - def transform_tag(self, handle, suffix): - # type: (Any, Any) -> Any - # return self.tag_handleshandle+suffix - if handle == '!!' and suffix in ( - 'null', - 'bool', - 'int', - 'float', - 'binary', - 'timestamp', - 'omap', - 'pairs', - 'set', - 'str', - 'seq', - 'map', - ): - return Parser.transform_tag(self, handle, suffix) - return handle + suffix + def select_tag_transform(self, tag: Tag) -> None: + if tag is None: + return + tag.select_transform(True) - def move_token_comment(self, token, nt=None, empty=False): - # type: (Any, OptionalAny, Optionalbool) -> Any + def move_token_comment( + self, token: Any, nt: OptionalAny = None, empty: Optionalbool = False + ) -> Any: token.move_old_comment(self.scanner.peek_token() if nt is None else nt, empty=empty) @@ -843,12 +810,12 @@ # some of the differences are based on the superclass testing # if self.loader.comment_handling is not None - def move_token_comment(self, token, nt=None, empty=False): - # type: (Any, Any, Any, Optionalbool) -> None + def move_token_comment( + self: Any, token: Any, nt: Any = None, empty: Optionalbool = False + ) -> None: token.move_new_comment(self.scanner.peek_token() if nt is None else nt, empty=empty) - def distribute_comment(self, comment, line): - # type: (Any, Any) -> Any + def distribute_comment(self, comment: Any, line: Any) -> Any: # ToDo, look at indentation of the comment to determine attachment if comment is None: return None
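Beyond the annotation and f-string migration, the parser.py diff above makes `parse_document_end()` reject non-comment content that sits on the same line as a `...` document end marker, and routes tag handling through the new `Tag.select_transform()` hook. A rough sketch of how the first change is expected to surface when loading (the sample documents are illustrative; the error text is taken from the diff):

from ruamel.yaml import YAML
from ruamel.yaml.parser import ParserError

yaml = YAML()
yaml.load("a: 1\n...\n")           # end marker on its own line: loads fine

try:
    yaml.load("a: 1\n... b: 2\n")  # content after the end marker on the same line
except ParserError as exc:
    print(exc)  # 'found non-comment content after document end marker, ...'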
View file
_service:tar_scm:ruamel.yaml-0.17.32.tar.gz/pyproject.toml
Added
@@ -0,0 +1,4 @@
+[build-system]
+requires = ["setuptools", "wheel"]
+# test
+build-backend = "setuptools.build_meta"
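The added pyproject.toml declares a PEP 517 build with the setuptools backend. As a quick sanity check, the declaration can be read back with the standard-library tomllib (Python 3.11+; older interpreters would need the third-party tomli package); a sketch only, assuming it is run from the unpacked source root:

import tomllib  # Python 3.11+

with open('pyproject.toml', 'rb') as f:  # tomllib requires a binary file object
    build = tomllib.load(f)['build-system']

print(build['build-backend'])  # 'setuptools.build_meta'
print(build['requires'])       # ['setuptools', 'wheel']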
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/reader.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/reader.py
Changed
@@ -22,46 +22,35 @@ import codecs from ruamel.yaml.error import YAMLError, FileMark, StringMark, YAMLStreamError -from ruamel.yaml.compat import _F # NOQA from ruamel.yaml.util import RegExp -if False: # MYPY - from typing import Any, Dict, Optional, List, Union, Text, Tuple, Optional # NOQA -# from ruamel.yaml.compat import StreamTextType # NOQA +from typing import Any, Dict, Optional, List, Union, Text, Tuple, Optional # NOQA +# from ruamel.yaml.compat import StreamTextType # NOQA __all__ = 'Reader', 'ReaderError' class ReaderError(YAMLError): - def __init__(self, name, position, character, encoding, reason): - # type: (Any, Any, Any, Any, Any) -> None + def __init__( + self, name: Any, position: Any, character: Any, encoding: Any, reason: Any + ) -> None: self.name = name self.character = character self.position = position self.encoding = encoding self.reason = reason - def __str__(self): - # type: () -> Any + def __str__(self) -> Any: if isinstance(self.character, bytes): - return _F( - "'{self_encoding!s}' codec can't decode byte #x{ord_self_character:02x}: " - '{self_reason!s}\n' - ' in "{self_name!s}", position {self_position:d}', - self_encoding=self.encoding, - ord_self_character=ord(self.character), - self_reason=self.reason, - self_name=self.name, - self_position=self.position, + return ( + f"'{self.encoding!s}' codec can't decode byte #x{ord(self.character):02x}: " + f'{self.reason!s}\n' + f' in "{self.name!s}", position {self.position:d}' ) else: - return _F( - 'unacceptable character #x{self_character:04x}: {self_reason!s}\n' - ' in "{self_name!s}", position {self_position:d}', - self_character=self.character, - self_reason=self.reason, - self_name=self.name, - self_position=self.position, + return ( + f'unacceptable character #x{self.character:04x}: {self.reason!s}\n' + f' in "{self.name!s}", position {self.position:d}' ) @@ -79,39 +68,35 @@ # Yeah, it's ugly and slow. 
- def __init__(self, stream, loader=None): - # type: (Any, Any) -> None + def __init__(self, stream: Any, loader: Any = None) -> None: self.loader = loader if self.loader is not None and getattr(self.loader, '_reader', None) is None: self.loader._reader = self self.reset_reader() - self.stream = stream # type: Any # as .read is called + self.stream: Any = stream # as .read is called - def reset_reader(self): - # type: () -> None - self.name = None # type: Any + def reset_reader(self) -> None: + self.name: Any = None self.stream_pointer = 0 self.eof = True self.buffer = "" self.pointer = 0 - self.raw_buffer = None # type: Any + self.raw_buffer: Any = None self.raw_decode = None - self.encoding = None # type: OptionalText + self.encoding: OptionalText = None self.index = 0 self.line = 0 self.column = 0 @property - def stream(self): - # type: () -> Any + def stream(self) -> Any: try: return self._stream except AttributeError: - raise YAMLStreamError('input stream needs to specified') + raise YAMLStreamError('input stream needs to be specified') @stream.setter - def stream(self, val): - # type: (Any) -> None + def stream(self, val: Any) -> None: if val is None: return self._stream = None @@ -132,22 +117,19 @@ self.raw_buffer = None self.determine_encoding() - def peek(self, index=0): - # type: (int) -> Text + def peek(self, index: int = 0) -> Text: try: return self.bufferself.pointer + index except IndexError: self.update(index + 1) return self.bufferself.pointer + index - def prefix(self, length=1): - # type: (int) -> Any + def prefix(self, length: int = 1) -> Any: if self.pointer + length >= len(self.buffer): self.update(length) return self.bufferself.pointer : self.pointer + length - def forward_1_1(self, length=1): - # type: (int) -> None + def forward_1_1(self, length: int = 1) -> None: if self.pointer + length + 1 >= len(self.buffer): self.update(length + 1) while length != 0: @@ -163,8 +145,7 @@ self.column += 1 length -= 1 - def forward(self, length=1): - # type: (int) -> None + def forward(self, length: int = 1) -> None: if self.pointer + length + 1 >= len(self.buffer): self.update(length + 1) while length != 0: @@ -178,8 +159,7 @@ self.column += 1 length -= 1 - def get_mark(self): - # type: () -> Any + def get_mark(self) -> Any: if self.stream is None: return StringMark( self.name, self.index, self.line, self.column, self.buffer, self.pointer @@ -187,8 +167,7 @@ else: return FileMark(self.name, self.index, self.line, self.column) - def determine_encoding(self): - # type: () -> None + def determine_encoding(self) -> None: while not self.eof and (self.raw_buffer is None or len(self.raw_buffer) < 2): self.update_raw() if isinstance(self.raw_buffer, bytes): @@ -210,8 +189,7 @@ _printable_ascii = ('\x09\x0A\x0D' + "".join(map(chr, range(0x20, 0x7F)))).encode('ascii') @classmethod - def _get_non_printable_ascii(cls, data): # type: ignore - # type: (Text, bytes) -> OptionalTupleint, Text + def _get_non_printable_ascii(cls: Text, data: bytes) -> OptionalTupleint, Text: # type: ignore # NOQA ascii_bytes = data.encode('ascii') # type: ignore non_printables = ascii_bytes.translate(None, cls._printable_ascii) # type: ignore if not non_printables: @@ -220,23 +198,20 @@ return ascii_bytes.index(non_printable), non_printable.decode('ascii') @classmethod - def _get_non_printable_regex(cls, data): - # type: (Text) -> OptionalTupleint, Text + def _get_non_printable_regex(cls, data: Text) -> OptionalTupleint, Text: match = cls.NON_PRINTABLE.search(data) if not bool(match): return None return 
match.start(), match.group() @classmethod - def _get_non_printable(cls, data): - # type: (Text) -> OptionalTupleint, Text + def _get_non_printable(cls, data: Text) -> OptionalTupleint, Text: try: return cls._get_non_printable_ascii(data) # type: ignore except UnicodeEncodeError: return cls._get_non_printable_regex(data) - def check_printable(self, data): - # type: (Any) -> None + def check_printable(self, data: Any) -> None: non_printable_match = self._get_non_printable(data) if non_printable_match is not None: start, character = non_printable_match @@ -249,8 +224,7 @@ 'special characters are not allowed', ) - def update(self, length): - # type: (int) -> None + def update(self, length: int) -> None: if self.raw_buffer is None: return self.buffer = self.bufferself.pointer : @@ -281,8 +255,7 @@ self.raw_buffer = None break - def update_raw(self, size=None): - # type: (Optionalint) -> None + def update_raw(self, size: Optionalint = None) -> None: if size is None: size = 4096 data = self.stream.read(size)
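Note: the reader.py hunks above are essentially a mechanical move from `# type:` comments to inline Python 3 annotations; the behaviour (BOM sniffing in determine_encoding(), the buffered peek/prefix/forward interface) is unchanged. As a rough illustration of what that code does for a caller, here is a small sketch, not part of the diff, that only assumes the public `YAML().load()` entry point: it feeds the reader a UTF-16 byte stream and lets determine_encoding() pick the codec from the BOM.

import io
from ruamel.yaml import YAML

yaml = YAML()  # round-trip loader/dumper
# the Reader sees the UTF-16 BOM in the raw bytes and decodes accordingly
data = yaml.load(io.BytesIO('a: 1\nb: text\n'.encode('utf-16')))
print(data['a'], data['b'])  # expected: 1 text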
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/representer.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/representer.py
Changed
@@ -3,7 +3,7 @@ from ruamel.yaml.error import * # NOQA from ruamel.yaml.nodes import * # NOQA from ruamel.yaml.compat import ordereddict -from ruamel.yaml.compat import _F, nprint, nprintf # NOQA +from ruamel.yaml.compat import nprint, nprintf # NOQA from ruamel.yaml.scalarstring import ( LiteralScalarString, FoldedScalarString, @@ -35,8 +35,7 @@ import copyreg import base64 -if False: # MYPY - from typing import Dict, List, Any, Union, Text, Optional # NOQA +from typing import Dict, List, Any, Union, Text, Optional # NOQA # fmt: off __all__ = 'BaseRepresenter', 'SafeRepresenter', 'Representer', @@ -50,24 +49,27 @@ class BaseRepresenter: - yaml_representers = {} # type: DictAny, Any - yaml_multi_representers = {} # type: DictAny, Any + yaml_representers: DictAny, Any = {} + yaml_multi_representers: DictAny, Any = {} - def __init__(self, default_style=None, default_flow_style=None, dumper=None): - # type: (Any, Any, Any, Any) -> None + def __init__( + self: Any, + default_style: Any = None, + default_flow_style: Any = None, + dumper: Any = None, + ) -> None: self.dumper = dumper if self.dumper is not None: self.dumper._representer = self self.default_style = default_style self.default_flow_style = default_flow_style - self.represented_objects = {} # type: DictAny, Any - self.object_keeper = # type: ListAny - self.alias_key = None # type: Optionalint + self.represented_objects: DictAny, Any = {} + self.object_keeper: ListAny = + self.alias_key: Optionalint = None self.sort_base_mapping_type_on_output = True @property - def serializer(self): - # type: () -> Any + def serializer(self) -> Any: try: if hasattr(self.dumper, 'typ'): return self.dumper.serializer @@ -75,16 +77,14 @@ except AttributeError: return self # cyaml - def represent(self, data): - # type: (Any) -> None + def represent(self, data: Any) -> None: node = self.represent_data(data) self.serializer.serialize(node) self.represented_objects = {} self.object_keeper = self.alias_key = None - def represent_data(self, data): - # type: (Any) -> Any + def represent_data(self, data: Any) -> Any: if self.ignore_aliases(data): self.alias_key = None else: @@ -117,8 +117,7 @@ # self.represented_objectsalias_key = node return node - def represent_key(self, data): - # type: (Any) -> Any + def represent_key(self, data: Any) -> Any: """ David Fraser: Extract a method to represent keys in mappings, so that a subclass can choose not to quote them (for example) @@ -128,21 +127,20 @@ return self.represent_data(data) @classmethod - def add_representer(cls, data_type, representer): - # type: (Any, Any) -> None + def add_representer(cls, data_type: Any, representer: Any) -> None: if 'yaml_representers' not in cls.__dict__: cls.yaml_representers = cls.yaml_representers.copy() cls.yaml_representersdata_type = representer @classmethod - def add_multi_representer(cls, data_type, representer): - # type: (Any, Any) -> None + def add_multi_representer(cls, data_type: Any, representer: Any) -> None: if 'yaml_multi_representers' not in cls.__dict__: cls.yaml_multi_representers = cls.yaml_multi_representers.copy() cls.yaml_multi_representersdata_type = representer - def represent_scalar(self, tag, value, style=None, anchor=None): - # type: (Any, Any, Any, Any) -> Any + def represent_scalar( + self, tag: Any, value: Any, style: Any = None, anchor: Any = None + ) -> ScalarNode: if style is None: style = self.default_style comment = None @@ -150,14 +148,19 @@ comment = getattr(value, 'comment', None) if comment: comment = None, comment + if isinstance(tag, str): + 
tag = Tag(suffix=tag) node = ScalarNode(tag, value, style=style, comment=comment, anchor=anchor) if self.alias_key is not None: self.represented_objectsself.alias_key = node return node - def represent_sequence(self, tag, sequence, flow_style=None): - # type: (Any, Any, Any) -> Any - value = # type: ListAny + def represent_sequence( + self, tag: Any, sequence: Any, flow_style: Any = None + ) -> SequenceNode: + value: ListAny = + if isinstance(tag, str): + tag = Tag(suffix=tag) node = SequenceNode(tag, value, flow_style=flow_style) if self.alias_key is not None: self.represented_objectsself.alias_key = node @@ -174,9 +177,10 @@ node.flow_style = best_style return node - def represent_omap(self, tag, omap, flow_style=None): - # type: (Any, Any, Any) -> Any - value = # type: ListAny + def represent_omap(self, tag: Any, omap: Any, flow_style: Any = None) -> SequenceNode: + value: ListAny = + if isinstance(tag, str): + tag = Tag(suffix=tag) node = SequenceNode(tag, value, flow_style=flow_style) if self.alias_key is not None: self.represented_objectsself.alias_key = node @@ -195,9 +199,10 @@ node.flow_style = best_style return node - def represent_mapping(self, tag, mapping, flow_style=None): - # type: (Any, Any, Any) -> Any - value = # type: ListAny + def represent_mapping(self, tag: Any, mapping: Any, flow_style: Any = None) -> MappingNode: + value: ListAny = + if isinstance(tag, str): + tag = Tag(suffix=tag) node = MappingNode(tag, value, flow_style=flow_style) if self.alias_key is not None: self.represented_objectsself.alias_key = node @@ -224,14 +229,12 @@ node.flow_style = best_style return node - def ignore_aliases(self, data): - # type: (Any) -> bool + def ignore_aliases(self, data: Any) -> bool: return False class SafeRepresenter(BaseRepresenter): - def ignore_aliases(self, data): - # type: (Any) -> bool + def ignore_aliases(self, data: Any) -> bool: # https://docs.python.org/3/reference/expressions.html#parenthesized-forms : # "i.e. 
two occurrences of the empty tuple may or may not yield the same object" # so "data is ()" should not be used @@ -241,16 +244,13 @@ return True return False - def represent_none(self, data): - # type: (Any) -> Any + def represent_none(self, data: Any) -> ScalarNode: return self.represent_scalar('tag:yaml.org,2002:null', 'null') - def represent_str(self, data): - # type: (Any) -> Any + def represent_str(self, data: Any) -> Any: return self.represent_scalar('tag:yaml.org,2002:str', data) - def represent_binary(self, data): - # type: (Any) -> Any + def represent_binary(self, data: Any) -> ScalarNode: if hasattr(base64, 'encodebytes'): data = base64.encodebytes(data).decode('ascii') else: @@ -258,8 +258,7 @@ data = base64.encodestring(data).decode('ascii') # type: ignore return self.represent_scalar('tag:yaml.org,2002:binary', data, style='|') - def represent_bool(self, data, anchor=None): - # type: (Any, OptionalAny) -> Any + def represent_bool(self, data: Any, anchor: OptionalAny = None) -> ScalarNode: try: value = self.dumper.boolean_representationbool(data) except AttributeError: @@ -269,16 +268,14 @@ value = 'false' return self.represent_scalar('tag:yaml.org,2002:bool', value, anchor=anchor) - def represent_int(self, data): - # type: (Any) -> Any + def represent_int(self, data: Any) -> ScalarNode: return self.represent_scalar('tag:yaml.org,2002:int', str(data)) inf_value = 1e300 while repr(inf_value) != repr(inf_value * inf_value): inf_value *= inf_value - def represent_float(self, data): - # type: (Any) -> Any + def represent_float(self, data: Any) -> ScalarNode: if data != data or (data == 0.0 and data == 1.0): value = '.nan' elif data == self.inf_value: @@ -299,8 +296,7 @@ value = value.replace('e', '.0e', 1) return self.represent_scalar('tag:yaml.org,2002:float', value) - def represent_list(self, data): - # type: (Any) -> Any + def represent_list(self, data: Any) -> SequenceNode: # pairs = (len(data) > 0 and isinstance(data, list)) # if pairs: # for item in data: @@ -316,42 +312,37 @@ # (item_key, item_value))) # return SequenceNode('tag:yaml.org,2002:pairs', value) - def represent_dict(self, data): - # type: (Any) -> Any + def represent_dict(self, data: Any) -> MappingNode: return self.represent_mapping('tag:yaml.org,2002:map', data) - def represent_ordereddict(self, data): - # type: (Any) -> Any + def represent_ordereddict(self, data: Any) -> SequenceNode: return self.represent_omap('tag:yaml.org,2002:omap', data) - def represent_set(self, data): - # type: (Any) -> Any - value = {} # type: DictAny, None + def represent_set(self, data: Any) -> MappingNode: + value: DictAny, None = {} for key in data: valuekey = None return self.represent_mapping('tag:yaml.org,2002:set', value) - def represent_date(self, data): - # type: (Any) -> Any + def represent_date(self, data: Any) -> ScalarNode: value = data.isoformat() return self.represent_scalar('tag:yaml.org,2002:timestamp', value) - def represent_datetime(self, data): - # type: (Any) -> Any + def represent_datetime(self, data: Any) -> ScalarNode: value = data.isoformat(' ') return self.represent_scalar('tag:yaml.org,2002:timestamp', value) - def represent_yaml_object(self, tag, data, cls, flow_style=None): - # type: (Any, Any, Any, Any) -> Any + def represent_yaml_object( + self, tag: Any, data: Any, cls: Any, flow_style: Any = None + ) -> MappingNode: if hasattr(data, '__getstate__'): state = data.__getstate__() else: state = data.__dict__.copy() return self.represent_mapping(tag, state, flow_style=flow_style) - def 
represent_undefined(self, data): - # type: (Any) -> None - raise RepresenterError(_F('cannot represent an object: {data!s}', data=data)) + def represent_undefined(self, data: Any) -> None: + raise RepresenterError(f'cannot represent an object: {data!s}') SafeRepresenter.add_representer(type(None), SafeRepresenter.represent_none) @@ -391,39 +382,32 @@ class Representer(SafeRepresenter): - def represent_complex(self, data): - # type: (Any) -> Any + def represent_complex(self, data: Any) -> Any: if data.imag == 0.0: data = repr(data.real) elif data.real == 0.0: - data = _F('{data_imag!r}j', data_imag=data.imag) + data = f'{data.imag!r}j' elif data.imag > 0: - data = _F('{data_real!r}+{data_imag!r}j', data_real=data.real, data_imag=data.imag) + data = f'{data.real!r}+{data.imag!r}j' else: - data = _F('{data_real!r}{data_imag!r}j', data_real=data.real, data_imag=data.imag) + data = f'{data.real!r}{data.imag!r}j' return self.represent_scalar('tag:yaml.org,2002:python/complex', data) - def represent_tuple(self, data): - # type: (Any) -> Any + def represent_tuple(self, data: Any) -> SequenceNode: return self.represent_sequence('tag:yaml.org,2002:python/tuple', data) - def represent_name(self, data): - # type: (Any) -> Any + def represent_name(self, data: Any) -> ScalarNode: try: - name = _F( - '{modname!s}.{qualname!s}', modname=data.__module__, qualname=data.__qualname__ - ) + name = f'{data.__module__!s}.{data.__qualname__!s}' except AttributeError: # ToDo: check if this can be reached in Py3 - name = _F('{modname!s}.{name!s}', modname=data.__module__, name=data.__name__) + name = f'{data.__module__!s}.{data.__name__!s}' return self.represent_scalar('tag:yaml.org,2002:python/name:' + name, "") - def represent_module(self, data): - # type: (Any) -> Any + def represent_module(self, data: Any) -> ScalarNode: return self.represent_scalar('tag:yaml.org,2002:python/module:' + data.__name__, "") - def represent_object(self, data): - # type: (Any) -> Any + def represent_object(self, data: Any) -> UnionSequenceNode, MappingNode: # We use __reduce__ API to save the data. data.__reduce__ returns # a tuple of length 2-5: # (function, args, state, listitems, dictitems) @@ -441,14 +425,14 @@ # !!python/object/apply node. 
cls = type(data) - if cls in copyreg.dispatch_table: # type: ignore - reduce = copyreg.dispatch_tablecls(data) # type: ignore + if cls in copyreg.dispatch_table: + reduce: Any = copyreg.dispatch_tablecls(data) elif hasattr(data, '__reduce_ex__'): reduce = data.__reduce_ex__(2) elif hasattr(data, '__reduce__'): reduce = data.__reduce__() else: - raise RepresenterError(_F('cannot represent object: {data!r}', data=data)) + raise RepresenterError(f'cannot represent object: {data!r}') reduce = (list(reduce) + None * 5):5 function, args, state, listitems, dictitems = reduce args = list(args) @@ -467,14 +451,10 @@ tag = 'tag:yaml.org,2002:python/object/apply:' newobj = False try: - function_name = _F( - '{fun!s}.{qualname!s}', fun=function.__module__, qualname=function.__qualname__ - ) + function_name = f'{function.__module__!s}.{function.__qualname__!s}' except AttributeError: # ToDo: check if this can be reached in Py3 - function_name = _F( - '{fun!s}.{name!s}', fun=function.__module__, name=function.__name__ - ) + function_name = f'{function.__module__!s}.{function.__name__!s}' if not args and not listitems and not dictitems and isinstance(state, dict) and newobj: return self.represent_mapping( 'tag:yaml.org,2002:python/object:' + function_name, state @@ -514,8 +494,9 @@ # need to add type here and write out the .comment # in serializer and emitter - def __init__(self, default_style=None, default_flow_style=None, dumper=None): - # type: (Any, Any, Any) -> None + def __init__( + self, default_style: Any = None, default_flow_style: Any = None, dumper: Any = None + ) -> None: if not hasattr(dumper, 'typ') and default_flow_style is None: default_flow_style = False SafeRepresenter.__init__( @@ -525,8 +506,7 @@ dumper=dumper, ) - def ignore_aliases(self, data): - # type: (Any) -> bool + def ignore_aliases(self, data: Any) -> bool: try: if data.anchor is not None and data.anchor.value is not None: return False @@ -534,15 +514,13 @@ pass return SafeRepresenter.ignore_aliases(self, data) - def represent_none(self, data): - # type: (Any) -> Any + def represent_none(self, data: Any) -> ScalarNode: if len(self.represented_objects) == 0 and not self.serializer.use_explicit_start: # this will be open ended (although it is not yet) return self.represent_scalar('tag:yaml.org,2002:null', 'null') return self.represent_scalar('tag:yaml.org,2002:null', "") - def represent_literal_scalarstring(self, data): - # type: (Any) -> Any + def represent_literal_scalarstring(self, data: Any) -> ScalarNode: tag = None style = '|' anchor = data.yaml_anchor(any=True) @@ -551,8 +529,7 @@ represent_preserved_scalarstring = represent_literal_scalarstring - def represent_folded_scalarstring(self, data): - # type: (Any) -> Any + def represent_folded_scalarstring(self, data: Any) -> ScalarNode: tag = None style = '>' anchor = data.yaml_anchor(any=True) @@ -566,32 +543,30 @@ tag = 'tag:yaml.org,2002:str' return self.represent_scalar(tag, data, style=style, anchor=anchor) - def represent_single_quoted_scalarstring(self, data): - # type: (Any) -> Any + def represent_single_quoted_scalarstring(self, data: Any) -> ScalarNode: tag = None style = "'" anchor = data.yaml_anchor(any=True) tag = 'tag:yaml.org,2002:str' return self.represent_scalar(tag, data, style=style, anchor=anchor) - def represent_double_quoted_scalarstring(self, data): - # type: (Any) -> Any + def represent_double_quoted_scalarstring(self, data: Any) -> ScalarNode: tag = None style = '"' anchor = data.yaml_anchor(any=True) tag = 'tag:yaml.org,2002:str' return 
self.represent_scalar(tag, data, style=style, anchor=anchor) - def represent_plain_scalarstring(self, data): - # type: (Any) -> Any + def represent_plain_scalarstring(self, data: Any) -> ScalarNode: tag = None style = '' anchor = data.yaml_anchor(any=True) tag = 'tag:yaml.org,2002:str' return self.represent_scalar(tag, data, style=style, anchor=anchor) - def insert_underscore(self, prefix, s, underscore, anchor=None): - # type: (Any, Any, Any, Any) -> Any + def insert_underscore( + self, prefix: Any, s: Any, underscore: Any, anchor: Any = None + ) -> ScalarNode: if underscore is None: return self.represent_scalar('tag:yaml.org,2002:int', prefix + s, anchor=anchor) if underscore0: @@ -607,57 +582,54 @@ s += '_' return self.represent_scalar('tag:yaml.org,2002:int', prefix + s, anchor=anchor) - def represent_scalar_int(self, data): - # type: (Any) -> Any + def represent_scalar_int(self, data: Any) -> ScalarNode: if data._width is not None: - s = '{:0{}d}'.format(data, data._width) + s = f'{data:0{data._width}d}' else: s = format(data, 'd') anchor = data.yaml_anchor(any=True) return self.insert_underscore("", s, data._underscore, anchor=anchor) - def represent_binary_int(self, data): - # type: (Any) -> Any + def represent_binary_int(self, data: Any) -> ScalarNode: if data._width is not None: # cannot use '{:#0{}b}', that strips the zeros - s = '{:0{}b}'.format(data, data._width) + s = f'{data:0{data._width}b}' else: s = format(data, 'b') anchor = data.yaml_anchor(any=True) return self.insert_underscore('0b', s, data._underscore, anchor=anchor) - def represent_octal_int(self, data): - # type: (Any) -> Any + def represent_octal_int(self, data: Any) -> ScalarNode: if data._width is not None: # cannot use '{:#0{}o}', that strips the zeros - s = '{:0{}o}'.format(data, data._width) + s = f'{data:0{data._width}o}' else: s = format(data, 'o') anchor = data.yaml_anchor(any=True) - return self.insert_underscore('0o', s, data._underscore, anchor=anchor) + prefix = '0o' + if getattr(self.serializer, 'use_version', None) == (1, 1): + prefix = '0' + return self.insert_underscore(prefix, s, data._underscore, anchor=anchor) - def represent_hex_int(self, data): - # type: (Any) -> Any + def represent_hex_int(self, data: Any) -> ScalarNode: if data._width is not None: # cannot use '{:#0{}x}', that strips the zeros - s = '{:0{}x}'.format(data, data._width) + s = f'{data:0{data._width}x}' else: s = format(data, 'x') anchor = data.yaml_anchor(any=True) return self.insert_underscore('0x', s, data._underscore, anchor=anchor) - def represent_hex_caps_int(self, data): - # type: (Any) -> Any + def represent_hex_caps_int(self, data: Any) -> ScalarNode: if data._width is not None: # cannot use '{:#0{}X}', that strips the zeros - s = '{:0{}X}'.format(data, data._width) + s = f'{data:0{data._width}X}' else: s = format(data, 'X') anchor = data.yaml_anchor(any=True) return self.insert_underscore('0x', s, data._underscore, anchor=anchor) - def represent_scalar_float(self, data): - # type: (Any) -> Any + def represent_scalar_float(self, data: Any) -> ScalarNode: """ this is way more complicated """ value = None anchor = data.yaml_anchor(any=True) @@ -671,27 +643,26 @@ return self.represent_scalar('tag:yaml.org,2002:float', value, anchor=anchor) if data._exp is None and data._prec > 0 and data._prec == data._width - 1: # no exponent, but trailing dot - value = '{}{:d}.'.format(data._m_sign if data._m_sign else "", abs(int(data))) + value = f'{data._m_sign if data._m_sign else ""}{abs(int(data)):d}.' 
elif data._exp is None: # no exponent, "normal" dot prec = data._prec ms = data._m_sign if data._m_sign else "" - # -1 for the dot - value = '{}{:0{}.{}f}'.format( - ms, abs(data), data._width - len(ms), data._width - prec - 1 - ) - if prec == 0 or (prec == 1 and ms != ""): - value = value.replace('0.', '.') + if prec < 0: + value = f'{ms}{abs(int(data)):0{data._width - len(ms)}d}' + else: + # -1 for the dot + value = f'{ms}{abs(data):0{data._width - len(ms)}.{data._width - prec - 1}f}' + if prec == 0 or (prec == 1 and ms != ""): + value = value.replace('0.', '.') while len(value) < data._width: value += '0' else: # exponent - m, es = '{:{}.{}e}'.format( - # data, data._width, data._width - data._prec + (1 if data._m_sign else 0) - data, - data._width, - data._width + (1 if data._m_sign else 0), - ).split('e') + ( + m, + es, + ) = f'{data:{data._width}.{data._width + (1 if data._m_sign else 0)}e}'.split('e') w = data._width if data._prec > 0 else (data._width + 1) if data < 0: w += 1 @@ -711,10 +682,10 @@ while (len(m1) + len(m2) - (1 if data._m_sign else 0)) < data._width: m2 += '0' e -= 1 - value = m1 + m2 + data._exp + '{:{}0{}d}'.format(e, esgn, data._e_width) + value = m1 + m2 + data._exp + f'{e:{esgn}0{data._e_width}d}' elif data._prec == 0: # mantissa with trailing dot e -= len(m2) - value = m1 + m2 + '.' + data._exp + '{:{}0{}d}'.format(e, esgn, data._e_width) + value = m1 + m2 + '.' + data._exp + f'{e:{esgn}0{data._e_width}d}' else: if data._m_lead0 > 0: m2 = '0' * (data._m_lead0 - 1) + m1 + m2 @@ -725,15 +696,16 @@ m1 += m20 m2 = m21: e -= 1 - value = m1 + '.' + m2 + data._exp + '{:{}0{}d}'.format(e, esgn, data._e_width) + value = m1 + '.' + m2 + data._exp + f'{e:{esgn}0{data._e_width}d}' if value is None: value = repr(data).lower() return self.represent_scalar('tag:yaml.org,2002:float', value, anchor=anchor) - def represent_sequence(self, tag, sequence, flow_style=None): - # type: (Any, Any, Any) -> Any - value = # type: ListAny + def represent_sequence( + self, tag: Any, sequence: Any, flow_style: Any = None + ) -> SequenceNode: + value: ListAny = # if the flow_style is None, the flow style tacked on to the object # explicitly will be taken. 
If that is None as well the default flow # style rules @@ -745,6 +717,8 @@ anchor = sequence.yaml_anchor() except AttributeError: anchor = None + if isinstance(tag, str): + tag = Tag(suffix=tag) node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor) if self.alias_key is not None: self.represented_objectsself.alias_key = node @@ -786,8 +760,7 @@ node.flow_style = best_style return node - def merge_comments(self, node, comments): - # type: (Any, Any) -> Any + def merge_comments(self, node: Any, comments: Any) -> Any: if comments is None: assert hasattr(node, 'comment') return node @@ -802,8 +775,7 @@ node.comment = comments return node - def represent_key(self, data): - # type: (Any) -> Any + def represent_key(self, data: Any) -> Any: if isinstance(data, CommentedKeySeq): self.alias_key = None return self.represent_sequence('tag:yaml.org,2002:seq', data, flow_style=True) @@ -812,9 +784,8 @@ return self.represent_mapping('tag:yaml.org,2002:map', data, flow_style=True) return SafeRepresenter.represent_key(self, data) - def represent_mapping(self, tag, mapping, flow_style=None): - # type: (Any, Any, Any) -> Any - value = # type: ListAny + def represent_mapping(self, tag: Any, mapping: Any, flow_style: Any = None) -> MappingNode: + value: ListAny = try: flow_style = mapping.fa.flow_style(flow_style) except AttributeError: @@ -823,6 +794,8 @@ anchor = mapping.yaml_anchor() except AttributeError: anchor = None + if isinstance(tag, str): + tag = Tag(suffix=tag) node = MappingNode(tag, value, flow_style=flow_style, anchor=anchor) if self.alias_key is not None: self.represented_objectsself.alias_key = node @@ -897,12 +870,13 @@ else: arg = self.represent_data(merge_list) arg.flow_style = True - value.insert(merge_pos, (ScalarNode('tag:yaml.org,2002:merge', '<<'), arg)) + value.insert( + merge_pos, (ScalarNode(Tag(suffix='tag:yaml.org,2002:merge'), '<<'), arg) + ) return node - def represent_omap(self, tag, omap, flow_style=None): - # type: (Any, Any, Any) -> Any - value = # type: ListAny + def represent_omap(self, tag: Any, omap: Any, flow_style: Any = None) -> SequenceNode: + value: ListAny = try: flow_style = omap.fa.flow_style(flow_style) except AttributeError: @@ -911,6 +885,8 @@ anchor = omap.yaml_anchor() except AttributeError: anchor = None + if isinstance(tag, str): + tag = Tag(suffix=tag) node = SequenceNode(tag, value, flow_style=flow_style, anchor=anchor) if self.alias_key is not None: self.represented_objectsself.alias_key = node @@ -964,12 +940,11 @@ node.flow_style = best_style return node - def represent_set(self, setting): - # type: (Any) -> Any + def represent_set(self, setting: Any) -> MappingNode: flow_style = False - tag = 'tag:yaml.org,2002:set' + tag = Tag(suffix='tag:yaml.org,2002:set') # return self.represent_mapping(tag, value) - value = # type: ListAny + value: ListAny = flow_style = setting.fa.flow_style(flow_style) try: anchor = setting.yaml_anchor() @@ -1017,39 +992,38 @@ best_style = best_style return node - def represent_dict(self, data): - # type: (Any) -> Any + def represent_dict(self, data: Any) -> MappingNode: """write out tag if saved on loading""" try: - t = data.tag.value + _ = data.tag except AttributeError: - t = None - if t: - if t.startswith('!!'): - tag = 'tag:yaml.org,2002:' + t2: - else: - tag = t + tag = Tag(suffix='tag:yaml.org,2002:map') else: - tag = 'tag:yaml.org,2002:map' + if data.tag.trval: + if data.tag.startswith('!!'): + tag = Tag(suffix='tag:yaml.org,2002:' + data.tag.trval2:) + else: + tag = data.tag + else: + tag = 
Tag(suffix='tag:yaml.org,2002:map') return self.represent_mapping(tag, data) - def represent_list(self, data): - # type: (Any) -> Any + def represent_list(self, data: Any) -> SequenceNode: try: - t = data.tag.value + _ = data.tag except AttributeError: - t = None - if t: - if t.startswith('!!'): - tag = 'tag:yaml.org,2002:' + t2: - else: - tag = t + tag = Tag(suffix='tag:yaml.org,2002:seq') else: - tag = 'tag:yaml.org,2002:seq' + if data.tag.trval: + if data.tag.startswith('!!'): + tag = Tag(suffix='tag:yaml.org,2002:' + data.tag.trval2:) + else: + tag = data.tag + else: + tag = Tag(suffix='tag:yaml.org,2002:seq') return self.represent_sequence(tag, data) - def represent_datetime(self, data): - # type: (Any) -> Any + def represent_datetime(self, data: Any) -> ScalarNode: inter = 'T' if data._yaml't' else ' ' _yaml = data._yaml if _yaml'delta': @@ -1061,10 +1035,12 @@ value += _yaml'tz' return self.represent_scalar('tag:yaml.org,2002:timestamp', value) - def represent_tagged_scalar(self, data): - # type: (Any) -> Any + def represent_tagged_scalar(self, data: Any) -> ScalarNode: try: - tag = data.tag.value + if data.tag.handle == '!!': + tag = f'{data.tag.handle} {data.tag.suffix}' + else: + tag = data.tag except AttributeError: tag = None try: @@ -1073,16 +1049,16 @@ anchor = None return self.represent_scalar(tag, data.value, style=data.style, anchor=anchor) - def represent_scalar_bool(self, data): - # type: (Any) -> Any + def represent_scalar_bool(self, data: Any) -> ScalarNode: try: anchor = data.yaml_anchor() except AttributeError: anchor = None return SafeRepresenter.represent_bool(self, data, anchor=anchor) - def represent_yaml_object(self, tag, data, cls, flow_style=None): - # type: (Any, Any, Any, OptionalAny) -> Any + def represent_yaml_object( + self, tag: Any, data: Any, cls: Any, flow_style: OptionalAny = None + ) -> MappingNode: if hasattr(data, '__getstate__'): state = data.__getstate__() else:
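Note: a visible pattern in the representer.py hunks above is that every `represent_*` helper now wraps a plain string tag in `Tag(suffix=...)` before building the node, so externally registered representers can keep passing strings. A hedged sketch of that usage follows; the `Celsius` class and the `!celsius` tag are made up for illustration, while `add_representer()` and `represent_scalar()` are the calls shown in the diff.

import sys
from ruamel.yaml import YAML

class Celsius:                      # illustrative type, not part of ruamel.yaml
    def __init__(self, value):
        self.value = value

def represent_celsius(representer, data):
    # plain string tag; per the diff, represent_scalar() converts it to Tag(suffix=...)
    return representer.represent_scalar('!celsius', f'{data.value}')

yaml = YAML()
yaml.representer.add_representer(Celsius, represent_celsius)
yaml.dump({'outside': Celsius(21)}, sys.stdout)   # roughly: outside: !celsius 21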
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/resolver.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/resolver.py
Changed
@@ -2,11 +2,11 @@ import re -if False: # MYPY - from typing import Any, Dict, List, Union, Text, Optional # NOQA - from ruamel.yaml.compat import VersionType # NOQA +from typing import Any, Dict, List, Union, Text, Optional # NOQA +from ruamel.yaml.compat import VersionType # NOQA -from ruamel.yaml.compat import _DEFAULT_YAML_VERSION, _F # NOQA +from ruamel.yaml.tag import Tag +from ruamel.yaml.compat import _DEFAULT_YAML_VERSION # NOQA from ruamel.yaml.error import * # NOQA from ruamel.yaml.nodes import MappingNode, ScalarNode, SequenceNode # NOQA from ruamel.yaml.util import RegExp # NOQA @@ -103,25 +103,23 @@ class BaseResolver: - DEFAULT_SCALAR_TAG = 'tag:yaml.org,2002:str' - DEFAULT_SEQUENCE_TAG = 'tag:yaml.org,2002:seq' - DEFAULT_MAPPING_TAG = 'tag:yaml.org,2002:map' + DEFAULT_SCALAR_TAG = Tag(suffix='tag:yaml.org,2002:str') + DEFAULT_SEQUENCE_TAG = Tag(suffix='tag:yaml.org,2002:seq') + DEFAULT_MAPPING_TAG = Tag(suffix='tag:yaml.org,2002:map') - yaml_implicit_resolvers = {} # type: DictAny, Any - yaml_path_resolvers = {} # type: DictAny, Any + yaml_implicit_resolvers: DictAny, Any = {} + yaml_path_resolvers: DictAny, Any = {} - def __init__(self, loadumper=None): - # type: (Any, Any) -> None + def __init__(self: Any, loadumper: Any = None) -> None: self.loadumper = loadumper if self.loadumper is not None and getattr(self.loadumper, '_resolver', None) is None: self.loadumper._resolver = self.loadumper - self._loader_version = None # type: Any - self.resolver_exact_paths = # type: ListAny - self.resolver_prefix_paths = # type: ListAny + self._loader_version: Any = None + self.resolver_exact_paths: ListAny = + self.resolver_prefix_paths: ListAny = @property - def parser(self): - # type: () -> Any + def parser(self) -> Any: if self.loadumper is not None: if hasattr(self.loadumper, 'typ'): return self.loadumper.parser @@ -129,8 +127,7 @@ return None @classmethod - def add_implicit_resolver_base(cls, tag, regexp, first): - # type: (Any, Any, Any) -> None + def add_implicit_resolver_base(cls, tag: Any, regexp: Any, first: Any) -> None: if 'yaml_implicit_resolvers' not in cls.__dict__: # deepcopy doesn't work here cls.yaml_implicit_resolvers = dict( @@ -142,8 +139,7 @@ cls.yaml_implicit_resolvers.setdefault(ch, ).append((tag, regexp)) @classmethod - def add_implicit_resolver(cls, tag, regexp, first): - # type: (Any, Any, Any) -> None + def add_implicit_resolver(cls, tag: Any, regexp: Any, first: Any) -> None: if 'yaml_implicit_resolvers' not in cls.__dict__: # deepcopy doesn't work here cls.yaml_implicit_resolvers = dict( @@ -159,8 +155,7 @@ # def add_implicit_resolver(cls, tag, regexp, first): @classmethod - def add_path_resolver(cls, tag, path, kind=None): - # type: (Any, Any, Any) -> None + def add_path_resolver(cls, tag: Any, path: Any, kind: Any = None) -> None: # Note: `add_path_resolver` is experimental. The API could be changed. # `new_path` is a pattern that is matched against the path from the # root to the node that is being considered. `node_path` elements are @@ -175,7 +170,7 @@ # against a sequence value with the index equal to `index_check`. 
if 'yaml_path_resolvers' not in cls.__dict__: cls.yaml_path_resolvers = cls.yaml_path_resolvers.copy() - new_path = # type: ListAny + new_path: ListAny = for element in path: if isinstance(element, (list, tuple)): if len(element) == 2: @@ -184,9 +179,7 @@ node_check = element0 index_check = True else: - raise ResolverError( - _F('Invalid path element: {element!s}', element=element) - ) + raise ResolverError(f'Invalid path element: {element!s}') else: node_check = None index_check = element @@ -201,13 +194,9 @@ and not isinstance(node_check, str) and node_check is not None ): - raise ResolverError( - _F('Invalid node checker: {node_check!s}', node_check=node_check) - ) + raise ResolverError(f'Invalid node checker: {node_check!s}') if not isinstance(index_check, (str, int)) and index_check is not None: - raise ResolverError( - _F('Invalid index checker: {index_check!s}', index_check=index_check) - ) + raise ResolverError(f'Invalid index checker: {index_check!s}') new_path.append((node_check, index_check)) if kind is str: kind = ScalarNode @@ -216,11 +205,10 @@ elif kind is dict: kind = MappingNode elif kind not in ScalarNode, SequenceNode, MappingNode and kind is not None: - raise ResolverError(_F('Invalid node kind: {kind!s}', kind=kind)) + raise ResolverError(f'Invalid node kind: {kind!s}') cls.yaml_path_resolverstuple(new_path), kind = tag - def descend_resolver(self, current_node, current_index): - # type: (Any, Any) -> None + def descend_resolver(self, current_node: Any, current_index: Any) -> None: if not self.yaml_path_resolvers: return exact_paths = {} @@ -242,15 +230,15 @@ self.resolver_exact_paths.append(exact_paths) self.resolver_prefix_paths.append(prefix_paths) - def ascend_resolver(self): - # type: () -> None + def ascend_resolver(self) -> None: if not self.yaml_path_resolvers: return self.resolver_exact_paths.pop() self.resolver_prefix_paths.pop() - def check_resolver_prefix(self, depth, path, kind, current_node, current_index): - # type: (int, Any, Any, Any, Any) -> bool + def check_resolver_prefix( + self, depth: int, path: Any, kind: Any, current_node: Any, current_index: Any + ) -> bool: node_check, index_check = pathdepth - 1 if isinstance(node_check, str): if current_node.tag != node_check: @@ -272,8 +260,7 @@ return False return True - def resolve(self, kind, value, implicit): - # type: (Any, Any, Any) -> Any + def resolve(self, kind: Any, value: Any, implicit: Any) -> Any: if kind is ScalarNode and implicit0: if value == "": resolvers = self.yaml_implicit_resolvers.get("", ) @@ -282,14 +269,14 @@ resolvers += self.yaml_implicit_resolvers.get(None, ) for tag, regexp in resolvers: if regexp.match(value): - return tag + return Tag(suffix=tag) implicit = implicit1 if bool(self.yaml_path_resolvers): exact_paths = self.resolver_exact_paths-1 if kind in exact_paths: - return exact_pathskind + return Tag(suffix=exact_pathskind) if None in exact_paths: - return exact_pathsNone + return Tag(suffix=exact_pathsNone) if kind is ScalarNode: return self.DEFAULT_SCALAR_TAG elif kind is SequenceNode: @@ -298,8 +285,7 @@ return self.DEFAULT_MAPPING_TAG @property - def processing_version(self): - # type: () -> Any + def processing_version(self) -> Any: return None @@ -320,24 +306,25 @@ and Yes/No/On/Off booleans. 
""" - def __init__(self, version=None, loader=None, loadumper=None): - # type: (OptionalVersionType, Any, Any) -> None + def __init__( + self, version: OptionalVersionType = None, loader: Any = None, loadumper: Any = None + ) -> None: if loader is None and loadumper is not None: loader = loadumper BaseResolver.__init__(self, loader) self._loader_version = self.get_loader_version(version) - self._version_implicit_resolver = {} # type: DictAny, Any + self._version_implicit_resolver: DictAny, Any = {} - def add_version_implicit_resolver(self, version, tag, regexp, first): - # type: (VersionType, Any, Any, Any) -> None + def add_version_implicit_resolver( + self, version: VersionType, tag: Any, regexp: Any, first: Any + ) -> None: if first is None: first = None impl_resolver = self._version_implicit_resolver.setdefault(version, {}) for ch in first: impl_resolver.setdefault(ch, ).append((tag, regexp)) - def get_loader_version(self, version): - # type: (OptionalVersionType) -> Any + def get_loader_version(self, version: OptionalVersionType) -> Any: if version is None or isinstance(version, tuple): return version if isinstance(version, list): @@ -346,8 +333,7 @@ return tuple(map(int, version.split('.'))) @property - def versioned_resolver(self): - # type: () -> Any + def versioned_resolver(self) -> Any: """ select the resolver based on the version we are parsing """ @@ -360,8 +346,7 @@ self.add_version_implicit_resolver(version, x1, x2, x3) return self._version_implicit_resolverversion - def resolve(self, kind, value, implicit): - # type: (Any, Any, Any) -> Any + def resolve(self, kind: Any, value: Any, implicit: Any) -> Any: if kind is ScalarNode and implicit0: if value == "": resolvers = self.versioned_resolver.get("", ) @@ -370,14 +355,14 @@ resolvers += self.versioned_resolver.get(None, ) for tag, regexp in resolvers: if regexp.match(value): - return tag + return Tag(suffix=tag) implicit = implicit1 if bool(self.yaml_path_resolvers): exact_paths = self.resolver_exact_paths-1 if kind in exact_paths: - return exact_pathskind + return Tag(suffix=exact_pathskind) if None in exact_paths: - return exact_pathsNone + return Tag(suffix=exact_pathsNone) if kind is ScalarNode: return self.DEFAULT_SCALAR_TAG elif kind is SequenceNode: @@ -386,8 +371,7 @@ return self.DEFAULT_MAPPING_TAG @property - def processing_version(self): - # type: () -> Any + def processing_version(self) -> Any: try: version = self.loadumper._scanner.yaml_version except AttributeError:
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/ruamel.yaml.egg-info/PKG-INFO -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/ruamel.yaml.egg-info/PKG-INFO
Changed
@@ -1,13 +1,12 @@ Metadata-Version: 2.1 Name: ruamel.yaml -Version: 0.17.21 +Version: 0.17.32 Summary: ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order Home-page: https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree Author: Anthon van der Neut Author-email: a.van.der.neut@ruamel.eu License: MIT license Keywords: yaml 1.2 parser round-trip preserve quotes order config -Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License @@ -15,8 +14,7 @@ Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.10 -Classifier: Programming Language :: Python :: 3.5 -Classifier: Programming Language :: Python :: 3.6 +Classifier: Programming Language :: Python :: 3.11 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 @@ -36,32 +34,26 @@ ``ruamel.yaml`` is a YAML 1.2 loader/dumper package for Python. -:version: 0.17.21 -:updated: 2022-02-12 +:version: 0.17.32 +:updated: 2023-06-17 :documentation: http://yaml.readthedocs.io :repository: https://sourceforge.net/projects/ruamel-yaml/ :pypi: https://pypi.org/project/ruamel.yaml/ -*The 0.16.13 release was the last that was tested to be working on Python 2.7. -The 0.17.21 is the last one tested to be working on Python 3.5, -that is also the last release supporting old PyYAML functions, you'll have to create a -`YAML()` instance and use its `.load()` and `.dump()` methods.* +*Starting with 0.17.22 only Python 3.7+ is supported. +The 0.17 series is also the last to support old PyYAML functions, replace it by +creating a `YAML()` instance and use its `.load()` and `.dump()` methods.* +New(er) functionality is usually only available via the new API. -*Please adjust your dependencies accordingly if necessary. (`ruamel.yaml<0.18`)* +The 0.17.21 was the last one tested to be working on Python 3.5 and 3.6 (the +latter was not tested, because +tox/virtualenv stopped supporting that EOL version). +The 0.16.13 release was the last that was tested to be working on Python 2.7. -Starting with version 0.15.0 the way YAML files are loaded and dumped -has been changing, see the API doc for details. Currently existing -functionality will throw a warning before being changed/removed. -**For production systems already using a pre 0.16 version, you should -pin the version being used with ``ruamel.yaml<=0.15``** if you cannot -fully test upgrading to a newer version. For new usage -pin to the minor version tested ( ``ruamel.yaml<=0.17``) or even to the -exact version used. +*Please adjust/pin your dependencies accordingly if necessary. (`ruamel.yaml<0.18`)* -New functionality is usually only available via the new API, so -make sure you use it and stop using the `ruamel.yaml.safe_load()`, -`ruamel.yaml.round_trip_load()` and `ruamel.yaml.load()` functions -(and their `....dump()` counterparts). +There are now two extra plug-in packages (`ruamel.yaml.bytes` and `ruamel.yaml.string`) +for those not wanting to do the streaming to a `io.BytesIO/StringIO` buffer themselves. If your package uses ``ruamel.yaml`` and is not listed on PyPI, drop me an email, preferably with some information on how you use the @@ -99,6 +91,96 @@ .. 
should insert NEXT: at the beginning of line for next key (with empty line) +0.17.32 (2023-06-17): + - fix issue with scanner getting stuck in infinite loop + +0.17.31 (2023-05-31): + - added tag.setter on `ScalarEvent` and on `Node`, that takes either + a `Tag` instance, or a str + (reported by `Sorin Sbarnea <https://sourceforge.net/u/ssbarnea/profile/>`__) + +0.17.30 (2023-05-30): + - fix issue 467, caused by Tag instances not being hashable (reported by + `Douglas Raillard + <https://bitbucket.org/%7Bcf052d92-a278-4339-9aa8-de41923bb556%7D/>`__) + +0.17.29 (2023-05-30): + - changed the internals of the tag property from a string to a class which allows + for preservation of the original handle and suffix. This should + result in better results using documents with %TAG directives, as well + as preserving URI escapes in tag suffixes. + +0.17.28 (2023-05-26): + - fix for issue 464: documents ending with document end marker without final newline + fail to load (reported by `Mariusz Rusiniak <https://sourceforge.net/u/r2dan/profile/>`__) + +0.17.27 (2023-05-25): + - fix issue with inline mappings as value for merge keys + (reported by Sirish on `StackOverflow <https://stackoverflow.com/q/76331049/1307905>`__) + - fix for 468, error inserting after accessing merge attribute on ``CommentedMap`` + (reported by `Bastien gerard <https://sourceforge.net/u/bagerard/>`__) + - fix for issue 461 pop + insert on same `CommentedMap` key throwing error + (reported by `John Thorvald Wodder II <https://sourceforge.net/u/jwodder/profile/>`__) + +0.17.26 (2023-05-09): + - fix for error on edge cage for issue 459 + +0.17.25 (2023-05-09): + - fix for regression while dumping wrapped strings with too many backslashes removed + (issue 459, reported by `Lele Gaifax <https://sourceforge.net/u/lele/profile/>`__) + +0.17.24 (2023-05-06): + - rewrite of ``CommentedMap.insert()``. If you have a merge key in + the YAML document for the mapping you insert to, the position value should + be the one as you look at the YAML input. + This fixes issue 453 where other + keys of a merged in mapping would show up after an insert (reported by + `Alex Miller <https://sourceforge.net/u/millerdevel/profile/>`__). It + also fixes a call to `.insert()` resulting into the merge key to move + to be the first key if it wasn't already and it is also now possible + to insert a key before a merge key (even if the fist key in the mapping). + - fix (in the pure Python implementation including default) for issue 447. + (reported by `Jack Cherng <https://sourceforge.net/u/jfcherng/profile/>`__, + also brought up by brent on + `StackOverflow <https://stackoverflow.com/q/40072485/1307905>`__) + +0.17.23 (2023-05-05): + - fix 458, error on plain scalars starting with word longer than width. + (reported by `Kyle Larose <https://sourceforge.net/u/klarose/profile/>`__) + - fix for ``.update()`` no longer correctly handling keyword arguments + (reported by John Lin on <StackOverflow + `<https://stackoverflow.com/q/76089100/1307905>`__) + - fix issue 454: high Unicode (emojis) in quoted strings always + escaped (reported by `Michal Čihař <https://sourceforge.net/u/nijel/profile/>`__ + based on a question on StackOverflow). 
+ - fix issue with emitter conservatively inserting extra backslashes in wrapped + quoted strings (reported by thebenman on `StackOverflow + <https://stackoverflow.com/q/75631454/1307905>`__) + +0.17.22 (2023-05-02): + + - fix issue 449 where the second exclamation marks got URL encoded (reported + and fixing PR provided by `John Stark <https://sourceforge.net/u/jods/profile/>`__) + - fix issue with indent != 2 and literal scalars with empty first line + (reported by wrdis on `StackOverflow <https://stackoverflow.com/q/75584262/1307905>`__) + - updated __repr__ of CommentedMap, now that Python's dict is ordered -> no more + ordereddict(list-of-tuples) + - merge MR 4, handling OctalInt in YAML 1.1 + (provided by `Jacob Floyd <https://sourceforge.net/u/cognifloyd/profile/>`_) + - fix loading of `!!float 42` (reported by Eric on + `Stack overflow <https://stackoverflow.com/a/71555107/1307905>`_) + - line numbers are now set on `CommentedKeySeq` and `CommentedKeyMap` (which + are created if you have a sequence resp. mapping as the key in a mapping) + - plain scalars: put single words longer than width on a line of their own, instead + of after the previous line (issue 427, reported by `Antoine Cotten + <https://sourceforge.net/u/antoineco/profile/>`_). Caveat: this currently results in a + space ending the previous line. + - fix for folded scalar part of 421: comments after ">" on first line of folded + scalars are now preserved (as were those in the same position on literal scalars). + Issue reported by Jacob Floyd. + - added stacklevel to warnings + - typing changed from Py2 compatible comments to Py3, removed various Py2-isms + 0.17.21 (2022-02-12): - fix bug in calling `.compose()` method with `pathlib.Path` instance. @@ -137,7 +219,7 @@ attrs with `@attr.s()` (both reported by `ssph <https://sourceforge.net/u/sph/>`__) 0.17.11 (2021-08-19): - - fix error baseclass for ``DuplicateKeyErorr`` (reported by `Łukasz Rogalski + - fix error baseclass for ``DuplicateKeyError`` (reported by `Łukasz Rogalski <https://sourceforge.net/u/lrogalski/>`__) - fix typo in reader error message, causing `KeyError` during reader error (reported by `MTU <https://sourceforge.net/u/mtu/>`__) @@ -288,5 +370,3 @@ For older changes see the file `CHANGES <https://sourceforge.net/p/ruamel-yaml/code/ci/default/tree/CHANGES>`_ - -
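Note: the README text in this PKG-INFO hunk keeps pointing users at the instance API rather than the deprecated module-level load()/dump() functions. For reference, a minimal round-trip example of that documented API (the optional `ruamel.yaml.string` / `ruamel.yaml.bytes` plug-ins mentioned above are separate packages and not shown here):

import sys
from ruamel.yaml import YAML

yaml = YAML()                     # typ='rt' (round-trip) by default; YAML(typ='safe') also exists
doc = yaml.load('a: 1  # keep me\nb: [2, 3]\n')
doc['c'] = 4
yaml.dump(doc, sys.stdout)        # the comment and the flow style of 'b' are preserved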
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/ruamel.yaml.egg-info/SOURCES.txt -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/ruamel.yaml.egg-info/SOURCES.txt
Changed
@@ -2,6 +2,7 @@
 LICENSE
 MANIFEST.in
 README.rst
+pyproject.toml
 setup.py
 ./LICENSE
 ./__init__.py
@@ -30,13 +31,13 @@
 ./scalarstring.py
 ./scanner.py
 ./serializer.py
+./tag.py
 ./timestamp.py
 ./tokens.py
 ./util.py
-.ruamel/__init__.py
 ruamel.yaml.egg-info/PKG-INFO
 ruamel.yaml.egg-info/SOURCES.txt
 ruamel.yaml.egg-info/dependency_links.txt
-ruamel.yaml.egg-info/namespace_packages.txt
+ruamel.yaml.egg-info/not-zip-safe
 ruamel.yaml.egg-info/requires.txt
 ruamel.yaml.egg-info/top_level.txt
\ No newline at end of file
View file
_service:tar_scm:ruamel.yaml-0.17.32.tar.gz/ruamel.yaml.egg-info/not-zip-safe
Added
@@ -0,0 +1,1 @@ +
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/ruamel.yaml.egg-info/requires.txt -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/ruamel.yaml.egg-info/requires.txt
Changed
@@ -1,6 +1,6 @@
-[:platform_python_implementation=="CPython" and python_version<"3.11"]
-ruamel.yaml.clib>=0.2.6
+[:platform_python_implementation=="CPython" and python_version<"3.12"]
+ruamel.yaml.clib>=0.2.7
 
 [docs]
 ryd
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/scalarbool.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/scalarbool.py
Changed
@@ -11,15 +11,13 @@
 
 from ruamel.yaml.anchor import Anchor
 
-if False:  # MYPY
-    from typing import Text, Any, Dict, List  # NOQA
+from typing import Text, Any, Dict, List  # NOQA
 
 __all__ = ['ScalarBoolean']
 
 
 class ScalarBoolean(int):
-    def __new__(cls, *args, **kw):
-        # type: (Any, Any, Any) -> Any
+    def __new__(cls: Any, *args: Any, **kw: Any) -> Any:
         anchor = kw.pop('anchor', None)
         b = int.__new__(cls, *args, **kw)
         if anchor is not None:
@@ -27,21 +25,18 @@
         return b
 
     @property
-    def anchor(self):
-        # type: () -> Any
+    def anchor(self) -> Any:
         if not hasattr(self, Anchor.attrib):
             setattr(self, Anchor.attrib, Anchor())
         return getattr(self, Anchor.attrib)
 
-    def yaml_anchor(self, any=False):
-        # type: (bool) -> Any
+    def yaml_anchor(self, any: bool = False) -> Any:
         if not hasattr(self, Anchor.attrib):
             return None
         if any or self.anchor.always_dump:
             return self.anchor
         return None
 
-    def yaml_set_anchor(self, value, always_dump=False):
-        # type: (Any, bool) -> None
+    def yaml_set_anchor(self, value: Any, always_dump: bool = False) -> None:
         self.anchor.value = value
         self.anchor.always_dump = always_dump
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/scalarfloat.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/scalarfloat.py
Changed
@@ -3,15 +3,13 @@ import sys from ruamel.yaml.anchor import Anchor -if False: # MYPY - from typing import Text, Any, Dict, List # NOQA +from typing import Text, Any, Dict, List # NOQA __all__ = 'ScalarFloat', 'ExponentialFloat', 'ExponentialCapsFloat' class ScalarFloat(float): - def __new__(cls, *args, **kw): - # type: (Any, Any, Any) -> Any + def __new__(cls: Any, *args: Any, **kw: Any) -> Any: width = kw.pop('width', None) prec = kw.pop('prec', None) m_sign = kw.pop('m_sign', None) @@ -34,24 +32,21 @@ v.yaml_set_anchor(anchor, always_dump=True) return v - def __iadd__(self, a): # type: ignore - # type: (Any) -> Any + def __iadd__(self, a: Any) -> Any: # type: ignore return float(self) + a x = type(self)(self + a) x._width = self._width x._underscore = self._underscore: if self._underscore is not None else None # NOQA return x - def __ifloordiv__(self, a): # type: ignore - # type: (Any) -> Any + def __ifloordiv__(self, a: Any) -> Any: # type: ignore return float(self) // a x = type(self)(self // a) x._width = self._width x._underscore = self._underscore: if self._underscore is not None else None # NOQA return x - def __imul__(self, a): # type: ignore - # type: (Any) -> Any + def __imul__(self, a: Any) -> Any: # type: ignore return float(self) * a x = type(self)(self * a) x._width = self._width @@ -59,16 +54,14 @@ x._prec = self._prec # check for others return x - def __ipow__(self, a): # type: ignore - # type: (Any) -> Any + def __ipow__(self, a: Any) -> Any: # type: ignore return float(self) ** a x = type(self)(self ** a) x._width = self._width x._underscore = self._underscore: if self._underscore is not None else None # NOQA return x - def __isub__(self, a): # type: ignore - # type: (Any) -> Any + def __isub__(self, a: Any) -> Any: # type: ignore return float(self) - a x = type(self)(self - a) x._width = self._width @@ -76,49 +69,35 @@ return x @property - def anchor(self): - # type: () -> Any + def anchor(self) -> Any: if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) - def yaml_anchor(self, any=False): - # type: (bool) -> Any + def yaml_anchor(self, any: bool = False) -> Any: if not hasattr(self, Anchor.attrib): return None if any or self.anchor.always_dump: return self.anchor return None - def yaml_set_anchor(self, value, always_dump=False): - # type: (Any, bool) -> None + def yaml_set_anchor(self, value: Any, always_dump: bool = False) -> None: self.anchor.value = value self.anchor.always_dump = always_dump - def dump(self, out=sys.stdout): - # type: (Any) -> Any + def dump(self, out: Any = sys.stdout) -> None: out.write( - 'ScalarFloat({}| w:{}, p:{}, s:{}, lz:{}, _:{}|{}, w:{}, s:{})\n'.format( - self, - self._width, # type: ignore - self._prec, # type: ignore - self._m_sign, # type: ignore - self._m_lead0, # type: ignore - self._underscore, # type: ignore - self._exp, # type: ignore - self._e_width, # type: ignore - self._e_sign, # type: ignore - ) + f'ScalarFloat({self}| w:{self._width}, p:{self._prec}, ' # type: ignore + f's:{self._m_sign}, lz:{self._m_lead0}, _:{self._underscore}|{self._exp}' + f', w:{self._e_width}, s:{self._e_sign})\n' ) class ExponentialFloat(ScalarFloat): - def __new__(cls, value, width=None, underscore=None): - # type: (Any, Any, Any) -> Any + def __new__(cls, value: Any, width: Any = None, underscore: Any = None) -> Any: return ScalarFloat.__new__(cls, value, width=width, underscore=underscore) class ExponentialCapsFloat(ScalarFloat): - def __new__(cls, value, width=None, underscore=None): - # 
type: (Any, Any, Any) -> Any + def __new__(cls, value: Any, width: Any = None, underscore: Any = None) -> Any: return ScalarFloat.__new__(cls, value, width=width, underscore=underscore)
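Note: ScalarFloat above is the round-trip carrier for float formatting (width, precision, sign, exponent); these hunks only modernise the annotations and the dump() debug helper. A hedged sketch of the effect in round-trip mode (exact output formatting can vary between releases):

import sys
from ruamel.yaml import YAML

yaml = YAML()
doc = yaml.load('x: 1.50e+03\n')
print(type(doc['x']).__name__)    # a ScalarFloat (float subclass) carrying width/exponent info
yaml.dump(doc, sys.stdout)        # expected to round-trip as: x: 1.50e+03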
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/scalarint.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/scalarint.py
Changed
@@ -2,15 +2,13 @@ from ruamel.yaml.anchor import Anchor -if False: # MYPY - from typing import Text, Any, Dict, List # NOQA +from typing import Text, Any, Dict, List # NOQA __all__ = 'ScalarInt', 'BinaryInt', 'OctalInt', 'HexInt', 'HexCapsInt', 'DecimalInt' class ScalarInt(int): - def __new__(cls, *args, **kw): - # type: (Any, Any, Any) -> Any + def __new__(cls: Any, *args: Any, **kw: Any) -> Any: width = kw.pop('width', None) underscore = kw.pop('underscore', None) anchor = kw.pop('anchor', None) @@ -21,8 +19,7 @@ v.yaml_set_anchor(anchor, always_dump=True) return v - def __iadd__(self, a): # type: ignore - # type: (Any) -> Any + def __iadd__(self, a: Any) -> Any: # type: ignore x = type(self)(self + a) x._width = self._width # type: ignore x._underscore = ( # type: ignore @@ -30,8 +27,7 @@ ) # NOQA return x - def __ifloordiv__(self, a): # type: ignore - # type: (Any) -> Any + def __ifloordiv__(self, a: Any) -> Any: # type: ignore x = type(self)(self // a) x._width = self._width # type: ignore x._underscore = ( # type: ignore @@ -39,8 +35,7 @@ ) # NOQA return x - def __imul__(self, a): # type: ignore - # type: (Any) -> Any + def __imul__(self, a: Any) -> Any: # type: ignore x = type(self)(self * a) x._width = self._width # type: ignore x._underscore = ( # type: ignore @@ -48,8 +43,7 @@ ) # NOQA return x - def __ipow__(self, a): # type: ignore - # type: (Any) -> Any + def __ipow__(self, a: Any) -> Any: # type: ignore x = type(self)(self ** a) x._width = self._width # type: ignore x._underscore = ( # type: ignore @@ -57,8 +51,7 @@ ) # NOQA return x - def __isub__(self, a): # type: ignore - # type: (Any) -> Any + def __isub__(self, a: Any) -> Any: # type: ignore x = type(self)(self - a) x._width = self._width # type: ignore x._underscore = ( # type: ignore @@ -67,35 +60,34 @@ return x @property - def anchor(self): - # type: () -> Any + def anchor(self) -> Any: if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) - def yaml_anchor(self, any=False): - # type: (bool) -> Any + def yaml_anchor(self, any: bool = False) -> Any: if not hasattr(self, Anchor.attrib): return None if any or self.anchor.always_dump: return self.anchor return None - def yaml_set_anchor(self, value, always_dump=False): - # type: (Any, bool) -> None + def yaml_set_anchor(self, value: Any, always_dump: bool = False) -> None: self.anchor.value = value self.anchor.always_dump = always_dump class BinaryInt(ScalarInt): - def __new__(cls, value, width=None, underscore=None, anchor=None): - # type: (Any, Any, Any, Any) -> Any + def __new__( + cls, value: Any, width: Any = None, underscore: Any = None, anchor: Any = None + ) -> Any: return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor) class OctalInt(ScalarInt): - def __new__(cls, value, width=None, underscore=None, anchor=None): - # type: (Any, Any, Any, Any) -> Any + def __new__( + cls, value: Any, width: Any = None, underscore: Any = None, anchor: Any = None + ) -> Any: return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor) @@ -106,22 +98,25 @@ class HexInt(ScalarInt): """uses lower case (a-f)""" - def __new__(cls, value, width=None, underscore=None, anchor=None): - # type: (Any, Any, Any, Any) -> Any + def __new__( + cls, value: Any, width: Any = None, underscore: Any = None, anchor: Any = None + ) -> Any: return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor) class HexCapsInt(ScalarInt): """uses upper case (A-F)""" - def 
__new__(cls, value, width=None, underscore=None, anchor=None): - # type: (Any, Any, Any, Any) -> Any + def __new__( + cls, value: Any, width: Any = None, underscore: Any = None, anchor: Any = None + ) -> Any: return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor) class DecimalInt(ScalarInt): """needed if anchor""" - def __new__(cls, value, width=None, underscore=None, anchor=None): - # type: (Any, Any, Any, Any) -> Any + def __new__( + cls, value: Any, width: Any = None, underscore: Any = None, anchor: Any = None + ) -> Any: return ScalarInt.__new__(cls, value, width=width, underscore=underscore, anchor=anchor)
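Note: the ScalarInt subclasses above (BinaryInt, OctalInt, HexInt, HexCapsInt, DecimalInt) exist so that the representer hunks earlier in this revision can re-emit integers in their original base and width. A hedged usage sketch:

import sys
from ruamel.yaml import YAML
from ruamel.yaml.scalarint import HexInt

yaml = YAML()
doc = yaml.load('flags: 0x00ff\n')
print(type(doc['flags']).__name__)   # HexInt in round-trip mode
doc['mask'] = HexInt(15, width=4)    # width pads with leading zeros
yaml.dump(doc, sys.stdout)           # expected: flags: 0x00ff / mask: 0x000f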
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/scalarstring.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/scalarstring.py
Changed
@@ -2,8 +2,8 @@ from ruamel.yaml.anchor import Anchor -if False: # MYPY - from typing import Text, Any, Dict, List # NOQA +from typing import Text, Any, Dict, List # NOQA +from ruamel.yaml.compat import SupportsIndex __all__ = 'ScalarString', @@ -21,35 +21,30 @@ class ScalarString(str): __slots__ = Anchor.attrib - def __new__(cls, *args, **kw): - # type: (Any, Any) -> Any + def __new__(cls, *args: Any, **kw: Any) -> Any: anchor = kw.pop('anchor', None) ret_val = str.__new__(cls, *args, **kw) if anchor is not None: ret_val.yaml_set_anchor(anchor, always_dump=True) return ret_val - def replace(self, old, new, maxreplace=-1): - # type: (Any, Any, int) -> Any + def replace(self, old: Any, new: Any, maxreplace: SupportsIndex = -1) -> Any: return type(self)((str.replace(self, old, new, maxreplace))) @property - def anchor(self): - # type: () -> Any + def anchor(self) -> Any: if not hasattr(self, Anchor.attrib): setattr(self, Anchor.attrib, Anchor()) return getattr(self, Anchor.attrib) - def yaml_anchor(self, any=False): - # type: (bool) -> Any + def yaml_anchor(self, any: bool = False) -> Any: if not hasattr(self, Anchor.attrib): return None if any or self.anchor.always_dump: return self.anchor return None - def yaml_set_anchor(self, value, always_dump=False): - # type: (Any, bool) -> None + def yaml_set_anchor(self, value: Any, always_dump: bool = False) -> None: self.anchor.value = value self.anchor.always_dump = always_dump @@ -59,8 +54,7 @@ style = '|' - def __new__(cls, value, anchor=None): - # type: (Text, Any) -> Any + def __new__(cls, value: Text, anchor: Any = None) -> Any: return ScalarString.__new__(cls, value, anchor=anchor) @@ -72,8 +66,7 @@ style = '>' - def __new__(cls, value, anchor=None): - # type: (Text, Any) -> Any + def __new__(cls, value: Text, anchor: Any = None) -> Any: return ScalarString.__new__(cls, value, anchor=anchor) @@ -82,8 +75,7 @@ style = "'" - def __new__(cls, value, anchor=None): - # type: (Text, Any) -> Any + def __new__(cls, value: Text, anchor: Any = None) -> Any: return ScalarString.__new__(cls, value, anchor=anchor) @@ -92,8 +84,7 @@ style = '"' - def __new__(cls, value, anchor=None): - # type: (Text, Any) -> Any + def __new__(cls, value: Text, anchor: Any = None) -> Any: return ScalarString.__new__(cls, value, anchor=anchor) @@ -102,18 +93,15 @@ style = '' - def __new__(cls, value, anchor=None): - # type: (Text, Any) -> Any + def __new__(cls, value: Text, anchor: Any = None) -> Any: return ScalarString.__new__(cls, value, anchor=anchor) -def preserve_literal(s): - # type: (Text) -> Text +def preserve_literal(s: Text) -> Text: return LiteralScalarString(s.replace('\r\n', '\n').replace('\r', '\n')) -def walk_tree(base, map=None): - # type: (Any, Any) -> None +def walk_tree(base: Any, map: Any = None) -> None: """ the routine here walks over a simple yaml tree (recursing in dict values and list items) and converts strings that @@ -133,7 +121,7 @@ if isinstance(base, MutableMapping): for k in base: - v = basek # type: Text + v: Text = basek if isinstance(v, str): for ch in map: if ch in v:
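Note: the scalarstring.py classes above carry the quoting or block style of a loaded string through a round trip, and walk_tree() can convert plain strings containing newlines into block scalars. A short, commonly documented example of forcing literal block style on output:

import sys
from ruamel.yaml import YAML
from ruamel.yaml.scalarstring import LiteralScalarString

yaml = YAML()
doc = {'script': LiteralScalarString('echo one\necho two\n')}
yaml.dump(doc, sys.stdout)
# script: |
#   echo one
#   echo two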
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/scanner.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/scanner.py
Changed
@@ -31,11 +31,10 @@ import inspect from ruamel.yaml.error import MarkedYAMLError, CommentMark # NOQA from ruamel.yaml.tokens import * # NOQA -from ruamel.yaml.compat import _F, check_anchorname_char, nprint, nprintf # NOQA +from ruamel.yaml.compat import check_anchorname_char, nprint, nprintf # NOQA -if False: # MYPY - from typing import Any, Dict, Optional, List, Union, Text # NOQA - from ruamel.yaml.compat import VersionType # NOQA +from typing import Any, Dict, Optional, List, Union, Text # NOQA +from ruamel.yaml.compat import VersionType # NOQA __all__ = 'Scanner', 'RoundTripScanner', 'ScannerError' @@ -45,8 +44,7 @@ _SPACE_TAB = ' \t' -def xprintf(*args, **kw): - # type: (Any, Any) -> Any +def xprintf(*args: Any, **kw: Any) -> Any: return nprintf(*args, **kw) pass @@ -58,8 +56,9 @@ class SimpleKey: # See below simple keys treatment. - def __init__(self, token_number, required, index, line, column, mark): - # type: (Any, Any, int, int, int, Any) -> None + def __init__( + self, token_number: Any, required: Any, index: int, line: int, column: int, mark: Any + ) -> None: self.token_number = token_number self.required = required self.index = index @@ -69,8 +68,7 @@ class Scanner: - def __init__(self, loader=None): - # type: (Any) -> None + def __init__(self, loader: Any = None) -> None: """Initialize the scanner.""" # It is assumed that Scanner and Reader will have a common descendant. # Reader do the dirty work of checking for BOM and converting the @@ -86,24 +84,22 @@ self.loader._scanner = self self.reset_scanner() self.first_time = False - self.yaml_version = None # type: Any + self.yaml_version: Any = None @property - def flow_level(self): - # type: () -> int + def flow_level(self) -> int: return len(self.flow_context) - def reset_scanner(self): - # type: () -> None + def reset_scanner(self) -> None: # Had we reached the end of the stream? self.done = False # flow_context is an expanding/shrinking list consisting of '{' and '' # for each unclosed flow context. If empty list that means block context - self.flow_context = # type: ListText + self.flow_context: ListText = # List of processed tokens that are not yet emitted. - self.tokens = # type: ListAny + self.tokens: ListAny = # Add the STREAM-START token. self.fetch_stream_start() @@ -115,7 +111,7 @@ self.indent = -1 # Past indentation levels. - self.indents = # type: Listint + self.indents: Listint = # Variables related to simple keys treatment. @@ -145,11 +141,10 @@ # (token_number, required, index, line, column, mark) # A simple key may start with ALIAS, ANCHOR, TAG, SCALAR(flow), # '', or '{' tokens. - self.possible_simple_keys = {} # type: DictAny, Any + self.possible_simple_keys: DictAny, Any = {} @property - def reader(self): - # type: () -> Any + def reader(self) -> Any: try: return self._scanner_reader # type: ignore except AttributeError: @@ -160,16 +155,14 @@ return self._scanner_reader @property - def scanner_processing_version(self): # prefix until un-composited - # type: () -> Any + def scanner_processing_version(self) -> Any: # prefix until un-composited if hasattr(self.loader, 'typ'): return self.loader.resolver.processing_version return self.loader.processing_version # Public methods. - def check_token(self, *choices): - # type: (Any) -> bool + def check_token(self, *choices: Any) -> bool: # Check if the next token is one of the given types. 
while self.need_more_tokens(): self.fetch_more_tokens() @@ -181,16 +174,14 @@ return True return False - def peek_token(self): - # type: () -> Any + def peek_token(self) -> Any: # Return the next token, but do not delete if from the queue. while self.need_more_tokens(): self.fetch_more_tokens() if len(self.tokens) > 0: return self.tokens0 - def get_token(self): - # type: () -> Any + def get_token(self) -> Any: # Return the next token. while self.need_more_tokens(): self.fetch_more_tokens() @@ -200,8 +191,7 @@ # Private methods. - def need_more_tokens(self): - # type: () -> bool + def need_more_tokens(self) -> bool: if self.done: return False if len(self.tokens) == 0: @@ -213,12 +203,10 @@ return True return False - def fetch_comment(self, comment): - # type: (Any) -> None + def fetch_comment(self, comment: Any) -> None: raise NotImplementedError - def fetch_more_tokens(self): - # type: () -> Any + def fetch_more_tokens(self) -> Any: # Eat whitespaces and comments until we reach the next token. comment = self.scan_to_next_token() if comment is not None: # never happens for base scanner @@ -323,14 +311,13 @@ raise ScannerError( 'while scanning for the next token', None, - _F('found character {ch!r} that cannot start any token', ch=ch), + f'found character {ch!r} that cannot start any token', self.reader.get_mark(), ) # Simple keys treatment. - def next_possible_simple_key(self): - # type: () -> Any + def next_possible_simple_key(self) -> Any: # Return the number of the nearest possible simple key. Actually we # don't need to loop through the whole dictionary. We may replace it # with the following code: @@ -345,8 +332,7 @@ min_token_number = key.token_number return min_token_number - def stale_possible_simple_keys(self): - # type: () -> None + def stale_possible_simple_keys(self) -> None: # Remove entries that are no longer possible simple keys. According to # the YAML specification, simple keys # - should be limited to a single line, @@ -365,8 +351,7 @@ ) del self.possible_simple_keyslevel - def save_possible_simple_key(self): - # type: () -> None + def save_possible_simple_key(self) -> None: # The next token may start a simple key. We check if it's possible # and save its position. This function is called for # ALIAS, ANCHOR, TAG, SCALAR(flow), '', and '{'. @@ -389,8 +374,7 @@ ) self.possible_simple_keysself.flow_level = key - def remove_possible_simple_key(self): - # type: () -> None + def remove_possible_simple_key(self) -> None: # Remove the saved possible key position at the current flow level. if self.flow_level in self.possible_simple_keys: key = self.possible_simple_keysself.flow_level @@ -407,8 +391,7 @@ # Indentation functions. - def unwind_indent(self, column): - # type: (Any) -> None + def unwind_indent(self, column: Any) -> None: # In flow context, tokens should respect indentation. # Actually the condition should be `self.indent >= column` according to # the spec. But this condition will prohibit intuitively correct @@ -432,8 +415,7 @@ self.indent = self.indents.pop() self.tokens.append(BlockEndToken(mark, mark)) - def add_indent(self, column): - # type: (int) -> bool + def add_indent(self, column: int) -> bool: # Check if we need to increase indentation. if self.indent < column: self.indents.append(self.indent) @@ -443,8 +425,7 @@ # Fetchers. - def fetch_stream_start(self): - # type: () -> None + def fetch_stream_start(self) -> None: # We always add STREAM-START as the first token and STREAM-END as the # last token. # Read the token. @@ -452,8 +433,7 @@ # Add STREAM-START. 
self.tokens.append(StreamStartToken(mark, mark, encoding=self.reader.encoding)) - def fetch_stream_end(self): - # type: () -> None + def fetch_stream_end(self) -> None: # Set the current intendation to -1. self.unwind_indent(-1) # Reset simple keys. @@ -467,8 +447,7 @@ # The steam is finished. self.done = True - def fetch_directive(self): - # type: () -> None + def fetch_directive(self) -> None: # Set the current intendation to -1. self.unwind_indent(-1) @@ -479,16 +458,13 @@ # Scan and add DIRECTIVE. self.tokens.append(self.scan_directive()) - def fetch_document_start(self): - # type: () -> None + def fetch_document_start(self) -> None: self.fetch_document_indicator(DocumentStartToken) - def fetch_document_end(self): - # type: () -> None + def fetch_document_end(self) -> None: self.fetch_document_indicator(DocumentEndToken) - def fetch_document_indicator(self, TokenClass): - # type: (Any) -> None + def fetch_document_indicator(self, TokenClass: Any) -> None: # Set the current intendation to -1. self.unwind_indent(-1) @@ -503,16 +479,13 @@ end_mark = self.reader.get_mark() self.tokens.append(TokenClass(start_mark, end_mark)) - def fetch_flow_sequence_start(self): - # type: () -> None + def fetch_flow_sequence_start(self) -> None: self.fetch_flow_collection_start(FlowSequenceStartToken, to_push='') - def fetch_flow_mapping_start(self): - # type: () -> None + def fetch_flow_mapping_start(self) -> None: self.fetch_flow_collection_start(FlowMappingStartToken, to_push='{') - def fetch_flow_collection_start(self, TokenClass, to_push): - # type: (Any, Text) -> None + def fetch_flow_collection_start(self, TokenClass: Any, to_push: Text) -> None: # '' and '{' may start a simple key. self.save_possible_simple_key() # Increase the flow level. @@ -525,16 +498,13 @@ end_mark = self.reader.get_mark() self.tokens.append(TokenClass(start_mark, end_mark)) - def fetch_flow_sequence_end(self): - # type: () -> None + def fetch_flow_sequence_end(self) -> None: self.fetch_flow_collection_end(FlowSequenceEndToken) - def fetch_flow_mapping_end(self): - # type: () -> None + def fetch_flow_mapping_end(self) -> None: self.fetch_flow_collection_end(FlowMappingEndToken) - def fetch_flow_collection_end(self, TokenClass): - # type: (Any) -> None + def fetch_flow_collection_end(self, TokenClass: Any) -> None: # Reset possible simple key on the current level. self.remove_possible_simple_key() # Decrease the flow level. @@ -552,8 +522,7 @@ end_mark = self.reader.get_mark() self.tokens.append(TokenClass(start_mark, end_mark)) - def fetch_flow_entry(self): - # type: () -> None + def fetch_flow_entry(self) -> None: # Simple keys are allowed after ','. self.allow_simple_key = True # Reset possible simple key on the current level. @@ -564,8 +533,7 @@ end_mark = self.reader.get_mark() self.tokens.append(FlowEntryToken(start_mark, end_mark)) - def fetch_block_entry(self): - # type: () -> None + def fetch_block_entry(self) -> None: # Block context needs additional checks. if not self.flow_level: # Are we allowed to start a new entry? @@ -592,8 +560,7 @@ end_mark = self.reader.get_mark() self.tokens.append(BlockEntryToken(start_mark, end_mark)) - def fetch_key(self): - # type: () -> None + def fetch_key(self) -> None: # Block context needs additional checks. if not self.flow_level: @@ -620,8 +587,7 @@ end_mark = self.reader.get_mark() self.tokens.append(KeyToken(start_mark, end_mark)) - def fetch_value(self): - # type: () -> None + def fetch_value(self) -> None: # Do we determine a simple key? 
if self.flow_level in self.possible_simple_keys: # Add KEY. @@ -681,8 +647,7 @@ end_mark = self.reader.get_mark() self.tokens.append(ValueToken(start_mark, end_mark)) - def fetch_alias(self): - # type: () -> None + def fetch_alias(self) -> None: # ALIAS could be a simple key. self.save_possible_simple_key() # No simple keys after ALIAS. @@ -690,8 +655,7 @@ # Scan and add ALIAS. self.tokens.append(self.scan_anchor(AliasToken)) - def fetch_anchor(self): - # type: () -> None + def fetch_anchor(self) -> None: # ANCHOR could start a simple key. self.save_possible_simple_key() # No simple keys after ANCHOR. @@ -699,8 +663,7 @@ # Scan and add ANCHOR. self.tokens.append(self.scan_anchor(AnchorToken)) - def fetch_tag(self): - # type: () -> None + def fetch_tag(self) -> None: # TAG could start a simple key. self.save_possible_simple_key() # No simple keys after TAG. @@ -708,16 +671,13 @@ # Scan and add TAG. self.tokens.append(self.scan_tag()) - def fetch_literal(self): - # type: () -> None + def fetch_literal(self) -> None: self.fetch_block_scalar(style='|') - def fetch_folded(self): - # type: () -> None + def fetch_folded(self) -> None: self.fetch_block_scalar(style='>') - def fetch_block_scalar(self, style): - # type: (Any) -> None + def fetch_block_scalar(self, style: Any) -> None: # A simple key may follow a block scalar. self.allow_simple_key = True # Reset possible simple key on the current level. @@ -725,16 +685,13 @@ # Scan and add SCALAR. self.tokens.append(self.scan_block_scalar(style)) - def fetch_single(self): - # type: () -> None + def fetch_single(self) -> None: self.fetch_flow_scalar(style="'") - def fetch_double(self): - # type: () -> None + def fetch_double(self) -> None: self.fetch_flow_scalar(style='"') - def fetch_flow_scalar(self, style): - # type: (Any) -> None + def fetch_flow_scalar(self, style: Any) -> None: # A flow scalar could be a simple key. self.save_possible_simple_key() # No simple keys after flow scalars. @@ -742,8 +699,7 @@ # Scan and add SCALAR. self.tokens.append(self.scan_flow_scalar(style)) - def fetch_plain(self): - # type: () -> None + def fetch_plain(self) -> None: # A plain scalar could be a simple key. self.save_possible_simple_key() # No simple keys after plain scalars. But note that `scan_plain` will @@ -755,45 +711,39 @@ # Checkers. - def check_directive(self): - # type: () -> Any + def check_directive(self) -> Any: # DIRECTIVE: ^ '%' ... # The '%' indicator is already checked. if self.reader.column == 0: return True return None - def check_document_start(self): - # type: () -> Any + def check_document_start(self) -> Any: # DOCUMENT-START: ^ '---' (' '|'\n') if self.reader.column == 0: if self.reader.prefix(3) == '---' and self.reader.peek(3) in _THE_END_SPACE_TAB: return True return None - def check_document_end(self): - # type: () -> Any + def check_document_end(self) -> Any: # DOCUMENT-END: ^ '...' (' '|'\n') if self.reader.column == 0: if self.reader.prefix(3) == '...' and self.reader.peek(3) in _THE_END_SPACE_TAB: return True return None - def check_block_entry(self): - # type: () -> Any + def check_block_entry(self) -> Any: # BLOCK-ENTRY: '-' (' '|'\n') return self.reader.peek(1) in _THE_END_SPACE_TAB - def check_key(self): - # type: () -> Any + def check_key(self) -> Any: # KEY(flow context): '?' if bool(self.flow_level): return True # KEY(block context): '?' 
(' '|'\n') return self.reader.peek(1) in _THE_END_SPACE_TAB - def check_value(self): - # type: () -> Any + def check_value(self) -> Any: # VALUE(flow context): ':' if self.scanner_processing_version == (1, 1): if bool(self.flow_level): @@ -811,8 +761,7 @@ # VALUE(block context): ':' (' '|'\n') return self.reader.peek(1) in _THE_END_SPACE_TAB - def check_plain(self): - # type: () -> Any + def check_plain(self) -> Any: # A plain scalar may start with any non-space character except: # '-', '?', ':', ',', '', '', '{', '}', # '#', '&', '*', '!', '|', '>', '\'', '\"', @@ -848,8 +797,7 @@ # Scanners. - def scan_to_next_token(self): - # type: () -> Any + def scan_to_next_token(self) -> Any: # We ignore spaces, line breaks and comments. # If we find a line break in the block context, we set the flag # `allow_simple_key` on. @@ -874,8 +822,9 @@ srf() found = False _the_end = _THE_END + white_space = ' \t' if self.flow_level > 0 else ' ' while not found: - while srp() == ' ': + while srp() in white_space: srf() if srp() == '#': while srp() not in _the_end: @@ -887,8 +836,7 @@ found = True return None - def scan_directive(self): - # type: () -> Any + def scan_directive(self) -> Any: # See the specification for details. srp = self.reader.peek srf = self.reader.forward @@ -909,8 +857,7 @@ self.scan_directive_ignored_line(start_mark) return DirectiveToken(name, value, start_mark, end_mark) - def scan_directive_name(self, start_mark): - # type: (Any) -> Any + def scan_directive_name(self, start_mark: Any) -> Any: # See the specification for details. length = 0 srp = self.reader.peek @@ -922,7 +869,7 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F('expected alphabetic or numeric character, but found {ch!r}', ch=ch), + f'expected alphabetic or numeric character, but found {ch!r}', self.reader.get_mark(), ) value = self.reader.prefix(length) @@ -932,13 +879,12 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F('expected alphabetic or numeric character, but found {ch!r}', ch=ch), + f'expected alphabetic or numeric character, but found {ch!r}', self.reader.get_mark(), ) return value - def scan_yaml_directive_value(self, start_mark): - # type: (Any) -> Any + def scan_yaml_directive_value(self, start_mark: Any) -> Any: # See the specification for details. srp = self.reader.peek srf = self.reader.forward @@ -949,7 +895,7 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F("expected a digit or '.', but found {srp_call!r}", srp_call=srp()), + f"expected a digit or '.', but found {srp()!r}", self.reader.get_mark(), ) srf() @@ -958,14 +904,13 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F("expected a digit or '.', but found {srp_call!r}", srp_call=srp()), + f"expected a digit or '.', but found {srp()!r}", self.reader.get_mark(), ) self.yaml_version = (major, minor) return self.yaml_version - def scan_yaml_directive_number(self, start_mark): - # type: (Any) -> Any + def scan_yaml_directive_number(self, start_mark: Any) -> Any: # See the specification for details. srp = self.reader.peek srf = self.reader.forward @@ -974,7 +919,7 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F('expected a digit, but found {ch!r}', ch=ch), + f'expected a digit, but found {ch!r}', self.reader.get_mark(), ) length = 0 @@ -984,8 +929,7 @@ srf(length) return value - def scan_tag_directive_value(self, start_mark): - # type: (Any) -> Any + def scan_tag_directive_value(self, start_mark: Any) -> Any: # See the specification for details. 
srp = self.reader.peek srf = self.reader.forward @@ -997,8 +941,7 @@ prefix = self.scan_tag_directive_prefix(start_mark) return (handle, prefix) - def scan_tag_directive_handle(self, start_mark): - # type: (Any) -> Any + def scan_tag_directive_handle(self, start_mark: Any) -> Any: # See the specification for details. value = self.scan_tag_handle('directive', start_mark) ch = self.reader.peek() @@ -1006,13 +949,12 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F("expected ' ', but found {ch!r}", ch=ch), + f"expected ' ', but found {ch!r}", self.reader.get_mark(), ) return value - def scan_tag_directive_prefix(self, start_mark): - # type: (Any) -> Any + def scan_tag_directive_prefix(self, start_mark: Any) -> Any: # See the specification for details. value = self.scan_tag_uri('directive', start_mark) ch = self.reader.peek() @@ -1020,13 +962,12 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F("expected ' ', but found {ch!r}", ch=ch), + f"expected ' ', but found {ch!r}", self.reader.get_mark(), ) return value - def scan_directive_ignored_line(self, start_mark): - # type: (Any) -> None + def scan_directive_ignored_line(self, start_mark: Any) -> None: # See the specification for details. srp = self.reader.peek srf = self.reader.forward @@ -1040,13 +981,12 @@ raise ScannerError( 'while scanning a directive', start_mark, - _F('expected a comment or a line break, but found {ch!r}', ch=ch), + f'expected a comment or a line break, but found {ch!r}', self.reader.get_mark(), ) self.scan_line_break() - def scan_anchor(self, TokenClass): - # type: (Any) -> Any + def scan_anchor(self, TokenClass: Any) -> Any: # The specification does not restrict characters for anchors and # aliases. This may lead to problems, for instance, the document: # *alias, value @@ -1072,9 +1012,9 @@ ch = srp(length) if not length: raise ScannerError( - _F('while scanning an {name!s}', name=name), + f'while scanning an {name!s}', start_mark, - _F('expected alphabetic or numeric character, but found {ch!r}', ch=ch), + f'expected alphabetic or numeric character, but found {ch!r}', self.reader.get_mark(), ) value = self.reader.prefix(length) @@ -1084,20 +1024,26 @@ # assert ch1 == ch if ch not in '\0 \t\r\n\x85\u2028\u2029?:,{}%@`': raise ScannerError( - _F('while scanning an {name!s}', name=name), + f'while scanning an {name!s}', start_mark, - _F('expected alphabetic or numeric character, but found {ch!r}', ch=ch), + f'expected alphabetic or numeric character, but found {ch!r}', self.reader.get_mark(), ) end_mark = self.reader.get_mark() return TokenClass(value, start_mark, end_mark) - def scan_tag(self): - # type: () -> Any + def scan_tag(self) -> Any: # See the specification for details. srp = self.reader.peek start_mark = self.reader.get_mark() ch = srp(1) + short_handle = '!' + if ch == '!': + short_handle = '!!' + self.reader.forward() + srp = self.reader.peek + ch = srp(1) + if ch == '<': handle = None self.reader.forward(2) @@ -1106,13 +1052,13 @@ raise ScannerError( 'while parsing a tag', start_mark, - _F("expected '>', but found {srp_call!r}", srp_call=srp()), + f"expected '>' but found {srp()!r}", self.reader.get_mark(), ) self.reader.forward() elif ch in _THE_END_SPACE_TAB: handle = None - suffix = '!' + suffix = short_handle self.reader.forward() else: length = 1 @@ -1123,11 +1069,11 @@ break length += 1 ch = srp(length) - handle = '!' + handle = short_handle if use_handle: handle = self.scan_tag_handle('tag', start_mark) else: - handle = '!' 
+ handle = short_handle self.reader.forward() suffix = self.scan_tag_uri('tag', start_mark) ch = srp() @@ -1135,15 +1081,14 @@ raise ScannerError( 'while scanning a tag', start_mark, - _F("expected ' ', but found {ch!r}", ch=ch), + f"expected ' ', but found {ch!r}", self.reader.get_mark(), ) value = (handle, suffix) end_mark = self.reader.get_mark() return TagToken(value, start_mark, end_mark) - def scan_block_scalar(self, style, rt=False): - # type: (Any, Optionalbool) -> Any + def scan_block_scalar(self, style: Any, rt: Optionalbool = False) -> Any: # See the specification for details. srp = self.reader.peek if style == '>': @@ -1151,7 +1096,7 @@ else: folded = False - chunks = # type: ListAny + chunks: ListAny = start_mark = self.reader.get_mark() # Scan the header. @@ -1227,7 +1172,7 @@ # Process trailing line breaks. The 'chomping' setting determines # whether they are included in the value. - trailing = # type: ListAny + trailing: ListAny = if chomping in None, True: chunks.append(line_break) if chomping is True: @@ -1266,8 +1211,7 @@ token.add_post_comment(comment) return token - def scan_block_scalar_indicators(self, start_mark): - # type: (Any) -> Any + def scan_block_scalar_indicators(self, start_mark: Any) -> Any: # See the specification for details. srp = self.reader.peek chomping = None @@ -1312,13 +1256,12 @@ raise ScannerError( 'while scanning a block scalar', start_mark, - _F('expected chomping or indentation indicators, but found {ch!r}', ch=ch), + f'expected chomping or indentation indicators, but found {ch!r}', self.reader.get_mark(), ) return chomping, increment - def scan_block_scalar_ignored_line(self, start_mark): - # type: (Any) -> Any + def scan_block_scalar_ignored_line(self, start_mark: Any) -> Any: # See the specification for details. srp = self.reader.peek srf = self.reader.forward @@ -1337,32 +1280,39 @@ raise ScannerError( 'while scanning a block scalar', start_mark, - _F('expected a comment or a line break, but found {ch!r}', ch=ch), + f'expected a comment or a line break, but found {ch!r}', self.reader.get_mark(), ) self.scan_line_break() return comment - def scan_block_scalar_indentation(self): - # type: () -> Any + def scan_block_scalar_indentation(self) -> Any: # See the specification for details. srp = self.reader.peek srf = self.reader.forward chunks = + first_indent = -1 max_indent = 0 end_mark = self.reader.get_mark() while srp() in ' \r\n\x85\u2028\u2029': if srp() != ' ': + if first_indent < 0: + first_indent = self.reader.column chunks.append(self.scan_line_break()) end_mark = self.reader.get_mark() else: srf() if self.reader.column > max_indent: max_indent = self.reader.column + if first_indent > 0 and max_indent > first_indent: + start_mark = self.reader.get_mark() + raise ScannerError( + 'more indented follow up line than first in a block scalar', + start_mark, + ) return chunks, max_indent, end_mark - def scan_block_scalar_breaks(self, indent): - # type: (int) -> Any + def scan_block_scalar_breaks(self, indent: int) -> Any: # See the specification for details. chunks = srp = self.reader.peek @@ -1377,8 +1327,7 @@ srf() return chunks, end_mark - def scan_flow_scalar(self, style): - # type: (Any) -> Any + def scan_flow_scalar(self, style: Any) -> Any: # See the specification for details. # Note that we loose indentation rules for quoted scalars. 
Quoted # scalars don't need to adhere indentation because " and ' clearly @@ -1390,7 +1339,7 @@ else: double = False srp = self.reader.peek - chunks = # type: ListAny + chunks: ListAny = start_mark = self.reader.get_mark() quote = srp() self.reader.forward() @@ -1425,10 +1374,9 @@ ESCAPE_CODES = {'x': 2, 'u': 4, 'U': 8} - def scan_flow_scalar_non_spaces(self, double, start_mark): - # type: (Any, Any) -> Any + def scan_flow_scalar_non_spaces(self, double: Any, start_mark: Any) -> Any: # See the specification for details. - chunks = # type: ListAny + chunks: ListAny = srp = self.reader.peek srf = self.reader.forward while True: @@ -1459,12 +1407,8 @@ raise ScannerError( 'while scanning a double-quoted scalar', start_mark, - _F( - 'expected escape sequence of {length:d} hexdecimal ' - 'numbers, but found {srp_call!r}', - length=length, - srp_call=srp(k), - ), + f'expected escape sequence of {length:d} ' + f'hexdecimal numbers, but found {srp(k)!r}', self.reader.get_mark(), ) code = int(self.reader.prefix(length), 16) @@ -1477,14 +1421,13 @@ raise ScannerError( 'while scanning a double-quoted scalar', start_mark, - _F('found unknown escape character {ch!r}', ch=ch), + f'found unknown escape character {ch!r}', self.reader.get_mark(), ) else: return chunks - def scan_flow_scalar_spaces(self, double, start_mark): - # type: (Any, Any) -> Any + def scan_flow_scalar_spaces(self, double: Any, start_mark: Any) -> Any: # See the specification for details. srp = self.reader.peek chunks = @@ -1513,10 +1456,9 @@ chunks.append(whitespaces) return chunks - def scan_flow_scalar_breaks(self, double, start_mark): - # type: (Any, Any) -> Any + def scan_flow_scalar_breaks(self, double: Any, start_mark: Any) -> Any: # See the specification for details. - chunks = # type: ListAny + chunks: ListAny = srp = self.reader.peek srf = self.reader.forward while True: @@ -1537,8 +1479,7 @@ else: return chunks - def scan_plain(self): - # type: () -> Any + def scan_plain(self) -> Any: # See the specification for details. # We add an additional restriction for the flow context: # plain scalars in the flow context cannot contain ',', ': ' and '?'. @@ -1546,7 +1487,7 @@ # Indentation rules are loosed for the flow context. srp = self.reader.peek srf = self.reader.forward - chunks = # type: ListAny + chunks: ListAny = start_mark = self.reader.get_mark() end_mark = start_mark indent = self.indent + 1 @@ -1554,14 +1495,16 @@ # document separators at the beginning of the line. # if indent == 0: # indent = 1 - spaces = # type: ListAny + spaces: ListAny = while True: length = 0 if srp() == '#': break while True: ch = srp(length) - if ch == ':' and srp(length + 1) not in _THE_END_SPACE_TAB: + if False and ch == ':' and srp(length + 1) == ',': + break + elif ch == ':' and srp(length + 1) not in _THE_END_SPACE_TAB: pass elif ch == '?' and self.scanner_processing_version != (1, 1): pass @@ -1626,8 +1569,7 @@ return token - def scan_plain_spaces(self, indent, start_mark): - # type: (Any, Any) -> Any + def scan_plain_spaces(self, indent: Any, start_mark: Any) -> Any: # See the specification for details. # The specification is really confusing about tabs in plain scalars. # We just forbid them completely. Do not use tabs in YAML! @@ -1664,8 +1606,7 @@ chunks.append(whitespaces) return chunks - def scan_tag_handle(self, name, start_mark): - # type: (Any, Any) -> Any + def scan_tag_handle(self, name: Any, start_mark: Any) -> Any: # See the specification for details. 
# For some strange reasons, the specification does not allow '_' in # tag handles. I have allowed it anyway. @@ -1673,9 +1614,9 @@ ch = srp() if ch != '!': raise ScannerError( - _F('while scanning an {name!s}', name=name), + f'while scanning an {name!s}', start_mark, - _F("expected '!', but found {ch!r}", ch=ch), + f"expected '!', but found {ch!r}", self.reader.get_mark(), ) length = 1 @@ -1687,9 +1628,9 @@ if ch != '!': self.reader.forward(length) raise ScannerError( - _F('while scanning an {name!s}', name=name), + f'while scanning an {name!s}', start_mark, - _F("expected '!', but found {ch!r}", ch=ch), + f"expected '!' but found {ch!r}", self.reader.get_mark(), ) length += 1 @@ -1697,8 +1638,7 @@ self.reader.forward(length) return value - def scan_tag_uri(self, name, start_mark): - # type: (Any, Any) -> Any + def scan_tag_uri(self, name: Any, start_mark: Any) -> Any: # See the specification for details. # Note: we do not check if URI is well-formed. srp = self.reader.peek @@ -1726,32 +1666,28 @@ length = 0 if not chunks: raise ScannerError( - _F('while parsing an {name!s}', name=name), + f'while parsing an {name!s}', start_mark, - _F('expected URI, but found {ch!r}', ch=ch), + f'expected URI, but found {ch!r}', self.reader.get_mark(), ) return "".join(chunks) - def scan_uri_escapes(self, name, start_mark): - # type: (Any, Any) -> Any + def scan_uri_escapes(self, name: Any, start_mark: Any) -> Any: # See the specification for details. srp = self.reader.peek srf = self.reader.forward - code_bytes = # type: ListAny + code_bytes: ListAny = mark = self.reader.get_mark() while srp() == '%': srf() for k in range(2): if srp(k) not in '0123456789ABCDEFabcdef': raise ScannerError( - _F('while scanning an {name!s}', name=name), + f'while scanning an {name!s}', start_mark, - _F( - 'expected URI escape sequence of 2 hexdecimal numbers,' - ' but found {srp_call!r}', - srp_call=srp(k), - ), + f'expected URI escape sequence of 2 hexdecimal numbers, ' + f'but found {srp(k)!r}', self.reader.get_mark(), ) code_bytes.append(int(self.reader.prefix(2), 16)) @@ -1759,13 +1695,10 @@ try: value = bytes(code_bytes).decode('utf-8') except UnicodeDecodeError as exc: - raise ScannerError( - _F('while scanning an {name!s}', name=name), start_mark, str(exc), mark - ) + raise ScannerError(f'while scanning an {name!s}', start_mark, str(exc), mark) return value - def scan_line_break(self): - # type: () -> Any + def scan_line_break(self) -> Any: # Transforms: # '\r\n' : '\n' # '\r' : '\n' @@ -1788,8 +1721,7 @@ class RoundTripScanner(Scanner): - def check_token(self, *choices): - # type: (Any) -> bool + def check_token(self, *choices: Any) -> bool: # Check if the next token is one of the given types. while self.need_more_tokens(): self.fetch_more_tokens() @@ -1802,8 +1734,7 @@ return True return False - def peek_token(self): - # type: () -> Any + def peek_token(self) -> Any: # Return the next token, but do not delete if from the queue. while self.need_more_tokens(): self.fetch_more_tokens() @@ -1812,10 +1743,9 @@ return self.tokens0 return None - def _gather_comments(self): - # type: () -> Any + def _gather_comments(self) -> Any: """combine multiple comment lines and assign to next non-comment-token""" - comments = # type: ListAny + comments: ListAny = if not self.tokens: return comments if isinstance(self.tokens0, CommentToken): @@ -1837,8 +1767,7 @@ if not self.done and len(self.tokens) < 2: self.fetch_more_tokens() - def get_token(self): - # type: () -> Any + def get_token(self) -> Any: # Return the next token. 
while self.need_more_tokens(): self.fetch_more_tokens() @@ -1891,8 +1820,7 @@ return self.tokens.pop(0) return None - def fetch_comment(self, comment): - # type: (Any) -> None + def fetch_comment(self, comment: Any) -> None: value, start_mark, end_mark = comment while value and value-1 == ' ': # empty line within indented key context @@ -1902,8 +1830,7 @@ # scanner - def scan_to_next_token(self): - # type: () -> Any + def scan_to_next_token(self) -> Any: # We ignore spaces, line breaks and comments. # If we find a line break in the block context, we set the flag # `allow_simple_key` on. @@ -1922,14 +1849,14 @@ # We also need to add the check for `allow_simple_keys == True` to # `unwind_indent` before issuing BLOCK-END. # Scanners for block, flow, and plain scalars need to be modified. - srp = self.reader.peek srf = self.reader.forward if self.reader.index == 0 and srp() == '\uFEFF': srf() found = False + white_space = ' \t' if self.flow_level > 0 else ' ' while not found: - while srp() == ' ': + while srp() in white_space: srf() ch = srp() if ch == '#': @@ -1946,7 +1873,7 @@ break comment += ch srf() - # gather any blank lines following the comment too + # gather any blank lines following the comment ch = self.scan_line_break() while len(ch) > 0: comment += ch @@ -1975,8 +1902,7 @@ found = True return None - def scan_line_break(self, empty_line=False): - # type: (bool) -> Text + def scan_line_break(self, empty_line: bool = False) -> Text: # Transforms: # '\r\n' : '\n' # '\r' : '\n' @@ -1985,7 +1911,7 @@ # '\u2028' : '\u2028' # '\u2029 : '\u2029' # default : '' - ch = self.reader.peek() # type: Text + ch: Text = self.reader.peek() if ch in '\r\n\x85': if self.reader.prefix(2) == '\r\n': self.reader.forward(2) @@ -2000,10 +1926,40 @@ return ch return "" - def scan_block_scalar(self, style, rt=True): - # type: (Any, Optionalbool) -> Any + def scan_block_scalar(self, style: Any, rt: Optionalbool = True) -> Any: return Scanner.scan_block_scalar(self, style, rt=rt) + def scan_uri_escapes(self, name: Any, start_mark: Any) -> Any: + """ + The roundtripscanner doesn't do URI escaping + """ + # See the specification for details. 
+ srp = self.reader.peek + srf = self.reader.forward + code_bytes: ListAny = + chunk = '' + mark = self.reader.get_mark() + while srp() == '%': + chunk += '%' + srf() + for k in range(2): + if srp(k) not in '0123456789ABCDEFabcdef': + raise ScannerError( + f'while scanning an {name!s}', + start_mark, + f'expected URI escape sequence of 2 hexdecimal numbers, ' + f'but found {srp(k)!r}', + self.reader.get_mark(), + ) + code_bytes.append(int(self.reader.prefix(2), 16)) + chunk += self.reader.prefix(2) + srf(2) + try: + _ = bytes(code_bytes).decode('utf-8') + except UnicodeDecodeError as exc: + raise ScannerError(f'while scanning an {name!s}', start_mark, str(exc), mark) + return chunk + # commenthandling 2021, differentiatiation not needed @@ -2016,8 +1972,7 @@ class CommentBase: __slots__ = ('value', 'line', 'column', 'used', 'function', 'fline', 'ufun', 'uline') - def __init__(self, value, line, column): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any, line: Any, column: Any) -> None: self.value = value self.line = line self.column = column @@ -2028,73 +1983,57 @@ self.ufun = None self.uline = None - def set_used(self, v='+'): - # type: (Any) -> None + def set_used(self, v: Any = '+') -> None: self.used = v info = inspect.getframeinfo(inspect.stack()10) self.ufun = info.function # type: ignore self.uline = info.lineno # type: ignore - def set_assigned(self): - # type: () -> None + def set_assigned(self) -> None: self.used = '|' - def __str__(self): - # type: () -> str - return _F('{value}', value=self.value) # type: ignore - - def __repr__(self): - # type: () -> str - return _F('{value!r}', value=self.value) # type: ignore - - def info(self): - # type: () -> str - return _F( # type: ignore - '{name}{used} {line:2}:{column:<2} "{value:40s} {function}:{fline} {ufun}:{uline}', - name=self.name, # type: ignore - line=self.line, - column=self.column, - value=self.value + '"', - used=self.used, - function=self.function, - fline=self.fline, - ufun=self.ufun, - uline=self.uline, + def __str__(self) -> str: + return f'{self.value}' + + def __repr__(self) -> str: + return f'{self.value!r}' + + def info(self) -> str: + xv = self.value + '"' + name = self.name # type: ignore + return ( + f'{name}{self.used} {self.line:2}:{self.column:<2} "{xv:40s} ' + f'{self.function}:{self.fline} {self.ufun}:{self.uline}' ) class EOLComment(CommentBase): name = 'EOLC' - def __init__(self, value, line, column): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any, line: Any, column: Any) -> None: super().__init__(value, line, column) class FullLineComment(CommentBase): name = 'FULL' - def __init__(self, value, line, column): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any, line: Any, column: Any) -> None: super().__init__(value, line, column) class BlankLineComment(CommentBase): name = 'BLNK' - def __init__(self, value, line, column): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any, line: Any, column: Any) -> None: super().__init__(value, line, column) class ScannedComments: - def __init__(self): - # type: (Any) -> None + def __init__(self: Any) -> None: self.comments = {} # type: ignore self.unused = # type: ignore - def add_eol_comment(self, comment, column, line): - # type: (Any, Any, Any) -> Any + def add_eol_comment(self, comment: Any, column: Any, line: Any) -> Any: # info = inspect.getframeinfo(inspect.stack()10) if comment.count('\n') == 1: assert comment-1 == '\n' @@ -2104,8 +2043,7 @@ self.unused.append(line) return retval - def 
add_blank_line(self, comment, column, line): - # type: (Any, Any, Any) -> Any + def add_blank_line(self, comment: Any, column: Any, line: Any) -> Any: # info = inspect.getframeinfo(inspect.stack()10) assert comment.count('\n') == 1 and comment-1 == '\n' assert line not in self.comments @@ -2113,8 +2051,7 @@ self.unused.append(line) return retval - def add_full_line_comment(self, comment, column, line): - # type: (Any, Any, Any) -> Any + def add_full_line_comment(self, comment: Any, column: Any, line: Any) -> Any: # info = inspect.getframeinfo(inspect.stack()10) assert comment.count('\n') == 1 and comment-1 == '\n' # if comment.startswith('# C12'): @@ -2124,30 +2061,21 @@ self.unused.append(line) return retval - def __getitem__(self, idx): - # type: (Any) -> Any + def __getitem__(self, idx: Any) -> Any: return self.commentsidx - def __str__(self): - # type: () -> Any + def __str__(self) -> Any: return ( 'ParsedComments:\n ' - + '\n '.join( - ( - _F('{lineno:2} {x}', lineno=lineno, x=x.info()) - for lineno, x in self.comments.items() - ) - ) + + '\n '.join((f'{lineno:2} {x.info()}' for lineno, x in self.comments.items())) + '\n' ) - def last(self): - # type: () -> str + def last(self) -> str: lineno, x = list(self.comments.items())-1 - return _F('{lineno:2} {x}\n', lineno=lineno, x=x.info()) # type: ignore + return f'{lineno:2} {x.info()}\n' - def any_unprocessed(self): - # type: () -> bool + def any_unprocessed(self) -> bool: # ToDo: might want to differentiate based on lineno return len(self.unused) > 0 # for lno, comment in reversed(self.comments.items()): @@ -2155,8 +2083,7 @@ # return True # return False - def unprocessed(self, use=False): - # type: (Any) -> Any + def unprocessed(self, use: Any = False) -> Any: while len(self.unused) > 0: first = self.unused.pop(0) if use else self.unused0 info = inspect.getframeinfo(inspect.stack()10) @@ -2165,8 +2092,7 @@ if use: self.commentsfirst.set_used() - def assign_pre(self, token): - # type: (Any) -> Any + def assign_pre(self, token: Any) -> Any: token_line = token.start_mark.line info = inspect.getframeinfo(inspect.stack()10) xprintf('assign_pre', token_line, self.unused, info.function, info.lineno) @@ -2179,8 +2105,7 @@ token.add_comment_pre(first) return gobbled - def assign_eol(self, tokens): - # type: (Any) -> Any + def assign_eol(self, tokens: Any) -> Any: try: comment_line = self.unused0 except IndexError: @@ -2235,8 +2160,7 @@ sys.exit(0) - def assign_post(self, token): - # type: (Any) -> Any + def assign_post(self, token: Any) -> Any: token_line = token.start_mark.line info = inspect.getframeinfo(inspect.stack()10) xprintf('assign_post', token_line, self.unused, info.function, info.lineno) @@ -2249,28 +2173,21 @@ token.add_comment_post(first) return gobbled - def str_unprocessed(self): - # type: () -> Any + def str_unprocessed(self) -> Any: return ''.join( - ( - _F(' {ind:2} {x}\n', ind=ind, x=x.info()) - for ind, x in self.comments.items() - if x.used == ' ' - ) + (f' {ind:2} {x.info()}\n' for ind, x in self.comments.items() if x.used == ' ') ) class RoundTripScannerSC(Scanner): # RoundTripScanner Split Comments - def __init__(self, *arg, **kw): - # type: (Any, Any) -> None + def __init__(self, *arg: Any, **kw: Any) -> None: super().__init__(*arg, **kw) assert self.loader is not None # comments isinitialised on .need_more_tokens and persist on # self.loader.parsed_comments self.comments = None - def get_token(self): - # type: () -> Any + def get_token(self) -> Any: # Return the next token. 
while self.need_more_tokens(): self.fetch_more_tokens() @@ -2282,8 +2199,7 @@ self.tokens_taken += 1 return self.tokens.pop(0) - def need_more_tokens(self): - # type: () -> bool + def need_more_tokens(self) -> bool: if self.comments is None: self.loader.parsed_comments = self.comments = ScannedComments() # type: ignore if self.done: @@ -2309,8 +2225,7 @@ self.comments.assign_eol(self.tokens) # type: ignore return False - def scan_to_next_token(self): - # type: () -> None + def scan_to_next_token(self) -> None: srp = self.reader.peek srf = self.reader.forward if self.reader.index == 0 and srp() == '\uFEFF': @@ -2373,8 +2288,7 @@ found = True return None - def scan_empty_or_full_line_comments(self): - # type: () -> None + def scan_empty_or_full_line_comments(self) -> None: blmark = self.reader.get_mark() assert blmark.column == 0 blanks = "" @@ -2413,8 +2327,7 @@ self.reader.forward() ch = self.reader.peek() - def scan_block_scalar_ignored_line(self, start_mark): - # type: (Any) -> Any + def scan_block_scalar_ignored_line(self, start_mark: Any) -> Any: # See the specification for details. srp = self.reader.peek srf = self.reader.forward @@ -2435,7 +2348,7 @@ raise ScannerError( 'while scanning a block scalar', start_mark, - _F('expected a comment or a line break, but found {ch!r}', ch=ch), + f'expected a comment or a line break, but found {ch!r}', self.reader.get_mark(), ) if comment is not None:
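Beyond the blanket move from _F() calls and type comments to f-strings and inline annotations, the scanner picks up a few behavioural changes: a tab is skipped like a space between tokens once flow_level > 0, scan_tag recognizes a leading '!!' short handle, scan_block_scalar_indentation rejects follow-up lines indented more than the first, and the round-trip scanner keeps URI escapes undecoded. A hedged sketch of the flow-context tab change (the exact pre-0.17.32 failure mode is not reproduced here):

# Illustrative check of the new white_space handling in scan_to_next_token:
# inside a flow collection a '\t' after the ',' is skipped like a space.
from ruamel.yaml import YAML

yaml = YAML()
data = yaml.load("m: {a: 1,\tb: 2}\n")
assert data['m'] == {'a': 1, 'b': 2}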
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/serializer.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/serializer.py
Changed
@@ -18,9 +18,8 @@ ) from ruamel.yaml.nodes import MappingNode, ScalarNode, SequenceNode -if False: # MYPY - from typing import Any, Dict, Union, Text, Optional # NOQA - from ruamel.yaml.compat import VersionType # NOQA +from typing import Any, Dict, Union, Text, Optional # NOQA +from ruamel.yaml.compat import VersionType # NOQA __all__ = 'Serializer', 'SerializerError' @@ -32,19 +31,19 @@ class Serializer: # 'id' and 3+ numbers, but not 000 - ANCHOR_TEMPLATE = 'id%03d' + ANCHOR_TEMPLATE = 'id{:03d}' ANCHOR_RE = RegExp('id(?!000$)\\d{3,}') def __init__( self, - encoding=None, - explicit_start=None, - explicit_end=None, - version=None, - tags=None, - dumper=None, - ): - # type: (Any, Optionalbool, Optionalbool, OptionalVersionType, Any, Any) -> None # NOQA + encoding: Any = None, + explicit_start: Optionalbool = None, + explicit_end: Optionalbool = None, + version: OptionalVersionType = None, + tags: Any = None, + dumper: Any = None, + ) -> None: + # NOQA self.dumper = dumper if self.dumper is not None: self.dumper._serializer = self @@ -56,28 +55,25 @@ else: self.use_version = version # type: ignore self.use_tags = tags - self.serialized_nodes = {} # type: DictAny, Any - self.anchors = {} # type: DictAny, Any + self.serialized_nodes: DictAny, Any = {} + self.anchors: DictAny, Any = {} self.last_anchor_id = 0 - self.closed = None # type: Optionalbool + self.closed: Optionalbool = None self._templated_id = None @property - def emitter(self): - # type: () -> Any + def emitter(self) -> Any: if hasattr(self.dumper, 'typ'): return self.dumper.emitter return self.dumper._emitter @property - def resolver(self): - # type: () -> Any + def resolver(self) -> Any: if hasattr(self.dumper, 'typ'): self.dumper.resolver return self.dumper._resolver - def open(self): - # type: () -> None + def open(self) -> None: if self.closed is None: self.emitter.emit(StreamStartEvent(encoding=self.use_encoding)) self.closed = False @@ -86,8 +82,7 @@ else: raise SerializerError('serializer is already opened') - def close(self): - # type: () -> None + def close(self) -> None: if self.closed is None: raise SerializerError('serializer is not opened') elif not self.closed: @@ -97,8 +92,7 @@ # def __del__(self): # self.close() - def serialize(self, node): - # type: (Any) -> None + def serialize(self, node: Any) -> None: if dbg(DBG_NODE): nprint('Serializing nodes') node.dump() @@ -118,8 +112,7 @@ self.anchors = {} self.last_anchor_id = 0 - def anchor_node(self, node): - # type: (Any) -> None + def anchor_node(self, node: Any) -> None: if node in self.anchors: if self.anchorsnode is None: self.anchorsnode = self.generate_anchor(node) @@ -139,19 +132,17 @@ self.anchor_node(key) self.anchor_node(value) - def generate_anchor(self, node): - # type: (Any) -> Any + def generate_anchor(self, node: Any) -> Any: try: anchor = node.anchor.value except: # NOQA anchor = None if anchor is None: self.last_anchor_id += 1 - return self.ANCHOR_TEMPLATE % self.last_anchor_id + return self.ANCHOR_TEMPLATE.format(self.last_anchor_id) return anchor - def serialize_node(self, node, parent, index): - # type: (Any, Any, Any) -> None + def serialize_node(self, node: Any, parent: Any, index: Any) -> None: alias = self.anchorsnode if node in self.serialized_nodes: node_style = getattr(node, 'style', None) @@ -167,14 +158,14 @@ detected_tag = self.resolver.resolve(ScalarNode, node.value, (True, False)) default_tag = self.resolver.resolve(ScalarNode, node.value, (False, True)) implicit = ( - (node.tag == detected_tag), - (node.tag == default_tag), - 
node.tag.startswith('tag:yaml.org,2002:'), + (node.ctag == detected_tag), + (node.ctag == default_tag), + node.tag.startswith('tag:yaml.org,2002:'), # type: ignore ) self.emitter.emit( ScalarEvent( alias, - node.tag, + node.ctag, implicit, node.value, style=node.style, @@ -182,7 +173,7 @@ ) ) elif isinstance(node, SequenceNode): - implicit = node.tag == self.resolver.resolve(SequenceNode, node.value, True) + implicit = node.ctag == self.resolver.resolve(SequenceNode, node.value, True) comment = node.comment end_comment = None seq_comment = None @@ -197,7 +188,7 @@ self.emitter.emit( SequenceStartEvent( alias, - node.tag, + node.ctag, implicit, flow_style=node.flow_style, comment=node.comment, @@ -209,7 +200,7 @@ index += 1 self.emitter.emit(SequenceEndEvent(comment=seq_comment, end_comment)) elif isinstance(node, MappingNode): - implicit = node.tag == self.resolver.resolve(MappingNode, node.value, True) + implicit = node.ctag == self.resolver.resolve(MappingNode, node.value, True) comment = node.comment end_comment = None map_comment = None @@ -222,7 +213,7 @@ self.emitter.emit( MappingStartEvent( alias, - node.tag, + node.ctag, implicit, flow_style=node.flow_style, comment=node.comment, @@ -236,6 +227,5 @@ self.resolver.ascend_resolver() -def templated_id(s): - # type: (Text) -> Any +def templated_id(s: Text) -> Any: return Serializer.ANCHOR_RE.match(s)
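In serializer.py the anchor template moves from %-formatting to str.format ('id{:03d}') and the implicit-tag comparisons now go through node.ctag, the Tag object introduced in tag.py further down; the generated anchor names themselves keep the id001 style. A small sketch of where that template surfaces, assuming default round-trip mode:

# Illustrative: aliased nodes without an explicit anchor get one generated
# from ANCHOR_TEMPLATE, e.g. id001, id002, ...
import sys
from ruamel.yaml import YAML

yaml = YAML()
shared = {'retries': 3}
yaml.dump({'first': shared, 'second': shared}, sys.stdout)
# expected:
#   first: &id001
#     retries: 3
#   second: *id001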
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/setup.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/setup.py
Changed
@@ -1,15 +1,13 @@ # # header # coding: utf-8 -# dd: 20200903 - -from __future__ import print_function, absolute_import, division, unicode_literals +# dd: 20230418 # # __init__.py parser import sys import os import datetime -import traceback +from textwrap import dedent sys.path = path for path in sys.path if path not in os.getcwd(), "" import platform # NOQA @@ -20,13 +18,13 @@ from setuptools.command import install_lib # NOQA from setuptools.command.sdist import sdist as _sdist # NOQA -try: - from setuptools.namespaces import Installer as NameSpaceInstaller # NOQA -except ImportError: - msg = ('You should use the latest setuptools. The namespaces.py file that this setup.py' - ' uses was added in setuptools 28.7.0 (Oct 2016)') - print(msg) - sys.exit() +# try: +# from setuptools.namespaces import Installer as NameSpaceInstaller # NOQA +# except ImportError: +# msg = ('You should use the latest setuptools. The namespaces.py file that this setup.py' +# ' uses was added in setuptools 28.7.0 (Oct 2016)') +# print(msg) +# sys.exit() if __name__ != '__main__': raise NotImplementedError('should never include setup.py') @@ -278,8 +276,7 @@ class MySdist(_sdist): def initialize_options(self): _sdist.initialize_options(self) - # see pep 527, new uploads should be tar.gz or .zip - # fmt = getattr(self, 'tarfmt', None) + # failed expiriment, see pep 527, new uploads should be tar.gz or .zip # because of unicode_literals # self.formats = fmt if fmt else b'bztar' if sys.version_info < (3, ) else 'bztar' dist_base = os.environ.get('PYDISTBASE') @@ -317,8 +314,8 @@ self._split = None self.depth = self.full_package_name.count('.') self.nested = self._pkg_data.get('nested', False) - if self.nested: - NameSpaceInstaller.install_namespaces = lambda x: None + # if self.nested: + # NameSpaceInstaller.install_namespaces = lambda x: None self.command = None self.python_version() self._pkg = None, None # required and pre-installable packages @@ -387,9 +384,6 @@ return self._split @property - def namespace_packages(self): - return self.split: self.depth - def namespace_directories(self, depth=None): """return list of directories where the namespace should be created / can be found @@ -410,23 +404,11 @@ } if 'extra_packages' in self._pkg_data: return d - if len(self.split) > 1: # only if package namespace - dself.split0 = self.namespace_directories(1)0 + # if len(self.split) > 1: # only if package namespace + # dself.split0 = self.namespace_directories(1)0 + # print('d', d, os.getcwd()) return d - def create_dirs(self): - """create the directories necessary for namespace packaging""" - directories = self.namespace_directories(self.depth) - if not directories: - return - if not os.path.exists(directories0): - for d in directories: - os.mkdir(d) - with open(os.path.join(d, '__init__.py'), 'w') as fp: - fp.write( - 'import pkg_resources\n' 'pkg_resources.declare_namespace(__name__)\n' - ) - def python_version(self): supported = self._pkg_data.get('supported') if supported is None: @@ -718,7 +700,8 @@ @property def packages(self): - s = self.split + # s = self.split + s = self._pkg_data'full_package_name' # fixed this in package_data, the keys there must be non-unicode for py27 # if sys.version_info < (3, 0): # s = x.encode('utf-8') for x in self.split @@ -754,7 +737,7 @@ except ValueError: pass self._ext_modules = - no_test_compile = False + no_test_compile = True if '--restructuredtext' in sys.argv: no_test_compile = True elif 'sdist' in sys.argv: @@ -768,77 +751,7 @@ ) self._ext_modules.append(ext) return 
self._ext_modules - - print('sys.argv', sys.argv) - import tempfile - import shutil - from textwrap import dedent - - import distutils.sysconfig - import distutils.ccompiler - from distutils.errors import CompileError, LinkError - - for target in self._pkg_data.get('ext_modules', ): # list of dicts - ext = Extension( - self.pn(target'name'), - sources=self.pn(x) for x in target'src', - libraries=self.pn(x) for x in target.get('lib'), - ) - # debug('test1 in target', 'test' in target, target) - if 'test' not in target: # no test, just hope it works - self._ext_modules.append(ext) - continue - if sys.version_info:2 == (3, 4) and platform.system() == 'Windows': - # this is giving problems on appveyor, so skip - if 'FORCE_C_BUILD_TEST' not in os.environ: - self._ext_modules.append(ext) - continue - # write a temporary .c file to compile - c_code = dedent(target'test') - try: - tmp_dir = tempfile.mkdtemp(prefix='tmp_ruamel_') - bin_file_name = 'test' + self.pn(target'name') - file_name = os.path.join(tmp_dir, bin_file_name + '.c') - print('test compiling', file_name, '->', bin_file_name, end=' ') - with open(file_name, 'w') as fp: # write source - fp.write(c_code) - # and try to compile it - compiler = distutils.ccompiler.new_compiler() - assert isinstance(compiler, distutils.ccompiler.CCompiler) - # do any platform specific initialisations - distutils.sysconfig.customize_compiler(compiler) - # make sure you can reach header files because compile does change dir - compiler.add_include_dir(os.getcwd()) - if sys.version_info < (3,): - tmp_dir = tmp_dir.encode('utf-8') - # used to be a different directory, not necessary - compile_out_dir = tmp_dir - try: - compiler.link_executable( - compiler.compile(file_name, output_dir=compile_out_dir), - bin_file_name, - output_dir=tmp_dir, - libraries=ext.libraries, - ) - except CompileError: - debug('compile error:', file_name) - print('compile error:', file_name) - raise - except LinkError: - debug('link error', file_name) - print('link error', file_name) - raise - print('OK') - self._ext_modules.append(ext) - except Exception as e: # NOQA - debug('Exception:', e) - print('Exception:', e) - sys.exit(1) - if sys.version_info:2 == (3, 4) and platform.system() == 'Windows': - traceback.print_exc() - finally: - shutil.rmtree(tmp_dir) - return self._ext_modules + # this used to use distutils @property def test_suite(self): @@ -854,10 +767,6 @@ if os.path.exists(file_name): # add it if not in there? 
return False with open(file_name, 'w') as fp: - if os.path.exists('LICENSE'): - fp.write('metadata\nlicense_file = LICENSE\n') - else: - print('\n\n>>>>>> LICENSE file not found <<<<<\n\n') if self._pkg_data.get('universal'): fp.write('bdist_wheel\nuniversal = 1\n') try: @@ -869,25 +778,71 @@ return True -# # call setup +class TmpFiles: + def __init__(self, pkg_data, py_project=True, keep=False): + self._rm_after = + self._pkg_data = pkg_data + self._py_project = py_project + self._bdist_wheel = 'bdist_wheel' in sys.argv + self._keep = keep + + def __enter__(self): + self.bdist_wheel() + self.py_project() + + def bdist_wheel(self): + """pyproject doesn't allow for universal, so use setup.cfg if necessary + """ + file_name = 'setup.cfg' + if not self._bdist_wheel or os.path.exists(file_name): + return + if self._pkg_data.get('universal'): + self._rm_after.append(file_name) + with open(file_name, 'w') as fp: + fp.write('bdist_wheel\nuniversal = 1\n') + + def py_project(self): + """ + to prevent pip from complaining, or is it too late to create it from setup.py + """ + file_name = 'pyproject.toml' + if not self._py_project or os.path.exists(file_name): + return + self._rm_after.append(file_name) + with open(file_name, 'w') as fp: + fp.write(dedent("""\ + build-system + requires = "setuptools", "wheel" + # test + build-backend = "setuptools.build_meta" + """)) + + def __exit__(self, typ, value, traceback): + if self._keep: + return + for p in self._rm_after: + if not os.path.exists(p): + print('file {} already removed'.format(p)) + else: + os.unlink(p) + + +# call setup def main(): dump_kw = '--dump-kw' if dump_kw in sys.argv: import wheel - import distutils import setuptools + import pip print('python: ', sys.version) + print('pip: ', pip.__version__) print('setuptools:', setuptools.__version__) - print('distutils: ', distutils.__version__) print('wheel: ', wheel.__version__) nsp = NameSpacePackager(pkg_data) nsp.check() - nsp.create_dirs() + # nsp.create_dirs() MySdist.nsp = nsp - if pkg_data.get('tarfmt'): - MySdist.tarfmt = pkg_data.get('tarfmt') - cmdclass = dict(install_lib=MyInstallLib, sdist=MySdist) if _bdist_wheel_available: MyBdistWheel.nsp = nsp @@ -895,7 +850,6 @@ kw = dict( name=nsp.full_package_name, - namespace_packages=nsp.namespace_packages, version=version_str, packages=nsp.packages, python_requires=nsp.python_requires, @@ -914,12 +868,13 @@ package_data=nsp.package_data, ext_modules=nsp.ext_modules, test_suite=nsp.test_suite, + zip_safe=False, ) if '--version' not in sys.argv and ('--verbose' in sys.argv or dump_kw in sys.argv): for k in sorted(kw): v = kwk - print(' "{0}": "{1}",'.format(k, v)) + print(' "{0}": {1},'.format(k, repr(v))) # if '--record' in sys.argv: # return if dump_kw in sys.argv: @@ -931,31 +886,32 @@ except Exception: pass - if nsp.wheel(kw, setup): - return - for x in '-c', 'egg_info', '--egg-base', 'pip-egg-info': - if x not in sys.argv: - break - else: - # we're doing a tox setup install any starred package by searching up the source tree - # until you match your/package/name for your.package.name - for p in nsp.install_pre: - import subprocess - - # search other source - setup_path = os.path.join(*p.split('.') + 'setup.py') - try_dir = os.path.dirname(sys.executable) - while len(try_dir) > 1: - full_path_setup_py = os.path.join(try_dir, setup_path) - if os.path.exists(full_path_setup_py): - pip = sys.executable.replace('python', 'pip') - cmd = pip, 'install', os.path.dirname(full_path_setup_py) - # with open('/var/tmp/notice', 'a') as fp: - # 
print('installing', cmd, file=fp) - subprocess.check_output(cmd) - break - try_dir = os.path.dirname(try_dir) - setup(**kw) + # if nsp.wheel(kw, setup): + # return + with TmpFiles(pkg_data, keep=True): + for x in '-c', 'egg_info', '--egg-base', 'pip-egg-info': + if x not in sys.argv: + break + else: + # we're doing a tox setup install any starred package by searching up the + # source tree until you match your/package/name for your.package.name + for p in nsp.install_pre: + import subprocess + + # search other source + setup_path = os.path.join(*p.split('.') + 'setup.py') + try_dir = os.path.dirname(sys.executable) + while len(try_dir) > 1: + full_path_setup_py = os.path.join(try_dir, setup_path) + if os.path.exists(full_path_setup_py): + pip = sys.executable.replace('python', 'pip') + cmd = pip, 'install', os.path.dirname(full_path_setup_py) + # with open('/var/tmp/notice', 'a') as fp: + # print('installing', cmd, file=fp) + subprocess.check_output(cmd) + break + try_dir = os.path.dirname(try_dir) + setup(**kw) main()
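setup.py drops the namespace-package plumbing and the distutils compile test, and instead writes its build metadata on the fly through the new TmpFiles context manager. A condensed sketch of that helper, assuming the pyproject.toml it writes is a standard build-system table; the universal-wheel setup.cfg branch is elided and the shape is illustrative rather than a verbatim copy:

import os
from textwrap import dedent


class TmpFiles:
    """Write build metadata for the duration of a build, then clean up."""

    def __init__(self, pkg_data, py_project=True, keep=False):
        self._rm_after = []          # files created here, removed on exit
        self._pkg_data = pkg_data
        self._py_project = py_project
        self._keep = keep

    def __enter__(self):
        if self._py_project and not os.path.exists('pyproject.toml'):
            self._rm_after.append('pyproject.toml')
            with open('pyproject.toml', 'w') as fp:
                fp.write(dedent("""\
                    [build-system]
                    requires = ["setuptools", "wheel"]
                    build-backend = "setuptools.build_meta"
                """))

    def __exit__(self, typ, value, traceback):
        if self._keep:
            return
        for p in self._rm_after:
            if os.path.exists(p):
                os.unlink(p)


# main() wraps the actual setup() call in this helper:
# with TmpFiles(pkg_data, keep=True):
#     setup(**kw)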
View file
_service:tar_scm:ruamel.yaml-0.17.32.tar.gz/tag.py
Added
@@ -0,0 +1,124 @@
+# coding: utf-8
+
+"""
+In round-trip mode the original tag needs to be preserved, but the tag
+transformed based on the directives needs to be available as well.
+
+A Tag that is created during loading has a handle and a suffix.
+Not all objects loaded currently have a Tag, that .tag attribute can be None
+A Tag that is created for dumping only (on an object loaded without a tag) has a suffix
+only.
+"""
+
+from typing import Any, Dict, Optional, List, Union, Optional, Iterator  # NOQA
+
+tag_attrib = '_yaml_tag'
+
+
+class Tag:
+    """store original tag information for roundtripping"""
+
+    attrib = tag_attrib
+
+    def __init__(self, handle: Any = None, suffix: Any = None, handles: Any = None) -> None:
+        self.handle = handle
+        self.suffix = suffix
+        self.handles = handles
+        self._transform_type: Optional[bool] = None
+
+    def __repr__(self) -> str:
+        return f'{self.__class__.__name__}({self.trval!r})'
+
+    def __str__(self) -> str:
+        return f'{self.trval}'
+
+    def __hash__(self) -> int:
+        try:
+            return self._hash_id  # type: ignore
+        except AttributeError:
+            self._hash_id = res = hash((self.handle, self.suffix))
+            return res
+
+    def __eq__(self, other: Any) -> bool:
+        # other should not be a string, but the serializer sometimes provides these
+        if isinstance(other, str):
+            return self.trval == other
+        return bool(self.trval == other.trval)
+
+    def startswith(self, x: str) -> bool:
+        if self.trval is not None:
+            return self.trval.startswith(x)
+        return False
+
+    @property
+    def trval(self) -> Optional[str]:
+        try:
+            return self._trval
+        except AttributeError:
+            pass
+        if self.handle is None:
+            self._trval: Optional[str] = self.uri_decoded_suffix
+            return self._trval
+        assert self._transform_type is not None
+        if not self._transform_type:
+            # the non-round-trip case
+            self._trval = self.handles[self.handle] + self.uri_decoded_suffix
+            return self._trval
+        # round-trip case
+        if self.handle == '!!' and self.suffix in (
+            'null',
+            'bool',
+            'int',
+            'float',
+            'binary',
+            'timestamp',
+            'omap',
+            'pairs',
+            'set',
+            'str',
+            'seq',
+            'map',
+        ):
+            self._trval = self.handles[self.handle] + self.uri_decoded_suffix
+        else:
+            # self._trval = self.handle + self.suffix
+            self._trval = self.handles[self.handle] + self.uri_decoded_suffix
+        return self._trval
+
+    value = trval
+
+    @property
+    def uri_decoded_suffix(self) -> Optional[str]:
+        try:
+            return self._uri_decoded_suffix
+        except AttributeError:
+            pass
+        if self.suffix is None:
+            self._uri_decoded_suffix: Optional[str] = None
+            return None
+        res = ''
+        # don't have to check for scanner errors here
+        idx = 0
+        while idx < len(self.suffix):
+            ch = self.suffix[idx]
+            idx += 1
+            if ch != '%':
+                res += ch
+            else:
+                res += chr(int(self.suffix[idx : idx + 2], 16))
+                idx += 2
+        self._uri_decoded_suffix = res
+        return res
+
+    def select_transform(self, val: bool) -> None:
+        """
+        val: False -> non-round-trip
+             True -> round-trip
+        """
+        assert self._transform_type is None
+        self._transform_type = val
+
+    def check_handle(self) -> bool:
+        if self.handle is None:
+            return False
+        return self.handle not in self.handles
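The decoding loop in uri_decoded_suffix above is what lets %xx escapes in a tag suffix be recovered when the tag is rendered. The sketch below restates that loop as a standalone function so it can be tried in isolation; it mirrors the property character-for-character (one character per %xx escape, as in the code above) and is not part of the library API.

def uri_decode(suffix: str) -> str:
    """Decode %xx escapes in a YAML tag suffix, leaving other characters alone."""
    res = ''
    idx = 0
    while idx < len(suffix):
        ch = suffix[idx]
        idx += 1
        if ch != '%':
            res += ch
        else:
            # the two characters following '%' are a hexadecimal byte value
            res += chr(int(suffix[idx:idx + 2], 16))
            idx += 2
    return res


print(uri_decode('python/object:some%20name'))  # -> 'python/object:some name'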
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/timestamp.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/timestamp.py
Changed
@@ -5,39 +5,36 @@ # ToDo: at least on PY3 you could probably attach the tzinfo correctly to the object # a more complete datetime might be used by safe loading as well +# +# add type information (iso8601, spaced) -if False: # MYPY - from typing import Any, Dict, Optional, List # NOQA +from typing import Any, Dict, Optional, List # NOQA class TimeStamp(datetime.datetime): - def __init__(self, *args, **kw): - # type: (Any, Any) -> None - self._yaml = dict(t=False, tz=None, delta=0) # type: DictAny, Any + def __init__(self, *args: Any, **kw: Any) -> None: + self._yaml: DictAny, Any = dict(t=False, tz=None, delta=0) - def __new__(cls, *args, **kw): # datetime is immutable - # type: (Any, Any) -> Any + def __new__(cls, *args: Any, **kw: Any) -> Any: # datetime is immutable return datetime.datetime.__new__(cls, *args, **kw) - def __deepcopy__(self, memo): - # type: (Any) -> Any + def __deepcopy__(self, memo: Any) -> Any: ts = TimeStamp(self.year, self.month, self.day, self.hour, self.minute, self.second) ts._yaml = copy.deepcopy(self._yaml) return ts def replace( self, - year=None, - month=None, - day=None, - hour=None, - minute=None, - second=None, - microsecond=None, - tzinfo=True, - fold=None, - ): - # type: (Any, Any, Any, Any, Any, Any, Any, Any, Any) -> Any + year: Any = None, + month: Any = None, + day: Any = None, + hour: Any = None, + minute: Any = None, + second: Any = None, + microsecond: Any = None, + tzinfo: Any = True, + fold: Any = None, + ) -> Any: if year is None: year = self.year if month is None:
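TimeStamp keeps its construction split between __new__ and __init__ because datetime.datetime is immutable: the date/time fields can only be set in __new__, while the mutable _yaml bookkeeping dict is attached afterwards in __init__. A minimal illustration of that pattern, using generic names rather than the ruamel.yaml class itself:

import datetime
from typing import Any, Dict


class AnnotatedDateTime(datetime.datetime):
    """datetime subclass carrying extra, mutable metadata."""

    def __new__(cls, *args: Any, **kw: Any) -> 'AnnotatedDateTime':
        # the immutable datetime fields must be set here, not in __init__
        return datetime.datetime.__new__(cls, *args, **kw)

    def __init__(self, *args: Any, **kw: Any) -> None:
        # mutable extras live in a plain dict on the instance
        self.meta: Dict[str, Any] = dict(tz=None, delta=0)


ts = AnnotatedDateTime(2023, 6, 17, 12, 30, 0)
ts.meta['tz'] = '+02:00'
print(ts.isoformat(), ts.meta)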
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/tokens.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/tokens.py
Changed
@@ -1,10 +1,9 @@ # coding: utf-8 -from ruamel.yaml.compat import _F, nprintf # NOQA +from ruamel.yaml.compat import nprintf # NOQA -if False: # MYPY - from typing import Text, Any, Dict, Optional, List # NOQA - from .error import StreamMark # NOQA +from typing import Text, Any, Dict, Optional, List # NOQA +from .error import StreamMark # NOQA SHOW_LINES = True @@ -12,23 +11,19 @@ class Token: __slots__ = 'start_mark', 'end_mark', '_comment' - def __init__(self, start_mark, end_mark): - # type: (StreamMark, StreamMark) -> None + def __init__(self, start_mark: StreamMark, end_mark: StreamMark) -> None: self.start_mark = start_mark self.end_mark = end_mark - def __repr__(self): - # type: () -> Any + def __repr__(self) -> Any: # attributes = key for key in self.__slots__ if not key.endswith('_mark') and # hasattr('self', key) attributes = key for key in self.__slots__ if not key.endswith('_mark') attributes.sort() # arguments = ', '.join( - # _F('{key!s}={gattr!r})', key=key, gattr=getattr(self, key)) for key in attributes + # f'{key!s}={getattr(self, key)!r})' for key in attributes # ) - arguments = - _F('{key!s}={gattr!r}', key=key, gattr=getattr(self, key)) for key in attributes - + arguments = f'{key!s}={getattr(self, key)!r}' for key in attributes if SHOW_LINES: try: arguments.append('line: ' + str(self.start_mark.line)) @@ -38,16 +33,14 @@ arguments.append('comment: ' + str(self._comment)) except: # NOQA pass - return '{}({})'.format(self.__class__.__name__, ', '.join(arguments)) + return f'{self.__class__.__name__}({", ".join(arguments)})' @property - def column(self): - # type: () -> int + def column(self) -> int: return self.start_mark.column @column.setter - def column(self, pos): - # type: (Any) -> None + def column(self, pos: Any) -> None: self.start_mark.column = pos # old style ( <= 0.17) is a TWO element list with first being the EOL @@ -61,8 +54,7 @@ # new style routines add one comment at a time # going to be deprecated in favour of add_comment_eol/post - def add_post_comment(self, comment): - # type: (Any) -> None + def add_post_comment(self, comment: Any) -> None: if not hasattr(self, '_comment'): self._comment = None, None else: @@ -73,8 +65,7 @@ self._comment0 = comment # going to be deprecated in favour of add_comment_pre - def add_pre_comments(self, comments): - # type: (Any) -> None + def add_pre_comments(self, comments: Any) -> None: if not hasattr(self, '_comment'): self._comment = None, None else: @@ -84,8 +75,7 @@ return # new style - def add_comment_pre(self, comment): - # type: (Any) -> None + def add_comment_pre(self, comment: Any) -> None: if not hasattr(self, '_comment'): self._comment = , None, None # type: ignore else: @@ -94,8 +84,7 @@ self._comment0 = # type: ignore self._comment0.append(comment) # type: ignore - def add_comment_eol(self, comment, comment_type): - # type: (Any, Any) -> None + def add_comment_eol(self, comment: Any, comment_type: Any) -> None: if not hasattr(self, '_comment'): self._comment = None, None, None else: @@ -107,8 +96,7 @@ # nprintf('commy', self.comment, comment_type) self._comment1comment_type = comment # type: ignore - def add_comment_post(self, comment): - # type: (Any) -> None + def add_comment_post(self, comment: Any) -> None: if not hasattr(self, '_comment'): self._comment = None, None, # type: ignore else: @@ -117,17 +105,14 @@ self._comment2 = # type: ignore self._comment2.append(comment) # type: ignore - # def get_comment(self): - # # type: () -> Any + # def get_comment(self) -> Any: # return getattr(self, '_comment', 
None) @property - def comment(self): - # type: () -> Any + def comment(self) -> Any: return getattr(self, '_comment', None) - def move_old_comment(self, target, empty=False): - # type: (Any, bool) -> Any + def move_old_comment(self, target: Any, empty: bool = False) -> Any: """move a comment from this token to target (normally next token) used to combine e.g. comments before a BlockEntryToken to the ScalarToken that follows it @@ -149,15 +134,14 @@ # nprint('mco2:', self, target, target.comment, empty) return self if c0 and tc0 or c1 and tc1: - raise NotImplementedError(_F('overlap in comment {c!r} {tc!r}', c=c, tc=tc)) + raise NotImplementedError(f'overlap in comment {c!r} {tc!r}') if c0: tc0 = c0 if c1: tc1 = c1 return self - def split_old_comment(self): - # type: () -> Any + def split_old_comment(self) -> Any: """ split the post part of a comment, and return it as comment to be added. Delete second part if None, None abc: # this goes to sequence @@ -172,8 +156,7 @@ delattr(self, '_comment') return ret_val - def move_new_comment(self, target, empty=False): - # type: (Any, bool) -> Any + def move_new_comment(self, target: Any, empty: bool = False) -> Any: """move a comment from this token to target (normally next token) used to combine e.g. comments before a BlockEntryToken to the ScalarToken that follows it @@ -197,7 +180,7 @@ # if self and target have both pre, eol or post comments, something seems wrong for idx in range(3): if cidx is not None and tcidx is not None: - raise NotImplementedError(_F('overlap in comment {c!r} {tc!r}', c=c, tc=tc)) + raise NotImplementedError(f'overlap in comment {c!r} {tc!r}') # move the comment parts for idx in range(3): if cidx: @@ -213,8 +196,7 @@ __slots__ = 'name', 'value' id = '<directive>' - def __init__(self, name, value, start_mark, end_mark): - # type: (Any, Any, Any, Any) -> None + def __init__(self, name: Any, value: Any, start_mark: Any, end_mark: Any) -> None: Token.__init__(self, start_mark, end_mark) self.name = name self.value = value @@ -234,8 +216,9 @@ __slots__ = ('encoding',) id = '<stream start>' - def __init__(self, start_mark=None, end_mark=None, encoding=None): - # type: (Any, Any, Any) -> None + def __init__( + self, start_mark: Any = None, end_mark: Any = None, encoding: Any = None + ) -> None: Token.__init__(self, start_mark, end_mark) self.encoding = encoding @@ -284,9 +267,8 @@ __slots__ = () id = '?' 
- # def x__repr__(self): - # return 'KeyToken({})'.format( - # self.start_mark.bufferself.start_mark.index:.split(None, 1)0) +# def x__repr__(self): +# return f'KeyToken({self.start_mark.bufferself.start_mark.index:.split(None, 1)0})' class ValueToken(Token): @@ -308,8 +290,7 @@ __slots__ = ('value',) id = '<alias>' - def __init__(self, value, start_mark, end_mark): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any, start_mark: Any, end_mark: Any) -> None: Token.__init__(self, start_mark, end_mark) self.value = value @@ -318,8 +299,7 @@ __slots__ = ('value',) id = '<anchor>' - def __init__(self, value, start_mark, end_mark): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any, start_mark: Any, end_mark: Any) -> None: Token.__init__(self, start_mark, end_mark) self.value = value @@ -328,8 +308,7 @@ __slots__ = ('value',) id = '<tag>' - def __init__(self, value, start_mark, end_mark): - # type: (Any, Any, Any) -> None + def __init__(self, value: Any, start_mark: Any, end_mark: Any) -> None: Token.__init__(self, start_mark, end_mark) self.value = value @@ -338,8 +317,9 @@ __slots__ = 'value', 'plain', 'style' id = '<scalar>' - def __init__(self, value, plain, start_mark, end_mark, style=None): - # type: (Any, Any, Any, Any, Any) -> None + def __init__( + self, value: Any, plain: Any, start_mark: Any, end_mark: Any, style: Any = None + ) -> None: Token.__init__(self, start_mark, end_mark) self.value = value self.plain = plain @@ -347,11 +327,12 @@ class CommentToken(Token): - __slots__ = '_value', 'pre_done' + __slots__ = '_value', '_column', 'pre_done' id = '<comment>' - def __init__(self, value, start_mark=None, end_mark=None, column=None): - # type: (Any, Any, Any, Any) -> None + def __init__( + self, value: Any, start_mark: Any = None, end_mark: Any = None, column: Any = None + ) -> None: if start_mark is None: assert column is not None self._column = column @@ -359,25 +340,21 @@ self._value = value @property - def value(self): - # type: () -> str + def value(self) -> str: if isinstance(self._value, str): return self._value return "".join(self._value) @value.setter - def value(self, val): - # type: (Any) -> None + def value(self, val: Any) -> None: self._value = val - def reset(self): - # type: () -> None + def reset(self) -> None: if hasattr(self, 'pre_done'): delattr(self, 'pre_done') - def __repr__(self): - # type: () -> Any - v = '{!r}'.format(self.value) + def __repr__(self) -> Any: + v = f'{self.value!r}' if SHOW_LINES: try: v += ', line: ' + str(self.start_mark.line) @@ -387,10 +364,9 @@ v += ', col: ' + str(self.start_mark.column) except: # NOQA pass - return 'CommentToken({})'.format(v) + return f'CommentToken({v})' - def __eq__(self, other): - # type: (Any) -> bool + def __eq__(self, other: Any) -> bool: if self.start_mark != other.start_mark: return False if self.end_mark != other.end_mark: @@ -399,6 +375,5 @@ return False return True - def __ne__(self, other): - # type: (Any) -> bool + def __ne__(self, other: Any) -> bool: return not self.__eq__(other)
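Most of the tokens.py hunk is mechanical: the `# type:` comments move into inline annotations and the _F() helper imported from ruamel.yaml.compat (a format-string shim) is replaced by f-strings. The two functions below are equivalent and only illustrate the transformation; the names are made up for the example.

from typing import Any


def describe_old(key, value):
    # type: (str, Any) -> str
    # old style: type comment plus an explicit format call
    return '{key!s}={value!r}'.format(key=key, value=value)


def describe_new(key: str, value: Any) -> str:
    # new style: inline annotations plus an f-string
    return f'{key!s}={value!r}'


assert describe_old('style', None) == describe_new('style', None) == 'style=None'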
View file
_service:tar_scm:ruamel.yaml-0.17.21.tar.gz/util.py -> _service:tar_scm:ruamel.yaml-0.17.32.tar.gz/util.py
Changed
@@ -9,9 +9,8 @@ import re -if False: # MYPY - from typing import Any, Dict, Optional, List, Text # NOQA - from .compat import StreamTextType # NOQA +from typing import Any, Dict, Optional, List, Text, Callable, Union # NOQA +from .compat import StreamTextType # NOQA class LazyEval: @@ -25,25 +24,21 @@ return value (or, prior to evaluation, func and arguments), in its closure. """ - def __init__(self, func, *args, **kwargs): - # type: (Any, Any, Any) -> None - def lazy_self(): - # type: () -> Any + def __init__(self, func: Callable..., Any, *args: Any, **kwargs: Any) -> None: + def lazy_self() -> Any: return_value = func(*args, **kwargs) object.__setattr__(self, 'lazy_self', lambda: return_value) return return_value object.__setattr__(self, 'lazy_self', lazy_self) - def __getattribute__(self, name): - # type: (Any) -> Any + def __getattribute__(self, name: str) -> Any: lazy_self = object.__getattribute__(self, 'lazy_self') if name == 'lazy_self': return lazy_self return getattr(lazy_self(), name) - def __setattr__(self, name, value): - # type: (Any, Any) -> None + def __setattr__(self, name: str, value: Any) -> None: setattr(self.lazy_self(), name, value) @@ -65,9 +60,19 @@ def create_timestamp( - year, month, day, t, hour, minute, second, fraction, tz, tz_sign, tz_hour, tz_minute -): - # type: (Any, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any, Any) -> Any + year: Any, + month: Any, + day: Any, + t: Any, + hour: Any, + minute: Any, + second: Any, + fraction: Any, + tz: Any, + tz_sign: Any, + tz_hour: Any, + tz_minute: Any, +) -> Uniondatetime.datetime, datetime.date: # create a timestamp from match against timestamp_regexp MAX_FRAC = 999999 year = int(year) @@ -122,8 +127,7 @@ # if you use this in your code, I suggest adding a test in your test suite # that check this routines output against a known piece of your YAML # before upgrades to this code break your round-tripped YAML -def load_yaml_guess_indent(stream, **kw): - # type: (StreamTextType, Any) -> Any +def load_yaml_guess_indent(stream: StreamTextType, **kw: Any) -> Any: """guess the indent and block sequence indent of yaml stream/string returns round_trip_loaded stream, indent level, block sequence indent @@ -134,15 +138,14 @@ from .main import YAML # load a YAML document, guess the indentation, if you use TABs you are on your own - def leading_spaces(line): - # type: (Any) -> int + def leading_spaces(line: Any) -> int: idx = 0 while idx < len(line) and lineidx == ' ': idx += 1 return idx if isinstance(stream, str): - yaml_str = stream # type: Any + yaml_str: Any = stream elif isinstance(stream, bytes): # most likely, but the Reader checks BOM for this yaml_str = stream.decode('utf-8') @@ -183,11 +186,10 @@ if indent is None and map_indent is not None: indent = map_indent yaml = YAML() - return yaml.load(yaml_str, **kw), indent, block_seq_indent # type: ignore + return yaml.load(yaml_str, **kw), indent, block_seq_indent -def configobj_walker(cfg): - # type: (Any) -> Any +def configobj_walker(cfg: Any) -> Any: """ walks over a ConfigObj (INI file with comments) generating corresponding YAML output (including comments @@ -206,8 +208,7 @@ yield c -def _walk_section(s, level=0): - # type: (Any, int) -> Any +def _walk_section(s: Any, level: int = 0) -> Any: from configobj import Section assert isinstance(s, Section) @@ -221,7 +222,7 @@ x = '|\n' + i + x.strip().replace('\n', '\n' + i) elif ':' in x: x = "'" + x.replace("'", "''") + "'" - line = '{0}{1}: {2}'.format(indent, name, x) + line = f'{indent}{name}: {x}' c = 
s.inline_commentsname if c: line += ' ' + c @@ -229,7 +230,7 @@ for name in s.sections: for c in s.commentsname: yield indent + c.strip() - line = '{0}{1}:'.format(indent, name) + line = f'{indent}{name}:' c = s.inline_commentsname if c: line += ' ' + c
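The comment in util.py itself recommends pinning the output of load_yaml_guess_indent() in a test suite before upgrading. A short usage sketch, assuming ruamel.yaml 0.17.32 is installed; the guessed values depend on the document and can be None when it contains no block sequences:

import sys

from ruamel.yaml import YAML
from ruamel.yaml.util import load_yaml_guess_indent

document = """\
root:
    - a: 1
    - b: 2
"""

data, indent, block_seq_indent = load_yaml_guess_indent(document)
print(indent, block_seq_indent)  # guessed indent level and block sequence offset

# feed the guessed values back in (a common recipe for keeping the original layout)
yaml = YAML()
yaml.indent(mapping=indent, sequence=indent, offset=block_seq_indent)
yaml.dump(data, sys.stdout)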