experimental/cuda-ubi9/: charset-normalizer-3.3.2 metadata and description


The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet.

author Ahmed TAHRI
author_email ahmed.tahri@cloudnursery.dev
classifiers
  • Development Status :: 5 - Production/Stable
  • License :: OSI Approved :: MIT License
  • Intended Audience :: Developers
  • Topic :: Software Development :: Libraries :: Python Modules
  • Operating System :: OS Independent
  • Programming Language :: Python
  • Programming Language :: Python :: 3
  • Programming Language :: Python :: 3.7
  • Programming Language :: Python :: 3.8
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Programming Language :: Python :: 3.12
  • Programming Language :: Python :: Implementation :: PyPy
  • Topic :: Text Processing :: Linguistic
  • Topic :: Utilities
  • Typing :: Typed
description_content_type text/markdown
keywords encoding,charset,charset-detector,detector,normalization,unicode,chardet,detect
license MIT
project_urls
  • Bug Reports, https://github.com/Ousret/charset_normalizer/issues
  • Documentation, https://charset-normalizer.readthedocs.io/en/latest
provides_extras unicode_backport
requires_python >=3.7.0
File charset_normalizer-3.3.2-py3-none-any.whl
Size 47 KB
Type Python Wheel
Python 3

Charset Detection, for Everyone πŸ‘‹

The Real First Universal Charset Detector

A library that helps you read text from an unknown charset encoding.
Motivated by chardet, I'm trying to resolve the issue by taking a new approach. All IANA character set names for which the Python core library provides codecs are supported.

>>>>> πŸ‘‰ Try Me Online Now, Then Adopt Me πŸ‘ˆ <<<<<

This project offers you an alternative to Universal Charset Encoding Detector, also known as Chardet.

| Feature | Chardet | Charset Normalizer | cChardet |
|---|---|---|---|
| Fast | ❌ | ✅ | ✅ |
| Universal** | ❌ | ✅ | ❌ |
| Reliable without distinguishable standards | ❌ | ✅ | ✅ |
| Reliable with distinguishable standards | ✅ | ✅ | ✅ |
| License | LGPL-2.1 (restrictive) | MIT | MPL-1.1 (restrictive) |
| Native Python | ✅ | ✅ | ❌ |
| Detect spoken language | ❌ | ✅ | N/A |
| UnicodeDecodeError Safety | ❌ | ✅ | ❌ |
| Whl Size (min) | 193.6 kB | 42 kB | ~200 kB |
| Supported Encoding | 33 | 🎉 99 | 40 |


** : They clearly rely on encoding-specific code, even though they cover most of the commonly used encodings.
Did you get here because of the logs? See https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html

⚑ Performance

This package offers better performance than its counterpart, Chardet. Here are some numbers.

| Package | Accuracy | Mean per file (ms) | Files per sec (est.) |
|---|---|---|---|
| chardet | 86 % | 200 ms | 5 files/sec |
| charset-normalizer | 98 % | 10 ms | 100 files/sec |

| Package | 99th percentile | 95th percentile | 50th percentile |
|---|---|---|---|
| chardet | 1200 ms | 287 ms | 23 ms |
| charset-normalizer | 100 ms | 50 ms | 5 ms |

Chardet's performance on larger files (1 MB+) is very poor. Expect a huge difference on large payloads.

Stats are generated using 400+ files with default parameters. For details on the files used, see the GHA workflows. These results may change at any time, and the dataset can be updated to include more files. The actual delays depend heavily on your CPU capabilities, but the relative factors should remain the same. Keep in mind that the stats are generous and that Chardet's accuracy is measured against its initial capability (e.g. supported encodings). Challenge them if you want.
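The published numbers come from the project's own benchmark workflow. If you want a rough feel for throughput on your own corpus, a minimal timing sketch like the one below can help (the ./data path and file pattern are placeholders, not part of the project):

# Illustrative only: a minimal timing sketch, not the project's benchmark harness.
from pathlib import Path
from time import perf_counter

from charset_normalizer import from_bytes

samples = list(Path("./data").glob("*.srt"))  # hypothetical sample set

start = perf_counter()
for sample in samples:
    payload = sample.read_bytes()
    best_guess = from_bytes(payload).best()  # most plausible match, or None
elapsed = perf_counter() - start

if samples:
    print(f"{len(samples)} files in {elapsed:.2f}s "
          f"(~{elapsed / len(samples) * 1000:.1f} ms per file)")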

✨ Installation

Using pip:

pip install charset-normalizer -U

πŸš€ Basic Usage

CLI

This package comes with a CLI.

usage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]
                  file [file ...]

The Real First Universal Charset Detector. Discover originating encoding used
on text file. Normalize text to unicode.

positional arguments:
  files                 File(s) to be analysed

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         Display complementary information about file if any.
                        Stdout will contain logs about the detection process.
  -a, --with-alternative
                        Output complementary possibilities if any. Top-level
                        JSON WILL be a list.
  -n, --normalize       Permit to normalize input file. If not set, program
                        does not write anything.
  -m, --minimal         Only output the charset detected to STDOUT. Disabling
                        JSON output.
  -r, --replace         Replace file when trying to normalize it instead of
                        creating a new one.
  -f, --force           Replace file without asking if you are sure, use this
                        flag with caution.
  -t THRESHOLD, --threshold THRESHOLD
                        Define a custom maximum amount of chaos allowed in
                        decoded content. 0. <= chaos <= 1.
  --version             Show version information and exit.
normalizer ./data/sample.1.fr.srt

or

python -m charset_normalizer ./data/sample.1.fr.srt

πŸŽ‰ Since version 1.4.0 the CLI produce easily usable stdout result in JSON format.

{
    "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
    "encoding": "cp1252",
    "encoding_aliases": [
        "1252",
        "windows_1252"
    ],
    "alternative_encodings": [
        "cp1254",
        "cp1256",
        "cp1258",
        "iso8859_14",
        "iso8859_15",
        "iso8859_16",
        "iso8859_3",
        "iso8859_9",
        "latin_1",
        "mbcs"
    ],
    "language": "French",
    "alphabets": [
        "Basic Latin",
        "Latin-1 Supplement"
    ],
    "has_sig_or_bom": false,
    "chaos": 0.149,
    "coherence": 97.152,
    "unicode_path": null,
    "is_preferred": true
}
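Because the output is plain JSON on stdout, it can be consumed programmatically. A hedged sketch, assuming normalizer is on PATH, a single existing file is passed, and -a is not used (with -a the top level becomes a list); the path is a placeholder:

# Consume the CLI's JSON report from Python.
import json
import subprocess

proc = subprocess.run(
    ["normalizer", "./data/sample.1.fr.srt"],  # placeholder path
    capture_output=True,
    text=True,
    check=True,
)
report = json.loads(proc.stdout)
print(report["encoding"], report["language"])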

Python

Just print out normalized text

from charset_normalizer import from_path

results = from_path('./my_subtitle.srt')

print(str(results.best()))
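If you need more than the decoded text, the best match also exposes metadata. A small follow-up sketch (same placeholder file as above; attribute names per the library's CharsetMatch API):

from charset_normalizer import from_path

best_guess = from_path('./my_subtitle.srt').best()

if best_guess is not None:
    print(best_guess.encoding)   # e.g. 'cp1252'
    print(best_guess.language)   # e.g. 'French'
    print(str(best_guess))       # the decoded, normalized text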

Upgrade your code without effort

from charset_normalizer import detect

The above code will behave the same as chardet. We ensure that we offer the best (reasonable) BC result possible.
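Concretely, a minimal sketch of the drop-in usage (the payload is just an illustration; the result dict mirrors chardet's keys):

from charset_normalizer import detect

raw = "Bonjour, comment ça va ?".encode("cp1252")  # hypothetical payload
result = detect(raw)  # dict with 'encoding', 'language' and 'confidence' keys
print(result["encoding"], result["confidence"])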

See the docs for advanced usage: readthedocs.io

πŸ˜‡ Why

When I started using Chardet, I noticed that it was not suited to my expectations, and I wanted to propose a reliable alternative using a completely different method. Also! I never back down on a good challenge!

I don't care about the originating charset encoding, because two different tables can produce two identical rendered strings. What I want is to get readable text, the best I can.

In a way, I'm brute-forcing text decoding. How cool is that? 😎

Don't confuse the package ftfy with charset-normalizer or chardet. ftfy's goal is to repair broken Unicode strings, whereas charset-normalizer converts a raw file in an unknown encoding to Unicode. A hedged contrast is sketched below.
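A hedged contrast sketch, assuming ftfy is installed; the strings are illustrative only:

# ftfy repairs an already-decoded (mojibake) str,
# charset-normalizer works on raw bytes of unknown encoding.
import ftfy
from charset_normalizer import from_bytes

mojibake = "doesnÃ¢â‚¬â„¢t"          # text that was decoded with the wrong table
print(ftfy.fix_text(mojibake))        # ftfy tries to repair the string itself

raw = "doesn’t".encode("utf_8")       # raw bytes, encoding unknown to the reader
print(str(from_bytes(raw).best()))    # charset-normalizer decodes the bytes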

🍰 How

Wait a minute, what are noise/mess and coherence, according to YOU?

Noise: I opened hundreds of text files, written by humans, with the wrong encoding table. I observed, then established some ground rules about what is obviously a mess. I know my interpretation of what counts as noise is probably incomplete; feel free to contribute in order to improve or rewrite it.

Coherence: For each language on earth, we have computed ranked letter-occurrence frequencies (the best we can). I figured that intel is worth something here, so I use those records against decoded text to check whether I can detect intelligent design.
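The ranking idea can be illustrated with a toy sketch. This is NOT the library's implementation; the hard-coded English ranking and the ten-letter cutoff are illustrative assumptions:

from collections import Counter

# Hypothetical English letters, by descending frequency.
ENGLISH_RANK = "etaoinshrdlcumwfgypbvkjxqz"

def toy_coherence(text: str) -> float:
    """Share of the text's ten most frequent letters that are also common in English."""
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return 0.0
    most_common = [c for c, _ in Counter(letters).most_common(10)]
    return sum(c in ENGLISH_RANK[:10] for c in most_common) / len(most_common)

print(toy_coherence("The quick brown fox jumps over the lazy dog"))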

⚑ Known limitations

⚠️ About Python EOLs

If you are running an end-of-life Python release, upgrade your Python interpreter as soon as possible.

πŸ‘€ Contributing

Contributions, issues and feature requests are very much welcome.
Feel free to check the issues page if you want to contribute.

πŸ“ License

Copyright Β© Ahmed TAHRI @Ousret.
This project is MIT licensed.

Character frequencies used in this project © 2012 Denny Vrandečić

πŸ’Ό For Enterprise

Professional support for charset-normalizer is available as part of the Tidelift Subscription. Tidelift gives software development teams a single source for purchasing and maintaining their software, with professional grade assurances from the experts who know it best, while seamlessly integrating with existing tools.

Changelog

All notable changes to charset-normalizer will be documented in this file. This project adheres to Semantic Versioning. The format is based on Keep a Changelog.

3.3.2 (2023-10-31)

Fixed

Added

3.3.1 (2023-10-22)

Changed

3.3.0 (2023-09-30)

Added

Removed

Changed

Fixed

3.2.0 (2023-06-07)

Changed

Added

Fixed

3.1.0 (2023-03-06)

Added

Removed

Changed

3.0.1 (2022-11-18)

Fixed

Changed

3.0.0 (2022-10-20)

Added

Changed

Fixed

Removed

3.0.0rc1 (2022-10-18)

Added

Changed

Fixed

Removed

3.0.0b2 (2022-08-21)

Added

Removed

Fixed

3.0.0b1 (2022-08-15)

Changed

Removed

2.1.1 (2022-08-19)

Deprecated

Changed

Fixed

2.1.0 (2022-06-19)

Added

Changed

Fixed

Removed

Deprecated

2.0.12 (2022-02-12)

Fixed

2.0.11 (2022-01-30)

Added

Changed

2.0.10 (2022-01-04)

Fixed

Changed

2.0.9 (2021-12-03)

Changed

Fixed

2.0.8 (2021-11-24)

Changed

Fixed

Added

2.0.7 (2021-10-11)

Added

Changed

Removed

Fixed

2.0.6 (2021-09-18)

Fixed

Changed

2.0.5 (2021-09-14)

Changed

Removed

Fixed

2.0.4 (2021-07-30)

Fixed

Changed

2.0.3 (2021-07-16)

Changed

2.0.2 (2021-07-15)

Fixed

Changed

2.0.1 (2021-07-13)

Fixed

Changed

Added

2.0.0 (2021-07-02)

Changed

Removed

Deprecated

Fixed

1.4.1 (2021-05-28)

Fixed

1.4.0 (2021-05-21)

Removed

Fixed

Changed

Added

1.3.9 (2021-05-13)

Fixed

1.3.8 (2021-05-12)

Fixed

1.3.7 (2021-05-12)

Fixed

1.3.6 (2021-02-09)

Changed

1.3.5 (2021-02-08)

Fixed

Changed

Added

MIT License

Copyright (c) 2019 TAHRI Ahmed R.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.