git: 2d8be352aace - main - textproc/py-tokenizer: Correct pkg-descr
Date: Wed, 27 Aug 2025 14:40:33 UTC
The branch main has been updated by otis:
URL: https://cgit.FreeBSD.org/ports/commit/?id=2d8be352aaceb6f5195f45d2d94f92026573300e
commit 2d8be352aaceb6f5195f45d2d94f92026573300e
Author: Juraj Lutter <otis@FreeBSD.org>
AuthorDate: 2025-08-27 14:39:30 +0000
Commit: Juraj Lutter <otis@FreeBSD.org>
CommitDate: 2025-08-27 14:40:21 +0000
textproc/py-tokenizer: Correct pkg-descr
Text for a different package slipped through and made it
into the pkg-descr of this package.
Put the correct description in place instead.
---
textproc/py-tokenizer/Makefile | 1 +
textproc/py-tokenizer/pkg-descr | 14 ++++----------
2 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/textproc/py-tokenizer/Makefile b/textproc/py-tokenizer/Makefile
index 4f8afff7b8be..b4ad88c9c8d9 100644
--- a/textproc/py-tokenizer/Makefile
+++ b/textproc/py-tokenizer/Makefile
@@ -1,5 +1,6 @@
PORTNAME= tokenizer
PORTVERSION= 3.5.0
+PORTREVISION= 1
CATEGORIES= textproc python
MASTER_SITES= PYPI
PKGNAMEPREFIX= ${PYTHON_PKGNAMEPREFIX}
diff --git a/textproc/py-tokenizer/pkg-descr b/textproc/py-tokenizer/pkg-descr
index 665fa0186f94..c1f700edffe5 100644
--- a/textproc/py-tokenizer/pkg-descr
+++ b/textproc/py-tokenizer/pkg-descr
@@ -1,11 +1,5 @@
-This python utility package helps to create lazy modules. A lazy module defers
-loading (some of) its attributes until these attributes are first accessed. The
-module's lazy attributes in turn are attributes of other modules. These other
-modules will be imported/loaded only when (and if) associated attributes are
-used. A lazy import strategy can drastically reduce runtime and memory
-consumption.
+Tokenizer: A tokenizer for Icelandic text
-Additionally, this package provides a utility for optional imports with which
-one can import a module globally while triggering associated import errors only
-at use-sites (when and if a dependency is actually required, for example in the
-context of a specific functionality).
+Tokenization is a necessary first step in many natural language processing
+tasks, such as word counting, parsing, spell checking, corpus generation, and
+statistical analysis of text.
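The new pkg-descr describes tokenization as the first step in NLP pipelines. As a rough illustration only (this is a toy regex sketch, not the API of the py-tokenizer package, which handles Icelandic abbreviations, dates, and sentence boundaries far more carefully), a naive tokenizer might look like:

```python
import re

def naive_tokenize(text):
    """Split text into word and punctuation tokens.

    A toy stand-in for real tokenization: \\w+ matches runs of word
    characters (Unicode-aware in Python 3, so Icelandic letters work),
    and [^\\w\\s] matches individual punctuation marks.
    """
    return re.findall(r"\w+|[^\w\s]", text)

# Icelandic greeting split into word and punctuation tokens
print(naive_tokenize("Halló, heimur!"))  # ['Halló', ',', 'heimur', '!']
```

A real tokenizer for a task like word counting or corpus generation would also classify tokens (words, numbers, punctuation) and mark sentence boundaries, which is what packages like this one provide.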