Categorical data
This is an introduction to pandas categorical data type, including a short comparison with R’s factor.
Categoricals are a pandas data type corresponding to categorical variables in statistics. A categorical variable takes on a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood type, country affiliation, observation time or rating via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or ‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, …) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical order of the values. Internally, the data structure consists of a categories array and an integer array of codes which point to the real value in the categories array.
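For example, the categories and codes of a small Series can be inspected directly (a minimal sketch; the exact codes depend on the inferred category order):
import pandas as pd

s = pd.Series(["a", "b", "a", "c"], dtype="category")
s.cat.categories  # Index(['a', 'b', 'c'], dtype='object') -- the categories array
s.cat.codes       # 0, 1, 0, 2 (int8) -- integer positions into the categories array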
The categorical data type is useful in the following cases:
- A string variable consisting of only a few different values. Converting such a string variable to a categorical variable will save some memory, see here (a quick sketch follows after this list).
- The lexical order of a variable is not the same as the logical order (“one”, “two”, “three”). By converting to a categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of the lexical order, see here.
- As a signal to other Python libraries that this column should be treated as a categorical variable (e.g. to use suitable statistical methods or plot types).
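The memory saving mentioned in the first point can be checked with Series.memory_usage(); this is only an illustrative sketch, and the exact byte counts vary with the data and pandas version:
import pandas as pd

s_obj = pd.Series(["low", "medium", "high"] * 1000)  # object dtype: one Python string per row
s_cat = s_obj.astype("category")                     # category dtype: 3 categories plus small integer codes
s_obj.memory_usage(deep=True), s_cat.memory_usage(deep=True)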
See also the API docs on categoricals.
Object creation
Series creation
Categorical Series or columns in a DataFrame can be created in several ways:
By specifying dtype="category" when constructing a Series:
In [1]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [2]: s
Out[2]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
By converting an existing Series or column to a category dtype:
In [3]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})
In [4]: df["B"] = df["A"].astype("category")
In [5]: df
Out[5]:
A B
0 a a
1 b b
2 c c
3 a a
By using special functions, such as cut(), which groups data into discrete bins. See the example on tiling in the docs.
In [6]: df = pd.DataFrame({"value": np.random.randint(0, 100, 20)})
In [7]: labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]
In [8]: df["group"] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)
In [9]: df.head(10)
Out[9]:
value group
0 65 60 - 69
1 49 40 - 49
2 56 50 - 59
3 43 40 - 49
4 43 40 - 49
5 91 90 - 99
6 32 30 - 39
7 87 80 - 89
8 36 30 - 39
9 8 0 - 9
By passing a pandas.Categorical object to a Series or assigning it to a DataFrame.
In [10]: raw_cat = pd.Categorical(
....: ["a", "b", "c", "a"], categories=["b", "c", "d"], ordered=False
....: )
....:
In [11]: s = pd.Series(raw_cat)
In [12]: s
Out[12]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): ['b', 'c', 'd']
In [13]: df = pd.DataFrame({"A": ["a", "b", "c", "a"]})
In [14]: df["B"] = raw_cat
In [15]: df
Out[15]:
A B
0 a NaN
1 b b
2 c c
3 a NaN
Categorical data has a specific category dtype:
In [16]: df.dtypes
Out[16]:
A object
B category
dtype: object
DataFrame creation
Similar to the previous section where a single column was converted to categorical, all columns in a DataFrame can be batch converted to categorical either during or after construction.
This can be done during construction by specifying dtype="category" in the DataFrame constructor:
In [17]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")}, dtype="category")
In [18]: df.dtypes
Out[18]:
A category
B category
dtype: object
Note that the categories present in each column differ; the conversion is done column by column, so only labels present in a given column are categories:
In [19]: df["A"]
Out[19]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [20]: df["B"]
Out[20]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): ['b', 'c', 'd']
Analogously, all columns in an existing DataFrame can be batch converted using DataFrame.astype():
In [21]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
In [22]: df_cat = df.astype("category")
In [23]: df_cat.dtypes
Out[23]:
A category
B category
dtype: object
This conversion is likewise done column by column:
In [24]: df_cat["A"]
Out[24]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [25]: df_cat["B"]
Out[25]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (3, object): ['b', 'c', 'd']
Controlling behavior
In the examples above where we passed dtype='category', we used the default behavior:
- Categories are inferred from the data.
- Categories are unordered.
To control those behaviors, instead of passing 'category', use an instance of CategoricalDtype.
In [26]: from pandas.api.types import CategoricalDtype
In [27]: s = pd.Series(["a", "b", "c", "a"])
In [28]: cat_type = CategoricalDtype(categories=["b", "c", "d"], ordered=True)
In [29]: s_cat = s.astype(cat_type)
In [30]: s_cat
Out[30]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): ['b' < 'c' < 'd']
Similarly, a CategoricalDtype can be used with a DataFrame to ensure that categories are consistent among all columns.
In [31]: from pandas.api.types import CategoricalDtype
In [32]: df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
In [33]: cat_type = CategoricalDtype(categories=list("abcd"), ordered=True)
In [34]: df_cat = df.astype(cat_type)
In [35]: df_cat["A"]
Out[35]:
0 a
1 b
2 c
3 a
Name: A, dtype: category
Categories (4, object): ['a' < 'b' < 'c' < 'd']
In [36]: df_cat["B"]
Out[36]:
0 b
1 c
2 c
3 d
Name: B, dtype: category
Categories (4, object): ['a' < 'b' < 'c' < 'd']
To perform table-wise conversion, where all labels in the entire DataFrame are used as categories for each column, the categories parameter can be determined programmatically by categories = pd.unique(df.to_numpy().ravel()).
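A minimal sketch of that table-wise conversion (the variable names are illustrative):
import pandas as pd
from pandas.api.types import CategoricalDtype

df = pd.DataFrame({"A": list("abca"), "B": list("bccd")})
categories = pd.unique(df.to_numpy().ravel())               # all labels in the entire DataFrame
df_cat = df.astype(CategoricalDtype(categories=categories))
df_cat["A"].cat.categories                                  # every column now shares the same categories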
If you already have codes and categories, you can use the from_codes() constructor to save the factorize step during normal constructor mode:
In [37]: splitter = np.random.choice([0, 1], 5, p=[0.5, 0.5])
In [38]: s = pd.Series(pd.Categorical.from_codes(splitter, categories=["train", "test"]))
Regaining original data
To get back to the original Series or NumPy array, use Series.astype(original_dtype) or np.asarray(categorical):
In [39]: s = pd.Series(["a", "b", "c", "a"])
In [40]: s
Out[40]:
0 a
1 b
2 c
3 a
dtype: object
In [41]: s2 = s.astype("category")
In [42]: s2
Out[42]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [43]: s2.astype(str)
Out[43]:
0 a
1 b
2 c
3 a
dtype: object
In [44]: np.asarray(s2)
Out[44]: array(['a', 'b', 'c', 'a'], dtype=object)
In contrast to R’s factor function, categorical data does not convert input values to strings; categories will end up the same data type as the original values.
In contrast to R’s factor function, there is currently no way to assign/change labels at creation time. Use categories to change the categories after creation time.
CategoricalDtype
A categorical’s type is fully described by
- categories: a sequence of unique values and no missing values
- ordered: a boolean
This information can be stored in a CategoricalDtype. The categories argument is optional, which implies that the actual categories should be inferred from whatever is present in the data when the pandas.Categorical is created. The categories are assumed to be unordered by default.
In [45]: from pandas.api.types import CategoricalDtype
In [46]: CategoricalDtype(["a", "b", "c"])
Out[46]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=False, categories_dtype=object)
In [47]: CategoricalDtype(["a", "b", "c"], ordered=True)
Out[47]: CategoricalDtype(categories=['a', 'b', 'c'], ordered=True, categories_dtype=object)
In [48]: CategoricalDtype()
Out[48]: CategoricalDtype(categories=None, ordered=False, categories_dtype=None)
A CategoricalDtype can be used in any place pandas expects a dtype. For example pandas.read_csv(), pandas.DataFrame.astype(), or in the Series constructor.
As a convenience, you can use the string 'category' in place of a CategoricalDtype when you want the default behavior of the categories being unordered, and equal to the set values present in the array. In other words, dtype='category' is equivalent to dtype=CategoricalDtype().
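A small sketch of both spellings; the CSV content and column names here are made up for illustration:
import io
import pandas as pd
from pandas.api.types import CategoricalDtype

grade_type = CategoricalDtype(categories=["a", "b", "c"], ordered=True)
csv_data = io.StringIO("id,grade\n1,b\n2,a\n3,c\n")
df = pd.read_csv(csv_data, dtype={"grade": grade_type})   # explicit CategoricalDtype in read_csv
s = pd.Series(["a", "b", "a"], dtype="category")          # shorthand, same as dtype=CategoricalDtype()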
Equality semantics
Two instances of CategoricalDtype compare equal whenever they have the same categories and order. When comparing two unordered categoricals, the order of the categories is not considered.
In [49]: c1 = CategoricalDtype(["a", "b", "c"], ordered=False)
# Equal, since order is not considered when ordered=False
In [50]: c1 == CategoricalDtype(["b", "c", "a"], ordered=False)
Out[50]: True
# Unequal, since the second CategoricalDtype is ordered
In [51]: c1 == CategoricalDtype(["a", "b", "c"], ordered=True)
Out[51]: False
All instances of CategoricalDtype compare equal to the string 'category'.
In [52]: c1 == "category"
Out[52]: True
Description
Using describe() on categorical data will produce similar output to a Series or DataFrame of type string.
In [53]: cat = pd.Categorical(["a", "c", "c", np.nan], categories=["b", "a", "c"])
In [54]: df = pd.DataFrame({"cat": cat, "s": ["a", "c", "c", np.nan]})
In [55]: df.describe()
Out[55]:
cat s
count 3 3
unique 2 2
top c c
freq 2 2
In [56]: df["cat"].describe()
Out[56]:
count 3
unique 2
top c
freq 2
Name: cat, dtype: object
Working with categories
Categorical data has a categories and an ordered property, which list their possible values and whether the ordering matters or not. These properties are exposed as s.cat.categories and s.cat.ordered. If you don’t manually specify categories and ordering, they are inferred from the passed arguments.
In [57]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [58]: s.cat.categories
Out[58]: Index(['a', 'b', 'c'], dtype='object')
In [59]: s.cat.ordered
Out[59]: False
It’s also possible to pass in the categories in a specific order:
In [60]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"], categories=["c", "b", "a"]))
In [61]: s.cat.categories
Out[61]: Index(['c', 'b', 'a'], dtype='object')
In [62]: s.cat.ordered
Out[62]: False
New categorical data are not automatically ordered. You must explicitly pass ordered=True to indicate an ordered Categorical.
The result of unique() is not always the same as Series.cat.categories, because Series.unique() has a couple of guarantees, namely that it returns categories in the order of appearance, and it only includes values that are actually present.
In [63]: s = pd.Series(list("babc")).astype(CategoricalDtype(list("abcd")))
In [64]: s
Out[64]:
0 b
1 a
2 b
3 c
dtype: category
Categories (4, object): ['a', 'b', 'c', 'd']
# categories
In [65]: s.cat.categories
Out[65]: Index(['a', 'b', 'c', 'd'], dtype='object')
# uniques
In [66]: s.unique()
Out[66]:
['b', 'a', 'c']
Categories (4, object): ['a', 'b', 'c', 'd']
Renaming categories
Renaming categories is done by using the rename_categories() method:
In [67]: s = pd.Series(["a", "b", "c", "a"], dtype="category")
In [68]: s
Out[68]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [69]: new_categories = ["Group %s" % g for g in s.cat.categories]
In [70]: s = s.cat.rename_categories(new_categories)
In [71]: s
Out[71]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c']
# You can also pass a dict-like object to map the renaming
In [72]: s = s.cat.rename_categories({1: "x", 2: "y", 3: "z"})
In [73]: s
Out[73]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c']
In contrast to R’s factor, categorical data can have categories of other types than string.
Categories must be unique or a ValueError is raised:
In [74]: try:
....: s = s.cat.rename_categories([1, 1, 1])
....: except ValueError as e:
....: print("ValueError:", str(e))
....:
ValueError: Categorical categories must be unique
Categories must also not be NaN or a ValueError is raised:
In [75]: try:
....: s = s.cat.rename_categories([1, 2, np.nan])
....: except ValueError as e:
....: print("ValueError:", str(e))
....:
ValueError: Categorical categories cannot be null
Appending new categories
Appending categories can be done by using the add_categories() method:
In [76]: s = s.cat.add_categories([4])
In [77]: s.cat.categories
Out[77]: Index(['Group a', 'Group b', 'Group c', 4], dtype='object')
In [78]: s
Out[78]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (4, object): ['Group a', 'Group b', 'Group c', 4]
Removing categories
Removing categories can be done by using the remove_categories() method. Values which are removed are replaced by np.nan:
In [79]: s = s.cat.remove_categories([4])
In [80]: s
Out[80]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): ['Group a', 'Group b', 'Group c']
Removing unused categories
Removing unused categories can also be done:
In [81]: s = pd.Series(pd.Categorical(["a", "b", "a"], categories=["a", "b", "c", "d"]))
In [82]: s
Out[82]:
0 a
1 b
2 a
dtype: category
Categories (4, object): ['a', 'b', 'c', 'd']
In [83]: s.cat.remove_unused_categories()
Out[83]:
0 a
1 b
2 a
dtype: category
Categories (2, object): ['a', 'b']
Setting categories
If you want to remove and add new categories in one step (which has some speed advantage), or simply set the categories to a predefined scale, use set_categories().
In [84]: s = pd.Series(["one", "two", "four", "-"], dtype="category")
In [85]: s
Out[85]:
0 one
1 two
2 four
3 -
dtype: category
Categories (4, object): ['-', 'four', 'one', 'two']
In [86]: s = s.cat.set_categories(["one", "two", "three", "four"])
In [87]: s
Out[87]:
0 one
1 two
2 four
3 NaN
dtype: category
Categories (4, object): ['one', 'two', 'three', 'four']
Be aware that Categorical.set_categories() cannot know whether some category is omitted intentionally or because it is misspelled or (under Python3) due to a type difference (e.g., NumPy S1 dtype and Python strings). This can result in surprising behaviour!
Sorting and order
If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a meaning and certain operations are possible. If the categorical is unordered, .min()/.max() will raise a TypeError.
In [88]: s = pd.Series(pd.Categorical(["a", "b", "c", "a"], ordered=False))
In [89]: s = s.sort_values()
In [90]: s = pd.Series(["a", "b", "c", "a"]).astype(CategoricalDtype(ordered=True))
In [91]: s = s.sort_values()
In [92]: s
Out[92]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): ['a' < 'b' < 'c']
In [93]: s.min(), s.max()
Out[93]: ('a', 'c')
You can set categorical data to be ordered by using as_ordered() or unordered by using as_unordered(). These will by default return a new object.
In [94]: s.cat.as_ordered()
Out[94]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): ['a' < 'b' < 'c']
In [95]: s.cat.as_unordered()
Out[95]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
Sorting will use the order defined by categories, not any lexical order present on the data type. This is even true for strings and numeric data:
In [96]: s = pd.Series([1, 2, 3, 1], dtype="category")
In [97]: s = s.cat.set_categories([2, 3, 1], ordered=True)
In [98]: s
Out[98]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [99]: s = s.sort_values()
In [100]: s
Out[100]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [101]: s.min(), s.max()
Out[101]: (2, 1)
Reordering
Reordering the categories is possible via the Categorical.reorder_categories() and the Categorical.set_categories() methods. For Categorical.reorder_categories(), all old categories must be included in the new categories and no new categories are allowed. This will necessarily make the sort order the same as the categories order.
In [102]: s = pd.Series([1, 2, 3, 1], dtype="category")
In [103]: s = s.cat.reorder_categories([2, 3, 1], ordered=True)
In [104]: s
Out[104]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [105]: s = s.sort_values()
In [106]: s
Out[106]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [107]: s.min(), s.max()
Out[107]: (2, 1)
Note the difference between assigning new categories and reordering the categories: the first renames categories and therefore the individual values in the Series, but if the first position was sorted last, the renamed value will still be sorted last. Reordering means that the way values are sorted is different afterwards, but not that individual values in the Series are changed.
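A minimal sketch of that difference; the expected results are shown in the comments and assume a small unordered Series:
import pandas as pd

s = pd.Series(["a", "b", "a"], dtype="category")

# Renaming changes the labels of the values themselves ...
s.cat.rename_categories(["x", "y"]).sort_values()                 # x, x, y
# ... while reordering keeps the values and only changes how they sort.
s.cat.reorder_categories(["b", "a"], ordered=True).sort_values()  # b, a, a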
If the Categorical is not ordered, Series.min() and Series.max() will raise TypeError. Numeric operations like +, -, *, / and operations based on them (e.g. Series.median(), which would need to compute the mean between two values if the length of an array is even) do not work and raise a TypeError.
Multi column sorting
A categorical dtyped column will participate in a multi-column sort in a similar manner to other columns. The ordering of the categorical is determined by the categories of that column.
In [108]: dfs = pd.DataFrame(
.....: {
.....: "A": pd.Categorical(
.....: list("bbeebbaa"),
.....: categories=["e", "a", "b"],
.....: ordered=True,
.....: ),
.....: "B": [1, 2, 1, 2, 2, 1, 2, 1],
.....: }
.....: )
.....:
In [109]: dfs.sort_values(by=["A", "B"])
Out[109]:
A B
2 e 1
3 e 2
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
Reordering the categories changes a future sort.
In [110]: dfs["A"] = dfs["A"].cat.reorder_categories(["a", "b", "e"])
In [111]: dfs.sort_values(by=["A", "B"])
Out[111]:
A B
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
2 e 1
3 e 2
Comparisons
Comparing categorical data with other objects is possible in three cases:
- Comparing equality (== and !=) to a list-like object (list, Series, array, …) of the same length as the categorical data.
- All comparisons (==, !=, >, >=, <, and <=) of categorical data to another categorical Series, when ordered==True and the categories are the same.
- All comparisons of a categorical data to a scalar.
All other comparisons, especially “non-equality” comparisons of two categoricals with different categories or a categorical with any list-like object, will raise a TypeError.
Any “non-equality” comparisons of categorical data with a Series, np.array, list or categorical data with different categories or ordering will raise a TypeError because custom categories ordering could be interpreted in two ways: one with taking into account the ordering and one without.
In [112]: cat = pd.Series([1, 2, 3]).astype(CategoricalDtype([3, 2, 1], ordered=True))
In [113]: cat_base = pd.Series([2, 2, 2]).astype(CategoricalDtype([3, 2, 1], ordered=True))
In [114]: cat_base2 = pd.Series([2, 2, 2]).astype(CategoricalDtype(ordered=True))
In [115]: cat
Out[115]:
0 1
1 2
2 3
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [116]: cat_base
Out[116]:
0 2
1 2
2 2
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [117]: cat_base2
Out[117]:
0 2
1 2
2 2
dtype: category
Categories (1, int64): [2]
Comparing to a categorical with the same categories and ordering or to a scalar works:
In [118]: cat > cat_base
Out[118]:
0 True
1 False
2 False
dtype: bool
In [119]: cat > 2
Out[119]:
0 True
1 False
2 False
dtype: bool
Equality comparisons work with any list-like object of same length and scalars:
In [120]: cat == cat_base
Out[120]:
0 False
1 True
2 False
dtype: bool
In [121]: cat == np.array([1, 2, 3])
Out[121]:
0 True
1 True
2 True
dtype: bool
In [122]: cat == 2
Out[122]:
0 False
1 True
2 False
dtype: bool
This doesn’t work because the categories are not the same:
In [123]: try:
.....: cat > cat_base2
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Categoricals can only be compared if 'categories' are the same.
If you want to do a “non-equality” comparison of a categorical series with a list-like object which is not categorical data, you need to be explicit and convert the categorical data back to the original values:
In [124]: base = np.array([1, 2, 3])
In [125]: try:
.....: cat > base
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot compare a Categorical for op __gt__ with type <class 'numpy.ndarray'>.
If you want to compare values, use 'np.asarray(cat) <op> other'.
In [126]: np.asarray(cat) > base
Out[126]: array([False, False, False])
When you compare two unordered categoricals with the same categories, the order is not considered:
In [127]: c1 = pd.Categorical(["a", "b"], categories=["a", "b"], ordered=False)
In [128]: c2 = pd.Categorical(["a", "b"], categories=["b", "a"], ordered=False)
In [129]: c1 == c2
Out[129]: array([ True, True])
Operations
Apart from Series.min(), Series.max() and Series.mode(), the following operations are possible with categorical data:
Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data:
In [130]: s = pd.Series(pd.Categorical(["a", "b", "c", "c"], categories=["c", "a", "b", "d"]))
In [131]: s.value_counts()
Out[131]:
c 2
a 1
b 1
d 0
Name: count, dtype: int64
DataFrame methods like DataFrame.sum() also show “unused” categories when observed=False.
In [132]: columns = pd.Categorical(
.....: ["One", "One", "Two"], categories=["One", "Two", "Three"], ordered=True
.....: )
.....:
In [133]: df = pd.DataFrame(
.....: data=[[1, 2, 3], [4, 5, 6]],
.....: columns=pd.MultiIndex.from_arrays([["A", "B", "B"], columns]),
.....: ).T
.....:
In [134]: df.groupby(level=1, observed=False).sum()
Out[134]:
0 1
One 3 9
Two 3 6
Three 0 0
Groupby will also show “unused” categories when observed=False:
In [135]: cats = pd.Categorical(
.....: ["a", "b", "b", "b", "c", "c", "c"], categories=["a", "b", "c", "d"]
.....: )
.....:
In [136]: df = pd.DataFrame({"cats": cats, "values": [1, 2, 2, 2, 3, 4, 5]})
In [137]: df.groupby("cats", observed=False).mean()
Out[137]:
values
cats
a 1.0
b 2.0
c 4.0
d NaN
In [138]: cats2 = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
In [139]: df2 = pd.DataFrame(
.....: {
.....: "cats": cats2,
.....: "B": ["c", "d", "c", "d"],
.....: "values": [1, 2, 3, 4],
.....: }
.....: )
.....:
In [140]: df2.groupby(["cats", "B"], observed=False).mean()
Out[140]:
values
cats B
a c 1.0
d 2.0
b c 3.0
d 4.0
c c NaN
d NaN
Pivot tables:
In [141]: raw_cat = pd.Categorical(["a", "a", "b", "b"], categories=["a", "b", "c"])
In [142]: df = pd.DataFrame({"A": raw_cat, "B": ["c", "d", "c", "d"], "values": [1, 2, 3, 4]})
In [143]: pd.pivot_table(df, values="values", index=["A", "B"], observed=False)
Out[143]:
values
A B
a c 1.0
d 2.0
b c 3.0
d 4.0
Data munging
The optimized pandas data access methods .loc, .iloc, .at, and .iat, work as normal. The only difference is the return type (for getting) and that only values already in categories can be assigned.
Getting
If the slicing operation returns either a DataFrame or a column of type Series, the category dtype is preserved.
In [144]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])
In [145]: cats = pd.Series(["a", "b", "b", "b", "c", "c", "c"], dtype="category", index=idx)
In [146]: values = [1, 2, 2, 2, 3, 4, 5]
In [147]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)
In [148]: df.iloc[2:4, :]
Out[148]:
cats values
j b 2
k b 2
In [149]: df.iloc[2:4, :].dtypes
Out[149]:
cats category
values int64
dtype: object
In [150]: df.loc["h":"j", "cats"]
Out[150]:
h a
i b
j b
Name: cats, dtype: category
Categories (3, object): ['a', 'b', 'c']
In [151]: df[df["cats"] == "b"]
Out[151]:
cats values
i b 2
j b 2
k b 2
An example where the category type is not preserved is if you take one single row: the resulting Series is of dtype object:
# get the complete "h" row as a Series
In [152]: df.loc["h", :]
Out[152]:
cats a
values 1
Name: h, dtype: object
Returning a single item from categorical data will also return the value, not a categorical of length “1”.
In [153]: df.iat[0, 0]
Out[153]: 'a'
In [154]: df["cats"] = df["cats"].cat.rename_categories(["x", "y", "z"])
In [155]: df.at["h", "cats"] # returns a string
Out[155]: 'x'
This is in contrast to R’s factor function, where factor(c(1,2,3))[1] returns a single value factor.
To get a single value Series of type category, you pass in a list with a single value:
In [156]: df.loc[["h"], "cats"]
Out[156]:
h x
Name: cats, dtype: category
Categories (3, object): ['x', 'y', 'z']
String and datetime accessors
The accessors .dt and .str will work if the s.cat.categories are of an appropriate type:
In [157]: str_s = pd.Series(list("aabb"))
In [158]: str_cat = str_s.astype("category")
In [159]: str_cat
Out[159]:
0 a
1 a
2 b
3 b
dtype: category
Categories (2, object): ['a', 'b']
In [160]: str_cat.str.contains("a")
Out[160]:
0 True
1 True
2 False
3 False
dtype: bool
In [161]: date_s = pd.Series(pd.date_range("1/1/2015", periods=5))
In [162]: date_cat = date_s.astype("category")
In [163]: date_cat
Out[163]:
0 2015-01-01
1 2015-01-02
2 2015-01-03
3 2015-01-04
4 2015-01-05
dtype: category
Categories (5, datetime64[ns]): [2015-01-01, 2015-01-02, 2015-01-03, 2015-01-04, 2015-01-05]
In [164]: date_cat.dt.day
Out[164]:
0 1
1 2
2 3
3 4
4 5
dtype: int32
The returned Series (or DataFrame) is of the same type as if you used the .str.<method> / .dt.<method> on a Series of that type (and not of type category!).
That means that the values returned from methods and properties on the accessors of a Series and the values returned from methods and properties on the accessors of this Series transformed to one of type category will be equal:
In [165]: ret_s = str_s.str.contains("a")
In [166]: ret_cat = str_cat.str.contains("a")
In [167]: ret_s.dtype == ret_cat.dtype
Out[167]: True
In [168]: ret_s == ret_cat
Out[168]:
0 True
1 True
2 True
3 True
dtype: bool
The work is done on the categories and then a new Series is constructed. This has some performance implication if you have a Series of type string, where lots of elements are repeated (i.e. the number of unique elements in the Series is a lot smaller than the length of the Series). In this case it can be faster to convert the original Series to one of type category and use .str.<method> or .dt.<property> on that.
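A sketch of that idea; whether the category version is actually faster depends on how strongly the values repeat:
import pandas as pd

long_s = pd.Series(["apple", "banana", "cherry"] * 100_000)  # many repeated strings
long_cat = long_s.astype("category")

long_s.str.upper()    # applies the string method to every row
long_cat.str.upper()  # applies it once per category, then expands -- same resulting values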
Setting
Setting values in a categorical column (or Series) works as long as the value is included in the categories:
In [169]: idx = pd.Index(["h", "i", "j", "k", "l", "m", "n"])
In [170]: cats = pd.Categorical(["a", "a", "a", "a", "a", "a", "a"], categories=["a", "b"])
In [171]: values = [1, 1, 1, 1, 1, 1, 1]
In [172]: df = pd.DataFrame({"cats": cats, "values": values}, index=idx)
In [173]: df.iloc[2:4, :] = [["b", 2], ["b", 2]]
In [174]: df
Out[174]:
cats values
h a 1
i a 1
j b 2
k b 2
l a 1
m a 1
n a 1
In [175]: try:
.....: df.iloc[2:4, :] = [["c", 3], ["c", 3]]
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot setitem on a Categorical with a new category, set the categories first
Setting values by assigning categorical data will also check that the categories match:
In [176]: df.loc["j":"k", "cats"] = pd.Categorical(["a", "a"], categories=["a", "b"])
In [177]: df
Out[177]:
cats values
h a 1
i a 1
j a 2
k a 2
l a 1
m a 1
n a 1
In [178]: try:
.....: df.loc["j":"k", "cats"] = pd.Categorical(["b", "b"], categories=["a", "b", "c"])
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot set a Categorical with another, without identical categories
Assigning a Categorical to parts of a column of other types will use the values:
In [179]: df = pd.DataFrame({"a": [1, 1, 1, 1, 1], "b": ["a", "a", "a", "a", "a"]})
In [180]: df.loc[1:2, "a"] = pd.Categorical(["b", "b"], categories=["a", "b"])
In [181]: df.loc[2:3, "b"] = pd.Categorical(["b", "b"], categories=["a", "b"])
In [182]: df
Out[182]:
a b
0 1 a
1 b a
2 b b
3 1 b
4 1 a
In [183]: df.dtypes
Out[183]:
a object
b object
dtype: object
Merging / concatenation
By default, combining Series or DataFrames which contain the same categories results in category dtype, otherwise results will depend on the dtype of the underlying categories. Merges that result in non-categorical dtypes will likely have higher memory usage. Use .astype or union_categoricals to ensure category results.
In [184]: from pandas.api.types import union_categoricals
# same categories
In [185]: s1 = pd.Series(["a", "b"], dtype="category")
In [186]: s2 = pd.Series(["a", "b", "a"], dtype="category")
In [187]: pd.concat([s1, s2])
Out[187]:
0 a
1 b
0 a
1 b
2 a
dtype: category
Categories (2, object): ['a', 'b']
# different categories
In [188]: s3 = pd.Series(["b", "c"], dtype="category")
In [189]: pd.concat([s1, s3])
Out[189]:
0 a
1 b
0 b
1 c
dtype: object
# Output dtype is inferred based on categories values
In [190]: int_cats = pd.Series([1, 2], dtype="category")
In [191]: float_cats = pd.Series([3.0, 4.0], dtype="category")
In [192]: pd.concat([int_cats, float_cats])
Out[192]:
0 1.0
1 2.0
0 3.0
1 4.0
dtype: float64
In [193]: pd.concat([s1, s3]).astype("category")
Out[193]:
0 a
1 b
0 b
1 c
dtype: category
Categories (3, object): ['a', 'b', 'c']
In [194]: union_categoricals([s1.array, s3.array])
Out[194]:
['a', 'b', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
The following table summarizes the results of merging Categoricals:

arg1                 arg2                 identical    result
category             category             True         category
category (object)    category (object)    False        object (dtype is inferred)
category (int)       category (float)     False        float (dtype is inferred)
Unioning
If you want to combine categoricals that do not necessarily have the same categories, the union_categoricals() function will combine a list-like of categoricals. The new categories will be the union of the categories being combined.
In [195]: from pandas.api.types import union_categoricals
In [196]: a = pd.Categorical(["b", "c"])
In [197]: b = pd.Categorical(["a", "b"])
In [198]: union_categoricals([a, b])
Out[198]:
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
By default, the resulting categories will be ordered as they appear in the data. If you want the categories to be lexsorted, use the sort_categories=True argument.
In [199]: union_categoricals([a, b], sort_categories=True)
Out[199]:
['b', 'c', 'a', 'b']
Categories (3, object): ['a', 'b', 'c']
union_categoricals also works with the “easy” case of combining two categoricals of the same categories and order information (e.g. what you could also append for).
In [200]: a = pd.Categorical(["a", "b"], ordered=True)
In [201]: b = pd.Categorical(["a", "b", "a"], ordered=True)
In [202]: union_categoricals([a, b])
Out[202]:
['a', 'b', 'a', 'b', 'a']
Categories (2, object): ['a' < 'b']
The below raises TypeError because the categories are ordered and not identical.
In [203]: a = pd.Categorical(["a", "b"], ordered=True)
In [204]: b = pd.Categorical(["a", "b", "c"], ordered=True)
In [205]: union_categoricals([a, b])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[205], line 1
----> 1 union_categoricals([a, b])
File ~/work/pandas/pandas/pandas/core/dtypes/concat.py:341, in union_categoricals(to_union, sort_categories, ignore_order)
339 if all(c.ordered for c in to_union):
340 msg = "to union ordered Categoricals, all categories must be the same"
--> 341 raise TypeError(msg)
342 raise TypeError("Categorical.ordered must be the same")
344 if ignore_order:
TypeError: to union ordered Categoricals, all categories must be the same
Ordered categoricals with different categories or orderings can be combined by using the ignore_order=True argument.
In [206]: a = pd.Categorical(["a", "b", "c"], ordered=True)
In [207]: b = pd.Categorical(["c", "b", "a"], ordered=True)
In [208]: union_categoricals([a, b], ignore_order=True)
Out[208]:
['a', 'b', 'c', 'c', 'b', 'a']
Categories (3, object): ['a', 'b', 'c']
union_categoricals() also works with a CategoricalIndex, or Series containing categorical data, but note that the resulting array will always be a plain Categorical:
In [209]: a = pd.Series(["b", "c"], dtype="category")
In [210]: b = pd.Series(["a", "b"], dtype="category")
In [211]: union_categoricals([a, b])
Out[211]:
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
union_categoricals may recode the integer codes for categories when combining categoricals. This is likely what you want, but if you are relying on the exact numbering of the categories, be aware.
In [212]: c1 = pd.Categorical(["b", "c"])
In [213]: c2 = pd.Categorical(["a", "b"])
In [214]: c1
Out[214]:
['b', 'c']
Categories (2, object): ['b', 'c']
# "b" is coded to 0
In [215]: c1.codes
Out[215]: array([0, 1], dtype=int8)
In [216]: c2
Out[216]:
['a', 'b']
Categories (2, object): ['a', 'b']
# "b" is coded to 1
In [217]: c2.codes
Out[217]: array([0, 1], dtype=int8)
In [218]: c = union_categoricals([c1, c2])
In [219]: c
Out[219]:
['b', 'c', 'a', 'b']
Categories (3, object): ['b', 'c', 'a']
# "b" is coded to 0 throughout, same as c1, different from c2
In [220]: c.codes
Out[220]: array([0, 1, 2, 0], dtype=int8)
Getting data in/out
You can write data that contains category dtypes to an HDFStore. See here for an example and caveats.
It is also possible to write data to and read data from Stata format files. See here for an example and caveats.
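A rough sketch of both round trips; the file names are made up, and the HDF5 part assumes the optional PyTables dependency is installed:
import pandas as pd

df_io = pd.DataFrame({"cats": pd.Series(["a", "b", "a"], dtype="category"), "vals": [1, 2, 3]})

# HDF5: the 'table' format can store the category dtype
df_io.to_hdf("cats.h5", key="df", format="table")
pd.read_hdf("cats.h5", "df").dtypes          # "cats" should still be category

# Stata: the categorical is written as a labeled variable
df_io.to_stata("cats.dta")
pd.read_stata("cats.dta").dtypes             # "cats" comes back as category (the category order may differ)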
Writing to a CSV file will convert the data, effectively removing any information about the categorical (categories and ordering). So if you read back the CSV file you have to convert the relevant columns back to category and assign the right categories and categories ordering.
In [221]: import io
In [222]: s = pd.Series(pd.Categorical(["a", "b", "b", "a", "a", "d"]))
# rename the categories
In [223]: s = s.cat.rename_categories(["very good", "good", "bad"])
# reorder the categories and add missing categories
In [224]: s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
In [225]: df = pd.DataFrame({"cats": s, "vals": [1, 2, 3, 4, 5, 6]})
In [226]: csv = io.StringIO()
In [227]: df.to_csv(csv)
In [228]: df2 = pd.read_csv(io.StringIO(csv.getvalue()))
In [229]: df2.dtypes
Out[229]:
Unnamed: 0 int64
cats object
vals int64
dtype: object
In [230]: df2["cats"]
Out[230]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: object
# Redo the category
In [231]: df2["cats"] = df2["cats"].astype("category")
In [232]: df2["cats"] = df2["cats"].cat.set_categories(
.....: ["very bad", "bad", "medium", "good", "very good"]
.....: )
.....:
In [233]: df2.dtypes
Out[233]:
Unnamed: 0 int64
cats category
vals int64
dtype: object
In [234]: df2["cats"]
Out[234]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: category
Categories (5, object): ['very bad', 'bad', 'medium', 'good', 'very good']
The same holds for writing to a SQL database with to_sql.
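For example, a sketch of a SQL round trip through an in-memory SQLite database (the table and variable names are made up); the category information is lost on write and has to be restored by hand:
import sqlite3
import pandas as pd

df_sql = pd.DataFrame({"cats": pd.Series(["a", "b", "a"], dtype="category"), "vals": [1, 2, 3]})

con = sqlite3.connect(":memory:")
df_sql.to_sql("cat_table", con, index=False)                 # written as plain values
round_trip = pd.read_sql("SELECT * FROM cat_table", con)     # comes back as object dtype
round_trip["cats"] = round_trip["cats"].astype("category")   # redo the category dtype
con.close()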
Missing data
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
Missing values should not be included in the Categorical’s categories, only in the values. Instead, it is understood that NaN is different, and is always a possibility. When working with the Categorical’s codes, missing values will always have a code of -1.
In [235]: s = pd.Series(["a", "b", np.nan, "a"], dtype="category")
# only two categories
In [236]: s
Out[236]:
0 a
1 b
2 NaN
3 a
dtype: category
Categories (2, object): ['a', 'b']
In [237]: s.cat.codes
Out[237]:
0 0
1 1
2 -1
3 0
dtype: int8
In [238]: s = pd.Series(["a", "b", np.nan], dtype="category")
In [239]: s
Out[239]:
0 a
1 b
2 NaN
dtype: category
Categories (2, object): ['a', 'b']
In [240]: pd.isna(s)
Out[240]:
0 False
1 False
2 True
dtype: bool
In [241]: s.fillna("a")
Out[241]:
0 a
1 b
2 a
dtype: category
Categories (2, object): ['a', 'b']
Differences to R’s factor
The following differences to R’s factor functions can be observed:
- R’s levels are named categories.
- R’s levels are always of type string, while categories in pandas can be of any dtype.
- It’s not possible to specify labels at creation time. Use s.cat.rename_categories(new_labels) afterwards.
- In contrast to R’s factor function, using categorical data as the sole input to create a new categorical series will not remove unused categories but create a new categorical series which is equal to the passed in one! (A short sketch of this follows after this list.)
- R allows for missing values to be included in its levels (pandas’ categories). pandas does not allow NaN categories, but missing values can still be in the values.
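A small sketch of the fourth point (creating a new categorical from an existing one keeps its unused categories):
import pandas as pd

cat = pd.Categorical(["a", "a"], categories=["a", "b", "c"])  # "b" and "c" are unused
pd.Categorical(cat).categories                   # Index(['a', 'b', 'c'], dtype='object')
pd.Series(cat, dtype="category").cat.categories  # unused categories are kept here too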
Gotchas
Memory usage
The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In contrast, an object dtype is a constant times the length of the data.
In [242]: s = pd.Series(["foo", "bar"] * 1000)
# object dtype
In [243]: s.nbytes
Out[243]: 16000
# category dtype
In [244]: s.astype("category").nbytes
Out[244]: 2016
If the number of categories approaches the length of the data, the Categorical will use nearly the same or more memory than an equivalent object dtype representation.
In [245]: s = pd.Series(["foo%04d" % i for i in range(2000)])
# object dtype
In [246]: s.nbytes
Out[246]: 16000
# category dtype
In [247]: s.astype("category").nbytes
Out[247]: 20000
Categorical is not a numpy array
Currently, categorical data and the underlying Categorical is implemented as a Python object and not as a low-level NumPy array dtype. This leads to some problems.
NumPy itself doesn’t know about the new dtype:
In [248]: try:
.....: np.dtype("category")
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: data type 'category' not understood
In [249]: dtype = pd.Categorical(["a"]).dtype
In [250]: try:
.....: np.dtype(dtype)
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: Cannot interpret 'CategoricalDtype(categories=['a'], ordered=False, categories_dtype=object)' as a data type
Dtype comparisons work:
In [251]: dtype == np.str_
Out[251]: False
In [252]: np.str_ == dtype
Out[252]: False
To check if a Series contains Categorical data, use hasattr(s, 'cat'):
In [253]: hasattr(pd.Series(["a"], dtype="category"), "cat")
Out[253]: True
In [254]: hasattr(pd.Series(["a"]), "cat")
Out[254]: False
Using NumPy functions on a Series of type category should not work as Categoricals are not numeric data (even in the case that .categories is numeric).
In [255]: s = pd.Series(pd.Categorical([1, 2, 3, 4]))
In [256]: try:
.....: np.sum(s)
.....: except TypeError as e:
.....: print("TypeError:", str(e))
.....:
TypeError: 'Categorical' with dtype category does not support reduction 'sum'
If such a function works, please file a bug at pandas-dev/pandas!
dtype in apply
pandas currently does not preserve the dtype in apply functions: If you apply along rows you get a Series of object dtype (same as getting a row → getting one element will return a basic type) and applying along columns will also convert to object. NaN values are unaffected. You can use fillna to handle missing values before applying a function.
In [257]: df = pd.DataFrame(
.....: {
.....: "a": [1, 2, 3, 4],
.....: "b": ["a", "b", "c", "d"],
.....: "cats": pd.Categorical([1, 2, 3, 2]),
.....: }
.....: )
.....:
In [258]: df.apply(lambda row: type(row["cats"]), axis=1)
Out[258]:
0 <class 'int'>
1 <class 'int'>
2 <class 'int'>
3 <class 'int'>
dtype: object
In [259]: df.apply(lambda col: col.dtype, axis=0)
Out[259]:
a int64
b object
cats category
dtype: object
Categorical index
CategoricalIndex is a type of index that is useful for supporting indexing with duplicates. This is a container around a Categorical and allows efficient indexing and storage of an index with a large number of duplicated elements. See the advanced indexing docs for a more detailed explanation.
Setting the index will create a CategoricalIndex:
In [260]: cats = pd.Categorical([1, 2, 3, 4], categories=[4, 2, 3, 1])
In [261]: strings = ["a", "b", "c", "d"]
In [262]: values = [4, 2, 3, 1]
In [263]: df = pd.DataFrame({"strings": strings, "values": values}, index=cats)
In [264]: df.index
Out[264]: CategoricalIndex([1, 2, 3, 4], categories=[4, 2, 3, 1], ordered=False, dtype='category')
# This now sorts by the categories order
In [265]: df.sort_index()
Out[265]:
strings values
4 d 1
2 b 2
3 c 3
1 a 4
Side effects
Constructing a Series from a Categorical will not copy the input Categorical. This means that changes to the Series will in most cases change the original Categorical:
In [266]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
In [267]: s = pd.Series(cat, name="cat")
In [268]: cat
Out[268]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [269]: s.iloc[0:2] = 10
In [270]: cat
Out[270]:
[10, 10, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
Use copy=True to prevent such a behaviour or simply don’t reuse Categoricals:
In [271]: cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
In [272]: s = pd.Series(cat, name="cat", copy=True)
In [273]: cat
Out[273]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [274]: s.iloc[0:2] = 10
In [275]: cat
Out[275]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
This also happens in some cases when you supply a NumPy array instead of a Categorical: using an int array (e.g. np.array([1,2,3,4])) will exhibit the same behavior, while using a string array (e.g. np.array(["a","b","c","a"])) will not.