narwhals.Series.str
contains(pattern, *, literal=False)
Check if string contains a substring that matches a pattern.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
pattern | str | A character sequence or valid regular expression pattern. | required |
literal | bool | If True, treats the pattern as a literal string. If False, assumes the pattern is a regular expression. | False |
Returns:
Type | Description |
---|---|
SeriesT | A new Series with boolean values indicating if each string contains the pattern. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["cat", "dog", "rabbit and parrot", "dove", None]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_contains(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.contains("parrot|dove").to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_contains:
>>> agnostic_contains(s_pd)
0 False
1 False
2 True
3 True
4 None
dtype: object
>>> agnostic_contains(s_pl)
shape: (5,)
Series: '' [bool]
[
false
false
true
true
null
]
>>> agnostic_contains(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
false,
false,
true,
true,
null
]
]
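The literal flag changes how the pattern is interpreted. As an illustrative sketch (not part of the original docstring), passing literal=True makes the same pattern match nothing here, since no value contains the literal text "parrot|dove" (expected pandas output shown for illustration):
>>> def agnostic_contains_literal(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.contains("parrot|dove", literal=True).to_native()
>>> agnostic_contains_literal(s_pd)
0 False
1 False
2 False
3 False
4 None
dtype: object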
ends_with(suffix)
Check if string values end with a substring.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
suffix | str | Suffix substring to check for. | required |
Returns:
Type | Description |
---|---|
SeriesT | A new Series with boolean values indicating if each string ends with the suffix. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["apple", "mango", None]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_ends_with(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.ends_with("ngo").to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_ends_with:
>>> agnostic_ends_with(s_pd)
0 False
1 True
2 None
dtype: object
>>> agnostic_ends_with(s_pl)
shape: (3,)
Series: '' [bool]
[
false
true
null
]
>>> agnostic_ends_with(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
false,
true,
null
]
]
head(n=5)
Take the first n elements of each string.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
n | int | Number of elements to take. Negative indexing is supported (see note (1.)). | 5 |
Returns:
Type | Description |
---|---|
SeriesT | A new Series containing the first n characters of each string. |
Notes
1. When the n input is negative, head returns characters up to the n-th from the end of the string. For example, if n = -3, all characters except the last three are returned.
2. If the string has fewer than n characters, the full string is returned.
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["Atatata", "taata", "taatatata", "zukkyun"]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_head(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.head().to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_head:
>>> agnostic_head(s_pd)
0 Atata
1 taata
2 taata
3 zukky
dtype: object
>>> agnostic_head(s_pl)
shape: (4,)
Series: '' [str]
[
"Atata"
"taata"
"taata"
"zukky"
]
>>> agnostic_head(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"Atata",
"taata",
"taata",
"zukky"
]
]
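The negative-n behaviour described in note (1.) can be sketched with a hypothetical variant of the function above (expected pandas output shown for illustration):
>>> def agnostic_head_neg(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.head(-3).to_native()
>>> agnostic_head_neg(s_pd)
0 Atat
1 ta
2 taatat
3 zukk
dtype: object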
len_chars()
Return the length of each string as the number of characters.
Returns:
Type | Description |
---|---|
SeriesT | A new Series containing the length of each string in characters. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["foo", "Café", "345", "東京", None]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_len_chars(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.len_chars().to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_len_chars:
>>> agnostic_len_chars(s_pd)
0 3.0
1 4.0
2 3.0
3 2.0
4 NaN
dtype: float64
>>> agnostic_len_chars(s_pl)
shape: (5,)
Series: '' [u32]
[
3
4
3
2
null
]
>>> agnostic_len_chars(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
3,
4,
3,
2,
null
]
]
replace(pattern, value, *, literal=False, n=1)
Replace first matching regex/literal substring with a new string value.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
pattern | str | A valid regular expression pattern. | required |
value | str | String that will replace the matched substring. | required |
literal | bool | Treat pattern as a literal string. | False |
n | int | Number of matches to replace. | 1 |
Returns:
Type | Description |
---|---|
SeriesT | A new Series with the regex/literal pattern replaced with the specified value. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["123abc", "abc abc123"]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_replace(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... s = s.str.replace("abc", "")
... return s.to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_replace:
>>> agnostic_replace(s_pd)
0 123
1 abc123
dtype: object
>>> agnostic_replace(s_pl)
shape: (2,)
Series: '' [str]
[
"123"
" abc123"
]
>>> agnostic_replace(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"123",
" abc123"
]
]
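Because replace only substitutes the first match by default, the n parameter can be raised to replace more occurrences. A minimal sketch (hypothetical helper; expected Polars output shown for illustration):
>>> def agnostic_replace_n(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.replace("abc", "", n=2).to_native()
>>> agnostic_replace_n(s_pl)
shape: (2,)
Series: '' [str]
[
"123"
" 123"
]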
replace_all(pattern, value, *, literal=False)
Replace all matching regex/literal substrings with a new string value.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
pattern | str | A valid regular expression pattern. | required |
value | str | String that will replace the matched substring. | required |
literal | bool | Treat pattern as a literal string. | False |
Returns:
Type | Description |
---|---|
SeriesT | A new Series with all occurrences of pattern replaced with the specified value. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["123abc", "abc abc123"]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_replace_all(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... s = s.str.replace_all("abc", "")
... return s.to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_replace_all:
>>> agnostic_replace_all(s_pd)
0 123
1 123
dtype: object
>>> agnostic_replace_all(s_pl)
shape: (2,)
Series: '' [str]
[
"123"
" 123"
]
>>> agnostic_replace_all(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"123",
" 123"
]
]
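When the pattern contains regex metacharacters, literal=True avoids having to escape them. A minimal sketch with hypothetical data (expected Polars output shown for illustration):
>>> def agnostic_replace_dots(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.replace_all(".", "-", literal=True).to_native()
>>> agnostic_replace_dots(pl.Series(["a.b", "c.d.e"]))
shape: (2,)
Series: '' [str]
[
"a-b"
"c-d-e"
]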
slice(offset, length=None)
Create subslices of the string values of a Series.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
offset | int | Start index. Negative indexing is supported. | required |
length | int \| None | Length of the slice. If set to None (default), the slice is taken to the end of the string. | None |
Returns:
Type | Description |
---|---|
SeriesT | A new Series containing subslices of each string. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["pear", None, "papaya", "dragonfruit"]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_slice(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.slice(4, length=3).to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_slice:
>>> agnostic_slice(s_pd)
0
1 None
2 ya
3 onf
dtype: object
>>> agnostic_slice(s_pl)
shape: (4,)
Series: '' [str]
[
""
null
"ya"
"onf"
]
>>> agnostic_slice(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"",
null,
"ya",
"onf"
]
]
Using negative indexes:
>>> def agnostic_slice(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.slice(-3).to_native()
>>> agnostic_slice(s_pd)
0 ear
1 None
2 aya
3 uit
dtype: object
>>> agnostic_slice(s_pl)
shape: (4,)
Series: '' [str]
[
"ear"
null
"aya"
"uit"
]
>>> agnostic_slice(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"ear",
null,
"aya",
"uit"
]
]
starts_with(prefix)
Check if string values start with a substring.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
prefix | str | Prefix substring to check for. | required |
Returns:
Type | Description |
---|---|
SeriesT | A new Series with boolean values indicating if each string starts with the prefix. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["apple", "mango", None]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_starts_with(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.starts_with("app").to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_starts_with:
>>> agnostic_starts_with(s_pd)
0 True
1 False
2 None
dtype: object
>>> agnostic_starts_with(s_pl)
shape: (3,)
Series: '' [bool]
[
true
false
null
]
>>> agnostic_starts_with(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
true,
false,
null
]
]
strip_chars(characters=None)
Remove leading and trailing characters.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
characters | str \| None | The set of characters to be removed. All combinations of this set of characters will be stripped from the start and end of the string. If set to None (default), all leading and trailing whitespace is removed instead. | None |
Returns:
Type | Description |
---|---|
SeriesT | A new Series with leading and trailing characters removed. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["apple", "\nmango"]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_strip_chars(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... s = s.str.strip_chars()
... return s.to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_strip_chars:
>>> agnostic_strip_chars(s_pd)
0 apple
1 mango
dtype: object
>>> agnostic_strip_chars(s_pl)
shape: (2,)
Series: '' [str]
[
"apple"
"mango"
]
>>> agnostic_strip_chars(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"apple",
"mango"
]
]
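Passing an explicit characters argument strips any combination of those characters from both ends. A minimal sketch with hypothetical data (expected Polars output shown for illustration):
>>> def agnostic_strip_punct(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.strip_chars("!?").to_native()
>>> agnostic_strip_punct(pl.Series(["!!!apple??", "mango!!"]))
shape: (2,)
Series: '' [str]
[
"apple"
"mango"
]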
tail(n=5)
Take the last n elements of each string.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
n | int | Number of elements to take. Negative indexing is supported (see note (1.)). | 5 |
Returns:
Type | Description |
---|---|
SeriesT | A new Series containing the last n characters of each string. |
Notes
1. When the n input is negative, tail returns characters starting from the n-th from the beginning of the string. For example, if n = -3, all characters except the first three are returned.
2. If the string has fewer than n characters, the full string is returned.
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["Atatata", "taata", "taatatata", "zukkyun"]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_tail(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.tail().to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_tail:
>>> agnostic_tail(s_pd)
0 atata
1 taata
2 atata
3 kkyun
dtype: object
>>> agnostic_tail(s_pl)
shape: (4,)
Series: '' [str]
[
"atata"
"taata"
"atata"
"kkyun"
]
>>> agnostic_tail(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"atata",
"taata",
"atata",
"kkyun"
]
]
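The negative-n behaviour described in note (1.) can be sketched with a hypothetical variant of the function above (expected pandas output shown for illustration):
>>> def agnostic_tail_neg(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.tail(-3).to_native()
>>> agnostic_tail_neg(s_pd)
0 tata
1 ta
2 tatata
3 kyun
dtype: object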
to_datetime(format=None)
Parse Series with strings to a Series with Datetime dtype.
Notes
pandas defaults to a nanosecond time unit, whereas Polars defaults to microsecond. Prior to pandas 2.0, nanoseconds were the only time unit pandas supported, with no way to set any other. The ability to set the time unit in pandas, where the version permits, is planned.
Warning
As different backends auto-infer format in different ways, if format=None there is no guarantee that the result will be equal.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
format | str \| None | Format to use for conversion. If set to None (default), the format is inferred from the data. | None |
Returns:
Type | Description |
---|---|
SeriesT | A new Series with datetime dtype. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["2020-01-01", "2020-01-02"]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_to_datetime(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.to_datetime(format="%Y-%m-%d").to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_to_datetime:
>>> agnostic_to_datetime(s_pd)
0 2020-01-01
1 2020-01-02
dtype: datetime64[ns]
>>> agnostic_to_datetime(s_pl)
shape: (2,)
Series: '' [datetime[μs]]
[
2020-01-01 00:00:00
2020-01-02 00:00:00
]
>>> agnostic_to_datetime(s_pa)
<pyarrow.lib.ChunkedArray object at 0x...>
[
[
2020-01-01 00:00:00.000000,
2020-01-02 00:00:00.000000
]
]
to_lowercase()
Transform string to lowercase variant.
Returns:
Type | Description |
---|---|
SeriesT | A new Series with values converted to lowercase. |
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["APPLE", "MANGO", None]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_to_lowercase(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.to_lowercase().to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_to_lowercase:
>>> agnostic_to_lowercase(s_pd)
0 apple
1 mango
2 None
dtype: object
>>> agnostic_to_lowercase(s_pl)
shape: (3,)
Series: '' [str]
[
"apple"
"mango"
null
]
>>> agnostic_to_lowercase(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"apple",
"mango",
null
]
]
to_uppercase()
Transform string to uppercase variant.
Returns:
Type | Description |
---|---|
SeriesT | A new Series with values converted to uppercase. |
Notes
The PyArrow backend will convert 'ß' to 'ẞ' instead of 'SS'. For more info see: https://github.com/apache/arrow/issues/34599 There may be other unicode-edge-case-related variations across implementations.
Examples:
>>> import pandas as pd
>>> import polars as pl
>>> import pyarrow as pa
>>> import narwhals as nw
>>> from narwhals.typing import IntoSeriesT
>>> data = ["apple", "mango", None]
>>> s_pd = pd.Series(data)
>>> s_pl = pl.Series(data)
>>> s_pa = pa.chunked_array([data])
We define a dataframe-agnostic function:
>>> def agnostic_to_uppercase(s_native: IntoSeriesT) -> IntoSeriesT:
... s = nw.from_native(s_native, series_only=True)
... return s.str.to_uppercase().to_native()
We can then pass any supported library such as pandas, Polars, or PyArrow to agnostic_to_uppercase:
>>> agnostic_to_uppercase(s_pd)
0 APPLE
1 MANGO
2 None
dtype: object
>>> agnostic_to_uppercase(s_pl)
shape: (3,)
Series: '' [str]
[
"APPLE"
"MANGO"
null
]
>>> agnostic_to_uppercase(s_pa)
<pyarrow.lib.ChunkedArray object at ...>
[
[
"APPLE",
"MANGO",
null
]
]
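The 'ß' caveat from the note above can be seen by comparing backends on a hypothetical input (expected outputs shown for illustration; exact behaviour may depend on installed versions):
>>> agnostic_to_uppercase(pd.Series(["straße"]))
0 STRASSE
dtype: object
>>> agnostic_to_uppercase(pa.chunked_array([["straße"]]))
<pyarrow.lib.ChunkedArray object at ...>
[
[
"STRAẞE"
]
]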