Experimental Pandas API¶
The main module through which interaction with the experimental API takes place.
See Experimental API Reference for details.
Notes
Some of the experimental APIs deviate from pandas in order to provide improved performance.
Although experimental backends and engines are available through the modin.pandas module when the environment variable MODIN_EXPERIMENTAL=true is set, experimental I/O functions are available only through the modin.experimental.pandas module.
Examples
>>> import modin.experimental.pandas as pd
>>> df = pd.read_csv_glob("data*.csv")
Experimental API Reference¶
- modin.experimental.pandas.read_sql(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, columns=None, chunksize=None, partition_column: Optional[str] = None, lower_bound: Optional[int] = None, upper_bound: Optional[int] = None, max_sessions: Optional[int] = None) → modin.pandas.dataframe.DataFrame¶
General documentation is available in modin.pandas.read_sql.
This experimental feature provides distributed reading from a SQL table or query.
- Parameters
sql (str or SQLAlchemy Selectable (select or text object)) – SQL query to be executed or a table name.
con (SQLAlchemy connectable, str, or sqlite3 connection) – Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible for engine disposal and connection closure for the SQLAlchemy connectable; str connections are closed automatically.
index_col (str or list of str, optional) – Column(s) to set as index (MultiIndex).
coerce_float (bool, default: True) – Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point, useful for SQL result sets.
params (list, tuple or dict, optional) – List of parameters to pass to the execute method. The syntax used to pass parameters is database-driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g. psycopg2 uses the %(name)s style, so use params={'name': 'value'}.
parse_dates (list or dict, optional) –
List of column names to parse as dates.
Dict of {column_name: format string}, where format string is strftime-compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
columns (list, optional) – List of column names to select from SQL table (only used when reading a table).
chunksize (int, optional) – If specified, return an iterator where chunksize is the number of rows to include in each chunk.
partition_column (str, optional) – Column used to distribute the data among the workers (MUST be an INTEGER column).
lower_bound (int, optional) – The minimum value to be requested from the partition_column.
upper_bound (int, optional) – The maximum value to be requested from the partition_column.
max_sessions (int, optional) – The maximum number of simultaneous connections allowed.
- Returns
- Return type
modin.DataFrame
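The lower_bound and upper_bound of partition_column determine how rows are split among workers. A minimal stdlib-only sketch of how such bounds could be divided into contiguous per-worker ranges (illustrative only; the helper below is an assumption, not Modin's actual internal logic):

```python
# Hypothetical helper: split the [lower, upper) range of an INTEGER
# partition column into contiguous per-worker ranges.
def split_bounds(lower, upper, num_partitions):
    step = (upper - lower) // num_partitions
    bounds = []
    start = lower
    for i in range(num_partitions):
        # The last partition absorbs any remainder from integer division.
        end = upper if i == num_partitions - 1 else start + step
        bounds.append((start, end))
        start = end
    return bounds

print(split_bounds(0, 1000, 4))  # [(0, 250), (250, 500), (500, 750), (750, 1000)]
```

Conceptually, each range would map to one worker's filtered query over partition_column, which is why the column must be an integer.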
- modin.experimental.pandas.read_csv_glob(filepath_or_buffer: Union[str, pathlib.Path, IO], sep=NoDefault.no_default, delimiter=None, header='infer', names=NoDefault.no_default, index_col=None, usecols=None, squeeze=False, prefix=NoDefault.no_default, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal: str = '.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, encoding_errors='strict', dialect=None, error_bad_lines=None, warn_bad_lines=None, on_bad_lines=None, skipfooter=0, doublequote=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None, storage_options: Optional[Dict[str, Any]] = None) → modin.pandas.dataframe.DataFrame¶
General documentation is available in modin.pandas.read_csv.
This experimental feature provides parallel reading from multiple CSV files defined by a glob pattern. Works for local files only!
- Parameters
**kwargs (dict) – Keyword arguments in modin.pandas.read_csv.
- Returns
- Return type
modin.DataFrame
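To illustrate the glob expansion that read_csv_glob performs, here is a stdlib-only sketch of the concept: match local files against a pattern and concatenate their rows. File names are hypothetical, and Modin additionally reads the matched files in parallel, which this sketch does not do:

```python
import csv
import glob
import os
import tempfile

# Create three small hypothetical CSV files matching "data*.csv".
tmp = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(tmp, f"data{i}.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "value"])
        writer.writerow([i, i * 10])

# Expand the glob pattern and concatenate rows from every matched file.
rows = []
for path in sorted(glob.glob(os.path.join(tmp, "data*.csv"))):
    with open(path, newline="") as f:
        rows.extend(csv.DictReader(f))

print(len(rows))  # 3
```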
- modin.experimental.pandas.read_pickle_distributed(filepath_or_buffer: Union[PathLike[str], str, IO, io.RawIOBase, io.BufferedIOBase, io.TextIOBase, _io.TextIOWrapper, mmap.mmap], compression: Optional[str] = 'infer', storage_options: Optional[Dict[str, Any]] = None)¶
Load pickled pandas object from files.
In experimental mode, a * wildcard can be used in the filename. The files must contain parts of one DataFrame, which can be obtained, for example, via the to_pickle_distributed function. Note: the number of partitions equals the number of input files.
- Parameters
filepath_or_buffer (str, path object or file-like object) – File path, URL, or buffer from which the pickled object will be loaded. URLs are accepted and are not limited to S3 and GCS.
compression ({'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default: 'infer') – If 'infer' and 'path_or_url' is path-like, then detect compression from the following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise no compression). If 'infer' and 'path_or_url' is not path-like, then use None (no decompression).
storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.
- Returns
unpickled
- Return type
same type as object stored in file
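The 'infer' compression mode maps file extensions to codecs as described above. An illustrative stdlib-only helper capturing that mapping (an assumption for clarity, not Modin's actual code):

```python
# Hypothetical helper: map a path's extension to a compression codec,
# mirroring the 'infer' behavior described in the docs above.
def infer_compression(path):
    mapping = {".gz": "gzip", ".bz2": "bz2", ".zip": "zip", ".xz": "xz"}
    for ext, codec in mapping.items():
        if path.endswith(ext):
            return codec
    return None  # no recognized extension: no decompression

print(infer_compression("frames.pkl.gz"))  # gzip
print(infer_compression("frames.pkl"))  # None
```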
- class modin.experimental.pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=None, query_compiler=None)¶
Modin distributed representation of pandas.DataFrame.
Internally, the data can be divided into partitions along both columns and rows in order to parallelize computations and utilize the user's hardware as much as possible.
Inherits functionality common to DataFrame and Series from the BasePandasDataset class.
- Parameters
data (DataFrame, Series, pandas.DataFrame, ndarray, Iterable or dict, optional) – Dict can contain Series, arrays, constants, dataclass or list-like objects. If data is a dict, column order follows insertion order.
index (Index or array-like, optional) – Index to use for the resulting frame. Will default to RangeIndex if no indexing information is part of the input data and no index is provided.
columns (Index or array-like, optional) – Column labels to use for the resulting frame. Will default to RangeIndex if no column labels are provided.
dtype (str, np.dtype, or pandas.ExtensionDtype, optional) – Data type to force. Only a single dtype is allowed. If None, infer.
copy (bool, default: False) – Copy data from inputs. Only affects pandas.DataFrame / 2d ndarray input.
query_compiler (BaseQueryCompiler, optional) – A query compiler object to create the DataFrame from.
Notes
See pandas API documentation for pandas.DataFrame for more.
DataFrame can be created either from passed data or from a query_compiler. If both parameters are provided, the data source will be prioritized in the following order:
1. Modin DataFrame or Series passed with the data parameter.
2. Query compiler from the query_compiler parameter.
3. Various pandas/NumPy/Python data structures passed with the data parameter.
The last option is the least desirable, since importing such data structures is very inefficient; please use previously created Modin structures from the first two options, or import data using highly efficient Modin I/O tools (for example pd.read_csv).
- to_pickle_distributed(filepath_or_buffer: Union[PathLike[str], str, IO, io.RawIOBase, io.BufferedIOBase, io.TextIOBase, _io.TextIOWrapper, mmap.mmap], compression: Optional[Union[str, Dict[str, Any]]] = 'infer', protocol: int = 4, storage_options: Optional[Dict[str, Any]] = None)¶
Pickle (serialize) object to file.
If * is present in the filename, all partitions are written to their own separate files; otherwise the default pandas implementation is used.
- Parameters
filepath_or_buffer (str, path object or file-like object) – File path where the pickled object will be stored.
compression ({'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default: 'infer') – A string representing the compression to use in the output file. If 'infer' and path_or_buf is path-like, then detect the compression mode from the following extensions: '.gz', '.bz2', '.zip' or '.xz' (otherwise no compression). If a dict is given and the mode is 'zip' or inferred as 'zip', the other entries are passed as additional compression options.
protocol (int, default: pickle.HIGHEST_PROTOCOL) – Int which indicates which protocol should be used by the pickler; default HIGHEST_PROTOCOL (see https://docs.python.org/3/library/pickle.html). The possible values are 0, 1, 2, 3, 4, 5. A negative value for the protocol parameter is equivalent to setting its value to HIGHEST_PROTOCOL.
storage_options (dict, optional) – Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://”, “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.
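The one-file-per-partition round trip described by to_pickle_distributed and read_pickle_distributed can be sketched with the stdlib alone. A list of lists stands in for DataFrame partitions, and the file names are hypothetical:

```python
import glob
import os
import pickle
import tempfile

tmp = tempfile.mkdtemp()
partitions = [[1, 2], [3, 4], [5, 6]]

# Write each partition to its own part file, as a "*" filename would.
for i, part in enumerate(partitions):
    with open(os.path.join(tmp, f"frame{i}.pkl"), "wb") as f:
        pickle.dump(part, f)

# Read them back: one input file per partition, matching the note above
# that the number of partitions equals the number of input files.
loaded = []
for path in sorted(glob.glob(os.path.join(tmp, "frame*.pkl"))):
    with open(path, "rb") as f:
        loaded.append(pickle.load(f))

print(loaded)  # [[1, 2], [3, 4], [5, 6]]
```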