
Time-sampling features

light_curve.Duration

Bases: light_curve.light_curve_ext._FeatureEvaluator

Time-series duration

\[ t_{N-1} - t_0. \]

Note: cadence-dependent feature.

  • Depends on: time
  • Minimum number of observations: 1
  • Number of features: 1
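
As a cross-check of the formula above, the quantity can be reproduced with plain NumPy; the time values below are made-up sample data, and the sketch does not call the extension itself:

```python
import numpy as np

# Made-up sample light curve: time moments must be sorted
t = np.array([0.0, 1.0, 3.0, 7.0])

# Duration = t_{N-1} - t_0
duration = t[-1] - t[0]
print(duration)  # 7.0
```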

Parameters:

Name Type Description Default
transform str or bool or None

Transformer to apply to the feature values. If str, must be one of:

  • 'default' - use the default transformer for the feature; this is the same as passing True. The default for this feature is 'identity'
  • 'arcsinh' - hyperbolic arcsine feature transformer
  • 'clipped_lg' - decimal logarithm of a value clipped to a minimum value
  • 'identity' - identity feature transformer
  • 'lg' - decimal logarithm feature transformer
  • 'ln1p' - ln(1 + x) feature transformer
  • 'sqrt' - square root feature transformer

If bool, must be True to use the default transformer or False to disable transformation. If None, no transformation is applied.

required

Attributes:

Name Type Description
names list of str

Feature names

descriptions list of str

Feature descriptions

Methods:

Name Description
__call__

Extract features and return them as a numpy array

Parameters

t : numpy.ndarray of np.float32 or np.float64 dtype
    Time moments
m : numpy.ndarray
    Signal in magnitudes or fluxes. Refer to the feature description to decide which works better in your case
sigma : numpy.ndarray, optional
    Observation errors; if None, they are assumed to be unity
fill_value : float or None, optional
    Value used to fill invalid feature values, for example when the number of observations is not enough to compute a proper value. None raises an exception for invalid features
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments. True means certainly sorted, False means unsorted. If None, sorting is checked and an exception is raised for unsorted t
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
cast : bool, optional
    Allow non-numpy input and casting of arrays to a common dtype. If False, inputs must be np.ndarray instances with matching dtypes. Casting provides more flexibility with input types at the cost of performance

Returns

ndarray of np.float32 or np.float64
    Extracted feature array

many

Parallel light curve feature extraction

It is a parallel-executed equivalent of

    def many(self, lcs, *, fill_value=None, sorted=None, check=True):
        return np.stack(
            [
                self(
                    lc,
                    fill_value=fill_value,
                    sorted=sorted,
                    check=check,
                    cast=False,
                )
                for lc in lcs
            ]
        )

Parameters

lcs : list of (t, m, sigma) or Arrow array
    Either a list of light curves packed into three-tuples (all numpy.ndarray of the same dtype), or an Arrow array/chunked array of type List<Struct<...>> where the selected fields share the same float dtype (float32 or float64). Arrow input is auto-detected via the arrow_c_array / arrow_c_stream protocol and enables zero-copy data access from pyarrow, polars, and other Arrow-compatible libraries.
arrow_fields : list of (str or int)
    Required when lcs is an Arrow array. Field names or indices specifying which struct fields to use as t, m, and optionally sigma. Must contain 2 elements [t, m] or 3 elements [t, m, sigma]. Each element may be a field name (str) or a zero-based positional index (int); all elements must be of the same type. Ignored for non-Arrow input.
fill_value : float or None, optional
    Fill invalid values with this, or raise an exception if None
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments; see the __call__ documentation for details
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
n_jobs : int
    Number of tasks to run in parallel. Default is -1, which means running as many jobs as there are CPUs. See the rayon Rust crate documentation for details

default_transform = 'identity' class-attribute

Name of the default transformer for this feature.

supported_transforms = ['arcsinh', 'clipped_lg', 'identity', 'lg', 'ln1p', 'sqrt'] class-attribute

Names of the transformers supported by this feature.


light_curve.MaximumTimeInterval

Bases: light_curve.light_curve_ext._FeatureEvaluator

Maximum time interval between consecutive observations

\[ \max{(t_{i+1} - t_i)} \]

Note: highly cadence-dependent feature.

  • Depends on: time
  • Minimum number of observations: 2
  • Number of features: 1
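
As a cross-check of the formula above, the quantity can be reproduced with plain NumPy; the time values below are made-up sample data, and the sketch does not call the extension itself:

```python
import numpy as np

# Made-up sample of sorted time moments
t = np.array([0.0, 1.0, 3.0, 7.0])

# max(t_{i+1} - t_i) over consecutive pairs: diffs are [1, 2, 4]
max_interval = np.max(np.diff(t))
print(max_interval)  # 4.0
```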

Parameters:

Name Type Description Default
transform str or bool or None

Transformer to apply to the feature values. If str, must be one of:

  • 'default' - use the default transformer for the feature; this is the same as passing True. The default for this feature is 'identity'
  • 'arcsinh' - hyperbolic arcsine feature transformer
  • 'clipped_lg' - decimal logarithm of a value clipped to a minimum value
  • 'identity' - identity feature transformer
  • 'lg' - decimal logarithm feature transformer
  • 'ln1p' - ln(1 + x) feature transformer
  • 'sqrt' - square root feature transformer

If bool, must be True to use the default transformer or False to disable transformation. If None, no transformation is applied.

required

Attributes:

Name Type Description
names list of str

Feature names

descriptions list of str

Feature descriptions

Methods:

Name Description
__call__

Extract features and return them as a numpy array

Parameters

t : numpy.ndarray of np.float32 or np.float64 dtype
    Time moments
m : numpy.ndarray
    Signal in magnitudes or fluxes. Refer to the feature description to decide which works better in your case
sigma : numpy.ndarray, optional
    Observation errors; if None, they are assumed to be unity
fill_value : float or None, optional
    Value used to fill invalid feature values, for example when the number of observations is not enough to compute a proper value. None raises an exception for invalid features
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments. True means certainly sorted, False means unsorted. If None, sorting is checked and an exception is raised for unsorted t
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
cast : bool, optional
    Allow non-numpy input and casting of arrays to a common dtype. If False, inputs must be np.ndarray instances with matching dtypes. Casting provides more flexibility with input types at the cost of performance

Returns

ndarray of np.float32 or np.float64
    Extracted feature array

many

Parallel light curve feature extraction

It is a parallel-executed equivalent of

    def many(self, lcs, *, fill_value=None, sorted=None, check=True):
        return np.stack(
            [
                self(
                    lc,
                    fill_value=fill_value,
                    sorted=sorted,
                    check=check,
                    cast=False,
                )
                for lc in lcs
            ]
        )

Parameters

lcs : list of (t, m, sigma) or Arrow array
    Either a list of light curves packed into three-tuples (all numpy.ndarray of the same dtype), or an Arrow array/chunked array of type List<Struct<...>> where the selected fields share the same float dtype (float32 or float64). Arrow input is auto-detected via the arrow_c_array / arrow_c_stream protocol and enables zero-copy data access from pyarrow, polars, and other Arrow-compatible libraries.
arrow_fields : list of (str or int)
    Required when lcs is an Arrow array. Field names or indices specifying which struct fields to use as t, m, and optionally sigma. Must contain 2 elements [t, m] or 3 elements [t, m, sigma]. Each element may be a field name (str) or a zero-based positional index (int); all elements must be of the same type. Ignored for non-Arrow input.
fill_value : float or None, optional
    Fill invalid values with this, or raise an exception if None
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments; see the __call__ documentation for details
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
n_jobs : int
    Number of tasks to run in parallel. Default is -1, which means running as many jobs as there are CPUs. See the rayon Rust crate documentation for details

default_transform = 'identity' class-attribute

Name of the default transformer for this feature.

supported_transforms = ['arcsinh', 'clipped_lg', 'identity', 'lg', 'ln1p', 'sqrt'] class-attribute

Names of the transformers supported by this feature.


light_curve.MinimumTimeInterval

Bases: light_curve.light_curve_ext._FeatureEvaluator

Minimum time interval between consecutive observations

\[ \min{(t_{i+1} - t_i)} \]

Note: highly cadence-dependent feature.

  • Depends on: time
  • Minimum number of observations: 2
  • Number of features: 1
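
As a cross-check of the formula above, the quantity can be reproduced with plain NumPy; the time values below are made-up sample data, and the sketch does not call the extension itself:

```python
import numpy as np

# Made-up sample of sorted time moments
t = np.array([0.0, 1.0, 3.0, 7.0])

# min(t_{i+1} - t_i) over consecutive pairs: diffs are [1, 2, 4]
min_interval = np.min(np.diff(t))
print(min_interval)  # 1.0
```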

Parameters:

Name Type Description Default
transform str or bool or None

Transformer to apply to the feature values. If str, must be one of:

  • 'default' - use the default transformer for the feature; this is the same as passing True. The default for this feature is 'identity'
  • 'arcsinh' - hyperbolic arcsine feature transformer
  • 'clipped_lg' - decimal logarithm of a value clipped to a minimum value
  • 'identity' - identity feature transformer
  • 'lg' - decimal logarithm feature transformer
  • 'ln1p' - ln(1 + x) feature transformer
  • 'sqrt' - square root feature transformer

If bool, must be True to use the default transformer or False to disable transformation. If None, no transformation is applied.

required

Attributes:

Name Type Description
names list of str

Feature names

descriptions list of str

Feature descriptions

Methods:

Name Description
__call__

Extract features and return them as a numpy array

Parameters

t : numpy.ndarray of np.float32 or np.float64 dtype
    Time moments
m : numpy.ndarray
    Signal in magnitudes or fluxes. Refer to the feature description to decide which works better in your case
sigma : numpy.ndarray, optional
    Observation errors; if None, they are assumed to be unity
fill_value : float or None, optional
    Value used to fill invalid feature values, for example when the number of observations is not enough to compute a proper value. None raises an exception for invalid features
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments. True means certainly sorted, False means unsorted. If None, sorting is checked and an exception is raised for unsorted t
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
cast : bool, optional
    Allow non-numpy input and casting of arrays to a common dtype. If False, inputs must be np.ndarray instances with matching dtypes. Casting provides more flexibility with input types at the cost of performance

Returns

ndarray of np.float32 or np.float64
    Extracted feature array

many

Parallel light curve feature extraction

It is a parallel-executed equivalent of

    def many(self, lcs, *, fill_value=None, sorted=None, check=True):
        return np.stack(
            [
                self(
                    lc,
                    fill_value=fill_value,
                    sorted=sorted,
                    check=check,
                    cast=False,
                )
                for lc in lcs
            ]
        )

Parameters

lcs : list of (t, m, sigma) or Arrow array
    Either a list of light curves packed into three-tuples (all numpy.ndarray of the same dtype), or an Arrow array/chunked array of type List<Struct<...>> where the selected fields share the same float dtype (float32 or float64). Arrow input is auto-detected via the arrow_c_array / arrow_c_stream protocol and enables zero-copy data access from pyarrow, polars, and other Arrow-compatible libraries.
arrow_fields : list of (str or int)
    Required when lcs is an Arrow array. Field names or indices specifying which struct fields to use as t, m, and optionally sigma. Must contain 2 elements [t, m] or 3 elements [t, m, sigma]. Each element may be a field name (str) or a zero-based positional index (int); all elements must be of the same type. Ignored for non-Arrow input.
fill_value : float or None, optional
    Fill invalid values with this, or raise an exception if None
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments; see the __call__ documentation for details
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
n_jobs : int
    Number of tasks to run in parallel. Default is -1, which means running as many jobs as there are CPUs. See the rayon Rust crate documentation for details

default_transform = 'identity' class-attribute

Name of the default transformer for this feature.

supported_transforms = ['arcsinh', 'clipped_lg', 'identity', 'lg', 'ln1p', 'sqrt'] class-attribute

Names of the transformers supported by this feature.


light_curve.ObservationCount

Bases: light_curve.light_curve_ext._FeatureEvaluator

Number of observations

\[ N \]

Note: cadence-dependent feature.

  • Depends on: nothing
  • Minimum number of observations: 0
  • Number of features: 1
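
As a cross-check of the formula above, the quantity can be reproduced with plain NumPy; the time values below are made-up sample data, and the sketch does not call the extension itself:

```python
import numpy as np

# Made-up sample light curve
t = np.array([0.0, 1.0, 3.0, 7.0])

# ObservationCount = N, the number of observations
n_obs = len(t)
print(n_obs)  # 4
```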

Parameters:

Name Type Description Default
transform str or bool or None

Transformer to apply to the feature values. If str, must be one of:

  • 'default' - use the default transformer for the feature; this is the same as passing True. The default for this feature is 'identity'
  • 'arcsinh' - hyperbolic arcsine feature transformer
  • 'clipped_lg' - decimal logarithm of a value clipped to a minimum value
  • 'identity' - identity feature transformer
  • 'lg' - decimal logarithm feature transformer
  • 'ln1p' - ln(1 + x) feature transformer
  • 'sqrt' - square root feature transformer

If bool, must be True to use the default transformer or False to disable transformation. If None, no transformation is applied.

required

Attributes:

Name Type Description
names list of str

Feature names

descriptions list of str

Feature descriptions

Methods:

Name Description
__call__

Extract features and return them as a numpy array

Parameters

t : numpy.ndarray of np.float32 or np.float64 dtype
    Time moments
m : numpy.ndarray
    Signal in magnitudes or fluxes. Refer to the feature description to decide which works better in your case
sigma : numpy.ndarray, optional
    Observation errors; if None, they are assumed to be unity
fill_value : float or None, optional
    Value used to fill invalid feature values, for example when the number of observations is not enough to compute a proper value. None raises an exception for invalid features
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments. True means certainly sorted, False means unsorted. If None, sorting is checked and an exception is raised for unsorted t
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
cast : bool, optional
    Allow non-numpy input and casting of arrays to a common dtype. If False, inputs must be np.ndarray instances with matching dtypes. Casting provides more flexibility with input types at the cost of performance

Returns

ndarray of np.float32 or np.float64
    Extracted feature array

many

Parallel light curve feature extraction

It is a parallel-executed equivalent of

    def many(self, lcs, *, fill_value=None, sorted=None, check=True):
        return np.stack(
            [
                self(
                    lc,
                    fill_value=fill_value,
                    sorted=sorted,
                    check=check,
                    cast=False,
                )
                for lc in lcs
            ]
        )

Parameters

lcs : list of (t, m, sigma) or Arrow array
    Either a list of light curves packed into three-tuples (all numpy.ndarray of the same dtype), or an Arrow array/chunked array of type List<Struct<...>> where the selected fields share the same float dtype (float32 or float64). Arrow input is auto-detected via the arrow_c_array / arrow_c_stream protocol and enables zero-copy data access from pyarrow, polars, and other Arrow-compatible libraries.
arrow_fields : list of (str or int)
    Required when lcs is an Arrow array. Field names or indices specifying which struct fields to use as t, m, and optionally sigma. Must contain 2 elements [t, m] or 3 elements [t, m, sigma]. Each element may be a field name (str) or a zero-based positional index (int); all elements must be of the same type. Ignored for non-Arrow input.
fill_value : float or None, optional
    Fill invalid values with this, or raise an exception if None
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments; see the __call__ documentation for details
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
n_jobs : int
    Number of tasks to run in parallel. Default is -1, which means running as many jobs as there are CPUs. See the rayon Rust crate documentation for details

default_transform = 'identity' class-attribute

Name of the default transformer for this feature.

supported_transforms = ['arcsinh', 'clipped_lg', 'identity', 'lg', 'ln1p', 'sqrt'] class-attribute

Names of the transformers supported by this feature.


light_curve.TimeMean

Bases: light_curve.light_curve_ext._FeatureEvaluator

Mean time

\[ \langle t \rangle \equiv \frac1{N} \sum_i {t_i}. \]

Note: highly cadence-dependent feature.

  • Depends on: time
  • Minimum number of observations: 1
  • Number of features: 1
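
As a cross-check of the formula above, the quantity can be reproduced with plain NumPy; the time values below are made-up sample data, and the sketch does not call the extension itself:

```python
import numpy as np

# Made-up sample of time moments
t = np.array([0.0, 1.0, 3.0, 7.0])

# <t> = (1/N) * sum(t_i) = (0 + 1 + 3 + 7) / 4
time_mean = np.mean(t)
print(time_mean)  # 2.75
```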

Parameters:

Name Type Description Default
transform str or bool or None

Transformer to apply to the feature values. If str, must be one of:

  • 'default' - use the default transformer for the feature; this is the same as passing True. The default for this feature is 'identity'
  • 'arcsinh' - hyperbolic arcsine feature transformer
  • 'clipped_lg' - decimal logarithm of a value clipped to a minimum value
  • 'identity' - identity feature transformer
  • 'lg' - decimal logarithm feature transformer
  • 'ln1p' - ln(1 + x) feature transformer
  • 'sqrt' - square root feature transformer

If bool, must be True to use the default transformer or False to disable transformation. If None, no transformation is applied.

required

Attributes:

Name Type Description
names list of str

Feature names

descriptions list of str

Feature descriptions

Methods:

Name Description
__call__

Extract features and return them as a numpy array

Parameters

t : numpy.ndarray of np.float32 or np.float64 dtype
    Time moments
m : numpy.ndarray
    Signal in magnitudes or fluxes. Refer to the feature description to decide which works better in your case
sigma : numpy.ndarray, optional
    Observation errors; if None, they are assumed to be unity
fill_value : float or None, optional
    Value used to fill invalid feature values, for example when the number of observations is not enough to compute a proper value. None raises an exception for invalid features
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments. True means certainly sorted, False means unsorted. If None, sorting is checked and an exception is raised for unsorted t
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
cast : bool, optional
    Allow non-numpy input and casting of arrays to a common dtype. If False, inputs must be np.ndarray instances with matching dtypes. Casting provides more flexibility with input types at the cost of performance

Returns

ndarray of np.float32 or np.float64
    Extracted feature array

many

Parallel light curve feature extraction

It is a parallel-executed equivalent of

    def many(self, lcs, *, fill_value=None, sorted=None, check=True):
        return np.stack(
            [
                self(
                    lc,
                    fill_value=fill_value,
                    sorted=sorted,
                    check=check,
                    cast=False,
                )
                for lc in lcs
            ]
        )

Parameters

lcs : list of (t, m, sigma) or Arrow array
    Either a list of light curves packed into three-tuples (all numpy.ndarray of the same dtype), or an Arrow array/chunked array of type List<Struct<...>> where the selected fields share the same float dtype (float32 or float64). Arrow input is auto-detected via the arrow_c_array / arrow_c_stream protocol and enables zero-copy data access from pyarrow, polars, and other Arrow-compatible libraries.
arrow_fields : list of (str or int)
    Required when lcs is an Arrow array. Field names or indices specifying which struct fields to use as t, m, and optionally sigma. Must contain 2 elements [t, m] or 3 elements [t, m, sigma]. Each element may be a field name (str) or a zero-based positional index (int); all elements must be of the same type. Ignored for non-Arrow input.
fill_value : float or None, optional
    Fill invalid values with this, or raise an exception if None
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments; see the __call__ documentation for details
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
n_jobs : int
    Number of tasks to run in parallel. Default is -1, which means running as many jobs as there are CPUs. See the rayon Rust crate documentation for details

default_transform = 'identity' class-attribute

Name of the default transformer for this feature.

supported_transforms = ['arcsinh', 'clipped_lg', 'identity', 'lg', 'ln1p', 'sqrt'] class-attribute

Names of the transformers supported by this feature.


light_curve.TimeStandardDeviation

Bases: light_curve.light_curve_ext._FeatureEvaluator

Standard deviation of time moments

\[ \sigma_t \equiv \sqrt{\frac{\sum_i {(t_i - \langle t \rangle)^2}}{N - 1}}. \]

Note: highly cadence-dependent feature.

  • Depends on: time
  • Minimum number of observations: 2
  • Number of features: 1
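
As a cross-check of the formula above, the sample standard deviation (with the N - 1 denominator) can be reproduced with plain NumPy; the time values below are made-up sample data, and the sketch does not call the extension itself:

```python
import numpy as np

# Made-up sample of time moments
t = np.array([0.0, 1.0, 3.0, 7.0])

# Sample standard deviation: ddof=1 gives the N - 1 denominator
sigma_t = np.std(t, ddof=1)
print(sigma_t)  # sqrt(28.75 / 3), approximately 3.0957
```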

Parameters:

Name Type Description Default
transform str or bool or None

Transformer to apply to the feature values. If str, must be one of:

  • 'default' - use the default transformer for the feature; this is the same as passing True. The default for this feature is 'identity'
  • 'arcsinh' - hyperbolic arcsine feature transformer
  • 'clipped_lg' - decimal logarithm of a value clipped to a minimum value
  • 'identity' - identity feature transformer
  • 'lg' - decimal logarithm feature transformer
  • 'ln1p' - ln(1 + x) feature transformer
  • 'sqrt' - square root feature transformer

If bool, must be True to use the default transformer or False to disable transformation. If None, no transformation is applied.

required

Attributes:

Name Type Description
names list of str

Feature names

descriptions list of str

Feature descriptions

Methods:

Name Description
__call__

Extract features and return them as a numpy array

Parameters

t : numpy.ndarray of np.float32 or np.float64 dtype
    Time moments
m : numpy.ndarray
    Signal in magnitudes or fluxes. Refer to the feature description to decide which works better in your case
sigma : numpy.ndarray, optional
    Observation errors; if None, they are assumed to be unity
fill_value : float or None, optional
    Value used to fill invalid feature values, for example when the number of observations is not enough to compute a proper value. None raises an exception for invalid features
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments. True means certainly sorted, False means unsorted. If None, sorting is checked and an exception is raised for unsorted t
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
cast : bool, optional
    Allow non-numpy input and casting of arrays to a common dtype. If False, inputs must be np.ndarray instances with matching dtypes. Casting provides more flexibility with input types at the cost of performance

Returns

ndarray of np.float32 or np.float64
    Extracted feature array

many

Parallel light curve feature extraction

It is a parallel-executed equivalent of

    def many(self, lcs, *, fill_value=None, sorted=None, check=True):
        return np.stack(
            [
                self(
                    lc,
                    fill_value=fill_value,
                    sorted=sorted,
                    check=check,
                    cast=False,
                )
                for lc in lcs
            ]
        )

Parameters

lcs : list of (t, m, sigma) or Arrow array
    Either a list of light curves packed into three-tuples (all numpy.ndarray of the same dtype), or an Arrow array/chunked array of type List<Struct<...>> where the selected fields share the same float dtype (float32 or float64). Arrow input is auto-detected via the arrow_c_array / arrow_c_stream protocol and enables zero-copy data access from pyarrow, polars, and other Arrow-compatible libraries.
arrow_fields : list of (str or int)
    Required when lcs is an Arrow array. Field names or indices specifying which struct fields to use as t, m, and optionally sigma. Must contain 2 elements [t, m] or 3 elements [t, m, sigma]. Each element may be a field name (str) or a zero-based positional index (int); all elements must be of the same type. Ignored for non-Arrow input.
fill_value : float or None, optional
    Fill invalid values with this, or raise an exception if None
sorted : bool or None, optional
    Whether the input arrays are sorted by time moments; see the __call__ documentation for details
check : bool, optional
    Check all input arrays for NaNs, and t and m for infinite values
n_jobs : int
    Number of tasks to run in parallel. Default is -1, which means running as many jobs as there are CPUs. See the rayon Rust crate documentation for details

default_transform = 'identity' class-attribute

Name of the default transformer for this feature.

supported_transforms = ['arcsinh', 'clipped_lg', 'identity', 'lg', 'ln1p', 'sqrt'] class-attribute

Names of the transformers supported by this feature.