
# analyzers Package¶

## fft¶

Calculate an FFT on a TimeSeries DataType and return a FourierSpectrum DataType.

class tvb.analyzers.fft.FFT(**kwargs)[source]
A class for calculating the FFT of a TimeSeries object of TVB and returning a FourierSpectrum object. A segment length and windowing function can be optionally specified. By default the time series is segmented into 1 second blocks and no windowing function is applied.
time_series : tvb.analyzers.fft.FFT.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The TimeSeries to which the FFT is to be applied.
segment_length : tvb.analyzers.fft.FFT.segment_length = Float(field_type=<class ‘float’>, default=1000.0, required=False)
The TimeSeries can be segmented into equally sized blocks (overlapping if necessary). The segment length determines the frequency resolution of the resulting power spectra – longer windows produce finer frequency resolution.
window_function : tvb.analyzers.fft.FFT.window_function = Attr(field_type=<class ‘str’>, default=None, required=False)
Windowing functions can be applied before the FFT is performed. Default is None; possibilities are 'hamming', 'bartlett', 'blackman', and 'hanning'. See numpy.<function_name> for details.
detrend : tvb.analyzers.fft.FFT.detrend = Attr(field_type=<class ‘bool’>, default=True, required=False)
Detrending is not always appropriate. Default is True; if False, no detrending is performed on the time series.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

detrend

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

evaluate()[source]

Calculate the FFT of time_series broken into segments of length segment_length and filtered by window_function.
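The segmented, windowed FFT described above can be sketched with plain numpy. This is an illustration of the idea, not the TVB implementation (which also handles overlap and the FourierSpectrum DataType); `segmented_fft` and its parameters are hypothetical names.

```python
import numpy as np

def segmented_fft(data, sample_period, segment_length=1000.0, window_function=None):
    """Sketch: split a (time, nodes) series into equal blocks, optionally
    window each block, and FFT along the time axis of each block.
    sample_period and segment_length are in ms."""
    seg_pts = int(round(segment_length / sample_period))
    n_seg = data.shape[0] // seg_pts
    # Drop the tail so the series divides into whole segments.
    blocks = data[:n_seg * seg_pts].reshape(n_seg, seg_pts, -1)
    if window_function is not None:
        window = getattr(np, window_function)(seg_pts)  # e.g. np.hamming
        blocks = blocks * window[np.newaxis, :, np.newaxis]
    # Real FFT of each segment.
    return np.fft.rfft(blocks, axis=1)

# 4 s of data at 1 ms resolution, 8 nodes -> 4 segments of 1000 points.
spectra = segmented_fft(np.random.randn(4000, 8), sample_period=1.0,
                        segment_length=1000.0, window_function='hamming')
```

Each segment of 1000 real samples yields 501 frequency bins, so `spectra` has shape (4, 501, 8).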

extended_result_size(input_shape, segment_length, sample_period)[source]

Returns the storage size in bytes of the extended result of the FFT; that is, it includes storage of the evaluated FourierSpectrum attributes such as power, phase, and amplitude.

result_shape(input_shape, segment_length, sample_period)[source]

Returns the shape of the main result (complex array) of the FFT.

result_size(input_shape, segment_length, sample_period)[source]

Returns the storage size in Bytes of the main result (complex array) of the FFT.

segment_length

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.
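The "safely cast according to numpy rules" behaviour can be checked with numpy directly, independently of neotraits:

```python
import numpy as np

# Integer types cast to float64 losslessly, so a Float attribute can
# accept them; float64 does not safely cast back to an integer type.
int_to_float = np.can_cast(np.int32, np.float64)
float_to_int = np.can_cast(np.float64, np.int32)
```

Here `int_to_float` is True and `float_to_int` is False, which is exactly the asymmetry the Float declaration relies on.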

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

window_function

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

## fmri_balloon¶

Implementation of different BOLD signal models. Four models are distinguished:

• CBM_N: Classical BOLD Model Non-linear
• CBM_L: Classical BOLD Model Linear
• RBM_N: Revised BOLD Model Non-linear (default)
• RBM_L: Revised BOLD Model Linear

Classical means that the coefficients used to compute the BOLD signal are derived as described in [Obata2004]. Revised coefficients are defined in [Stephan2007].

References:

 [Stephan2007] Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ (2007). Comparing hemodynamic models with DCM. NeuroImage 38: 387-401.
 [Obata2004] Obata T, Liu TT, Miller KL, Luh WM, Wong EC, Frank LR, Buxton RB (2004). Discrepancies between BOLD and flow dynamics in primary and supplementary motor areas: application of the balloon model to the interpretation of BOLD transients. NeuroImage 21: 144-153.
class tvb.analyzers.fmri_balloon.BalloonModel(**kwargs)[source]

A class for calculating the simulated BOLD signal given a TimeSeries object of TVB and returning another TimeSeries object.

The haemodynamic model parameters are based on constants for a 1.5 T scanner.

time_series : tvb.analyzers.fmri_balloon.BalloonModel.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The timeseries that represents the input neural activity
dt : tvb.analyzers.fmri_balloon.BalloonModel.dt = Float(field_type=<class ‘float’>, default=0.002, required=True)
The integration time step size for the balloon model (s). If none is provided, by default, the TimeSeries sample period is used.
integrator : tvb.analyzers.fmri_balloon.BalloonModel.integrator = Attr(field_type=<class ‘tvb.simulator.integrators.Integrator’>, default=<tvb.simulator.integrators.HeunDeterministic object at 0x7f71ae963fd0>, required=True)
A tvb.simulator.Integrator object which is an integration scheme with supporting attributes such as integration step size and noise specification for stochastic methods. It is used to compute the time courses of the balloon model state variables.
bold_model : tvb.analyzers.fmri_balloon.BalloonModel.bold_model = Attr(field_type=<class ‘str’>, default=’nonlinear’, required=True)
Select the set of equations for the BOLD model.
RBM : tvb.analyzers.fmri_balloon.BalloonModel.RBM = Attr(field_type=<class ‘bool’>, default=True, required=True)
Select classical vs revised BOLD model (CBM or RBM). Coefficients k1, k2 and k3 will be derived accordingly.
neural_input_transformation : tvb.analyzers.fmri_balloon.BalloonModel.neural_input_transformation = Attr(field_type=<class ‘str’>, default=’none’, required=True)
This represents the operation to perform on the state-variable(s) of the model used to generate the input TimeSeries. none takes the first state-variable as neural input; abs_diff takes the absolute value of the derivative (first order difference) of the first state-variable; sum sums all the state-variables of the input TimeSeries.
tau_s : tvb.analyzers.fmri_balloon.BalloonModel.tau_s = Float(field_type=<class ‘float’>, default=0.65, required=True)
Balloon model parameter. Time of signal decay (s)
tau_f : tvb.analyzers.fmri_balloon.BalloonModel.tau_f = Float(field_type=<class ‘float’>, default=0.41, required=True)
Balloon model parameter. Time of flow-dependent elimination or feedback regulation (s).

tau_o : tvb.analyzers.fmri_balloon.BalloonModel.tau_o = Float(field_type=<class ‘float’>, default=0.98, required=True)

Balloon model parameter. Haemodynamic transit time (s). The average time blood takes to traverse the venous compartment. It is the ratio of resting blood volume (V0) to resting blood flow (F0).
alpha : tvb.analyzers.fmri_balloon.BalloonModel.alpha = Float(field_type=<class ‘float’>, default=0.32, required=True)
Balloon model parameter. Stiffness parameter. Grubb’s exponent.
TE : tvb.analyzers.fmri_balloon.BalloonModel.TE = Float(field_type=<class ‘float’>, default=0.04, required=True)
BOLD parameter. Echo time (s).
V0 : tvb.analyzers.fmri_balloon.BalloonModel.V0 = Float(field_type=<class ‘float’>, default=4.0, required=True)
BOLD parameter. Resting blood volume fraction.
E0 : tvb.analyzers.fmri_balloon.BalloonModel.E0 = Float(field_type=<class ‘float’>, default=0.4, required=True)
BOLD parameter. Resting oxygen extraction fraction.
epsilon : tvb.analyzers.fmri_balloon.BalloonModel.epsilon = NArray(label=’$$\\epsilon$$‘, dtype=float64, default=array([0.5]), dim_names=(), ndim=None, required=True)
BOLD parameter. Ratio of intra- and extravascular signals. In principle this parameter could be derived from empirical data and spatialized.
nu_0 : tvb.analyzers.fmri_balloon.BalloonModel.nu_0 = Float(field_type=<class ‘float’>, default=40.3, required=True)
BOLD parameter. Frequency offset at the outer surface of magnetized vessels (Hz).
r_0 : tvb.analyzers.fmri_balloon.BalloonModel.r_0 = Float(field_type=<class ‘float’>, default=25.0, required=True)
BOLD parameter. Slope r0 of intravascular relaxation rate (Hz). Only used for revised coefficients.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

E0

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

RBM

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

TE

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

V0

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

alpha

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

balloon_dfun(state_variables, neural_input, local_coupling=0.0)[source]

The Balloon model equations. See Eqs. (4-10) in [Stephan2007].

.. math::

    \frac{ds}{dt} &= x - \kappa\,s - \gamma\,(f-1) \\
    \frac{df}{dt} &= s \\
    \frac{dv}{dt} &= \frac{1}{\tau_o}\,(f - v^{1/\alpha}) \\
    \frac{dq}{dt} &= \frac{1}{\tau_o}\left(f\,\frac{1-(1-E_0)^{1/f}}{E_0} - v^{1/\alpha}\,\frac{q}{v}\right) \\
    \kappa &= \frac{1}{\tau_s} \\
    \gamma &= \frac{1}{\tau_f}
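The state equations above can be sketched directly in numpy; parameter names follow the attribute list (tau_s, tau_f, tau_o, alpha, E0). This is an illustration, not the TVB implementation itself:

```python
import numpy as np

def balloon_dfun(state, x, tau_s=0.65, tau_f=0.41, tau_o=0.98,
                 alpha=0.32, E0=0.4):
    """Right-hand side of the balloon model for state (s, f, v, q)
    and neural input x."""
    s, f, v, q = state
    kappa, gamma = 1.0 / tau_s, 1.0 / tau_f
    ds = x - kappa * s - gamma * (f - 1.0)
    df = s
    dv = (f - v ** (1.0 / alpha)) / tau_o
    # Oxygen extraction E(f) = 1 - (1 - E0)**(1/f)
    dq = (f * (1.0 - (1.0 - E0) ** (1.0 / f)) / E0
          - v ** (1.0 / alpha) * q / v) / tau_o
    return np.array([ds, df, dv, dq])

# With no neural input, the resting state (s=0, f=v=q=1) is a fixed point.
derivs = balloon_dfun(np.array([0.0, 1.0, 1.0, 1.0]), x=0.0)
```

At rest all four derivatives evaluate to zero, which is a quick sanity check on the equations.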

bold_model

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

compute_derived_parameters()[source]

Compute derived parameters $$k_1$$, $$k_2$$ and $$k_3$$.
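For reference, the revised (RBM) coefficients of [Stephan2007] can be computed from the class defaults; the classical [Obata2004] set is k1 = 7 E0, k2 = 2, k3 = 2 E0 - 0.2. A sketch, assuming the revised formulas:

```python
# Revised (RBM) coefficients, per [Stephan2007], with the class defaults.
nu_0, r_0, E0, TE, epsilon = 40.3, 25.0, 0.4, 0.04, 0.5

k1 = 4.3 * nu_0 * E0 * TE   # field- and tissue-dependent term
k2 = epsilon * r_0 * E0 * TE
k3 = 1.0 - epsilon
```

With the defaults this gives k1 ≈ 2.773, k2 = 0.2, and k3 = 0.5.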

dt

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

epsilon

Declares a numpy array. dtype enforces the dtype. The default dtype is float32. An optional symbolic shape can be given, as a tuple of Dim attributes from the owning class. The shape will be enforced, but no broadcasting will be done. domain declares what values are allowed in this array. It can be any object that can be checked for membership Defaults are checked if they are in the declared domain. For performance reasons this does not happen on every attribute set.

evaluate()[source]

Calculate the simulated BOLD signal.

extended_result_size(input_shape)[source]

Returns the storage size in bytes of the extended result of the analysis; that is, it includes storage of the evaluated result attributes.

input_transformation(time_series, mode)[source]

Perform an operation on the input time-series.
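The three neural_input_transformation modes described above reduce to simple numpy operations; a sketch on a toy (time, state-variables) array:

```python
import numpy as np

# Toy time series: 3 time points, 2 state-variables.
ts = np.array([[0.0, 1.0],
               [2.0, 1.0],
               [1.0, 1.0]])

none_mode = ts[:, 0]                  # 'none': first state-variable
abs_diff = np.abs(np.diff(ts[:, 0]))  # 'abs_diff': |first-order difference|
summed = ts.sum(axis=1)               # 'sum': sum over state-variables
```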

integrator

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

neural_input_transformation

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

nu_0

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

r_0

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

result_shape(input_shape)[source]

Returns the shape of the main result of the fMRI balloon model analysis.

result_size(input_shape)[source]

Returns the storage size in bytes of the main result of the analysis.

tau_f

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

tau_o

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

tau_s

Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

## graph¶

Useful graph analyses.

tvb.analyzers.graph.betweenness_bin(A)[source]

Node betweenness centrality is the fraction of all shortest paths in the network that contain a given node. Nodes with high values of betweenness centrality participate in a large number of shortest paths.

Parameters: A – binary (directed/undirected) connection matrix (array). Returns: BC – vector of node betweenness centrality values.

Notes:

Betweenness centrality may be normalised to the range [0,1] as BC/[(N-1)(N-2)], where N is the number of nodes in the network.

Original Mika Rubinov, UNSW/U Cambridge, 2007-2012 - From BCT 2012-12-04

Reference: [1] Kintali (2008) arXiv:0809.1906v2 [cs.DS] (generalization to directed and disconnected graphs)

Author: Paula Sanz Leon

tvb.analyzers.graph.distance_inv(G)[source]

Compute the inverse shortest path lengths of G.

Parameters: G – binary undirected connection matrix. Returns: D – matrix of inverse distances.
tvb.analyzers.graph.efficiency_bin(A, compute_local_efficiency=False)[source]

Computes global efficiency or local efficiency of a connectivity matrix. The global efficiency is the average of inverse shortest path length, and is inversely related to the characteristic path length.

The local efficiency is the global efficiency computed on the neighborhood of the node, and is related to the clustering coefficient.

Parameters: A – array; binary undirected connectivity matrix. compute_local_efficiency – bool; optional flag to compute local rather than global efficiency. Returns: global efficiency (float) or local efficiency (array).

References: [1] Latora and Marchiori (2001) Phys Rev Lett 87:198701.

Note

Algorithm: algebraic path count

Note

Original: Mika Rubinov, UNSW, 2008-2010 - From BCT 2012-12-04

Note

Tested with Numpy 1.7

Warning

tested against Matlab version... needs indexing improvement

Example:

>>> import numpy as np
>>> A = np.random.rand(5, 5)
>>> E = efficiency_bin(A)
>>> E.shape == (1, )
True


If you want to compute the local efficiency for every node in the network:

>>> E = efficiency_bin(A, compute_local_efficiency=True)
>>> E.shape == (5, 1)
True


Author: Paula Sanz Leon
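Global efficiency as defined above (the average inverse shortest path length) can be checked by hand on a small binary graph. A sketch using plain BFS, not the algebraic path count used by `efficiency_bin`; the helper names are illustrative:

```python
import numpy as np
from collections import deque

def bfs_distances(A, source):
    """Hop distances from source in a binary undirected graph."""
    n = A.shape[0]
    dist = np.full(n, np.inf)
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in np.flatnonzero(A[u]):
            if np.isinf(dist[v]):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(A):
    """Mean of 1/d over all ordered node pairs (1/inf -> 0)."""
    n = A.shape[0]
    inv = 0.0
    for i in range(n):
        d = bfs_distances(A, i)
        d[i] = np.inf  # exclude self-distances
        inv += (1.0 / d).sum()
    return inv / (n * (n - 1))

# Fully connected triangle: every pair at distance 1, so efficiency is 1.
triangle = np.ones((3, 3)) - np.eye(3)
```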

tvb.analyzers.graph.get_components_sizes(A)[source]

Get connected components sizes. Returns the size of the largest component of an undirected graph specified by the binary and undirected connection matrix A.

Parameters: A – array; binary undirected (BU) connectivity matrix. Returns: largest component (float) – size of the largest component. Raises: ValueError – if A is not square.

Warning

Requires NetworkX

Author: Paula Sanz Leon
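The function delegates to NetworkX; the underlying idea can be sketched with a plain BFS flood-fill over the binary matrix (the helper name is hypothetical):

```python
import numpy as np
from collections import deque

def largest_component_size(A):
    """Size of the largest connected component of a binary undirected graph."""
    if A.shape[0] != A.shape[1]:
        raise ValueError("A must be square")
    n = A.shape[0]
    seen = np.zeros(n, dtype=bool)
    best = 0
    for start in range(n):
        if seen[start]:
            continue
        # BFS flood-fill of one component.
        queue, size = deque([start]), 0
        seen[start] = True
        while queue:
            u = queue.popleft()
            size += 1
            for v in np.flatnonzero(A[u]):
                if not seen[v]:
                    seen[v] = True
                    queue.append(v)
        best = max(best, size)
    return best

# Two components: nodes {0, 1, 2} in a chain, node 3 isolated.
A = np.zeros((4, 4))
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1
```

For this matrix the largest component has size 3.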

tvb.analyzers.graph.sequential_random_deletion(white_matter, random_sequence, nor)[source]

A strategy to lesion a connectivity matrix.

A single node is removed at each step until the network is reduced to only 2 nodes. This method represents a structural failure analysis and it should be run several times with different random sequences.

Parameters: white_matter – a TVB Connectivity DataType with a 'weights' attribute. random_sequence – int array; a sequence of random integers indicating which node is deleted at each step. nor – number of nodes of the original connectivity matrix. Returns: node strength (number_of_nodes, number_of_nodes - 2); node degree (number_of_nodes, number_of_nodes - 2); global efficiency (number_of_nodes, ); size of the largest component (number_of_nodes, ).

References: Alstott et al. (2009).

Author: Paula Sanz Leon
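The deletion loop itself is straightforward; a hedged sketch that tracks only node strength (the actual function also records degree, global efficiency, and component size), with an illustrative function name:

```python
import numpy as np

def sequential_random_deletion_sketch(weights, random_sequence):
    """Zero out one node per step and record remaining node strengths."""
    w = weights.copy()
    nor = w.shape[0]
    strength = np.zeros((nor, nor - 2))
    for step, node in enumerate(random_sequence[:nor - 2]):
        w[node, :] = 0.0  # remove the node's connections
        w[:, node] = 0.0
        strength[:, step] = w.sum(axis=1)
    return strength

rng = np.random.default_rng(42)
W = rng.random((6, 6))
order = rng.permutation(6)
S = sequential_random_deletion_sketch(W, order)
```

After step 0 the first deleted node has strength 0, and the loop stops once only 2 nodes remain, giving an output of shape (6, 4) here.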

tvb.analyzers.graph.sequential_targeted_deletion(white_matter, nor)[source]

A strategy to lesion a connectivity matrix.

A single node is removed at each step until the network is reduced to only 2 nodes. At each step different graph metrics are computed (degree, strength and betweenness centrality). The single node with the highest degree, strength or centrality is removed.

Parameters: white_matter – a TVB Connectivity DataType with a 'weights' attribute. nor – number of nodes of the original connectivity matrix. Returns: node strength (number_of_nodes, number_of_nodes - 2) array; node degree (number_of_nodes, number_of_nodes - 2) array; betweenness centrality (number_of_nodes, number_of_nodes - 2) array; global efficiency (number_of_nodes, 3) array; size of the largest component (number_of_nodes, 3) array.

References: Alstott et al. (2009).

Author: Paula Sanz Leon

## ica¶

Perform Independent Component Analysis on a TimeSeries Object and returns an IndependentComponents datatype.

class tvb.analyzers.ica.FastICA(**kwargs)[source]

Takes a TimeSeries datatype (x) and returns the unmixed temporal sources (S) and the estimated mixing matrix (A).

$$x = A S$$

ICA takes time-points as observations and nodes as variables.

It uses the FastICA algorithm implemented in the scikit-learn toolkit, and its intended usage is as a blind source separation method.
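The linear model above can be illustrated with numpy alone. This demonstrates x = A S and its inversion when A is known; FastICA's job is to estimate the un-mixing blindly, which is not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-Gaussian sources, mixed by a known matrix A: x = A s.
t = np.linspace(0, 1, 500)
S = np.vstack([np.sign(np.sin(7 * 2 * np.pi * t)),  # square wave
               rng.laplace(size=500)])              # heavy-tailed noise
A = np.array([[1.0, 0.5],
              [0.3, 2.0]])
X = A @ S

# With A known, unmixing is just inversion; ICA must estimate it blindly.
S_hat = np.linalg.inv(A) @ X
```

Here `S_hat` recovers `S` exactly (up to floating-point error), which is the idealized target of the blind estimation.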

time_series : tvb.analyzers.ica.FastICA.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The timeseries to which the ICA is to be applied.
n_components : tvb.analyzers.ica.FastICA.n_components = Int(field_type=<class ‘int’>, default=None, required=False)
Number of principal components to unmix.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Run FastICA on the given time series data.

extended_result_size(input_shape)[source]

Returns the storage size in bytes of the extended result of the ICA.

n_components

Declares an integer. This is different from Attr(field_type=int): the latter enforces int subtypes, while Int allows all integer types, including numpy ones, that can be safely cast to the declared type according to numpy rules.

result_shape(input_shape)[source]

Returns the shape of the mixing matrix.

result_size(input_shape)[source]

Returns the storage size in bytes of the mixing matrix of the ICA analysis, assuming 64-bit float.

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; and documentation. It resolves to attributes on the instance.

## ica_algorithm¶

tvb.analyzers.ica_algorithm.fastica(X, n_components=None, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, return_X_mean=False, compute_sources=True, return_n_iter=False)[source]

Perform Fast Independent Component Analysis.

Read more in the User Guide.

X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
n_components : int, optional
Number of components to extract. If None no dimension reduction is performed.
algorithm : {‘parallel’, ‘deflation’}, optional
Apply a parallel or deflational FASTICA algorithm.
whiten : boolean, optional
If True perform an initial whitening of the data. If False, the data is assumed to have already been preprocessed: it should be centered, normed and white. Otherwise you will get incorrect results. In this case the parameter n_components will be ignored.
fun : string or function, optional. Default: ‘logcosh’

The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, in the point. The derivative should be averaged along its last dimension. Example:

def my_g(x):
    return x ** 3, np.mean(3 * x ** 2, axis=-1)
fun_args : dictionary, optional
Arguments to send to the functional form. If empty or None and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}
max_iter : int, optional
Maximum number of iterations to perform.
tol : float, optional
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
w_init : (n_components, n_components) array, optional
Initial un-mixing array of dimension (n.comp,n.comp). If None (default) then an array of normal r.v.’s is used.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
return_X_mean : bool, optional
If True, X_mean is returned too.
compute_sources : bool, optional
If False, sources are not computed, but only the rotation matrix. This can save memory when working with big data. Defaults to True.
return_n_iter : bool, optional
Whether or not to return the number of iterations.
K : array, shape (n_components, n_features) | None.
If whiten is ‘True’, K is the pre-whitening matrix that projects data onto the first n_components principal components. If whiten is ‘False’, K is ‘None’.
W : array, shape (n_components, n_components)

Estimated un-mixing matrix. The mixing matrix can be obtained by:

w = np.dot(W, K.T)
A = w.T * (w * w.T).I

S : array, shape (n_samples, n_components) | None
Estimated source matrix
X_mean : array, shape (n_features, )
The mean over features. Returned only if return_X_mean is True.
n_iter : int
If the algorithm is “deflation”, n_iter is the maximum number of iterations run across all components. Else they are just the number of iterations taken to converge. This is returned only when return_n_iter is set to True.

The data matrix X is considered to be a linear combination of non-Gaussian (independent) components, i.e. X = AS, where the columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to 'un-mix' the data by estimating an un-mixing matrix W, where S = W K X.

This implementation was originally made for data of shape [n_features, n_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input.

Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430

## independent_component_analysis¶

class tvb.analyzers.independent_component_analysis.FastICA(n_components=None, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None)[source]

Bases: builtins.object

FastICA: a fast algorithm for Independent Component Analysis.

Read more in the User Guide.

n_components : int, optional
Number of components to use. If none is passed, all are used.
algorithm : {‘parallel’, ‘deflation’}
Apply parallel or deflational algorithm for FastICA.
whiten : boolean, optional
If whiten is false, the data is already considered to be whitened, and no whitening is performed.
fun : string or function, optional. Default: ‘logcosh’

The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, in the point. Example:

def my_g(x):
    return x ** 3, (3 * x ** 2).mean(axis=-1)
fun_args : dictionary, optional
Arguments to send to the functional form. If empty and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}.
max_iter : int, optional
Maximum number of iterations during fit.
tol : float, optional
Tolerance on update at each iteration.
w_init : None or an (n_components, n_components) ndarray
The mixing matrix to be used to initialize the algorithm.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
components_ : 2D array, shape (n_components, n_features)
The unmixing matrix.
mixing_ : array, shape (n_features, n_components)
The mixing matrix.
n_iter_ : int
If the algorithm is “deflation”, n_iter is the maximum number of iterations run across all components. Else they are just the number of iterations taken to converge.

Implementation based on A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430

fit(X, y=None)[source]

Fit the model to X.

X : array-like, shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.

y : Ignored

self

tvb.analyzers.independent_component_analysis.fastica(X, n_components=None, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, return_X_mean=False, compute_sources=True, return_n_iter=False)[source]

Perform Fast Independent Component Analysis.

Read more in the User Guide.

X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
n_components : int, optional
Number of components to extract. If None no dimension reduction is performed.
algorithm : {‘parallel’, ‘deflation’}, optional
Apply a parallel or deflational FASTICA algorithm.
whiten : boolean, optional
If True perform an initial whitening of the data. If False, the data is assumed to have already been preprocessed: it should be centered, normed and white. Otherwise you will get incorrect results. In this case the parameter n_components will be ignored.
fun : string or function, optional. Default: ‘logcosh’

The functional form of the G function used in the approximation to neg-entropy. Could be either ‘logcosh’, ‘exp’, or ‘cube’. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, in the point. The derivative should be averaged along its last dimension. Example:

def my_g(x):
    return x ** 3, np.mean(3 * x ** 2, axis=-1)
fun_args : dictionary, optional
Arguments to send to the functional form. If empty or None and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}
max_iter : int, optional
Maximum number of iterations to perform.
tol : float, optional
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
w_init : (n_components, n_components) array, optional
Initial un-mixing array of dimension (n.comp,n.comp). If None (default) then an array of normal r.v.’s is used.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
return_X_mean : bool, optional
If True, X_mean is returned too.
compute_sources : bool, optional
If False, sources are not computed, but only the rotation matrix. This can save memory when working with big data. Defaults to True.
return_n_iter : bool, optional
Whether or not to return the number of iterations.
K : array, shape (n_components, n_features) | None.
If whiten is ‘True’, K is the pre-whitening matrix that projects data onto the first n_components principal components. If whiten is ‘False’, K is ‘None’.
W : array, shape (n_components, n_components)

Estimated un-mixing matrix. The mixing matrix can be obtained by:

w = np.dot(W, K.T)
A = w.T * (w * w.T).I

S : array, shape (n_samples, n_components) | None
Estimated source matrix
X_mean : array, shape (n_features, )
The mean over features. Returned only if return_X_mean is True.
n_iter : int
If the algorithm is “deflation”, n_iter is the maximum number of iterations run across all components. Else they are just the number of iterations taken to converge. This is returned only when return_n_iter is set to True.

The data matrix X is considered to be a linear combination of non-Gaussian (independent) components, i.e. X = AS, where the columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to 'un-mix' the data by estimating an un-mixing matrix W, where S = W K X.

This implementation was originally made for data of shape [n_features, n_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input.

Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430

## metric_kuramoto_index¶

Filler analyzer: Takes a TimeSeries object and returns a Float.

class tvb.analyzers.metric_kuramoto_index.KuramotoIndex(**kwargs)[source]

Return the Kuramoto synchronization index.

Useful metric for a parameter analysis when the collective brain dynamics represent coupled oscillatory processes.

The order parameters are $$r$$ and $$\psi$$.

$r e^{i\psi} = \frac{1}{N}\,\sum_{k=1}^{N} e^{i\theta_k}$

The first is the phase coherence of the population of oscillators (KSI) and the second is the average phase.

A value of $$r=0$$ means no coherence among the oscillators.

Input: TimeSeries DataType

Output: Float

This is a crude indicator of synchronization among nodes over the entire network.

#NOTE: For the time being it is meant to be another global metric. However, a dedicated TimeSeries DataType for this analyzer should be considered.
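Given instantaneous phases, the order parameter can be computed directly in numpy (a standalone sketch, not the TVB implementation, which extracts phases from the input TimeSeries):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical instantaneous phases: 1000 time points x 4 oscillators
theta = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 4))

# Complex order parameter at each time point
z = np.mean(np.exp(1j * theta), axis=1)
r = np.abs(z)      # phase coherence (KSI), in [0, 1]
psi = np.angle(z)  # average phase

assert np.all(r <= 1.0)
# Identical phases give full coherence, r == 1
assert np.isclose(np.abs(np.exp(1j * np.zeros(4)).mean()), 1.0)
```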

time_series : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The TimeSeries for which the metric(s) will be computed.
start_point : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.start_point = Float(field_type=<class ‘float’>, default=500.0, required=False)
The start point determines how many points of the TimeSeries will be discarded before computing the metric. By default it drops the first 500 ms.
segment : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.segment = Int(field_type=<class ‘int’>, default=4, required=False)
Divide the input time-series into discrete equally sized sequences and use the last segment to compute the metric. It is only used when the start point is larger than the time-series length.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Kuramoto Synchronization Index

## metric_proxy_metastability¶

Filler analyzer: Takes a TimeSeries object and returns two Floats.

These metrics are described and used in:

Hellyer et al. The Control of Global Brain Dynamics: Opposing Actions of Frontoparietal Control and Default Mode Networks on Attention. The Journal of Neuroscience, January 8, 2014, 34(2):451– 461

Proxy of spatial coherence (V): the mean absolute deviation of the node signals from the instantaneous spatial mean.

Proxy metastability (M): the variability in spatial coherence of the signal globally or locally (within a network) over time.

Proxy synchrony (S) : the reciprocal of mean spatial variance across time.

class tvb.analyzers.metric_proxy_metastability.ProxyMetastabilitySynchrony(**kwargs)[source]

Subtract the mean time-series and compute the metrics.

Input: TimeSeries DataType

Output: Float, Float

The two metrics given by this analyzer are proxies for metastability and synchrony. The underlying dynamical model used in the article was the Kuramoto model.

$\begin{split}V(t) &= \frac{1}{N} \sum_{i=1}^{N} |S_i(t) - <S(t)>| \\ M(t) &= \sqrt{E[V(t)^{2}]-(E[V(t)])^{2}} \\ S(t) &= \frac{1}{\bar{V(t)}}\end{split}$
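The three formulas translate directly to numpy on a synthetic (time x nodes) array (illustrative only; the analyzer additionally handles start_point cropping and mean removal):

```python
import numpy as np

rng = np.random.default_rng(0)
S_ts = rng.standard_normal((500, 10))  # synthetic signal: time points x nodes

# V(t): mean absolute deviation of node signals from the spatial mean
V = np.mean(np.abs(S_ts - S_ts.mean(axis=1, keepdims=True)), axis=1)

M = np.sqrt(np.mean(V ** 2) - np.mean(V) ** 2)  # metastability: std of V over time
S_metric = 1.0 / V.mean()                       # synchrony: reciprocal of mean V

assert V.shape == (500,)
assert M >= 0.0 and S_metric > 0.0
```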
time_series : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The TimeSeries for which the metric(s) will be computed.
start_point : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.start_point = Float(field_type=<class ‘float’>, default=500.0, required=False)
The start point determines how many points of the TimeSeries will be discarded before computing the metric. By default it drops the first 500 ms.
segment : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.segment = Int(field_type=<class ‘int’>, default=4, required=False)
Divide the input time-series into discrete equally sized sequences and use the last segment to compute the metric. It is only used when the start point is larger than the time-series length.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Compute the zero centered variance of node variances for the time_series.

tvb.analyzers.metric_proxy_metastability.remove_mean(x, axis)[source]

Remove mean from numpy array along axis

## metric_variance_global¶

Filler analyzer: Takes a TimeSeries object and returns a Float.

class tvb.analyzers.metric_variance_global.GlobalVariance(**kwargs)[source]

Zero-centres all the time-series and then calculates the variance over all data points.

Input: TimeSeries DataType

Output: Float

This is a crude indicator of “excitability” or oscillation amplitude of the models over the entire network.
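The computation amounts to two numpy calls (a sketch of the described behaviour, not the TVB source):

```python
import numpy as np

rng = np.random.default_rng(1)
ts = rng.standard_normal((1000, 8))  # time points x nodes

zero_centred = ts - ts.mean(axis=0)          # zero-centre each node's time-series
global_variance = float(zero_centred.var())  # variance over all data points

# With per-node means removed, the pooled mean is zero, so the variance
# equals the mean of the squares
assert np.isclose(global_variance, np.mean(zero_centred ** 2))
```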

time_series : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The TimeSeries for which the metric(s) will be computed.
start_point : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.start_point = Float(field_type=<class ‘float’>, default=500.0, required=False)
The start point determines how many points of the TimeSeries will be discarded before computing the metric. By default it drops the first 500 ms.
segment : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.segment = Int(field_type=<class ‘int’>, default=4, required=False)
Divide the input time-series into discrete equally sized sequences and use the last segment to compute the metric. It is only used when the start point is larger than the time-series length.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Compute the zero centered global variance of the time_series.

## metric_variance_of_node_variance¶

Filler analyzer: Takes a TimeSeries object and returns a Float.

class tvb.analyzers.metric_variance_of_node_variance.VarianceNodeVariance(**kwargs)[source]

Zero-centres all the time-series, calculates the variance for each node time-series and returns the variance of the node variances.

Input: TimeSeries DataType

Output: Float

This is a crude indicator of how different the “excitability” of the model is from node to node.
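Equivalently, in numpy (a sketch of the described behaviour, not the TVB source):

```python
import numpy as np

rng = np.random.default_rng(2)
ts = rng.standard_normal((1000, 8))  # time points x nodes

centred = ts - ts.mean(axis=0)
node_variances = centred.var(axis=0)   # one variance per node
metric = float(node_variances.var())   # variance of the node variances

assert node_variances.shape == (8,)
assert metric >= 0.0
```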

time_series : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The TimeSeries for which the metric(s) will be computed.
start_point : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.start_point = Float(field_type=<class ‘float’>, default=500.0, required=False)
The start point determines how many points of the TimeSeries will be discarded before computing the metric. By default it drops the first 500 ms.
segment : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.segment = Int(field_type=<class ‘int’>, default=4, required=False)
Divide the input time-series into discrete equally sized sequences and use the last segment to compute the metric. It is only used when the start point is larger than the time-series length.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Compute the zero centered variance of node variances for the time_series.

## metrics_base¶

class tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm(**kwargs)[source]
This is a base class for all metrics on TimeSeries datatypes. Metric means an algorithm computing a single value for an entire TimeSeries.
time_series : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The TimeSeries for which the metric(s) will be computed.
start_point : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.start_point = Float(field_type=<class ‘float’>, default=500.0, required=False)
The start point determines how many points of the TimeSeries will be discarded before computing the metric. By default it drops the first 500 ms.
segment : tvb.analyzers.metrics_base.BaseTimeseriesMetricAlgorithm.segment = Int(field_type=<class ‘int’>, default=4, required=False)
Divide the input time-series into discrete equally sized sequences and use the last segment to compute the metric. It is only used when the start point is larger than the time-series length.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

This method must be implemented in each subclass and should document the algorithm it computes.

Returns: single numeric value or a dictionary (displayLabel: numeric value) to be persisted.
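The start_point and segment attributes imply cropping logic along these lines (an assumed reconstruction from the attribute descriptions; crop_time_series is a hypothetical helper, not part of TVB):

```python
import numpy as np

def crop_time_series(data, sample_period, start_point=500.0, segment=4):
    """Drop points before start_point; if start_point exceeds the series
    length, fall back to the last of `segment` equally sized chunks."""
    start_idx = int(start_point / sample_period)
    if start_idx < data.shape[0]:
        return data[start_idx:]
    seg_len = data.shape[0] // segment
    return data[-seg_len:]

ts = np.arange(1000.0).reshape(-1, 1)  # 1000 points at 1 ms per sample
assert crop_time_series(ts, 1.0).shape[0] == 500                      # drops first 500 ms
assert crop_time_series(ts, 1.0, start_point=5000.0).shape[0] == 250  # last quarter
```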
segment

Declares an integer. Unlike Attr(field_type=int), which enforces int subtypes, this allows all integer types, including numpy ones, that can be safely cast to the declared type according to numpy rules.

start_point

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

## node_coherence¶

Compute cross coherence between all nodes in a time series.

class tvb.analyzers.node_coherence.NodeCoherence(**kwargs)[source]

time_series : tvb.analyzers.node_coherence.NodeCoherence.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The timeseries to which the FFT is to be applied.
nfft : tvb.analyzers.node_coherence.NodeCoherence.nfft = Int(field_type=<class ‘int’>, default=256, required=True)
Should be a power of 2...

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Evaluate coherence on time series.

extended_result_size(input_shape)[source]

Returns the storage size in Bytes of the extended result of the FFT. That is, it includes storage of the evaluated FourierSpectrum attributes such as power, phase, amplitude, etc.

nfft

Declares an integer. Unlike Attr(field_type=int), which enforces int subtypes, this allows all integer types, including numpy ones, that can be safely cast to the declared type according to numpy rules.

result_shape(input_shape)[source]

Returns the shape of the main result of NodeCoherence.

result_size(input_shape)[source]

Returns the storage size in Bytes of the main result of NodeCoherence.

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

tvb.analyzers.node_coherence.coherence(data, sample_rate, nfft=256, imag=False)[source]

Vectorized coherence calculation by windowed FFT

tvb.analyzers.node_coherence.coherence_mlab(data, sample_rate, nfft=256)[source]
tvb.analyzers.node_coherence.hamming(M, sym=True)[source]

The M-point Hamming window. From scipy.signal
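For intuition, the pairwise quantity the analyzer computes across all node pairs can be estimated for two signals with scipy.signal.coherence (a standalone illustration, not the TVB implementation):

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                          # sampling rate in Hz
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 40.0 * t)             # common 40 Hz component
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=256)      # 256-point segments (cf. nfft)
peak_freq = f[np.argmax(Cxy)]

assert abs(peak_freq - 40.0) < 5.0                # coherence peaks near 40 Hz
```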

## node_complex_coherence¶

Calculate the cross spectrum and complex coherence on a TimeSeries datatype and return a ComplexCoherence datatype.

class tvb.analyzers.node_complex_coherence.NodeComplexCoherence(**kwargs)[source]

A class for calculating the FFT of a TimeSeries and returning a ComplexCoherenceSpectrum datatype.

This algorithm is based on the matlab function data2cs_event.m written by Guido Nolte:
 [Freyer_2012] Freyer, F.; Reinacher, M.; Nolte, G.; Dinse, H. R. and Ritter, P. Repetitive tactile stimulation changes resting-state functional connectivity-implications for treatment of sensorimotor decline. Front Hum Neurosci, Bernstein Focus State Dependencies of Learning and Bernstein Center for Computational Neuroscience Berlin, Germany., 2012, 6, 144

Input: originally the input could be 2D (tpts x nodes/channels), and it was possible to give a 3D array (e.g., tpts x nodes/channels x trials) via the segment_length attribute. The current TVB implementation can handle 4D or 2D TimeSeries datatypes. Be warned: a 4D TimeSeries will be averaged and squeezed.

Output: (main arrays) - the cross-spectrum - the complex coherence, from which the imaginary part can be extracted

By default the time series is segmented into 1-second epoch blocks and 0.5-second segments with 50% overlap, to which a Hanning window is applied.
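With the default values, each 1000 ms epoch yields three 500 ms segments shifted by 250 ms. The segment count follows from this arithmetic (an assumed reconstruction; n_segments is a hypothetical helper, not part of TVB):

```python
def n_segments(epoch_length, segment_length, segment_shift):
    """Number of (possibly overlapping) segments that fit in one epoch."""
    return int((epoch_length - segment_length) / segment_shift) + 1

# Defaults: 1000 ms epochs, 500 ms segments, 250 ms shift (50% overlap)
assert n_segments(1000.0, 500.0, 250.0) == 3
# No overlap: shift equal to segment length
assert n_segments(1000.0, 500.0, 500.0) == 2
```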

time_series : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The timeseries for which the CrossCoherence and ComplexCoherence is to be computed.
epoch_length : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.epoch_length = Float(field_type=<class ‘float’>, default=1000.0, required=False)
In general for lengthy EEG recordings (~30 min), the timeseries are divided into equally sized segments (~ 20-40s). These contain the event that is to be characterized by means of the cross coherence. Additionally each epoch block will be further divided into segments to which the FFT will be applied.
segment_length : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.segment_length = Float(field_type=<class ‘float’>, default=500.0, required=False)
The timeseries can be segmented into equally sized blocks (overlapping if necessary). The segment length determines the frequency resolution of the resulting power spectra – longer windows produce finer frequency resolution.
segment_shift : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.segment_shift = Float(field_type=<class ‘float’>, default=250.0, required=False)
Time length by which neighboring segments are shifted. e.g. segment shift = segment_length / 2 means 50% overlapping segments.
window_function : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.window_function = Attr(field_type=<class ‘str’>, default=’hanning’, required=False)
Windowing functions can be applied before the FFT is performed. Default is hanning, possibilities are: ‘hamming’; ‘bartlett’; ‘blackman’; and ‘hanning’. See, numpy.<function_name>.
average_segments : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.average_segments = Attr(field_type=<class ‘bool’>, default=True, required=False)
Flag. If True, compute the mean Cross Spectrum across segments.
subtract_epoch_average : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.subtract_epoch_average = Attr(field_type=<class ‘bool’>, default=True, required=False)
Flag. If True and the number of epochs is > 1, subtract the mean across epochs before computing the complex coherence.
zeropad : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.zeropad = Int(field_type=<class ‘int’>, default=0, required=False)
Adds n zeros at the end of each segment and at the end of window_function. It is not yet functional.
detrend_ts : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.detrend_ts = Attr(field_type=<class ‘bool’>, default=False, required=False)
Flag. If True removes linear trend along the time dimension before applying FFT.
max_freq : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.max_freq = Float(field_type=<class ‘float’>, default=1024.0, required=False)
Maximum frequency points (e.g. 32., 64., 128.) represented in the output. Default is segment_length / 2 + 1.
npat : tvb.analyzers.node_complex_coherence.NodeComplexCoherence.npat = Float(field_type=<class ‘float’>, default=1.0, required=False)
This attribute appears to be related to an input projection matrix... Which is not yet implemented

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

average_segments

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

detrend_ts

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

epoch_length

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

evaluate()[source]

Calculate the FFT, Cross Coherence and Complex Coherence of time_series broken into (possibly) epochs and segments of length epoch_length and segment_length respectively, filtered by window_function.

extended_result_size(input_shape, max_freq, epoch_length, segment_length, segment_shift, sample_period, zeropad, average_segments)[source]

Returns the storage size in Bytes of the extended result of the ComplexCoherence. That is, it includes storage of the evaluated ComplexCoherence attributes such as ...

max_freq

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

npat

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

static result_shape(input_shape, max_freq, epoch_length, segment_length, segment_shift, sample_period, zeropad, average_segments)[source]

Returns the shape of the main result and the average over epochs

result_size(input_shape, max_freq, epoch_length, segment_length, segment_shift, sample_period, zeropad, average_segments)[source]

Returns the storage size in Bytes of the main result (complex array) of the ComplexCoherence

segment_length

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

segment_shift

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

subtract_epoch_average

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

window_function

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

zeropad

Declares an integer. Unlike Attr(field_type=int), which enforces int subtypes, this allows all integer types, including numpy ones, that can be safely cast to the declared type according to numpy rules.

## pca¶

Perform Principal Component Analysis (PCA) on a TimeSeries datatype and return a PrincipalComponents datatype.

class tvb.analyzers.pca.PCA(**kwargs)[source]

Return principal component weights and the fraction of the variance that they explain.

PCA takes time-points as observations and nodes as variables.

NOTE: The TimeSeries must be longer (more time-points) than the number of nodes – mostly a problem for TimeSeriesSurface datatypes, which, if sampled at 1024Hz, would need to be greater than 16 seconds long.
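The decomposition described here (time-points as observations, nodes as variables) can be sketched with a plain SVD (illustrative; TVB delegates to an adapted Matplotlib routine, PCA_mlab):

```python
import numpy as np

rng = np.random.default_rng(3)
ts = rng.standard_normal((256, 5))   # time points (observations) x nodes (variables)

centred = ts - ts.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
weights = Vt                          # component weights, one row per component
fractions = s ** 2 / np.sum(s ** 2)   # fraction of variance each explains

assert weights.shape == (5, 5)
assert np.isclose(fractions.sum(), 1.0)
assert np.all(np.diff(fractions) <= 1e-12)  # sorted in decreasing order
```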
time_series : tvb.analyzers.pca.PCA.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The timeseries to which the PCA is to be applied. NOTE: The TimeSeries must be longer (more time-points) than the number of nodes – mostly a problem for surface time-series, which, if sampled at 1024Hz, would need to be greater than 16 seconds long.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Compute the temporal covariance between nodes in the time_series.

extended_result_size(input_shape)[source]

Returns the storage size in Bytes of the extended result of the PCA. That is, it includes storage of the evaluated PrincipalComponents attributes such as norm_source, component_time_series, etc.

result_shape(input_shape)[source]

Returns the shape of the main result of the PCA analysis – component weights matrix and a vector of fractions.

result_size(input_shape)[source]

Returns the storage size in Bytes of the results of the PCA analysis.

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

class tvb.analyzers.pca.PCA_mlab(data)[source]

Bases: builtins.object

The code for this class has been taken and adapted from Matplotlib 2.1.0 (Aug 2019).

## wavelet¶

Calculate a wavelet transform on a TimeSeries datatype and return a WaveletSpectrum datatype.

class tvb.analyzers.wavelet.ContinuousWaveletTransform(**kwargs)[source]

A class for calculating the wavelet transform of a TimeSeries object of TVB and returning a WaveletSpectrum object. The sampling period and frequency range of the result can be specified. The mother wavelet can also be specified... (So far, only Morlet.)

References:
 [TBetal_1996] C. Tallon-Baudry et al, Stimulus Specificity of Phase-Locked and Non-Phase-Locked 40 Hz Visual Responses in Human., J Neurosci 16(13):4240-4249, 1996.
 [Mallat_1999] S. Mallat, A wavelet tour of signal processing., book, Academic Press, 1999.
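A single Morlet coefficient can be sketched in numpy; the width formula below assumes q_ratio is the ratio of centre frequency to bandwidth, as described for the q_ratio attribute (an illustrative reconstruction, not the TVB normalisation):

```python
import numpy as np

def morlet(t, f, q_ratio=5.0):
    """Complex Morlet wavelet centred at frequency f (Hz)."""
    sigma_t = q_ratio / (2.0 * np.pi * f)     # temporal width from the Q ratio
    return (np.exp(2j * np.pi * f * t)
            * np.exp(-t ** 2 / (2.0 * sigma_t ** 2)))

fs = 250.0
t = np.arange(-1.0, 1.0, 1.0 / fs)
signal = np.cos(2 * np.pi * 10.0 * t)

# The wavelet coefficient is largest where the centre frequency matches
c10 = abs(np.sum(signal * np.conj(morlet(t, 10.0))) / fs)
c40 = abs(np.sum(signal * np.conj(morlet(t, 40.0))) / fs)
assert c10 > c40
```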
time_series : tvb.analyzers.wavelet.ContinuousWaveletTransform.time_series = Attr(field_type=<class ‘tvb.datatypes.time_series.TimeSeries’>, default=None, required=True)
The timeseries to which the wavelet is to be applied.
mother : tvb.analyzers.wavelet.ContinuousWaveletTransform.mother = Attr(field_type=<class ‘str’>, default=’morlet’, required=True)
The mother wavelet function used in the transform. Default is ‘morlet’, possibilities are: ‘morlet’...
sample_period : tvb.analyzers.wavelet.ContinuousWaveletTransform.sample_period = Float(field_type=<class ‘float’>, default=7.8125, required=True)
The sampling period of the computed wavelet spectrum. NOTE: This should be an integral multiple of the sampling period of the source time series; otherwise the actual resulting sample period will be the first correct value below that requested.
frequencies : tvb.analyzers.wavelet.ContinuousWaveletTransform.frequencies = Attr(field_type=<class ‘tvb.basic.neotraits._attr.Range’>, default=Range(lo=0.008, hi=0.06, step=0.002), required=True)
The frequency resolution and range returned. Requested frequencies are converted internally into appropriate scales.
normalisation : tvb.analyzers.wavelet.ContinuousWaveletTransform.normalisation = Attr(field_type=<class ‘str’>, default=’energy’, required=True)
The type of normalisation for the resulting wavelet spectrum. Default is ‘energy’, options are: ‘energy’; ‘gabor’.
q_ratio : tvb.analyzers.wavelet.ContinuousWaveletTransform.q_ratio = Float(field_type=<class ‘float’>, default=5.0, required=True)
NFC. Must be greater than 5. Ratios of the center frequencies to bandwidths.

gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class ‘uuid.UUID’>, default=None, required=True)

evaluate()[source]

Calculate the continuous wavelet transform of time_series.

extended_result_size(input_shape, input_sample_period)[source]

Returns the storage size in Bytes of the extended result of the continuous wavelet transform. That is, it includes storage of the evaluated WaveletCoefficients attributes such as power, phase, amplitude, etc.

frequencies

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

mother

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

normalisation

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.

q_ratio

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

result_shape(input_shape, input_sample_period)[source]

Returns the shape of the main result (complex array) of the continuous wavelet transform.

result_size(input_shape, input_sample_period)[source]

Returns the storage size in Bytes of the main result (complex array) of the continuous wavelet transform.

sample_period

Declares a float. Unlike Attr(field_type=float), which enforces float subtypes, this allows any type that can be safely cast to the declared float type according to numpy rules.

Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.

time_series

An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value may be missing; documentation. It will resolve to attributes on the instance.