analyzers Package¶
fft¶
Calculate an FFT on a TimeSeries DataType and return a FourierSpectrum DataType.
- tvb.analyzers.fft.SUPPORTED_WINDOWING_FUNCTIONS = {'bartlett': <function bartlett>, 'blackman': <function blackman>, 'hamming': <function hamming>, 'hanning': <function hanning>}¶
A module for calculating the FFT of a TimeSeries object of TVB and returning a FourierSpectrum object. A segment length and windowing function can be optionally specified. By default the time series is segmented into 1 second blocks and no windowing function is applied.
- tvb.analyzers.fft.compute_fast_fourier_transform(time_series, segment_length, window_function, detrend)[source]¶
# type: (TimeSeries, float, function, bool) -> FourierSpectrum Calculate the FFT of time_series broken into segments of length segment_length and filtered by window_function.
Parameters¶
time_series : TimeSeries The TimeSeries to which the FFT is to be applied.
segment_length : float The segment length determines the frequency resolution of the resulting power spectra; longer windows produce finer frequency resolution.
window_function : str Windowing functions can be applied before the FFT is performed. Default is None; possibilities are 'hamming', 'bartlett', 'blackman' and 'hanning'. See numpy.<function_name>.
detrend : bool Default is True; False means no detrending is performed on the time series.
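For orientation, the segmentation, detrending and windowing pipeline these parameters describe can be sketched in plain NumPy/SciPy. This is an illustrative re-implementation under assumed (time, nodes) array conventions, not the TVB function itself:

import numpy
import scipy.signal

def segmented_fft_sketch(data, sample_period, segment_length=1.0, window_name=None, detrend=True):
    # data: (time, nodes) array; sample_period and segment_length in seconds
    seg_pts = int(round(segment_length / sample_period))
    n_seg = data.shape[0] // seg_pts
    segs = data[:n_seg * seg_pts].reshape(n_seg, seg_pts, -1)
    if detrend:
        segs = scipy.signal.detrend(segs, axis=1)      # remove linear trend per segment
    if window_name is not None:
        # window names map onto numpy functions, e.g. numpy.hanning
        segs = segs * getattr(numpy, window_name)(seg_pts)[None, :, None]
    return numpy.fft.fft(segs, axis=1)                 # one spectrum per segment and node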
fmri_balloon¶
Implementation of different BOLD signal models. Four different models are distinguished:
CBM_N: Classical BOLD Model Non-linear
CBM_L: Classical BOLD Model Linear
RBM_N: Revised BOLD Model Non-linear (default)
RBM_L: Revised BOLD Model Linear
Classical means that the coefficients used to compute the BOLD signal are derived as described in [Obata2004]. Revised coefficients are defined in [Stephan2007].
References:
[Stephan2007] Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ (2007) Comparing hemodynamic models with DCM. NeuroImage 38: 387-401.
[Obata2004] Obata, T.; Liu, T. T.; Miller, K. L.; Luh, W. M.; Wong, E. C.; Frank, L. R. & Buxton, R. B. (2004) Discrepancies between BOLD and flow dynamics in primary and supplementary motor areas: application of the balloon model to the interpretation of BOLD transients. NeuroImage, 21:144-153.
- class tvb.analyzers.fmri_balloon.BalloonModel(**kwargs)[source]¶
Bases: HasTraits
Traited class [tvb.analyzers.fmri_balloon.BalloonModel]¶
A class for calculating the simulated BOLD signal given a TimeSeries object of TVB and returning another TimeSeries object.
The haemodynamic model parameters are based on constants for a 1.5 T scanner.
Attributes declared¶
- time_series : tvb.analyzers.fmri_balloon.BalloonModel.time_series = Attr(field_type=<class 'tvb.datatypes.time_series.TimeSeries'>, default=None, required=True)
The timeseries that represents the input neural activity.
- integrator : tvb.analyzers.fmri_balloon.BalloonModel.integrator = Attr(field_type=<class 'tvb.simulator.integrators.Integrator'>, default=<class 'tvb.simulator.integrators.HeunDeterministic'>, required=True)
A tvb.simulator.Integrator object which is an integration scheme with supporting attributes such as integration step size and noise specification for stochastic methods. It is used to compute the time courses of the balloon model state variables.
- bold_model : tvb.analyzers.fmri_balloon.BalloonModel.bold_model = EnumAttr(field_type=<enum 'BoldModels'>, default=<BoldModels.NONLINEAR: 'nonlinear'>, required=True)
Select the set of equations for the BOLD model.
- RBM : tvb.analyzers.fmri_balloon.BalloonModel.RBM = Attr(field_type=<class 'bool'>, default=True, required=True)
Select classical vs revised BOLD model (CBM or RBM). Coefficients k1, k2 and k3 will be derived accordingly.
- normalize_neural_input : tvb.analyzers.fmri_balloon.BalloonModel.normalize_neural_input = Attr(field_type=<class 'bool'>, default=False, required=True)
Set if the mean should be subtracted from the neural input.
- neural_input_transformation : tvb.analyzers.fmri_balloon.BalloonModel.neural_input_transformation = EnumAttr(field_type=<enum 'NeuralInputTransformations'>, default=<NeuralInputTransformations.NONE: 'none'>, required=True)
This represents the operation to perform on the state-variable(s) of the model used to generate the input TimeSeries.
none takes the first state-variable as neural input; abs_diff is the absolute value of the derivative (first order difference) of the first state-variable; sum sums all the state-variables of the input TimeSeries.
- tau_s : tvb.analyzers.fmri_balloon.BalloonModel.tau_s = Float(field_type=<class 'float'>, default=1.54, required=True)
Balloon model parameter. Time of signal decay (s)
- tau_f : tvb.analyzers.fmri_balloon.BalloonModel.tau_f = Float(field_type=<class 'float'>, default=1.44, required=True)
Balloon model parameter. Time of flow-dependent elimination or feedback regulation (s).
- tau_o : tvb.analyzers.fmri_balloon.BalloonModel.tau_o = Float(field_type=<class 'float'>, default=0.98, required=True)
Balloon model parameter. Haemodynamic transit time (s). The average time blood takes to traverse the venous compartment. It is the ratio of resting blood volume (V0) to resting blood flow (F0).
- alpha : tvb.analyzers.fmri_balloon.BalloonModel.alpha = Float(field_type=<class 'float'>, default=0.32, required=True)
Balloon model parameter. Stiffness parameter. Grubb’s exponent.
- TE : tvb.analyzers.fmri_balloon.BalloonModel.TE = Float(field_type=<class 'float'>, default=0.04, required=True)
BOLD parameter. Echo time (s).
- V0 : tvb.analyzers.fmri_balloon.BalloonModel.V0 = Float(field_type=<class 'float'>, default=4.0, required=True)
BOLD parameter. Resting blood volume fraction.
- E0 : tvb.analyzers.fmri_balloon.BalloonModel.E0 = Float(field_type=<class 'float'>, default=0.4, required=True)
BOLD parameter. Resting oxygen extraction fraction.
- epsilon : tvb.analyzers.fmri_balloon.BalloonModel.epsilon = NArray(label=r'\epsilon', dtype=float64, default=array([0.5]), dim_names=(), ndim=None, required=True)
BOLD parameter. Ratio of intra- and extravascular signals. In principle this parameter could be derived from empirical data and spatialized.
- nu_0 : tvb.analyzers.fmri_balloon.BalloonModel.nu_0 = Float(field_type=<class 'float'>, default=40.3, required=True)
BOLD parameter. Frequency offset at the outer surface of magnetized vessels (Hz).
- r_0 : tvb.analyzers.fmri_balloon.BalloonModel.r_0 = Float(field_type=<class 'float'>, default=25.0, required=True)
BOLD parameter. Slope r0 of intravascular relaxation rate (Hz). Only used for revised coefficients.
- gid : tvb.basic.neotraits._core.HasTraits.gid = Attr(field_type=<class 'uuid.UUID'>, default=None, required=True)
- E0¶
Declares a float. This is different from Attr(field_type=float). The former enforces float subtypes. This allows any type that can be safely cast to the declared float type according to numpy rules.
Reading and writing this attribute is slower than a plain python attribute. In performance sensitive code you might want to use plain python attributes or even better local variables.
- RBM¶
An Attr declares the following about the attribute it describes: the type; a default value shared by all instances; whether the value might be missing; documentation. It will resolve to attributes on the instance.
- TE¶
- V0¶
- alpha¶
- balloon_dfun(state_variables, neural_input, local_coupling=0.0)[source]¶
The Balloon model equations. See Eqs. (4-10) in [Stephan2007].
.. math::
    \frac{ds}{dt} &= x - \kappa\,s - \gamma\,(f-1) \\
    \frac{df}{dt} &= s \\
    \frac{dv}{dt} &= \frac{1}{\tau_o}\,(f - v^{1/\alpha}) \\
    \frac{dq}{dt} &= \frac{1}{\tau_o}\left(f\,\frac{1-(1-E_0)^{1/f}}{E_0} - v^{1/\alpha}\,\frac{q}{v}\right) \\
    \kappa &= \frac{1}{\tau_s} \\
    \gamma &= \frac{1}{\tau_f}
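A direct NumPy transcription of these equations, using the parameter defaults listed above (a sketch for orientation; in TVB the state is integrated by the configured integrator):

import numpy as np

def balloon_dfun_sketch(state, x, tau_s=1.54, tau_f=1.44, tau_o=0.98, alpha=0.32, E0=0.4):
    # state: array [s, f, v, q]; x: neural input
    s, f, v, q = state
    kappa = 1.0 / tau_s                      # rate of signal decay
    gamma = 1.0 / tau_f                      # rate of flow-dependent elimination
    ds = x - kappa * s - gamma * (f - 1.0)
    df = s
    dv = (f - v ** (1.0 / alpha)) / tau_o
    dq = (f * (1.0 - (1.0 - E0) ** (1.0 / f)) / E0 - v ** (1.0 / alpha) * q / v) / tau_o
    return np.array([ds, df, dv, dq])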
- bold_model¶
- epsilon¶
Declares a numpy array. dtype enforces the dtype. The default dtype is float64. An optional symbolic shape can be given, as a tuple of Dim attributes from the owning class. The shape will be enforced, but no broadcasting will be done. domain declares what values are allowed in this array. It can be any object that can be checked for membership. Defaults are checked if they are in the declared domain. For performance reasons this does not happen on every attribute set.
- extended_result_size(input_shape)[source]¶
Returns the storage size in Bytes of the extended result of the …. That is, it includes storage of the evaluated … attributes such as …, etc.
- integrator¶
- neural_input_transformation¶
- normalize_neural_input¶
- nu_0¶
- r_0¶
- tau_f¶
- tau_o¶
- tau_s¶
- time_series¶
graph¶
Useful graph analyses.
- tvb.analyzers.graph.betweenness_bin(A)[source]¶
Node betweenness centrality is the fraction of all shortest paths in the network that contain a given node. Nodes with high values of betweenness centrality participate in a large number of shortest paths.
- Parameters:
A – binary (directed/undirected) connection matrix (array)
- Returns:
BC: node betweenness centrality vector.
Notes:
Betweenness centrality may be normalised to the range [0,1] as BC/[(N-1)(N-2)], where N is the number of nodes in the network.
Original: Mika Rubinov, UNSW/U Cambridge, 2007-2012 - From BCT 2012-12-04
Reference: [1] Kintali (2008) arXiv:0809.1906v2 [cs.DS] (generalization to directed and disconnected graphs)
Author: Paula Sanz Leon
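A usage sketch; the NetworkX cross-check is an assumption of ours (networkx >= 2.6) and applies the normalisation from the note above:

import numpy as np
import networkx as nx
from tvb.analyzers.graph import betweenness_bin

A = (np.random.rand(10, 10) < 0.3).astype(int)       # binary directed connection matrix
np.fill_diagonal(A, 0)
N = A.shape[0]
bc = betweenness_bin(A) / ((N - 1) * (N - 2))        # normalise to [0, 1] as noted above
G = nx.from_numpy_array(A, create_using=nx.DiGraph)
bc_nx = nx.betweenness_centrality(G)                 # NetworkX applies the same normalisation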
- tvb.analyzers.graph.distance_inv(G)[source]¶
Compute the inverse shortest path lengths of G.
- Parameters:
G – binary undirected connection matrix
- Returns:
D: matrix of inverse distances
- tvb.analyzers.graph.efficiency_bin(A, compute_local_efficiency=False)[source]¶
Computes global efficiency or local efficiency of a connectivity matrix. The global efficiency is the average of inverse shortest path length, and is inversely related to the characteristic path length.
The local efficiency is the global efficiency computed on the neighborhood of the node, and is related to the clustering coefficient.
- Parameters:
A – array; binary undirected connectivity matrix.
compute_local_efficiency – bool, optional flag to compute either local or global efficiency of the network.
- Returns:
global efficiency (float)
local efficiency (array)
References: [1] Latora and Marchiori (2001) Phys Rev Lett 87:198701.
Note
Algorithm: algebraic path count
Note
Original: Mika Rubinov, UNSW, 2008-2010 - From BCT 2012-12-04
Note
Tested with Numpy 1.7
Warning
tested against Matlab version… needs indexing improvement
Example:
>>> import numpy as np
>>> A = np.random.rand(5, 5)
>>> E = efficiency_bin(A)
>>> E.shape == (1,)
True
If you want to compute the local efficiency for every node in the network:
>>> E = efficiency_bin(A, compute_local_efficiency=True)
>>> E.shape == (5, 1)
True
Author: Paula Sanz Leon
- tvb.analyzers.graph.get_components_sizes(A)[source]¶
Get connected components sizes. Returns the size of the largest component of an undirected graph specified by the binary and undirected connection matrix A.
- Parameters:
A – array - binary undirected (BU) connectivity matrix.
- Returns:
largest component (float): size of the largest component
- Raises:
ValueError - if A is not square.
Warning
Requires NetworkX
Author: Paula Sanz Leon
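Since the helper requires NetworkX anyway, the same quantity can be sketched directly (illustrative, not the TVB implementation):

import numpy as np
import networkx as nx

A = np.zeros((6, 6))
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1.0   # a 3-node chain plus three isolated nodes
G = nx.from_numpy_array(A)
largest = max(len(c) for c in nx.connected_components(G))   # -> 3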
- tvb.analyzers.graph.sequential_random_deletion(white_matter, random_sequence, nor)[source]¶
A strategy to lesion a connectivity matrix.
A single node is removed at each step until the network is reduced to only 2 nodes. This method represents a structural failure analysis and it should be run several times with different random sequences.
- Parameters:
white_matter – tvb Connectivity DataType (yes, it's an example for TVB!); a connectivity DataType that has a 'weights' attribute.
random_sequence – int array; a sequence of random integers indicating which nodes will be deleted at each step.
nor – number of nodes of the original connectivity matrix.
- Returns:
Node strength (number_of_nodes, number_of_nodes -2)
Node degree (number_of_nodes, number_of_nodes -2)
Global efficiency (number_of_nodes, )
Size of the largest component (number_of_nodes, )
References: Alstott et al. (2009).
Author: Paula Sanz Leon
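The lesioning loop can be sketched as follows; a simplified illustration that tracks only mean node strength, whereas the real function also records degree, global efficiency and component size, and its index bookkeeping may differ:

import numpy as np

def random_deletion_sketch(weights, rng=None):
    # weights: (nor, nor) connectivity matrix; one node removed per step down to 2 nodes
    rng = rng or np.random.default_rng()
    w = weights.copy()
    mean_strength = []
    while w.shape[0] > 2:
        node = rng.integers(w.shape[0])                # random node in the current matrix
        w = np.delete(np.delete(w, node, axis=0), node, axis=1)
        mean_strength.append(w.sum(axis=1).mean())     # strength after this lesion
    return np.array(mean_strength)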
- tvb.analyzers.graph.sequential_targeted_deletion(white_matter, nor)[source]¶
A strategy to lesion a connectivity matrix.
A single node is removed at each step until the network is reduced to only 2 nodes. At each step different graph metrics are computed (degree, strength and betweenness centrality). The single node with the highest degree, strength or centrality is removed.
- Parameters:
white_matter – tvb Connectivity datatype (yes, it's an example for TVB!); a connectivity datatype that has a 'weights' attribute.
nor – number of nodes of the original connectivity matrix.
- Returns:
Node strength (number_of_nodes, number_of_nodes -2) array
Node degree (number_of_nodes, number_of_nodes -2) array
Betweenness centrality (number_of_nodes, number_of_nodes -2) array
Global efficiency (number_of_nodes, 3) array
Size of the largest component (number_of_nodes, 3) array
See also: sequential_random_deletion, localized_area_deletion
References: Alstott et al. (2009).
Author: Paula Sanz Leon
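Relative to the random-deletion sketch above, only the node-selection step changes; a hypothetical helper showing the idea:

import numpy as np

def pick_target_node(w):
    # choose the node to lesion in the targeted variant: the current hub
    return int(np.argmax(w.sum(axis=1)))   # highest-strength node; degree or betweenness work analogously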
ica¶
Perform Independent Component Analysis on a TimeSeries object and return an IndependentComponents datatype.
- tvb.analyzers.ica.compute_ica_decomposition(time_series, n_components)[source]¶
# type: (TimeSeries, int) -> IndependentComponents Run FastICA on the given time series data.
Parameters¶
time_series : TimeSeries The timeseries to which the ICA is to be applied.
n_components : int Number of independent components to unmix.
ica_algorithm¶
- tvb.analyzers.ica_algorithm.fastica(X, n_components=None, algorithm='parallel', whiten=True, fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, return_X_mean=False, compute_sources=True, return_n_iter=False)[source]¶
Perform Fast Independent Component Analysis.
Read more in the User Guide.
Parameters¶
- X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and n_features is the number of features.
- n_components : int, optional
Number of components to extract. If None no dimension reduction is performed.
- algorithm : {'parallel', 'deflation'}, optional
Apply a parallel or deflational FASTICA algorithm.
- whiten : boolean, optional
If True perform an initial whitening of the data. If False, the data is assumed to have already been preprocessed: it should be centered, normed and white. Otherwise you will get incorrect results. In this case the parameter n_components will be ignored.
- fun : string or function, optional. Default: 'logcosh'
The functional form of the G function used in the approximation to neg-entropy. Could be either 'logcosh', 'exp', or 'cube'. You can also provide your own function. It should return a tuple containing the value of the function, and of its derivative, at the point. The derivative should be averaged along its last dimension. Example:
def my_g(x):
    return x ** 3, np.mean(3 * x ** 2, axis=-1)
- fun_args : dictionary, optional
Arguments to send to the functional form. If empty or None and if fun=’logcosh’, fun_args will take value {‘alpha’ : 1.0}
- max_iter : int, optional
Maximum number of iterations to perform.
- tol : float, optional
A positive scalar giving the tolerance at which the un-mixing matrix is considered to have converged.
- w_init : (n_components, n_components) array, optional
Initial un-mixing array of dimension (n.comp,n.comp). If None (default) then an array of normal r.v.’s is used.
- random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
- return_X_mean : bool, optional
If True, X_mean is returned too.
- compute_sources : bool, optional
If False, sources are not computed, but only the rotation matrix. This can save memory when working with big data. Defaults to True.
- return_n_iter : bool, optional
Whether or not to return the number of iterations.
Returns¶
- K : array, shape (n_components, n_features) | None
If whiten is ‘True’, K is the pre-whitening matrix that projects data onto the first n_components principal components. If whiten is ‘False’, K is ‘None’.
- W : array, shape (n_components, n_components)
Estimated un-mixing matrix. The mixing matrix can be obtained by:
w = np.dot(W, K.T)
A = w.T * (w * w.T).I
- S : array, shape (n_samples, n_components) | None
Estimated source matrix
- X_mean : array, shape (n_features, )
The mean over features. Returned only if return_X_mean is True.
- n_iter : int
If the algorithm is 'deflation', n_iter is the maximum number of iterations run across all components. Else it is just the number of iterations taken to converge. This is returned only when return_n_iter is set to True.
Notes¶
The data matrix X is considered to be a linear combination of non-Gaussian (independent) components, i.e. X = AS where columns of S contain the independent components and A is a linear mixing matrix. In short, ICA attempts to 'un-mix' the data by estimating an un-mixing matrix W where S = W K X.
This implementation was originally made for data of shape [n_features, n_samples]. Now the input is transposed before the algorithm is applied. This makes it slightly faster for Fortran-ordered input.
Implemented using FastICA: A. Hyvarinen and E. Oja, Independent Component Analysis: Algorithms and Applications, Neural Networks, 13(4-5), 2000, pp. 411-430
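A usage sketch based on the signature documented above; with the default flags, three values are returned:

import numpy as np
from tvb.analyzers.ica_algorithm import fastica

rng = np.random.RandomState(0)
S = rng.laplace(size=(1000, 3))       # non-Gaussian sources
A = rng.rand(3, 3)                    # mixing matrix
X = np.dot(S, A.T)                    # observations, shape (n_samples, n_features)
K, W, S_est = fastica(X, n_components=3, random_state=0)
# S_est recovers S up to permutation, sign and scaling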
info¶
This module implements information theoretic analyses.
TODO: Fix docstring of sampen
TODO: Convert sampen to a traited class
TODO: Fix compatibility with Python 3 and recent numpy
- tvb.analyzers.info.sampen(y, m=2, r=None, qse=False, taus=1, info=False, tile=<function tile>, na=None, abs=<ufunc 'absolute'>, log=<ufunc 'log'>, r_=<numpy.lib.index_tricks.RClass object>)[source]¶
Computes (quadratic) sample entropy of a given input signal y, with embedding dimension m, and a match tolerance of r (ref 2). If an array of scale factors, taus, is given, the signal will be coarsened by each factor and a corresponding entropy will be computed (ref 1). If no value for r is given, it will be set to 0.15*y.std().
Currently, the implementation is lazy and expects or coerces scale factors to integer values.
With qse=True, the probability p is normalized for the value of r, giving the quadratic sample entropy, such that results from different values of r can be meaningfully compared (ref 2).
ref 1: Costa, M., Goldberger, A. L., and Peng, C.-K. (2002) Multiscale Entropy Analysis of Complex Physiologic Time Series. Phys Rev Lett 89 (6).
ref 2: Lake, D. E. and Moorman, J. R. (2010) Accurate estimation of entropy in very short physiological time series. Am J Physiol Heart Circ Physiol.
To check that the algorithm is working, look at ref 1, fig 1, and run
>>> sampen(numpy.random.randn(3*10000), r=.15, taus=numpy.r_[1:20], qse=False, m=2)
metric_kuramoto_index¶
Filler analyzer: Takes a TimeSeries object and returns a Float.
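The algorithm is not documented here; as orientation, the Kuramoto order parameter such an index is conventionally built on can be sketched as follows (our assumption, not a description of the TVB internals):

import numpy as np
from scipy.signal import hilbert

def kuramoto_index_sketch(ts):
    # ts: (time, nodes) narrow-band signal; phases via the analytic signal
    theta = np.angle(hilbert(ts, axis=0))
    r = np.abs(np.exp(1j * theta).mean(axis=1))   # instantaneous phase coherence in [0, 1]
    return float(r.mean())                        # time-averaged order parameter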
metric_proxy_metastability¶
Filler analyzer: Takes a TimeSeries object and returns two Floats.
These metrics are described and used in:
Hellyer et al. The Control of Global Brain Dynamics: Opposing Actions of Frontoparietal Control and Default Mode Networks on Attention. The Journal of Neuroscience, January 8, 2014, 34(2):451-461.
Proxy of spatial coherence (V):
Proxy metastability (M): the variability in spatial coherence of the signal globally or locally (within a network) over time.
Proxy synchrony (S): the reciprocal of mean spatial variance across time.
- tvb.analyzers.metric_proxy_metastability.compute_proxy_metastability_metric(params)[source]¶
# type: dict(TimeSeries, float, int) -> (float, float) Compute the metastability (M) and synchrony (S) proxies for the time_series.
Parameters¶
- paramsa dictionary containing
time_series : TimeSeries Input time series for which the metric will be computed.
start_point : float Determines how many points of the TimeSeries will be discarded before computing the metric.
segment : int Divides the input time-series into discrete, equally sized sequences and uses the last segment to compute the metric. Only used when the start point is larger than the time-series length.
metric_variance_global¶
Filler analyzer: Takes a TimeSeries object and returns a Float.
- tvb.analyzers.metric_variance_global.compute_variance_global_metric(params)[source]¶
# type: dict(TimeSeries, float, int) -> float Compute the zero centered global variance of the time_series.
Parameters¶
- paramsa dictionary containing
time_series : TimeSeries Input time series for which the metric will be computed.
start_point : float Determines how many points of the TimeSeries will be discarded before computing the metric.
segment : int Divides the input time-series into discrete, equally sized sequences and uses the last segment to compute the metric. Only used when the start point is larger than the time-series length.
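Ignoring the start_point and segment handling, the metric reduces to a couple of NumPy lines; a sketch of one plausible reading of "zero centered global variance", with assumed array conventions:

import numpy as np

def variance_global_sketch(ts):
    # ts: (time, nodes) array; zero-center each node over time, then take one global variance
    zero_centered = ts - ts.mean(axis=0)
    return float(zero_centered.var())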
metric_variance_of_node_variance¶
Filler analyzer: Takes a TimeSeries object and returns a Float.
- tvb.analyzers.metric_variance_of_node_variance.compute_variance_of_node_variance_metric(params)[source]¶
# type: dict(TimeSeries, float, int) -> float Compute the zero centered variance of node variances for the time_series.
Parameters¶
- paramsa dictionary containing
time_series : TimeSeries Input time series for which the metric will be computed.
start_point : float Determines how many points of the TimeSeries will be discarded before computing the metric.
segment : int Divides the input time-series into discrete, equally sized sequences and uses the last segment to compute the metric. Only used when the start point is larger than the time-series length.
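The node-variance flavour, under the same caveats as the sketch in metric_variance_global above:

import numpy as np

def variance_of_node_variance_sketch(ts):
    # ts: (time, nodes); variance across nodes of the per-node temporal variances
    zero_centered = ts - ts.mean(axis=0)
    return float(zero_centered.var(axis=0).var())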
node_coherence¶
Compute cross coherence between all nodes in a time series.
- tvb.analyzers.node_coherence.calculate_cross_coherence(time_series, nfft)[source]¶
# type: (TimeSeries, int) -> CoherenceSpectrum Adapter for cross-coherence algorithm(s); evaluates coherence on a time series.
Parameters¶
time_series : TimeSeries The TimeSeries to which the Cross Coherence is to be applied.
nfft : int Data-points per block (should be a power of 2).
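A pairwise sketch built on scipy.signal.coherence; the TVB adapter may use a different CSD backend, so this only illustrates the quantity being computed:

import numpy as np
from scipy.signal import coherence

def cross_coherence_sketch(data, fs, nfft=256):
    # data: (time, nodes); returns frequencies and a (freq, node, node) coherence array
    n_nodes = data.shape[1]
    freqs, _ = coherence(data[:, 0], data[:, 0], fs=fs, nperseg=nfft)
    out = np.empty((freqs.size, n_nodes, n_nodes))
    for i in range(n_nodes):
        for j in range(n_nodes):
            _, out[:, i, j] = coherence(data[:, i], data[:, j], fs=fs, nperseg=nfft)
    return freqs, out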
node_complex_coherence¶
Calculate the cross spectrum and complex coherence on a TimeSeries datatype and return a ComplexCoherence datatype.
- tvb.analyzers.node_complex_coherence.calculate_complex_cross_coherence(time_series, epoch_length, segment_length, segment_shift, window_function, average_segments, subtract_epoch_average, zeropad, detrend_ts, max_freq, npat)[source]¶
# type: (TimeSeries, float, float, float, str, bool, bool, int, bool, float, float) -> ComplexCoherenceSpectrum Calculate the FFT, Cross Coherence and Complex Coherence of time_series broken into (possibly) epochs and segments of length epoch_length and segment_length respectively, filtered by window_function.
Parameters¶
time_series : TimeSeries The timeseries for which the CrossCoherence and ComplexCoherence is to be computed.
epoch_length : float In general for lengthy EEG recordings (~30 min), the timeseries are divided into equally sized segments (~ 20-40s). These contain the event that is to be characterized by means of the cross coherence. Additionally each epoch block will be further divided into segments to which the FFT will be applied.
segment_length : float The segment length determines the frequency resolution of the resulting power spectra – longer windows produce finer frequency resolution.
segment_shift : float Time length by which neighboring segments are shifted. e.g. segment shift = segment_length / 2 means 50% overlapping segments.
window_function : str Windowing functions can be applied before the FFT is performed.
average_segments : bool Flag. If True, compute the mean Cross Spectrum across segments.
subtract_epoch_average : bool Flag. If True and if the number of epochs is > 1, the mean across epochs is subtracted before computing the complex coherence.
zeropad : int Adds n zeros at the end of each segment and at the end of window_function. It is not yet functional.
detrend_ts : bool Flag. If True removes linear trend along the time dimension before applying FFT.
max_freq : float Maximum frequency points (e.g. 32., 64., 128.) represented in the output. Default is segment_length / 2 + 1.
npat : float This attribute appears to be related to an input projection matrix, which is not yet implemented.
- tvb.analyzers.node_complex_coherence.complex_coherence_result_shape(input_shape, max_freq, epoch_length, segment_length, segment_shift, sample_period, zeropad, average_segments)[source]¶
Returns the shape of the main result and the average over epochs
- tvb.analyzers.node_complex_coherence.log = <Logger tvb.analyzers.node_complex_coherence (INFO)>[source]¶
A module for calculating the FFT of a TimeSeries and returning a ComplexCoherenceSpectrum datatype.
- This algorithm is based on the matlab function data2cs_event.m written by Guido Nolte:
- [Freyer_2012]
Freyer, F.; Reinacher, M.; Nolte, G.; Dinse, H. R. and Ritter, P. Repetitive tactile stimulation changes resting-state functional connectivity-implications for treatment of sensorimotor decline. Front Hum Neurosci, Bernstein Focus State Dependencies of Learning and Bernstein Center for Computational Neuroscience Berlin, Germany., 2012, 6, 144
Input: originally the input could be 2D (tpts x nodes/channels), and it was possible to give a 3D array (e.g., tpts x nodes/channels x trials) via the segment_length attribute. The current TVB implementation can handle 4D or 2D TimeSeries datatypes. Be warned: 4D TimeSeries will be averaged and squeezed.
Output (main arrays): the cross-spectrum, and the complex coherence, from which the imaginary part can be extracted.
By default the time series is segmented into 1 second epoch blocks and 0.5 second 50% overlapping segments to which a Hanning function is applied.
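The core cross-spectrum and coherency step can be sketched as follows; windowing, epoching and segment shifting are omitted, and this is an illustration rather than a port of data2cs_event.m:

import numpy as np

def complex_coherence_sketch(segments):
    # segments: (n_segments, time, nodes) pre-windowed segment stack
    F = np.fft.rfft(segments, axis=1)                               # per-segment spectra
    CS = np.einsum('stm,stn->tmn', F, np.conj(F)) / len(segments)   # mean cross-spectrum
    amp = np.sqrt(np.einsum('tmm->tm', CS).real)                    # auto-spectral amplitudes
    coh = CS / (amp[:, :, None] * amp[:, None, :])                  # complex coherency
    return CS, coh                                                  # use coh.imag for the imaginary part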
pca¶
Perform Principal Component Analysis (PCA) on a TimeSeries datatype and return a PrincipalComponents datatype.
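No function signature is documented for this module; as orientation, the standard SVD-based computation it presumably wraps looks like this (a sketch, not the TVB API):

import numpy as np

def pca_sketch(ts):
    # ts: (time, nodes); returns component weights and the fraction of variance each explains
    zero_centered = ts - ts.mean(axis=0)
    U, s, Vt = np.linalg.svd(zero_centered, full_matrices=False)
    fractions = s ** 2 / np.sum(s ** 2)
    return Vt, fractions          # rows of Vt are the principal component weights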
wavelet¶
Calculate a wavelet transform on a TimeSeries datatype and return a WaveletSpectrum datatype.
- tvb.analyzers.wavelet.compute_continuous_wavelet_transform(time_series, frequencies, sample_period, q_ratio, normalisation, mother)[source]¶
# type: (TimeSeries, Range, float, float, str, str) -> WaveletCoefficients Calculate the continuous wavelet transform of time_series.
Parameters¶
time_series : TimeSeries The timeseries to which the wavelet is to be applied.
frequencies : Range The frequency resolution and range returned. Requested frequencies are expected to be in kHz.
sample_period : float The sampling period in ms of the computed wavelet spectrum.
q_ratio : float NFC. Must be greater than 5. Ratio of the center frequencies to bandwidths.
normalisation : str The type of normalisation for the resulting wavelet spectrum. Default is 'energy'; options are 'energy' and 'gabor'.
mother : str The mother wavelet function used in the transform.
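A hand-rolled Morlet sketch consistent with the parameter conventions above (frequencies in kHz, sample period in ms, so their product is dimensionless); the analyzer's exact normalisation and edge handling may differ:

import numpy as np

def morlet_cwt_sketch(x, freqs_khz, sample_period_ms, q_ratio=5.0):
    # x: 1-D signal sampled every sample_period_ms; returns (n_freqs, n_times) coefficients
    coefs = []
    for f in freqs_khz:
        sigma_t = q_ratio / (2.0 * np.pi * f)                    # temporal width from the Q ratio
        t = np.arange(-5.0 * sigma_t, 5.0 * sigma_t, sample_period_ms)
        wavelet = np.exp(2j * np.pi * f * t - t ** 2 / (2.0 * sigma_t ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))         # 'energy' normalisation
        coefs.append(np.convolve(x, wavelet, mode='same'))
    return np.array(coefs)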
- tvb.analyzers.wavelet.log = <Logger tvb.analyzers.wavelet (INFO)>[source]¶
A module for calculating the wavelet transform of a TimeSeries object of TVB and returning a WaveletSpectrum object. The sampling period and frequency range of the result can be specified. The mother wavelet can also be specified… (So far, only Morlet.)
- References:
- [TBetal_1996]
C. Tallon-Baudry et al, Stimulus Specificity of Phase-Locked and Non-Phase-Locked 40 Hz Visual Responses in Human., J Neurosci 16(13):4240-4249, 1996.
[Mallat_1999] S. Mallat, A wavelet tour of signal processing, Academic Press, 1999.