
Data Analysis, Statistics and Probability

New submissions

[ total of 8 entries: 1-8 ]

New submissions for Tue, 20 Mar 18

[1]  arXiv:1803.06638 [pdf, ps, other]
Title: Adaptive prior probabilities via optimization of risk and entropy
Comments: 15 pages, 3 figures
Subjects: Data Analysis, Statistics and Probability (physics.data-an); Statistical Mechanics (cond-mat.stat-mech); Computer Science and Game Theory (cs.GT)

An agent choosing between various actions tends to take the one with the lowest loss. But this choice is arguably too rigid (not adaptive) to be useful in complex situations, e.g. where the exploration-exploitation trade-off is relevant or in creative task solving. Here we study an agent that -- given a certain average utility invested into adaptation -- chooses his actions via probabilities obtained by optimizing the entropy. As we argue, entropy minimization corresponds to a risk-averse agent, whereas a risk-seeking agent maximizes the entropy. Entropy minimization can (under certain conditions) recover the epsilon-greedy probabilities known in reinforcement learning. We show that entropy minimization -- in contrast to its maximization -- leads to rudimentary forms of intelligent behavior: (i) the agent accounts for extreme events, especially when he has not invested much into adaptation; (ii) confronted with two actions of comparable losses, he chooses the one with the lesser loss (the lesser of two evils); (iii) the agent is subject to effects similar to cognitive dissonance and frustration. None of these features are shown by the risk-seeking agent, whose probabilities are given by the maximum entropy. Mathematically, the difference between entropy maximization and entropy minimization corresponds to maximizing a convex function (in a convex domain, i.e. convex programming) versus minimizing it (concave programming).
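
A toy numerical illustration of this constrained-entropy choice rule (a sketch, not the authors' algorithm): extremize the Shannon entropy over a three-action simplex subject to a bound on the expected loss, read here as the budget invested into adaptation. The per-action losses and the budget value are made-up assumptions.

# Brute-force sketch: minimum- vs maximum-entropy action probabilities
# under an expected-loss constraint (illustrative values only).
import numpy as np

losses = np.array([1.0, 1.2, 3.0])   # hypothetical per-action losses
budget = 1.5                          # allowed expected loss

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

best_min, best_max = None, None
grid = np.linspace(0.0, 1.0, 201)
for p1 in grid:
    for p2 in grid:
        p3 = 1.0 - p1 - p2
        if p3 < 0:
            continue
        p = np.array([p1, p2, p3])
        if p @ losses > budget:       # infeasible: exceeds the loss budget
            continue
        h = entropy(p)
        if best_min is None or h < best_min[0]:
            best_min = (h, p)
        if best_max is None or h > best_max[0]:
            best_max = (h, p)

print("risk-averse (min entropy):", best_min[1])   # concentrates on few actions
print("risk-seeking (max entropy):", best_max[1])  # spreads probability out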

Cross-lists for Tue, 20 Mar 18

[2]  arXiv:1803.06441 (cross-list from eess.SP) [pdf, ps, other]
Title: A Novel Blaschke Unwinding Adaptive Fourier Decomposition based Signal Compression Algorithm with Application on ECG Signals
Subjects: Signal Processing (eess.SP); Data Analysis, Statistics and Probability (physics.data-an); Machine Learning (stat.ML)

This paper presents a novel signal compression algorithm based on the Blaschke unwinding adaptive Fourier decomposition (AFD). The Blaschke unwinding AFD is a newly developed signal decomposition theory that utilizes the Nevanlinna factorization and the maximal selection principle in each decomposition step, achieving a faster convergence rate with higher fidelity. The proposed compression algorithm is applied to the electrocardiogram (ECG) signal. To assess its performance, in addition to the generic assessment criteria, we consider less discussed criteria related to clinical needs: for the purpose of heart rate variability analysis, we evaluate how accurately the R peak information is preserved. The experiments are conducted on the MIT-BIH arrhythmia benchmark database. The results show that the proposed algorithm performs better than other state-of-the-art approaches while also preserving the R peak information well.
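
The decomposition itself is not reproduced here, but the kind of assessment described above can be sketched. Below is a hedged example of generic compression metrics (compression ratio and percentage RMS difference) plus a naive R-peak timing comparison of the sort relevant to heart rate variability analysis; the threshold-based peak detector is a simple placeholder, not the evaluation pipeline used in the paper.

# Generic ECG compression assessment metrics (illustrative sketch).
import numpy as np

def compression_ratio(n_original_samples, n_stored_coefficients):
    return n_original_samples / n_stored_coefficients

def prd(original, reconstructed):
    """Percentage root-mean-square difference (basic, non-mean-removed form)."""
    original, reconstructed = np.asarray(original), np.asarray(reconstructed)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def naive_r_peaks(signal, threshold, min_distance):
    """Indices of local maxima above threshold, at least min_distance apart."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i-1] and signal[i] > signal[i+1]:
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    return np.array(peaks)

def mean_peak_shift(peaks_original, peaks_reconstructed):
    """Mean absolute timing shift (in samples) between matched peak lists."""
    n = min(len(peaks_original), len(peaks_reconstructed))
    return np.mean(np.abs(peaks_original[:n] - peaks_reconstructed[:n]))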

[3]  arXiv:1803.06473 (cross-list from astro-ph.IM) [pdf, other]
Title: Variational Inference as an alternative to MCMC for parameter estimation and model selection
Comments: 12 pages, 3 figures
Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Data Analysis, Statistics and Probability (physics.data-an)

Many problems in astrophysics involve Bayesian inference for parameter estimation and model selection. In this paper, we introduce Variational Inference to solve these problems and compare its results with those of Markov Chain Monte Carlo (MCMC), the most commonly used method. Variational Inference converts the inference problem into an optimization problem by approximating the posterior with a member of a known family of distributions, using the Kullback-Leibler divergence to measure closeness. Variational Inference takes advantage of fast optimization techniques, which makes it well suited to large datasets and trivial to parallelize. As a proof of principle, we apply Variational Inference for parameter estimation and model comparison to four astrophysical problems where MCMC techniques were previously used: measuring exoplanet orbital parameters from radial velocity data, testing for periodicities in measurements of $G$, assessing the significance of a turnover in the spectral lag data of GRB 160625B, and estimating the mass of a galaxy cluster using weak lensing. We find that Variational Inference is much faster than MCMC for these problems.
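
A minimal, self-contained illustration of the idea (not the paper's implementation): fit a Gaussian approximation $q(\theta)=N(\mu,\sigma^2)$ to a toy unnormalized posterior by maximizing a Monte Carlo estimate of the evidence lower bound, which is equivalent to minimizing the Kullback-Leibler divergence up to a constant. The toy target and the grid search over $(\mu,\sigma)$ are illustrative simplifications of the gradient-based optimizers used in practice.

# Variational inference sketch: Gaussian approximation via ELBO maximization.
import numpy as np

rng = np.random.default_rng(0)

def log_unnorm_posterior(theta):
    # Toy target: Gaussian with mean 2.0 and standard deviation 0.5.
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def elbo(mu, sigma, eps):
    theta = mu + sigma * eps                      # reparameterized samples
    entropy = 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)
    return np.mean(log_unnorm_posterior(theta)) + entropy

eps = rng.standard_normal(2000)                   # common random numbers
mus = np.linspace(0.0, 4.0, 81)
sigmas = np.linspace(0.05, 2.0, 80)
best = max((elbo(m, s, eps), m, s) for m in mus for s in sigmas)
print("ELBO-optimal approximation: mu=%.2f sigma=%.2f" % (best[1], best[2]))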

[4]  arXiv:1803.06722 (cross-list from quant-ph) [pdf, ps, other]
Title: Excluding joint probabilities from quantum theory
Comments: 5 pages, no figures, Rapid Communication
Journal-ref: Phys. Rev. A 97, 030102(R) (2018)
Subjects: Quantum Physics (quant-ph); Statistical Mechanics (cond-mat.stat-mech); Data Analysis, Statistics and Probability (physics.data-an)

Quantum theory does not provide a unique definition for the joint probability of two non-commuting observables, which is the next important question after Born's probability for a single observable. Instead, various definitions have been suggested, e.g. via quasi-probabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are non-contextual and are consistent with all constraints expected from a quantum probability. We study two non-commuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies to every quantum state and is consistent with imprecise probabilities. This contrasts with the theorems of Bell and Kochen-Specker, which exclude joint probabilities for more than two non-commuting observables in Hilbert spaces of dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.
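
One textbook example of this ambiguity (not necessarily the construction analyzed in the paper) is the Kirkwood-Dirac quasi-probability for observables with eigenbases $\{|a\rangle\}$ and $\{|b\rangle\}$ in a state $\rho$: it reproduces the Born marginals, but it can be negative or complex when the observables do not commute,

$$P_{\mathrm{KD}}(a,b)=\mathrm{Tr}\big[\,|b\rangle\langle b|\,|a\rangle\langle a|\,\rho\,\big]=\langle b|a\rangle\langle a|\rho|b\rangle,\qquad \sum_b P_{\mathrm{KD}}(a,b)=\langle a|\rho|a\rangle,\quad \sum_a P_{\mathrm{KD}}(a,b)=\langle b|\rho|b\rangle .$$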

[5]  arXiv:1803.06915 (cross-list from math.CO) [pdf, other]
Title: Exploiting symmetry in network analysis
Comments: Main Text (7 pages) plus Supplementary Information (24 pages)
Subjects: Combinatorics (math.CO); Social and Information Networks (cs.SI); Data Analysis, Statistics and Probability (physics.data-an); Physics and Society (physics.soc-ph)

Virtually all network analyses involve structural measures or metrics between pairs of vertices, or of the vertices themselves. The large amount of redundancy present in real-world networks is inherited by such measures, and this has practical consequences that have not yet been explored in full generality, nor systematically exploited by network practitioners. Here we develop a complete framework to study and quantify the effect of redundancy on arbitrary network measures, and we explain how to exploit redundancy in practice, achieving, for instance, remarkable lossless compression and computational reduction ratios in several real-world networks for some popular measures.
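
A minimal illustration of the exploitation step (a sketch under simple assumptions, not the authors' general framework): in a simple graph, two vertices with identical neighbourhoods are exchanged by an automorphism, so any automorphism-invariant vertex measure is constant on such a redundancy class and needs to be computed only once per class.

# Compute closeness centrality once per redundancy class and copy it.
import networkx as nx

def grouped_by_neighbourhood(G):
    """Group vertices of a simple graph by their neighbour set."""
    classes = {}
    for v in G:
        classes.setdefault(frozenset(G[v]), []).append(v)
    return list(classes.values())

def closeness_with_redundancy(G):
    values = {}
    for cls in grouped_by_neighbourhood(G):
        rep_value = nx.closeness_centrality(G, u=cls[0])  # one computation per class
        for v in cls:
            # valid because swapping two vertices with identical neighbourhoods
            # is a graph automorphism, and closeness is automorphism-invariant
            values[v] = rep_value
    return values

G = nx.star_graph(1000)   # hub plus 1000 structurally equivalent leaves
print(len(grouped_by_neighbourhood(G)), "redundancy classes for",
      G.number_of_nodes(), "vertices")
print(closeness_with_redundancy(G)[1])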

[6]  arXiv:1803.06918 (cross-list from math.DS) [pdf, other]
Title: Correcting Observation Model Error in Data Assimilation
Subjects: Dynamical Systems (math.DS); Data Analysis, Statistics and Probability (physics.data-an)

Standard methods of data assimilation assume prior knowledge of a model that describes the system dynamics and an observation function that maps the model state to a predicted output. An accurate mapping from model state to observation space is crucial in filtering schemes when adjusting the estimate of the system state during the filter's analysis step. However, in many applications the true observation function may be unknown and the available observation model may have significant errors, resulting in a suboptimal state estimate. We propose a method for observation model error correction within the filtering framework. The procedure involves an alternating minimization algorithm used to iteratively update a given observation function to increase consistency with the model and prior observations, using ideas from attractor reconstruction. The method is demonstrated on the Lorenz 1963 and Lorenz 1996 models, and on a single-column radiative transfer model with multicloud parameterization.
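
The correction procedure itself is not reproduced here, but the setting it addresses can be sketched: a stochastic ensemble Kalman filter analysis step on the Lorenz 1963 model in which the assumed observation function differs from the true one, so the analysis inherits the observation model error. The true and assumed observation maps, noise level, and ensemble size below are illustrative assumptions.

# Setting sketch: Lorenz 63 forecast plus an EnKF analysis step with an
# erroneous observation model (not the paper's alternating-minimization fix).
import numpy as np

rng = np.random.default_rng(1)

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx            # forward Euler, adequate for illustration

h_true    = lambda x: x[0] + 0.5 * x[1]   # hypothetical true observation map
h_assumed = lambda x: x[0]                # filter's (erroneous) observation model
R = 0.5                                    # observation error variance

def enkf_analysis(ensemble, y_obs):
    """Stochastic EnKF update of an (N, 3) ensemble with a scalar observation."""
    Y = np.array([h_assumed(x) for x in ensemble])           # predicted observations
    X_mean, Y_mean = ensemble.mean(axis=0), Y.mean()
    Pxy = (ensemble - X_mean).T @ (Y - Y_mean) / (len(ensemble) - 1)
    Pyy = np.var(Y, ddof=1) + R
    K = Pxy / Pyy                                             # Kalman gain
    y_pert = y_obs + np.sqrt(R) * rng.standard_normal(len(ensemble))
    return ensemble + np.outer(y_pert - Y, K)

# One forecast/analysis cycle from a spun-up truth and a perturbed ensemble.
truth = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    truth = lorenz63(truth)
ensemble = truth + rng.standard_normal((20, 3))
ensemble = np.array([lorenz63(x) for x in ensemble])
truth = lorenz63(truth)
y = h_true(truth) + np.sqrt(R) * rng.standard_normal()
ensemble = enkf_analysis(ensemble, y)
print("analysis mean:", ensemble.mean(axis=0), "truth:", truth)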

Replacements for Tue, 20 Mar 18

[7]  arXiv:1801.03726 (replaced) [pdf, other]
Title: Large deviation theory for diluted Wishart random matrices
Comments: 10 pages, 6 figures
Journal-ref: Phys. Rev. E 97, 032124 (2018)
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Statistical Mechanics (cond-mat.stat-mech); Data Analysis, Statistics and Probability (physics.data-an)
[8]  arXiv:1802.08901 (replaced) [pdf, ps, other]
Title: A quasi-physical dynamic reduced order model for thermospheric mass density via Hermitian Space Dynamic Mode Decomposition
Subjects: Space Physics (physics.space-ph); Atmospheric and Oceanic Physics (physics.ao-ph); Data Analysis, Statistics and Probability (physics.data-an)
