| id (string, 9–16 chars) | submitter (string, 2–51 chars, ⌀ = may be null) | title (string, 5–243 chars) | categories (string, 5–69 chars) | abstract (string, 23–3.66k chars) | labels (string, 5–184 chars) | domain (string, 9 classes) |
---|---|---|---|---|---|---|
2311.18274 | Thomas Cook | Semiparametric Efficient Inference in Adaptive Experiments | stat.ML cs.LG stat.ME | We consider the problem of efficient inference of the Average Treatment
Effect in a sequential experiment where the policy governing the assignment of
subjects to treatment or control can change over time. We first provide a
central limit theorem for the Adaptive Augmented Inverse-Probability Weighted
estimator, which is semiparametric efficient, under weaker assumptions than
those previously made in the literature. This central limit theorem enables
efficient inference at fixed sample sizes. We then consider a sequential
inference setting, deriving both asymptotic and nonasymptotic confidence
sequences that are considerably tighter than previous methods. These
anytime-valid methods enable inference under data-dependent stopping times
(sample sizes). Additionally, we use propensity score truncation techniques
from the recent off-policy estimation literature to reduce the finite sample
variance of our estimator without affecting the asymptotic variance. Empirical
results demonstrate that our methods yield narrower confidence sequences than
those previously developed in the literature while maintaining time-uniform
error control.
| Machine Learning, Machine Learning, Methodology | Statistics |
2103.05092 | Larry Wasserman | Forest Guided Smoothing | stat.ML cs.LG stat.ME | We use the output of a random forest to define a family of local smoothers
with spatially adaptive bandwidth matrices. The smoother inherits the
flexibility of the original forest but, since it is a simple, linear smoother,
it is very interpretable and it can be used for tasks that would be intractable
for the original forest. This includes bias correction, confidence intervals,
assessing variable importance and methods for exploring the structure of the
forest. We illustrate the method on some synthetic examples and on data related
to Covid-19.
| Machine Learning, Machine Learning, Methodology | Statistics |
2405.20039 | Jiacheng Miao | Task-Agnostic Machine Learning-Assisted Inference | stat.ML cs.LG stat.ME | Machine learning (ML) is playing an increasingly important role in scientific
research. In conjunction with classical statistical approaches, ML-assisted
analytical strategies have shown great promise in accelerating research
findings. This has also opened up a whole new field of methodological research
focusing on integrative approaches that leverage both ML and statistics to
tackle data science challenges. One type of study that has quickly gained
popularity employs ML to predict unobserved outcomes in massive samples and
then uses the predicted outcomes in downstream statistical inference. However,
existing methods designed to ensure the validity of this type of
post-prediction inference are limited to very basic tasks such as linear
regression analysis. This is because any extension of these approaches to new,
more sophisticated statistical tasks requires task-specific algebraic
derivations and software implementations, which ignores the massive library of
existing software tools already developed for complex inference tasks and
severely constrains the scope of post-prediction inference in real
applications. To address this challenge, we propose a novel statistical
framework for task-agnostic ML-assisted inference. It provides a
post-prediction inference solution that can be easily plugged into almost any
established data analysis routine. It delivers valid and efficient inference
that is robust to arbitrary choices of ML models, while allowing nearly all
existing analytical frameworks to be incorporated into the analysis of
ML-predicted outcomes. Through extensive experiments, we showcase the validity,
versatility, and superiority of our method compared to existing approaches.
| Machine Learning, Machine Learning, Methodology | Statistics |
2301.02190 | Michel Van De Velden | A general framework for implementing distances for categorical variables | stat.ML cs.LG stat.ME | The degree to which subjects differ from each other with respect to certain
properties measured by a set of variables, plays an important role in many
statistical methods. For example, classification, clustering, and data
visualization methods all require a quantification of differences in the
observed values. We refer to the quantification of such differences as
distance. An appropriate definition of a distance depends on the nature of the
data and the problem at hand. For distances between numerical variables, there
exist many definitions that depend on the size of the observed differences. For
categorical data, the definition of a distance is more complex, as there is no
straightforward quantification of the size of the observed differences.
Consequently, many proposals exist that can be used to measure differences
based on categorical variables. In this paper, we introduce a general framework
that allows for an efficient and transparent implementation of distances
between observations on categorical variables. We show that several existing
distances can be incorporated into the framework. Moreover, our framework quite
naturally leads to the introduction of new distance formulations and allows for
the implementation of flexible, case- and data-specific distance definitions.
Furthermore, in a supervised classification setting, the framework can be used
to construct distances that incorporate the association between the response
and predictor variables and hence improve the performance of distance-based
classifiers.
| Machine Learning, Machine Learning, Methodology | Statistics |
1312.4479 | Jean-Baptiste Durand | Parametric Modelling of Multivariate Count Data Using Probabilistic
Graphical Models | stat.ML cs.LG stat.ME | Multivariate count data are defined as the numbers of items of different
categories obtained by sampling within a population whose individuals are
grouped into categories. The analysis of multivariate count data is a recurrent
and crucial issue in numerous modelling problems, particularly in the fields of
biology and ecology (where the data can represent, for example, children counts
associated with multitype branching processes), sociology and econometrics. We
focus on I) Identifying categories that appear simultaneously or, on the
contrary, are mutually exclusive; this is achieved by identifying
conditional independence relationships between the variables; II) Building
parsimonious parametric models consistent with these relationships; III)
Characterising and testing the effects of covariates on the joint distribution
of the counts. To achieve these goals, we propose an approach based on
graphical probabilistic models, and more specifically partially directed
acyclic graphs.
| Machine Learning, Machine Learning, Methodology | Statistics |
1805.05383 | Jeremias Knoblauch | Spatio-temporal Bayesian On-line Changepoint Detection with Model
Selection | stat.ML cs.LG stat.ME | Bayesian On-line Changepoint Detection is extended to on-line model selection
and non-stationary spatio-temporal processes. We propose spatially structured
Vector Autoregressions (VARs) for modelling the process between changepoints
(CPs) and give an upper bound on the approximation error of such models. The
resulting algorithm performs prediction, model selection and CP detection
on-line. Its time complexity is linear and its space complexity constant, and
thus it is two orders of magnitude faster than its closest competitor. In
addition, it outperforms the state of the art for multivariate data.
| Machine Learning, Machine Learning, Methodology | Statistics |
2111.04597 | Ye Tian | Neyman-Pearson Multi-class Classification via Cost-sensitive Learning | stat.ML cs.LG stat.ME | Most existing classification methods aim to minimize the overall
misclassification error rate. However, in applications such as loan default
prediction, different types of errors can have varying consequences. To address
this asymmetry issue, two popular paradigms have been developed: the
Neyman-Pearson (NP) paradigm and the cost-sensitive (CS) paradigm. Previous
studies on the NP paradigm have primarily focused on the binary case, while the
multi-class NP problem poses a greater challenge due to its unknown
feasibility. In this work, we tackle the multi-class NP problem by establishing
a connection with the CS problem via strong duality and propose two algorithms.
We extend the concept of NP oracle inequalities, crucial in binary
classifications, to NP oracle properties in the multi-class context. Our
algorithms satisfy these NP oracle properties under certain conditions.
Furthermore, we develop practical algorithms to assess the feasibility and
strong duality in multi-class NP problems, which can offer practitioners the
landscape of a multi-class NP problem with various target error levels.
Simulations and real data studies validate the effectiveness of our algorithms.
To our knowledge, this is the first study to address the multi-class NP problem
with theoretical guarantees. The proposed algorithms have been implemented in
the R package \texttt{npcs}, which is available on CRAN.
| Machine Learning, Machine Learning, Methodology | Statistics |
2402.07868 | Sahel Iqbal | Nesting Particle Filters for Experimental Design in Dynamical Systems | stat.ML cs.LG stat.ME | In this paper, we propose a novel approach to Bayesian experimental design
for non-exchangeable data that formulates it as risk-sensitive policy
optimization. We develop the Inside-Out SMC$^2$ algorithm, a nested sequential
Monte Carlo technique to infer optimal designs, and embed it into a particle
Markov chain Monte Carlo framework to perform gradient-based policy
amortization. Our approach is distinct from other amortized experimental design
techniques, as it does not rely on contrastive estimators. Numerical validation
on a set of dynamical systems showcases the efficacy of our method in
comparison to other state-of-the-art strategies.
| Machine Learning, Machine Learning, Methodology | Statistics |
2005.00466 | Mike Laszkiewicz | Thresholded Adaptive Validation: Tuning the Graphical Lasso for Graph
Recovery | stat.ML cs.LG stat.ME | Many Machine Learning algorithms are formulated as regularized optimization
problems, but their performance hinges on a regularization parameter that needs
to be calibrated to each application at hand. In this paper, we propose a
general calibration scheme for regularized optimization problems and apply it
to the graphical lasso, which is a method for Gaussian graphical modeling. The
scheme is equipped with theoretical guarantees and motivates a thresholding
pipeline that can improve graph recovery. Moreover, requiring at most one line
search over the regularization path, the calibration scheme is computationally
more efficient than competing schemes that are based on resampling. Finally, we
show in simulations that our approach can improve on the graph recovery of
other approaches considerably.
| Machine Learning, Machine Learning, Methodology | Statistics |
1908.05287 | Mohsen Shahhosseini | Optimizing Ensemble Weights and Hyperparameters of Machine Learning
Models for Regression Problems | stat.ML cs.LG stat.ME | Aggregating multiple learners through an ensemble of models aims to make
better predictions by capturing the underlying distribution of the data more
accurately. Different ensembling methods, such as bagging, boosting, and
stacking/blending, have been studied and adopted extensively in research and
practice. While bagging and boosting focus more on reducing variance and bias,
respectively, stacking approaches target both by finding the optimal way to
combine base learners. In stacking with the weighted average, ensembles are
created from weighted averages of multiple base learners. It is known that
tuning hyperparameters of each base learner inside the ensemble weight
optimization process can produce better-performing ensembles. To this end, an
optimization-based nested algorithm that considers tuning hyperparameters as
well as finding the optimal weights to combine ensembles (Generalized Weighted
Ensemble with Internally Tuned Hyperparameters (GEM-ITH)) is designed. In
addition, Bayesian search is used to speed up the optimization process, and a
heuristic is implemented to generate diverse and well-performing base learners. The
algorithm is shown to be generalizable to real data sets through analyses with
ten publicly available data sets.
| Machine Learning, Machine Learning, Methodology | Statistics |
2305.04086 | Gongbo Zhang | Efficient Learning for Selecting Top-m Context-Dependent Designs | stat.ML math.OC | We consider a simulation optimization problem for context-dependent
decision-making, which aims to determine the top-m designs for all contexts.
Under a Bayesian framework, we formulate the optimal dynamic sampling decision
as a stochastic dynamic programming problem, and develop a sequential sampling
policy to efficiently learn the performance of each design under each context.
The asymptotically optimal sampling ratios are derived to attain the optimal
large deviations rate of the worst-case probability of false selection. The
proposed sampling policy is proved to be consistent, and its sampling ratios
are asymptotically optimal. Numerical experiments demonstrate that the
proposed method improves the efficiency of selecting the top-m
context-dependent designs.
| Machine Learning, Optimization and Control | Statistics |
1203.0565 | Taiji Suzuki | Fast learning rate of multiple kernel learning: Trade-off between
sparsity and smoothness | stat.ML math.ST stat.TH | We investigate the learning rate of multiple kernel learning (MKL) with
$\ell_1$ and elastic-net regularizations. The elastic-net regularization is a
composition of an $\ell_1$-regularizer for inducing the sparsity and an
$\ell_2$-regularizer for controlling the smoothness. We focus on a sparse
setting where the total number of kernels is large, but the number of nonzero
components of the ground truth is relatively small, and show sharper
convergence rates than previously established for both $\ell_1$ and
elastic-net regularizations. Our analysis reveals some relations between the
choice of a regularization function and the performance. If the ground truth is
smooth, we show a faster convergence rate for the elastic-net regularization
under weaker conditions than $\ell_1$-regularization; otherwise, a faster
convergence rate for the $\ell_1$-regularization is shown.
| Machine Learning, Statistics Theory, Statistics Theory | Statistics |
1204.4154 | Nathan Lay | The Artificial Regression Market | stat.ML math.ST stat.TH | The Artificial Prediction Market is a recent machine learning technique for
multi-class classification, inspired by the financial markets. It involves a
number of trained market participants that bet on the possible outcomes and are
rewarded if they predict correctly. This paper generalizes the scope of the
Artificial Prediction Markets to regression, where there are uncountably many
possible outcomes and the error is usually the mean squared error (MSE). To this end, we introduce the
reward kernel that rewards each participant based on its prediction error and
we derive the price equations. Using two reward kernels we obtain two different
learning rules, one of which is approximated using Hermite-Gauss quadrature.
The market setting makes it easy to aggregate specialized regressors that only
predict when an observation falls into their specialization domain. Experiments
show that regression markets based on the two learning rules outperform Random
Forest Regression on many UCI datasets and are rarely outperformed.
| Machine Learning, Statistics Theory, Statistics Theory | Statistics |
1401.0871 | Sakellarios Zairis | Stylistic Clusters and the Syrian/South Syrian Tradition of
First-Millennium BCE Levantine Ivory Carving: A Machine Learning Approach | stat.ML stat.AP | Thousands of first-millennium BCE ivory carvings have been excavated from
Neo-Assyrian sites in Mesopotamia (primarily Nimrud, Khorsabad, and Arslan
Tash) hundreds of miles from their Levantine production contexts. At present,
their specific manufacture dates and workshop localities are unknown. Relying
on subjective, visual methods, scholars have grappled with their classification
and regional attribution for over a century. This study combines visual
approaches with machine-learning techniques to offer data-driven perspectives
on the classification and attribution of this early Iron Age corpus. The study
sample consisted of 162 sculptures of female figures. We have developed an
algorithm that clusters the ivories based on a combination of descriptive and
anthropometric data. The resulting categories, which are based on purely
statistical criteria, show good agreement with conventional art historical
classifications, while revealing new perspectives, especially with regard to
the contested Syrian/South Syrian/Intermediate tradition. Specifically, we have
identified that objects of the Syrian/South Syrian/Intermediate tradition may
be more closely related to Phoenician objects than to North Syrian objects; we
offer a reconsideration of a subset of Phoenician objects, and we confirm
Syrian/South Syrian/Intermediate stylistic subgroups that might distinguish
networks of acquisition among the sites of Nimrud, Khorsabad, Arslan Tash and
the Levant. We have also identified which features are most significant in our
cluster assignments and might thereby be most diagnostic of regional carving
traditions. In short, our study both corroborates traditional visual
classification methods and demonstrates how machine-learning techniques may be
employed to reveal complementary information not accessible through the
exclusively visual analysis of an archaeological corpus.
| Machine Learning, Applications | Statistics |
1405.5576 | Sam Davanloo | On the Theoretical Guarantees for Parameter Estimation of Gaussian
Random Field Models: A Sparse Precision Matrix Approach | stat.ML stat.CO | Iterative methods for fitting a Gaussian Random Field (GRF) model via maximum
likelihood (ML) estimation require solving a nonconvex optimization problem.
The problem is aggravated for anisotropic GRFs where the number of covariance
function parameters increases with the dimension. Even evaluation of the
likelihood function requires $O(n^3)$ floating point operations, where $n$
denotes the number of data locations. In this paper, we propose a new two-stage
procedure to estimate the parameters of second-order stationary GRFs. First, a
convex likelihood problem regularized with a weighted $\ell_1$-norm, utilizing
the available distance information between observation locations, is solved to
fit a sparse precision (inverse covariance) matrix to the observed data.
Second, the parameters of the covariance function are estimated by solving a
least squares problem. Theoretical error bounds for the solutions of stage I
and stage II problems are provided, and their tightness is investigated.
| Machine Learning, Computation | Statistics |
0901.2730 | Jun Zhu | Maximum Entropy Discrimination Markov Networks | stat.ML stat.ME | In this paper, we present a novel and general framework called {\it Maximum
Entropy Discrimination Markov Networks} (MaxEnDNet), which integrates the
max-margin structured learning and Bayesian-style estimation and combines and
extends their merits. Major innovations of this model include: 1) It
generalizes the extant Markov network prediction rule based on a point
estimator of weights to a Bayesian-style estimator that integrates over a
learned distribution of the weights. 2) It extends the conventional max-entropy
discrimination learning of classification rule to a new structural max-entropy
discrimination paradigm of learning the distribution of Markov networks. 3) It
subsumes the well-known and powerful Maximum Margin Markov network (M$^3$N) as
a special case, and leads to a model similar to an $L_1$-regularized M$^3$N
that is simultaneously primal and dual sparse, or other types of Markov network
by plugging in different prior distributions of the weights. 4) It offers a
simple inference algorithm that combines existing variational inference and
convex-optimization based M$^3$N solvers as subroutines. 5) It offers a
PAC-Bayesian style generalization bound. This work represents the first
successful attempt to combine Bayesian-style learning (based on generative
models) with structured maximum margin learning (based on a discriminative
model), and outperforms a wide array of competing methods for structured
input/output learning on both synthetic and real data sets.
| Machine Learning, Methodology | Statistics |
1802.03127 | Takayuki Kawashima | Robust and Sparse Regression in GLM by Stochastic Optimization | stat.ML stat.ME | The generalized linear model (GLM) plays a key role in regression analyses.
In high-dimensional data, the sparse GLM has been used but it is not robust
against outliers. Recently, robust methods have been proposed for specific
examples of the sparse GLM. Among them, we focus on the robust and
sparse linear regression based on the $\gamma$-divergence. The estimator of the
$\gamma$-divergence has strong robustness under heavy contamination. In this
paper, we extend the robust and sparse linear regression based on the
$\gamma$-divergence to the robust and sparse GLM based on the
$\gamma$-divergence with a stochastic optimization approach in order to obtain
the estimate. We adopt the randomized stochastic projected gradient descent as
a stochastic optimization approach and extend the established convergence
property to the classical first-order necessary condition. By virtue of the
stochastic optimization approach, we can efficiently estimate parameters for
very large problems. In particular, we detail linear regression, logistic
regression and Poisson regression with $L_1$ regularization as specific
examples of the robust and sparse GLM. In numerical experiments and real
data analysis, the proposed method outperformed comparative methods.
| Machine Learning, Methodology | Statistics |
1905.08876 | Andrew Gelman | Many perspectives on Deborah Mayo's "Statistical Inference as Severe
Testing: How to Get Beyond the Statistics Wars" | stat.OT | The new book by philosopher Deborah Mayo is relevant to data science for
topical reasons, as she takes various controversial positions regarding
hypothesis testing and statistical practice, and also as an entry point to
thinking about the philosophy of statistics. The present article is a slightly
expanded version of a series of informal reviews and comments on Mayo's book.
We hope this discussion will introduce people to Mayo's ideas along with other
perspectives on the topics she addresses.
| Other Statistics | Statistics |
1811.06980 | Antonio Irpino PhD | Batch Self Organizing maps for distributional data using adaptive
distances | stat.OT | The paper deals with a Batch Self Organizing Map algorithm (DBSOM) for data
described by distributional-valued variables. Such variables take as values
one-dimensional probability or frequency distributions on a numeric support.
The objective function optimized in the algorithm depends on the choice of the
distance measure. Given the nature of the data, the $L_2$ Wasserstein distance
is proposed as one of the
most suitable metrics to compare distributions. It is widely used in several
contexts of analysis of distributional data. Conventional batch SOM algorithms
consider that all variables are equally important for the training of the SOM.
However, it is well known that some variables are less relevant than others for
this task. In order to take into account the different contribution of the
variables we propose an adaptive version of the DBSOM algorithm that tackles
this problem with an additional step: a relevance weight is automatically
learned for each distributional-valued variable. Moreover, since the $L_2$
Wasserstein distance decomposes into two components, one related to the means
and one related to the size and shape of the distributions, relevance weights
are also automatically learned for each component to emphasize the importance
of the different estimated parameters of
the distributions. Examples of real and synthetic datasets of distributional
data illustrate the usefulness of the proposed DBSOM algorithms.
| Other Statistics | Statistics |
2007.12210 | Roger Peng | Reproducible Research: A Retrospective | stat.OT | Rapid advances in computing technology over the past few decades have spurred
two extraordinary phenomena in science: large-scale and high-throughput data
collection coupled with the creation and implementation of complex statistical
algorithms for data analysis. Together, these two phenomena have brought about
tremendous advances in scientific discovery but have also raised two serious
concerns, one relatively new and one quite familiar. The complexity of modern
data analyses raises questions about the reproducibility of the analyses,
meaning the ability of independent analysts to re-create the results claimed by
the original authors using the original data and analysis techniques. While
seemingly a straightforward concept, reproducibility of analyses is typically
thwarted by the lack of availability of the data and computer code that were
used in the analyses. A much more general concern is the replicability of
scientific findings, which concerns the frequency with which scientific claims
are confirmed by completely independent investigations. While the concepts of
reproducibility and replicability are related, it is worth noting that they are
focused on quite different goals and address different aspects of scientific
progress. In this review, we will discuss the origins of reproducible research,
characterize the current status of reproducibility in public health research,
and connect reproducibility to current concerns about replicability of
scientific findings. Finally, we describe a path forward for improving both the
reproducibility and replicability of public health research in the future.
| Other Statistics | Statistics |
1903.08880 | John Galati | Three issues impeding communication of statistical methodology for
incomplete data | stat.OT | We identify three issues permeating the literature on statistical methodology
for incomplete data written for non-specialist statisticians and other
investigators. The first is a mathematical defect in the notation
$Y_{\text{obs}}, Y_{\text{mis}}$ used to partition the data into observed and
missing components. The second is a set of issues concerning the notation
$P(R \mid Y_{\text{obs}}, Y_{\text{mis}}) = P(R \mid Y_{\text{obs}})$ used for
communicating the definition of missing at random (MAR). The third is the
framing of ignorability by emulating complete-data methods exactly, rather than
treating the question of ignorability on its own merits. These issues have been
present in the literature for a long time, and have simple remedies. The
purpose of this paper is to raise awareness of these issues, and to explain how
they can be remedied.
| Other Statistics | Statistics |
1209.4019 | Giles Hooker | Experimental design for Partially Observed Markov Decision Processes | stat.OT | This paper deals with the question of how to most effectively conduct
experiments in Partially Observed Markov Decision Processes so as to provide
data that is most informative about a parameter of interest. Methods from
Markov decision processes, especially dynamic programming, are introduced and
then used in an algorithm to maximize a relevant Fisher Information. The
algorithm is then applied to two POMDP examples. The methods developed can also
be applied to stochastic dynamical systems by suitable discretization; we
consequently show what control policies look like in the Morris-Lecar neuron
model and present simulation results. We discuss how parameter
dependence within these methods can be dealt with by the use of priors, and
develop tools to update control policies online. This is demonstrated in
another stochastic dynamical system describing growth dynamics of DNA template
in a PCR model.
| Other Statistics | Statistics |
1911.00535 | Alex Reinhart | Think-aloud interviews: A tool for exploring student statistical
reasoning | stat.OT | Think-aloud interviews have been a valuable but underused tool in statistics
education research. Think-alouds, in which students narrate their reasoning in
real time while solving problems, differ in important ways from other types of
cognitive interviews and related education research methods. Beyond the uses
already found in the statistics literature -- mostly validating the wording of
statistical concept inventory questions and studying student misconceptions --
we suggest other possible use cases for think-alouds and summarize
best-practice guidelines for designing think-aloud interview studies. Using
examples from our own experiences studying the local student body for our
introductory statistics courses, we illustrate how research goals should inform
study-design decisions and what kinds of insights think-alouds can provide. We
hope that our overview of think-alouds encourages more statistics educators and
researchers to begin using this method.
| Other Statistics | Statistics |
1905.10209 | {\L}ukasz Rajkowski | A score function for Bayesian cluster analysis | stat.OT | We propose a score function for Bayesian clustering. The function is
parameter free and captures the interplay between the within cluster variance
and the between cluster entropy of a clustering. It can be used to choose the
number of clusters in well-established clustering methods such as hierarchical
clustering or the $K$-means algorithm.
| Other Statistics | Statistics |
2401.11000 | Jing (Janet) Lin | Human-Centric and Integrative Lighting Asset Management in Public
Libraries: Qualitative Insights and Challenges from a Swedish Field Study | stat.OT | Traditional lighting source reliability evaluations, often covering just half
of a lamp's volume, can misrepresent real-world performance. To overcome these
limitations, adopting advanced asset management strategies for a more holistic
evaluation is crucial. This paper investigates human-centric and integrative
lighting asset management in Swedish public libraries. Through field
observations, interviews, and gap analysis, the study highlights a disparity
between current lighting conditions and stakeholder expectations, with issues
like eye strain suggesting significant improvement potential. We propose a
shift towards more dynamic lighting asset management and reliability
evaluations, emphasizing continuous enhancement and comprehensive training in
human-centric and integrative lighting principles.
| Other Statistics | Statistics |
2009.02099 | Yudi Pawitan | Defending the P-value | stat.OT stat.AP | Attacks on the P-value are nothing new, but the recent attacks are
increasingly more serious. They come from more mainstream sources, with
widening targets, such as a call to retire significance testing altogether.
While well-meaning, I believe these attacks are nevertheless misdirected:
Blaming the P-value for the naturally tentative trial-and-error process of
scientific discoveries, and presuming that banning the P-value would make the
process cleaner and less error-prone. However tentative, the skeptical
scientists still have to form unambiguous opinions, proximately to move forward
in their investigations and ultimately to present results to the wider
community. For obvious reasons, they constantly need to balance the
false-positive and false-negative errors. How would banning the P-value or
significance tests help in this balancing act? It seems trite to say that this
balance will always depend on the relative costs or the trade-off between the
errors. These costs are highly context specific, varying by area of
applications or by stage of investigation. A calibrated but tunable knob, such
as that given by the P-value, is needed for controlling this balance. This
paper presents detailed arguments in support of the P-value.
| Other Statistics, Applications | Statistics |
1910.06964 | Charles Gray | \texttt{code::proof}: Prepare for \emph{most} weather conditions | stat.OT stat.ME | Computational tools for data analysis are being released daily on
repositories such as the Comprehensive R Archive Network. How we integrate
these tools to solve a problem in research is increasingly complex and
requires frequent updates. To mitigate these \emph{Kafkaesque} computational
challenges in research, this manuscript proposes \emph{toolchain walkthrough},
an opinionated documentation of a scientific workflow. As a practical
complement to our proof-based argument~(Gray and Marwick, arXiv, 2019) for
reproducible data analysis, here we focus on the practicality of setting up a
reproducible research compendium, with unit tests, as a measure of
\texttt{code::proof}, confidence in computational algorithms.
| Other Statistics, Methodology | Statistics |
supr-con/9502001 | Mark Jarrell | Anomalous Normal-State Properties of High-T$_c$ Superconductors --
Intrinsic Properties of Strongly Correlated Electron Systems? | supr-con cond-mat.supr-con | A systematic study of optical and transport properties of the Hubbard model,
based on Metzner and Vollhardt's dynamical mean-field approximation, is
reviewed. This model shows interesting anomalous properties that are, in our
opinion, ubiquitous to single-band strongly correlated systems (for all spatial
dimensions greater than one), and also compare qualitatively with many
anomalous transport features of the high-T$_c$ cuprates. This anomalous
behavior of the normal-state properties is traced to a ``collective single-band
Kondo effect,'' in which a quasiparticle resonance forms at the Fermi level as
the temperature is lowered, ultimately yielding a strongly renormalized Fermi
liquid at zero temperature.
| Superconductivity | Physics |
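The table above is a rendered preview of the dataset's rows. As a minimal sketch of how such a dataset could be loaded and queried with the Hugging Face `datasets` library — the repository id `user/arxiv-abstracts-labeled` is a hypothetical placeholder, since the actual dataset path is not shown in this preview; the column names follow the header above:

```python
# Minimal sketch, assuming a hypothetical repository id; substitute the
# actual dataset path. Column names (id, submitter, title, categories,
# abstract, labels, domain) follow the preview header above.
from datasets import load_dataset

ds = load_dataset("user/arxiv-abstracts-labeled", split="train")

# Keep only records whose domain is "Statistics", mirroring most of the
# rows shown in the preview.
stats_only = ds.filter(lambda row: row["domain"] == "Statistics")

# Inspect one record.
print(stats_only[0]["id"], stats_only[0]["title"])
print(stats_only[0]["labels"])
```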