We consider dynamical systems on the space of functions taking values in a free associative algebra. The system is said to be integrable if it possesses an infinite dimensional Lie algebra of commuting symmetries. In this paper we propose a new approach to the problem of quantisation of dynamical systems, introduce the concept of quantisation ideals and provide meaningful examples.
The {\it straight-through estimator} (STE) is commonly used to optimize quantized neural networks, yet the conditions under which it performs well remain unclear despite its empirical successes. To take a step toward understanding it, we apply STE to a well-understood problem: {\it sparse support recovery}. We introduce the {\it Support Exploration Algorithm} (SEA), a novel algorithm promoting sparsity, and we analyze its performance in support recovery (a.k.a. model selection) problems. SEA explores more supports than the state-of-the-art, leading to superior performance in experiments, especially when the columns of the measurement matrix $A$ are strongly coherent. The theoretical analysis considers recovery guarantees when the linear measurement matrix $A$ satisfies the {\it Restricted Isometry Property} (RIP). The sufficient conditions for recovery are comparable to, but more stringent than, those of the state-of-the-art in sparse support recovery. Their significance lies mainly in their applicability to an instance of the STE.
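For readers unfamiliar with the estimator itself, a minimal sketch of the STE idea (hard quantization in the forward pass, identity gradient in the backward pass) is given below; this illustrates the generic STE, not the SEA algorithm, and assumes PyTorch.

```python
import torch

class SignSTE(torch.autograd.Function):
    """Binarize in the forward pass; pass the gradient straight through in the backward pass."""

    @staticmethod
    def forward(ctx, x):
        # Hard, non-differentiable quantization (here: sign).
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pretend the forward map was the identity.
        return grad_output

x = torch.randn(5, requires_grad=True)
y = SignSTE.apply(x).sum()
y.backward()
print(x.grad)  # all ones: the gradient of sign() is replaced by the identity
```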
Designing coherent processes is essential for developing quantum information technologies. We study the coherent dynamics of two spatially separated electrons in a coupled semiconductor double quantum dot (DQD), in which various two-qubit operations are demonstrated simply by adjusting the gate voltages. In particular, second-order correlated coherent oscillations provide functional quantum processes for creating quantum correlations between separated particles. The results encourage the search for quantum entanglement in electronic devices.
We exhibit a convex polynomial optimization problem for which the diagonally-dominant sum-of-squares (DSOS) and the scaled diagonally-dominant sum-of-squares (SDSOS) hierarchies, based on linear programming and second-order conic programming respectively, do not converge to the global infimum. The same goes for the r-DSOS and r-SDSOS hierarchies. This refutes the claim in the literature according to which the DSOS and SDSOS hierarchies can solve any polynomial optimization problem to arbitrary accuracy. In contrast, the Lasserre hierarchy based on semidefinite programming yields the global infimum and the global minimizer with the first order relaxation. We further observe that the dual to the SDSOS hierarchy is the moment hierarchy where every positive semidefinite constraint is relaxed to all necessary second-order conic constraints. As a result, the number of second-order conic constraints grows quadratically as a function of the size of the positive semidefinite constraints in the Lasserre hierarchy. Together with the counterexample, this suggests that DSOS and SDSOS are not necessarily more tractable alternatives to sum-of-squares.
Policy gradient methods in actor-critic reinforcement learning (RL) have become perhaps the most promising approaches to solving continuous optimal control problems. However, the trial-and-error nature of RL and the inherent randomness associated with solution approximations cause variations in the learned optimal values and policies. This has significantly hindered their successful deployment in real life applications where control responses need to meet dynamic performance criteria deterministically. Here we propose a novel phased actor in actor-critic (PAAC) method, aiming at improving policy gradient estimation and thus the quality of the control policy. Specifically, PAAC accounts for both $Q$ value and TD error in its actor update. We prove qualitative properties of PAAC for learning convergence of the value and policy, solution optimality, and stability of system dynamics. Additionally, we show variance reduction in policy gradient estimation. PAAC performance is systematically and quantitatively evaluated in this study using DeepMind Control Suite (DMC). Results show that PAAC leads to significant performance improvement measured by total cost, learning variance, robustness, learning speed and success rate. As PAAC can be piggybacked onto general policy gradient learning frameworks, we select well-known methods such as direct heuristic dynamic programming (dHDP), deep deterministic policy gradient (DDPG) and their variants to demonstrate the effectiveness of PAAC. Consequently we provide a unified view on these related policy gradient algorithms.
When agents interact socially with different intentions, conflicts are difficult to avoid. Although how agents can resolve such problems autonomously has not been determined, dynamic characteristics of agency may shed light on underlying mechanisms. The current study focused on the sense of agency (SoA), a specific aspect of agency referring to congruence between the agent's intention in acting and the outcome. Employing predictive coding and active inference as theoretical frameworks of perception and action generation, we hypothesize that regulation of complexity in the evidence lower bound of an agent's model should affect the strength of the agent's SoA and should have a critical impact on social interactions. We built a computational model of imitative interaction between a robot and a human via visuo-proprioceptive sensation with a variational Bayes recurrent neural network, and simulated the model in the form of pseudo-imitative interaction using recorded human body movement data. A key feature of the model is that each modality's complexity can be regulated differently with a hyperparameter assigned to each module. We first searched for an optimal setting that endows the model with appropriate coordination of multimodal sensation. This revealed that the vision module's complexity should be more tightly regulated than that of the proprioception module. Using the optimally trained model, we examined how changing the tightness of complexity regulation after training affects the strength of the SoA during interactions. The results showed that with looser regulation, an agent tends to act more egocentrically, without adapting to the other. In contrast, with tighter regulation, the agent tends to follow the other by adjusting its intention. We conclude that the tightness of complexity regulation crucially affects the strength of the SoA and the dynamics of interactions between agents.
Oxide interfaces exhibit a broad range of physical effects stemming from broken inversion symmetry. In particular, they can display non-reciprocal phenomena when time reversal symmetry is also broken, e.g., by the application of a magnetic field. Examples include the direct and inverse Edelstein effects (DEE, IEE) that allow the interconversion between spin currents and charge currents. The DEE and IEE have been investigated in interfaces based on the perovskite SrTiO$_3$ (STO), albeit in separate studies focusing on one or the other. The demonstration of these effects remains mostly elusive in other oxide interface systems despite their blossoming in the last decade. Here, we report the observation of both the DEE and IEE in a new interfacial two-dimensional electron gas (2DEG) based on the perovskite oxide KTaO$_3$. We generate 2DEGs by the simple deposition of Al metal onto KTaO$_3$ single crystals, characterize them by angle-resolved photoemission spectroscopy and magnetotransport, and demonstrate the DEE through unidirectional magnetoresistance and the IEE by spin-pumping experiments. We compare the spin-charge interconversion efficiency with that of STO-based interfaces, relate it to the 2DEG electronic structure, and give perspectives for the implementation of KTaO$_3$ 2DEGs into spin-orbitronic devices.
Precision tests of the Standard Model and searches for beyond the Standard Model physics often require nuclear structure input. There has been tremendous progress in the development of nuclear ab initio techniques capable of providing accurate nuclear wave functions. For the calculation of observables, matrix elements of complicated operators need to be evaluated. Typically, these matrix elements would contain spurious contributions from the center-of-mass (COM) motion. This could be problematic when precision results are sought. Here, we derive a transformation relying on properties of harmonic oscillator wave functions that allows an exact removal of the COM motion contamination, applicable to any one-body operator depending on nucleon coordinates and momenta. The resulting many-nucleon matrix elements are translationally invariant provided that the nuclear eigenfunctions factorize as products of the intrinsic and COM components, as is the case, e.g., in the no-core shell model approach. An application of the transformation has recently been demonstrated in calculations of the nuclear structure recoil corrections for the beta decay of ^6He.
We solve closed string theory in all regular homogeneous plane-wave backgrounds with homogeneous NS three-form field strength and a dilaton. The parameters of the model are constant symmetric and anti-symmetric matrices k_{ij} and f_{ij} associated with the metric, and a constant anti-symmetric matrix h_{ij} associated with the NS field strength. In the light-cone gauge the rotation parameters f_{ij} have a natural interpretation as a constant magnetic field. This is a generalisation of the standard Landau problem with oscillator energies now being non-trivial functions of the parameters f_{ij} and k_{ij}. We develop a general procedure for solving linear but non-diagonal equations for string coordinates, and determine the corresponding oscillator frequencies, the light-cone Hamiltonian and level matching condition. We investigate the resulting string spectrum in detail in the four-dimensional case and compare the results with previously studied examples. Throughout we will find that the presence of the rotation parameter f_{ij} can lead to certain unusual and unexpected features of the string spectrum like new massless states at non-zero string levels, stabilisation of otherwise unstable (tachyonic) modes, and discrete but not positive definite string oscillator spectra.
Price of anarchy quantifies the degradation of social welfare in games due to the lack of a centralized authority that can enforce the optimal outcome. At its antipodes, mechanism design studies how to ameliorate these effects by incentivizing socially desirable behavior and implementing the optimal state as equilibrium. In practice, the responsiveness to such measures depends on the wealth of each individual. This leads to a natural, but largely unexplored, question. Does optimal mechanism design entrench, or maybe even exacerbate, social inequality? We study this question in nonatomic congestion games, arguably one of the most thoroughly studied settings from the perspectives of price of anarchy as well as mechanism design. We introduce a new model that incorporates the wealth distribution of the population and captures the income elasticity of travel time. This allows us to argue about the equality of wealth distribution both before and after employing a mechanism. We start our analysis by establishing a broad qualitative result, showing that tolls always increase inequality in symmetric congestion games under any reasonable metric of inequality, e.g., the Gini index. Next, we introduce the iniquity index, a novel measure for quantifying the magnitude of these forces towards a more unbalanced wealth distribution and show it has good normative properties (robustness to scaling of income, no-regret learning). We analyze iniquity both in theoretical settings (Pigou's network under various wealth distributions) as well as experimental ones (based on a large scale field experiment in Singapore). Finally, we provide an algorithm for computing optimal tolls for any point of the trade-off of relative importance of efficiency and equality. We conclude with a discussion of our findings in the context of theories of justice as developed in contemporary social sciences.
We derive eigenvalue bounds for the $t$-distance chromatic number of a graph, which is a generalization of the classical chromatic number. We apply such bounds to hypercube graphs, providing alternative spectral proofs for results by Ngo, Du and Graham [Inf. Process. Lett., 2002], and improving their bound for several instances. We also apply the eigenvalue bounds to Lee graphs, extending results by Kim and Kim [Discrete Appl. Math., 2011]. Finally, we provide a complete characterization for the existence of perfect Lee codes of minimum distance $3$. In order to prove our results, we use a mix of spectral and number theoretic tools. Our results, which provide the first application of spectral methods to Lee codes, illustrate that such methods succeed in capturing the nature of the Lee metric.
The in vitro and in vivo activity of diminazene (Dim), artesunate (Art) and the combination of Dim and Art (Dim-Art) against Leishmania donovani was compared to the reference drug, amphotericin B. The IC50 of Dim-Art was found to be $2.28 \pm 0.24\,\mu$g/mL, while those of Dim and Art were $9.16 \pm 0.3\,\mu$g/mL and $4.64 \pm 0.48\,\mu$g/mL, respectively. The IC50 of amphotericin B was $0.16 \pm 0.32\,\mu$g/mL against stationary-phase promastigotes. In vivo evaluation in the L. donovani BALB/c mouse model indicated that treatment with the combined drug therapy at doses of 12.5 mg/kg for 28 consecutive days significantly ($p < 0.001$) reduced the parasite burden in the spleen compared to the single-drug treatments given at the same dosages. Although the parasite burden was slightly lower ($p < 0.05$) in the amphotericin B group than in the Dim-Art treatment group, the present study demonstrates the advantage and the potential use of the combined Dim-Art therapy over the constituent drugs, Dim or Art, when used alone. Further evaluation is recommended to determine the most efficacious combination ratio of the two compounds.
We construct a stable formal model of a Lubin-Tate curve with level three, and study the action of the Weil group and a division algebra on its stable reduction. Further, we study the structure of the cohomology of the Lubin-Tate curve. Our study is purely local and includes the case where the characteristic of the residue field of the local field is two.
Exciton dynamics can be strongly affected by lattice vibrations through electron-phonon coupling. This is rarely explored in two-dimensional magnetic semiconductors. Focusing on bilayer CrI3, we first show the presence of strong electron-phonon coupling through temperature-dependent photoluminescence and absorption spectroscopy. We then report the observation of periodic broad modes up to the 8th order in Raman spectra, attributed to the polaronic character of excitons. We establish that this polaronic character is dominated by the coupling between the charge-transfer exciton at 1.96 eV and a longitudinal optical phonon at 120.6 cm$^{-1}$. We further show that the emergence of long-range magnetic order enhances the electron-phonon coupling strength by about 50$\%$ and that the transition from layered antiferromagnetic to ferromagnetic order tunes the spectral intensity of the periodic broad modes, suggesting a strong coupling among the lattice, charge and spin in two-dimensional CrI3. Our study opens opportunities for tailoring light-matter interactions in two-dimensional magnetic semiconductors.
Friedland (1981) showed that for a nonnegative square matrix A, the spectral radius r(e^D A) is a log-convex functional over the real diagonal matrices D. He showed that for fully indecomposable A, log r(e^D A) is strictly convex over D_1, D_2 if and only if D_1-D_2 != c I for any c \in R. Here the condition of full indecomposability is shown to be replaceable by the weaker condition that A and A'A be irreducible, which is the sharpest possible replacement condition. Irreducibility of both A and A'A is shown to be equivalent to irreducibility of A^2 and A'A, which is the condition for a number of strict inequalities on the spectral radius found in Cohen, Friedland, Kato, and Kelly (1982). Such `two-fold irreducibility' is equivalent to joint irreducibility of A, A^2, A'A, and AA', or in combinatorial terms, equivalent to the directed graph of A being strongly connected and the simple bipartite graph of A being connected. Additional ancillary results are presented.
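Friedland's log-convexity statement can be checked numerically; the sketch below (plain NumPy, random nonnegative test matrix, not tied to the paper's proofs) verifies the midpoint convexity inequality for log r(e^D A).

```python
import numpy as np

def log_spectral_radius(A, d):
    """log r(e^D A) for the diagonal matrix D = diag(d)."""
    M = np.diag(np.exp(d)) @ A
    return np.log(np.max(np.abs(np.linalg.eigvals(M))))

rng = np.random.default_rng(0)
A = rng.random((4, 4))          # nonnegative (and irreducible with probability 1)
D1 = rng.normal(size=4)
D2 = rng.normal(size=4)

# Convexity along the segment between D1 and D2: f(midpoint) <= average of endpoints.
lhs = log_spectral_radius(A, 0.5 * (D1 + D2))
rhs = 0.5 * (log_spectral_radius(A, D1) + log_spectral_radius(A, D2))
print(lhs <= rhs + 1e-12)  # True; strict unless D1 - D2 is a multiple of the identity
```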
We investigate the solvability of the Byzantine Reliable Broadcast and Byzantine Broadcast Channel problems in distributed systems affected by Mobile Byzantine Faults. We show that both problems are not solvable even in one of the most constrained system models for mobile Byzantine faults defined so far. By endowing processes with an additional local failure oracle, we provide a solution to the Byzantine Broadcast Channel problem.
This note gives explicit equations for the elliptic curves (in characteristic not 2 or 3) with mod 2 representation isomorphic to that of a given one.
In this paper, a new gradient-based optimization approach that automatically adjusts the learning rate is proposed. This approach can be applied to design both non-adaptive and adaptive learning rates. First, I introduce the non-adaptive learning rate optimization method, Binary Forward Exploration (BFE), and then develop the corresponding adaptive per-parameter learning rate method, Adaptive BFE (AdaBFE). This approach offers an alternative way to optimize the learning rate within the stochastic gradient descent (SGD) framework, besides the current non-adaptive learning rate methods (e.g. SGD, momentum, Nesterov) and adaptive learning rate methods (e.g. AdaGrad, AdaDelta, Adam). The purpose of developing this approach is not to beat the benchmarks of other methods but to provide a different perspective on optimizing gradient descent, although a comparative study with previous methods is presented in the following sections. This approach is expected to be heuristic and to inspire researchers to improve gradient-based optimization in combination with previous methods.
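The abstract does not spell out the BFE update rule; purely as a hypothetical illustration of a forward-exploration style learning-rate search wrapped around gradient descent (the probing rule, names and constants below are assumptions, not the authors' method), one might sketch:

```python
import numpy as np

def sgd_with_forward_exploration(grad, w, loss, lr=0.1, steps=100):
    """Hypothetical sketch: before each update, probe a doubled and a halved step
    and keep whichever trial learning rate gives the lower loss (binary exploration)."""
    for _ in range(steps):
        g = grad(w)
        candidates = [lr * 2.0, lr, lr * 0.5]
        losses = [loss(w - c * g) for c in candidates]
        lr = candidates[int(np.argmin(losses))]   # pick the best forward probe
        w = w - lr * g
    return w, lr

# Toy quadratic: minimize ||w||^2.
loss = lambda w: float(np.dot(w, w))
grad = lambda w: 2.0 * w
w_opt, lr_final = sgd_with_forward_exploration(grad, np.array([3.0, -2.0]), loss)
print(w_opt, lr_final)
```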
HI intensity mapping (IM) is a novel technique capable of mapping the large-scale structure of the Universe in three dimensions and delivering exquisite constraints on cosmology, by using HI as a biased tracer of the dark matter density field. This is achieved by measuring the intensity of the redshifted 21cm line over the sky in a range of redshifts without the requirement to resolve individual galaxies. In this chapter, we investigate the potential of SKA1 to deliver HI intensity maps over a broad range of frequencies and a substantial fraction of the sky. By pinning down the baryon acoustic oscillation and redshift space distortion features in the matter power spectrum -- thus determining the expansion and growth history of the Universe -- these surveys can provide powerful tests of dark energy models and modifications to General Relativity. They can also be used to probe physics on extremely large scales, where precise measurements of spatial curvature and primordial non-Gaussianity can be used to test inflation; on small scales, by measuring the sum of neutrino masses; and at high redshifts where non-standard evolution models can be probed. We discuss the impact of foregrounds as well as various instrumental and survey design parameters on the achievable constraints. In particular we analyse the feasibility of using the SKA1 autocorrelations to probe the large-scale signal.
A model of the ferromagnetic origin of magnetic fields of neutron stars is considered. In this model, the magnetic phase transition occurs inside the core of neutron stars soon after formation. However, owing to the high electrical conductivity the core magnetic field is initially fully screened. We study how this magnetic field emerges for an outside observer. After some time, the induced field that screens the ferromagnetic field decays enough to uncover a detectable fraction of the ferromagnetic field. We conjecture that weak fields of millisecond pulsars of 10^8-10^9 G could be identified with ferromagnetic fields of unshielded fraction f=10^-4 resulting from the decay of screening fields by a factor 1-f in 10^8 yr since their birth.
A 2D approach is used to simulate the properties of positive and negative streamers emerging from a high-voltage electrode in a long (14 cm) air gap for standard pressure and temperature. The applied voltage varies from 100 to 500 kV. To reveal the influence of photoionization, the calculations are made for various rates of seed electron generation in front of the streamer head. The difference between the properties of positive and negative streamers is associated with the different directions of the electron drift ahead of the streamer head. As a result, the peak electric field at the streamer head and the streamer velocity are higher for positive voltage polarity. The average electric field in the negative streamer channel is approximately twice that in the positive streamer channel, in agreement with available measurements in long air gaps. It is shown that photoionization in front of the streamer head is important not only for the development of strong positive discharges, but for the development of strong negative discharges as well. An increase in the photoionization rate increases the propagation velocity of the positive streamer and retards the propagation of the negative streamer.
Let $M$ be a hyperkahler manifold of maximal holonomy (that is, an IHS manifold), and let $K$ be its Kahler cone, which is an open, convex subset in the space $H^{1,1}(M, R)$ of real (1,1)-forms. This space is equipped with a canonical bilinear symmetric form of signature $(1,n)$ obtained as a restriction of the Bogomolov-Beauville-Fujiki form. The set of vectors of positive square in the space of signature $(1,n)$ is a disconnected union of two convex cones. The "positive cone" is the component which contains the Kahler cone. We say that the Kahler cone is "round" if it is equal to the positive cone. The manifolds with round Kahler cones have a unique bimeromorphic model and correspond to Hausdorff points in the corresponding Teichmuller space. We prove that any maximal holonomy hyperkahler manifold with $b_2 > 4$ has a deformation with round Kahler cone and Picard lattice of signature (1,1), admitting two non-collinear integer isotropic classes. This is used to show that all known examples of hyperkahler manifolds admit a deformation with two transversal Lagrangian fibrations, and that the Kobayashi metric vanishes unless the Picard rank is maximal.
The effects of CP-violating anomalous ZZZ and gammaZZ vertices in ZZ production are determined. We present the differential cross-section for e+e- -> ZZ with dependence on the spins of the Z bosons. It is shown that, among the different spin combinations, those with one longitudinally and one transversely polarized Z in the final state are the most sensitive to CP-violating anomalous couplings.
In this paper linear canonical correlation analysis (LCCA) is generalized by applying a structured transform to the joint probability distribution of the considered pair of random vectors, i.e., a transformation of the joint probability measure defined on their joint observation space. This framework, called measure transformed canonical correlation analysis (MTCCA), applies LCCA to the data after transformation of the joint probability measure. We show that judicious choice of the transform leads to a modified canonical correlation analysis, which, in contrast to LCCA, is capable of detecting non-linear relationships between the considered pair of random vectors. Unlike kernel canonical correlation analysis, where the transformation is applied to the random vectors, in MTCCA the transformation is applied to their joint probability distribution. This results in performance advantages and reduced implementation complexity. The proposed approach is illustrated for graphical model selection in simulated data having non-linear dependencies, and for measuring long-term associations between companies traded in the NASDAQ and NYSE stock markets.
An expression for the photon condensate in quantum electrodynamics is presented and generalized to deduce a simple relation between the gluon condensate and the running coupling constant of quantum chromodynamics (QCD). Ambiguities in defining the condensates are discussed. The values of the gluon condensate from some Ans\"{a}tze for the running coupling in the literature are compared with the value determined from QCD sum rules.
Schweizer, Sklar and Thorp proved in 1960 that a Menger space $(G,D,T)$ under a continuous $t$-norm $T$ induces a natural topology $\tau$ which is metrizable. We extend this result to any probabilistic metric space $(G,D,\star)$ provided that the triangle function $\star$ is continuous. We prove that in this case the topological space $(G,\tau)$ is uniformly homeomorphic to a (deterministic) metric space $(G,\sigma_D)$ for some canonical metric $\sigma_D$ on $G$. As applications, we extend the fixed point theorem of Hicks to probabilistic metric spaces which are not necessarily Menger spaces, and we prove a probabilistic Arzela-Ascoli type theorem.
Source-free domain adaptation (SFDA) aims to transfer a trained source model to the unlabeled target domain without accessing the source data. However, the SFDA setting faces a performance bottleneck due to the absence of source data and target supervised information, as evidenced by the limited performance gains of the newest SFDA methods. In this paper, for the first time, we introduce a more practical scenario called active source-free domain adaptation (ASFDA) that permits actively selecting a few target data to be labeled by experts. To achieve this, we first find that the points satisfying the properties of neighbor-chaotic, individual-different, and target-like are the best points to select, and we define them as the minimum happy (MH) points. We then propose minimum happy points learning (MHPL) to actively explore and exploit MH points. We design three unique strategies: neighbor ambient uncertainty, neighbor diversity relaxation, and one-shot querying, to explore the MH points. Further, to fully exploit MH points in the learning process, we design a neighbor focal loss that assigns the weighted neighbor purity to the cross-entropy loss of MH points to make the model focus more on them. Extensive experiments verify that MHPL remarkably exceeds various types of baselines and achieves significant performance gains at a small labeling cost.
We present the results of detailed surface photometry of a sample of early-type galaxies in the Hubble Deep Field. Effective radii, surface brightnesses and total V_606 magnitudes have been obtained, as well as U_300, B_450, I_814, J, H and K colors, which are compared with the predictions of chemical-spectrophotometric models of population synthesis. Spectroscopic redshifts are available for 23 objects. For another 25, photometric redshifts are given. In the <mu_e>-r_e plane the early-type galaxies of the HDF, once the appropriate K+E corrections are applied, turn out to follow the `rest frame' Kormendy relation. This evidence, linked to the dynamical information gathered by Steidel et al. (1996), indicates that these galaxies, even at z~2-3, lie in the Fundamental Plane, in a virial equilibrium condition. At the same redshifts a statistically significant lack of large galaxies [i.e. with Log r_e(kpc) > 0.2] is observed.
Lepton flavor violating Higgs decays can arise in flavor symmetry models where the Higgs sector is responsible for both the electroweak and the flavor symmetry breaking. Here we advocate an $S_4$ three-Higgs-doublet model where tightly constrained flavor changing neutral currents are suppressed by a remnant $Z_3$ symmetry. A small breaking of this $Z_3$ symmetry can explain the $2.4\,\sigma$ excess of Higgs decay final states with a $\mu \tau $ topology reported recently by CMS if the new neutral scalars are light. The model also predicts sizable rates for lepton flavor violating Higgs decays in the $e\tau $ and $e \mu$ channels because of the unifying $S_4$ flavor symmetry.
We identify the scaling limit of the backbone of the high-dimensional incipient infinite cluster (IIC), both in the finite-range and the long-range setting. In the finite-range setting, this scaling limit is Brownian motion, in the long-range setting, it is a stable motion. The proof relies on a novel lace expansion that keeps track of the number of pivotal bonds.
The historical microlensing surveys MACHO, EROS, MOA and OGLE (hereafter summarized in the MEMO acronym) have searched for microlensing toward the LMC for a total duration of 27 years. We have studied the potential of joining all databases to search for very heavy objects producing events several years in duration. We show that a combined systematic search for microlensing should detect on the order of 10 events due to $100M_\odot$ black holes, events that were not detectable by the individual surveys, if these objects have a major contribution to the Milky Way halo. Assuming that a common analysis is feasible, i.e. that the difficulties due to the use of different passbands can be overcome, we show that the sensitivity of such an analysis should allow one to quantify the Galactic black hole component.
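The several-year time scale quoted for $100M_\odot$ lenses can be checked against the standard Einstein-radius crossing time; the back-of-the-envelope sketch below assumes a lens halfway to the LMC and a transverse velocity of 200 km/s (both assumed values, not taken from the survey analysis).

```python
import numpy as np

G, c = 6.674e-11, 2.998e8          # SI units
M_sun, kpc, AU = 1.989e30, 3.086e19, 1.496e11
yr = 3.156e7

M = 100 * M_sun                    # lens mass
D_s = 50 * kpc                     # source distance (LMC)
x = 0.5                            # lens placed halfway along the line of sight (assumption)
v_t = 200e3                        # assumed transverse velocity in m/s

# Einstein radius and crossing time for a point lens.
R_E = np.sqrt(4 * G * M / c**2 * D_s * x * (1 - x))
t_E = R_E / v_t
print(R_E / AU, t_E / yr)          # roughly 100 AU and a few years
```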
We report on a 10 ks simultaneous Chandra/HETG-NuSTAR observation of the Bursting Pulsar, GRO J1744-28, during its third detected outburst since discovery and after nearly 18 years of quiescence. The source is detected up to 60 keV with an Eddington persistent flux level. Seven bursts, followed by dips, are seen with Chandra, three of which are also detected with NuSTAR. Timing analysis reveals a slight increase in the persistent emission pulsed fraction with energy (from 10% to 15%) up to 10 keV, above which it remains constant. The 0.5-70 keV spectra of the persistent and dip emission are the same within errors, and well described by a blackbody (BB), a power-law with an exponential rolloff, a 10 keV feature, and a 6.7 keV emission feature, all modified by neutral absorption. Assuming that the BB emission originates in an accretion disc, we estimate its inner (magnetospheric) radius to be about 4x10^7 cm, which translates to a surface dipole field B~9x10^10 G. The Chandra/HETG spectrum resolves the 6.7 keV feature into (quasi-)neutral and highly ionized Fe XXV and Fe XXVI emission lines. XSTAR modeling shows these lines to also emanate from a truncated accretion disk. The burst spectra, with a peak flux more than an order of magnitude higher than Eddington, are well fit with a power-law with an exponential rolloff and a 10 keV feature, with similar fit values compared to the persistent and dip spectra. The burst spectra lack a thermal component and any Fe features. Anisotropic (beamed) burst emission would explain both the lack of the BB and any Fe components.
When there is a certain amount of field inhomogeneity, the biased ferrimagnetic crystal will exhibit the higher-order magnetostatic (HMS) mode in addition to the uniform-precession Kittel mode. In cavity magnonics, we show both experimentally and theoretically the cross-Kerr-type interaction between the Kittel mode and HMS mode. When the Kittel mode is driven to generate a certain number of excitations, the HMS mode displays a corresponding frequency shift and vice versa. The cross-Kerr effect is caused by an exchange interaction between these two spin-wave modes. Utilizing the cross-Kerr effect, we realize and integrate a multi-mode cavity magnonic system with only one yttrium iron garnet (YIG) sphere. Our results will bring new methods to magnetization dynamics studies and pave a way for novel cavity magnonic devices by including the magnetostatic mode-mode interaction as an operational degree of freedom.
We present a statistical study of optical warps in a sample of 540 galaxies, about five times larger than previous samples. About 40% of all late-type galaxies reveal S-shaped warping of their planes in the outer parts. Given the geometrical parameters and detection sensitivity, this result suggests that at least half of all galaxy disks might be warped. We demonstrate through geometrical simulations that some apparent warps could be due to spiral arms in a highly inclined galaxy. The simulations of non-warped galaxies give a false-warp fraction of $\approx$ 15%, while simulations of warped galaxies suggest that no more than 20% of the warps are missed. We find a strong positive correlation of observed warps with environment, suggesting that tidal interactions have a large influence in creating or re-enforcing warped deformations.
Decoherence of a central spin coupled to an interacting spin bath via inhomogeneous Heisenberg coupling is studied by two different approaches, namely an exact equations-of-motion (EOM) method and a Chebyshev expansion technique (CET). By assuming a wheel topology of the bath spins with uniform nearest-neighbor $XX$-type intrabath coupling, we examine the central spin dynamics with the bath prepared in two different types of bath initial conditions. For fully polarized baths in strong magnetic fields, the polarization dynamics of the central spin exhibits a collapse-revival behavior in the intermediate-time regime. Under an antiferromagnetic bath initial condition, the two methods give excellently consistent central spin decoherence dynamics for finite-size baths of $N\leq14$ bath spins. The decoherence factor is found to drop off abruptly on a short time scale and approach a finite plateau value which depends on the intrabath coupling strength non-monotonically. In the ultrastrong intrabath coupling regime, the plateau values show an oscillatory behavior depending on whether $N/2$ is even or odd. The observed results are interpreted qualitatively within the framework of the EOM and perturbation analysis. The effects of anisotropic spin-bath coupling and inhomogeneous intrabath couplings are briefly discussed. A possible experimental realization of the model in a modified quantum corral setup is suggested.
We study supersymmetric (SUSY) responses to a photoassociation process in a mixture of Bose molecules $b$ and Fermi atoms $f$ which turn into mutual superpartners for a set of proper parameters. We consider the molecule $b$ to be a bound state of the atom $f$ and another Fermi atom $F$ of a different species. The $b$-$f$ mixture and a free $F$ atom gas are loaded in an optical lattice. The SUSY nature of the mixture can be signaled in the response to a photon-induced atom-molecule transition: while two new types of fermionic excitations, an individual $b$ particle-$f$ hole pair continuum and the Goldstino-like collective mode, are concomitant for a generic $b$-$f$ mixture, the former is completely suppressed in the SUSY $b$-$f$ mixture and the zero-momentum mode of the latter approaches an exact eigenstate. This SUSY response can be detected by means of spectroscopy, e.g., the photoassociation spectrum which displays the molecular formation rate of $Ff \to b$.
Data imbalance and open-ended distribution are two intrinsic characteristics of the real visual world. Though encouraging progress has been made in tackling each challenge separately, few works have been dedicated to combining them towards real-world scenarios. While several previous works have focused on classifying close-set samples and detecting open-set samples during testing, it is still essential to be able to classify unknown subjects, as human beings can. In this paper, we formally define a more realistic task as distribution-agnostic generalized category discovery (DA-GCD): generating fine-grained predictions for both close- and open-set classes in a long-tailed open-world setting. To tackle this challenging problem, we propose a Self-Balanced Co-Advice contrastive framework (BaCon), which consists of a contrastive-learning branch and a pseudo-labeling branch, working collaboratively to provide interactive supervision for the DA-GCD task. In particular, the contrastive-learning branch provides reliable distribution estimation to regularize the predictions of the pseudo-labeling branch, which in turn guides contrastive learning through self-balanced knowledge transfer and a proposed novel contrastive loss. We compare BaCon with state-of-the-art methods from two closely related fields: imbalanced semi-supervised learning and generalized category discovery. The effectiveness of BaCon is demonstrated by superior performance over all baselines and comprehensive analysis across various datasets. Our code is publicly available.
We show, in an elementary way, that the Julia set of one-complex-variable entire functions is nonempty and perfect.
In this paper, we focus on the effect of mass transfer between compact binaries, such as neutron star-neutron star (NS-NS) systems and neutron star-white dwarf (NS-WD) systems, on gravitational waves (GWs). We adopt the mass quadrupole formula with the 2.5-order post-Newtonian (2.5 PN) approximation to calculate the GW radiation and the orbital evolution. After a discussion of the astrophysical processes relevant to our scenario, two kinds of mass-transfer models are applied here. One is the mass overflow of the atmosphere, where the companion star orbits into the primary's Roche limit and its atmosphere overflows into the common envelope. The other is the tidal disruption of the core, which is treated as an incompressible fluid moving towards the primary star and, in the near region, branches into an accretion disc (AD) and a direct accretion flow. Viewing this envelope and the AD as a background, the GW from its spin can be calculated as that of a rotating, non-spherically symmetric star. We eventually obtain the corrected gravitational waveform (GWF) templates for different initial states in the inspiral phase.
This work presents the study of some new anomalous electromagnetic effects in graphite-like thin carbon films. These are: the fast switching (1 nanosecond) of electrical conductivity; the detection of microwave radiation and its temperature dependence; the oscillations of film-stack magnetization in magnetic fields of 1-5 T; and the optical radiation emitted during the spasmodic switching of conductivity. Results of magnetic force microscopy (MFM), DC SQUID magnetization, reversed Josephson effect (RJE), and resistance measurements in thin carbon arc (CA) films are presented. The observation of an RJE-induced voltage as well as its rf frequency, input amplitude, and temperature dependence reveals the existence of Josephson-like junction arrays. Oscillating behavior of the DC SQUID magnetization, reminiscent of the Fraunhofer-like behavior of the superconducting (SC) critical current, has been observed in the range of 10000-50000 Oe. The DC SQUID magnetization measurement indicates a possible elementary 102 nm SC loop; this is compared to direct MFM observations of magnetic clusters with a median size of 165 nm. The results obtained provide a basis for non-cryogenic electronic devices utilizing the Josephson effect.
In soccer (or association football), players quickly go from heroes to zeroes, or vice versa. Performance is not a static measure but a somewhat volatile one. Analyzing performance as a time series rather than as a stationary point in time is crucial to making better decisions. This paper introduces and explores I-VAEP and O-VAEP models to evaluate actions and rate players' intention and execution. We then analyze these ratings over time and propose use cases that support our choice of treating player ratings as a continuous problem. As a result, we identify the best players and how their performance evolved, define volatility metrics to measure a player's consistency, and build a player development curve to assist decision-making.
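As a purely hypothetical illustration of such a volatility metric (the ratings, window length and pandas-based implementation below are assumptions, not the paper's definition), a rolling standard deviation of per-match ratings could be computed as:

```python
import pandas as pd

# Hypothetical per-match O-VAEP ratings for one player, indexed by matchday.
ratings = pd.Series(
    [0.42, 0.55, 0.31, 0.60, 0.18, 0.47, 0.52, 0.29, 0.64, 0.40],
    index=pd.RangeIndex(1, 11, name="matchday"),
)

# One simple consistency/volatility metric: rolling standard deviation of ratings.
volatility = ratings.rolling(window=5, min_periods=3).std()
print(volatility.round(3))
```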
We consider the problem of accurately recovering a matrix B of size M by M, which represents a probability distribution over M^2 outcomes, given access to an observed matrix of "counts" generated by taking independent samples from the distribution B. How can structural properties of the underlying matrix B be leveraged to yield computationally efficient and information theoretically optimal reconstruction algorithms? When can accurate reconstruction be accomplished in the sparse data regime? This basic problem lies at the core of a number of questions that are currently being considered by different communities, including building recommendation systems and collaborative filtering in the sparse data regime, community detection in sparse random graphs, learning structured models such as topic models or hidden Markov models, and the efforts from the natural language processing community to compute "word embeddings". Our results apply to the setting where B has a low rank structure. For this setting, we propose an efficient algorithm that accurately recovers the underlying M by M matrix using Theta(M) samples. This result easily translates to Theta(M) sample algorithms for learning topic models and learning hidden Markov models. These linear sample complexities are optimal, up to constant factors, in an extremely strong sense: even testing basic properties of the underlying matrix (such as whether it has rank 1 or 2) requires Omega(M) samples. We provide an even stronger lower bound where distinguishing whether a sequence of observations were drawn from the uniform distribution over M observations versus being generated by an HMM with two hidden states requires Omega(M) observations. This precludes sublinear-sample hypothesis tests for basic properties, such as identity or uniformity, as well as sublinear sample estimators for quantities such as the entropy rate of HMMs.
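To make the setting concrete, the sketch below generates counts from a low-rank probability matrix B and applies a naive truncated-SVD baseline to the empirical frequencies; this illustrates the problem setup only and is not the estimator proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, rank, n_samples = 50, 2, 5 * 50   # Theta(M) samples, as in the linear-sample regime

# Build a rank-2 probability matrix B over M^2 outcomes.
U = rng.random((M, rank)); V = rng.random((M, rank))
B = U @ V.T
B /= B.sum()

# Draw independent samples and form the matrix of counts.
idx = rng.choice(M * M, size=n_samples, p=B.ravel())
counts = np.bincount(idx, minlength=M * M).reshape(M, M)

# Naive baseline: truncated SVD of the empirical frequency matrix.
F = counts / n_samples
u, s, vt = np.linalg.svd(F)
B_hat = (u[:, :rank] * s[:rank]) @ vt[:rank]
print(np.abs(B_hat - B).sum())       # total error of the naive estimate
```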
This paper presents a hydrodynamic-like model of business-cycle aggregate fluctuations of economic and financial variables. We model macroeconomics as an ensemble of economic agents on economic space, where agents' risk ratings play the role of their coordinates. The sum of the economic variables of agents with coordinate x defines macroeconomic variables as functions of time and coordinate x. We describe the evolution of and interactions between macro variables on economic space by hydrodynamic-like equations. The integral of macro variables over economic space defines aggregate economic or financial variables as functions of time t only. Hydrodynamic-like equations define the fluctuations of aggregate variables. The motion of agents from the low-risk to the high-risk area and back defines the origin of repeated fluctuations of aggregate variables. Economic or financial variables on economic space define statistical moments such as mean risk, mean square risk, and higher moments. Fluctuations of these statistical moments describe phases of financial and economic cycles. As an example, we present a simple model of the relations between Assets and Revenue-on-Assets and derive hydrodynamic-like equations that describe the evolution of and interaction between these variables. The hydrodynamic-like equations permit deriving systems of ordinary differential equations that describe fluctuations of aggregate Assets, Assets mean risk, and Assets mean square risk. Our approach allows describing business-cycle aggregate fluctuations induced by interactions between any number of economic or financial variables.
The Traveling Salesman Problem (TSP) is a decision-making problem that is essential for a number of practical applications. Today, this problem is solved on digital computers exploiting a Boolean-type architecture by checking possible routes one by one. In this work, we describe a special type of hardware for the TSP solution. It is a magnonic combinatorial device comprising magnetic and electric parts connected in an active ring circuit. There are a number of possible propagation routes in the magnetic mesh made of phase shifters, frequency filters, and attenuators. The phase shifters mimic cities in the TSP, while the distance between the cities is encoded in the signal attenuation. The set of frequency filters makes waves of different frequencies propagate through different routes. The principle of operation is based on classical wave superposition. A number of waves travel along all possible routes in parallel, accumulating different phase shifts and amplitude damping. However, only the wave(s) that accumulate a certain phase shift will be amplified by the electric part. The amplification comes first to the waves that possess the minimum propagation losses. This makes this type of device suitable for the TSP solution, where the waves are similar to salesmen traveling all possible routes at the same time. We present the results of numerical modeling illustrating TSP solutions for four and six cities. We also present experimental data for the TSP solution with four cities.
A numerical solution for the polarization of a two-level atom in a polyharmonic field has been obtained. An analytical solution is possible for the particular case of a symmetric position of the carrier frequency relative to the transition frequency. The results show that nonlinear features in the polarization spectrum appear even for small amplitudes of the comb components when the frequency spacing between them is small. This means that nonlinear effects must be taken into account when interpreting spectra in comb spectroscopy.
Advanced data augmentation strategies have widely been studied to improve the generalization ability of deep learning models. Regional dropout is one of the popular solutions that guides the model to focus on less discriminative parts by randomly removing image regions, resulting in improved regularization. However, such information removal is undesirable. On the other hand, recent strategies suggest randomly cutting and mixing patches and their labels among training images, to enjoy the advantages of regional dropout without having any pointless pixels in the augmented images. We argue that such random patch selection strategies may not necessarily carry sufficient information about the corresponding object, and that mixing the labels according to such an uninformative patch leads the model to learn unexpected feature representations. Therefore, we propose SaliencyMix, which carefully selects a representative image patch with the help of a saliency map and mixes this indicative patch with the target image, thus leading the model to learn more appropriate feature representations. SaliencyMix achieves the best known top-1 error of 21.26% and 20.09% for ResNet-50 and ResNet-101 architectures on ImageNet classification, respectively, and also improves the model robustness against adversarial perturbations. Furthermore, models that are trained with SaliencyMix help to improve the object detection performance. Source code is available at https://github.com/SaliencyMix/SaliencyMix.
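A minimal sketch of the saliency-guided patch selection and area-proportional label mixing described above is given below; the gradient-magnitude saliency proxy and all parameter choices are assumptions for illustration and differ from the official implementation linked in the abstract.

```python
import numpy as np

def saliency_mix(source, target, y_source, y_target, patch=16):
    """Paste the most salient `patch x patch` region of `source` into `target`
    and mix the labels in proportion to the patch area (sketch, not the official code)."""
    gray = source.mean(axis=-1)
    gy, gx = np.gradient(gray)
    sal = np.hypot(gx, gy)                          # crude saliency proxy: gradient magnitude
    # Centre the patch on the most salient pixel, clipped to stay inside the image.
    r, c = np.unravel_index(np.argmax(sal), sal.shape)
    r0 = np.clip(r - patch // 2, 0, source.shape[0] - patch)
    c0 = np.clip(c - patch // 2, 0, source.shape[1] - patch)
    mixed = target.copy()
    mixed[r0:r0 + patch, c0:c0 + patch] = source[r0:r0 + patch, c0:c0 + patch]
    lam = 1.0 - (patch * patch) / (target.shape[0] * target.shape[1])
    y_mixed = lam * y_target + (1.0 - lam) * y_source
    return mixed, y_mixed

img_a, img_b = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
one_hot = np.eye(10)
mixed_img, mixed_label = saliency_mix(img_a, img_b, one_hot[3], one_hot[7])
```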
The degree anti-Ramsey number $AR_d(H)$ of a graph $H$ is the smallest integer $k$ for which there exists a graph $G$ with maximum degree at most $k$ such that any proper edge colouring of $G$ yields a rainbow copy of $H$. In this paper we prove a general upper bound on degree anti-Ramsey numbers, determine the precise value of the degree anti-Ramsey number of any forest, and prove an upper bound on the degree anti-Ramsey numbers of cycles of any length which is best possible up to a multiplicative factor of $2$. Our proofs involve a variety of tools, including a classical result of Bollob\'as concerning cross intersecting families and a topological version of Hall's Theorem due to Aharoni, Berger and Meshulam.
The heightened sensitivity observed in non-Hermitian systems at exceptional points (EPs) has garnered significant attention. Typical EP sensor implementations rely on precise measurements of spectra; importantly, for real-time sensing measurements, the EP condition ceases to hold as the perturbation increases over time, thereby preventing the use of the high sensitivity at the EP. In this work, we present a new approach to EP sensing that goes beyond these two traditional constraints. Firstly, instead of measuring the spectra, our EP-based sensing scheme relies on the observation of the decay length of the optical mode in finite-size gratings, which is validated via coupled mode theory as well as full-wave electrodynamic simulations. Secondly, for larger perturbation strengths, the EP is spectrally shifted instead of being destroyed -- this spectral shift of the EP is calibrated and, using this look-up table, we propose continuous real-time detection by varying the excitation laser wavelength. As a proof of principle of our technique, we present an application to the sensing of coronavirus particles, which shows an unprecedented limit of detection. These findings will contribute to the expanding field of exceptional-point-based sensing technologies for real-time applications beyond spectral measurements.
The recently reported precise experimental determination of the dipole polarizability of the H_2^+ molecular ion ground state [P.L. Jacobson, R.A. Komara, W.G. Sturrus, and S.R. Lundeen, Phys. Rev. A 62, 012509 (2000)] reveals a discrepancy between theory and experiment of about 0.0007a_0^3, which has been attributed to relativistic and QED effects. In the present work we analyze the influence of relativistic effects on the scalar dipole polarizability of an isolated H_2^+ molecular ion. Our conclusion is that they account for only 1/5 of the measured discrepancy.
Weak Supervision (WS) techniques allow users to efficiently create large training datasets by programmatically labeling data with heuristic sources of supervision. While the success of WS relies heavily on the provided labeling heuristics, the process of how these heuristics are created in practice has remained under-explored. In this work, we formalize the development process of labeling heuristics as an interactive procedure, built around the existing workflow where users draw ideas from a selected set of development data for designing the heuristic sources. With the formalism, we study two core problems of how to strategically select the development data to guide users in efficiently creating informative heuristics, and how to exploit the information within the development process to contextualize and better learn from the resultant heuristics. Building upon two novel methodologies that effectively tackle the respective problems considered, we present Nemo, an end-to-end interactive system that improves the overall productivity of WS learning pipeline by an average 20% (and up to 47% in one task) compared to the prevailing WS approach.
The quest towards expansion of the MAX design space has been accelerated by the recent discovery of several solid-solution and ordered phases involving at least two MAX end members. Going beyond the nominal MAX compounds enables not only fine tuning of existing properties but also entirely new functionality. This search, however, has mostly been done through painstaking experiments, as knowledge of the phase stability of the relevant systems is rather scarce. In this work, we report the first attempt to evaluate the finite-temperature pseudo-binary phase diagram of the Ti2AlC-Cr2AlC system via a first-principles-guided Bayesian CALPHAD framework that accounts for uncertainties not only in ab initio calculations and thermodynamic models but also in the synthesis conditions of reported experiments. The phase stability analyses show good agreement with previous experiments. The work points towards a promising way of investigating phase stability in other MAX phase systems, providing the knowledge necessary to elucidate possible synthesis routes for MAX systems with unprecedented properties.
Quantum technologies are developing powerful tools to generate and manipulate coherent superpositions of different energy levels. Envisaging a new generation of energy-efficient quantum devices, here we explore how coherence can be manipulated without exchanging energy with the surrounding environment. We start from the task of converting a coherent superposition of energy eigenstates into another. We identify the optimal energy-preserving operations, both in the deterministic and in the probabilistic scenario. We then design a recursive protocol, wherein a branching sequence of energy-preserving filters increases the probability of success while reaching maximum fidelity at each iteration. Building on the recursive protocol, we construct efficient approximations of the optimal fidelity-probability trade-off, by taking coherent superpositions of the different branches generated by probabilistic filtering. The benefits of this construction are illustrated in applications to quantum metrology, quantum cloning, coherent state amplification, and ancilla-driven computation. Finally, we extend our results to transitions where the input state is generally mixed and we apply our findings to the task of purifying quantum coherence.
Let $G$ be a finite group and $N\unlhd G$ with $|G: N|=p$ for some prime $p$. In this note, to compute $m_{G,N}$ directly, we construct a class poset $\mathfrak{T}_{C}(G)$ of $G$ for some cyclic subgroup $C$, and we find a relation between $m_{G,N}$ and the Euler characteristic of the nerve space $|N(\mathfrak{T}_{C}(G))|$ (see Theorem 1.3). As an application, we compute $m_{S_5, A_5}=0$ directly and deduce that $S_5$ is a $B$-group.
We study and classify the purely parabolic discrete subgroups of $PSL(3,\Bbb{C})$. This includes all discrete subgroups of the Heisenberg group ${\rm Heis}(3,\Bbb{C})$. While for $PSL(2,\Bbb{C})$ every purely parabolic subgroup is Abelian and acts on $\Bbb{P}^1_\Bbb{C}$ with limit set a single point, the case of $PSL(3,\Bbb{C})$ is far more subtle and intriguing. We show that there are five families of purely parabolic discrete groups in $PSL(3,\Bbb{C})$, and some of these actually split into subfamilies. We classify all these by means of their limit set and the control group. We use first the Lie-Kolchin Theorem and Borel's fixed point theorem to show that all purely parabolic discrete groups in $PSL(3,\Bbb{C})$ are virtually triangularizable. Then we prove that purely parabolic groups in $PSL(3,\Bbb{C})$ are virtually solvable and polycyclic, hence finitely presented. We then prove a slight generalization of the Lie-Kolchin Theorem for these groups: they are either virtually unipotent or else Abelian of rank 2 and of a very special type. All the virtually unipotent ones turn out to be conjugate to subgroups of the Heisenberg group ${\rm Heis}(3,\Bbb{C})$. We classify these using the obstructor dimension introduced by Bestvina, Kapovich and Kleiner. We find that their Kulkarni limit set is either a projective line, a cone of lines with base a circle or else the whole $\Bbb{P}^2_\Bbb{C}$. We determine the relation with the Conze-Guivarc'h limit set of the action on the dual projective space $\check{\Bbb{P}}^2_\Bbb{C}$ and we show that in all cases the Kulkarni region of discontinuity is the largest open set where the group acts properly discontinuously.
We propose a scheme to perform a fundamental two-qubit gate between two trapped ions using ideas from atom interferometry. As opposed to the scheme considered by J. I. Cirac and P. Zoller, Phys. Rev. Lett. 74, 4091 (1995), it does not require laser cooling to the motional ground state.
This work deals with the modeling of nonsmooth vibro-impact motion of a continuous structure against a rigid distributed obstacle. Galerkin's approach is used to approximate the solutions of the governing partial differential equations of the structure, which results in a system of ordinary differential equations (ODEs). When these ODEs are subjected to unilateral constraints and velocity jump conditions, one must use an event detection algorithm to calculate the time of impact accurately. Event detection in the presence of multiple simultaneous impacts is a computationally demanding task. Ivanov proposed a nonsmooth transformation for a vibro-impacting multi-degree-of-freedom system subjected to a single unilateral constraint. This transformation eliminates the unilateral constraints from the problem and, therefore, no event detection is required during numerical integration. Ivanov used his transformation to make analytical calculations for the stability and bifurcations of vibro-impacting motions; however, he did not explore its application for simulating distributed collisions in spatially continuous structures. We adopt Ivanov's transformation to deal with multiple unilateral constraints in spatially continuous structures. Also, imposing the velocity jump conditions exactly in the modal coordinates is nontrivial and challenging. Therefore, in this work we use a modal-physical transformation to convert the system from modal to physical coordinates on a spatially discretized grid. We then apply Ivanov's transformation on the physical system to simulate the vibro-impact motion of the structure. The developed method is demonstrated by modeling the distributed collision of a nonlinear string against a rigid distributed surface. For validation, we compare our results with the well-known penalty approach.
Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that structured latent space such as visual keypoints often outperforms unstructured representations for robotic control. However, most of these representations, whether structured or unstructured are learned in a 2D space even though the control tasks are usually performed in a 3D environment. In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/
We propose a new type of bistable device for silicon photonics, using the self-electro-optic effect within an optical cavity. Since the bistability does not depend on the intrinsic optical nonlinearity of the material, but is instead engineered by means of an optoelectronic feedback, it appears at low optical powers. This bistable device satisfies all the basic criteria required in an optical switch to build a scalable digital optical computing system.
In the intracluster medium (ICM) of galaxy clusters, heat and momentum are transported almost entirely along (but not across) magnetic field lines. We perform the first fully self-consistent Braginskii-MHD simulations of galaxy clusters including both of these effects. Specifically, we perform local and global simulations of the magnetothermal instability (MTI) and the heat-flux-driven buoyancy instability (HBI) and assess the effects of viscosity on their saturation and astrophysical implications. We find that viscosity has only a modest effect on the saturation of the MTI. As in previous calculations, we find that the MTI can generate nearly sonic turbulent velocities in the outer parts of galaxy clusters, although viscosity somewhat suppresses the magnetic field amplification. At smaller radii in cool-core clusters, viscosity can decrease the linear growth rates of the HBI. However, it has less of an effect on the HBI's nonlinear saturation, in part because three-dimensional interchange motions (magnetic flux tubes slipping past each other) are not damped by anisotropic viscosity. In global simulations of cool core clusters, we show that the HBI robustly inhibits radial thermal conduction and thus precipitates a cooling catastrophe. The effects of viscosity are, however, more important for higher entropy clusters. We argue that viscosity can contribute to the global transition of cluster cores from cool-core to non cool-core states: additional sources of intracluster turbulence, such as can be produced by AGN feedback or galactic wakes, suppress the HBI, heating the cluster core by thermal conduction; this makes the ICM more viscous, which slows the growth of the HBI, allowing further conductive heating of the cluster core and a transition to a non cool-core state.
Two modest-sized symbolic corpora of post-tonal and post-metric keyboard music have been constructed, one algorithmic, the other improvised. Deep learning models of each have been trained and largely optimised. Our purpose is to obtain a model with sufficient generalisation capacity that in response to a small quantity of separate fresh input seed material, it can generate outputs that are distinctive, rather than recreative of the learned corpora or the seed material. This objective has been first assessed statistically, and as judged by k-sample Anderson-Darling and Cramer tests, has been achieved. Music has been generated using the approach, and informal judgements place it roughly on a par with algorithmic and composed music in related forms. Future work will aim to enhance the model such that it can be evaluated in relation to expression, meaning and utility in real-time performance.
Point Projection Microscopy (PPM) is used to image suspended graphene using low-energy electrons (100-200 eV). Because of the low energies used, the graphene is neither damaged nor contaminated by the electron beam. The transparency of graphene is measured to be 74%, equivalent to electron transmission through a sheet as thick as twice the covalent radius of sp^2-bonded carbon. Also observed is rippling in the structure of the suspended graphene, with a wavelength of approximately 26 nm. The interference of the electron beam due to diffraction off the edge of a graphene knife edge is observed and used to calculate a virtual source size of 4.7 +/- 0.6 Angstroms for the electron emitter. It is demonstrated that graphene can be used as both anode and substrate in PPM in order to avoid distortions due to strong field gradients around nano-scale objects. Graphene can be used to image objects suspended on the sheet using PPM and, in the future, electron holography.
A \emph{private proximity retrieval} (\emph{PPR}) scheme is a protocol which allows a user to retrieve the identities of all records in a database that are within some distance $r$ from the user's record $x$. The user's \emph{privacy} at each server is given by the fraction of the record $x$ that is kept private. In this paper, we initiate this line of research and study protocols that offer trade-offs between privacy, computational complexity, and storage. In particular, we assume that each server stores a copy of the database and study the minimum number of servers required by our protocol to provide a given privacy level. Each server receives a query in the protocol, and the set of queries forms a code. We study the family of codes generated by the set of queries and, in particular, the minimum number of codewords in such a code, which is the minimum number of servers required for the protocol. These codes are closely related to a family of codes known as \emph{covering designs}. We introduce several lower bounds on the sizes of such codes as well as several constructions. This work focuses on the case when the records are binary vectors equipped with the Hamming distance. Other metrics such as the Johnson metric are also investigated.
The simulation of the signal response of Micro Pattern Gaseous Detectors (MPGDs) is an important and powerful tool for the design and optimization of such detectors. However, several attempts to simulate the effective charge gain exactly have not been completely successful; in particular, the gain stability over time has not been fully understood. Charging-up of the insulator surfaces has been identified as one of the effects responsible for the difference between experimental and Monte Carlo results. This work describes two iterative methods to simulate charging-up in one MPGD device, the Gas Electron Multiplier (GEM). The first method uses a constant time step for the avalanche evolution; it is very detailed but slower to compute. The second method uses a dynamic step that improves the computing time. Good agreement between both methods was reached. Comparison with experimental results shows that, although charging-up plays an important role in detector operation, it cannot be the only effect responsible for the difference between simulated and measured effective gain; it does, however, explain the time evolution of the effective gain.
Science opportunities and recommendations concerning optical/infrared polarimetry for the upcoming decade in the field of extragalactic astrophysics. Community-based White Paper to Astro2010 in response to the call for such papers.
Deep learning approaches to breast cancer detection in mammograms have recently shown promising results. However, such models are constrained by the limited size of publicly available mammography datasets, in large part due to privacy concerns and the high cost of generating expert annotations. Limited dataset size is further exacerbated by substantial class imbalance since "normal" images dramatically outnumber those with findings. Given the rapid progress of generative models in synthesizing realistic images, and the known effectiveness of simple data augmentation techniques (e.g. horizontal flipping), we ask if it is possible to synthetically augment mammogram datasets using generative adversarial networks (GANs). We train a class-conditional GAN to perform contextual in-filling, which we then use to synthesize lesions onto healthy screening mammograms. First, we show that GANs are capable of generating high-resolution synthetic mammogram patches. Next, we experimentally evaluate using the augmented dataset to improve breast cancer classification performance. We observe that a ResNet-50 classifier trained with GAN-augmented training data produces a higher AUROC compared to the same model trained only on traditionally augmented data, demonstrating the potential of our approach.
We propose a model-free algorithm for learning efficient policies capable of returning table tennis balls by controlling robot joints at a rate of 100Hz. We demonstrate that evolutionary search (ES) methods acting on CNN-based policy architectures for non-visual inputs and convolving across time learn compact controllers leading to smooth motions. Furthermore, we show that with appropriately tuned curriculum learning on the task and rewards, policies are capable of developing multi-modal styles, specifically forehand and backhand stroke, whilst achieving 80\% return rate on a wide range of ball throws. We observe that multi-modality does not require any architectural priors, such as multi-head architectures or hierarchical policies.
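As a rough illustration of how an evolutionary-search update on policy parameters can look, here is a generic antithetic ES step on a flat parameter vector (a sketch under standard assumptions, not the authors' actual training pipeline; the fitness function, population size, and step sizes are placeholders):

```python
import numpy as np

def es_step(theta, fitness_fn, sigma=0.05, alpha=0.01, population=64, rng=None):
    """One antithetic evolution-strategies update on a flat parameter vector.

    theta      : current policy parameters (1-D numpy array)
    fitness_fn : maps a parameter vector to a scalar return (e.g. ball-return rate)
    """
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((population, theta.size))            # perturbation directions
    rewards = np.array([fitness_fn(theta + sigma * e) - fitness_fn(theta - sigma * e)
                        for e in eps])                             # antithetic evaluations
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # normalise fitness scores
    grad_estimate = eps.T @ rewards / (2 * sigma * population)     # score-function gradient estimate
    return theta + alpha * grad_estimate
```

In practice such an update would be repeated for many generations, with the fitness evaluated by rolling out the CNN policy on the table-tennis task.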
Related to Shanks' notion of simplest cubic fields, the family of parametrised Diophantine equations, \[ x^3 - (n-1) x^2 y - (n+2) xy^2 - y^3 = \left( x - \lambda_0 y\right) \left(x-\lambda_1 y\right) \left(x - \lambda_2 y\right) = \pm 1, \] was studied and solved effectively by Thomas and later solved completely by Mignotte. An open conjecture of Levesque and Waldschmidt states that taking these parametrised Diophantine equations and twisting them not only once but twice, in the sense that we look at \[ f_{n,s,t}(x,y) = \left( x - \lambda_0^s \lambda_1^t y \right) \left( x - \lambda_1^s\lambda_2^t y \right) \left( x - \lambda_2^s\lambda_0^t y \right) = \pm 1, \] retains a result similar to what Thomas obtained in the original case or Levesque and Waldschmidt obtained in the once-twisted ($t = 0$) case; namely, that non-trivial solutions can only appear in equations where the parameters are small. We confirm this conjecture, provided that the absolute values of the exponents $s, t$ are not too large compared to the base parameter $n$.
Rotational velocity, lithium abundance, and the mass depth of the outer convective zone are key parameters in the study of the processes at work in stellar interiors, in particular the poorly understood processes operating in the interiors of solar-analog stars. We investigate whether the large dispersion in the observed lithium abundances of solar-analog stars can be explained by the depth behavior of the outer convective zone masses, within the framework of the standard convection model based on the local mixing-length theory. We also aim to analyze the link between rotation and lithium abundance in solar-analog stars. We computed a new extensive grid of stellar evolutionary models, applicable to solar-analog stars, for a finely discretized set of masses and metallicities. From these models, the stellar mass, age, and mass depth of the outer convective zone were estimated for 117 solar-analog stars, using Teff and [Fe/H] available in the literature and the new HIPPARCOS trigonometric parallax measurements. We determine the age and mass of the outer convective zone for this bona fide sample of 117 solar-analog stars. No significant one-to-one correlation is found between the computed convection zone mass and published lithium abundance, indicating that the large A(Li) dispersion in solar analogs cannot be explained within the classical framework of envelope convective mixing coupled with lithium depletion at the bottom of the convection zone. These results illustrate the need for an extra-mixing process to explain lithium behavior in solar-analog stars, such as shear mixing caused by differential rotation. To derive a more realistic definition of solar-analog stars, as well as of solar twins, it seems important to consider the inner physical properties of stars, such as convection, and hence rotation and magnetic properties.
The corona splash due to the impact of a liquid drop on a smooth dry substrate is investigated with high-speed photography. A striking phenomenon is observed: splashing can be completely suppressed by decreasing the pressure of the surrounding gas. The threshold pressure at which a splash first occurs is measured as a function of the impact velocity and found to scale with the molecular weight of the gas and the viscosity of the liquid. Both experimental scaling relations support a model in which compressible effects in the gas are responsible for splashing in liquid-solid impacts.
To examine the evolution of the early-type galaxy population in the rich cluster Abell 2390 at z=0.23, we have obtained spectroscopic data for 51 elliptical and lenticular galaxies with MOSCA at the 3.5 m telescope of the Calar Alto Observatory. This investigation spans both a broad range in luminosity (-19.3>M_B>-22.3) and a wide field of view of 10'x10'; therefore, the environmental dependence of different formation scenarios can be analysed in detail as a function of radius from the cluster centre. Here we present results on the surface brightness modelling of the subsample of galaxies for which morphological and structural information is available from imaging in the F814W filter aboard the Hubble Space Telescope (HST), and we investigate the evolution of the Fundamental Plane for this subsample.
If an artificial intelligence aims to maximise risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk. Even if the proportion ${\eta}$ of available unethical strategies is small, the probability ${p_U}$ of picking an unethical strategy can become large; indeed unless returns are fat-tailed ${p_U}$ tends to unity as the strategy space becomes large. We define an Unethical Odds Ratio Upsilon (${\Upsilon}$) that allows us to calculate ${p_U}$ from ${\eta}$, and we derive a simple formula for the limit of ${\Upsilon}$ as the strategy space becomes large. We give an algorithm for estimating ${\Upsilon}$ and ${p_U}$ in finite cases and discuss how to deal with infinite strategy spaces. We show how this principle can be used to help detect unethical strategies and to estimate ${\eta}$. Finally we sketch some policy implications of this work.
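As a toy illustration of the quantities involved, the following Monte Carlo sketch estimates the probability that the best-looking strategy is unethical (our own illustrative simulation, not the paper's algorithm; the return distribution, the additive "edge" given to unethical strategies, and all numerical values are made-up assumptions):

```python
import numpy as np

def estimate_p_u(n_strategies=10_000, eta=0.02, edge=0.5, trials=2_000, seed=0):
    """Toy Monte Carlo estimate of p_U, the chance the return-maximising pick is unethical.

    A fraction eta of the strategies is unethical; their apparent risk-adjusted return
    receives an additive 'edge' because the ethical risk is not priced into the objective.
    """
    rng = np.random.default_rng(seed)
    n_unethical = int(eta * n_strategies)
    hits = 0
    for _ in range(trials):
        returns = rng.standard_normal(n_strategies)   # thin-tailed returns
        returns[:n_unethical] += edge                 # unpriced benefit of unethical strategies
        hits += int(returns.argmax() < n_unethical)   # did the maximiser pick an unethical one?
    return hits / trials

print(estimate_p_u())   # typically much larger than eta = 0.02
```

Even with a modest edge, the estimated p_U comes out far above the raw fraction eta, which is the qualitative effect described above.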
We investigate the survival of an initial momentum anisotropy (${v}_2^{ini}$), as opposed to a spatial anisotropy, into the final state in a multi-phase transport (AMPT) model in Au+Au collisions at $\sqrt{s_{NN}}$=200~GeV. It is found that both the final-state parton and charged hadron $v_2$ show a linear dependence on $v_2^{ini}\{\rm PP\}$ with respect to the participant plane (PP). The slope of this linear dependence (referred to as the survival rate) increases with transverse momentum ($p_T$), reaching~$\sim$100\% at $p_T$$\sim$2.5 GeV/c for both partons and charged hadrons. The survival rate decreases with collision centrality and energy, indicating a decreasing survival rate with increasing interactions. It is further found that a $v_2^{ini}\{\rm Rnd\}$ with respect to a random direction does not survive in $v_2\{\rm PP\}$ but does survive in the two-particle cumulant $v_2\{2\}$. The dependence of $v_2\{2\}$ on $v_2^{ini}\{\rm Rnd\}$ is quadratic rather than linear.
In the realm of Boltzmann-Gibbs statistical mechanics there are three well known isomorphic connections with random geometry, namely (i) the Kasteleyn-Fortuin theorem which connects the $\lambda \to 1$ limit of the $\lambda$-state Potts ferromagnet with bond percolation, (ii) the isomorphism which connects the $\lambda \to 0$ limit of the $\lambda$-state Potts ferromagnet with random resistor networks, and (iii) the de Gennes isomorphism which connects the $n \to 0$ limit of the $n$-vector ferromagnet with self-avoiding random walk in linear polymers. We provide here strong numerical evidence that a similar isomorphism appears to emerge connecting the energy $q$-exponential distribution $\propto e_q^{-\beta_q \varepsilon}$ (with $q=4/3$ and $\beta_q \omega_0 =10/3$) optimizing, under simple constraints, the nonadditive entropy $S_q$ with a specific geographic growth random model based on preferential attachment through exponentially-distributed weighted links, $\omega_0$ being the characteristic weight.
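For completeness, recall the standard definition of the q-exponential from nonextensive statistical mechanics (background notation, not a result of the work above):
\[ e_q^{x} \;=\; \bigl[\,1+(1-q)\,x\,\bigr]_{+}^{1/(1-q)}, \qquad e_1^{x}=e^{x}, \]
so that the energy distribution $\propto e_q^{-\beta_q \varepsilon}$ decays asymptotically as the power law $\varepsilon^{-1/(q-1)}$; for $q=4/3$ this is $\varepsilon^{-3}$.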
We prove that the expression of the extended Gibrat's law is unique and that the probability distribution function (pdf) is uniquely derived from the law of detailed balance together with the extended Gibrat's law. The proof employs two approximations: that the pdf of the growth rate is described by tent-shaped exponential functions, and that the value of the pdf at the origin of the growth rate is constant. These approximations are confirmed in profits data of Japanese companies in 2003 and 2004. The resulting profits pdf fits the empirical data with high accuracy. This guarantees the validity of the approximations.
Joint models for longitudinal and time-to-event data constitute an attractive modeling framework that has received a lot of interest in recent years. This paper presents the capabilities of the R package JMbayes for fitting these models under a Bayesian approach using Markov chain Monte Carlo algorithms. JMbayes can fit a wide range of joint models, including among others joint models for continuous and categorical longitudinal responses, and provides several options for modeling the association structure between the two outcomes. In addition, this package can be used to derive dynamic predictions for both outcomes, and offers several tools to validate these predictions in terms of discrimination and calibration. All these features are illustrated using a real data example on patients with primary biliary cirrhosis.
We present a generalized Landau-Brazovskii free energy for the solidification of chiral molecules on a spherical surface in the context of the assembly of viral shells. We encounter two types of icosahedral solidification transitions. The first type is a conventional first-order phase transition from the uniform to the icosahedral state. It can be described by a single icosahedral spherical harmonic of even $l$. The chiral pseudo-scalar term in the free energy creates secondary terms with chiral character but it does not affect the thermodynamics of the transition. The second type, associated with icosahedral spherical harmonics with odd $l$, is anomalous. Pure odd $l$ icosahedral states are unstable but stability is recovered if admixture with the neighboring $l+1$ icosahedral spherical harmonic is included, generated by the non-linear terms. This is in conflict with the principle of Landau theory that symmetry-breaking transitions are characterized by only a \textit{single} irreducible representation of the symmetry group of the uniform phase and we argue that this principle should be removed from Landau theory. The chiral term now directly affects the transition because it lifts the degeneracy between two isomeric mixed-$l$ icosahedral states. A direct transition is possible only over a limited range of parameters. Outside this range, non-icosahedral states intervene. For the important case of capsid assembly dominated by $l=15$, the intervening states are found to be based on octahedral symmetry.
We investigate the use of high-dimensional quantum key distribution (HD-QKD) in wireless access to hybrid quantum classical networks. We study the distribution of d-dimensional time-phase encoded states between an indoor wireless user and the central office on the other end of the access network. We evaluate the performance in the case of transmitting quantum and classical signals over the same channel by accounting for the impact of background noise induced by the Raman-scattered light on the QKD receiver. We also take into account the loss and background noise that occur in indoor environments as well as finite key effects in our analysis. We show that an HD-QKD system with d = 4 can outperform its qubit-based counterpart.
We study the interior regularity of solutions to the Dirichlet problem $Lu=g$ in $\Omega$, $u=0$ in $\R^n\setminus\Omega$, for anisotropic operators of fractional type $$ Lu(x)= \int_{0}^{+\infty}\,d\rho \int_{S^{n-1}}\,da(\omega)\, \frac{ 2u(x)-u(x+\rho\omega)-u(x-\rho\omega)}{\rho^{1+2s}}.$$ Here, $a$ is any measure on~$S^{n-1}$ (a prototype example for~$L$ is given by the sum of one-dimensional fractional Laplacians in fixed, given directions). When $a\in C^\infty(S^{n-1})$ and $g$ is $C^\infty(\Omega)$, solutions are known to be $C^\infty$ inside~$\Omega$ (but not up to the boundary). However, when $a$ is a general measure, or even when $a$ is $L^\infty(S^{n-1})$, solutions are only known to be $C^{3s}$ inside $\Omega$. We prove here that, for general measures $a$, solutions are $C^{1+3s-\epsilon}$ inside $\Omega$ for all $\epsilon>0$ whenever $\Omega$ is convex. When $a\in L^{\infty}(S^{n-1})$, we show that the same holds in all $C^{1,1}$ domains. In particular, solutions always possess a classical first derivative. The assumptions on the domain are sharp, since if the domain is not convex and the spectral measure is singular, we construct an explicit counterexample for which $u$ is \emph{not} $C^{3s+\epsilon}$ for any $\epsilon>0$ -- even if $g$ and $\Omega$ are $C^\infty$.
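To make the prototype example mentioned above explicit (stated up to a positive normalizing constant, which we suppress): choosing the spectral measure to be a sum of Dirac masses at the coordinate directions, $a=\sum_{i=1}^{n}\delta_{e_i}$, reduces $L$ to a sum of one-dimensional fractional Laplacians acting along those directions,
\[ Lu(x) \;=\; \sum_{i=1}^{n}\int_{0}^{+\infty} \frac{2u(x)-u(x+\rho e_i)-u(x-\rho e_i)}{\rho^{1+2s}}\,d\rho \;=\; c\,\sum_{i=1}^{n}\bigl(-\partial^{2}_{x_i x_i}\bigr)^{s}u(x), \]
with $c=c(s)>0$.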
The extremely precise extraction of the proton radius by Pohl et al. from the measured energy difference between the 2P and 2S states of muonic hydrogen disagrees significantly with that extracted from electronic hydrogen or elastic electron-proton scattering. This is the proton radius puzzle. The origins of the puzzle and the reasons for believing it to be very significant are explained. Various possible solutions of the puzzle are identified, and future work needed to resolve the puzzle is discussed.
We analyze the effective action describing the linearised gravitational self-action of a classical superconducting string in a curved spacetime. It is shown that the divergent part of the effective action vanishes for both the Nambu-Goto and the chiral superconducting string.
The use of propagandistic techniques in online contents has increased in recent years aiming to manipulate online audiences. Efforts to automatically detect and debunk such content have been made addressing various modeling scenarios. These include determining whether the content (text, image, or multimodal) (i) is propagandistic, (ii) employs one or more propagandistic techniques, and (iii) includes techniques with identifiable spans. Significant research efforts have been devoted to the first two scenarios compared to the latter. Therefore, in this study, we focus on the task of detecting propagandistic textual spans. Specifically, we investigate whether large language models (LLMs), such as GPT-4, can effectively perform the task. Moreover, we study the potential of employing the model to collect more cost-effective annotations. Our experiments use a large-scale in-house dataset consisting of annotations from human annotators with varying expertise levels. The results suggest that providing more information to the model as prompts improves its performance compared to human annotations. Moreover, our work is the first to show the potential of utilizing LLMs to develop annotated datasets for this specific task, prompting it with annotations from human annotators with limited expertise. We plan to make the collected span-level labels from multiple annotators, including GPT-4, available for the community.
Conventional terahertz time-domain spectroscopy (THz-TDS) based on photoconductive antennas (PCAs) needs two separate PCA chips. One PCA works as an emitter, and the other works as a receiver. For a reflection-type measurement, the technique called 'attenuated total reflection' is usually needed to enhance the reflection sensitivity. These requirements make the system bulky and complicated for reflection-type measurements. In this paper, we propose a novel THz-TDS endoscope that is specifically designed for reflection-type measurement. This THz-TDS endoscope benefits from an integrated photoconductive antenna (which we call iPCA) that integrates the emitter and receiver on a single antenna chip. Therefore, the dimensions of the endoscope can be kept as small as possible for practical use. We present the design and working principle of this THz-TDS endoscope in detail. It may open a promising route for THz-TDS applications in biomedical fields.
This work develops non-asymptotic theory for estimation of the long-run variance matrix and its inverse, the so-called precision matrix, for high-dimensional time series under general assumptions on the dependence structure including long-range dependence. The estimation involves shrinkage techniques which are thresholding and penalizing versions of the classical multivariate local Whittle estimator. The results ensure consistent estimation in a double asymptotic regime where the number of component time series is allowed to grow with the sample size as long as the true model parameters are sparse. The key technical result is a concentration inequality of the local Whittle estimator for the long-run variance matrix around the true model parameters. In particular, it handles simultaneously the estimation of the memory parameters which enter the underlying model. Novel algorithms for the considered procedures are proposed, and a simulation study and a data application are also provided.
This survey of alternating permutations and Euler numbers includes refinements of Euler numbers, other occurrences of Euler numbers, longest alternating subsequences, umbral enumeration of classes of alternating permutations, and the cd-index of the symmetric group.
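For orientation (standard background facts, not results of the survey itself): the Euler number $E_n$ counts the alternating permutations of $\{1,\dots,n\}$, and these numbers are generated by
\[ \sum_{n\ge 0} E_n\,\frac{x^{n}}{n!} \;=\; \sec x + \tan x, \qquad (E_n)_{n\ge 0}=1,\,1,\,1,\,2,\,5,\,16,\,61,\,272,\dots \]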
Direct photon production is an important process at hadron colliders, being relevant both for precision measurement of the gluon density and as a background to Higgs and other new physics searches. Here we explore the implications of recently derived results for high-energy resummation of direct photon production for the interpretation of measurements at the Tevatron and the LHC. The effects of resummation are compared to various sources of theoretical uncertainty such as PDF and scale variations. We show how the high-energy resummation procedure stabilizes the logarithmic enhancement of the cross section at high energy, which is present at any fixed order in the perturbative expansion starting at NNLO. The effects of high-energy resummation are found to be negligible at the Tevatron, while they enhance the cross section by a few percent for $p_T \lesssim 10$ GeV at the LHC. Our results imply that the discrepancy at small $p_T$ between fixed-order NLO predictions and Tevatron data cannot be explained by unresummed high-energy contributions.
The folding rates of two-state proteins have been found to correlate with simple measures of native-state topology. The most prominent among these measures is the relative contact order (CO), which is the average CO or 'localness' of all contacts in the native protein structure, divided by the chain length. Here, we test whether such measures can be generalized to capture the effect of chain crosslinks on the folding rate. Crosslinks change the chain connectivity and therefore also the localness of some of the native contacts. These changes in localness can be taken into account by the graph-theoretical concept of effective contact order (ECO). The relative ECO, however, the natural extension of the relative CO for proteins with crosslinks, overestimates the changes in the folding rates caused by crosslinks. We suggest here a novel measure of native-state topology, the relative logCO, and its natural extension, the relative logECO. The relative logCO is the average value of the logarithm of the CO of all contacts, divided by the logarithm of the chain length. The relative log(E)CO reproduces the folding rates of a set of 26 two-state proteins without crosslinks with essentially the same high correlation coefficient as the relative CO. In addition, it also captures the folding rates of 8 two-state proteins with crosslinks.
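A minimal sketch of how the two topology measures defined above could be computed from a native contact list (our own illustrative implementation of the stated definitions, not the authors' code; contact-definition details such as distance cutoffs are omitted):

```python
import math

def relative_co(contacts, chain_length):
    """Relative contact order: mean sequence separation of native contacts / chain length."""
    seps = [abs(i - j) for i, j in contacts]
    return sum(seps) / (len(seps) * chain_length)

def relative_log_co(contacts, chain_length):
    """Relative logCO: mean log of the sequence separations / log of the chain length."""
    logs = [math.log(abs(i - j)) for i, j in contacts]
    return sum(logs) / (len(logs) * math.log(chain_length))

# Example: a toy contact list (pairs of residue indices) for a 64-residue chain.
contacts = [(1, 5), (2, 40), (10, 55), (20, 23)]
print(relative_co(contacts, 64), relative_log_co(contacts, 64))
```

For a crosslinked protein, the same functions would be applied with each contact's sequence separation replaced by its effective contact order along the modified connectivity graph.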
Atutov and Shalagin (1988) proposed light-induced drift (LID) as a readily understandable physical mechanism to explain the formation of the isotopic anomalies observed in CP stars. We generalize the theory of LID and apply it to the diffusion of heavy elements and their isotopes in the quiescent atmospheres of CP stars. The diffusional segregation of isotopes of chemical elements is described by the equations of continuity and diffusion velocity. Computations of the evolutionary sequences for the abundances of mercury isotopes in several model atmospheres have been made using the Fortran 90 program SMART, written by the authors. The results confirm the predominant role of LID in the separation of isotopes.
We are concerned with the convergence of a numerical scheme for the initial-boundary value problem associated with the Korteweg-de Vries-Kawahara equation (in short, the Kawahara equation), which is a transport equation perturbed by dispersive terms of third and fifth order. This equation appears in several fluid dynamics problems; it describes the evolution of small but finite amplitude long waves in various problems in fluid dynamics. We prove here the convergence of both semi-discrete and fully discrete finite difference schemes for the Kawahara equation. Finally, the convergence is illustrated by several examples.
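For orientation, the Kawahara equation is commonly written (up to the normalization and signs of the coefficients, which vary between references) as
\[ u_t + u\,u_x + \alpha\,u_{xxx} + \beta\,u_{xxxxx} = 0, \]
i.e. a nonlinear transport term perturbed by third- and fifth-order dispersion, consistent with the description above.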
We describe a maximum-likelihood technique for the removal of contaminating radio sources from interferometric observations of the Sunyaev-Zel'dovich (SZ) effect. This technique, based on a simultaneous fit for the radio sources and the extended SZ emission, is also compared to techniques previously applied to Ryle Telescope observations and is found to be robust. The technique is then applied to new observations of the cluster Abell 611, and a decrement of -540 +/- 125 microJy/beam is found. This is combined with a ROSAT HRI image and a published ASCA temperature to give a Hubble constant estimate of 52 +24/-16 km/s/Mpc.
In this work, we study a class of deception planning problems in which an agent aims to alter a security monitoring system's sensor readings so as to disguise its adversarial itinerary as an allowed itinerary in the environment. The adversarial itinerary set and the allowed itinerary set are captured by regular languages. To deviate without being detected, we investigate whether there exists a strategy for the agent to alter the sensor readings, at minimal cost, such that for any of the paths it takes, the system believes the agent took a path within the allowed itinerary. Our formulation assumes an offline sensor alteration in which the agent determines the sensor alteration strategy, implements it, and then carries out any path in its deviation itinerary. We prove that the problem of computing the optimal sensor alteration is NP-hard, by a reduction from the directed multi-cut problem. Further, we present an exact algorithm based on integer linear programming and demonstrate the correctness and efficacy of the algorithm in case studies.
This paper discusses the hardness of finding minimal good-for-games (GFG) Buchi, Co-Buchi, and parity automata with state-based acceptance. The problem appears to sit between finding small deterministic and finding small nondeterministic automata, where minimality is NP-complete and PSPACE-complete, respectively. However, recent work of Radi and Kupferman has shown that minimising Co-Buchi automata with transition-based acceptance is tractable, which suggests that the complexity of minimising GFG automata might be lower than that of minimising deterministic automata. We show that, for the standard state-based acceptance, minimality is NP-complete for Buchi, Co-Buchi, and parity GFG automata. The proofs are a surprisingly straightforward generalisation of the proofs for deterministic Buchi automata: they use similar reductions and the same hard class of languages.
Speech-driven 3D face animation aims to generate realistic facial expressions that match the speech content and emotion. However, existing methods often neglect emotional facial expressions or fail to disentangle them from speech content. To address this issue, this paper proposes an end-to-end neural network to disentangle different emotions in speech so as to generate rich 3D facial expressions. Specifically, we introduce the emotion disentangling encoder (EDE) to disentangle the emotion and content in the speech by cross-reconstructed speech signals with different emotion labels. Then an emotion-guided feature fusion decoder is employed to generate a 3D talking face with enhanced emotion. The decoder is driven by the disentangled identity, emotional, and content embeddings so as to generate controllable personal and emotional styles. Finally, considering the scarcity of the 3D emotional talking face data, we resort to the supervision of facial blendshapes, which enables the reconstruction of plausible 3D faces from 2D emotional data, and contribute a large-scale 3D emotional talking face dataset (3D-ETF) to train the network. Our experiments and user studies demonstrate that our approach outperforms state-of-the-art methods and exhibits more diverse facial movements. We recommend watching the supplementary video: https://ziqiaopeng.github.io/emotalk
Bregman divergences play a central role in the design and analysis of a range of machine learning algorithms. This paper explores the use of Bregman divergences to establish reductions between such algorithms and their analyses. We present a new scaled isodistortion theorem involving Bregman divergences (scaled Bregman theorem for short) which shows that certain "Bregman distortions" (employing a potentially non-convex generator) may be exactly re-written as a scaled Bregman divergence computed over transformed data. Admissible distortions include geodesic distances on curved manifolds and projections or gauge-normalisation, while admissible data include scalars, vectors and matrices. Our theorem allows one to leverage the wealth and convenience of Bregman divergences when analysing algorithms relying on the aforementioned Bregman distortions. We illustrate this with three novel applications of our theorem: a reduction from multi-class density ratio to class-probability estimation, a new adaptive projection-free yet norm-enforcing dual norm mirror descent algorithm, and a reduction from clustering on flat manifolds to clustering on curved manifolds. Experiments on each of these domains validate the analyses and suggest that the scaled Bregman theorem might be a worthy addition to the popular handful of Bregman divergence properties that have been pervasive in machine learning.
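As a reminder of the underlying definition (standard, and not itself a contribution of the paper): for a convex differentiable generator $\varphi$, the Bregman divergence between $x$ and $y$ is
\[ D_{\varphi}(x,y) \;=\; \varphi(x) - \varphi(y) - \langle \nabla\varphi(y),\, x-y\rangle, \]
which recovers the squared Euclidean distance for $\varphi(x)=\tfrac{1}{2}\|x\|^{2}$ and the Kullback-Leibler divergence for the negative Shannon entropy.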
We carry out a delay stability analysis (i.e., determine conditions under which expected steady-state delays at a queue are finite) for a simple 3-queue system operated under the Max-Weight scheduling policy, for the case where one of the queues is fed by heavy-tailed traffic (i.e, when the number of arrivals at each time slot has infinite second moment). This particular system exemplifies an intricate phenomenon whereby heavy-tailed traffic at one queue may or may not result in the delay instability of another queue, depending on the arrival rates. While the ordinary stability region (in the sense of convergence to a steady-state distribution) is straightforward to determine, the determination of the delay stability region is more involved: (i) we use "fluid-type" sample path arguments, combined with renewal theory, to prove delay instability outside a certain region; (ii) we use a piecewise linear Lyapunov function to prove delay stability in the interior of that same region; (iii) as an intermediate step in establishing delay stability, we show that the expected workload of a stable M/GI/1 queue scales with time as $\mathcal{O}(t^{1/(1+\gamma)})$, assuming that service times have a finite $1+\gamma$ moment, where $\gamma \in (0,1)$.
New physics close to the electroweak scale is well motivated by a number of theoretical arguments. However, colliders, most notably the Large Hadron Collider (LHC), have failed to deliver evidence for physics beyond the Standard Model. One possibility for how new electroweak-scale particles could have evaded detection so far is if they carry only electroweak charge, i.e. are color neutral. Future $e^+e^-$ colliders are prime tools to study such new physics. Here, we investigate the sensitivity of $e^+e^-$ colliders to scalar partners of the charged leptons, known as sleptons in supersymmetric extensions of the Standard Model. In order to allow such scalar lepton partners to decay, we consider models with an additional neutral fermion, which in supersymmetric models corresponds to a neutralino. We demonstrate that future $e^+e^-$ colliders would be able to probe most of the kinematically accessible parameter space, i.e. where the mass of the scalar lepton partner is less than half of the collider's center-of-mass energy, with only a few days of data. Besides constraining more general models, this would allow one to probe some well-motivated dark matter scenarios in the Minimal Supersymmetric Standard Model, in particular the incredible bulk and stau co-annihilation scenarios.
A new matrix representation for low-energy limit of heterotic string theory reduced to three dimensions is considered. The pair of matrix Ernst Potentials uniquely connected with the coset matrix is derived. The action of the symmetry group on the Ernst potentials is established.
We study some consequences of dimensionally reducing systems with massless fermions and Abelian gauge fields from 3+1 to 2+1 dimensions. We first consider fermions in the presence of an external Abelian gauge field. In the reduced theory, obtained by compactifying one of the coordinates à la Kaluza-Klein, magnetic flux strings are mapped into domain wall defects. Fermionic zero modes, localized around the flux strings of the 3+1 dimensional theory, also become zero modes in the reduced theory, via the Callan and Harvey mechanism, and are concentrated around the domain wall defects. We also study a dynamical model: massless $QED_4$, with fermions confined to a plane, deriving the effective action that describes the `planar' system.
We relate the cardinality of the $p$-primary part of the Bloch-Kato Selmer group over $\mathbb{Q}$ attached to a modular form at a non-ordinary prime $p$ to the constant term of the characteristic power series of the signed Selmer groups over the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$. This generalizes a result of Vigni and Longo in the ordinary case. In the case of elliptic curves, such results follow from earlier works by Greenberg, Kim, the second author, and Ahmed-Lim, covering both the ordinary and most of the supersingular case.
Quantum mechanics can speed up a range of search applications over unsorted data. For example, imagine a phone directory containing N names arranged in completely random order. To find someone's phone number with a probability of 50%, any classical algorithm (whether deterministic or probabilistic) will need to access the database a minimum of O(N) times. Quantum mechanical systems can be in a superposition of states and simultaneously examine multiple names. By properly adjusting the phases of various operations, successful computations reinforce each other while others interfere randomly. As a result, the desired phone number can be obtained in only O(sqrt(N)) accesses to the database.
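A back-of-the-envelope comparison of the query counts described above (the 50% classical figure follows the text; the (pi/4)*sqrt(N) iteration count is the standard Grover estimate for a single marked item, quoted here only for illustration):

```python
import math

def classical_queries(n, success=0.5):
    """Expected database accesses for a classical search to reach the given success probability."""
    return math.ceil(success * n)

def grover_queries(n):
    """Approximate Grover iterations to find a single marked item with high probability."""
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (10**4, 10**6, 10**8):
    print(n, classical_queries(n), grover_queries(n))
```

For a million entries this works out to roughly 500,000 classical accesses versus under a thousand quantum queries.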
In this paper, we present a global complexity analysis of the classical BFGS method with inexact line search, as applied to minimizing a strongly convex function with Lipschitz continuous gradient and Hessian. We consider a variety of standard line search strategies, including the backtracking line search based on the Armijo condition and the Armijo-Goldstein and Wolfe-Powell line searches. Our analysis suggests that the convergence of the algorithm proceeds in several different stages before the fast superlinear convergence actually begins. Furthermore, when the initial point is far from the minimizer, superlinear convergence may set in only after a large number of iterations. We show, however, that this drawback can be easily rectified by using a simple restarting procedure.
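A compact sketch of the kind of method analysed above, classical BFGS with an Armijo backtracking line search (a textbook-style illustration, not the paper's code; the tolerances, constants, and test problem are arbitrary placeholders):

```python
import numpy as np

def bfgs_armijo(f, grad, x0, c1=1e-4, shrink=0.5, tol=1e-8, max_iter=500):
    """Classical BFGS with an Armijo (sufficient-decrease) backtracking line search."""
    n = x0.size
    H = np.eye(n)                          # inverse-Hessian approximation
    x, g = x0.astype(float), grad(x0)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                         # quasi-Newton search direction
        t = 1.0
        while f(x + t * p) > f(x) + c1 * t * (g @ p):   # Armijo backtracking
            t *= shrink
        x_new = x + t * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                     # curvature safeguard before the BFGS update
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: minimise a strongly convex quadratic.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
print(bfgs_armijo(f, grad, np.array([5.0, -3.0])))
```

The staged behaviour discussed above would show up here as an initial phase where the unit step is rejected and the inverse-Hessian approximation is still poor, followed by a phase where full steps are accepted and convergence accelerates.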