- Explanation of the 95 GeV $γγ$ and $b\bar{b}$ excesses in the Minimal Left-Right Symmetric Model We propose a simple interpretation of the $γγ$ excesses reported by both the CMS and ATLAS collaborations at 95 GeV, together with the LEP excess in the $Zb\bar{b}$ channel around the same mass, in terms of a neutral scalar field in the minimal left-right symmetric model (LRSM). We point out that the scalar field which implements the seesaw mechanism for neutrino masses has all the right properties to explain these observations, without introducing any extra scalar fields. The key point is that this scalar particle is hardly constrained because it couples only to heavy right-handed particles. As a result, the diphoton decay mode receives contributions from both mixing with the Standard Model (SM) Higgs and the heavy charged bosons in the LRSM, depending on the SU(2)_R × U(1)_{B-L} symmetry-breaking scale v_R. The complete allowed parameter space for explaining the 95 GeV excesses in this model can be probed with high-precision measurements of the SM Higgs mixing with other scalars at the high-luminosity LHC and future Higgs factories. 3 authors · Dec 29, 2023
- On the Higgs spectra of the 3-3-1 model with the sextet of scalars engendering the type II seesaw mechanism In the 3-3-1 model with right-handed neutrinos, three triplets of scalars engender the correct sequence of symmetry breaking, SU(3)_C × SU(3)_L × U(1)_X → SU(3)_C × SU(2)_L × U(1)_Y → SU(3)_C × U(1)_{EM}, generating mass for all fermions except neutrinos. Tiny neutrino masses may be achieved by adding one sextet of scalars to the original scalar content. As a consequence, a very complex scalar sector emerges, involving terms that violate lepton number explicitly. The main obstacle to developing the phenomenology of such a scenario is knowledge of its scalar spectrum, since there are now 15 massive scalar particles in it. The purpose of this work is an exhaustive analysis of this scalar sector with lepton number explicitly violated at low, electroweak, and high energy scales by means of trilinear terms in the potential. The first case can be addressed analytically and, as a nice result, we observe that its scalar content splits into two categories: one belonging to the 331 energy scale and the other to the EWSB energy scale, with the latter recovering the well-known THDM+triplet. The other cases can be addressed only numerically. Hence, we propose a very general approach to the numerical study of the potential, avoiding simplifications that could lead to unfounded conclusions. We show that, when lepton number is explicitly violated at the electroweak scale, it is possible to recover the same physics as the THDM+triplet of the previous case. Among all the possibilities, we call attention to one special case which generates the 3HDM+triplet scenario. For the last case, when lepton number is violated at a high energy scale, the sextet becomes very massive and decouples from the original scalar content of the 3-3-1 model. 2 authors · Dec 20, 2022
- Breaking an Abelian gauge symmetry near a black hole horizon I argue that coupling the Abelian Higgs model to gravity plus a negative cosmological constant leads to black holes which spontaneously break the gauge invariance via a charged scalar condensate slightly outside their horizon. This suggests that black holes can superconduct. 1 author · Jan 18, 2008
- Single replica spin-glass phase detection using field variation and machine learning The Sherrington-Kirkpatrick spin-glass model was originally analyzed with the replica symmetry method to find the phase transition of the system. In 1979-1980, Parisi proposed a solution based on replica symmetry breaking (RSB), which allowed him to identify the underlying phases of complex systems such as spin glasses. Regardless of the method used for detection, the intrinsic phase of a system exists whether or not replicas are considered. We introduce a single-replica method of spin-glass phase detection using the variation of the field experienced by each spin in a system configuration. This method focuses on a single replica with quenched random couplings. Each spin inevitably observes a field different from the others. Our results show that the mean and variance of these fields, termed the "spontaneous configurational field", are suitable indicators for exploring the ferromagnetic, paramagnetic, and mixed phases. To classify the phases of the system with the defined indicators, we develop a machine-learning-based algorithm to analyze the desired samples. 4 authors · Nov 7, 2024
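The field-per-spin indicator described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the SK-style Gaussian couplings, system size, and seed are all assumptions made for the example.

```python
import random
import statistics

def local_fields(J, spins):
    """Field h_i = sum_{j != i} J[i][j] * s_j felt by each spin in one replica."""
    n = len(spins)
    return [sum(J[i][j] * spins[j] for j in range(n) if j != i) for i in range(n)]

random.seed(0)
n = 50
# Quenched Gaussian couplings with variance ~ 1/n, as in the SK model
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = random.gauss(0.0, 1.0 / n ** 0.5)

spins = [random.choice([-1, 1]) for _ in range(n)]
fields = local_fields(J, spins)
# Mean and variance of the "spontaneous configurational field" are the
# phase indicators; per-configuration they would feed the classifier
mu, var = statistics.mean(fields), statistics.variance(fields)
```

In the paper's setting, these two numbers computed over many sampled configurations become the features on which the machine-learning classifier separates ferromagnetic, paramagnetic, and mixed phases.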
- Kibble-Zurek Mechanism and Beyond: Lessons from a Holographic Superfluid Disk The superfluid phase transition dynamics and associated spontaneous vortex formation with the crossing of the critical temperature in a disk geometry is studied in the framework of the AdS/CFT correspondence by solving the Einstein-Abelian-Higgs model in an AdS_4 black hole. For a slow quench, the vortex density admits a universal scaling law with the cooling rate as predicted by the Kibble-Zurek mechanism (KZM), while for fast quenches, the density shows a universal scaling behavior as a function of the final temperature, that lies beyond the KZM prediction. The vortex number distribution in both the power-law and saturation regimes can be approximated by a normal distribution. However, the study of the universal scaling of the cumulants reveals non-normal features and indicates that vortex statistics in the newborn superfluid is best described by the Poisson binomial distribution, previously predicted in the KZM regime [Phys. Rev. Lett. 124, 240602 (2020)]. This is confirmed by studying the cumulant scalings as a function of the quench time and the quench depth. Our work supports the existence of a universal defect number distribution that accommodates the KZM scaling, its breakdown at fast quenches, and the additional universal scaling laws as a function of the final value of the control parameter. 4 authors · Jun 7, 2024
- Gravity/Spin-model correspondence and holographic superfluids We propose a general correspondence between gravity and spin models, inspired by the well-known IR equivalence between lattice gauge theories and spin models. This suggests a connection between continuous Hawking-type phase transitions in gravity and continuous order-disorder transitions in ferromagnets. The black-hole phase corresponds to the ordered phase and the graviton gas to the disordered phase. A simple set-up based on Einstein-dilaton gravity indicates that the vicinity of the phase transition is governed by a linear-dilaton CFT. Employing this CFT we calculate the scaling of observables near T_c, and obtain mean-field scaling in a semi-classical approximation. In the case of the XY model, the Goldstone mode is identified with the zero mode of the NS-NS two-form. We show that the second speed of sound also vanishes at the transition with the mean-field exponent. 1 author · Jul 27, 2010
- Holographic quantum criticality from multi-trace deformations We explore the consequences of multi-trace deformations in applications of gauge-gravity duality to condensed matter physics. We find that they introduce a powerful new "knob" that can implement spontaneous symmetry breaking, and can be used to construct a new type of holographic superconductor. This knob can be tuned to drive the critical temperature to zero, leading to a new quantum critical point. We calculate nontrivial critical exponents, and show that fluctuations of the order parameter are `locally' quantum critical in the disordered phase. Most notably the dynamical critical exponent is determined by the dimension of an operator at the critical point. We argue that the results are robust against quantum corrections and discuss various generalizations. 3 authors · Aug 9, 2010
- Condensed matter and AdS/CFT I review two classes of strong coupling problems in condensed matter physics, and describe insights gained by application of the AdS/CFT correspondence. The first class concerns non-zero temperature dynamics and transport in the vicinity of quantum critical points described by relativistic field theories. I describe how relativistic structures arise in models of physical interest, present results for their quantum critical crossover functions and magneto-thermoelectric hydrodynamics. The second class concerns symmetry breaking transitions of two-dimensional systems in the presence of gapless electronic excitations at isolated points or along lines (i.e. Fermi surfaces) in the Brillouin zone. I describe the scaling structure of a recent theory of the Ising-nematic transition in metals, and discuss its possible connection to theories of Fermi surfaces obtained from simple AdS duals. 1 author · Feb 16, 2010
- Metastable Cosmological Constant and Gravitational Bubbles: Ultra-Late-Time Transitions in Modified Gravity The observed cosmological constant may originate as the minimum value U_{min} of a scalar field potential, where the scalar field is frozen due to a large mass. If this vacuum is metastable, it may decay to a true vacuum either at present or in the future. Assuming its decay rate Γ is comparable to the Hubble expansion rate H_0, we estimate the scale of true vacuum bubbles and analyze their evolution. We find that their initial formation scale is sub-millimeter and their tension causes rapid collapse if m ≳ 1.7 × 10^{-3} eV. For smaller masses, the bubbles expand at the speed of light. We extend our analysis to scalar-tensor theories with non-minimal coupling, finding that the nucleation scale of gravitational constant bubbles remains consistent with the sub-millimeter regime of General Relativity. The critical mass scale remains around 10^{-3} eV. A theoretical estimate at redshift z_{obs} ∼ 0.01 suggests an observable bubble radius of ∼ 50 Mpc, implying a gravitational transition triggered ∼ 300 Myr ago, with a present-day size approaching 100 Mpc. Additionally, we explore mass ranges (m < 10^{-3} eV) and non-minimal coupling ξ ranges (10^{-8} eV^{2-n} - 10^{-1} eV^{2-n}) that lead to a variation ΔG/G_N within the 1%-7% range. We assume non-minimal coupling of the form F(φ) = 1/κ - ξφ^n, with κ = 8πG_N and 2 ≤ n ≤ 9. Finally, we review various proposed solutions to the Hubble tension based on local physics and/or transitions, including ultra-late-time transitional models (z ∼ 0.01), screened fifth-force mechanisms, and the Λ_sCDM model, which features a transition at z ∼ 2. We discuss observational hints supporting these scenarios and the theoretical challenges they face. 2 authors · Mar 14, 2025
- A mechanism to generate varying speed of light via Higgs-dilaton coupling: Theory and cosmological applications We allow the Higgs field Φ to interact with a dilaton field χ of the background spacetime via the coupling χ²Φ†Φ. Upon spontaneous gauge symmetry breaking, the Higgs VEV becomes proportional to χ. While traditionally this linkage is employed to make the Planck mass and particle masses dependent on χ, we present an alternative mechanism: the Higgs VEV is used to construct Planck's constant ℏ and the speed of light c. Specifically, each open set in the vicinity of a given point x^* on the spacetime manifold is equipped with a replica of the Glashow-Weinberg-Salam action operating with its own effective values ℏ_* and c_*, with ℏ_* ∝ χ^{-1/2}(x^*) and c_* ∝ χ^{1/2}(x^*), causing these "fundamental constants" to vary alongside the dynamical field χ. Moreover, in each open set around x^*, the prevailing value χ(x^*) determines the length and time scales for physical processes occurring in this region as l ∝ χ^{-1}(x^*) and τ ∝ χ^{-3/2}(x^*). This leads to an anisotropic relation τ^{-1} ∝ l^{-3/2} between the rate of clocks and the length of rods, resulting in a distinct set of novel physical phenomena. For late-time cosmology, the variation of c along the trajectory of light waves from distant supernovae toward the Earth-based observer necessitates modifications to the Lemaître redshift relation and the Hubble law. These modifications are capable of: (1) accounting for the Pantheon Catalog of SNeIa through a declining speed of light in an expanding Einstein-de Sitter universe, thus avoiding the need for dark energy; (2) revitalizing Blanchard-Douspis-Rowan-Robinson-Sarkar's CMB power spectrum analysis that bypassed dark energy [A&A 412, 35 (2003)]; and (3) resolving the H_0 tension without requiring a dynamical dark energy component. 1 author · Aug 5, 2024
- Modular versus Hierarchical: A Structural Signature of Topic Popularity in Mathematical Research Mathematical researchers, especially those in early-career positions, face critical decisions about topic specialization with limited information about the collaborative environments of different research areas. The aim of this paper is to study how the popularity of a research topic is associated with the structure of that topic's collaboration network, as observed by a suite of measures capturing organizational structure at several scales. We apply these measures to 1,938 algorithmically discovered topics across 121,391 papers sourced from arXiv metadata during the period 2020--2025. Our analysis, which controls for the confounding effects of network size, reveals a structural dichotomy--we find that popular topics organize into modular "schools of thought," while niche topics maintain hierarchical core-periphery structures centered around established experts. This divide is not an artifact of scale, but represents a size-independent structural pattern correlated with popularity. We also document a "constraint reversal": after controlling for size, researchers in popular fields face greater structural constraints on collaboration opportunities, contrary to conventional expectations. Our findings suggest that topic selection is an implicit choice between two fundamentally different collaborative environments, each with distinct implications for a researcher's career. To make these structural patterns transparent to the research community, we developed the Math Research Compass (https://mathresearchcompass.com), an interactive platform providing data on topic popularity and collaboration patterns across mathematical topics. 1 author · Jun 28, 2025
- Anomalous CMB polarization and gravitational chirality We consider the possibility that gravity breaks parity, with left and right handed gravitons coupling to matter with a different Newton's constant and show that this would affect their zero-point vacuum fluctuations during inflation. Should there be a cosmic background of gravity waves, the effect would translate into anomalous CMB polarization. Non-vanishing TB (and EB) polarization components emerge, revealing interesting experimental targets. Indeed if reasonable chirality is present a TB measurement would provide the easiest way to detect a gravitational wave background. We speculate on the theoretical implications of such an observation. 3 authors · Jun 18, 2008
- Unlock Predictable Scaling from Emergent Abilities The scientific scale-up of large language models (LLMs) necessitates a comprehensive understanding of their scaling properties. However, the existing literature on scaling properties yields only an incomplete answer: optimization loss decreases predictably as the model size increases, in line with established scaling laws; yet no scaling law for task performance has been established, and task performances are far from predictable during scaling. Task performances typically show minor gains on small models until they improve dramatically once models exceed a size threshold, exemplifying the ``emergent abilities''. In this study, we discover that small models, although they exhibit minor performance, demonstrate critical and consistent task performance improvements that are not captured by conventional evaluation strategies due to insufficient measurement resolution. To measure such improvements, we introduce PassUntil, an evaluation strategy based on massive sampling in the decoding phase. We conduct quantitative investigations into the scaling law of task performance. First, a strict task scaling law is identified, enhancing the predictability of task performance. Remarkably, we are able to predict the performance of a 2.4B model on code generation with merely 0.05% deviation before training starts. Second, underpinned by PassUntil, we observe concrete evidence of emergent abilities and ascertain that they are not in conflict with the continuity of performance improvement. Their semblance to a breakthrough arises because their scaling curve cannot be fitted by the standard scaling-law function. We then introduce a mathematical definition of emergent abilities. Through this definition, we refute a prevalent ``multi-step reasoning hypothesis'' regarding the genesis of emergent abilities and propose a new hypothesis with a satisfying fit to the observed scaling curve. 12 authors · Oct 4, 2023
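The abstract does not spell out PassUntil's estimator; the sketch below illustrates the underlying idea under stated assumptions: keep sampling until the first pass, and recover a pass rate far too small to resolve with a handful of samples per task. A Bernoulli draw stands in for "one decoded sample passes the test"; the probability and trial counts are invented.

```python
import random

def pass_until(sample_once, max_samples=10**6):
    """Draw samples until the first success; return how many were drawn.
    Averaging the stopping time over many tasks and inverting it
    estimates a tiny per-sample pass rate."""
    for k in range(1, max_samples + 1):
        if sample_once():
            return k
    return max_samples

random.seed(1)
p_true = 0.002  # stand-in for a tiny per-sample pass probability
stops = [pass_until(lambda: random.random() < p_true) for _ in range(2000)]
p_hat = len(stops) / sum(stops)  # close to p_true
```

This is why sampling "until" a pass gives the measurement resolution that fixed-size evaluation lacks: the stopping times carry information even when the pass rate is orders of magnitude below 1/k for any practical fixed k.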
- Phase transitions between Reissner-Nordstrom and dilatonic black holes in 4D AdS spacetime We study Einstein-Maxwell-dilaton gravity models in four-dimensional anti-de Sitter (AdS) spacetime which admit the Reissner-Nordstrom (RN) black hole solution. We show that below a critical temperature the AdS-RN solution becomes unstable against scalar perturbations and the gravitational system undergoes a phase transition. We show using numerical calculations that the new phase is a charged dilatonic black hole. Using the AdS/CFT correspondence we discuss the phase transition in the dual field theory both for non-vanishing temperatures and in the extremal limit. The extremal solution has a Lifshitz scaling symmetry. We discuss the optical conductivity in the new dual phase and find interesting behavior at low frequencies where it shows a "Drude peak". The resistivity varies with temperature in a non-monotonic way and displays a minimum at low temperatures which is reminiscent of the celebrated Kondo effect. 3 authors · Dec 17, 2009
- Superposition Yields Robust Neural Scaling The success of today's large language models (LLMs) depends on the observation that larger models perform better. However, the origin of this neural scaling law -- the finding that loss decreases as a power law with model size -- remains unclear. Starting from two empirical principles -- that LLMs represent more things than the model dimensions (widths) they have (i.e., representations are superposed), and that words or concepts in language occur with varying frequencies -- we constructed a toy model to study the loss scaling with model size. We found that when superposition is weak, meaning only the most frequent features are represented without interference, the scaling of loss with model size depends on the underlying feature frequency; if feature frequencies follow a power law, so does the loss. In contrast, under strong superposition, where all features are represented but overlap with each other, the loss becomes inversely proportional to the model dimension across a wide range of feature frequency distributions. This robust scaling behavior is explained geometrically: when many more vectors are packed into a lower dimensional space, the interference (squared overlaps) between vectors scales inversely with that dimension. We then analyzed four families of open-sourced LLMs and found that they exhibit strong superposition and quantitatively match the predictions of our toy model. The Chinchilla scaling law turned out to also agree with our results. We conclude that representation superposition is an important mechanism underlying the observed neural scaling laws. We anticipate that these insights will inspire new training strategies and model architectures to achieve better performance with less computation and fewer parameters. 3 authors · May 15, 2025
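The geometric claim above, that squared overlaps between many unit vectors packed into a d-dimensional space scale as 1/d, is easy to check numerically. The vector counts and dimensions below are arbitrary illustrative choices.

```python
import math
import random

def mean_sq_overlap(n_vecs, dim, rng):
    """Pack n_vecs random unit vectors into dim dimensions and return the
    mean squared overlap (interference) over distinct pairs."""
    vecs = []
    for _ in range(n_vecs):
        v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        vecs.append([x / norm for x in v])
    total, pairs = 0.0, 0
    for i in range(n_vecs):
        for j in range(i + 1, n_vecs):
            dot = sum(a * b for a, b in zip(vecs[i], vecs[j]))
            total += dot * dot
            pairs += 1
    return total / pairs

rng = random.Random(0)
# Doubling the dimension roughly halves the interference: ~1/dim scaling
o64 = mean_sq_overlap(200, 64, rng)
o128 = mean_sq_overlap(200, 128, rng)
```

Under strong superposition this 1/dim interference is what drives the loss to fall inversely with model width, largely independently of the feature frequency distribution.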
- The Flaw of Averages: Quantifying Uniformity of Performance on Benchmarks Benchmarks shape scientific conclusions about model capabilities and steer model development. This creates a feedback loop: stronger benchmarks drive better models, and better models demand more discriminative benchmarks. Ensuring benchmark reliability is therefore essential for trustworthy evaluation and meaningful progress. In this work, we study benchmark reliability from a distributional perspective and introduce benchmark harmony, which measures how uniformly a model's performance is distributed across the subdomains of a benchmark. We posit that high harmony is a desirable benchmark property, indicating that the aggregate metric reflects uniform competence across subdomains. Across 19 multiple-choice benchmarks and five model families, we map each benchmark onto a mean-variance plane of harmony computed across models, where high mean and low variance signal more reliable evaluation. Our analysis shows that less harmonious benchmarks can give misleading results, since overall accuracy may be disproportionately influenced by specific subdomains. For instance, ARC-Easy is overwhelmed by questions on Biological Concepts, overshadowing other critical subdomains such as Geography, Physics, Chemistry, and Environmental Science. By recommending that harmony should be reported alongside accuracy, we reframe evaluation from simple performance averages to a more robust, distributionally reliable measurement of performance. 3 authors · Sep 29, 2025
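The abstract does not give the exact formula for harmony. As one plausible stand-in, a uniformity measure based on 1 minus the coefficient of variation of per-subdomain accuracy captures the intended behavior; the function name and all numbers below are invented for illustration.

```python
import statistics

def harmony(subdomain_scores):
    """Illustrative uniformity measure (not the paper's definition):
    1 - coefficient of variation of per-subdomain accuracy, clipped at 0.
    Higher values indicate more uniform competence across subdomains."""
    mu = statistics.mean(subdomain_scores)
    if mu <= 0.0:
        return 0.0
    return max(0.0, 1.0 - statistics.pstdev(subdomain_scores) / mu)

balanced = harmony([0.71, 0.69, 0.70, 0.72])  # uniform across subdomains
lopsided = harmony([0.95, 0.40, 0.45, 0.50])  # one subdomain dominates the mean
```

Reporting such a score alongside aggregate accuracy would flag benchmarks like the ARC-Easy example, where a single subdomain carries the headline number.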
- Measuring a Parity Violation Signature in the Early Universe via Ground-based Laser Interferometers We show that pairs of widely separated interferometers are advantageous for measuring the Stokes parameter V of a stochastic background of gravitational waves. This parameter characterizes the asymmetry between the amplitudes of right- and left-handed waves, and the generation of this asymmetry is closely related to parity violation in the early universe. The advantageous pairs include LIGO(Livingston)-LCGT and AIGO-Virgo, which are relatively insensitive to Ω_GW (the simple intensity of the background). Using at least three detectors, the intensity Ω_GW and the degree of asymmetry V can be measured separately. 2 authors · Jul 4, 2007
- Beyond Symmetries : Anomalies in Transverse Ward--Takahashi Identities Anomalies in transverse Ward--Takahashi identities are studied, allowing discussion of the feasibility of anomalies arising in general non-symmetry Ward--Takahashi identities. We adopt the popular Fujikawa's method and rigorous dimensional renormalization to verify the existence of transverse anomalies to one-loop order and any loop order, respectively. The arbitrariness of coefficients of transverse anomalies is revealed, and a way out is also proposed after relating transverse anomalies to Schwinger terms and comparing symmetry and non-symmetry anomalies. Papers that claim the non-existence of transverse anomalies are reviewed to find anomalies hidden in their approaches. The role played by transverse anomalies is discussed. 2 authors · Dec 31, 2019
- Truly Scale-Equivariant Deep Nets with Fourier Layers In computer vision, models must be able to adapt to changes in image resolution to effectively carry out tasks such as image segmentation; this is known as scale-equivariance. Recent works have made progress in developing scale-equivariant convolutional neural networks, e.g., through weight-sharing and kernel resizing. However, these networks are not truly scale-equivariant in practice. Specifically, they do not consider anti-aliasing, as they formulate the down-scaling operation in the continuous domain. To address this shortcoming, we directly formulate down-scaling in the discrete domain with consideration of anti-aliasing. We then propose a novel architecture based on Fourier layers to achieve truly scale-equivariant deep nets, i.e., absolute zero equivariance-error. Following prior works, we test this model on the MNIST-scale and STL-10 datasets. Our proposed model achieves competitive classification performance while maintaining zero equivariance-error. 2 authors · Nov 6, 2023
- Critical scaling law for the deposition efficiency of inertia-driven particle collisions with a cylinder in high Reynolds number air flow The Earth's atmosphere is an aerosol: it contains suspended particles. When air flows over an obstacle such as an aircraft wing or tree branch, these particles may not follow the same paths as the air flowing around the obstacle. Instead, the particles may deviate from the path of the air and so collide with the surface of the obstacle. It is known that particle inertia can drive this deposition, and that there is a critical value of this inertia, below which no point particles deposit. Particle inertia is measured by the Stokes number, St. We show that near the critical value of the Stokes number, St_c, the amount of deposition obeys the unusual scaling law exp(-1/(St-St_c)^{1/2}). The scaling is controlled by the stagnation point of the flow: the time for a particle to reach the surface of the cylinder varies as 1/(St-St_c)^{1/2}, while the distance from the stagnation point (perpendicular to the flow direction) increases exponentially with time. The scaling law applies to inviscid flow, a model for flow at high Reynolds numbers. The unusual scaling means that the amount of particles deposited increases only very slowly above the critical Stokes number. This has consequences for applications ranging from rime formation and fog harvesting to pollination. 2 authors · Jan 3, 2023
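The near-critical law quoted above can be evaluated directly to see how slowly deposition turns on. The threshold St_c = 0.25 and unit prefactor below are placeholder values for illustration, not results from the paper.

```python
import math

def deposition_efficiency(St, St_c=0.25, A=1.0):
    """Near-critical scaling E ~ A * exp(-1/sqrt(St - St_c)) for St > St_c;
    zero below threshold (point particles do not deposit).
    St_c and A are illustrative placeholders, not fitted values."""
    if St <= St_c:
        return 0.0
    return A * math.exp(-1.0 / math.sqrt(St - St_c))

# Even well above threshold, the efficiency climbs only gradually
vals = [deposition_efficiency(St) for St in (0.26, 0.30, 0.50, 1.00)]
```

Just above threshold the exponential suppression is severe (at St - St_c = 0.01 the factor is e^{-10} ≈ 5 × 10^{-5}), which is the "very slow increase" the abstract describes.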
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws Scaling laws are typically fit using a family of models with a narrow range of frozen hyper-parameter choices. In this work we study scaling laws using a wide range of architecture and hyper-parameter choices, and highlight their impact on the resulting prescriptions. As a primary artifact of our research, we release the Gemstones: the most comprehensive open-source scaling-law dataset to date, consisting of over 4000 checkpoints from transformers with up to 2 billion parameters; these models have been trained with different learning rates, cooldown schedules, and architectural shapes. Our checkpoints enable more complex studies of scaling, such as a law that predicts language modeling performance as a function of model width and depth. By examining the various facets of our model suite, we find that the prescriptions of scaling laws can be highly sensitive to the experimental design process and the specific model checkpoints used during fitting. Code: https://github.com/mcleish7/gemstone-scaling-laws Tom Goldstein's Lab at University of Maryland, College Park · Feb 7, 2025
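A scaling-law fit of the kind such checkpoints support reduces, in its simplest single-variable form, to a log-log regression. The sketch below uses synthetic losses; real fits, including the width-and-depth law mentioned above, involve more terms (e.g. an irreducible-loss offset) and the sensitivity to checkpoint choice that the paper documents.

```python
import math

def fit_power_law(sizes, losses):
    """Least-squares fit of log(loss) = log(a) - b * log(N),
    i.e. loss = a * N^{-b}. Returns (a, b)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope

# Synthetic checkpoints obeying loss = 5 * N^{-0.3} exactly
sizes = [1e7, 1e8, 1e9, 2e9]
losses = [5.0 * s ** -0.3 for s in sizes]
a, b = fit_power_law(sizes, losses)  # recovers a ~ 5, b ~ 0.3
```

The paper's point is precisely that which checkpoints enter `sizes`/`losses` (and how hyper-parameters were frozen) can swing the fitted prescriptions, so the simple recipe above should be read as the starting point, not the full method.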