eScholarship
Open Access Publications from the University of California

About

The mission of Computing Sciences at Berkeley Lab is to achieve transformational, breakthrough impacts in scientific domains through the discovery and use of advanced computational methods and systems and to make those instruments accessible to the broad scientific community.

Computing Sciences

There are 5688 publications in this collection, published between 1962 and 2024.
Applied Math & Comp Sci (1984)

Real time evolution for ultracompact Hamiltonian eigenstates on quantum hardware

In this work we present a detailed analysis of variational quantum phase estimation (VQPE), a method based on real-time evolution for ground- and excited-state estimation on near-term hardware. We derive the theoretical foundation on which the approach stands and demonstrate that it provides one of the most compact variational expansions to date for solving strongly correlated Hamiltonians. At the center of VQPE lies a set of equations with a simple geometrical interpretation, which provides conditions on the time-evolution grid for decoupling eigenstates from the set of time-evolved expansion states and connects the method to the classical filter diagonalization algorithm. Further, we introduce what we call the unitary formulation of VQPE, in which the number of matrix elements that need to be measured scales linearly with the number of expansion states, and we provide an analysis of the effects of noise that substantially improves on previous treatments. The unitary formulation allows for a direct comparison to iterative phase estimation. Our results mark VQPE as both a natural and a highly efficient quantum algorithm for ground- and excited-state calculations of general many-body systems. We demonstrate a hardware implementation of VQPE for the transverse-field Ising model. Further, we illustrate its power on a paradigmatic example of strong correlation (Cr2 in the SVP basis set) and show that it is possible to reach chemical accuracy with as few as ~50 timesteps.
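
A minimal classical emulation of the linear-algebra step at the heart of this approach is sketched below, assuming a toy 4-site transverse-field Ising chain, a uniform time grid, and exact matrix exponentials in place of hardware time evolution; the state preparation, measurement, and unitary-formulation aspects of VQPE are not represented, and all parameter choices are illustrative.

```python
# Sketch of the VQPE/filter-diagonalization structure: build expansion states
# |phi_k> = exp(-i H t_k)|phi_0>, form overlap and Hamiltonian matrices, and
# solve the resulting generalized eigenvalue problem. Grid spacing and the
# regularization threshold are illustrative choices, not the paper's settings.
import numpy as np
from scipy.linalg import expm

def tfim_hamiltonian(n_sites, j=1.0, h=1.0):
    """Dense transverse-field Ising Hamiltonian on an open chain."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    eye = np.eye(2, dtype=complex)

    def op(single, site):
        mats = [single if k == site else eye for k in range(n_sites)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    ham = np.zeros((2**n_sites, 2**n_sites), dtype=complex)
    for k in range(n_sites - 1):
        ham -= j * op(sz, k) @ op(sz, k + 1)
    for k in range(n_sites):
        ham -= h * op(sx, k)
    return ham

n = 4
H = tfim_hamiltonian(n)
phi0 = np.ones(2**n, dtype=complex) / np.sqrt(2**n)   # simple reference state

# Real-time-evolved expansion states on a uniform time grid.
n_states, dt = 8, 0.3
basis = [expm(-1j * H * (k * dt)) @ phi0 for k in range(n_states)]
B = np.column_stack(basis)

S = B.conj().T @ B          # overlap matrix  S_kl = <phi_k|phi_l>
Hm = B.conj().T @ H @ B     # Hamiltonian matrix H_kl = <phi_k|H|phi_l>

# Regularize the (often ill-conditioned) overlap before the generalized solve.
vals, vecs = np.linalg.eigh(S)
keep = vals > 1e-10
proj = vecs[:, keep] / np.sqrt(vals[keep])
energies = np.linalg.eigvalsh(proj.conj().T @ Hm @ proj)

print("VQPE-style estimate of E0:", energies[0])
print("exact ground energy:      ", np.linalg.eigvalsh(H)[0])
```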

EXAGRAPH: Graph and combinatorial methods for enabling exascale applications

Combinatorial algorithms in general, and graph algorithms in particular, play a critical enabling role in numerous scientific applications. However, the irregular memory access patterns of these algorithms make them some of the hardest algorithmic kernels to implement on parallel systems. With tens of billions of hardware threads and deep memory hierarchies, exascale computing systems pose especially extreme challenges in scaling graph algorithms. The codesign center on combinatorial algorithms, ExaGraph, was established to design and develop methods and techniques for the efficient implementation of key combinatorial (graph) algorithms chosen from a diverse set of exascale applications. Algebraic and combinatorial methods play complementary roles in the advancement of computational science and engineering, each also enabling the other. In this paper, we survey the algorithmic and software development activities performed under the auspices of ExaGraph from both a combinatorial and an algebraic perspective. In particular, we detail our recent efforts in porting the algorithms to manycore accelerator (GPU) architectures. We also provide a brief survey of the applications that have benefited from scalable implementations of different combinatorial algorithms to enable scientific discovery at scale. We believe that several applications will benefit from the algorithmic and software tools developed by the ExaGraph team.
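
As a purely illustrative aside, the sketch below implements one of the simplest kernels in the class this abstract refers to (greedy distance-1 graph coloring) in plain, sequential Python; it only shows the neighbor-dependent, irregular access pattern that makes such kernels hard to scale, and is not ExaGraph code. The example graph is made up.

```python
# Generic, sequential sketch of a graph-coloring kernel. Each vertex reads its
# neighbors' colors (irregular, data-dependent accesses) and takes the smallest
# color not already used; parallel/GPU variants must resolve conflicts between
# vertices colored concurrently, which this sketch deliberately omits.

def greedy_coloring(adj):
    """Assign each vertex the smallest color not used by its neighbors.

    adj: dict mapping vertex -> iterable of neighbor vertices.
    Returns a dict vertex -> color (non-negative int).
    """
    colors = {}
    for v in adj:                          # vertex order fixes the result
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:                   # smallest available color
            c += 1
        colors[v] = c
    return colors

# Small example: a 5-cycle, which needs 3 colors with this ordering.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_coloring(cycle5))
```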

1981 more works
NERSC (1754)

Measurement of the inclusive cross-sections of single top-quark and top-antiquark t-channel production in pp collisions at √s = 13 TeV with the ATLAS detector

A measurement of the t-channel single-top-quark and single-top-antiquark production cross-sections in the lepton+jets channel is presented, using 3.2 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 13 TeV, recorded with the ATLAS detector at the LHC in 2015. Events are selected by requiring one charged lepton (electron or muon), missing transverse momentum, and two jets with high transverse momentum, exactly one of which is required to be b-tagged. Using a binned maximum-likelihood fit to the discriminant distribution of a neural network, the cross-sections are determined to be σ(tq) = 156 ± 5 (stat.) ± 27 (syst.) ± 3 (lumi.) pb for single top-quark production and σ(t̄q) = 91 ± 4 (stat.) ± 18 (syst.) ± 2 (lumi.) pb for single top-antiquark production, assuming a top-quark mass of 172.5 GeV. The cross-section ratio is measured to be R_t = σ(tq)/σ(t̄q) = 1.72 ± 0.09 (stat.) ± 0.18 (syst.). All results are in agreement with Standard Model predictions.
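
For readers unfamiliar with the fitting technique named in this abstract, the following is a schematic, single-parameter binned maximum-likelihood fit with invented bin contents; the actual analysis fits a neural-network discriminant and includes many nuisance parameters for systematic uncertainties, none of which are modeled here.

```python
# Schematic binned maximum-likelihood fit: fit a signal strength mu to a binned
# "discriminant" distribution with fixed signal and background templates.
# All bin contents are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

signal_template = np.array([2.0, 5.0, 12.0, 20.0])   # expected signal per bin
background = np.array([50.0, 30.0, 15.0, 8.0])       # expected background per bin
observed = np.array([53, 38, 29, 31])                 # pseudo-data counts

def nll(mu):
    """Negative log-likelihood for Poisson-distributed bin counts."""
    expected = mu * signal_template + background
    return -np.sum(poisson.logpmf(observed, expected))

fit = minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded")
mu_hat = fit.x

# Approximate 68% interval from the points where the NLL rises by 0.5.
scan = np.linspace(0.0, 5.0, 2001)
inside = scan[np.array([nll(m) for m in scan]) <= nll(mu_hat) + 0.5]
print(f"mu_hat = {mu_hat:.2f}, 68% interval ~ [{inside.min():.2f}, {inside.max():.2f}]")
```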

Combined search for neutrinos from dark matter self-annihilation in the Galactic Center with ANTARES and IceCube

We present the results of the first combined dark matter search targeting the Galactic Center using the ANTARES and IceCube neutrino telescopes. For dark matter particles with masses from 50 to 1000 GeV, the sensitivities on the self-annihilation cross section set by ANTARES and IceCube are comparable, making this mass range particularly interesting for a joint analysis. Dark matter self-annihilation through the τ⁺τ⁻, μ⁺μ⁻, bb̄, and W⁺W⁻ channels is considered for both the Navarro-Frenk-White and Burkert halo profiles. In the combination of 2101.6 days of ANTARES data and 1007 days of IceCube data, no excess over the expected background is observed. Limits on the thermally averaged dark matter annihilation cross section ⟨σAυ⟩ are set. These limits present an improvement of up to a factor of 2 in the studied dark matter mass range with respect to the individual limits published by both collaborations. When considering dark matter particles with a mass of 200 GeV annihilating through the τ⁺τ⁻ channel, the value obtained for the limit is 7.44 × 10⁻²⁴ cm³ s⁻¹ for the Navarro-Frenk-White halo profile. For the purpose of this joint analysis, the model parameters and the likelihood are unified, providing a benchmark for forthcoming dark matter searches performed by neutrino telescopes.
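
The combination strategy described here (one shared physics parameter, a product of per-detector likelihoods, and a limit set from a likelihood-ratio scan) can be illustrated with a toy counting experiment, as in the sketch below; all numbers are invented, and the real analysis uses unbinned per-event likelihoods and detector-specific acceptances.

```python
# Toy illustration of a combined limit: two detectors share one physics
# parameter (an annihilation-rate scale standing in for <sigma v>) and their
# Poisson likelihoods are multiplied before scanning for an upper limit.
import numpy as np
from scipy.stats import poisson

# Per-detector expected background, signal yield per unit scale, and observed
# counts -- all invented for illustration.
detectors = [
    {"background": 120.0, "signal_per_unit": 8.0, "observed": 118},
    {"background":  45.0, "signal_per_unit": 5.0, "observed":  49},
]

def combined_nll(scale):
    """Joint negative log-likelihood over both detectors."""
    total = 0.0
    for d in detectors:
        expected = d["background"] + scale * d["signal_per_unit"]
        total -= poisson.logpmf(d["observed"], expected)
    return total

scales = np.linspace(0.0, 5.0, 2001)
nll = np.array([combined_nll(s) for s in scales])
best = nll.min()

# Crude one-sided 90% CL limit: largest scale with 2*(NLL - NLL_min) < 2.71.
allowed = scales[2.0 * (nll - best) < 2.71]
print("combined 90% CL upper limit on the signal scale:", allowed.max())
```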

1751 more works
Scientific Data (2222)

High-Performance Computational Intelligence and Forecasting Technologies

This report provides an introduction to the Computational Intelligence and Forecasting Technologies (CIFT) project at Lawrence Berkeley National Laboratory (LBNL). The main objective of CIFT is to promote the use of high-performance computing (HPC) tools and techniques for the analysis of streaming data. After data volume was cited as the explanation for the five-month delay before the SEC and CFTC issued their report on the 2010 Flash Crash, LBNL started the CIFT project to apply HPC technologies to manage and analyze financial data. Making timely decisions with streaming data is a requirement for many different applications, such as avoiding impending failure in the electric power grid or a liquidity crisis in financial markets. In all these cases, HPC tools are well suited to handling the complex data dependencies and providing timely solutions. Over the years, CIFT has worked on a number of different forms of streaming data, including data from vehicle traffic, the electric power grid, and electricity usage, among others. The following sections explain the key features of HPC systems, introduce a few special tools used on these systems, and provide examples of streaming data analyses using these HPC tools.
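
As a deliberately simple example of the streaming-analysis pattern the report describes, the sketch below flags outliers in a data stream using a rolling-window z-score. It stands in only for the idea of making decisions while data arrive; CIFT's actual HPC tooling (parallel I/O, in-memory stores, and so on) is not represented, and the data are synthetic.

```python
# Rolling-window z-score detector: flag values far outside the recent window
# as the stream is consumed, without storing the full history.
from collections import deque
import math
import random

def streaming_outliers(stream, window=50, threshold=4.0):
    """Yield (index, value) for points far outside the recent window."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = math.sqrt(var) or 1.0          # guard against zero spread
            if abs(x - mean) / std > threshold:
                yield i, x
        recent.append(x)

# Synthetic "market-like" stream with one injected shock.
random.seed(0)
data = [random.gauss(100.0, 0.5) for _ in range(500)]
data[300] += 10.0                                # the event we hope to flag
print(list(streaming_outliers(data)))
```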

Measurement of the inclusive cross-sections of single top-quark and top-antiquark t-channel production in pp collisions at √s = 13 TeV with the ATLAS detector

A measurement of the t-channel single-top-quark and single-top-antiquark production cross-sections in the lepton+jets channel is presented, using 3.2 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 13 TeV, recorded with the ATLAS detector at the LHC in 2015. Events are selected by requiring one charged lepton (electron or muon), missing transverse momentum, and two jets with high transverse momentum, exactly one of which is required to be b-tagged. Using a binned maximum-likelihood fit to the discriminant distribution of a neural network, the cross-sections are determined to be σ(tq) = 156 ± 5 (stat.) ± 27 (syst.) ± 3 (lumi.) pb for single top-quark production and σ(t̄q) = 91 ± 4 (stat.) ± 18 (syst.) ± 2 (lumi.) pb for single top-antiquark production, assuming a top-quark mass of 172.5 GeV. The cross-section ratio is measured to be R_t = σ(tq)/σ(t̄q) = 1.72 ± 0.09 (stat.) ± 0.18 (syst.). All results are in agreement with Standard Model predictions.

Constraining Reionization with the z ∼ 5–6 Lyα Forest Power Spectrum: The Outlook after Planck

The latest measurements of the cosmic microwave background electron-scattering optical depth reported by Planck significantly reduce the allowed space of reionization models, pointing toward a later-ending and/or less extended phase transition than previously believed. Reionization impulsively heats the intergalactic medium (IGM), and owing to long cooling and dynamical times in the diffuse gas, comparable to the Hubble time, memory of reionization heating is retained. Therefore, a late-ending reionization has significant implications for the structure of the Lyα forest. Using state-of-the-art hydrodynamical simulations that allow us to vary the timing of reionization and its associated heat injection, we argue that extant thermal signatures from reionization can be detected via the Lyα forest power spectrum at z ∼ 5–6. This arises because the small-scale cutoff in the power depends not only on the IGM temperature at these epochs but is also particularly sensitive to the pressure-smoothing scale set by the IGM's full thermal history. Comparing our different reionization models with existing measurements of the Lyα forest flux power spectrum at these redshifts, we find that models satisfying Planck's constraint favor a moderate amount of heat injection, consistent with galaxies driving reionization but disfavoring quasar-driven scenarios. We study the feasibility of measuring the flux power spectrum at z ∼ 5–6 using mock quasar spectra and conclude that a sample of ∼10 high-resolution spectra with attainable signal-to-noise ratio would allow one to distinguish between different reionization scenarios.
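
The central observable in this abstract, the 1D flux power spectrum, can be sketched in a few lines, as below; the synthetic sightline, pixel size, and normalization convention are illustrative stand-ins for real quasar spectra and the paper's conventions.

```python
# Bare-bones 1D power spectrum of the Lyman-alpha flux contrast along one
# sightline, computed with an FFT of delta_F = F/<F> - 1.
import numpy as np

def flux_power_spectrum(flux, dv_kms):
    """Return (k, P(k)) for delta_F = flux/<flux> - 1.

    flux:   array of transmitted-flux values along one sightline
    dv_kms: pixel width in km/s; k comes out in s/km
    """
    delta = flux / flux.mean() - 1.0
    n = delta.size
    dft = np.fft.rfft(delta)
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv_kms)
    # Simple 1D normalization: P(k) = |delta_k|^2 * (pixel width) / N.
    power = (np.abs(dft) ** 2) * dv_kms / n
    return k[1:], power[1:]                  # drop the k = 0 mode

# Synthetic sightline: smoothed random absorption, 2048 pixels of 10 km/s.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 2048)
kernel = np.exp(-0.5 * (np.arange(-50, 51) / 10.0) ** 2)
field = np.convolve(noise, kernel / kernel.sum(), mode="same")
flux = np.exp(-np.exp(field))                # crude lognormal-style transmission
k, pk = flux_power_spectrum(flux, dv_kms=10.0)
print(k[:3], pk[:3])
```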

2219 more works
Scientific Networking (365)

High-Performance Computational Intelligence and Forecasting Technologies

This report provides an introduction to the Computational Intelligence and Forecasting Technologies (CIFT) project at Lawrence Berkeley National Laboratory (LBNL). The main objective of CIFT is to promote the use of high-performance computing (HPC) tools and techniques for the analysis of streaming data. After data volume was cited as the explanation for the five-month delay before the SEC and CFTC issued their report on the 2010 Flash Crash, LBNL started the CIFT project to apply HPC technologies to manage and analyze financial data. Making timely decisions with streaming data is a requirement for many different applications, such as avoiding impending failure in the electric power grid or a liquidity crisis in financial markets. In all these cases, HPC tools are well suited to handling the complex data dependencies and providing timely solutions. Over the years, CIFT has worked on a number of different forms of streaming data, including data from vehicle traffic, the electric power grid, and electricity usage, among others. The following sections explain the key features of HPC systems, introduce a few special tools used on these systems, and provide examples of streaming data analyses using these HPC tools.

Algal genomes reveal evolutionary mosaicism and the fate of nucleomorphs

Cryptophyte and chlorarachniophyte algae are transitional forms in the widespread secondary endosymbiotic acquisition of photosynthesis by engulfment of eukaryotic algae. Unlike most secondary plastid-bearing algae, miniaturized versions of the endosymbiont nuclei (nucleomorphs) persist in cryptophytes and chlorarachniophytes. To determine why, and to address other fundamental questions about eukaryote-eukaryote endosymbiosis, we sequenced the nuclear genomes of the cryptophyte Guillardia theta and the chlorarachniophyte Bigelowiella natans. Both genomes have 21,000 protein genes and are intron rich, and B. natans exhibits unprecedented alternative splicing for a single-celled organism. Phylogenomic analyses and subcellular targeting predictions reveal extensive genetic and biochemical mosaicism, with both host- and endosymbiont-derived genes servicing the mitochondrion, the host cell cytosol, the plastid and the remnant endosymbiont cytosol of both algae. Mitochondrion-to-nucleus gene transfer still occurs in both organisms, but plastid-to-nucleus and nucleomorph-to-nucleus transfers do not, which explains why a small residue of essential genes remains locked in each nucleomorph.

362 more works