CHTC Partners Using CHTC Technologies and Services

IceCube and Francis Halzen

IceCube has transformed a cubic kilometer of natural Antarctic ice into a neutrino detector. We have discovered a flux of high-energy neutrinos of cosmic origin, with an energy flux comparable to that of high-energy photons. We have also identified its first source: on September 22, 2017, following an alert initiated by a 290-TeV neutrino, observations by other astronomical telescopes pinpointed a flaring active galaxy, powered by a supermassive black hole. We study the neutrinos themselves, some with energies exceeding those produced by accelerators by a factor of one million. The IceCube Neutrino Observatory is managed and operated by the Wisconsin IceCube Astroparticle Physics Center (WIPAC) in the Office of the Vice Chancellor for Research and Graduate Education and funded by a cooperative agreement with the National Science Foundation. We have used CHTC and the Open Science Pool for over a decade to perform all large-scale data analysis tasks and to generate Monte Carlo simulations of the instrument's performance. Without CHTC and OSPool resources we would simply be unable to make any of IceCube's groundbreaking discoveries. Francis Halzen is the Principal Investigator of IceCube. See the IceCube web site for project details.

Susan Hagness

In our research, we are developing a novel computational tool for THz-frequency characterization of materials with high carrier densities, such as highly doped semiconductors and metals. The numerical technique tracks coupled carrier-field dynamics by combining an ensemble Monte Carlo simulator of carrier transport with the finite-difference time-domain (FDTD) technique for Maxwell's equations and the molecular dynamics technique for close-range Coulomb interactions. The technique is computationally intensive, and each simulation runs long enough (12-20 hours) that our group's cluster cannot keep up. This is why we think CHTC can help: it would let us run many more jobs than we are able to run now.
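To make the field-solver part of that description concrete, here is a minimal one-dimensional FDTD update for Maxwell's equations in vacuum. It is only an illustrative sketch: the group's actual tool is a coupled ensemble Monte Carlo/FDTD/molecular-dynamics solver, and every name and parameter below is a placeholder.

```python
import numpy as np

# Minimal 1D FDTD (Yee) update for Maxwell's equations in vacuum.
# Illustrative only: the real solver couples field updates like these to
# ensemble Monte Carlo carrier transport and molecular-dynamics Coulomb forces.

c0 = 299792458.0              # speed of light (m/s)
eps0 = 8.8541878128e-12       # vacuum permittivity (F/m)
mu0 = 4e-7 * np.pi            # vacuum permeability (H/m)

nz, nt = 400, 1000            # grid cells, time steps
dz = 1e-6                     # spatial step (m)
dt = 0.5 * dz / c0            # time step (Courant number 0.5, stable in 1D)

ez = np.zeros(nz)             # electric field on integer grid points
hy = np.zeros(nz - 1)         # magnetic field on half grid points

for n in range(nt):
    # Update H from the spatial difference of E, then E from H.
    hy += (dt / (mu0 * dz)) * np.diff(ez)
    ez[1:-1] += (dt / (eps0 * dz)) * np.diff(hy)
    # Soft source: inject a Gaussian pulse at one grid point.
    ez[nz // 4] += np.exp(-((n - 60) / 20.0) ** 2)
```

Many such simulations, each with different material or excitation parameters, can run independently of one another, which is what makes the workload a good match for high throughput computing.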

Phil Townsend

Professor Phil Townsend of Forest and Wildlife Ecology says, "Our research (funded by NASA and the USDA Forest Service) strives to understand the outbreak dynamics of major forest insect pests in North America through simulation modeling. As part of this effort, we map forest species and their abundance using multi-temporal Landsat satellite data. My colleagues have written an automatic variable selection routine in MATLAB to preselect the most important image variables for modeling and mapping forest species abundance. However, depending on the number of records and the initial variables, this process can take weeks to run. Hence, we seek resources to speed up this process."
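As a rough illustration of what such a variable selection step can look like (a sketch only, not the group's MATLAB routine; the data below are synthetic), a greedy forward selection against a least-squares criterion might be written as follows. Each candidate run is independent, which is the property that high throughput computing exploits.

```python
import numpy as np

def forward_select(X, y, max_vars=10):
    """Greedy forward selection: at each step, add the predictor that most
    reduces the residual sum of squares of an ordinary least-squares fit.
    Illustrative stand-in for an automatic variable selection routine."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    for _ in range(min(max_vars, p)):
        best_rss, best_j = np.inf, None
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if rss < best_rss:
                best_rss, best_j = rss, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Synthetic data standing in for multi-temporal Landsat image variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))                      # 500 records, 50 variables
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + rng.normal(scale=0.5, size=500)
print(forward_select(X, y, max_vars=5))             # should rank columns 3 and 17 first
```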

Biological Magnetic Resonance Data Bank

The Biological Magnetic Resonance Data Bank (BMRB) is headquartered within UW-Madison's National Magnetic Resonance Facility at Madison (NMRFAM) and uses the CHTC for research in support of the data bank.

Natalia de Leon

Ethanol per acre is determined by the amount of biomass per acre and the quality of that biomass. Quality includes the concentration of fermentable sugars, the availability of those sugars for fermentation, and the concentration of inhibitors to the fermentation process. We are using maize (Zea mays L.) as a model grass to identify genes and pathways underlying these traits. Maize is an excellent model both because it is a potential source of biomass for the lignocellulosic ethanol industry and because it is closely related to other important dedicated-bioenergy species including Miscanthus (Miscanthus giganteus) and switchgrass (Panicum virgatum). Our approach is to genetically dissect endogenous variation for biomass quantity and quality in maize, utilizing genetic mapping, association analysis and transcriptional profiling to identify genes and alleles that underlie phenotypic variation among maize genotypes. This forward genetic analysis provides an entry point into genes and pathways that could be further studied and manipulated to maximize ethanol production. To that end, we utilize cutting-edge genomic technologies, such as novel high-throughput approaches to sequencing and genome-wide expression profiling technologies, as well as advanced computational procedures that utilize genomic information to understand the molecular basis of quantitative variation. This research is part of the Department of Energy-supported Great Lakes Bioenergy Research Center at the University of Wisconsin-Madison.
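As a generic illustration of single-marker association analysis (not the group's actual pipeline; the markers, lines, and trait below are synthetic), one can score each marker by a simple regression of the phenotype on the genotype:

```python
import numpy as np

def marker_scan(genotypes, phenotype):
    """Score each marker by the squared correlation between phenotype and
    genotype (coded 0/1/2). A generic single-marker association scan,
    not a full quantitative-genetics pipeline."""
    scores = []
    for g in genotypes.T:                       # one column per marker
        if np.std(g) == 0:                      # skip monomorphic markers
            scores.append(0.0)
            continue
        r = np.corrcoef(g, phenotype)[0, 1]
        scores.append(r ** 2)
    return np.array(scores)

# Synthetic example: 200 maize lines, 1,000 markers, one causal marker.
rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(200, 1000)).astype(float)
pheno = 0.8 * geno[:, 42] + rng.normal(size=200)     # e.g. a sugar-content trait
top = np.argsort(marker_scan(geno, pheno))[::-1][:5]
print("top-scoring markers:", top)
```

Scans like this are trivially parallel across markers, traits, and environments, which is why they map well onto high throughput computing.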

Sebastian Heinz

We are investigating the impact that astrophysical jets have on the intergalactic medium (IGM) in galaxy clusters. These jets, emanating from supermassive black holes at the centers of galaxies, can significantly alter the IGM, which in turn affects galaxy evolution. Computer simulations permit us to test various scenarios, such as determining how the IGM is heated.

Barry Van Veen

The bio-signal processing laboratory develops statistical signal processing methods for biomedical problems. We use CHTC for causal network modeling of brain electrical activity. We develop methods for identifying network models from noninvasive measurements of the electric/magnetic fields at the scalp, or invasive measurements of the electric fields at or in the cortex, such as electrocorticography. Model identification involves high throughput computing applied to large datasets consisting of hundreds of spatial channels, each containing thousands of time samples.
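As a rough sketch of what causal network identification can involve (a simplified illustration under a first-order linear assumption, not the lab's actual estimator; the simulated channels stand in for real recordings), one can fit a multivariate autoregressive model by least squares and read directed influences off its coefficient matrix:

```python
import numpy as np

def fit_mvar1(x):
    """Least-squares fit of a first-order multivariate autoregressive model
    x[t] = A @ x[t-1] + noise. Off-diagonal entries of A suggest directed
    (Granger-style) influence between channels. Illustrative sketch only."""
    past, present = x[:-1], x[1:]                # shape (T-1, channels) each
    B, *_ = np.linalg.lstsq(past, present, rcond=None)
    return B.T                                   # A[i, j]: channel j -> channel i

# Simulate 3 channels in which channel 0 drives channel 1.
rng = np.random.default_rng(2)
T, C = 5000, 3
x = np.zeros((T, C))
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.1, size=C)
    x[t, 1] += 0.4 * x[t - 1, 0]                 # directed influence 0 -> 1
print(np.round(fit_mvar1(x), 2))                 # expect ~0.4 at A[1, 0]
```

Fitting many such models across channels, time windows, and subjects produces the large number of independent computations that high throughput computing handles well.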

Compact Muon Solenoid (CMS) at the LHC

The UW team participating in the Compact Muon Solenoid (CMS) experiment analyzes petabytes of data from proton-proton collisions in the Large Hadron Collider (LHC). We use the unprecedented energies of the LHC to study Higgs boson signatures, electroweak physics, and the possibility of exotic particles beyond the Standard Model of particle physics. Important calculations are also performed to better tune the experiment's trigger system, which is responsible for making nanosecond-scale decisions about which collisions in the LHC should be recorded for further analysis.

ATLAS Experiment


Hazy Research Group

The Hazy Research Group in the Department of Computer Sciences is led by Christopher Ré, with interests in large-scale and deep data analytics. A machine reading system is a large software system that extracts information and knowledge buried in raw data such as text, tables, figures, and scanned documents. For example, it can extract facts like "Barack Obama wins the 2012 election" from news articles, or "Barnett Formation contains 6% Carbon" from geology journal articles. To extract this kind of information, a machine reading system requires deep understanding and statistical analytics over large document corpora. In the Hazy Research Group, we are building a machine reading system that supports scientific applications like GeoDeepDive and many other projects. Please visit our YouTube Channel for video overviews of our projects. We leverage the resources of the CHTC and the national Open Science Grid to enable our machine reading system to quickly perform a whole host of computationally expensive tasks like statistical linguistic processing, speech-to-text transcription, and optical character recognition (OCR). For example, on a crawl of 500 million web pages, we estimated that our deep linguistic parsing would take more than 5 years on a single machine. With the help of CHTC, we were able to do it in just 1 week! Similarly, on a recent batch of 30,000 geology journal articles, we estimated that the OCR task would take 34 years on a single machine. With CHTC, it took about 2 weeks. Thanks CHTC!
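The pattern behind those speedups is task-level parallelism: the corpus is split into many independent chunks, and each chunk becomes one job in the pool. A minimal, hypothetical sketch of the chunking step (file names and chunk sizes are invented, and the real pipeline is considerably more involved) could look like this:

```python
import os

def write_chunks(doc_paths, chunk_size, out_dir="chunks"):
    """Split a list of document paths into fixed-size chunks, writing one
    manifest file per chunk. Each manifest is then the input of one
    independent job (e.g., OCR or linguistic parsing) in the pool."""
    os.makedirs(out_dir, exist_ok=True)
    for i in range(0, len(doc_paths), chunk_size):
        chunk = doc_paths[i:i + chunk_size]
        name = os.path.join(out_dir, f"chunk_{i // chunk_size:05d}.txt")
        with open(name, "w") as f:
            f.write("\n".join(chunk) + "\n")

# Hypothetical example: 30,000 journal-article scans, 100 per job,
# giving 300 independent jobs that can run anywhere in the pool.
docs = [f"articles/paper_{k:06d}.pdf" for k in range(30000)]
write_chunks(docs, chunk_size=100)
```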