Friday, May 3, 2024

AI for control rooms

One of the first versions of AI was a computer that played chess. Developed in the 1950s, it could play a full game without the input of a human—except, of course, the moves of its opponent. It took the computer about eight minutes to make each of its own moves, but the computational breakthrough was the beginning of the end of a world without AI. Today, AI tools are taking on a variety of tasks, including helping to operate complex machines in particle physics and astrophysics. Just as the chess-playing AI required a human opponent, modern AI systems in control rooms must work together with human operators. And just as practicing against an AI might give a human new ideas for ways to play chess, working with an AI in the laboratory might help humans find new ways to operate machines for science.

Modeling particles inside accelerators

One commonly used type of AI is machine learning, in which algorithms search out patterns in large datasets without specific instructions on how to do so. At the US Department of Energy’s Fermi National Accelerator Laboratory, physicist Jason St. John writes machine-learning algorithms to help keep particle beams flowing in a particle accelerator.

Outside of high-energy physics, the most powerful accelerators are used to drive X-ray lasers, which allow scientists to study chemical, material and biological systems in action. Within high-energy physics, the most powerful accelerators collide particles at high energies, allowing scientists to study the fundamental constituents of the universe. Scientists are developing algorithms to help operate both types of accelerators.

Operating a world-leading particle accelerator is its own profession, requiring years of apprenticeship and on-the-job training. At Fermilab, for example, operators constantly monitor and tweak accelerator settings to keep particle beams circulating and focused at record-setting intensities. In the control room for a particle accelerator, an alarm system indicates when a beam is about to fail. What the alarm system doesn’t look for are the subtle misalignments of the beam, or other trends, that may appear a fraction of a second before the alarm. If those problems could be detected and, just as importantly, immediately addressed by a trained machine-learning algorithm, St. John says, the beam could keep running and scientists could wring even more science out of each hour of the day.

To build these types of algorithms, St. John works directly with accelerator operators and machine experts. Operators have the expertise to say what is worth automating, and St. John’s team has the expertise to determine whether a solution to a problem can be programmed. “Our work is a cooperative effort,” he says. “There will be problems that a machine-learning system doesn’t predict well, so you’ll always need a human there to make decisions, too.”
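The article doesn’t describe how St. John’s algorithms work under the hood. As a minimal, purely illustrative sketch of the general idea of catching subtle pre-alarm drifts, the Python example below trains a small autoencoder on windows of “healthy” beam-monitor readings and flags live windows that reconstruct poorly. Every name and number in it (BeamAutoencoder, WINDOW, N_SENSORS, THRESHOLD, the placeholder data) is an assumption made for this sketch, not part of Fermilab’s actual system.

```python
# Hypothetical sketch: flag subtle drifts in beam-position-monitor readings
# before a hard alarm fires, using reconstruction error from a small
# autoencoder trained on "normal" beam data. All names, shapes and
# thresholds here are illustrative assumptions, not Fermilab's setup.
import torch
import torch.nn as nn

WINDOW = 64          # samples per monitoring window (assumed)
N_SENSORS = 8        # number of monitor channels (assumed)

class BeamAutoencoder(nn.Module):
    """Compress a window of sensor readings and reconstruct it; a large
    reconstruction error suggests the beam is drifting away from normal."""
    def __init__(self, n_features: int, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 128), nn.ReLU(),
            nn.Linear(128, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def anomaly_score(model: BeamAutoencoder, window: torch.Tensor) -> float:
    """Mean squared reconstruction error for one flattened window."""
    model.eval()
    with torch.no_grad():
        x = window.reshape(1, -1)
        return torch.mean((model(x) - x) ** 2).item()

# Train on archived "healthy beam" windows (real data loading omitted).
model = BeamAutoencoder(WINDOW * N_SENSORS)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
healthy = torch.randn(1024, WINDOW * N_SENSORS)   # placeholder training data
for epoch in range(10):
    recon = model(healthy)
    loss = loss_fn(recon, healthy)
    opt.zero_grad()
    loss.backward()
    opt.step()

# In a control-room loop, a score above a tuned threshold could trigger an
# automatic correction or hand the decision to a human operator.
THRESHOLD = 0.05                                  # assumed, tuned offline
live_window = torch.randn(WINDOW, N_SENSORS)      # placeholder live data
if anomaly_score(model, live_window) > THRESHOLD:
    print("Subtle beam drift detected -- notify operator / apply correction")
```

The design choice here, using reconstruction error as a stand-in for “the beam no longer looks normal,” is one common way to frame anomaly detection when labeled examples of failures are scarce.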

Telescope scheduling

In astrophysics, scientists rely on powerful telescopes to help reveal the unknowns of the universe. The astrophysics community has a wide range of scientific questions to investigate, but it doesn’t have the resources to build a whole new telescope to address each one individually, says Fermilab scientist Brian Nord. So astrophysicists have to make compromises, like figuring out how to use a single, versatile telescope to address many different questions at once. The problem is that the observation schedules for the world’s most powerful telescopes are jam-packed, making it hard for astrophysicists to collect the data they need.

Back in 2015, Nord thought about a new tool to help with this problem: He realized AI could help astrophysicists arrange telescope schedules so that they could pursue multiple very different questions at the same time.

To test this idea, Nord; Peter Melchior, an assistant professor of astrophysical sciences at Princeton University; and Miles Cranmer, an assistant professor in data-intensive science at the University of Cambridge, developed an algorithm that modeled how a telescope could best be used to study a group of 1 billion galaxies. Scientists already had accurate measurements of the galaxies’ positions, but they were missing other key information, like their masses and distances from Earth. Traditionally, gathering this type of information for a billion galaxies takes a long time, sometimes years.

To see whether AI could find a way to gather the information more quickly, Nord’s team developed an unsupervised deep-learning model comprising two graph neural networks (GNNs). GNNs rely on the graph structure, that is, the nodes and edges, of a collection of objects. The team’s GNNs quickly recommended which galaxies to observe first, choosing a non-uniform group to provide a nuanced picture of the universe.
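The article doesn’t go into the architecture beyond “two graph neural networks,” so as a hedged sketch of the ingredient the paragraph describes (nodes and edges), the Python example below connects toy galaxy positions into a nearest-neighbor graph and passes them through a small message-passing network that scores each galaxy for observation priority. The helper names (knn_adjacency, GalaxyGNNLayer, ObservationScorer), the two-feature positions and the 1,000-galaxy toy catalog are all assumptions for illustration, not the model Nord, Melchior and Cranmer built.

```python
# Hypothetical sketch: galaxies as graph nodes (sky positions as features),
# edges to their nearest neighbors, and message passing that produces an
# "observe me first" score per galaxy. Illustrative only.
import torch
import torch.nn as nn

def knn_adjacency(positions: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Row-normalized adjacency linking each galaxy to its k nearest
    neighbors on the sky (positions: [n_galaxies, 2], assumed coordinates)."""
    dists = torch.cdist(positions, positions)              # pairwise distances
    knn = dists.topk(k + 1, largest=False).indices[:, 1:]  # drop self-match
    n = positions.shape[0]
    adj = torch.zeros(n, n)
    adj.scatter_(1, knn, 1.0)
    return adj / k                                          # mean aggregation

class GalaxyGNNLayer(nn.Module):
    """One round of message passing: combine a node's own features with the
    average features of its neighbors."""
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.self_lin = nn.Linear(dim_in, dim_out)
        self.neigh_lin = nn.Linear(dim_in, dim_out)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.self_lin(h) + self.neigh_lin(adj @ h))

class ObservationScorer(nn.Module):
    """Stack two GNN layers and output one priority score per galaxy."""
    def __init__(self, dim_in: int = 2, hidden: int = 32):
        super().__init__()
        self.gnn1 = GalaxyGNNLayer(dim_in, hidden)
        self.gnn2 = GalaxyGNNLayer(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.gnn1(feats, adj)
        h = self.gnn2(h, adj)
        return self.head(h).squeeze(-1)

# Toy usage: 1,000 placeholder galaxy positions instead of a billion.
positions = torch.rand(1000, 2)
adj = knn_adjacency(positions, k=5)
scores = ObservationScorer()(positions, adj)
priority = scores.argsort(descending=True)   # which galaxies to observe first
print(priority[:10])
```

Sorting the scores yields an ordered observing list, which is the flavor of output the article describes: a recommendation of which galaxies to point the telescope at first.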




