Dozens of members of LLNL’s Computation Directorate will attend the 2018 Supercomputing Conference. The Laboratory’s presence includes tutorials, poster and paper sessions, and the Job Fair.
I/O, Networking, and Storage
Disk- and tape-delivered I/O bandwidths are being rapidly outpaced by capacity increases, which means valuable processor time is wasted waiting for data delivery. For extreme-scale machines to be productive, bandwidth challenges throughout the entire I/O stack must be addressed. We’re working on techniques and technologies that leverage node-local or near-node storage, refactor parallel file systems, and evolve tertiary storage software to enable efficient extreme-scale computing environments.
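To make the node-local idea concrete, here is a minimal Python sketch of the pattern these techniques build on; the paths and sizes are hypothetical placeholders, not Livermore code. The application checkpoints to fast near-node storage, then a background thread drains the file to the shared parallel file system so computation can resume without waiting on the slower tier.

```python
import shutil
import threading
from pathlib import Path

# Hypothetical tier locations; real systems use site-specific mounts.
NODE_LOCAL = Path("/tmp/node_local_ssd")   # fast, near-node storage
PARALLEL_FS = Path("/tmp/parallel_fs")     # slower, shared file system

def write_checkpoint(step: int, data: bytes) -> Path:
    """Write a checkpoint to the fast local tier and return its path."""
    NODE_LOCAL.mkdir(parents=True, exist_ok=True)
    path = NODE_LOCAL / f"ckpt_{step:06d}.bin"
    path.write_bytes(data)
    return path

def drain_to_pfs(local_path: Path) -> None:
    """Copy the checkpoint to the parallel file system off the critical path."""
    PARALLEL_FS.mkdir(parents=True, exist_ok=True)
    shutil.copy2(local_path, PARALLEL_FS / local_path.name)

# Compute resumes as soon as the local write finishes; the drain to
# the parallel file system overlaps with the next compute phase.
for step in range(3):
    ckpt = write_checkpoint(step, b"x" * 1024)       # fast local write
    threading.Thread(target=drain_to_pfs, args=(ckpt,)).start()
    # ... next compute phase runs while the copy proceeds ...
```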
I/O and application-level benchmarks put Intel’s Optane 3D XPoint non-volatile memory technology to the test.
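The Livermore tests are application-level benchmarks; as a rough illustration of what a low-level I/O bandwidth probe looks like, the sketch below (file path and transfer sizes are arbitrary placeholders) times a sequential write all the way through fsync, so the device, not just the page cache, is measured.

```python
import os
import time

def sequential_write_bandwidth(path: str, size_mb: int = 256,
                               block_kb: int = 1024) -> float:
    """Time a sequential write and return MB/s (a crude bandwidth probe)."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # force data to the device, not just the cache
    return size_mb / (time.perf_counter() - start)

# Point the path at a file on the device under test
# (e.g., an Optane-backed mount).
print(f"{sequential_write_bandwidth('/tmp/bench.bin'):.1f} MB/s")
```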
“If applications don’t read and write files in an efficient manner,” system software developer Elsa Gonsiorowski warns, “entire systems can crash.”
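Gonsiorowski’s warning is about access patterns, not hardware. As a hypothetical illustration, the snippet below contrasts issuing one tiny unbuffered write per record, which floods the file system with small requests, against batching records in memory and issuing a single large write.

```python
import io
import time

records = [f"value {i}\n".encode() for i in range(100_000)]

# Inefficient: one tiny unbuffered write (one syscall) per record.
start = time.perf_counter()
with open("/tmp/naive.dat", "wb", buffering=0) as f:
    for rec in records:
        f.write(rec)
naive = time.perf_counter() - start

# Efficient: accumulate records in memory, then issue one large write.
start = time.perf_counter()
buf = io.BytesIO()
for rec in records:
    buf.write(rec)
with open("/tmp/batched.dat", "wb") as f:
    f.write(buf.getvalue())
batched = time.perf_counter() - start

print(f"per-record: {naive:.2f}s  batched: {batched:.2f}s")
```

On a shared parallel file system the gap is far larger than on a laptop, because every small request also contends with other users’ traffic.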
LLNL is home to one of the world’s preeminent data archives, run by Livermore Computing’s Data Storage Group.
Livermore’s archive leverages High Performance Storage System (HPSS), a hierarchical storage management (HSM) application that runs on a cluster architecture, making the archive user-friendly, extremely scalable, and lightning fast. The result: vast amounts of data can be both stored securely and accessed quickly for decades to come.
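HPSS itself is a large distributed system, but the core HSM idea fits in a few lines. The toy Python class below (invented names and an invented eviction policy, purely illustrative) keeps recently used files on a fast disk tier and migrates the least recently used ones to a tape tier when the disk cache fills.

```python
import time
from dataclasses import dataclass, field

@dataclass
class ArchiveFile:
    name: str
    size: int
    last_access: float = field(default_factory=time.time)
    tier: str = "disk"  # "disk" (fast cache) or "tape" (capacity tier)

class HierarchicalStore:
    """Toy two-tier store: hot files stay on disk, cold ones move to tape."""

    def __init__(self, disk_capacity: int):
        self.disk_capacity = disk_capacity
        self.files: dict[str, ArchiveFile] = {}

    def put(self, name: str, size: int) -> None:
        self.files[name] = ArchiveFile(name, size)
        self._evict_cold()

    def get(self, name: str) -> ArchiveFile:
        f = self.files[name]
        if f.tier == "tape":        # stage cold data back to disk on access
            f.tier = "disk"
        f.last_access = time.time()
        self._evict_cold()
        return f

    def _evict_cold(self) -> None:
        """Migrate least-recently-used files to tape when disk fills."""
        def disk_used() -> int:
            return sum(f.size for f in self.files.values() if f.tier == "disk")
        on_disk = sorted(
            (f for f in self.files.values() if f.tier == "disk"),
            key=lambda f: f.last_access,
        )
        for f in on_disk:
            if disk_used() <= self.disk_capacity:
                break
            f.tier = "tape"
```

A production HSM adds metadata services, tape robotics, and parallel data movers, but the tiering decision follows this same stage-on-access, evict-when-full shape.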
First use of Amazon Web Services promises lower-cost, higher-performance IT services.
LLNL studies in networking and noise reduction suggest a better way to configure systems to enable cost-effective scalability and more consistent performance.
Livermore computer scientists are incorporating ZFS into their high-performance parallel file systems for better performance and scalability.
Livermore Computing staff is enhancing the high-speed InfiniBand data network used in many of its high-performance computing systems and file systems.