People Highlight
Maya Gokhale thrives on change.

Computer scientist Maya Gokhale appreciates the unpredictability and rapid pace of change in her chosen field. “You never know where computing is going to go, and that’s what’s exciting about it,” she says. “It’s harder to look forward five years in computer science than it is in other technical fields.”

Since joining LLNL’s Center for Applied Scientific Computing in 2007, Maya has centered her research on developing innovative high-performance computing (HPC) system architectures that effectively store, retrieve, and use the reams of data that modern scientific research produces. In one such effort, Maya and her team of researchers have been augmenting the memory of a server, or compute node, with large, parallel arrays of solid-state storage devices. Data stored in these nonvolatile memory arrays persist without power and sit close to the compute node, allowing for fast access and manipulation. The viability of this novel architecture was demonstrated in the June 2011 international Graph 500 competition, when Maya and her team ranked seventh using a single compute node equipped with nonvolatile memory. A machine near the top of the Graph 500 list can efficiently analyze vast quantities of data to find the hidden gems of useful information. Not only has the nonvolatile memory research supported data-intensive computing in fields as diverse as social network analysis and bioinformatics, it has also helped prepare for exascale computing: supercomputers able to perform at least one quintillion operations a second, at least 100 times what today’s machines can do.
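
The core kernel of the Graph 500 benchmark is a breadth-first search over an enormous synthetic graph, scored by how many edges a machine can traverse per second, which makes it a stress test of memory capacity and access speed rather than raw arithmetic. The short Python sketch below is purely illustrative, not the benchmark or any LLNL code; the graph, function, and variable names are invented for the example.

```python
# Illustrative only: a toy breadth-first search showing the scattered,
# hard-to-predict memory accesses that make Graph 500-style workloads
# so data-intensive. Real Graph 500 problems have billions of vertices,
# which is why backing node memory with large flash arrays helps.
from collections import deque

def bfs(adjacency, source):
    """Return the BFS parent of every vertex reachable from source."""
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for neighbor in adjacency.get(v, ()):  # each lookup jumps to a new memory location
            if neighbor not in parent:
                parent[neighbor] = v
                frontier.append(neighbor)
    return parent

# Tiny example graph, stored as adjacency lists.
graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(bfs(graph, 0))  # {0: 0, 1: 0, 2: 0, 3: 1}
```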

While the nonvolatile memory effort involves adding persistent memory to compute nodes, another of Maya’s data-science projects is its perfect complement: adding compute capability to the memory or storage system itself. Latency, the delay that occurs as data travels between two points (for example, from memory to the CPU), limits how quickly a processor can reach the data it needs. By enabling the computer to perform low-level computations within the memory itself, rather than first moving the data to the processor, Maya and her colleagues are alleviating some of that latency. “This is an incredible way of increasing memory bandwidth, as the internal bandwidth in the memory is quite large,” she says. Maya considers in-memory computing to be one of the most exciting and potentially transformative areas of research underway today in computer architecture.
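
In rough terms, the idea is to replace a bulk transfer of raw data with a tiny transfer of results. The sketch below is a conceptual toy model, not real processing-in-memory hardware or any code from this project; the SmartMemory class and its methods are invented for illustration.

```python
# Conceptual model only: contrasts shipping every element to the host CPU
# with asking the memory/storage side to run a simple operation in place
# and return just the answer. Names here are hypothetical.

class SmartMemory:
    """Toy stand-in for a memory device that can run simple operations in place."""
    def __init__(self, data):
        self._data = data  # lives "inside" the memory device

    def read_all(self):
        # Host-centric path: every element crosses the memory bus to the CPU.
        return list(self._data)

    def filter_count(self, predicate):
        # Near-data path: the operation runs where the data lives and only
        # a single integer crosses the bus back to the CPU.
        return sum(1 for x in self._data if predicate(x))

mem = SmartMemory(range(1_000_000))

# Conventional approach: move ~1,000,000 values, then compute on the host.
host_copy = mem.read_all()
count_on_host = sum(1 for x in host_copy if x % 7 == 0)

# In-memory approach: move one number.
count_in_memory = mem.filter_count(lambda x: x % 7 == 0)

assert count_on_host == count_in_memory
```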

In 2013, Maya was named a distinguished member of Livermore’s technical staff, an honor bestowed on only a tiny fraction of researchers. She has worked in both industry and academia, but one reason she favors a national laboratory is that her colleagues here help her identify and test real-world applications for her ideas. She also appreciates the chance to make direct contributions to national security through her work at LLNL. The daughter of two educators, Maya was the first in her family to pursue a career in science and technology. After enduring years of technical dinner-table conversations with Maya and her computer engineer husband, her son and daughter swore off computer science as a career, yet both now use computers extensively in their own scientific disciplines. Says Maya, “It was quite a challenge for my husband and me to juggle careers and kids, and we are grateful to have wonderful children.”