The United States Department of Energy has, for the last several decades, routinely competed to operate the most powerful supercomputers in the world.
In June 2018, it fired up Summit, which, according to NBC News, "has been clocked at handling 200 quadrillion calculations a second (or 200 petaflops). That's more than twice as fast as the previous record-holder, China's 93-petaflop Sunway TaihuLight, and so fast that it would take every person on Earth doing one calculation a second for 305 days to do what Summit can do in a single second."
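That comparison holds up as a rough back-of-the-envelope calculation, assuming a world population of about 7.6 billion in 2018:

```python
# Rough check of the NBC News comparison (the population figure is an assumption).
summit_flops = 200e15            # 200 petaflops = 200 quadrillion calculations per second
world_population = 7.6e9         # approximate 2018 world population
calcs_per_person_per_sec = 1     # one calculation per person per second

seconds_needed = summit_flops / (world_population * calcs_per_person_per_sec)
days_needed = seconds_needed / 86400   # 86,400 seconds in a day
print(f"{days_needed:.0f} days")       # prints roughly 305 days
```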
But what does that have to do with oil? Shawn Bennett, deputy assistant secretary for the Office of Oil and Natural Gas at the U.S. Department of Energy (DOE), says the DOE is looking at applying its big data computational abilities to analyzing geology and completions in the oilpatch.
The 2018 Summit machine is capable of 10 times the calculations of the Sequoia supercomputer, which the DOE still operates and which is currently ranked 10th in the world. In total, the DOE runs five of the ten most powerful computers on the planet. That's the level of computing power the department has at its disposal, and it is now looking at applying it to the geology of the American oilpatch.
"When we're looking at that big data, we're trying to see how we can use that supercomputing process, to see if there is an opportunity for us to use supercomputing in oil and gas development," Bennett said.
"Not on an individual company's basis, but to unlock some of these questions that we have. When you look at predictive analytics and you look at big data, you need that very fast supercomputing power to potentially unlock some of these mysteries in the shale. So we are in the early stages of developing a program where we can hopefully utilize the supercomputer capacity to unlock some of these universal mysteries of oil and gas."
When asked how soon they could do this, he joked, "My boss asked how quickly we can get it done, too."
"In order to compile the data, work out the data with companies, and have that conversation, we have to gather that data, big data. It means a lot of data has to be acquired. So we're in the very beginning process of acquiring data and seeing if there's an opportunity to start looking at different algorithms to go at it.
"It's not going to be a next year thing. But hopefully, in the next few years, we'll have some questions answered."
As an example, he described taking a subset of data from a basin to look at anomalies and similarities.
"There's been a lot of data that's been acquired by these companies over the last decade of field development. Being able to clean up that data, use that data, and to start to see similarities and new predictive analytics through algorithms and physics-based analysis, and hopefully be able to increase the EUR through that big data approach, through these supercomputers," Bennett said.
"When you look at big data, we know, right now, what works. But ultimately we want to improve resource recovery, the EUR, the estimated ultimate recovery, of these wells. And by doing that, going through these massive amounts, reams and reams of data. The problem with all these reams of data is it takes months, even years, to compile that data and to be able to understand it better. With those supercomputers, if we can do it in a more real-time manner, we could have real-time changes to the drilling program, whether it's the drilling portion or completions, for each well."
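For readers unfamiliar with the term, EUR is the total volume a well is expected to produce over its life, conventionally estimated by fitting a decline curve to the well's production history. The sketch below is a minimal illustration of that idea using an Arps hyperbolic decline model, one common physics-based approach; it is not the DOE's method, and every parameter value in it is hypothetical.

```python
# Minimal sketch: estimating a well's EUR with an Arps hyperbolic decline curve.
# All well parameters below are hypothetical, for illustration only.

def arps_rate(t, qi, di, b):
    """Production rate (bbl/day) after t days under Arps hyperbolic decline."""
    return qi * (1.0 + b * di * t) ** (-1.0 / b)

def arps_cumulative(t, qi, di, b):
    """Cumulative production (bbl) after t days (valid for b != 1)."""
    return qi / (di * (1.0 - b)) * (1.0 - (1.0 + b * di * t) ** (1.0 - 1.0 / b))

# Hypothetical well: 1,000 bbl/day initial rate, 0.3%/day initial decline, b = 0.8
qi, di, b = 1000.0, 0.003, 0.8

# Treat EUR as cumulative production when the rate falls to an economic limit of 10 bbl/day.
economic_limit = 10.0
# Invert the rate equation to find when that limit is reached.
t_limit = ((qi / economic_limit) ** b - 1.0) / (b * di)

eur = arps_cumulative(t_limit, qi, di, b)
print(f"Time to economic limit: {t_limit / 365.0:.1f} years")
print(f"Estimated ultimate recovery: {eur:,.0f} bbl")
```

In practice, the leverage comes from how the decline parameters are chosen: fitting them across the large, cleaned-up cross-well datasets Bennett describes, rather than one well at a time, is one way a big data approach could sharpen EUR estimates and feed back into drilling and completion decisions.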