More than two-thirds of all the oil discovered in America to date remains in the ground, economically unrecoverable with current technology. About 218 billion barrels of it, a volume approaching the proven reserves of Saudi Arabia, lies at depths of less than 5,000 feet. This bypassed oil represents a huge target for the roughly 7,000 independent producers active in thousands of mature U.S. fields, which cumulatively account for a significant share of the country’s crude oil supply.
Much of this bypassed oil lies in difficult-to-access pockets. Predicting the location and size of these elusive, compartmentalized deposits is costly because it typically requires complex, compute-intensive reservoir models. Many independent producers cannot commit the personnel or buy the expensive supercomputer time required to build and operate the models needed to find and produce these overlooked stores of oil.
The Texas A&M research effort engineered a cost-effective way to streamline computer-generated reservoir models, yielding significant savings in computation time and manpower.
Reservoir characterization identifies “unswept” regions of high oil or gas saturation in these mature fields. In this process, geoscientists first employ computer models to develop an accurate picture, or characterization, of a productive oil reservoir. “History matching” is then used to calibrate the model by correlating its predictions of oil and gas production with the reservoir’s actual production history.
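In essence, history matching is a model-calibration problem: adjust a reservoir model's unknown parameters until its predicted production best reproduces the recorded history. The toy sketch below illustrates the idea, not the A&M software, using an assumed exponential-decline production model and a brute-force parameter search; the data, parameter names, and ranges are all hypothetical.

```python
import math

# Hypothetical production history (time in years, rate in bbl/day),
# generated here from an exponential decline q(t) = qi * exp(-D * t)
# with qi = 500 and D = 0.3, so the "true" parameters are known.
times = [0, 1, 2, 3, 4, 5]
observed = [500.0 * math.exp(-0.3 * t) for t in times]

def misfit(qi, D):
    """Sum of squared errors between model predictions and the history."""
    return sum((qi * math.exp(-D * t) - q) ** 2
               for t, q in zip(times, observed))

# Brute-force search over candidate parameters: the essence of history
# matching is choosing the model that best reproduces the record.
best = min(
    ((qi, D) for qi in range(400, 601, 10)
             for D in (d / 100 for d in range(10, 51))),
    key=lambda p: misfit(*p),
)
print(best)  # → (500, 0.3)
```

Real history matching replaces the toy decline curve with a full reservoir simulator and the grid search with far more sophisticated optimization, which is why the computational cost the article describes is so high.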
In the Texas A&M project, researchers developed a novel, computerized method for rapidly interpreting field tracer tests. This innovation promises a cost-effective, time-saving solution for estimating the amount of oil remaining in bypassed reservoir compartments. The new method integrates computer simulations with history matching techniques, allowing scientists to design tracer tests and interpret the data using practical PC-based software, a process that is much faster than conventional history matching.
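To give a flavor of what tracer-test interpretation involves, the sketch below applies a standard temporal-moment analysis, not necessarily the A&M method: the mean residence time of an injected tracer, multiplied by the injection rate, estimates the pore volume swept between wells. The times, concentrations, and injection rate are hypothetical.

```python
# Hypothetical field data: sampling times (days) and tracer
# concentration (ppm) measured at a production well.
times = [0, 10, 20, 30, 40, 50, 60, 70, 80]
conc  = [0.0, 0.5, 2.0, 4.0, 5.0, 4.0, 2.0, 0.5, 0.0]

def trapz(ys, xs):
    """Trapezoidal-rule integral of ys over xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2
               for i in range(len(xs) - 1))

# Mean residence time: t_bar = integral(t*C dt) / integral(C dt).
zeroth = trapz(conc, times)
first = trapz([t * c for t, c in zip(times, conc)], times)
t_bar = first / zeroth  # days

injection_rate = 300.0  # bbl/day (assumed constant)
swept_volume = injection_rate * t_bar  # swept pore volume, bbl
print(t_bar, swept_volume)  # → 40.0 12000.0
```

Comparing such a swept-volume estimate with the compartment's total pore volume is one way to gauge how much oil a waterflood has left behind, which is the kind of question the A&M software is designed to answer quickly on a PC.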