Data Science and Machine Learning SIG: The Weaker the Better: Weak Supervision for Training Neural Networks for Seismic Interpretation - Oct 28th

Complete Title: The Weaker the Better: Weak Supervision for Training Neural Networks for Seismic Interpretation; an Approach via Constrained Optimization

Sponsored By: Quantico

Online-only event - you must pre-register to receive access information.


Speaker: Bas Peters, Emory University

Seismic interpretation is a problem with abundant data but few annotations/labels. Ground-truth locations of salt structures or facies are available from borehole measurements, but boreholes are expensive and sparsely distributed. Workflows therefore often include manually labeling many seismic images for training. In that case, a neural network is trained to mimic the human interpreter, not to achieve the best possible accuracy. Manual labeling is also time-intensive, and we would like to avoid it.

We propose a problem formulation that does not require any annotated/labeled seismic images. Instead, weak information about the targets/horizons/facies of interest is sufficient for training: for example, rough bounding boxes that include the layer or salt structure of interest. Given several images accompanied by bounding boxes, the network can learn to differentiate between the target of interest and 'background'. Alternatively, weak supervision may consist of arbitrarily shaped points/lines/shapes that annotate only what is not interesting. In both cases, the weak supervision encodes minimum and maximum bounds on the expected 'size' or 'surface area' of the object that we want to segment, as illustrated in the sketch below.
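
As a hypothetical illustration (not code from the talk), a rough bounding box already implies both bounds: the target cannot be larger than the box, and an assumed minimum fill fraction (a placeholder parameter here) gives a lower bound. A minimal Python sketch:

    def size_bounds_from_box(box, fill_min=0.25):
        # box = (row0, row1, col0, col1): a rough box known to contain
        # the target. Its area is an upper bound on the object's area;
        # fill_min is an assumed minimum fill fraction (not from the talk).
        area = (box[1] - box[0]) * (box[3] - box[2])
        return fill_min * area, area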

Applying such ideas has been problematic because many formulations for training networks with weak supervision lead to computationally expensive alternating optimization procedures. We propose a new formulation based on point-to-set distance functions, in which constraint sets on the output of a neural network encode the weak information. An examination of the Lagrangian structure of the problem reveals a way to merge our approach seamlessly into standard backpropagation-based training. We demonstrate that we can segment salt structures and layers in two different datasets without any annotation of those targets.
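
A minimal sketch of how such a formulation can plug into standard training, assuming PyTorch and a single linear 'size' constraint a <= sum(y) <= b on the flattened network output y (the names net, image, a, and b are placeholders, not from the talk):

    import torch

    def project_onto_size_set(y, a, b):
        # Euclidean projection onto the convex set {y : a <= sum(y) <= b}:
        # clamp the sum into [a, b] and spread the correction evenly.
        s = y.sum()
        return y + (s.clamp(a, b) - s) / y.numel()

    def point_to_set_loss(y, a, b):
        # Squared point-to-set distance 0.5 * ||y - P_C(y)||^2; for a
        # convex set its gradient is y - P_C(y), so detaching the
        # projection lets ordinary backpropagation do the rest.
        p = project_onto_size_set(y.detach(), a, b)
        return 0.5 * ((y - p) ** 2).sum()

    # usage sketch:
    # y = torch.sigmoid(net(image)).flatten()
    # loss = point_to_set_loss(y, a, b)
    # loss.backward()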


Speaker Biography: Bas Peters, Emory University
Bas Peters is an assistant professor in the mathematics department at Emory University. Previously, Bas worked for Computational Geosciences Inc. as a research scientist; he received his PhD from the University of British Columbia in 2019. His main research interests are constrained optimization; the design, optimization, and regularization of deep neural networks; geoscientific and geospatial applications; inverse problems; image processing; and numerical linear algebra.


** Access information will be sent to all registrants after registration closes.

When
10/28/2020 11:00 AM - 12:00 PM
Central Daylight Time
