Signal region detection is a routine task in NMR spectroscopy, yet it is one of the few processing steps that is still typically performed manually. The reason is that the automatic method implemented in TopSpin often requires fine-tuning of multiple parameters to obtain satisfactory results. This limitation hinders the effective automatic processing of large and diverse sets of spectra, and favours the manual approach.
Recently, deep learning techniques have emerged as a powerful tool for recognition and segmentation tasks, proving able to match and even surpass human-level performance in a fraction of the time. These techniques could eventually enable fully automatic extraction and analysis of the information contained in NMR spectra.
Here, we take a first step toward this goal. We introduce sigreg, a deep learning algorithm for signal region detection in 1D 1H NMR spectra. We show that this method is robust and outperforms current Bruker solutions, achieving excellent accuracy on spectra acquired over a wide range of base frequencies without requiring any user input.
From the early days of nuclear magnetic resonance (NMR) it has been known that NMR signals yield quantitative information if the right experimental conditions are met. Relative concentrations of magnetically distinct nuclei are obtained by integrating the areas under the signals in an NMR spectrum, and absolute concentrations can be calculated using a standard of known concentration. This quantitative property has contributed to the widespread adoption of NMR over the years, and today underpins many applications such as qNMR, structure verification, and structure elucidation.
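The quantitative relationship described above can be illustrated with a minimal numerical sketch. The snippet below is not part of sigreg; it simply integrates the area under each signal region of a synthetic 1D spectrum (here, two Lorentzian lines with a 2:3 intensity ratio standing in for real data) and recovers the relative concentration ratio. The function name `integrate_regions` and the region boundaries are illustrative assumptions, not an established API.

```python
import numpy as np

def integrate_regions(ppm, intensity, regions):
    """Integrate the spectrum over each (start, end) ppm region.

    Returns one area per region, computed by trapezoidal integration
    over the points falling inside the region.
    """
    areas = []
    for start, end in regions:
        mask = (ppm >= min(start, end)) & (ppm <= max(start, end))
        areas.append(np.trapz(intensity[mask], ppm[mask]))
    return np.array(areas)

def lorentzian(x, x0, area, w=0.02):
    # Lorentzian line of total area `area`, centre x0, half-width w
    return area * (w / np.pi) / ((x - x0) ** 2 + w ** 2)

# Synthetic spectrum: two signals whose areas are in a 2:3 ratio,
# mimicking two proton environments with 2 and 3 equivalent nuclei.
ppm = np.linspace(0.0, 10.0, 10000)
intensity = lorentzian(ppm, 2.0, 2.0) + lorentzian(ppm, 7.0, 3.0)

areas = integrate_regions(ppm, intensity, [(1.8, 2.2), (6.8, 7.2)])
ratio = areas[1] / areas[0]  # recovers the 3:2 area ratio
```

Given a standard of known concentration, the same areas yield absolute concentrations by scaling each area per contributing nucleus against the standard's area per nucleus; in practice this is exactly why accurate, reproducible signal region detection matters for quantitative work.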