Patients in neonatal and pediatric intensive care units often require catheters and tubes, collectively referred to as "lines," to sustain life. These patients undergo a series of chest radiographs throughout their stay to ensure proper line placement, and each study must be assessed by a radiologist, a time-consuming, labor-intensive process. Here we employ deep learning approaches to automate line monitoring by segmenting the chest into medically relevant regions, locating lines, and determining whether each line is appropriately placed relative to those regions. In an IRB-approved study, pediatric chest radiographs were collected and annotated with custom software in which users drew boundaries around seven regions of the chest: left and right lung, left and right subdiaphragm, spine, mediastinum, and carina. We trained a U-Net, a type of fully convolutional neural network for biomedical image segmentation, implemented in Keras on 240 chest radiographs and their binary masks. On a test set of 43 radiographs, our model achieved 92.3% mean pixel accuracy and a mean Dice coefficient of 0.768. Qualitatively, the model produces realistic predictions for large regions such as the lungs and spine. However, in some cases the boundaries between regions are unrealistic and, most crucially, the carina (the point at which the trachea splits into the two main bronchi) is never correctly classified. Work is ongoing to implement weighted loss functions to overcome these challenges. At present, this approach offers an initial step toward the goal of providing automatic, anatomic context for line placement assessment.
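To make the evaluation metric and the proposed remedy concrete, the following is a minimal NumPy sketch of the Dice coefficient used above and of a per-class weighted pixel loss of the kind being explored for the carina problem. The function names, the weighting scheme, and all parameters here are illustrative assumptions, not the authors' actual implementation (which is in Keras).

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks (arrays of 0s and 1s)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks.
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def weighted_pixel_loss(probs, target, class_weights, eps=1e-7):
    """Illustrative per-pixel cross-entropy with per-class weights.

    probs:         (H, W, C) softmax probabilities over C regions
    target:        (H, W) integer class labels
    class_weights: (C,) weight per class; a tiny region such as the
                   carina would receive a large weight so that
                   misclassifying its few pixels costs more.
    """
    h, w, _ = probs.shape
    rows, cols = np.indices((h, w))
    p = probs[rows, cols, target]        # probability assigned to the true class
    weights = class_weights[target]      # weight of each pixel's true class
    return float(np.mean(-weights * np.log(p + eps)))
```

A class-frequency-based weighting (e.g. inverse pixel frequency) is one common choice; it counteracts the tendency of an unweighted loss to ignore regions, like the carina, that occupy only a handful of pixels per image.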