Most dense SLAM systems rely on stereo or RGB-D cameras to sense the surrounding environment. Existing RGB-D systems suffer from two primary limitations: at any given time instant, such sensors can scan only the part of the world directly in front of them, owing both to the physical range limits of the sensing device and to occlusions caused by other objects in the scene. Humans, in contrast, have the remarkable ability to anticipate and complete a scene, filling in the holes in sensor scans by leveraging prior knowledge of the appearance of man-made environments. The current best means by which SLAM systems reason about holes in RGB-D scans is to incorporate smoothness regularizers. DeepFusion is a dense semantic SLAM system that produces complete models of the environment using less data than state-of-the-art SLAM systems.
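To make the limitation of smoothness regularizers concrete, the following is a minimal sketch (not part of DeepFusion; the function name `fill_holes_smooth` and the Jacobi-iteration formulation are illustrative assumptions) of how a pure smoothness prior fills holes in a depth map: invalid pixels relax toward the average of their neighbours, which interpolates smoothly between observed depths but cannot hallucinate structure that was never seen.

```python
import numpy as np

def fill_holes_smooth(depth, mask, iters=500):
    """Illustrative sketch of a smoothness regularizer (not DeepFusion's
    method): valid pixels (mask == True) stay fixed, while hole pixels are
    repeatedly replaced by the mean of their 4-neighbours (Jacobi
    iterations for Laplace's equation). The result is a smooth
    interpolation of the observed depths."""
    # Rough initial guess for the holes: the mean of the valid depths.
    d = np.where(mask, depth, depth[mask].mean())
    for _ in range(iters):
        # Average of the four axis-aligned neighbours.
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                      np.roll(d, 1, 1) + np.roll(d, -1, 1))
        # Only hole pixels are updated; observed depths are kept fixed.
        d = np.where(mask, depth, avg)
    return d
```

Because the smoothness prior can only interpolate, it recovers, e.g., a planar surface across a hole, but it can never complete an occluded object the sensor has not observed, which is precisely the gap that learned priors aim to close.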