State-of-the-art deep learning models for semantic segmentation and object detection are becoming increasingly general, yet they have not proven useful in real-world tasks such as robotic manipulation of dense circuit boards. Consider a cellphone circuit board: its small components, separated by gaps of only a few hundred microns, challenge any manipulation task. To enable effective automation and robotics in manufacturing, we tackle this problem by building a convolutional neural network optimized for multi-task learning of instance-level semantic segmentation and detection, while preserving the crisp boundaries of small components on dense boards. We explore the feature-learning mechanism and add boundary detection as an auxiliary task, encouraging the network to learn the objects' geometric properties alongside the other objectives. We evaluate the networks on the visual tasks, both separately and jointly, and examine the extent of their generalization on the recycling phone dataset. Our network outperforms the state of the art on the visual tasks while maintaining high computational speed. To support research on this globally pressing problem, we provide a benchmark for e-waste visual tasks, and release our collected dataset and code, as well as demos on our in-lab robot, at https://github.com/MIT-MRL/recybot.
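One common way to realize the multi-task objective sketched above is a weighted sum of a per-pixel segmentation cross-entropy and an auxiliary boundary binary cross-entropy. The sketch below is illustrative only: the function names, tensor shapes, and `boundary_weight` value are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multitask_loss(seg_logits, seg_labels, bnd_logits, bnd_labels,
                   boundary_weight=0.5):
    """Hypothetical combined loss: segmentation cross-entropy plus a
    weighted auxiliary boundary-detection binary cross-entropy."""
    # Segmentation term: cross-entropy over C classes at each pixel.
    probs = softmax(seg_logits)  # shape (H, W, C)
    h, w, _ = probs.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], seg_labels]
    seg_loss = -np.log(picked + 1e-12).mean()
    # Boundary term: per-pixel sigmoid + binary cross-entropy.
    p = 1.0 / (1.0 + np.exp(-bnd_logits))  # shape (H, W)
    bnd_loss = -(bnd_labels * np.log(p + 1e-12)
                 + (1.0 - bnd_labels) * np.log(1.0 - p + 1e-12)).mean()
    return seg_loss + boundary_weight * bnd_loss
```

The auxiliary term only adds a scalar penalty during training; at inference the boundary head can be dropped, so the extra supervision sharpens component edges without slowing the forward pass on the main tasks.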