Classical error backpropagation (BP) optimizes a feedforward network using a symmetric feedback network that operates only during learning; at inference it is hidden and serves no runtime function. Here, we train dual feedforward-feedback networks, each with its own runtime objective, without invoking any additional hidden networks for learning. Specifically, we leverage the complementarity of a discrimination network and a reconstruction network to produce the learning signals for each one via inference from the other. We term this Symbiotic Learning (SL) because of the interdependence between the inference goal of one path and learning in the other. As proof of concept on MNIST, we show that SL approaches the digit-recognition accuracy of a BP-trained discriminative network while reconstructing digits nearly as well as a BP-trained autoencoder. Beyond co-optimizing two objectives, we observe that SL networks outperform BP networks under adversarial attack and under additive pixel noise. Thus, with little sacrifice in performance, SL represents an approach beyond BP that seeks mutually compatible feedforward and feedback pathways for more robust discrimination and reconstruction.
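The abstract does not specify SL's update rules, so the following is only a loose, illustrative sketch of the core idea (our own toy construction, not the paper's algorithm): a two-layer discriminative path and a two-layer reconstruction path, where each path's hidden layer takes its learning target from the other path's inference, and all updates are local delta rules with no backpropagation through the opposite network. All names (`W1`, `W2`, `V1`, `V2`) and the synthetic cluster data are assumptions standing in for MNIST.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for MNIST: 3 Gaussian clusters in 20-D, one class each.
d, c, n_hid, n = 20, 3, 32, 300
centers = rng.normal(size=(c, d))
labels = rng.integers(0, c, size=n)
X = centers[labels] + 0.3 * rng.normal(size=(n, d))
T = np.eye(c)[labels]                    # one-hot targets, shape (n, c)

# Discriminative path: x -> h -> y.  Reconstruction path: t -> h_fb -> x_hat.
W1 = 0.1 * rng.normal(size=(n_hid, d))
W2 = 0.1 * rng.normal(size=(c, n_hid))
V2 = 0.1 * rng.normal(size=(n_hid, c))
V1 = 0.1 * rng.normal(size=(d, n_hid))

relu = lambda z: np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

lr = 0.05
for epoch in range(500):
    # Inference in both paths (columns are samples).
    h = relu(W1 @ X.T)                   # forward hidden activity
    y = softmax(W2 @ h)                  # class probabilities
    h_fb = relu(V2 @ T.T)                # feedback hidden activity, target-driven
    x_hat = V1 @ h_fb                    # reconstruction

    # Local updates: each path's hidden target is the OTHER path's inference.
    W2 += lr * (T.T - y) @ h.T / n                    # delta rule at the output
    W1 += lr * ((h_fb - h) * (h > 0)) @ X / n         # hidden target from feedback path
    V1 += lr * (X.T - x_hat) @ h_fb.T / n             # delta rule at the reconstruction
    V2 += lr * ((h - h_fb) * (h_fb > 0)) @ T / n      # hidden target from forward path

# Evaluate both runtime objectives after training.
h = relu(W1 @ X.T)
y = softmax(W2 @ h)
x_hat = V1 @ relu(V2 @ T.T)
acc = float((y.argmax(axis=0) == labels).mean())
rec_err = float(np.mean((x_hat - X.T) ** 2))
print(f"train accuracy: {acc:.2f}, reconstruction MSE: {rec_err:.3f}")
```

The point of the sketch is structural: both networks run at inference time (one classifies, one reconstructs), and neither requires an extra learning-only feedback network, because the partner network's forward pass supplies the hidden-layer targets.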