Synaptic devices promise drastically lower power consumption in artificial neural networks compared with CMOS memories. In this work, we demonstrate MoS2 synaptic transistors in n-FET, p-FET, and inverter configurations. Accounting for device non-idealities, we simulate the system-level performance of MLP and VGG-8 DNN architectures. The DNN is more robust to non-idealities, achieving ∼15% higher accuracy than the MLP, but at the cost of increased network complexity. This work explores the complexity-accuracy trade-off in ANNs for offsetting non-ideal device behavior. © 2023 IEEE.