Weed detection and removal are essential tasks for improving crop yield, as weeds compete with crops for resources. Manual weed control methods are time-consuming, labour-intensive, and error-prone. The critical challenge is to reliably and accurately detect weeds in the field. To achieve this, UAV and ground-based sensing are combined with deep learning, especially CNN models, to automate crop management. Encoder-decoder CNN architectures are preferred for semantic segmentation; however, they perform poorly on low-level features such as noisy boundaries. This paper presents our work on paddy rice data collection for crop-weed segmentation and a network model based on an overcomplete representation (Kite-Net) augmented on a transfer learning-based encoder-decoder model (TernausNet) for segmenting each RGB image into three classes: crop, weed, and background. The network produces convincing segmentation mask outputs on images with overlapping crops and weeds. © 2022 IEEE.
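As a minimal illustration of the output format described above (not the paper's code), the sketch below shows how per-pixel scores from a three-class segmentation head are typically reduced to a label mask by taking the argmax class at each pixel. The class index assignment (0 = background, 1 = crop, 2 = weed) and the function name are assumptions for illustration only.

```python
# Illustrative sketch: converting per-pixel class scores from a
# 3-class segmentation head into a label mask.
# Assumed class indices: 0 = background, 1 = crop, 2 = weed.

def logits_to_mask(logits):
    """logits: H x W x 3 nested lists of per-pixel class scores.
    Returns an H x W mask of argmax class indices."""
    mask = []
    for row in logits:
        mask_row = []
        for scores in row:
            # pick the class with the highest score at this pixel
            best = max(range(len(scores)), key=lambda c: scores[c])
            mask_row.append(best)
        mask.append(mask_row)
    return mask

# tiny 2x2 example: each pixel holds scores for [background, crop, weed]
example = [
    [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
    [[0.2, 0.1, 0.7],   [0.6, 0.3, 0.1]],
]
print(logits_to_mask(example))  # → [[0, 1], [2, 0]]
```

In practice the scores would come from the final convolutional layer of the network, and the resulting mask would be colour-coded for visual inspection of crop-weed overlap regions.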