Detecting copy-move forgery has become a crucial task owing to the availability of highly sophisticated image-processing tools. Conventional copy-move forgery detection (CMFD) approaches struggle to detect tampered images in both cross-domain and domain-specific settings, regardless of the source of domain knowledge. To overcome these constraints, this work proposes two contributions: a generalized CMFD and a domain-specific CMFD, both utilizing a dual feature representation jointly extracted from handcrafted and deep features. The proposed CMFD comprises dual feature extraction, classification, and localization phases. In the generalized CMFD, the dual feature representation combines hybrid handcrafted features with EfficientNetV2B3-assisted deep features, enabling accurate detection of tampered images using a Vision Transformer (ViT). It then localizes the forged region in the tampered image based on inter- and intra-block mapping over pixel-level clusters. In the domain-specific CMFD, the dual feature representation with VGG16-assisted deep feature extraction additionally employs domain-invariant representation learning for superpixel-level analysis. It relies on an enhanced Swin Transformer with efficient local self-attention over shifted windows to capture global context during tampered-image detection. Finally, the domain-specific localization exploits the domain-invariant feature space with correlation analysis, in addition to inter- and intra-block mapping, to precisely detect the forged region during postprocessing. Experimental results on different benchmark image datasets demonstrate the notable performance of DFRFD for both the generalized and domain-specific CMFD algorithms; in particular, the generalized CMFD algorithm yields 97% accuracy on the CoMoFoD dataset.
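The dual feature representation described above can be illustrated with a minimal, dependency-light sketch. This is not the paper's implementation: the handcrafted descriptor (channel statistics plus a gradient-magnitude histogram) and the deep-feature stand-in (a fixed random projection in place of a real EfficientNetV2B3 embedding) are assumptions chosen only to show the joint handcrafted/deep concatenation.

```python
import numpy as np

def handcrafted_features(img):
    # Illustrative handcrafted descriptor (stand-in for the paper's hybrid
    # features): per-channel mean/std plus an 8-bin gradient-magnitude histogram.
    means = img.mean(axis=(0, 1))
    stds = img.std(axis=(0, 1))
    gray = img.mean(axis=2)
    gx = np.diff(gray, axis=1)
    gy = np.diff(gray, axis=0)
    mag = np.sqrt(gx[:-1, :] ** 2 + gy[:, :-1] ** 2)
    hist, _ = np.histogram(mag, bins=8, range=(0.0, mag.max() + 1e-8))
    hist = hist / hist.sum()
    return np.concatenate([means, stds, hist])

def deep_features(img, dim=1280):
    # Placeholder for an EfficientNetV2B3 embedding: a fixed random projection
    # of the flattened image keeps this sketch free of deep-learning deps.
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((img.size, dim)) / np.sqrt(img.size)
    return img.ravel() @ proj

def dual_representation(img):
    # Jointly represent the image by concatenating handcrafted and deep
    # descriptors into a single feature vector for the downstream classifier.
    return np.concatenate([handcrafted_features(img), deep_features(img)])

img = np.random.default_rng(1).random((64, 64, 3))
feat = dual_representation(img)
```

In the paper's pipeline, the resulting vector would feed the ViT-based classifier (generalized CMFD) or the enhanced Swin Transformer (domain-specific CMFD); here the downstream models are omitted.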