Federated learning (FL) plays a significant role in distributed model training, enabling secure collaborative learning. Despite its advantages, FL architectures often suffer from malicious intrusions that intentionally distort model updates to degrade performance. Byzantine attacks are among the most common such threats, causing model divergence, irregular gradient behaviour, and significant degradation in classification performance. Existing defence systems often fail to detect attacks that exploit the temporal and structural weaknesses of the FL environment. To overcome these limitations, this article proposes a novel Siamese Graph Gated neural network to mitigate Byzantine attacks during FL training. The proposed algorithm combines Siamese principles with a gated network to capture highly discriminative temporal patterns that reflect client behaviour. Comprehensive experiments were conducted on the FEMNIST dataset, with Byzantine attacks simulated through sign-flipping, scaling, and the addition of random noise. To demonstrate the merit of the proposed framework, various evaluation metrics were computed and benchmarked against existing FL methods. Experimental findings show that the proposed model attains an accuracy of 0.999, precision of 0.999, recall of 0.999, specificity of 0.9980, and F1-score of 0.9981 in detecting the attacks, and ablation studies further confirm that the proposed framework enhances detection and robustness. The framework offers valuable insights for mitigating Byzantine attacks in federated learning environments.
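The three simulated attack modes named above (sign-flipping, scaling, and added random noise) can be sketched as simple perturbations applied to a client's flattened gradient update. The function name, default magnitudes, and the use of NumPy below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

def byzantine_perturb(update, mode, scale=10.0, noise_std=1.0, rng=None):
    """Apply a simulated Byzantine perturbation to a client's model update.

    `update` is a flattened gradient vector; `scale` and `noise_std` are
    hypothetical attack magnitudes chosen for illustration.
    """
    rng = rng or np.random.default_rng()
    if mode == "sign_flip":
        return -update                      # invert the update direction
    if mode == "scaling":
        return scale * update               # magnify the update
    if mode == "noise":
        return update + rng.normal(0.0, noise_std, size=update.shape)
    raise ValueError(f"unknown attack mode: {mode}")

honest = np.array([0.5, -0.2, 0.1])
print(byzantine_perturb(honest, "sign_flip"))   # [-0.5  0.2 -0.1]
```

A detection framework such as the one proposed would then need to distinguish updates like these from honest ones based on the temporal patterns of each client's behaviour.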