Edge-Aware Federated Learning (EAFL) has emerged as a promising paradigm for addressing the dual challenges of data privacy and computational limitations in resource-constrained environments. Traditional federated learning (FL) approaches often overlook the heterogeneous nature of edge devices, leading to suboptimal performance and increased communication overhead. This research introduces an Edge-Aware Federated Learning framework that dynamically adapts model training to device-specific compute capabilities, network conditions, and data distributions. By integrating edge-awareness into the aggregation and optimization processes, the proposed method improves model accuracy while reducing latency and energy consumption. Furthermore, EAFL incorporates lightweight privacy-preserving techniques, such as differential privacy and secure aggregation, tailored to run efficiently on low-power devices without compromising data confidentiality. Experimental evaluations on benchmark datasets across diverse edge scenarios demonstrate that EAFL achieves up to a 15% improvement in model accuracy and a 30% reduction in communication costs compared to conventional FL methods. These results affirm the potential of edge-aware strategies to narrow the performance gap in federated systems, making FL more viable for real-world applications such as smart healthcare, IoT, and autonomous systems. This work lays a foundation for future research in intelligent, privacy-preserving learning at the edge.
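The abstract does not specify how edge-awareness enters the aggregation step; one plausible reading is a FedAvg-style weighted mean in which each client's contribution is scaled by both its local sample count and a score summarizing its compute and network capability. The sketch below is a minimal illustration under that assumption; the function name `edge_aware_aggregate` and the `capability_scores` parameter are hypothetical and not taken from the paper.

```python
import numpy as np

def edge_aware_aggregate(client_weights, client_samples, capability_scores):
    """Aggregate client parameter vectors with edge-aware coefficients.

    A FedAvg-style weighted mean where each client's contribution is
    scaled by its local sample count and a normalized capability score
    (e.g., derived from measured throughput and link quality). This is
    an illustrative assumption, not the paper's stated rule.
    """
    # Combine data volume and device capability into one coefficient per client.
    coeffs = np.asarray(client_samples, dtype=float) * np.asarray(capability_scores, dtype=float)
    coeffs /= coeffs.sum()  # normalize so the coefficients sum to 1

    # Weighted average of the stacked (flattened) parameter vectors.
    stacked = np.stack(client_weights)  # shape: (num_clients, num_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Example: three clients with unequal data sizes and device scores.
clients = [np.random.randn(10) for _ in range(3)]
global_update = edge_aware_aggregate(
    clients,
    client_samples=[100, 400, 50],
    capability_scores=[0.9, 0.5, 1.0],
)
```

Weighting by sample count alone recovers standard FedAvg; the capability term is one simple way a server could down-weight clients whose updates arrive stale or truncated due to weak hardware or links.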
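The abstract also mentions lightweight differential privacy on low-power devices without giving details. A common, inexpensive client-side pattern consistent with that description is clip-and-noise before upload: bound each update's norm, then add Gaussian noise. The sketch below assumes that pattern; `dp_sanitize` and its parameters are illustrative names, not the paper's mechanism.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, noise_mult=0.8, rng=None):
    """Clip a client update to a norm bound and add Gaussian noise.

    A minimal sketch of Gaussian-mechanism differential privacy applied
    per round on the client, assumed (not confirmed) to match the
    paper's "lightweight" DP component.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    # Scale down (never up) so the update norm is at most clip_norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise scale is proportional to the clipping bound (sensitivity).
    noise = rng.normal(0.0, noise_mult * clip_norm, size=update.shape)
    return clipped + noise
```

Because clipping and sampling a Gaussian are cheap relative to local training, this style of sanitization adds little compute on constrained devices, which is consistent with the abstract's efficiency claim.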