The document presents an adaptive modified backpropagation algorithm designed to speed up convergence and reduce training error in feedforward neural networks. The algorithm derives adaptive learning rates from the differential errors of both the output and hidden layers, eliminating the need to fix a learning rate by trial and error. Simulation results on benchmark problems such as XOR and the Iris dataset show faster convergence than traditional backpropagation.
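The summary does not give the paper's exact update rule, so the sketch below only illustrates the general idea under stated assumptions: a standard two-layer backpropagation loop on XOR where each layer's learning rate is scaled by the current magnitude of that layer's own error signal rather than held fixed. The specific scaling formula, the hidden-layer size, and `base_eta` are illustrative assumptions, not the paper's method.

```python
# Sketch of backpropagation with per-layer adaptive learning rates on XOR.
# The adaptation rule (scaling each layer's step by the mean magnitude of
# its error signal) is an illustrative assumption, not the paper's formula.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR benchmark: 2 inputs, 3 hidden units, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 3))   # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(scale=1.0, size=(3, 1))   # hidden -> output weights
b2 = np.zeros(1)

base_eta = 0.5  # assumed base step; the adaptive factor rescales it per layer

for epoch in range(5000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)           # hidden activations
    Y = sigmoid(H @ W2 + b2)           # network output

    # Backward pass (standard sigmoid deltas)
    err_out = T - Y                    # differential error at the output layer
    delta_out = err_out * Y * (1 - Y)
    err_hid = delta_out @ W2.T         # error propagated back to the hidden layer
    delta_hid = err_hid * H * (1 - H)

    # Hypothetical adaptation: each layer's rate grows with the mean magnitude
    # of its own error signal, so the step relaxes toward base_eta as errors fall.
    eta_out = base_eta * (1.0 + np.mean(np.abs(err_out)))
    eta_hid = base_eta * (1.0 + np.mean(np.abs(err_hid)))

    # Weight updates with layer-specific learning rates
    W2 += eta_out * H.T @ delta_out
    b2 += eta_out * delta_out.sum(axis=0)
    W1 += eta_hid * X.T @ delta_hid
    b1 += eta_hid * delta_hid.sum(axis=0)

mse = float(np.mean((T - Y) ** 2))
print(f"final MSE on XOR: {mse:.4f}")
```

The point of the per-layer rates is that the output and hidden layers see error signals of different magnitudes, so driving each layer's step size from its own error avoids hand-tuning a single global rate for both.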