As the use of Artificial Intelligence (AI) algorithms becomes more widespread, so does the need to evaluate their fairness, that is, the equity of their decision-making. It is crucial that an algorithm's recommendations do not reflect bias or discrimination, because its output informs real-world decisions and therefore affects people's lives. This study builds on previous research and proposes a new method to improve the fairness of an AI model without sacrificing its performance. To that end, the study employs several established fairness metrics in building a Random Forest model. These metrics are optimized and their trends analyzed to explore the balance needed to build a fair, equitable, and unbiased model that remains accurate enough to inform important decisions. This paper focuses on the Multi-Objective Ensemble Learning (MEL) method, as this algorithm considers both the model's performance and its fairness metrics.
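The following is a minimal sketch, not the authors' implementation, of the two objectives a multi-objective approach such as MEL trades off: a Random Forest's predictive accuracy and a fairness metric (here, the demographic parity difference). The synthetic data, the protected-attribute column `group`, and the specific metric choice are illustrative assumptions.

```python
# Sketch: scoring a Random Forest on performance and fairness jointly.
# Data and the protected attribute "group" are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 1,000 samples, 5 features, a binary protected attribute.
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)          # protected attribute (0 or 1)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, g_tr, g_te, y_tr, y_te = train_test_split(
    X, group, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# Objective 1: predictive performance (accuracy).
accuracy = (pred == y_te).mean()

# Objective 2: fairness, measured as the demographic parity difference --
# the gap in positive-prediction rates between the two groups.
rate_g0 = pred[g_te == 0].mean()
rate_g1 = pred[g_te == 1].mean()
dp_difference = abs(rate_g0 - rate_g1)

print(f"accuracy = {accuracy:.3f}, "
      f"demographic parity difference = {dp_difference:.3f}")
```

A multi-objective procedure would evaluate many candidate models (or ensemble members) on both quantities and select those that improve fairness without an unacceptable loss of accuracy.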