Quantum machine learning is an emerging technology with the potential to address many problems in data science. Its models are built from large parameterized quantum circuits with many trainable parameters, which makes both the model and its training process difficult to analyze, so a mathematical description of the training problem in the regime of a large parameter space is needed. Previous work on classical neural networks showed that as the number of parameters $n$ grows, the loss function becomes convex and the approximation error of the network scales as $O(n^{-1})$. In this thesis, we extend those results to the quantum setting. We prove that the loss landscape is asymptotically convex, and we run numerical experiments aimed at verifying the $O(n^{-1})$ error scaling, concluding that the latter requires further study.
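Purely as an illustration of the kind of numerical experiment described above (this is not the circuit, target function, or optimizer used in the thesis), the sketch below trains a toy single-qubit data re-uploading model with an increasing number of trainable parameters $n$ and records the final training error, the quantity one would inspect for an $O(n^{-1})$ trend. Every choice here, including the gate layout, the triangle-wave target, and the hyperparameters, is an assumption made for the example.

```python
# Toy sketch (assumption: not the thesis's experiment): fit a target function
# with a single-qubit parameterized circuit and watch how the final training
# error behaves as the number of trainable parameters n grows.
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    return np.array([[np.exp(-1j * phi / 2), 0], [0, np.exp(1j * phi / 2)]])

def model(thetas, x):
    """Expectation <Z> after alternating data rotations RZ(x) and trainable RY(theta)."""
    psi = np.array([1.0 + 0j, 0.0 + 0j])
    for theta in thetas:
        psi = ry(theta) @ (rz(x) @ psi)
    return float(np.abs(psi[0]) ** 2 - np.abs(psi[1]) ** 2)

def loss(thetas, xs, ys):
    preds = np.array([model(thetas, x) for x in xs])
    return float(np.mean((preds - ys) ** 2))

def grad(thetas, xs, ys, eps=1e-6):
    """Central finite-difference gradient of the mean-squared loss."""
    g = np.zeros_like(thetas)
    for k in range(len(thetas)):
        shift = np.zeros_like(thetas)
        shift[k] = eps
        g[k] = (loss(thetas + shift, xs, ys) - loss(thetas - shift, xs, ys)) / (2 * eps)
    return g

xs = np.linspace(-np.pi, np.pi, 40)
ys = 2 * np.abs(xs) / np.pi - 1  # even triangle-wave target, values in [-1, 1]

for n in (2, 4, 8, 16):  # number of trainable parameters
    thetas = rng.uniform(-np.pi, np.pi, size=n)
    for _ in range(400):  # plain gradient descent
        thetas -= 0.2 * grad(thetas, xs, ys)
    print(f"n = {n:2d}   final training MSE = {loss(thetas, xs, ys):.4f}")
```

Printing the final mean-squared error for each $n$ gives the raw data from which a scaling exponent could be estimated, for example by a log-log fit of error against $n$; whether that exponent approaches $-1$ for realistic quantum models is exactly the question the thesis leaves for further study.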