Large Language Models have recently gained popularity due to their unprecedented language comprehension abilities. This attention has attracted criticism and concern, as well as keen interest in how these models can be used responsibly to benefit society.



Large Language Models are built on a complex architecture and a set of mechanisms that enable remarkable capabilities such as dialogue, reasoning, text completion, and translation, among others.



These models are founded on neural networks, structures built from layers of interconnected neurons. In this research project, we examine several fundamental principles for a better understanding of LLMs, starting with the history of these models. We also describe the basic structure underlying LLMs, the transformer: why it is useful, how it works, and the techniques it builds on, including word embeddings such as Word2Vec, the softmax function, and self-attention, as well as how these models are trained through different stages before becoming fully functional.
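To make two of these mechanisms concrete, the following minimal sketch implements the softmax function and a single head of scaled dot-product self-attention (the formulation from Vaswani et al.'s "Attention Is All You Need") in plain NumPy. The dimensions and weight matrices are arbitrary toy values chosen for illustration, not taken from any particular model.

import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    # One head of scaled dot-product self-attention (no masking).
    q, k, v = x @ w_q, x @ w_k, x @ w_v   # project tokens to queries, keys, values
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)       # pairwise similarities, scaled by sqrt(d_k)
    return softmax(scores) @ v            # weight the values by attention probabilities

# Toy example: 4 tokens, embedding size 8, head size 4 (arbitrary values).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # -> (4, 4)

Each output row is a weighted mixture of the value vectors, with weights given by the softmax of the scaled query-key similarities; this is the core operation repeated across heads and layers in a transformer.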



Different types of LLMs, especially Multi-Modal Large Language Models, are a recent development in the field, giving LLMs the ability to work with images, text, and audio.



Additionally, we explore the hardware and maintenance requirements of these models. Moving on to ethical and moral considerations, we describe some of the advantages and disadvantages that LLMs bring, how they are being integrated into our lives, and how they affect fields such as medicine, translation, and education.