We have attempted to present, in a condensed way, the different properties of genetic algorithms, their merits as exploration and optimisation heuristics, and the challenges they face, notably in computational efficiency, parameter configuration, realism and robustness. We hope to have covered the search space of this objective well and to have identified the most promising points of its landscape.

Genetic algorithms are particularly suitable for solving sparse problems in large, rugged search spaces. They require very few assumptions about the properties of the fitness landscape, and they cope well with local optima and with multiple extrema. They are applicable to a wide range of problems and are quite efficient at returning good solutions quickly. They have achieved notable success in the evolution of neural networks, multi-objective optimisation and genetic programming. They are more than optimisers: they generate novelty, they are directly connected to artificial life in that they foster the emergence of complex digital ecologies, and they bear a clear relation to open-ended evolution, which can greatly improve the modelling of evolutionary, adaptive systems.

Genetic algorithms have nonetheless faced significant issues in computational efficiency and cost, and in parameter configuration. The latest developments of genetic algorithms towards meta-learning architectures, learning how to learn, and the endogenous generation of learning environments have placed AI-generating algorithms as a credible means to produce general AI (Clune, 2019). Open-endedness in their evolution could lead them to produce interesting and increasingly complex discoveries, and to do so indefinitely (Stanley et al., 2019).
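To make the basic loop shared by the methods surveyed above concrete, the following is a minimal sketch of a genetic algorithm in Python, using tournament selection, one-point crossover and bit-flip mutation on a OneMax toy problem. Function names and parameter values here are illustrative choices of ours, not drawn from any particular library discussed in the text.

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=60, p_mut=0.05, seed=0):
    """Minimal generational GA sketch: bit-string individuals, tournament
    selection, one-point crossover, per-bit mutation. Parameters are
    illustrative defaults, not tuned recommendations."""
    rng = random.Random(seed)
    # Random initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament: the fitter of two random individuals wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation: each bit flips with probability p_mut.
            child = [b ^ (rng.random() < p_mut) for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1-bits; the optimum is all ones.
best = evolve(sum)
```

Even this bare-bones variant illustrates the point made above: no gradient or smoothness assumption about the fitness function is used, only the ability to evaluate and compare candidate solutions.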