Training Neural Networks with a Genetic Algorithm for Obstacle Avoidance in Simulated Autonomous Drones

Table: COMP6
Experimentation location: School, Home
Regulated Research (Form 1c): No
Project continuation (Form 7): No

Amer, K., Samy, M., Shaker, M., & Elhelw, M. (2021). Deep convolutional neural network based autonomous drone navigation. Thirteenth International Conference on Machine Vision.

Elajrami, M., Satla, Z., & Bendine, K. (2021). Drone Control using the Coupling of the PID Controller and Genetic Algorithm. Communications - Scientific Letters of the University of Zilina, 23(3), C75–C82.

Shivgan, R., & Dong, Z. (2020). Energy-Efficient Drone Coverage Path Planning using Genetic Algorithm. 2020 IEEE 21st International Conference on High Performance Switching and Routing (HPSR).

Additional Project Information

Project website: -- No project website --
Additional Resources: -- No resources provided --
Project files:

Research Plan:

A genetic algorithm is traditionally used to model natural selection within a simulated population. In this study, we use a genetic algorithm to select for neural networks that control drones effectively and to eliminate those that do not. To start, we initialize the neural networks with random weights and biases, assigning one neural network controller to each drone. Some drones will perform better than others, but initially most are incapable. We score each drone's fitness against certain criteria (Does the drone hit anything? Does it reach its target?) and eliminate a set percentage of the lowest scorers. We then crossbreed the remaining successful drones to repopulate, combining weights and biases from randomly chosen parents. Once we have a new population, we apply simple mutation (randomizing a few of the weights and biases) to the new drones to introduce variation. Over many runs of the simulation, the drones slowly increase in capability, with the less fit drones dying off and the more fit drones producing even fitter children. Essentially, we are optimizing the neural networks to produce a drone that best fits the given criteria. The only control we have over training is the fitness criteria, along with various hyperparameters for the genetic algorithm: how many drones we eliminate each generation, how fast the neural networks mutate, and how many drones are in the population.

Again, this study uses a genetic algorithm to train autonomous drones equipped with neural network controllers, aiming to optimize speed and obstacle avoidance. It is easy enough to get a drone to stay in the air, so we focus on the more complex task of ensuring that there are no costly collisions between drones or with obstacles, and that the drones reach their waypoints in a respectable amount of time. One study similarly automated drone pathing around obstacles by processing camera input with convolutional neural networks, with considerable success (Amer et al., 2021). Here, we also attempt obstacle avoidance, but using positional information and intercommunication between drones rather than the limited information acquired from a camera. Another study used a genetic algorithm to tune PID controllers for drone flight, showing that genetic algorithms are viable for complex drone control tasks (Elajrami et al., 2021). A third study employed a genetic algorithm to optimize drone paths for maximum area coverage at minimum energy cost; by reducing the number of turns while covering all the waypoints, the optimized paths consumed 2-5 times less energy than those of a traditional greedy algorithm (Shivgan & Dong, 2020). Our study aims to optimize the path of autonomous drones using a genetic algorithm, maximizing efficiency and minimizing collisions. It is hypothesized that the optimized drone control algorithm will be slightly slower than a straight-path algorithm but will travel much more safely, with very few collisions.

Questions and Answers

1. What was the major objective of your project and what was your plan to achieve it? 

The objective of this project was to apply a modern method of machine learning to the problem of efficient obstacle avoidance in drones to improve results over traditional algorithms.

a. Was that goal the result of any specific situation, experience, or problem you encountered?  

As drone technology becomes viable for more applications, it's important to keep energy usage efficient and maintenance low. With carbon emissions rising, it's especially important to avoid wasting energy that could be saved. This project aims to make the safest drone possible that still reaches its destination in the least possible time.

b. Were you trying to solve a problem, answer a question, or test a hypothesis?

This project attempts to solve the problem of improving drone flight efficiency without causing damage to other drones and the environment in the process.


2. What were the major tasks you had to perform in order to complete your project?

There were many different tasks I had to perform to complete my project. First, I had to create my own neural network and genetic algorithm libraries in C++ for Unreal Engine 4. I created MLP objects with convenient sequential layer modeling and implemented activation functions, matrix multiplication, and Xavier weight initialization. For the genetic algorithm library, I implemented crossbreeding, mutation, and population handling. After implementing these libraries, I also had to create a simulation of a drone. The drone model is fairly simple, but it takes input from an MLP object to decide what state the drone is in. I then created a system to spawn many drones, one per MLP model, and loop the genetic algorithm, training the drones over time.


3. What is new or novel about your project?

My project applies new concepts like neural networks and genetic algorithms to the area of autonomous drones, which are traditionally controlled by hand-coded algorithms. This study aims to improve efficiency over traditional algorithms with these new techniques.

a. Is there some aspect of your project's objective, or how you achieved it that you haven't done before?

My previous machine learning projects focused on supervised learning, and I wanted to explore something new with this project. This project uses reinforcement-style learning, where the neural networks learn from the environment around them rather than from set labels. I learned a lot about genetic algorithms through this project, having never used them before. They proved to be a viable training method when the solution to a problem is more abstract.

b. Is your project's objective, or the way you implemented it, different from anything you have seen?

I have seen studies that design algorithms used to optimize drone pathing, but my study is different in that it uses a genetic algorithm to train neural networks. Each drone has an individual "brain" that has been trained to process data around it in a meaningful way so it can reach its destination in the safest and most efficient way.

c. If you believe your work to be unique in some way, what research have you done to confirm that it is?

In my research I have seen genetic algorithms used to optimize neural networks, but for different purposes, like controlling a video game character. I have also seen drones controlled by neural networks, but in a more naive way (such as controlling the motors individually). My project focuses on pathfinding rather than drone physics like stabilization, which can be implemented without machine learning using technology like PID controllers.


4. What was the most challenging part of completing your project?

The most challenging part of the project was coding the machine learning libraries in C++. It was extremely difficult to know whether or not the neural network was just optimized poorly, or if there was a bug in the general neural network code architecture itself. I learned a lot about how neural networks work, and did a lot of the low level coding that is usually hidden behind high level functions in premade libraries. Coding my own library for UE4 was definitely a challenge, but I now have a much better understanding of the language and interface.

a. What problems did you encounter, and how did you overcome them?

One of the problems I encountered was my neural network outputs exploding, and I had no idea why. Feeding values into my neural networks would yield huge numbers at the output, when in reality they should only range from -1 to 1. I finally realized my mistake: the activation function I was using was actually an implementation of "fast sigmoid," which doesn't map values the way I expected. I switched to a regular sigmoid, which squashes high-magnitude values down into a bounded range. This gave me much better output and fixed the problem. Another issue was performance. I was checking the position of the closest drone for every single drone, then filtering out the drones that did not belong to the current drone's species. This created an immense amount of lag, which I eventually fixed by giving each drone object a tag describing which species it belonged to. The new closest-position algorithm checked only those drones and was an order of magnitude faster.

b. What did you learn from overcoming these problems?

From overcoming these problems, I learned that even the smallest problems can halt a project. I spent hours looking for something that was causing huge problems when it was really just a mis-implemented activation function. It's important, especially in programming projects, to document and debug your code well so that errors are traceable. It's very hard to avoid small problems like these in projects, especially when you have a deadline, but most can be avoided with preparation, and the rest can be solved with persistence.


5. If you were going to do this project again, are there any things you would do differently the next time?

If I were to do this project again, I would use an existing neural network library instead of implementing my own. Libraries like Keras have many more features that could have been used in this project, though connecting them to my simulation software would have been difficult. Having access to more features, and avoiding the bugs I ran into along the way, would have made the process much easier, since neural networks are unintuitive enough on their own. Since the simulation ended up fairly simple, I probably could have implemented it in Python rather than UE4, which would have made the machine learning integration much more streamlined.

6. Did working on this project give you any ideas for other projects? 

Working on this project gave me a ton of new ideas for AI and reinforcement learning. Reinforcement learning is such a broad area and can be used to solve a wide variety of abstract problems. For instance, what if we used reinforcement learning to manage farms? We ourselves don't know the best way to farm, nor do we have a model of the perfect farm. However, we can score a farm on how well it performs and how much food it yields. Rewarding and penalizing a farm-managing neural network based on these criteria could train it to become a better farmer, solving the problem through a series of criteria. This is why reinforcement learning is so widely applicable: if you can distill your problem into criteria that need to be met, you can use reinforcement learning to optimize it. There are so many applications in areas like finance, medicine, engineering, and more.

7. How did COVID-19 affect the completion of your project?

Since my project could be completed at home on my computer, COVID-19 did not affect my project.