Planets form in circumstellar discs, reservoirs of gas and dust that surround young stars. These discs develop very soon after their host stars form, and during their early stages they are immersed in an environment that can be hostile to their survival. The surrounding gas and nearby stars can affect the discs in many ways and strip mass from them quickly, limiting the time and material available to form planets. Understanding how these discs evolve helps us understand the formation of planets and of our own solar system.
During my PhD I developed computational simulations of circumstellar discs inside star clusters and studied how different mechanisms, in particular external photoevaporation, remove mass from the discs and constrain their potential to form planets. I did this using the Astrophysical Multipurpose Software Environment (AMUSE), an open-source project developed in part by the Computational Astrophysics group at Leiden Observatory. I analysed the resulting disc size and mass distributions along with the characteristics of the host clusters.
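To give a flavour of what working with AMUSE looks like, here is a minimal, hypothetical sketch rather than the code from my thesis: it sets up a small Plummer-sphere cluster and evolves its gravitational dynamics with the ph4 N-body code, while the disc evolution and photoevaporation machinery used in the actual simulations is left out. The particle numbers, masses, and time steps are arbitrary illustration values.

```python
# Minimal AMUSE sketch (illustrative only, not the thesis code):
# build a small star cluster and evolve its N-body dynamics.
from amuse.units import units, nbody_system
from amuse.ic.plummer import new_plummer_model
from amuse.community.ph4.interface import ph4

n_stars = 100

# Converter between dimensionless N-body units and physical units
converter = nbody_system.nbody_to_si(100 | units.MSun, 1 | units.parsec)

# Plummer sphere as a simple initial cluster model
stars = new_plummer_model(n_stars, convert_nbody=converter)

# ph4 is one of several gravity solvers available in AMUSE
gravity = ph4(converter)
gravity.particles.add_particles(stars)
channel = gravity.particles.new_channel_to(stars)

# Evolve the cluster for 1 Myr in steps of 0.1 Myr
for step in range(1, 11):
    gravity.evolve_model(step * 0.1 | units.Myr)
channel.copy()

gravity.stop()
```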
You can read my PhD thesis here.
Astronomy is a data-intensive discipline: terabytes of data are produced around the world every night, and processing and analysing them quickly is crucial for producing scientific results. Because of this, astronomical data-processing pipelines are beginning to embrace Big Data paradigms. Software is not the only route, though: data-analysis algorithms can also be accelerated at the hardware level.
For my MSc thesis I developed a tool for astronomical data reduction and photometry extraction using GPU-accelerated algorithms. The GPU is the Graphics Processing Unit of a computer, and over the last decades the use of GPUs to accelerate mathematical calculations has become standard in many data-intensive fields. The design of the GPU allows many calculations to be carried out in parallel, speeding up algorithms by orders of magnitude compared to their CPU counterparts.
However, designing data-intensive algorithms for the GPU is not straightforward. You cannot simply run your CPU scripts on a GPU; the whole problem needs to be rethought and redesigned to take advantage of the GPU's hardware.
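As an illustration of the idea, and not of the actual pipeline I wrote, the sketch below performs a simple CCD calibration step (bias subtraction and flat-field division) first with NumPy on the CPU and then with the CuPy library on the GPU. The array sizes and calibration values are made up for the example; the point is that the same per-pixel arithmetic maps directly onto many parallel GPU threads.

```python
# Illustrative sketch only: the same per-pixel calibration written for
# the CPU (NumPy) and the GPU (CuPy). Not the thesis pipeline.
import numpy as np
import cupy as cp

# Fake data standing in for a raw science frame and its calibration frames
raw = np.random.rand(4096, 4096).astype(np.float32)
bias = np.full_like(raw, 0.1)
flat = np.random.uniform(0.9, 1.1, raw.shape).astype(np.float32)

# CPU version: plain NumPy, executed serially on the host
calibrated_cpu = (raw - bias) / flat

# GPU version: identical expression, but every pixel is handled by the
# GPU in parallel once the arrays live in device memory
raw_gpu = cp.asarray(raw)
bias_gpu = cp.asarray(bias)
flat_gpu = cp.asarray(flat)
calibrated_gpu = (raw_gpu - bias_gpu) / flat_gpu

# Copy the result back to host memory and check both paths agree
assert np.allclose(calibrated_cpu, cp.asnumpy(calibrated_gpu), atol=1e-5)
```

In practice the gains come from keeping the data on the GPU across many such steps, since transfers between host and device memory are comparatively slow.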
My thesis (in English, with an abstract in Spanish) is available here.