I have omitted some of the simpler projects we were given, as well as projects that were essay-based, maths-focused or analysis-focused. I have written a brief description of the more interesting projects (basically year 3 and beyond); others just have a title if I thought they were self-explanatory or less interesting. Click a project to expand its description, if there is one!
For my AI course, we made a DFS algorithm to search a tree of states and moves for the solitaire game 'Accordion', played with an arbitrary deck with a configurable number of suits and ranks. So as not to store the whole tree, I created nodes dynamically as they were needed, evaluating each leaf node before backtracking to the next unexplored node.
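The lazy node-creation idea above can be sketched as follows. This is an illustrative Python sketch, not the assignment code: `successors` and `is_goal` are hypothetical callbacks standing in for the Accordion move generator and win check.

```python
# Sketch of a depth-first search that creates child nodes only when a node
# is visited, so the full game tree is never held in memory.
def dfs(state, successors, is_goal, seen=None):
    """Return a goal state if one is reachable from `state`, else None."""
    if seen is None:
        seen = set()
    if is_goal(state):
        return state
    seen.add(state)
    for child in successors(state):   # children generated on demand
        if child not in seen:
            found = dfs(child, successors, is_goal, seen)
            if found is not None:
                return found
    return None                        # backtrack to the next unexplored node
```

A toy usage: counting down from `n` by ones, with 0 as the goal, `dfs(3, lambda n: [n - 1] if n > 0 else [], lambda n: n == 0)` reaches 0.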
In a group of five, we are in the process of developing a Frontend and Backend serving website using React and Node JS that allows users to log into our site to play sudoku puzzles, and other sudoku variants. Our group uses Scrum and Agile development techniques to manage group interaction via GitLab. Users can also log into another site in a federation of sites (from other groups) using the OAuth 2.0 protocol. We currently have implemented the OAuth functionality, basic sudoku playing functionality (including having written a sudoku generator that can generate new puzzles) as well as a sudoku creator, where the user can create their own sudoku to be verified by an admin on the site and eventually played by other users.
We were tasked with making a program that could simulate a Turing Machine, given a TM descriptor file. We were also tasked with creating various TM descriptor files for acceptor TMs: one to check a string for balanced parentheses, a binary adder, and two TMs of our choosing. I chose a palindrome checker and a logic evaluator as my custom ones. We also had to analyse the time complexity of our Turing Machines in Big-O notation.
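The core of such a simulator is a loop that repeatedly looks up (state, symbol) in a transition table. Here is a minimal Python sketch of that idea; the tuple-based transition format is an assumption for illustration, not the descriptor-file format we were actually given.

```python
# Single-tape Turing machine simulator sketch.
# transitions: (state, symbol) -> (new_state, write_symbol, move 'L' or 'R')
def run_tm(transitions, accept, tape, state="start", blank="_", max_steps=10_000):
    """Return True if the machine reaches `accept` within max_steps."""
    cells = dict(enumerate(tape))      # sparse tape, blank elsewhere
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, cells.get(head, blank))
        if key not in transitions:
            return False               # no applicable rule: reject
        state, write, move = transitions[key]
        cells[head] = write
        head += 1 if move == "R" else -1
    return False
```

For example, a two-state acceptor for strings consisting only of `a`s needs just two rules: stay in `start` while reading `a`, and accept on reaching a blank.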
In this practical, we were tasked with creating a binary UDP packet, sending it over an emulated network path, and recording various aspects of transmission to accurately measure loss, delay and end-to-end data rate. We designed our own packet structure, then measured the delay, loss and data rate from the sending and receiving of packets over the emulated path. Additionally, we were asked to critique our design based on the performance measurements taken.
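A packet layout for this kind of measurement typically carries a sequence number (for loss detection) and a send timestamp (for delay). The layout below is a hypothetical example in Python, not the structure we actually designed.

```python
# Hypothetical measurement-packet layout: 4-byte sequence number plus an
# 8-byte send timestamp in microseconds, in network byte order.
import struct
import time

HEADER = "!Iq"   # uint32 seq, int64 timestamp

def make_packet(seq, payload=b""):
    """Build a binary packet: header followed by an arbitrary payload."""
    return struct.pack(HEADER, seq, int(time.time() * 1_000_000)) + payload

def parse_packet(data):
    """Split a received packet back into (seq, sent_us, payload)."""
    seq, sent_us = struct.unpack_from(HEADER, data)
    return seq, sent_us, data[struct.calcsize(HEADER):]
```

On receipt, gaps in the sequence numbers give the loss rate, and the difference between arrival time and `sent_us` gives the one-way delay (assuming synchronised clocks).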
For this practical, we were tasked with designing and implementing a Simple Reliable Transport Protocol (SRTP) over UDP in C. Using a specified API, we were tasked with implementing an idle-RQ algorithm that supports bi-directional, reliable data transfer. I made an FSM for the connection handshake, as well as specifying the packet design and bit alignment of messages sent. Additionally, I measured the Rx and Tx capacities so that I could add controlled loss to simulate lost packets.
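The sender side of idle-RQ (stop-and-wait) can be sketched as below. This is written in Python for brevity rather than the C of the practical, and `send`/`recv_ack` are hypothetical callbacks standing in for the real UDP calls.

```python
# Stop-and-wait (idle-RQ) sender logic: send one frame with an alternating
# 1-bit sequence number, and only move on once a matching ACK arrives.
def idle_rq_send(frames, send, recv_ack, max_retries=5):
    """Return True if every frame was acknowledged within max_retries."""
    seq = 0
    for frame in frames:
        for _ in range(max_retries):
            send(seq, frame)
            ack = recv_ack()          # None models a timeout / lost ACK
            if ack == seq:
                break                 # delivered; stop retransmitting
        else:
            return False              # retries exhausted
        seq ^= 1                      # toggle the sequence bit
    return True
```

Feeding it a channel that drops the first ACK shows the retransmission behaviour: the first frame is sent twice, then the second goes through normally.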
In Haskell, our group of three made a small text adventure game like 'Zork' or 'Dunnet', where the user is given a line describing the environment they are in, and can use a variety of commands like 'get', 'put', 'go' or 'jump' to interact with their surroundings and reach the game's end goal. Given the time we had for this task, the adventure is a small game set in a house where the front door is locked, and the user must be 'caffeinated' (they must have a cup of coffee from the pot) before they can leave the house.
For this Haskell assignment, our group of three made a small scripting language to evaluate or print a mathematical statement, such as 'x = 7 + 2', 'print(x)' or 'if x > 0 then print(x) else x = x+1'. We also included a cabal file to assist in building the project, as well as unit tests using QuickCheck.
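The heart of such a language is an evaluator that walks an AST while mutating an environment of variables. Below is a rough Python analogue of that structure (the original was Haskell); the tuple-based AST is an invented stand-in for our actual data types.

```python
# Tiny statement evaluator: assignments update `env`, prints append to
# `output`, and `if` picks a branch based on a condition.
def exec_stmt(stmt, env, output):
    op = stmt[0]
    if op == "assign":                 # ("assign", name, expr)
        env[stmt[1]] = eval_expr(stmt[2], env)
    elif op == "print":                # ("print", expr)
        output.append(eval_expr(stmt[1], env))
    elif op == "if":                   # ("if", cond, then_stmt, else_stmt)
        branch = stmt[2] if eval_expr(stmt[1], env) else stmt[3]
        exec_stmt(branch, env, output)

def eval_expr(expr, env):
    if isinstance(expr, int):
        return expr                    # literal
    if isinstance(expr, str):
        return env[expr]               # variable lookup
    op, lhs, rhs = expr                # binary operator node
    a, b = eval_expr(lhs, env), eval_expr(rhs, env)
    return {"+": a + b, "-": a - b, ">": a > b}[op]
```

Running the two example statements from the description ('x = 7 + 2' then the conditional print) leaves `x` at 9 and prints 9.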
For this group project, we were given a large dataset of tweet data in a CSV file related to a comet landing, and processed the data using Python (and pandas) to create several analyses in the form of graphs displaying the relationships between certain tweets, devices and hashtags within the dataset. This also meant checking the consistency of the data to remove any invalid or duplicate entries. The results were presented in a Jupyter notebook, as well as an executable script that generated all the graphical representations within the notebook.
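The cleaning step can be sketched with pandas as below. The column names (`id`, `text`) are assumptions about the dataset, not the actual CSV schema.

```python
# Drop rows with missing key fields, then drop duplicate tweets by id,
# keeping the first occurrence of each.
import pandas as pd

def clean_tweets(df):
    df = df.dropna(subset=["id", "text"])   # remove invalid entries
    df = df.drop_duplicates(subset="id")    # remove duplicate entries
    return df.reset_index(drop=True)
```

The cleaned frame can then be grouped (e.g. by device or hashtag) to produce the plots.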
Here we read in words from a file to produce a chart showing the frequency of those words. The output was ordered by when each new (previously unseen) word was first encountered in the file. This helped us further develop our confidence with the C programming language.
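The counting idea looks like this, sketched in Python rather than the C of the practical: a dict preserves insertion order, which mirrors the first-seen ordering the C version had to maintain explicitly.

```python
# Count word frequencies while preserving the order in which new words
# first appear (Python dicts keep insertion order).
def word_frequencies(words):
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts
```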
In this practical, we had to implement an "ordinary" (not in-place) quicksort using the last element of the sequence as the pivot. This made the algorithm's complexity easier to demonstrate, by running it against different sequences with different 'sortedness' metrics.
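An out-of-place quicksort with a last-element pivot can be written as below (a Python sketch of the technique, not the submitted practical code). Picking the last element as pivot is what makes already-sorted input the worst case, which is exactly what the sortedness experiments demonstrate.

```python
# "Ordinary" (out-of-place) quicksort: partition into new lists around the
# last element, then recurse on each side.
def quicksort(seq):
    if len(seq) <= 1:
        return list(seq)
    pivot = seq[-1]
    smaller = [x for x in seq[:-1] if x <= pivot]
    larger = [x for x in seq[:-1] if x > pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```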