
Generating Pokémon Species with Neural Networks

Saturday 10th November 2018: it’s time for AstonHack. I’ve been to a couple of hackathons before, but I’ve never really done anything other than eat pizza or watch Netflix… which is pretty much what I do with my spare time anyway. My team and I decided to make up for this by spending the entire 24 hours doing a LOT:

– We used machine learning to read your mind and tell whether or not you were watching Pokémon (note: what we were most definitely actually classifying was a positive emotional experience, picked up from the frontal-lobe AF7 and AF8 sensors). See this paper for the science behind it.

– We did the same thing as above but for people who were thinking about Spartans, Dinosaurs, or Unicorns (it was for a sponsor’s challenge!). Again, check out this paper if you’d like to know about the technical details that went into this.

– We trained two ML models: one would draw new Pokémon based on images of existing ones, and the other would try to learn to tell the fakes from the real thing. They competed against one another for a few hours until the first could produce convincing new artwork. For this we used a Deep Convolutional Generative Adversarial Network (DCGAN); there’s a sketch of the idea just after this list.

– We used ML to write names and descriptions for the generated Pokémon based on a dataset of text ripped from the games (e.g. “Mowirup: They flock to the stars and mountains. This Pokemon glows, it does not construct silk.”)… Again, don’t ask. A sketch of this one follows the DCGAN example below.
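
For anyone curious what the DCGAN half actually looks like, here is a minimal sketch of the two competing networks and a single training step. This is illustrative rather than our actual hack code: it assumes PyTorch and 64×64 RGB sprites scaled to [-1, 1], neither of which is pinned down above.

```python
# Minimal DCGAN sketch in PyTorch (illustrative, not our hack's actual code).
# Assumes 64x64 RGB sprites scaled to [-1, 1].
import torch
import torch.nn as nn

nz = 100  # size of the latent noise vector

# Generator: upsamples noise into a 3x64x64 image with transposed convolutions.
G = nn.Sequential(
    nn.ConvTranspose2d(nz, 512, 4, 1, 0, bias=False), nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False), nn.Tanh(),
)

# Discriminator: downsamples an image to a single real-vs-fake logit.
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1, bias=False), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1, bias=False), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 512, 4, 2, 1, bias=False), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),
    nn.Conv2d(512, 1, 4, 1, 0, bias=False), nn.Flatten(),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(real):  # real: a (batch, 3, 64, 64) tensor of sprites
    b = real.size(0)
    fake = G(torch.randn(b, nz, 1, 1))

    # Discriminator step: push real sprites towards 1, generated ones towards 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
```

The generator never sees a real sprite directly; its only training signal is whether the discriminator was fooled, which is exactly the competition described above.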
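
As for the names and descriptions, a character-level language model is the usual way to get text like “Mowirup” out of a corpus, and a reasonable guess at the kind of model involved. Below is a hedged sketch of one, again in PyTorch; pokedex.txt is a hypothetical filename standing in for the text ripped from the games.

```python
# Character-level LSTM sketch for the name/description generator
# (hypothetical: not the hack's actual model; "pokedex.txt" is illustrative).
import torch
import torch.nn as nn

text = open("pokedex.txt").read()
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
seq = torch.tensor([idx[c] for c in text])

# Train: predict every character from the characters before it.
for step in range(1000):
    i = torch.randint(0, len(seq) - 129, (1,)).item()
    x, y = seq[i:i + 128].unsqueeze(0), seq[i + 1:i + 129].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample: feed the model its own predictions, one character at a time.
x, state, out = seq[:1].unsqueeze(0), None, ""
for _ in range(200):
    logits, state = model(x, state)
    nxt = torch.multinomial(logits[0, -1].softmax(-1), 1)
    out += chars[nxt.item()]
    x = nxt.view(1, 1)
print(out)
```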

Not shabby at all for 24 hours’ work! The best reward for all of it was the laughter and roaring applause for our presentation. We also won a Google Home Assistant and a bunch of swag from here.com, courtesy of MLH.

Pokemon Hacks

Presentation too small? Click here to view the PDF fullscreen.

The hacks presented at AstonHack 2018 were absolutely mind-blowing; my personal highlight was the guy who spent all night rewriting a Linux driver from scratch so he could use his Nintendo Switch controllers as a virtual drum kit… and theremin. Click here to take a look at what went on!

A Study on Mental State Classification using EEG-based Brain-Machine Interface

This study into statistical feature extraction and the machine learning possibilities of EEG brainwave data was published at the 9th International Conference on Intelligent Systems (2018), held on Madeira Island, Portugal.

Authors – Jordan J. Bird, Luis J. Manso, Eduardo P. Ribeiro, Anikó Ekárt, Diego R. Faria
School of Engineering and Applied Science, Aston University, UK.
Department of Electrical Engineering, Federal University of Paraná, Curitiba, Brazil.

Abstract – This work aims to find discriminative EEG-based features and appropriate classification methods that can categorise brainwave patterns, based on their level of activity or frequency, for mental state recognition useful in human-machine interaction. Using the Muse headband with four EEG sensors (TP9, AF7, AF8, TP10), we categorised three possible states (relaxing, neutral and concentrating) based on states of mind defined by cognitive behavioural studies. We created a dataset with five individuals, with sessions lasting one minute per class of mental state, in order to train and test different methods. Given the proposed set of features extracted from the headband’s five signal bands (alpha, beta, theta, delta, gamma), we tested combinations of different feature-selection algorithms and classifier models to compare their performance in terms of recognition accuracy and the number of features needed. Evaluations such as 10-fold cross-validation were performed. Results show that only 44 features from a set of over 2100 are necessary when used with classical classifiers such as Bayesian Networks, Support Vector Machines and Random Forests, attaining an overall accuracy of over 87%.
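
For a feel of the evaluation the abstract describes (selection down to 44 of the ~2100 statistical features, a classical classifier, 10-fold cross-validation), here is a rough scikit-learn equivalent. It is a stand-in rather than the paper’s code: SelectKBest is an illustrative selector, and the feature matrix is assumed to have been extracted from the Muse recordings already.

```python
# Sketch of the evaluation pipeline in scikit-learn (illustrative stand-in,
# not the paper's code). X is assumed to hold the ~2100 statistical features
# per EEG window, y the relaxed/neutral/concentrating labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X = np.load("eeg_features.npy")  # hypothetical precomputed feature matrix
y = np.load("eeg_labels.npy")    # 0 = relaxed, 1 = neutral, 2 = concentrating

# Keep only the most informative features (the paper found 44 were enough),
# then classify with a Random Forest; selection is refit inside each fold.
pipeline = make_pipeline(
    SelectKBest(f_classif, k=44),
    RandomForestClassifier(n_estimators=100, random_state=0),
)

# 10-fold cross-validation, as in the paper.
scores = cross_val_score(pipeline, X, y, cv=10)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```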

View on ResearchGate | Download PDF

A Study on CNN Transfer Learning for Image Classification

Authors – Mahbub Hussain, Jordan J. Bird, and Diego R. Faria
Aston Lab for Intelligent Collectives Engineering (ALICE)
School of Engineering and Applied Science, Aston University, UK.

Abstract – Many image classification models have been introduced to help tackle the foremost issue of recognition accuracy. Image classification is one of the core problems in the field of Computer Vision, with a large variety of practical applications; examples include object recognition for robotic manipulation and pedestrian or obstacle detection for autonomous vehicles, among others. Much attention has been drawn to Machine Learning, specifically neural networks such as the Convolutional Neural Network (CNN), winning image classification competitions. This work studies and investigates one such CNN architecture (Inception-v3) to establish whether it performs well, in terms of accuracy and efficiency, on new image datasets via Transfer Learning. The retrained model is evaluated, and the results are compared to some state-of-the-art approaches.
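
As a sketch of what retraining Inception-v3 on a new dataset via transfer learning looks like in practice, here is a minimal tf.keras version. It does not reproduce the paper’s exact setup: the class count, the data/train directory and the training schedule are placeholders.

```python
# Minimal Inception-v3 transfer-learning sketch in tf.keras (illustrative:
# class count, data directory and schedule are placeholders, not the paper's).
import tensorflow as tf

# Load Inception-v3 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         pooling="avg")
base.trainable = False  # freeze the convolutional features; train only the head

num_classes = 3  # hypothetical: however many classes the new dataset has
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Inception-v3 expects 299x299 inputs scaled to [-1, 1].
train = tf.keras.utils.image_dataset_from_directory("data/train",
                                                    image_size=(299, 299))
train = train.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))

model.fit(train, epochs=5)
```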

View on ResearchGate | Download PDF

Learning from Interaction: An Intelligent Networked-based Human-bot and Bot-bot Chatbot System

This idea for a chatbot that actively learns through interaction was developed as a final-year project while studying for a Bachelor’s degree in Computer Science at Aston University (Birmingham, UK). The work proved effective, and so a paper was written for UKCI 2018 (the 18th Annual UK Workshop on Computational Intelligence). The work will also be published in Springer’s Advances in Intelligent Systems and Computing.

Authors – Jordan J. Bird, Diego R. Faria, Anikó Ekárt
Aston Lab for Intelligent Collectives Engineering (ALICE)
School of Engineering and Applied Science, Aston University, UK.

Abstract – In this paper we propose an approach to chatbot software that is able to learn from interaction via text messaging, both human-bot and bot-bot. The bot listens to a user and decides, based on its current knowledge, whether it knows how to reply to the message accurately; otherwise it sets about learning a meaningful response to the message through pattern matching against its previous experience. Similar methods are used to detect offensive messages, and prove effective at overcoming the issues that other chatbots have experienced in the open domain. Given the failure of Microsoft’s Tay, a philosophy of preferring too much censorship to too little is employed. In this work, a layered approach is devised to conduct each process and leave the architecture open to improvement with more advanced methods in the future. Preliminary results show improvement over time as the bot learns more responses. A novel message-simplification step is added to the bot’s architecture; the results suggest that this algorithm improves the bot’s conversational performance substantially, by a factor of three.
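
To make the learn-from-interaction loop concrete, here is a toy sketch of the core idea: simplify the incoming message, look for a close enough match in what the bot already knows, and learn a new response when there is none. The paper’s actual matching, offence detection and simplification algorithms are more sophisticated; everything below is an illustrative stand-in.

```python
# Toy sketch of the learn-from-interaction loop (illustrative stand-in for the
# paper's pattern matching and message simplification; not the actual system).
import difflib

knowledge = {}  # simplified message -> learned reply

def simplify(message):
    # Stand-in for the message-simplification step: lowercase, strip
    # punctuation, drop a few common stopwords.
    stop = {"the", "a", "an", "is", "are", "do", "you", "i"}
    words = "".join(c for c in message.lower()
                    if c.isalnum() or c.isspace()).split()
    return " ".join(w for w in words if w not in stop)

def reply(message):
    key = simplify(message)
    # Does the bot already know something close enough to this message?
    match = difflib.get_close_matches(key, list(knowledge), n=1, cutoff=0.8)
    if match:
        return knowledge[match[0]]
    # Otherwise, ask for and learn a meaningful response for next time.
    learned = input(f"I don't know how to answer '{message}'. What should I say? ")
    knowledge[key] = learned
    return learned

while True:  # chat until interrupted (Ctrl+C)
    print("bot:", reply(input("you: ")))
```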

View on ResearchGate | Download PDF