Current DPhil Projects

Following the first year of taught modules and mini-projects, our students move on to a variety of DPhil projects. Some examples of current areas of research are:

CDT Cohort 2017

Home Monitoring of Patients with Early and Late Stages of Dementia using Smartwatches – Antigoni Alevizaki

In recent years, there has been a surge of interest in outdoor localisation, tracking and analysis of human paths, due to the wide availability of GPS on wearable devices. In indoor environments, however, where GPS is unavailable, these tasks are much more challenging: no location sensor infrastructure can be assumed, other than data from the inertial measurement units (IMUs) on smart wearables combined with Bluetooth data. Moreover, no information about the floorplan of a house can be accessed to assist tracking, and IMU measurements from smart wearables can be very noisy due to the large number of activities that take place in a home environment and require coordination, e.g. cooking, eating, tidying etc.

Addressing these problems would be of great interest, especially in a medical context: patients with early and late stages of dementia could benefit greatly from a scheme that allows home monitoring. To this end, we will engage with healthy individuals and dementia patients at various stages of the disease, who have agreed to participate in experiments that include the use of smartwatches and the installation of Bluetooth beacons in their home environments, for the collection of IMU and Bluetooth data. Using these data to address the tasks above could inform home monitoring schemes for dementia patients, personalised and adaptive medication plans, as well as emergency intervention when required.
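
As a toy illustration of how Bluetooth beacon data might support coarse room-level localisation, the sketch below converts a beacon's received signal strength (RSSI) into an approximate distance with a log-distance path-loss model and assigns the wearer to the nearest beacon. The beacon names, calibration constant and path-loss exponent are illustrative assumptions, not values from this study.

```python
import math

# Illustrative calibration constants (assumptions, not values from the study):
# rssi_at_1m  - measured RSSI one metre from the beacon
# path_loss_n - environment-dependent path-loss exponent (~2 in free space, 2-4 indoors)
def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_n=2.5):
    """Log-distance path-loss model: approximate distance in metres from one RSSI reading."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_n))

def nearest_beacon(readings):
    """Assign the wearer to the closest beacon.

    readings: dict mapping beacon id -> latest RSSI in dBm.
    Returns (beacon_id, estimated_distance_m).
    """
    distances = {bid: rssi_to_distance(rssi) for bid, rssi in readings.items()}
    best = min(distances, key=distances.get)
    return best, distances[best]

if __name__ == "__main__":
    # Hypothetical snapshot of beacon observations from a smartwatch.
    snapshot = {"kitchen": -71, "living_room": -63, "bedroom": -84}
    room, dist = nearest_beacon(snapshot)
    print(f"Nearest beacon: {room} (~{dist:.1f} m)")
```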

On Unsupervised Learning and where Supervision is Necessary – Yuki Asano

Artificial intelligence, and more specifically machine learning, has recently seen a huge gain in both research impact and novel applications. This has been driven partly by novel insights, partly by more computing power, and to a very large extent by more data. Data, and the algorithms used to process it, can be broadly categorised as either supervised (carrying a supervisory signal, e.g. a label such as 'dog' indicating a dog's presence in a picture) or unsupervised, such as plain images or videos from the internet. In this work, we seek to understand exactly where and when rare and expensive supervisory signals are necessary to learn good models. The aim is thus to explore and push the boundary between supervised and unsupervised learning by applying insights from the few-shot learning, meta-learning and curiosity-driven learning literature. As machine vision has been at the forefront of neural network research and has standard test datasets and well-established baselines, it is well suited to starting this line of research. We begin with the task of image segmentation and aim to extend the results of [1], in which the neural network architecture itself was reinterpreted as a prior for a natural image. This approach will test the limits of how much labelled data is really necessary, and how much a more rigid and accurate model can achieve.
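
As a hedged illustration of the idea in [1], where an untrained convolutional network acts as an image prior, the sketch below fits a small network to a single noisy image and relies on early stopping as the only regulariser; the architecture, iteration count and learning rate are placeholder assumptions rather than the configuration used in that work.

```python
import torch
import torch.nn as nn

# Minimal deep-image-prior-style sketch (tiny placeholder architecture).
# A fixed random code z is mapped to an image by an untrained CNN; fitting the CNN to a
# noisy target and stopping early acts as a regulariser and yields a denoised reconstruction.
def denoise_with_image_prior(noisy, steps=500, lr=1e-2):
    # noisy: (1, C, H, W) tensor with values in [0, 1]
    net = nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, noisy.shape[1], 3, padding=1), nn.Sigmoid(),
    )
    z = torch.randn(1, 32, noisy.shape[2], noisy.shape[3])  # fixed random input code
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()

if __name__ == "__main__":
    clean = torch.rand(1, 3, 64, 64)
    noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
    restored = denoise_with_image_prior(noisy, steps=200)
    print("reconstruction MSE vs clean:", ((restored - clean) ** 2).mean().item())
```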

Dynamic Motion Planning for Full-body Manipulators – Mark Finean

State-of-the-art robots still appear very 'robotic' in their movements and are generally poor at interacting with moving objects, or with static objects while the robot itself is moving. Overcoming these challenges should improve efficiency in industrial automation processes such as warehouse pick-and-place tasks. Human Support Robots are now at the forefront of research and becoming much more prevalent. In order to develop better relationships with robots, particularly in a care or hospital environment, these robots should appear more natural in their movements as well as be able to perform useful tasks. This research proposal will address these areas.

In this research, I propose expanding on the latest research in robotic control, such as new variants of path-planning algorithms, in combination with machine learning techniques. The aim will be to develop more natural and efficient movements, decrease the planning time needed, and apply these techniques to real robots. This research will complement the main EPSRC focus areas such as Artificial Intelligence Technologies, Assistive Technology, and Robotics.

Deep Learning and Control Theory Based Hierarchical Underactuated Robotic Control – Siddhant Gangapurwala

Much of the research in robotic control aims to develop solutions that, depending on the environment of operation, exploit the machine's dynamics in order to achieve highly agile behavior. This, however, is limited by the use of traditional control techniques such as model predictive control (MPC) [1] and quadratic programming (QP) [2], which are often based on simplified rigid-body dynamics and contact models. A model-based optimization strategy employed over such simplified models often results in a constrained range of solutions that do not fully exploit the versatility of the robotic system, thereby limiting the agility of the robot in question.

Treating the control of robotic systems as a reinforcement learning (RL) problem enables the use of model-free algorithms that attempt to learn a policy which maximizes the expected future (discounted) reward without inferring the effects of an executed action on the environment. The authors of [3], [4] and [5] have successfully implemented these strategies for various robotic applications including control of robotic manipulators, helicopter aerobatics, and even quadrupedal locomotion. However, despite the successful implementation of these RL algorithms for the mentioned tasks, one of the main challenges in solving an RL problem is defining a reward function that leads to an optimal policy and sensible robotic behavior. Often, this reward function needs to be tuned by a human expert. For tasks such as quadrupedal navigation through rough terrain, designing a reward function is also significantly more difficult than for tasks such as posture recovery, which when solved using an RL algorithm results in a near-optimal policy.
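
To make the reward-design burden concrete, the sketch below shows the kind of hand-tuned reward a designer might write for forward locomotion, trading off velocity tracking against effort and stability; the terms and weights are illustrative assumptions, not a reward used in the cited work.

```python
import numpy as np

# Illustrative hand-designed locomotion reward (all terms and weights are assumptions).
# A human expert typically balances task progress against effort and stability,
# which is exactly the tuning burden described above.
def locomotion_reward(base_velocity, target_velocity, joint_torques, base_roll_pitch,
                      w_vel=1.0, w_torque=1e-3, w_orient=0.5):
    velocity_term = -w_vel * (base_velocity - target_velocity) ** 2          # track desired speed
    effort_term = -w_torque * float(np.sum(np.square(joint_torques)))        # penalise energy use
    stability_term = -w_orient * float(np.sum(np.square(base_roll_pitch)))   # stay upright
    return velocity_term + effort_term + stability_term

if __name__ == "__main__":
    r = locomotion_reward(base_velocity=0.8, target_velocity=1.0,
                          joint_torques=np.array([5.0, -3.0, 4.0]),
                          base_roll_pitch=np.array([0.05, 0.02]))
    print("reward:", r)
```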

Robot Learning for Autonomous Control – Chia-Man Hung

Robot learning is a research field at the intersection of robotics and machine learning. Machine learning offers robotics a set of tools for solving complex problems that are hard to hand-engineer; robotics offers machine learning a platform with real-world physics and real-time constraints on which to test its applicability. Many machine learning algorithms have been developed in the past decade, mostly in toy environments, and not many have had successful real-world applications. This new research field has the potential to shape the future of industrial manufacturing, assistance in daily life, and more.

Branch and Bound Methods for Neural Network Verification – Florian Jaeckle

Despite the recent success deep learning has had in a variety of scientific fields, its use in safety-critical settings is still limited by the lack of formal verification. However, even though neural networks are generally treated as black boxes, some progress has been made on verifying straightforward properties of simple networks. In my research I will focus on improving existing branch-and-bound methods that exploit the piecewise-linear structure of neural networks, with the aim of being able to apply them to larger networks. Improvements can be made to all three parts of the branch-and-bound algorithm: the search strategy, which picks the next domain to branch on; the branching rule, which divides a given domain into non-intersecting subdomains; and finally the bounding methods, which estimate lower and upper bounds for each subdomain.
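
A minimal sketch of the generic branch-and-bound loop described above, with the three interchangeable components made explicit; the bounding, branching and domain representations are placeholders supplied by the caller, not any particular verifier's implementation.

```python
import heapq

# Generic branch-and-bound skeleton for checking that a property f(x) >= 0 holds over an
# input domain. The three pluggable parts mirror the text:
#   lower_bound(d) / upper_bound(d) - bounds on min_x f(x) over subdomain d
#   branch(d)                       - split d into non-intersecting subdomains
#   the priority queue              - the search strategy (expand the worst lower bound first)
def branch_and_bound(root_domain, lower_bound, upper_bound, branch, eps=1e-4, max_steps=10_000):
    best_upper = upper_bound(root_domain)
    queue, counter = [(lower_bound(root_domain), 0, root_domain)], 1
    for _ in range(max_steps):
        if best_upper < 0:
            return False, best_upper          # some subdomain provably violates the property
        if not queue or queue[0][0] > -eps:
            return True, best_upper           # every remaining subdomain satisfies the property
        _, _, domain = heapq.heappop(queue)   # search strategy: lowest lower bound first
        for sub in branch(domain):            # branching rule
            best_upper = min(best_upper, upper_bound(sub))
            heapq.heappush(queue, (lower_bound(sub), counter, sub))  # bounding methods
            counter += 1
    return None, best_upper                   # undecided within the step budget
```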

Signal Processing and Deep Learning on Graphs for Network Data Analysis – Henry Kenlay

Modern information processing tasks typically involve data that come not only in large volumes but also with increasingly complex structures. In particular, data are often collected in non-Euclidean domains such as networks and graphs, where the observations are influenced by the underlying structures as well as by the underlying dynamics at each node. For example, mobility trajectories may follow the physical constraints of the environment, and the behaviours of a group of people may be influenced by the friendships among them. This poses a series of challenges to classical learning approaches, which are mostly successful on data with an underlying Euclidean or grid-like structure with a built-in notion of metric and invariance. To cope with such challenges, geometric deep learning (GDL) [1] is a branch of emerging deep learning techniques that makes use of novel concepts and ideas brought about by graph signal processing (GSP) [2], a fast-growing field in itself, to generalise classical deep learning approaches to data lying in non-Euclidean domains such as graphs and manifolds.

This project aims to develop novel signal processing and machine learning techniques within the context of GSP and GDL. In particular, owing to the infancy of the field, there remain many open challenges in GDL. One open problem which we hope to explore early on is how to construct the underlying graph in the first place. Although many GDL techniques have been proposed, they mostly focus on building models on a predefined or known graph, and the importance of this choice remains largely unexplored. We will aim to understand how this choice impacts the efficacy of GDL models. Furthermore, there is still considerable research to be done on novel filter design on a predefined graph.
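
As a small illustration of the graph-construction question raised above, the sketch below builds a k-nearest-neighbour graph from point observations and smooths a node signal with one step of a simple normalised-adjacency filter; the choice of k and the Gaussian kernel width are exactly the kind of modelling assumptions whose impact this project would study.

```python
import numpy as np

def knn_graph(points, k=5, sigma=1.0):
    """Weighted adjacency matrix of a symmetrised k-NN graph with a Gaussian kernel."""
    n = len(points)
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(d2[i])[1:k + 1]        # skip self (distance 0)
        W[i, neighbours] = np.exp(-d2[i, neighbours] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                          # symmetrise

def smooth_signal(W, x, alpha=0.5):
    """One low-pass step x <- (1-alpha) x + alpha * (D^-1/2 W D^-1/2) x, a simple graph filter."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    A_norm = (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    return (1 - alpha) * x + alpha * A_norm @ x

if __name__ == "__main__":
    pts = np.random.randn(50, 2)
    signal = np.random.randn(50)
    W = knn_graph(pts, k=5)
    print("smoothed signal variance:", smooth_signal(W, signal).var())
```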

Alignment Networks for Change Detection – Hala Lamdouar

Deep Convolutional Neural Networks (CNNs) have led to impressive improvements in machine learning and computer vision, especially with the explosion of the multimedia data available on the internet and the rapid increase of computing power. Visual perception has greatly benefited from this revolution and witnessed dramatic achievements on classical tasks such as object recognition, human pose estimation, semantic segmentation, etc.

We are particularly interested in image alignment, which can be described as the task of inferring correspondences and transformations that map a given source image onto a given target image. This is useful in multiple scenarios, for instance multi-modal registration in medical imaging, optical flow, and change detection in videos. Current state-of-the-art methods lack robustness and fail to align complex and ambiguous cases. This research aims to develop deep learning models for alignment that highlight robust correspondences while ignoring outliers.
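
For reference, the sketch below shows a classical alignment baseline of the sort the proposed learned models would aim to improve on: ORB correspondences filtered by RANSAC to fit a homography, using standard OpenCV calls; the feature detector, match budget and reprojection threshold are illustrative choices.

```python
import cv2
import numpy as np

def align_with_homography(source_gray, target_gray, max_matches=200):
    """Classical baseline: detect ORB features, match them, and fit a homography with RANSAC.

    source_gray, target_gray: single-channel uint8 images.
    """
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(source_gray, None)
    kp2, des2 = orb.detectAndCompute(target_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences, precisely the failure mode learned models target.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(source_gray, H, target_gray.shape[::-1])
    return H, warped, inlier_mask
```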

Machine Learning for Autonomous Driving – Robert McCraith

In the past few years machine learning techniques have seen rapid development and achieved impressive results on tasks which were previously much more challenging. One such area is computer vision, where deep learning techniques are state of the art for many tasks. This has motivated many people to apply these techniques to various robotics tasks, including autonomous driving, which incorporates many classical vision problems such as segmentation, classification, depth prediction, and uncertainty estimation. Developing such systems therefore both contributes to the fields of computer vision and machine learning and benefits greatly from other developments in these fields.

Deep Learning for Inverse Problems – Ben Moseley

Solving inverse problems is core to many scientific areas. In geophysics, we wish to infer properties of the Earth from seismic recordings. In medical imaging, we wish to decode biological properties from sets of electromagnetic and acoustic measurements. In robotics, we wish to intuitively understand the physics in the world around us. For many areas the associated inverse problem is well studied and challenging to solve. Often the inverse problem is underdetermined and highly non-linear, and optimisation is heavily relied upon to provide a solution. Recently, deep learning has made an impressive impact on these problems. In seismic imaging, convolutional autoencoders have been used to predict underlying velocity models given a set of wavefield measurements, in a single inference step (Wu, Lin, & Zhou, 2018). Convolutional networks have rapidly become a method of choice in medical imaging (Litjens et al., 2017). Theoretical approaches have recently been suggested for combining the power of deep learning and optimisation, for example by using a deep neural network as a regulariser (Adler & Öktem, 2017; Li, Schwab, Antholzer, & Haltmeier, 2018). Closely linked to inversion is the ability to carry out forward modelling, and deep learning has made an impact here too (Guo, Li, & Iorio, 2016).
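
A hedged sketch of the learned-regulariser idea (Adler & Öktem, 2017; Li, Schwab, Antholzer, & Haltmeier, 2018): minimise a data-fidelity term plus a penalty defined by a neural network. The linear forward operator, the tiny (untrained) regulariser network and the step sizes below are placeholders for illustration, not the architectures from those papers.

```python
import torch
import torch.nn as nn

# Sketch: solve y = A(x) + noise by minimising ||A(x) - y||^2 + lam * R_theta(x),
# where R_theta is a neural network acting as a regulariser (left untrained here as a placeholder).
class TinyRegulariser(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def invert(forward_op, y, dim, lam=0.1, steps=300, lr=1e-2):
    reg = TinyRegulariser(dim)            # in practice this would be pre-trained
    x = torch.zeros(dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((forward_op(x) - y) ** 2).sum() + lam * reg(x)
        loss.backward()
        opt.step()
    return x.detach()

if __name__ == "__main__":
    A = torch.randn(20, 50)               # an under-determined linear forward model
    x_true = torch.randn(50)
    y = A @ x_true + 0.01 * torch.randn(20)
    x_hat = invert(lambda x: A @ x, y, dim=50)
    print("data misfit:", ((A @ x_hat - y) ** 2).sum().item())
```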

Optimisation for Efficient Machine Vision – Alasdair Paren

Machine vision has undergone rapid development over the last six years, with the state of the art on a range of benchmarks being repeatedly improved by new techniques. Many of these recent techniques leverage large convolutional neural networks (CNNs) that require graphics processing units (GPUs) both to train and to run at inference time because of their large computational load. However, the power, cost and space requirements of GPUs prohibit the application of these techniques in many settings.

This research aims to develop novel machine vision methods with a focus on efficient operation. As a starting point, it will develop novel methods for training binary and quantised neural networks using discrete programming relaxations, as sketched below.
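
The sketch uses the straight-through estimator, a common baseline for training binary-weight networks, purely to fix ideas; it is not the discrete programming relaxation this project proposes, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class BinariseSTE(torch.autograd.Function):
    """Binarise weights to {-1, +1} in the forward pass; pass gradients straight through
    (clipped to the [-1, 1] range) in the backward pass."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1).float()

class BinaryLinear(nn.Module):
    """Linear layer whose weights are binarised at forward time but updated in full precision."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x):
        return x @ BinariseSTE.apply(self.weight).t()

if __name__ == "__main__":
    layer = BinaryLinear(8, 4)
    out = layer(torch.randn(2, 8))
    out.sum().backward()                  # gradients flow to the real-valued weights
    print(out.shape, layer.weight.grad.shape)
```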

If results comparable to modern CNNs could be achieved on low-powered CPUs, such as those found in mobile devices, this would have a huge impact on areas such as self-driving cars, robotics, smart data acquisition and portable AI.

Probabilistic Numerics for Automated Machine Learning – Tom Pretty

Initially we aim to address problems involving numerical optimization. Optimization problems particularly amenable to a probabilistic treatment are those involving expensive-to-evaluate black-box functions. For such problems it is worthwhile to spend extra compute to extract as much information as possible from a small set of function evaluations. A specific setting we aim to tackle is hyperparameter optimization for machine learning models, as in [4]. We want to build on this work by incorporating ideas from the optimal dataset selection literature [7] and by incorporating more prior knowledge into the procedure through better handling of hyperparameters with a priori known effects, e.g. increasing training time will generally increase performance. Looking further afield, we want to address other problems in optimization, such as neural network architecture selection [8], and problems in numerical quadrature, such as the evaluation of model likelihoods [1].
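
A minimal Bayesian-optimisation loop of the kind used for hyperparameter search, here over a single hyperparameter with a Gaussian-process surrogate and an expected-improvement acquisition; the toy objective, search range and kernel choice are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy Bayesian-optimisation loop over a single hyperparameter (here, a log learning rate).
# The "expensive" objective is a stand-in for training a model and measuring validation score.
def objective(log_lr):
    return -(log_lr + 3.0) ** 2 + np.random.normal(scale=0.05)   # best value near log_lr = -3

def expected_improvement(mu, sigma, best, xi=0.01):
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best - xi) / sigma
    return (mu - best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

grid = np.linspace(-6.0, 0.0, 200).reshape(-1, 1)                   # candidate hyperparameters
X = [np.array([x]) for x in np.random.uniform(-6.0, 0.0, size=3)]   # initial random evaluations
y = [objective(x.item()) for x in X]

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.array(X), np.array(y))                                 # surrogate of the objective
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, max(y)))]
    X.append(x_next)                                                 # evaluate where EI is largest
    y.append(objective(x_next.item()))

print("best log learning rate found:", X[int(np.argmax(y))].item())
```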

Probabilistic Inference for Reinforcement Learning and Meta-Learning – Tim Rudner

Probabilistic machine learning uses probability theory to represent and manipulate uncertainty and is based on the idea that learning can be thought of as inferring plausible models to explain observed data. This way, probabilistic methods provide a mathematically principled approach to learning that can be applied to other areas of machine learning such as reinforcement learning (RL) or meta-learning.

Probabilistic models stand to play a crucial role in a wide variety of RL problems, including: smart exploration; hierarchical RL; and model-based RL. Meta-learning also naturally lends itself to probabilistic approaches, as they allow for information about sets of models to be encoded and inferred probabilistically.

My research will aim to elucidate and bridge the gap between probabilistic inference, reinforcement learning, and meta-learning. The two main research foci will be: (i) improving the data efficiency of reinforcement learning through the use of probabilistic inference in model-based RL and meta-learning; and (ii) establishing optimization dualities between probabilistic inference and either RL or meta-learning. The former research focus will help open up a new range of problems to which reinforcement learning can be applied, while the latter will make training in reinforcement learning and meta-learning amenable to a wide range of probabilistic inference methods.

Invariances & Robustness in Machine Learning – Lewis Smith

Machine learning has made remarkable progress in recent years by exploiting ‘deep’ models, which promise to learn complex representations of their input, aiming to discover the underlying structure of the problem directly from data. However, despite their empirical successes, the theoretical evidence that this is actually the explanation for the success of deep models is mixed. Even in toy cases where a very simple invariance in the data exists, empirically deep models do not always infer it even in the limit of large amounts of data, showing failure to learn even simple structure. The advancements in deep learning have in reality largely been driven by explicitly modelling the structure of the problem; for example, convolutional nets, which introduce strong inductive biases on the functions which can be learned, enormously outperform fully connected models, even though the latter are strictly more expressive.

CDT Cohort 2016

Computer Vision for Understanding Human Communication – Daffy Afouras

Speech recognition and machine translation have been thoroughly researched in the past and continue to be popular areas, due to the large impact of their applications. Human communication, however, is multimodal and uses visual signals to complement acoustic and linguistic information. Attempting to transcribe only one of the modalities, namely speech, often yields ambiguous results. In fact, even a perfect transcript of a speaker’s verbal expression is sometimes not enough to communicate more abstract notions such as their emotional state. In these cases, visual signals such as lip motion, gestures, body language, and facial expressions carry a great deal of information that substantially aids our understanding.

Autonomous Agents for Augmented Decision-Making – Oliver Bent

Artificial Intelligence Agents offer enormous opportunities to inform decisions made by expert and non-expert humans across industries. This research develops the potential for Agents to augment complex decision-making. Complex decisions impact the future data, observations and state of the system under consideration. To achieve confidence in the decision-making process, Agents will have to efficiently explore high-dimensional decision spaces and collaborate by sharing information.

Deep Learning for Mobile Robotics – Fabian Fuchs

More and more tasks in robotics are being tackled with deep neural networks, often trained end to end from raw sensor inputs, with examples ranging from visual odometry (Wang, Clark, & Trigoni, 2017) to steering commands for autonomous vehicles (Bojarski & Zieba, 2016). This requires large amounts of labelled data, which are not always readily available. One key area of research addressing this issue is transfer learning, which describes the “ability of a system to recognize and apply knowledge and skills learned in previous tasks to novel tasks” (Pan & Yang, 2010).

Inference Amortization for Probabilistic Programming – Adam Golinski

Probabilistic modelling and reasoning are widespread techniques lying on the boundary of statistics and machine learning. Probabilistic programming simplifies the use of probabilistic modelling thanks to the ease of defining generative models, and saves the effort of deriving custom inference algorithms for the model of interest thanks to general-purpose Monte Carlo or black-box variational inference algorithms, which are available as part of prominent probabilistic programming languages and systems such as Anglican.

Creating a General Purpose Inference Algorithm – Bradley Gram-Hansen

The aim is to develop an all-purpose inference algorithm based on the concept of Hamiltonian Monte Carlo (HMC) that can deal not only with finite continuous parameters, as HMC currently does, but also with nonparametric models and with both discrete and discontinuous parameter spaces.
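
To fix notation, here is a minimal sketch of a standard HMC transition with a leapfrog integrator on a differentiable log-density; the toy Gaussian target, step size and trajectory length are illustrative, and the discrete and discontinuous settings this project targets are precisely where such a vanilla scheme breaks down.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20):
    """One Hamiltonian Monte Carlo transition using a leapfrog integrator."""
    p = np.random.randn(*x.shape)                      # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step_size * grad_log_prob(x_new)    # half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new                     # full step for position
        p_new += step_size * grad_log_prob(x_new)      # full step for momentum
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)    # final half step for momentum
    # Metropolis accept/reject based on the change in the Hamiltonian.
    current_h = -log_prob(x) + 0.5 * np.sum(p ** 2)
    proposed_h = -log_prob(x_new) + 0.5 * np.sum(p_new ** 2)
    if np.log(np.random.rand()) < current_h - proposed_h:
        return x_new
    return x

if __name__ == "__main__":
    log_prob = lambda x: -0.5 * np.sum(x ** 2)         # standard Gaussian target
    grad_log_prob = lambda x: -x
    samples, x = [], np.zeros(2)
    for _ in range(1000):
        x = hmc_step(x, log_prob, grad_log_prob)
        samples.append(x)
    print("sample mean:", np.mean(samples, axis=0))
```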

Unsupervised and Multi-task Learning for Computer Vision – Xu Ji

The need for large-scale manual annotations is a bottleneck for many machine learning methods that use deep neural networks, especially for computer vision problems such as image classification. Methods that are able to learn visual understanding in an unsupervised manner, i.e. without manual annotation, could be deployed in a wider range of applications, as the amount of real-world unlabelled data far exceeds that of labelled data. Catastrophic interference is another drawback of deep neural networks: learning from changing (i.e. non-stationary) distributions leads to forgetting previously learned modes of the functions being approximated. Consequently, stationary distributions must be simulated for many real-world applications in computer vision and reinforcement learning, where for example video and game sequence data are both highly temporally correlated, meaning online (real-time) learning and testing is inhibited. Furthermore, the ability of neural networks to employ variable learning rates (few-shot and episodic learning; the ability of humans to immediately retain specific observed events) is also hindered by catastrophic interference, as higher retention of new function modes equates to faster catastrophic forgetting of old ones. Solving these issues would make neural networks hardier: able to cope with the lack of dense manual annotation and with non-stationarity, as human learning can.

Machine Learning for Human Activity Recognition – Shuyu Lin

Human activity recognition (HAR) serves as an essential component of many important applications, including health monitoring in the medical domain, activity diary recording for users’ welfare and productivity analysis, and context awareness to ensure safe and efficient human-machine interaction in robotics. Although recent work on HAR has reported encouraging accuracy on several datasets, there is still a non-trivial gap before the current methods are ready for practical use.

Affective Disorders Monitoring with Wearable Technologies – Andrea Patane

Mental health problems affect mood and the way people behave, think and react. Referred to as affective or mood disorders, this group of psychiatric diseases includes depression, bipolar disorder and anxiety disorder. With over 33 million people diagnosed, the yearly healthcare costs related to affective disorders exceed 100 billion euros. Traditionally, affective disorders have been treated through medication and psychotherapy, but over the past decades psychotherapeutic practice has been supplemented with computerised technologies.

Standing on the Shoulders of Giants: Domain and Task Transfer Reinforcement Learning – Sasha Salter

Due to the recent successful deployment of deep learning architectures in reinforcement learning (RL), the field has gained a lot of popularity of late. Mastery of challenges such as the Atari suite and AlphaGo builds excitement about what artificial intelligence may be able to achieve in the near future. However, this success relies on the ability to learn at low cost, often within the confines of a virtual environment, by trial and error over as many episodes as required. In many domains, such as robotics, this presents a significant challenge. For embodied systems there is not only a cost (either monetary or in execution time) associated with each episode, limiting the number of training samples obtainable, but there also exist safety constraints making exploration of the state space undesirable. One of the principal challenges for the future of artificial intelligence in real-world systems is therefore the ability to train agents in a safe and data-efficient manner.

Probabilistic Numerics for Reinforcement Learning – Ed Wagstaff

Reinforcement learning is an established paradigm for machine learning which has seen impressive results in recent years, creating systems with state-of-the-art performance on a range of problems [1][2]. Probabilistic numerics is an emerging field which applies probabilistic inference to numerical problems (i.e. to problems of approximation) [3]. Practical reinforcement learning algorithms often depend heavily on numerical approximations. Further improvement of reinforcement learning algorithms has the potential to improve the performance of automated systems on a broad variety of real-world problems.

CDT Cohort 2015

Deep Learning for Large Heterogeneous Human-Centric Data – Leo Berrada

Deep generative neural networks and their conditional variants have recently witnessed a surge of interest due to their impressive ability to model very complex probability distributions, such as those of human face images or human voice audio signals. However, parameter estimation for such models from large data sets and over large structured outputs remains an open area of research.

Next Best View Planning with Point Clouds for Detailed Mapping of Large Environments – Rowan Border

ORI has state-of-the-art systems for dense reconstruction and 3D localisation with autonomous ground vehicles. These systems can be leveraged to design similar capabilities for autonomous aerial vehicles. Drone operation with vision (for applications such as aerial inspection) is an important research area that has been dominated by photogrammetry techniques, which often require human control and offline processing. Aerial vehicles that can operate autonomously and provide onboard vision processing are vastly more capable and open up new possibilities. A drone with these capabilities would be able to provide an autonomous aerial inspection of a demarcated area, navigating the environment and computing a complete dense reconstruction in a closed loop.

Scalable Machine Learning in the Presence of Uncertainty – Adam Cobb

This is a study of applying new machine learning techniques to challenges which require robust measures of uncertainty. The thesis will cover novel techniques building on both Bayesian non-parametric methods and highly parametric deep neural networks. The emphasis throughout the work will be how to incorporate notions of uncertainty into real-world problems, while trying to avoid the overcomplication of models.

Bayesian Inference with Big Data – Rob Cornish

Many Bayesian methods, particularly those based on sampling, are not yet capable of handling very large datasets, which are becoming increasingly common across many scientific and engineering disciplines. My research aims to improve on this. For instance, we seek to build on recently proposed methods based on piecewise-deterministic Markov processes — such as the bouncy particle sampler — which have demonstrated scalability by providing a mechanism to subsample data correctly. We aim to extract and generalise these developments so they can be applied to a broader range of Bayesian inference tasks.

Structured Value and Policy Learning for Deep Reinforcement Learning – Greg Farquhar

Reinforcement learning (RL) aims to train systems that choose optimal actions given the state of their environment, by allowing agents to explore possible policies and learn from their experiences. This kind of trial-and-error learning is plagued by high variance in value estimates, non-stationarity in data distributions, and a number of other critical obstacles.  Deep reinforcement learning uses deep neural networks as function approximators for policies, models, and value functions. Structure in the problems may be exploited in the architecture of these neural networks and algorithms used to train them. For example, convolutional neural networks exploit the translational invariance of the observation space to learn rapidly in visual domains. However, many aspects of the structure of RL agents and optimal policies have not been explored. Further work in this area will help develop better RL methods with applications from robotic control to logistics or predictions in financial markets.

Inference and Probabilistic Programming in Reinforcement Learning – Max Igl

Recently, major advances in reinforcement learning for game playing, by now a widely accepted benchmark [1], have been made using Deep Q-Learning (DQN) [2]. However, current state-of-the-art methods still struggle with the combination of visual environments and structured hierarchical tasks.

In those cases, exploration using a flat policy is highly inefficient, as recurring subtasks, such as movement primitives, have to be re-learned in each situation. Several methods have been proposed to incorporate hierarchical policies, which impose structure on the search space and enable the reuse of subroutines [3]–[6]. However, it is not yet clear how visual input and higher-level policies should be combined, or which higher-level policy representation should be used.

Unifying Motion Segmentation, Estimation, and Tracking for Complex Dynamic Scenes – Kevin Judd

The field of autonomous robotics is accelerating rapidly, and there has been significant research and development in visual navigation. Specifically, visual odometry (VO) addresses the challenge of estimating the egomotion of a moving camera in a largely static environment. Recently, VO approaches have been extended to scenarios where large regions of the scene are dynamic; however, these systems are still primarily focused on estimating egomotion only, and estimate other motions separately or even ignore them. Knowledge of all the motions in a scene gives important context for navigating safely and intelligently through an environment.

A Scalable, Robust, and Stable Approach to Signal Detection in Non-stationary Noise – Ivan Kiskin

Detecting signals in noise is a fundamental problem applicable to vastly diverse research areas, ranging from potential planet discovery and trend identification in finance to the detection of disease-bearing insects. The latter application, aimed at combating malaria, has received attention and funding from winning the 2014 Google Impact Challenge for its strong potential societal impact. The project aims to identify mosquito swarms through a distributed network of low-cost sensors. Correct identification ensures the chances of targeting affected areas with aid are maximised. Within the scope of the project, effective detection in challenging real-world conditions is vital to the success of the overall collaboration with the Royal Botanic Gardens, Kew.

Robust Model-Based Policy Search – Kyriakos Polymenakos

During this research project we will combine machine learning (ML) techniques for constructing and tuning models and policies with formal methods and control theory. Our aim is, starting with an incomplete and uncertain model of the system dynamics, to design a controller which:

  • refines the model by intelligently exploring the environment
  • has verifiable properties such as safety and stability
  • approximates or achieves optimal performance given the above constraints

ML approaches, for the most part, have been concerned with finding optimal policies rather than with providing guarantees about the properties of the system and its behaviour during training and operation. On the other hand, system verification and robust control theory usually deal with model uncertainty by establishing desirable system properties and investigating whether a system respects them, but with less focus on performance.

Connections between Probabilistic Machine Learning and System Identification – Control Theory – Nikitas Rontsis

In recent years there has been a surge of machine learning techniques applied in a variety of areas, including control systems. These techniques require very little prior knowledge about the system under control and are adjustable to changes in the system. However, they lack the formal guarantees and interpretability of the resulting models and controllers that are well-explored topics in classical control theory and system identification. The research will focus on combining and finding connections between the two fields. We will begin by examining an industry-motivated example system from Schlumberger and apply both standard system identification and machine learning methods to derive a model. We will then try to show circumstances under which one could be a generalisation of the other. Afterwards, the same will be done for the design of a controller for the identified plant model.
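
As a small illustration of the classical system-identification side of this comparison, the sketch below fits a second-order ARX model by least squares to simulated input-output data; the model orders and the toy plant are assumptions, not the Schlumberger system.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Least-squares fit of an ARX model
       y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]."""
    start = max(na, nb)
    rows, targets = [], []
    for t in range(start, len(y)):
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]          # AR coefficients, input coefficients

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.standard_normal(500)
    y = np.zeros(500)
    for t in range(2, 500):                # simulate a known, stable second-order plant
        y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.8 * u[t - 1] + 0.1 * rng.standard_normal()
    a_hat, b_hat = fit_arx(u, y)
    print("estimated AR coefficients:", a_hat, "input coefficients:", b_hat)
```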

Distributed Model Learning with Guarantees for Dynamical Systems – Timothy Seabrook

I propose to conduct my DPhil research in the field of Distributed Learning, to take advantage of edge computing and decentralise the computational effort away from large data centres. I would like to explore the development of local dynamic models within a network of agents, to be aggregated into a global hierarchical set of models. This would accelerate learning not only by splitting work, but also by facilitating the transfer of knowledge between agents, from global sets to new local models.


CDT Cohort 2014

CNNs from the Sofa: Learning about People by Watching Sports – Sam Albanie

During the course of social interaction between humans, facial expressions are used to convey a great deal of emotional information. To fully understand human interaction in videos and photos, techniques that can extract this information are required. Unfortunately, at present the field of automated facial analysis still lags significantly behind human-level performance.

Learning to Drive Off-Road – Oliver Bartlett

Many great advances have been made in on-road driving, but the same breakthroughs have yet to take place off-road. This is partly due to the lack of structure and the extra uncertainty off-road, but also to the reduced attention it has received compared to on-road driving, where research groups race to demonstrate fully autonomous capability.

The aim of this DPhil is to create systems capable of operating robustly in off-road environments. This will be achieved by leveraging advancements in on-road capabilities and ORI’s autonomy stack, Selenium.

Bayesian Decision Making in Financial Systems – Sid Ghoshal

Supervised learning techniques will be applied to both traditional market indicators and a corpus of financial news, training the models on publicly available databases of news articles and testing them on the news outlets most used by industry practitioners. The thesis will partly aim to measure the relative predictive power of news sources actively monitored by traders and used in making quickfire decisions. It will also explore the potential for crowdsourcing investment wisdom, combing Twitter-like social media sites such as www.stocktwits.com for retail investor opinions and using state-of-the-art computational linguistics to convert retail sentiment into price predictors. Finally, a wide range of options market data, including traded volumes, open interest and price, will be used to infer the inhomogeneous nature of price space and map out its curvature.

Deep Learning for Vision – Ankush Gupta

Deep learning (hierarchical non-linear (or rectified) perceptron) models, big computation power and large labeled datasets have driven the recent advances in computer vision [1, 2, 3]. Notwithstanding the ingenuity required in designing the model architectures, it is the amount of high-quality labeled data available which determines the scope and scale of the problems that can be solved today. Large, high-quality labeled datasets, although increasingly available today [3, 4], have limited types of ground-truth annotations. Furthermore, such data is slow and expensive to obtain. This dearth of annotated data has been mitigated to a great degree through the use of synthetically generated data, e.g. the MPI-Sintel dataset for optical flow [5] and VGG text data for recognition [6]. We propose to utilize synthetically generated image data for learning deep hierarchical models. Concretely, we will first investigate the use of synthetic data for spotting text in natural images, and later extend its domain of application.

The Next Stage of Semantic Paint – Jack Hunt

The solving of computational perception problems is central to the functioning of autonomous systems in real-world environments. An intelligent system may need to perform tasks such as locating an object in real space or determining the difference between real-world surfaces (for example, segmenting a door from a wall). Such tasks rely on meaningful representations of the environment, inferred from observations of it. For example, given a scene where a motorcyclist is riding a motorcycle along a road and the task of identifying the motorcyclist, it would be appropriate to infer which portion of the scene corresponds to the cyclist. At the lower level, this task relies on a per-pixel label representation of the scene. This per-pixel representation is known as a segmentation of the image depicting the scene.

Visual Inertial Odometry Using Non-linear Optimization – Stefan Saftescu

Practical robotics uses data from a multitude of sensors, in some cases collected over long periods of time and at large scales. Being able to consistently fuse these measurements is an essential part of mobile autonomy. In this proposal, I will consider the fusion of visual data from a stereo camera set-up and an inertial measurement unit (IMU), for the purpose of motion estimation. The initial goal of my research in the Mobile Robotics Group will be to improve the existing Visual Odometry (VO) based motion estimation system.

Various techniques can be used to estimate how a robot moves, but vision-based systems have received a lot of interest thanks to the low cost and versatility of cameras. One of the problems with single-camera vision systems is scale ambiguity. However, with a stereo camera, scale is determined and therefore good ego-motion estimates can be produced.

Attention for Planning to Perceive – Hillary Shakespeare

This project proposal aims to apply recurrent neural networks (RNNs) [12] for visual attention to planning to perceive.

Maps without semantic information are of limited use to a robot. Therefore, building maps that the robot can itself label with semantic information online would allow it to utilise the structure of its environment while carrying out its mission [4]. This can allow for more sensible plans, for example approaching a door head-on, or understanding qualities of the environment, for example that an object could be on a table or in a drawer.

Objects from Motion – James Thewlis 

Many areas in computer vision have recently seen advances thanks to renewed interest in Convolutional Neural Networks (CNNs) [1], initially with image classification [2]. CNNs were subsequently applied to other tasks, in particular object detection [3] and semantic segmentation [4, 5]. Despite this success, the artificial division into individual problems means CNNs have a narrow understanding of images, and may disregard semantically meaningful information about instance-specific attributes and object extents. CNNs also require vast amounts of labelled data, in contrast to the ease with which humans learn through observation and interaction with their environment. We want to use information from moving scenes to infer the existence of objects and their properties.

Variational Bayesian Learning for Big Data – Stefan Webb

My second CDT mini-project investigated improvements to a Bayesian version of the parameter server for large-scale Bayesian machine learning applications. The basic technique, as outlined in [9], is for each node to run an independent MCMC chain on its subset of the data, and to use expectation propagation as a framework for node samplers to communicate information with a central server regarding the first two moments of the posterior distribution. This framework allows for coordinated posterior sampling without significant synchronization costs. The method as presented in [9], however, suffers from a number of defects that stop it scaling to larger models such as feedforward neural networks (NNs). In a soon-to-be-published journal article [5] based in part on my project work, we describe algorithmic improvements that permit it to run on deep NNs.

More generally, there have been several recent attempts to develop parallelized MCMC algorithms for approximating the posterior distribution given a large dataset ([7], [4], [2]). These work by running independent Markov chains on each worker node on its subset of the data, without communication between the workers, and with a final combination step to merge the samples from each worker. Such approaches, however, are inefficient and difficult to tune because of the lack of communication. The approach we trialled attempts to solve both of these problems by using communication between workers, but without being so strict as to require all the workers to hold exactly the same set of parameters. As I will explain, this research has created opportunities for further investigation.
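
To fix ideas about the combination step in these embarrassingly parallel approaches, the sketch below merges Gaussian approximations of per-worker sub-posteriors by precision weighting; this is the simple consensus-style rule rather than the expectation-propagation server of [9], and the toy sub-posterior moments are assumptions.

```python
import numpy as np

def combine_gaussian_subposteriors(means, covariances):
    """Merge per-worker Gaussian sub-posterior approximations N(m_k, S_k) by precision
    weighting: S^-1 = sum_k S_k^-1,  m = S * sum_k S_k^-1 m_k."""
    precisions = [np.linalg.inv(S) for S in covariances]
    combined_precision = sum(precisions)
    combined_cov = np.linalg.inv(combined_precision)
    combined_mean = combined_cov @ sum(P @ m for P, m in zip(precisions, means))
    return combined_mean, combined_cov

if __name__ == "__main__":
    # Toy sub-posteriors from three workers (illustrative numbers only).
    means = [np.array([1.0, 0.0]), np.array([1.2, -0.1]), np.array([0.9, 0.1])]
    covs = [np.eye(2) * s for s in (0.5, 0.8, 0.6)]
    m, S = combine_gaussian_subposteriors(means, covs)
    print("combined mean:", m)
```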