Current DPhil Projects

Following the first year of taught modules and mini-projects, our students move on to a variety of DPhil projects. Some examples of current areas of research are:

CDT Cohort 2014

CNNs from the Sofa: Learning about People by Watching Sports – Sam Albanie

During the course of social interaction between humans, facial expressions are used to convey a great deal of emotional information. To fully understand human interaction in videos and photos, techniques that can extract this information are required. Unfortunately, at present the field of automated facial analysis still lags significantly behind human-level performance.

Learning to Drive Off-Road – Oliver Bartlett

Many great advances have been made in on-road driving, but the same breakthroughs have yet to take place off-road. This is partly due to the lack of structure and the additional uncertainty off-road, but also because the problem has received less attention than on-road driving, where research groups race to demonstrate fully autonomous capability.

The aim of this DPhil is to create systems capable of operating robustly in off-road environments. This will be achieved by leveraging advancements in on-road capabilities and ORI’s autonomy stack, Selenium.

Bayesian Decision Making in Financial Systems – Sid Ghoshal

Supervised learning techniques will be applied to both traditional market indicators and a corpus of financial news, training the models on publicly available databases of news articles and testing them on the news outlets most used by industry practitioners. The thesis will partly aim to measure the relative predictive power of news sources actively monitored by traders and used in making quickfire decisions. It will also explore the potential for crowdsourcing investment wisdom, combing Twitter-like social media sites such as www.stocktwits.com for retail investor opinions and using state-of-the-art computational linguistics to convert retail sentiment into price predictors. Finally, a wide range of options market data, including traded volumes, open interest and price, will be used to infer the inhomogeneous nature of price space and map out its curvature.
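
As a minimal illustration of the kind of pipeline described above, the sketch below combines a lagged return with a crude bag-of-words sentiment score as inputs to a ridge-regression return predictor. The word lists, headlines and data are invented for illustration and are not part of the project:

import numpy as np

# Illustrative only: combine a market indicator (lagged return) with a crude
# news-sentiment score as inputs to a ridge-regression return predictor.
rng = np.random.default_rng(0)

POSITIVE = {"beat", "upgrade", "growth"}
NEGATIVE = {"miss", "downgrade", "lawsuit"}

def sentiment_score(headline: str) -> float:
    """Bag-of-words sentiment: +1 per positive word, -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = ["Earnings beat expectations, analysts upgrade",
             "Company faces lawsuit after downgrade",
             "Modest growth reported"] * 30
returns = rng.normal(0.0, 0.01, size=len(headlines) + 1)

# Feature matrix: [previous return, sentiment of today's news]
X = np.column_stack([returns[:-1], [sentiment_score(h) for h in headlines]])
y = returns[1:]                      # next-period return (the target)

lam = 1e-2                           # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
print("learned weights (lagged return, sentiment):", w)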

Deep Learning for Vision – Ankush Gupta

Deep learning (hierarchical non-linear (or rectified) perceptron) models, big computation power and large labeled datasets have driven the recent advances in computer vision [1, 2, 3]. Notwithstanding the ingenuity required in designing the model architectures, it is the amount of high-quality labeled data available that determines the scope and the scale of the problems that can be solved today. Although large, high-quality labeled datasets are increasingly available [3, 4], they offer only a limited range of ground-truth annotations. Furthermore, such data is slow and expensive to obtain. This dearth of annotated data has been mitigated to a great degree through the use of synthetically generated data, e.g. the MPI-Sintel dataset for optical flow [5] and the VGG text data for recognition [6]. We propose to utilize synthetically generated image data for learning deep hierarchical models. Concretely, we will first investigate the use of synthetic data for spotting text in natural images, and later extend its domain of application.
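
To illustrate why synthetic data is attractive here, the sketch below renders toy word images whose transcriptions are known by construction, so labelled examples come for free. It uses Pillow for rendering and a made-up five-word lexicon, and stands in for a far richer synthetic-data engine such as [6]:

import random
from PIL import Image, ImageDraw  # Pillow; assumed available for this sketch

WORDS = ["oxford", "robot", "vision", "street", "sign"]  # toy lexicon

def render_word(word: str, size=(100, 32)) -> Image.Image:
    """Render a word onto a blank canvas; the label comes for free."""
    img = Image.new("L", size, color=255)
    draw = ImageDraw.Draw(img)
    # A random offset stands in for the distortions, fonts and backgrounds
    # that a real synthetic-data engine would apply.
    draw.text((random.randint(0, 20), random.randint(0, 10)), word, fill=0)
    return img

# Every sample is generated with its ground-truth transcription attached,
# so arbitrarily large labelled training sets can be produced on demand.
dataset = [(render_word(w), w) for w in random.choices(WORDS, k=1000)]
print(len(dataset), "synthetic (image, label) pairs")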

The Next Stage of Semantic Paint – Jack Hunt

Solving computational perception problems is central to the functioning of autonomous systems in real-world environments. An intelligent system may need to perform tasks such as locating an object in real space or distinguishing between real-world surfaces (for example, segmenting a door from a wall). Such tasks rely on meaningful representations of the environment, inferred from observations of it. For example, given a scene where a motorcyclist is riding a motorcycle along a road and the task of identifying the motorcyclist, it would be appropriate to infer which portion of the scene corresponds to the motorcyclist. At a lower level, this task relies on a per-pixel label representation of the scene. This per-pixel representation is known as a segmentation of the image depicting the scene.
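
A minimal sketch of what such a per-pixel representation looks like, using invented score maps in place of a real model's output:

import numpy as np

# Toy per-class score maps for a 4x6 image and three classes
# (0 = background, 1 = motorcycle, 2 = rider); values are illustrative.
rng = np.random.default_rng(1)
scores = rng.normal(size=(3, 4, 6))        # shape: (num_classes, H, W)

# A segmentation is simply a per-pixel label map: for each pixel we keep
# the class with the highest score.
segmentation = scores.argmax(axis=0)       # shape: (H, W), integer labels
print(segmentation)

# Extracting "which pixels belong to the rider" is then a boolean mask.
rider_mask = segmentation == 2
print("rider pixels:", int(rider_mask.sum()))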

Visual Inertial Odometry Using Non-linear Optimization – Stefan Saftescu

Practical robotics uses data from a multitude of sensors, in some cases collected over long periods of time and at large scales. Being able to consistently fuse these measurements is an essential part of mobile autonomy. In this proposal, I will consider the fusion of visual data from a stereo camera set-up and an inertial measurement unit (IMU), for the purpose of motion estimation. The initial goal of my research in the Mobile Robotics Group will be to improve the existing Visual Odometry (VO) based motion estimation system.

Various techniques can be used to estimate how a robot moves, but vision-based systems have received a lot of interest thanks to the low cost and versatility of cameras. One of the problems with single-camera vision systems is scale ambiguity. With a stereo camera, however, scale is determined and therefore good ego-motion estimates can be produced.
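
A minimal sketch of why a calibrated stereo pair removes the scale ambiguity: depth follows directly from disparity, baseline and focal length. The numbers below are illustrative, not from any particular camera:

import numpy as np

focal_px = 700.0       # focal length in pixels (illustrative)
baseline_m = 0.12      # stereo baseline in metres (illustrative)

# Depth of a matched feature follows from its disparity, which is why a
# calibrated stereo pair yields metric-scale geometry.
disparities_px = np.array([35.0, 14.0, 7.0])
depths_m = focal_px * baseline_m / disparities_px
print("triangulated depths (m):", depths_m)

# With metric 3D points in consecutive frames, ego-motion can be estimated by
# minimising reprojection error over a rigid-body transform, the non-linear
# least-squares core of a visual odometry pipeline.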

Attention for Planning to Perceive – Hillary Shakespeare

This project proposal aims to apply recurrent neural networks (RNNs) [12] for visual attention to the problem of planning to perceive.

Maps without semantic information are of limited use to a robot. Building maps that the robot can itself label with semantic information online would therefore allow it to utilise the structure of its environment while carrying out its mission [4]. This can allow for more sensible plans, for example approaching a door head-on, or an understanding of qualities of the environment, such as the fact that an object could be on a table or in a drawer.

Objects from Motion – James Thewlis 

Many areas in computer vision have recently seen advances thanks to renewed interest in Convolutional Neural Networks (CNNs) [1], initially with image classification [2]. CNNs were subsequently applied to other tasks, in particular object detection [3] and semantic segmentation [4, 5]. Despite this success, the artificial division into individual problems means CNNs have a narrow understanding of images, and may disregard semantically meaningful information about instance-specific attributes and object extents. CNNs also require vast amounts of labelled data, in contrast to the ease with which humans learn through observation and interaction with their environment. We want to use information from moving scenes to infer the existence of objects and their properties.

Variational Bayesian Learning for Big Data – Stefan Webb

My second CDT mini-project investigated improvements to a Bayesian version of the parameter server for large-scale Bayesian machine learning applications. The basic technique, as outlined in [9], is for each node to run an independent MCMC chain on its subset of data, and to use expectation propagation as a framework for node samplers to communicate information with a central server regarding the first two moments of the posterior distribution. This framework allows for coordinated posterior sampling without significant synchronization costs. The method as presented in [9], however, suffers from a number of defects that stop it scaling to larger models such as feedforward neural networks (NNs). In a soon-to-be-published journal article [5] based in part on my project work, we describe algorithmic improvements that permit it to run on deep NNs.
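
The communication pattern can be sketched as follows: each worker runs a chain on its data shard and reports only the first two moments of its local posterior, which the server then combines. The Gaussian toy model and the naive product-of-Gaussians combination below are illustrative stand-ins for the actual samplers and the expectation-propagation updates of [9]:

import numpy as np

rng = np.random.default_rng(0)

def local_mcmc(shard, n_samples=2000):
    """Stand-in for a worker's MCMC chain: samples the exact Gaussian
    posterior of a mean with known unit noise and a flat prior."""
    return rng.normal(shard.mean(), np.sqrt(1.0 / len(shard)), size=n_samples)

# Split the data across three workers.
data = rng.normal(0.5, 1.0, size=3000)
shards = np.array_split(data, 3)

# Each worker communicates only the first two moments of its local posterior.
moments = []
for shard in shards:
    samples = local_mcmc(shard)
    moments.append((samples.mean(), samples.var()))

# Server-side combination: multiply the Gaussian approximations together
# (a crude stand-in for proper expectation-propagation updates).
precisions = np.array([1.0 / v for _, v in moments])
means = np.array([m for m, _ in moments])
combined_var = 1.0 / precisions.sum()
combined_mean = combined_var * (precisions * means).sum()
print(f"combined posterior: mean={combined_mean:.3f}, var={combined_var:.5f}")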

More generally, there have been several recent attempts to develop parallelized MCMC algorithms for approximating the posterior distribution given a large dataset ([7], [4], [2]). These work by running independent Markov chains on each worker node over its subset of the data, without communication between the workers, and with a final combination step to merge the samples from each worker. Such approaches, however, are inefficient and difficult to tune because of the lack of communication. The approach we trialled attempts to solve both of these problems by allowing communication between workers, without being so strict as to require that all workers hold exactly the same set of parameters. As I will explain, this research has created opportunities for further investigation.

CDT Cohort 2015

Deep Learning for Large Heterogeneous Human-Centric Data – Leo Berrada

Deep generative neural networks and their conditional variants have recently witnessed a surge of interest due to their impressive ability to model very complex probability distributions, such as those of human face images or human voice audio signals. However, parameter estimation for such models from large data sets and over large structured outputs remains an open area of research.

Learning Manoeuvres for Active Perception in Inspection Tasks – Rowan Border

ORI has state-of-the-art systems for dense reconstruction and 3D localisation with autonomous ground vehicles. These systems can be leveraged to design similar capabilities for autonomous aerial vehicles. Drone operation with vision (for applications such as aerial inspection) is an important research area that has been dominated by photogrammetry techniques, which often require human control and offline processing. Aerial vehicles that can operate autonomously and provide onboard vision processing are vastly more capable and open up new possibilities. A drone with these capabilities would be able to provide an autonomous aerial inspection of a demarcated area, navigating the environment and computing a complete dense reconstruction in a closed loop.

A Reinforcement Learning & Game Theoretic Approach to Sensor Networks – Adam Cobb

The aim is to study techniques that enable agents in multi-agent systems to independently make decisions based on their own knowledge of the world, along with information gained from other sensors. Decentralising decisions in this manner points to exploiting knowledge from both reinforcement learning and game theory. This emerging area of mechanism design is important for any future networks that aim to follow the “Internet of Things” paradigm. As these sensor networks tend to consist of low-power embedded systems, part of the research should take their computational and memory limitations into account.

The Power and Expressiveness of Probabilistic Programming Languages – Rob Cornish

The design of a general probabilistic programming language (PPL) involves a balancing act between two deeply conflicting concerns. On the one hand, the system should provide as uniform and as flexible an interface for specifying models as possible; on the other, it should maximise the class of models on which inference may be performed efficiently. All current probabilistic programming systems favour one of these concerns at the expense of the other: Church [1], Anglican [2], and Venture [3] offer a highly expressive programming interface at the cost of suboptimal inference performance in certain cases, while systems like Figaro [4], Factorie [5], and Infer.NET [6] restrict their expressibility so as to allow the application of specialized inference techniques. It has not yet been shown possible to optimize both concerns simultaneously.
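
The tension can be seen in a toy example: an unrestricted language lets the model be an arbitrary generative program, but then inference must fall back on generic methods such as likelihood weighting. The plain-Python sketch below is illustrative only and is not tied to any of the systems cited:

import math
import random

def model(observation):
    """A generative program: a latent rate drawn from an exponential prior,
    data assumed Poisson.  Returns (latent, log-likelihood of observation)."""
    rate = random.expovariate(1.0)                 # sample from the prior
    log_lik = observation * math.log(rate) - rate - math.lgamma(observation + 1)
    return rate, log_lik

def posterior_mean(observation, n_particles=50_000):
    """Generic likelihood-weighted (importance sampling) inference: it works
    for any program of this form, but may be far less efficient than an
    inference method specialised to the model class."""
    samples, weights = zip(*(model(observation) for _ in range(n_particles)))
    total = sum(math.exp(w) for w in weights)
    return sum(s * math.exp(w) for s, w in zip(samples, weights)) / total

print(posterior_mean(observation=4))   # posterior mean of the rate given one count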

[1] Goodman, Noah et al. “Church: a language for generative models.” arXiv preprint arXiv:1206.3255 (2012).

[2] Wood, Frank, Jan-Willem van de Meent, and Vikash Mansinghka. “A New Approach to Probabilistic Programming Inference.” AISTATS 21 Nov. 2014: 1024-1032.

[3] Mansinghka, Vikash, Daniel Selsam, and Yura Perov. “Venture: a higher-order probabilistic programming platform with programmable inference.” arXiv preprint arXiv:1404.0099 (2014).

[4] Pfeffer, Avi. “Figaro: An object-oriented probabilistic programming language.” Charles River Analytics Technical Report 137 (2009).

[5] McCallum, Andrew, Karl Schultz, and Sameer Singh. “Factorie: Probabilistic programming via imperatively defined factor graphs.” Advances in Neural Information Processing Systems 2009: 1249-1257.

[6] Minka, Tom et al. “Infer.NET 2.5.” Microsoft Research Cambridge (2012).

New Operators for Reinforcement Learning – Greg Farquhar

Reinforcement learning (RL) aims to train systems that choose optimal actions given the state of their environment. It is often difficult to directly learn such an optimal policy, so we learn intermediate proxies from which policies may be induced. However, these intermediate goals are not always well aligned with the ultimate objective of maximising expected long-term rewards. To alleviate this problem and improve performance on the end task, new operators for value-based RL have been proposed which learn ‘wrong’ yet optimality-preserving action-values [1], and recent approaches attempt to learn models directly from rewards, without the intermediate goal of optimal predictive accuracy [2]. Further work in this space will help develop better RL methods with applications from robotic control to logistics or financial markets.
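
A minimal tabular sketch of where such an operator sits: the line marked "operator" below is the standard Bellman backup, and is exactly the point at which an alternative optimality-preserving operator in the spirit of [1] would be substituted. The chain environment and constants are invented for illustration:

import random
from collections import defaultdict

# Tabular Q-learning on a toy 5-state chain.
N_STATES, GAMMA, ALPHA, EPS = 5, 0.9, 0.1, 0.2
Q = defaultdict(float)

def step(state, action):
    """Move left (action 0) or right (action 1); reward 1 at the right end."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, float(nxt == N_STATES - 1)

for _ in range(5000):
    s = 0
    for _ in range(20):
        a = random.randrange(2) if random.random() < EPS else \
            max((0, 1), key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        target = r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)])   # operator: Bellman backup
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print([round(max(Q[(s, 0)], Q[(s, 1)]), 2) for s in range(N_STATES)])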

Inference and Probabilistic Programming in Reinforcement Learning – Max Igl

Recently, major advances in reinforcement learning for game playing, by now a widely accepted benchmark [1], have been made using Deep Q-Learning (DQN) [2]. However, current state-of-the-art methods still struggle with the combination of visual environments and structured hierarchical tasks.

In those cases, exploration using a flat policy is highly inefficient, as recurring subtasks, such as movement primitives, have to be re-learned in each situation. Several methods have been proposed to incorporate hierarchical policies, which impose structure on the search space and enable the re-use of subroutines [3]–[6]. However, it is not yet clear how the visual input and higher-level policies should be combined, or which higher-level policy representation should be used.
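
As a toy illustration of the kind of structure a hierarchy imposes, the sketch below has a high-level policy choose among reusable movement subroutines, each of which expands into a fixed sequence of primitive actions. The environment, option names and sequences are all invented:

import random

# Illustrative options-style hierarchy on a toy grid.
PRIMITIVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

OPTIONS = {                       # each option is a fixed primitive sequence
    "go_east":  ["right"] * 3,
    "go_south": ["down"] * 3,
}

def run_option(pos, option_name):
    """Execute a subroutine to completion, returning the final position."""
    x, y = pos
    for prim in OPTIONS[option_name]:
        dx, dy = PRIMITIVES[prim]
        x, y = x + dx, y + dy
    return (x, y)

# A flat policy would have to rediscover these primitive sequences in every
# new situation; the high-level policy instead searches over two options.
pos = (0, 0)
for _ in range(4):
    choice = random.choice(list(OPTIONS))
    pos = run_option(pos, choice)
    print(f"executed {choice}, now at {pos}")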

Perceiving Depth and Route Affordance for Autonomy over Very Challenging Terrain – Kevin Judd

The field of autonomous robotics is accelerating rapidly, and there has been significant research and development in areas such as driverless cars. However, these systems are focused on the challenges present in various traffic environments, which are relatively flat and well-structured. In comparison, there are many applications for autonomous vehicles in environments with very little inherent structure, such as rubble in a disaster area or the surface of an extraterrestrial body, as well as significantly challenging topography and terrain.

A Scalable, Robust, and Stable Approach to Signal Detection in Non-stationary Noise – Ivan Kiskin

Detecting signals in noise is a fundamental problem applicable to vastly diverse research areas, ranging from potential planet discovery and trend identification in finance to the detection of disease-bearing insects. The latter application, aimed at battling malaria, has received attention and funding after winning the 2014 Google Impact Challenge for its strong potential societal impact. The project aims to identify mosquito swarms through a distributed network of low-cost sensors. Correct identification maximises the chances of targeting affected areas with aid. Within the scope of the project, effective detection in challenging real-world conditions is vital to the success of the overall collaboration with the Royal Botanic Gardens, Kew.
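
A minimal sketch of the detection problem in its simplest form: a weak narrow-band tone (standing in for a wing-beat harmonic) buried in broadband noise, scored by band-limited spectral energy. The 600 Hz frequency, sample rate and noise model are illustrative and much simpler than the non-stationary conditions the project targets:

import numpy as np

fs, duration = 8000, 1.0                       # sample rate (Hz), length (s)
t = np.arange(int(fs * duration)) / fs
rng = np.random.default_rng(0)

signal = 0.2 * np.sin(2 * np.pi * 600 * t)     # weak target tone
noise = rng.normal(0.0, 1.0, t.shape)          # broadband background noise
recordings = {"with_target": signal + noise, "noise_only": noise}

for name, x in recordings.items():
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs > 550) & (freqs < 650)       # band of interest around 600 Hz
    score = spectrum[band].max() / np.median(spectrum)
    print(f"{name}: detection score = {score:.1f}")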

Robust Model-Based Policy Search – Kyriakos Polymenakos

During this research project we will combine machine learning (ML) techniques for constructing and tuning models and policies with formal methods and control theory. Our aim is, starting with an incomplete and uncertain model of the system dynamics, to design a controller which:

  • refines the model by intelligently exploring the environment
  • has verifiable properties such as safety and stability
  • approximates or achieves optimal performance given the above constraints

ML approaches have, for the most part, been concerned with finding optimal policies rather than with guarantees about properties of the system and its behaviour during training and in operation. System verification and robust control theory, on the other hand, usually deal with model uncertainty by establishing desirable system properties and investigating whether a system respects them, but with less focus on performance.
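
A toy sketch of the loop this suggests, with a scalar linear system standing in for the real dynamics: the learner alternates between collecting data under the current policy, refining a least-squares model, and improving the policy while keeping only gains the model predicts to be stable, a crude stand-in for the verifiable safety and stability properties above. Everything here is illustrative:

import numpy as np

rng = np.random.default_rng(0)

# Toy model-based policy search for a scalar system x' = a*x + b*u + noise.
# The true parameters are unknown to the learner.
A_TRUE, B_TRUE = 0.9, 0.5

def rollout(gain, steps=30):
    """Run the (unknown) real system under u = -gain*x plus exploration noise."""
    xs, us = [1.0], []
    for _ in range(steps):
        u = -gain * xs[-1] + 0.1 * rng.normal()   # small exploratory excitation
        us.append(u)
        xs.append(A_TRUE * xs[-1] + B_TRUE * u + 0.01 * rng.normal())
    return np.array(xs), np.array(us)

gain, data_x, data_u = 0.0, [], []
for iteration in range(5):
    # 1. Explore: collect data from the real system under the current policy.
    xs, us = rollout(gain)
    data_x.append(xs); data_u.append(us)

    # 2. Refine the model: least-squares fit of (a, b) from all data so far.
    X = np.concatenate([np.column_stack([x[:-1], u]) for x, u in zip(data_x, data_u)])
    y = np.concatenate([x[1:] for x in data_x])
    a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]

    # 3. Improve the policy under the model, keeping only gains the model
    #    predicts to be stable (|a - b*k| < 1).
    def model_cost(k, horizon=30):
        x, cost = 1.0, 0.0
        for _ in range(horizon):
            u = -k * x
            cost += x**2 + 0.1 * u**2
            x = a_hat * x + b_hat * u
        return cost
    safe_gains = [k for k in np.linspace(0, 3, 61) if abs(a_hat - b_hat * k) < 1]
    gain = min(safe_gains, key=model_cost)
    print(f"iteration {iteration}: a={a_hat:.2f} b={b_hat:.2f} gain={gain:.2f}")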

Connections between Probabilistic Machine Learning and System Identification & Control Theory – Nikitas Rontsis

In recent years there has been a surge of machine learning techniques applied in a variety of areas, including control systems. These techniques require very little prior knowledge about the system under control and are adjustable to changes in the system. However, they lack the formal guarantees and interpretability of the resulting models and controllers that are well-explored topics in classic control theory and system identification. The research will focus on combining and finding connections between the two fields. We will begin by examining an industry-motivated example system for Schlumberger and apply both standard system identification and machine learning methods to derive a model. We will then try to show circumstances under which one could be a generalization of the other. Afterwards, the same will be done for the design of a controller for the identified plant model.
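
One way to make the comparison concrete is to fit the same input-output data with both a classical least-squares ARX model and a nonparametric kernel regressor. The first-order plant below is invented and has nothing to do with the Schlumberger system:

import numpy as np

rng = np.random.default_rng(0)

# An illustrative first-order plant generates the identification data.
u = rng.uniform(-1, 1, 200)                     # input sequence
y = np.zeros(201)
for k in range(200):
    y[k + 1] = 0.8 * y[k] + 0.3 * u[k] + 0.02 * rng.normal()

X = np.column_stack([y[:-1], u])                # regressors: past output, input
target = y[1:]

# (i) System identification: linear ARX model with interpretable coefficients.
a_hat, b_hat = np.linalg.lstsq(X, target, rcond=None)[0]
print(f"ARX fit: y[k+1] = {a_hat:.3f} y[k] + {b_hat:.3f} u[k]")

# (ii) Machine learning: kernel ridge regression, flexible but less interpretable.
def rbf(A, B, length=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length**2))

alpha = np.linalg.solve(rbf(X, X) + 1e-3 * np.eye(len(X)), target)
x_test = np.array([[0.5, 0.2]])                 # query: y[k] = 0.5, u[k] = 0.2
print("kernel prediction:", rbf(x_test, X) @ alpha,
      "ARX prediction:", a_hat * 0.5 + b_hat * 0.2)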

Distributed Model Learning with Guarantees for Dynamical Systems – Timothy Seabrook

I propose to conduct my DPhil research in the field of Distributed Learning, to take advantage of edge computing and decentralise the computational effort away from large data centres. I would like to explore the development of local dynamic models within a network of agents, to be aggregated into a global hierarchical set of models. This would accelerate learning not only by splitting work, but also by facilitating the transfer of knowledge between agents, from global sets to new local models.
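
A toy sketch of the local-to-global idea: each agent fits a small model from its own observations and only the parameters (plus data counts) are shared, with weighted averaging standing in for a proper hierarchical combination. The linear models and numbers are purely illustrative:

import numpy as np

rng = np.random.default_rng(0)

TRUE_W = np.array([0.7, -0.2])   # shared underlying dynamics (illustrative)

def local_fit(n_obs):
    """An agent fits a linear model from its own n_obs local samples."""
    X = rng.normal(size=(n_obs, 2))
    y = X @ TRUE_W + 0.05 * rng.normal(size=n_obs)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_obs

local_models = [local_fit(n) for n in (20, 50, 200)]   # three agents

# Aggregation at the global level: combine local parameters, weighting by
# the amount of data behind each, instead of shipping raw observations.
weights = np.array([n for _, n in local_models], dtype=float)
global_w = sum(w * n for w, n in local_models) / weights.sum()
print("aggregated model parameters:", global_w)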