University of Bristol

VI-Lab PhD Research Opportunities

Funded Opportunities

VI-Lab currently has fully funded PhD opportunities in the following areas:


Title: Optimised acquisition and coding for immersive formats based on visual scene analysis

Supervisor: Professor David Bull

Funding: EPSRC iCASE award with BBC Research and Development

Deadline: Open until filled

Start Date: October 2018 (latest)

Description: There is a hunger for new, more immersive video content (UHDTV, 360° etc.) from users, producers and network operators. Efforts in this respect have focused on extending the video parameter space with greater dynamic range, spatial resolution, temporal resolution, colour gamut, interactivity and display size / format. There is, however, very limited understanding of the interactions between these parameters and their relationship to content statistics, visual immersion and delivery methods. The way we represent these immersive video formats is thus key to ensuring that content is delivered at an appropriate quality: one which preserves the intended immersive properties of the format while remaining compatible with the bandwidth and variable nature of the transmission channel. Major research innovations are needed to solve this problem.

The research challenges to be addressed are based on the hypothesis that, by exploiting the perceptual properties of the Human Visual System and its content-dependent performance, we can obtain step changes in visual engagement while also managing bit rate. We must therefore: i) understand the perceptual relationships between video parameters and content type; and ii) develop new visual content representations that adapt to content statistics and their immersive requirements. The solution to this problem will focus on exploiting machine learning methods to classify scene content and relate it to the extended video parameter space.


Title: Understanding and Measuring Visual Immersion

Supervisor: Professor Iain Gilchrist, Professor David Bull

Funding: EPSRC iCASE award with BBC Research and Development

Deadline: Open until filled

Start Date: October 2018

Description: Immersion is a psychological phenomenon that plays a large part in determining our enjoyment when viewing mediated content. New media formats for cinema and broadcast, such as High Dynamic Range (HDR), High Frame Rate (HFR) video, wider colour gamuts, 360° content and VR, promise effective, exciting and novel ways to deliver performance. These technologies introduce new mediation processes between artistic performance and audience appreciation, so the central question is the effect that this technology has on immersion. In this project we will, for the first time, enable the development and deployment of non-invasive measures of individual or collective immersion. This will not only help us to understand the immersive properties of the narrative but also provide a dynamic means of informing editorial decision making and assessing the incremental value of technology over narrative. We will investigate personalised immersion measures based on psychophysics and physiological measurements, and develop instrumentation for measuring collective immersion, using this to evaluate both conventional and new formats.


Title: Automated Volumetrics for Stroke using Deep Learning

Supervisors:  Prof. Majid Mirmehdi,  Dr Phil Clatworthy

Funding: Available for a successful applicant.

Deadline: January 22nd (all candidates except Chinese applicants); January 15th (Chinese applicants)

Start Date: September 2018, but negotiable for a later start

Description: Stroke is a devastating illness in which clinical decisions are often time critical. Ischaemic stroke is the death of a part of the brain due to blockage of the artery supplying that area. There are many circumstances in which rapid and reliable measurement of the volume of an ischaemic stroke is likely to be extremely useful in making clinical decisions, such as determining the relative risk and benefit of urgent treatments, including intravenous thrombolysis (“clot-busting”) and intra-arterial thrombectomy (mechanical clot removal). This project will involve developing deep learning methods with the objective of making key measurements on CT and MRI scans of the brain to allow time-critical decisions to be made with more confidence.


Title:      PhD in Computer Vision

Supervisor:   Prof. Majid Mirmehdi

Funding:     Various sources such as DTP/UOB/CSC or self-funded

Deadline:   January 15th for CSC applicants, January 22nd for all non-Chinese applicants

Start Date:   October 2018

Description: I am open to supervising students who are interested in Computer Vision (including the application of Machine Learning techniques). I can propose various projects but am also happy to hear from candidates who wish to propose their own ideas. General areas of interest include Human Motion and Action Analysis, Scene Understanding, Healthcare Monitoring, Autonomous Vehicles, Vision for Robots, and Medical Image Analysis.


Title: Low-latency machine learning with neural networks in multi-modal imaging

Supervisor: Dr Jose Nunez-Yanez and Professor Dave Bull

Funding: CSC Scholarships / self-funded

Deadline: Open until filled

Start date: Flexible

Description: This project aims to investigate a low-latency camera and hardware set-up that will be able to capture high-resolution videos, extract regions of interest and perform data enhancement and fusion across different spectral bands. The outputs of this image pre-processing will then drive an FPGA-based deep convolutional neural network that performs training and inference in real time. The FPGA (Field Programmable Gate Array) accelerator will use very low precision arithmetic and dataflow techniques to support rates in the range of thousands of frames per second. Image modalities considered will include visible light, infrared, ultrasound and others. Applications of this low-latency technology include medical diagnosis, autonomous navigation and haptic feedback, among others. Many of these applications require response times of the order of milliseconds, and for this reason high-performance hardware acceleration is a requirement.


Unfunded Research Areas of Interest

VI-Lab academic staff will accept PhD applications from self-funded students or scholarship applicants in the following areas. Please note that for overseas scholarships, all applications must be complete by December 31, 2017.


Title: Mitigating the effects of atmospheric distortions on surveillance video

Supervisor: Professor David Bull, Dr Alin Achim

Brief description: The influence of atmospheric turbulence on acquired surveillance imagery makes image interpretation and scene analysis extremely difficult and reduces the effectiveness of conventional approaches for detecting, classifying or tracking targets in the scene. This project will address this issue using supervised machine learning; the turbulent distortion, camera motion and target trajectories will be modelled using deep recurrent convolutional neural networks. These trained networks will improve image or video quality and will also support real-time applications.


Title: Perceptual image and video denoising

Supervisor: Professor David Bull, Dr Alin Achim, Professor Iain Gilchrist

Brief Description: Noise is a primary limiting factor in imaging systems, influencing both perceived visual quality and task-related performance. It impacts applications from video streaming and autonomous locomotion to medical and scientific measurement. Whether noise is caused by sensor limitations, environmental influences or transmission loss, its mitigation is essential: a surveillance threat obscured by noise, or a missed medical anomaly, could be a matter of life and death. Understanding human perception of noise and our ability to 'see through it' is thus key to developing optimised denoising methods that will transform machine and human performance and enable more compelling and informative visual content.


Title: Visual aids for the visually impaired

Supervisor: Professor David Bull, Professor Iain Gilchrist, Dr. J. Burn

Brief description: This research will investigate how human perception and low-level features drive decisions on foot placement and path selection during traversal of complex terrain. The outcome of this analysis will be employed to develop a novel framework for assisting visually impaired locomotion, encompassing short range (footstep prediction and classification for safe mobility) and long range (scene understanding and detection for path planning and awareness). This framework will be inspired by the human visual system, working in real time with smart glasses and applied directly to people with visual impairment to improve their well-being. The project will build on the supervisors' previous results in feature analysis, eye tracking and terrain classification for robotics.


Title: Partial reference visual quality metrics

Supervisor: Professor David Bull

Brief description: It is frequently important to judge the quality of a compressed video when no reference is available (for example after transmission over a distorting channel). In such cases it is imperative to assess the quality based on the received content, possibly in conjunction with a small amount of side information. This project will investigate efficient and robust means of achieving this.


Title: Cognitive video compression

Supervisor: Professor David Bull, Dr. Roland Baddeley

Brief description: There is currently significant activity linked to the development of new standards for the representation and coding of higher spatio-temporal resolutions, HDR content and 360° immersive formats. A number of key challenges are associated with these emerging immersive formats (particularly 360° formats), where 'immersion-breaking' artefacts arising from acquisition, compression and display must be avoided. Hence there must be a high emphasis on compression performance, particularly for 6DoF 360° formats, where raw bit rates can be extreme and delivery demands compression ratios of many thousands to one. New coding and delivery techniques are therefore needed to deliver content at manageable bit rates while ensuring that the immersive properties of the format are preserved.


Title: Cell imaging and analytics

Supervisor: Dr. Alin Achim, Professor Paul Verkade, Professor David Bull

Brief description: This project will investigate how autofluorescence can be exploited in conjunction with reflected contrast microscopic imaging in a bioreactor, with application to biopharmaceutical manufacture and regenerative medicine. It will address the problems of image enhancement, dealing with opacity, blur, turbidity and geometric distortions, and will extract image features that correlate with cell physiological state. It will then investigate means of cell state classification.