University of Bristol

About Us

The Visual Information Laboratory of the University of Bristol exists to undertake innovative, collaborative and interdisciplinary research resulting in world-leading technology in the areas of computer vision, image and video communications, content analysis and distributed sensor systems. VI-Lab was formed in 2010 by merging two well-established research groups, Signal Processing (EEEng) and Computer Vision (CS). The two constituent groups offer shared and complementary strengths and, with a history of successful collaboration since 1993, their merger has created one of the largest groupings of its type in the UK.

VILSS: Human Action Recognition and Detection from Noisy 3D Skeleton Data

Mohamed Hussein, Egypt-Japan University of Science and Technology. Human action recognition and human action detection are two closely related problems. In human action recognition, the purpose is to determine the class of an action performed by…
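
As a point of reference only, the sketch below shows one simple way to classify actions from 3D skeleton sequences: per-frame joint coordinates are root-centred and temporally pooled into a fixed-length descriptor for an off-the-shelf classifier. The data layout and the scikit-learn SVM are assumptions made for illustration, not the speaker's method.

```python
# Illustrative sketch only: classify actions from 3D skeleton sequences by
# pooling per-frame joint coordinates into a fixed-length descriptor.
# The data layout and the SVM classifier are assumptions, not the speaker's method.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def skeleton_descriptor(seq):
    """seq: (n_frames, n_joints, 3) array of 3D joint positions."""
    seq = seq - seq[:, :1, :]            # centre every frame on the root joint
    flat = seq.reshape(len(seq), -1)     # (n_frames, n_joints * 3)
    # Temporal pooling: per-dimension mean and standard deviation
    return np.concatenate([flat.mean(axis=0), flat.std(axis=0)])

def train_action_classifier(sequences, labels):
    """sequences: list of (n_frames, n_joints, 3) arrays; labels: action classes."""
    X = np.stack([skeleton_descriptor(s) for s in sequences])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```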

Terrain analysis for biped locomotion

Numerous scenarios exist where it is necessary or advantageous to classify surface material at a distance from a moving, forward-facing camera. Examples include the use of image-based sensors for assessing and predicting terrain type…
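
A minimal sketch of the general idea (texture classification of image patches), assuming local binary pattern features and a random-forest classifier; these choices are illustrative and not necessarily those used in the project.

```python
# Illustrative sketch only: classify terrain patches by their texture using
# local binary pattern histograms and a random forest. Patch size, LBP
# parameters and the classifier are assumptions made for this example.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1  # LBP neighbourhood: 8 samples at radius 1

def patch_features(gray_patch):
    """gray_patch: 2D greyscale image patch cropped from a camera frame."""
    lbp = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_terrain_classifier(patches, labels):
    """patches: list of greyscale patches; labels: terrain class per patch."""
    X = np.stack([patch_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X, labels)
    return clf
```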

Mitigating the effects of atmospheric turbulence on surveillance imagery

Various types of atmospheric distortion can influence the visual quality of video signals during acquisition. Typical distortions include fog or haze, which reduce contrast, and atmospheric turbulence due to temperature variations or aerosols. An effect…
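
For illustration only, a crude temporal-fusion baseline for turbulence mitigation: neighbouring frames are warped to a reference frame with dense optical flow and then median-filtered per pixel. The OpenCV Farneback flow and the window median are assumptions made for this sketch, not the lab's algorithm.

```python
# Illustrative sketch only: warp a short window of frames to the centre frame
# with dense optical flow, then take a per-pixel temporal median. This is a
# crude baseline, not the lab's turbulence-mitigation algorithm.
import cv2
import numpy as np

def fuse_window(gray_frames):
    """gray_frames: list of 8-bit greyscale frames centred on the target frame."""
    ref = gray_frames[len(gray_frames) // 2]
    h, w = ref.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    warped = []
    for frame in gray_frames:
        # Flow from the reference to this frame, so ref(y, x) ~ frame(y+dy, x+dx)
        flow = cv2.calcOpticalFlowFarneback(ref, frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        warped.append(cv2.remap(frame, xs + flow[..., 0], ys + flow[..., 1],
                                cv2.INTER_LINEAR))
    return np.median(np.stack(warped), axis=0).astype(np.uint8)
```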

Computer Assisted Analysis of Retinal OCT Imaging

Texture-preserving image enhancement for Optical Coherence Tomography. This project developed novel image enhancement algorithms for retinal optical coherence tomography (OCT). These images contain a large amount of speckle, causing them to be grainy and of…
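
As a hedged point of comparison only, here is a standard off-the-shelf speckle-reduction baseline (non-local means) applied to an 8-bit OCT B-scan; the parameter values are assumptions, and this is not the texture-preserving method developed in the project.

```python
# Illustrative baseline only, not the project's texture-preserving algorithm:
# apply OpenCV's non-local means denoiser to an 8-bit greyscale OCT B-scan
# to suppress speckle. The parameter values are assumptions for this example.
import cv2

def denoise_bscan(bscan_u8):
    """bscan_u8: 8-bit greyscale OCT B-scan (2D array)."""
    return cv2.fastNlMeansDenoising(bscan_u8, None,
                                    h=10,                   # filter strength
                                    templateWindowSize=7,
                                    searchWindowSize=21)
```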

VILSS: Global description of images. Application to robot mapping and localisation

VILab Seminar with speaker Luis Payá from Miguel Hernández University, Spain

Deep Driving: Learning Representations for Intelligent Vehicles

Together with the University of California at Berkeley, the University of Jena, NICTA and Daimler, we are organising a workshop on representation learning at the IEEE Intelligent Vehicles Symposium in Gothenburg, Sweden. Vision is a rich and…

Monitoring Vehicle Occupants

Visual Monitoring of Driver and Passenger Control Panel Interactions. Researchers: Toby Perrett and Prof. Majid Mirmehdi. Overview: Advances in vehicular technology have resulted in more controls being incorporated in cabin designs. We present a system…

High Frame Rate Video

As the demand for higher-quality and more immersive video content increases, the need to extend the current video parameter space of spatial resolutions and display sizes to include, among other things, a wider colour gamut, higher dynamic range…

Fake eyes? – How Eyespots work

BVI Seminar with speaker Dr. Karin Kjernsmo from the University of Bristol.

What’s on TV: A Large-Scale Quantitative Characterisation of Modern Broadcast Video Content

What does one year of modern broadcast video look like? This project analyses the spatial and temporal diversity of contemporary video to inform how new video technologies are tested.
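
One common way to quantify spatial and temporal diversity is the SI/TI pair from ITU-T P.910 (Sobel energy of each frame and frame-difference energy). Whether the project used exactly these measures is an assumption; the sketch below simply shows the kind of per-sequence analysis involved.

```python
# Illustrative sketch only: the SI/TI measures of ITU-T P.910 are one common
# way to quantify the spatial and temporal diversity of a video sequence.
# Whether the project used exactly these measures is an assumption.
import cv2
import numpy as np

def si_ti(gray_frames):
    """gray_frames: iterable of greyscale frames (2D arrays) from one sequence."""
    si_values, ti_values = [], []
    prev = None
    for frame in gray_frames:
        frame = frame.astype(np.float32)
        gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0)   # horizontal Sobel gradient
        gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1)   # vertical Sobel gradient
        si_values.append(np.sqrt(gx ** 2 + gy ** 2).std())
        if prev is not None:
            ti_values.append((frame - prev).std())  # frame-difference energy
        prev = frame
    # P.910 takes the maximum over the sequence for both measures
    return max(si_values), max(ti_values)
```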

Next Page »