Learning to analyse traffic patterns in developing countries using a neural network

Road traffic accidents are one of the leading causes of death in many Low- and Middle-Income Countries (LMICs). Infrastructure is limited and typically prioritises the needs of motorised transport. City authorities in LMICs would benefit from rapid assessments of road traffic hazards at key locations (road crossings, other transport infrastructure, etc.), both to assess needs and to determine the impact of any road improvements.

In the UK, we collect data from sensors embedded in the road or mounted on street furniture around junctions. These data enable the calculation of metrics such as flow, occupancy and velocity, which are used to identify traffic movements, congestion and incidents. If video footage is available, it can also be used to identify and count vehicles by type.
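To make the metrics concrete, here is a minimal sketch of how flow and occupancy might be derived from a detector log. The data format (timestamped arrival/departure pairs for an inductive-loop sensor) and all values are illustrative assumptions, not the actual UK sensor pipeline.

```python
# Hypothetical sketch: deriving flow and occupancy from a loop-detector log.
# Each record is (arrival_time_s, departure_time_s) for one vehicle passing
# over the sensor. The format and numbers are illustrative only.

def flow_and_occupancy(actuations, interval_s):
    """Return (flow in vehicles/hour, occupancy as a fraction of time)."""
    count = len(actuations)
    occupied = sum(depart - arrive for arrive, depart in actuations)
    flow_per_hour = count * 3600.0 / interval_s
    occupancy = occupied / interval_s
    return flow_per_hour, occupancy

# Example: 3 vehicles in a 60 s interval, each on the sensor ~0.5 s.
log = [(1.0, 1.5), (20.0, 20.4), (45.0, 45.6)]
flow, occ = flow_and_occupancy(log, interval_s=60.0)
print(round(flow), round(occ, 3))  # 180 vehicles/hour, 0.025 occupancy
```

Velocity would additionally require paired sensors a known distance apart, or video tracking as described below.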

This advanced infrastructure is lacking in many LMICs. Here, drone footage would allow the rapid collection of imagery that could then be used to identify vehicles, their speeds and density, transport modes, and the interactions between modes (cars vs. trucks, cars vs. motorbikes, cars vs. pedestrians, etc.). It is a much cheaper and more flexible approach to traffic analysis and management, requiring no new traffic infrastructure.

This project will automate the identification and analysis of different transport modes from such drone video footage. We have already successfully installed a widely available deep neural network (DNN) that can identify cars from drone footage relatively accurately (Fig 1).
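A detector of this kind typically emits, per video frame, a list of bounding boxes with class labels and confidence scores. The DNN itself is assumed here; this sketch shows only the downstream filtering and counting step, with an illustrative detection format and threshold.

```python
# Hypothetical post-processing of per-frame detector output. Each detection
# is (class_name, confidence, (x1, y1, x2, y2)); the detector producing them
# is assumed, and the confidence threshold is illustrative.

from collections import Counter

def count_by_class(detections, min_conf=0.5):
    """Count detections per class, discarding low-confidence boxes."""
    kept = [cls for cls, conf, _box in detections if conf >= min_conf]
    return Counter(kept)

frame = [
    ("car", 0.92, (10, 20, 60, 50)),
    ("car", 0.81, (100, 30, 150, 65)),
    ("car", 0.31, (200, 40, 230, 60)),  # below threshold: dropped
]
print(count_by_class(frame))  # Counter({'car': 2})
```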

However, the network must be trained to identify other types of vehicle (trucks, bikes) and pedestrians. The student will label existing video footage generated by the University of York and use this labelled data to train the network further using a well-established pipeline.
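Labelling tools commonly export one annotation file per frame containing normalised bounding boxes. The exact format depends on the chosen pipeline; the YOLO-style layout and class IDs below are an assumed example of converting pixel-coordinate labels into that form.

```python
# Hypothetical label conversion: pixel-coordinate boxes to YOLO-style
# normalised "class x_center y_center width height" lines. The class-ID
# mapping and image dimensions are illustrative.

CLASS_IDS = {"car": 0, "truck": 1, "bike": 2, "pedestrian": 3}

def to_yolo_line(cls, box, img_w, img_h):
    """Convert a (x1, y1, x2, y2) pixel box to one normalised label line."""
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w   # box centre, as a fraction of image width
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w        # box size, as a fraction of image size
    h = (y2 - y1) / img_h
    return f"{CLASS_IDS[cls]} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

print(to_yolo_line("truck", (100, 200, 300, 400), img_w=1920, img_h=1080))
```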

This may be followed by risk (proximity) assessments and hot-spot identification for predicting accidents. The project has the potential to analyse traffic congestion and to support decisions that reduce both congestion and, potentially, traffic fatalities across a wide range of low- and middle-income countries.
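One simple form of proximity assessment is flagging pairs of tracked road users that come within a threshold distance of each other in the same frame; persistent clusters of such flags would mark hot spots. This sketch assumes georeferenced positions in metres and an illustrative threshold, neither of which is specified by the project.

```python
# Hypothetical proximity assessment: flag pairs of tracked road users closer
# than a threshold distance. Positions are (track_id, x, y) in metres after
# georeferencing; the 2 m threshold is illustrative.

from itertools import combinations
from math import hypot

def close_pairs(positions, threshold_m=2.0):
    """Return pairs of track IDs whose separation is below threshold_m."""
    flagged = []
    for (i, xi, yi), (j, xj, yj) in combinations(positions, 2):
        if hypot(xi - xj, yi - yj) < threshold_m:
            flagged.append((i, j))
    return flagged

frame = [("car_1", 0.0, 0.0), ("ped_1", 1.5, 0.5), ("car_2", 10.0, 10.0)]
print(close_pairs(frame))  # [('car_1', 'ped_1')]
```

Aggregating flagged pairs over time and binning them spatially would give a first-pass hot-spot map.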


Figure 1: Automatically labelled drone footage: in a single video frame, 10 cars are identified automatically. The labelling works accurately at more than 30 fps on UoY drone footage.

Supervisors

Dr Victoria Hodge, Senior Researcher, DC Labs
Dr Steve Cinderby, Senior Researcher, Stockholm Environment Institute
Dr Alex Wade, Professor, Department of Psychology