Moderator:

Presentations:



A FEASIBILITY STUDY OF SHORT LINE SEMI-HIGH SPEED ELECTRIC RAILWAY IN CANADA: CASE STUDY OF OKANAGAN VALLEY RAILWAY
Primary Author: Elham Boozarjomehri, UBC Okanagan (elibuzi@interchange.ubc.ca)
Co-Authors: Gordon Lovegrove (gord.lovegrove@ubc.ca)

Across North America there is an increasing demand for
faster, more sustainable modes of transportation.
Furthermore, with the expected high level of congestion in
freight shipping lines across the continent, alternate
routes will be required. One alternative of strong interest
is the placement of a semi-high speed electric railway line
through the Okanagan Valley in British Columbia, Canada.
This line would not only serve the Okanagan Valley, a
region currently served only by rubber-tired (road) modes, but also
provide an additional link between the railway lines in the
United States and the rest of Canada. The proposed line
would run from Osoyoos, where it would connect to existing
lines in the US, to Vernon, where it could connect, via an
existing line to Kamloops, to railway lines across Canada.
This study investigates the economic and social aspects of
the proposed railway line through the Okanagan. To this end,
a present worth and benefit/cost analysis was conducted
using estimates from literature reviews on similar regions.
As few railway studies exist for the Okanagan Valley,
estimates were made to assess whether full analyses in the
areas where knowledge gaps exist would be worthwhile.
Furthermore, it is of interest to
identify the areas for which further research is required to
obtain a more accurate estimation of the costs and benefits
of an electric railway through the Okanagan Valley. It was
determined that the proposed railway may be economically
feasible in the near future.
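The present worth and benefit/cost comparison described above can be sketched in a few lines; all of the figures below (discount rate, horizon, and dollar amounts) are hypothetical placeholders for illustration, not values from the study:

```python
def present_worth(cash_flows, rate):
    """Discount a list of (year, amount) cash flows to present worth."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# Hypothetical illustration only; these are assumed numbers, not the
# study's estimates.
rate = 0.05                                   # assumed discount rate
benefits = [(t, 12.0) for t in range(1, 31)]  # annual benefits, $M
costs = [(0, 150.0)] + [(t, 3.0) for t in range(1, 31)]  # capital + O&M, $M

pw_benefits = present_worth(benefits, rate)
pw_costs = present_worth(costs, rate)
bc_ratio = pw_benefits / pw_costs  # a ratio above 1 suggests feasibility
```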




Temporal Aggregation Effects on ITS Data Applications
Primary Author: Alexander Bigazzi, Portland State University (abigazzi@pdx.edu)
Co-Authors: None

ITS data are a valuable resource for traffic management,
performance measurement, and transportation research. Most
regions aggregate ITS data for transmission and storage,
saving only mean values for given time intervals. The most
common aggregation intervals range from 20 seconds to 15
minutes. The aggregation process loses information that is
necessary for some applications, such as the distribution of
data and the exact time of individual events. This research
project looked at two such applications to assess the
effects of temporal aggregation on the utility of ITS data.

The first data application investigated was the measurement
of shockwave speeds at a freeway bottleneck. Traffic data
were used to measure the moving speed of traffic-state
transitions as queues formed at a freeway bottleneck. For
this purpose, loop detector data were analyzed in their
disaggregated state to estimate shockwave speed with oblique
vehicle-speed plots. For comparison, the detector data were
aggregated to various time intervals (20 seconds, 30
seconds, 1 minute, 5 minutes, 15 minutes, and 1 hour) and a
similar analysis was performed to estimate shockwave speed.
Larger aggregation intervals led to greater error in
estimated shockwave speeds, illustrating the utility of
discrete traffic data.

The second data application was measuring speed
distributions on a freeway segment. Again the analysis was
performed with both disaggregated traffic data and data
aggregated over six different time intervals. The results
show that aggregating freeway data greatly reduces the
variance in individual vehicle speeds. As examples of the
impacts of this effect, this research shows that the reduced
speed variance will distort such estimates as CO emissions
and travel delay.
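The variance-reduction effect can be reproduced with a small simulation; the speed distribution below is synthetic, not the study's data:

```python
import random
import statistics

random.seed(1)
# Synthetic individual vehicle speeds (km/h), one record per vehicle
speeds = [random.gauss(90, 12) for _ in range(3600)]

def aggregate(values, n):
    """Keep only the mean of every n consecutive records, mimicking
    temporal aggregation of ITS detector data."""
    return [statistics.mean(values[i:i + n])
            for i in range(0, len(values), n)]

raw_var = statistics.variance(speeds)
for n in (20, 60, 300):          # records per aggregation interval
    agg_var = statistics.variance(aggregate(speeds, n))
    assert agg_var < raw_var     # aggregation understates speed variance
```

The variance of interval means shrinks roughly as 1/n, so any downstream estimate that depends on the spread of individual speeds (emissions, delay) sees far less variability than actually occurred.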





ASSESSMENT OF AN OPTIMAL BUS STOP SPACING MODEL USING HIGH RESOLUTION ARCHIVED STOP-LEVEL DATA
Primary Author: Huan Li, Portland State University (huanl@pdx.edu)
Co-Authors: Robert L. Bertini (bertini@pdx.edu)

With increasing attention being paid to performance and
financial issues related to the operation of public
transportation systems, it is necessary to develop tools
for improving the efficiency and effectiveness of service
offerings. With the availability of high resolution
archived stop-level bus performance data, it is shown that
a bus stop spacing model can be generated and tested with
the aim of minimizing the operating cost while maintaining
a high degree of transit accessibility. In this paper, two
cost components are considered in the stop spacing model:
passenger access cost and in-vehicle passenger stopping
cost. These are combined and optimized to minimize total
cost. A case study is conducted using one bus route
in Portland, Oregon, using one year's stop-level archived
Bus Dispatch System (BDS) data provided by TriMet, the
regional transit provider for the Portland metropolitan
area. Based on the case study, the theoretical optimized
bus stop spacing is 1,200 feet compared to the current
value of 886 feet. The paper discusses trade-offs and
presents an estimate of transit operating cost savings
based on the optimized spacing. Given the availability of
high resolution archived data, the paper illustrates that
this modeling tool can be applied in a routine way across
multiple routes as part of an ongoing service planning and
performance measurement process.
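The trade-off the model optimizes can be sketched numerically; the cost coefficients below are hypothetical placeholders, not TriMet's calibrated values:

```python
def total_cost(spacing_ft, access_coeff, stopping_coeff):
    """Access cost rises with stop spacing (longer walks), while
    in-vehicle stopping cost falls with it (fewer stops per mile)."""
    return access_coeff * spacing_ft + stopping_coeff / spacing_ft

# Hypothetical coefficients chosen only to illustrate the optimization
a, b = 0.002, 3000.0
best = min(range(200, 3001, 10), key=lambda s: total_cost(s, a, b))
# the closed-form optimum of a*s + b/s is sqrt(b/a)
```

With these placeholder coefficients the grid search lands near sqrt(b/a) ≈ 1225 ft; any resemblance to the paper's 1,200-foot result is a coincidence of the chosen coefficients.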




AUTOMATED COLLECTION OF PEDESTRIAN DATA USING COMPUTER VISION TECHNIQUES
Primary Author: Karim Ismail, UBC (karim.ismail@bitsafs.ca)
Co-Authors: Dr. Tarek Sayed, Nicolas Saunier, Clark Lim

Pedestrian data collection is critical for the planning and
design of pedestrian facilities. Most pedestrian data
collection efforts involve field observations or observer-
based video analysis. These manual observations are time
consuming, limited in coverage, resource intensive and
error prone. Automated video analysis which involves the
use of computer vision techniques can overcome many of
these shortcomings. Despite advances in the field of
computer vision applications for pedestrian detection and
tracking, the technical literature shows little use of
these techniques in pedestrian data collection practices.
The likely reasons are the technical complexities that
surround the processing of pedestrian videos. To extract
pedestrian trajectories automatically from video, all road
users must be detected, tracked at each frame and
classified by type, at least as pedestrians and non-
pedestrians. This is a challenging task in busy, open,
outdoor urban environments. Common problems include global
illumination variations, multiple object tracking and
shadow handling. Specific problems arise when dealing with
pedestrians because of their complex movement dynamics,
varied appearance and non-rigid nature. The main objective
of this study is to present a system for automated
collection of pedestrian walking speed using computer
vision techniques. The system is based on a previously
developed feature-based tracking system for vehicles which
was significantly modified to adapt to the particularities
of pedestrian movement and to discriminate pedestrian and
motorized traffic. The system was tested on real video data
collected in the downtown area of Vancouver, British Columbia.
This study is unique insofar as it tests the system under
a variety of daylight conditions, crowd densities, movement
contexts, and video analysis approaches. Promising results
were obtained and several conclusions were drawn using
statistical analysis of the automatically extracted
pedestrian trajectories.
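Once trajectories are extracted, walking speed is a simple derived quantity. The helper below is an illustrative sketch; the frame rate and world-coordinate representation are assumptions, not the system's actual interface:

```python
import math

def walking_speed(trajectory, fps):
    """Mean walking speed (m/s) from per-frame (x, y) world
    coordinates of a single tracked pedestrian."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]))
    duration = (len(trajectory) - 1) / fps
    return dist / duration

# a straight 3 m walk captured over 2 s at 30 frames/s
traj = [(0.05 * i, 0.0) for i in range(61)]
speed = walking_speed(traj, 30)  # 1.5 m/s
```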




A RISK/COST-BASED ALGORITHM FOR THE ROUTING OF DANGEROUS GOODS
Primary Author: Karim ElBasyouny, UBC (kemozz@interchange.ubc.ca)
Co-Authors: Clark Lim (clim@bitsafs.ca), Derek Cheng, Tarek Sayed (tsayed@civil.ubc.ca)

This project identifies and formulates some of the most
common dangerous goods (DG) routing criteria based on
readily available datasets unique to the province of
British Columbia. To make large-
scale implementation possible, all the information is
incorporated into a GIS environment. GIS capabilities make
it possible to relate DG movements to other spatial data,
thereby facilitating the assessment of the possible
consequences in the event of an incident. Each of the
investigated routing criteria attempts to characterize risk
based on different objectives. Some of these objectives
might be conflicting. The tradeoffs (if any) between the
different routing criteria are examined. Finally, a risk
and cost-based dangerous goods routing algorithm (DGRA) is
developed to combine these routing criteria. The risk/cost-
based DGRA utilizes an optimal routing algorithm that
considers all the different routing criteria in a GIS
environment to identify optimal routing strategies.

The risk/cost-based DGRA focuses on mitigating the risks
associated with the transportation of DG via route
selection. The algorithm is applied to a large-scale
transportation network representing the Metro Vancouver area.
The network is represented spatially in a GIS database
along with a real-time dispersion plume simulating a
specific chemical release under local weather conditions.
GIS facilitates the comparison between the various criteria
by overlaying transportation networks characteristics on
other spatially referenced data, such as population
demographics or meteorological data. The algorithm and
general methodology are used for the routing of dangerous
goods on-demand, serving individual shipments in a
permitting environment. This differs from the use of
such algorithms for just the planning of designated DG
routes. The uniqueness of the project is in
the 'normalization' of risks and operating costs such that
a cost-based DG routing optimization is achieved.
Furthermore, the practicality of the algorithm is
demonstrated by developing a computer application using
Canadian and B.C. datasets for the calculation of
generalized route costs in the Metro Vancouver area.
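The core of such an algorithm is a shortest-path search over a generalized route cost that combines normalized risk and operating cost on each edge. The sketch below uses a textbook Dijkstra search on a tiny hypothetical network; the edge attributes and weights are illustrative assumptions, not the project's datasets:

```python
import heapq

def shortest_path(graph, source, target, weight):
    """Dijkstra over a graph given as node -> list of (neighbor, attrs)."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, attrs in graph.get(u, []):
            nd = d + weight(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[target]

# Hypothetical network: each edge carries an operating cost ($) and a
# population-exposure risk already normalized to dollar terms.
G = {
    "A": [("B", {"cost": 10, "risk": 50}), ("C", {"cost": 15, "risk": 5})],
    "B": [("D", {"cost": 10, "risk": 50})],
    "C": [("D", {"cost": 15, "risk": 5})],
}

def combined(edge):
    return edge["cost"] + edge["risk"]  # generalized route cost

path, total = shortest_path(G, "A", "D", combined)
```

Here the cheaper-but-riskier route through B loses to the route through C once risk is monetized, which is the essence of the normalization the project describes.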





Travel Time Estimation in an Urban Network Using Sparse Probe Vehicle Data and Historical Travel Time Relationships
Primary Author: Mohamed ElEsawey, UBC (elesawey@civil.ubc.ca)
Co-Authors: Tarek Sayed (tsayed@civil.ubc.ca)

This research proposes an approach to provide travel time
estimates on a network using data from part of the network
only. This applies to the problem of having a small sample
of probes that do not cover an entire network. The method
makes use of sparse probe vehicle data along with travel
time correlation between neighbor links. By developing
travel time relationships between neighbor links, a
relatively small sample of probes can be used to estimate
travel times on part of the network and then the developed
relationships can be used to extend travel time estimation
to the whole network. In practice, to apply this approach,
historical travel time data need to be first collected for
the entire network to develop the required travel time
relationships. To investigate and test the method, a
microsimulation model for downtown Vancouver was developed
using VISSIM. The model was updated and modified according
to recent network changes and then converted into a dynamic
model. Travel time data were generated using five demand
levels and two hours of simulation. Travel times were
obtained from 25 segments in the same direction.
Correlation matrices between all travel time segments were
developed for different aggregation intervals. The
correlation was found to increase as the aggregation
period increased. As well, high travel time correlation was
found for consecutive links and nearby parallel links. A
correlation threshold was selected and used to define a set
of 'neighbors' for each link. Statistical models were then
developed to relate link travel time with the neighbors'
travel times. The models were validated using two
simulation runs for two different demand levels. Error
measurements indicated a good fit of the developed models.
Simple weighting schemes were used to fuse estimates of
different models to enhance the travel time estimation. The
Mean Absolute Percentage Error (MAPE) of travel time
estimates ranged between 1.91% and 9.48% for the applied
weighting schemes. The method should prove useful for
estimating travel times on links that do not have probe
vehicles, based on historical travel time correlations.
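The error metric and the fusion step can be sketched directly; the travel times and weights below are hypothetical, not the simulation outputs:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error between two travel time series."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def fuse(estimates, weights):
    """Weighted combination of several models' travel time estimates."""
    total_w = sum(weights)
    return [sum(w * e[i] for w, e in zip(weights, estimates)) / total_w
            for i in range(len(estimates[0]))]

# hypothetical link travel times (s) and two neighbor-model estimates
observed = [120.0, 135.0, 150.0]
model_a = [118.0, 140.0, 149.0]
model_b = [125.0, 133.0, 155.0]
fused = fuse([model_a, model_b], [0.6, 0.4])
error = mape(observed, fused)
```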




Evaluation of an AVI Application at Nordel Inspection Station
Primary Author: Clark Lim, BITSAFS (clim@bitsafs.ca)
Co-Authors: Karim Ismail (karim.ismail@bitsafs.ca); Dr. Tarek Sayed (tsayed@civil.ubc.ca)

The current operation at Nordel Inspection Station, at the
south end of Alex Fraser Bridge, requires all commercial
vehicles to pass through the inspection station during
operating hours. Due to the access and geometric
configuration of the inspection station, this inspection
requirement can create heavy congestion within the area,
spilling onto the bridge at times. This study investigates
the application of an AVI system and program in which
approved carriers are allowed to bypass the inspection
station, subject to random inspection. A discrete-event
micro-simulation model was developed and calibrated to field
data. The model was applied to a number of scenarios,
resulting in a range of benefit-cost ratios that account for
time savings and reduced operating costs. Furthermore,
estimates of reduced GHGs were calculated, showing a range
of values of costs per tonne of emissions reduced.






