
August 6, 2015 at 10:00am - M.S. Thesis Defense - Sean Archer - Empirical Measurement and Model Validation of Infrared Spectra of Liquid-Contaminated Surfaces

Carlson Bldg. (76) – Room 3215 (DIRS Lab)
August 6, 2015 at 10:00am
Sean Archer
Empirical Measurement and Model Validation of Infrared Spectra of Liquid-Contaminated Surfaces
M.S. Thesis Defense
Abstract Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The goal of this thesis was to validate predicted infrared spectra of liquid-contaminated surfaces from a recently developed micro-scale bi-directional reflectance distribution function (BRDF) model, known as microDIRSIG. This micro-scale model had been developed to coincide with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model as a rigorous, ray-tracing, physics-based model capable of predicting the BRDF of geometric surfaces that are defined at micron to millimeter spatial resolution. The model offers an extension of conventional BRDF models by allowing contaminants to be added as geometric objects to a micro-facet surface. This model was validated through the use of empirical measurements. A total of 18 different substrate and contaminant combinations were measured and compared against modeled outputs. These substrates included wood and aluminum samples with three different paint finishes and varying levels of silicone-based oil (SF96) liquid contamination. The longwave infrared radiance for each substrate was measured with a Design & Prototypes (D&P) Fourier transform infrared spectrometer and a Physical Sciences Inc. Adaptive Infrared Imaging Spectroradiometer (AIRIS). The microDIRSIG outputs were compared against measurements qualitatively in both the emissivity and radiance domains. A temperature emissivity separation (TES) algorithm was applied to the measured radiance spectra for comparison with the microDIRSIG-predicted emissivity spectra. The model-predicted emissivity spectra were also forward modeled through a DIRSIG simulation for comparison with the measured radiance spectra. The results showed promising agreement for homogeneous surfaces with a liquid contamination that could be well characterized geometrically. Limitations arose for substrates that were modeled as homogeneous surfaces but had spatially varying artifacts due to uncertainties in the contaminant and surface interaction. There is a strong need for accurate physics-based modeling of liquid-contaminated surfaces, and this validation framework may be extended to include a wider array of samples for the more realistic natural surfaces that are often found in the real world.

August 6, 2015 at 2:00am - MS Thesis Defense - MATTHEW EDWARD MURPHY - Statistical Study of Interplanetary Coronal Mass Ejections with Strong Magnetic Fields

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
August 6, 2015 at 2:00am
MATTHEW EDWARD MURPHY
Statistical Study of Interplanetary Coronal Mass Ejections with Strong Magnetic Fields
MS Thesis Defense
Abstract Coronal Mass Ejections (CMEs) with strong magnetic fields are typically associated with significant solar energetic particle (SEP) events, high solar wind speed, and solar flare events. Successful prediction of the arrival time of a CME at Earth is required to maximize the time available for satellite, infrastructure, and space travel programs to take protective action against the coming flux of high-energy particles. It is known that the magnetic field strength of a CME is linked to the strength of a geomagnetic storm on Earth. Unfortunately, the correlations of strong magnetic field CMEs from the entire sun (especially from the far side or non-Earth-facing side of the sun) with SEP and flare events, solar source regions, and other relevant solar variables are not well known. New correlation studies using an artificial intelligence engine (Eureqa) were performed to study CME events with magnetic field strength readings over 30 nanoteslas (nT) from January 2010 to October 17, 2014. This thesis presents the results of this study, validates Eureqa by reproducing previously published results, and points the way towards future studies that might extend the lead time before such events strike valuable targets.

July 28, 2015 at 2:00am - Ph.D. Imaging Science Thesis Defense - THOMAS B KINSMAN - Semi-Supervised Pattern Recognition and Machine Learning for Eye-Tracking

Carlson Fishbowl 76-1275
July 28, 2015 at 2:00am
THOMAS B KINSMAN
Semi-Supervised Pattern Recognition and Machine Learning for Eye-Tracking
Ph.D. Imaging Science Thesis Defense
Abstract The first step in monitoring an observer’s eye gaze is identifying and locating the image of their pupils in video recordings of their eyes. Current systems work under a range of conditions, but fail in bright sunlight and rapidly varying illumination. A computer vision system was developed to assist with the recognition of the pupil in every frame of a video, in spite of the presence of strong first-surface reflections off the cornea. A modified Hough Circle detector was developed that incorporates the knowledge that the pupil is darker than the surrounding iris of the eye, and is able to detect imperfect circles, partial circles, and ellipses. As part of the processing, the image is modified to compensate for the distortion of the pupil caused by the out-of-plane rotation of the eye. A sophisticated noise cleaning technique was developed to mitigate first-surface reflections, enhance edge contrast, and reduce image flare. Semi-supervised human input and validation is used to train the algorithm. The final results are comparable to those achieved by a human analyst, but require only a tenth of the human interaction.
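
The detection step described above can be sketched with OpenCV's standard Hough circle transform plus a simple darkness test, as a stand-in for the modified detector developed in the thesis; the function name and all thresholds below are placeholder assumptions, not parameters from the actual system.

    import cv2
    import numpy as np

    def detect_pupil(gray_frame):
        """Find the darkest circular candidate in an 8-bit grayscale eye image.

        Illustrative baseline only: the thesis uses a *modified* Hough circle
        detector that also handles partial circles and ellipses; the thresholds
        below are placeholder assumptions.
        """
        blurred = cv2.medianBlur(gray_frame, 5)  # suppress small corneal glints
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
                                   param1=100, param2=30, minRadius=10, maxRadius=80)
        if circles is None:
            return None
        best = None
        for x, y, r in np.round(circles[0]).astype(int):
            mask = np.zeros_like(blurred)
            cv2.circle(mask, (x, y), r, 255, -1)
            mean_inside = cv2.mean(blurred, mask=mask)[0]
            # prefer the darkest interior: the pupil is darker than the iris
            if best is None or mean_inside < best[3]:
                best = (x, y, r, mean_inside)
        return best[:3]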

July 27, 2015 at 9:00am - Ph.D. Dissertation Defense - Paul Romanczyk - Extraction of Vegetation Biophysical Structure from Small-Footprint Full-Waveform Lidar Signals

Carlson Bldg. (76) - Room 2215
July 27, 2015 at 9:00am
Paul Romanczyk
Extraction of Vegetation Biophysical Structure from Small-Footprint Full-Waveform Lidar Signals
Ph.D. Dissertation Defense
Abstract: 


The National Ecological Observatory Network (NEON) is a continental-scale environmental monitoring initiative tasked with characterizing and understanding ecological phenomenology over a 30-year time frame. To support this mission, NEON collects ground truth measurements, such as organism counts and characterization, carbon flux measurements, etc. To spatially upscale these plot-based measurements, NEON developed an airborne observation platform (AOP), with a high-resolution visible camera, a next-generation AVIRIS imaging spectrometer, and a discrete and waveform digitizing light detection and ranging (lidar) system. While visible imaging, imaging spectroscopy, and discrete lidar are relatively mature technologies, our understanding of, and associated algorithm development for, small-footprint full-waveform lidar are still in the early stages of development. The primary aim of this work is to extend small-footprint full-waveform lidar capabilities to the assessment of vegetation biophysical structure.

In order to fully exploit waveform lidar capabilities, high-fidelity geometric and radiometric truth data are needed. Forests are structurally and spectrally complex, which makes collecting the necessary truth challenging, if not impossible. We utilize the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which provides an environment for radiometric simulations, in order to simulate waveform lidar signals. The first step of this research was to build a virtual forest stand based on Harvard Forest inventory data. This scene was used to assess the level of geometric fidelity necessary for small-footprint waveform lidar simulation in broadleaf forests. It was found that leaves have the largest influence on the backscattered signal and that there is little contribution to the signal from the leaf stems and twigs. From this knowledge, a number of additional realistic and abstract virtual “forest” scenes were created to aid studies assessing the ability of waveform lidar systems to extract biophysical phenomenology. We developed an additive model, based on these scenes, for correcting the attenuation in backscattered signal caused by the canopy. The attenuation-corrected waveform, when coupled with estimates of the leaf-level reflectance, provides a measure of the complex within-canopy forest structure. This work has implications for our improved understanding of complex waveform lidar signals in forest environments and, very importantly, takes the research community a significant step closer to assessing fine-scale horizontally- and vertically-explicit leaf area, a holy grail of forest ecology.

July 23, 2015 at 9:00am - Ph.D. Dissertation Defense - Rey Jan D. Garma - Image Quality Modeling and Characterization of Nyquist Sampled Framing Systems with Operational Considerations for Remote Sensing

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
July 23, 2015 at 9:00am
Rey Jan D. Garma
Image Quality Modeling and Characterization of Nyquist Sampled Framing Systems with Operational Considerations for Remote Sensing
Ph.D. Dissertation Defense
Abstract The trade between detector and optics performance is often conveyed through the Q metric, which is defined as the ratio between detector sampling frequency and optical cutoff frequency. Historically sensors have operated at Q≈1, which introduces aliasing but increases the system modulation transfer function (MTF) and signal-to-noise ratio (SNR). Though mathematically suboptimal, such designs have been operationally ideal when considering system parameters such as pointing stability and detector performance. Substantial advances in read noise and quantum efficiency of modern detectors may compensate for the negative aspects associated with balancing detector/optics performance, presenting an opportunity to revisit the potential for implementing Nyquist-sampled (Q≈2) sensors. A digital image chain simulation is developed and validated against a laboratory testbed using objective and subjective assessments. Objective assessments are accomplished by comparing the modeled MTF to measurements from slant-edge photographs. Subjective assessments are carried out by performing a psychophysical study where subjects are asked to rate simulation and testbed imagery against a ΔNIIRS scale with the aid of a marker-set. Using the validated model, additional test cases are simulated to study the effects of increased detector sampling on image quality with operational considerations. First, a factorial experiment using Q-sampling, pointing stability, integration time, and detector performance is conducted to measure the main effects and interactions of each on the response variable, ΔNIIRS. To assess the fidelity of current models, variants of the General Image Quality Equation (GIQE) are evaluated against subject-provided ratings and two modified GIQE versions are proposed. Finally, using the validated simulation and modified IQE, trades are conducted to ascertain the feasibility of implementing Q≈2 designs in future systems.
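
The Q trade described above follows directly from the quoted definition: Q is the ratio of the detector sampling frequency (one over the pixel pitch) to the optical cutoff frequency (one over the product of wavelength and f-number). A minimal sketch, with example numbers that are illustrative assumptions rather than values from the dissertation, is shown below; a result near 1 corresponds to the historical, aliased designs, while a value near 2 corresponds to Nyquist sampling.

    def q_metric(wavelength_m, f_number, pixel_pitch_m):
        """Q = detector sampling frequency / optical cutoff frequency.

        With f_sampling = 1 / pixel_pitch and f_cutoff = 1 / (wavelength * f_number),
        Q reduces to wavelength * f_number / pixel_pitch.
        """
        f_sampling = 1.0 / pixel_pitch_m
        f_cutoff = 1.0 / (wavelength_m * f_number)
        return f_sampling / f_cutoff

    # Hypothetical visible-band system: 550 nm light, f/10 optics, 5 micron pixels -> Q = 1.1
    print(q_metric(550e-9, 10.0, 5e-6))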

July 22, 2015 at 10:00am - Ph.D. Dissertation Defense - Tyler D. Carson - Signature Simulation and Characterization of Mixed Solids in the Visible and Thermal Regimes

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
July 22, 2015 at 10:00am
Tyler D. Carson
Signature Simulation and Characterization of Mixed Solids in the Visible and Thermal Regimes
Ph.D. Dissertation Defense
Abstract Solid target signatures vary due to geometry, chemical composition, and scene radiometry. Although radiative transfer models and function-fit physical models may describe certain targets in limited depth, the ability to incorporate all three of these signature variables is difficult. This work describes a method to simulate the transient signatures of mixed solids and soils by first considering scene geometry that was synthetically created using 3D physics engines. Through the assignment of spectral data from the Nonconventional Exploitation Factors Data System (NEFDS) and other libraries, synthetic scenes are represented as a chemical mixture of particles. Finally, first-principles radiometry is modeled using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. With DIRSIG, radiometric and sensing conditions were systematically manipulated to produce goniometric signatures. The implementation of this virtual goniometer allows users to examine how a target bidirectional reflectance distribution function (BRDF) and directional emissivity will change with geometry, composition, and illumination direction. The tool described provides geometry flexibility that is unmatched by radiative transfer models. It delivers a discrete method to avoid the significant cost of time and treasure associated with hardware-based goniometric data collections.

July 14, 2015 at 2:00am - Ph.D. Dissertation Defense - Oesa A. Weaver - An Analytical Framework for Assessing the Efficacy of Small Satellites in Performing Novel Imaging Missions

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
July 14, 2015 at 2:00am
Oesa A. Weaver
An Analytical Framework for Assessing the Efficacy of Small Satellites in Performing Novel Imaging Missions
Ph.D. Dissertation Defense
Abstract In the last two decades, small satellites have opened up the use of space to groups other than governments and large corporations, allowing for increased participation and experimentation. This democratization of space was enabled by improved technology, which allowed the miniaturization of components and reduction of overall cost, meaning many of the capabilities of larger satellites could be replicated at a fraction of the cost. The potential of these smaller satellites to replace or augment existing systems has led to an explosion of potential satellite and mission concepts, often with little rigorous study of whether the proposed satellite or mission is achievable or necessary. This work proposes an analytical framework to aid system designers in evaluating the ability of an existing concept or small satellite to perform a particular imaging mission, either replacing or augmenting existing capabilities. This framework was developed and then refined by application to the problem of using small satellites to perform a wide area search mission – a mission not possible with existing imaging satellites, but one that would add to current capabilities. Requirements for a wide area search mission were developed, along with a list of factors that would affect image quality and mission performance. Two existing small satellite concepts were evaluated for use by examining image quality from the systems, selecting an algorithm to perform the search function, and then assessing mission feasibility by applying the algorithm to simulated imagery. Finally, a notional constellation design was developed to assess the number of satellites required to perform the mission. It was found that a constellation of 480 CubeSats producing 4 m spatial resolution panchromatic imagery and employing an on-board processing algorithm would be sufficient to perform a wide area search mission.

June 26, 2015 at 10:00am - Ph.D. Dissertation Defense - David Kelbe - Forest structure from terrestrial laser scanning – in support of remote sensing

Carlson Bldg. 76 - Room 3215 (DIRS Lab)
June 26, 2015 at 10:00am
David Kelbe
Forest structure from terrestrial laser scanning – in support of remote sensing
Ph.D. Dissertation Defense
Advisor: Dr. Jan van Aardt
Abstract: 
Abstract Forests are an important part of the natural ecosystem, providing resources such as timber and fuel, performing services such as energy exchange and carbon storage, and presenting risks, such as fire damage and invasive species impacts. Improved characterization of forest structural attributes is desirable, as it could improve our understanding and management of these natural resources. Traditionally, the systematic collection of forest information related to stem volume and biomass – dubbed “forest inventory” – is achieved via relatively crude, readily-measured explanatory variables, such as tree height and stem diameter. Such field inventories are time-consuming, expensive, and coarse when compared to novel 3D measurement technologies. Remote sensing estimates, on the other hand, provide synoptic coverage, but often fail to capture the fine-scale structural variation of the forest environment. Terrestrial laser scanning (TLS) has demonstrated a potential to address these limitations, while offering the opportunity to support remote sensing efforts by providing spatially explicit ground-truth data for calibration/validation in forest environments. An additional benefit is the potential to extract realistic 3D forest models for use in simulation and visualization studies. However, despite this potential, operational use has remained limited due to the unsatisfactory performance characteristics vs. budgetary constraints of many end-users. To address this gap, my dissertation advanced affordable mobile laser scanning capabilities for operational forest structure assessment. We developed geometric reconstruction of forest structure from rapid-scan, low-resolution point cloud data, providing for automatic extraction of standard forest inventory metrics. To augment these results over larger areas, we designed a view-invariant feature descriptor to enable marker-free registration of TLS data pairs, without knowledge of the initial sensor pose. A graph-theory framework was then integrated to perform multi-view registration between a network of disconnected scans. This provided improved structural assessment at the plot level. Finally, a data mining approach was taken to assess plot-level canopy structure, which has important implications for our understanding of forest function. Outputs are being utilized to provide antecedent science data for NASA's HyspIRI mission and to support the National Ecological Observatory Network's (NEON) long-term environmental monitoring initiatives.
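
The graph-theory registration step lends itself to a short illustration: pairwise marker-free registrations become weighted edges, and each scan is brought into a common reference frame by composing transforms along the best-scoring path. The sketch below, using networkx and 4x4 homogeneous transforms, is an assumed stand-in rather than the dissertation's implementation.

    import networkx as nx
    import numpy as np

    def compose_to_reference(pairwise, n_scans, reference=0):
        """Chain pairwise scan-to-scan registrations into a common reference frame.

        pairwise: dict {(i, j): (T_ij, err)} where T_ij is a 4x4 homogeneous
        transform mapping scan j into scan i's frame and err is a registration
        quality score used as the edge weight (lower is better).
        """
        g = nx.Graph()
        for (i, j), (T, err) in pairwise.items():
            g.add_edge(i, j, T=T, err=err, to=i, frm=j)
        poses = {reference: np.eye(4)}
        for node in range(n_scans):
            if node == reference or not g.has_node(node) or not nx.has_path(g, reference, node):
                continue
            T = np.eye(4)
            path = nx.shortest_path(g, reference, node, weight="err")
            for a, b in zip(path[:-1], path[1:]):
                e = g[a][b]
                # respect the stored direction; invert when traversed the other way
                T_ab = e["T"] if (e["to"], e["frm"]) == (a, b) else np.linalg.inv(e["T"])
                T = T @ T_ab
            poses[node] = T   # maps scan `node` into the reference scan's frame
        return poses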

May 21, 2015 at 8:00am - Ph.D. Dissertation Defense - Katie N. Salvaggio - A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
May 21, 2015 at 8:00am
Katie N. Salvaggio
A Voxel-Based Approach for Imaging Voids in Three-Dimensional Point Clouds
Ph.D. Dissertation Defense
Abstract: 


Geographically accurate scene models have enormous potential beyond simple visualization, particularly with regard to automated scene generation.  In recent years, thanks to ever-increasing computational efficiencies, there has been significant growth in both the computer vision and photogrammetry communities pertaining to automatic scene reconstruction from multi-view imagery.  The result of these algorithms is a three-dimensional (3D) point cloud which can be used to obtain a final model using surface reconstruction techniques.  However, the fidelity of these point clouds has not been well studied, and voids often exist within the point cloud.  Voids exist in texturally flat areas that fail to generate features, as well as areas where multiple views were not obtained during collection, constant occlusion existed due to collection angles or overlapping scene geometry, or in regions that failed to triangulate accurately.  It may be possible to fill in small voids in the scene using surface reconstruction or hole-filling techniques, but this is not the case with larger voids, and attempting to reconstruct them using only the knowledge of the incomplete point cloud is neither accurate nor aesthetically pleasing.

A method is presented for identifying voids in point clouds using a voxel-based approach to partition the 3D space.  By using collection geometry and information derived from the point cloud, it is possible to detect unsampled voxels such that voids can be identified.  This analysis takes into account the location of the camera and the 3D points themselves to capitalize on the idea of free space, such that voxels that lie on the ray between the camera and point are devoid of obstruction, as a clear line of sight is a necessary requirement for reconstruction.  Using this approach, voxels are classified into three categories: occupied (contains points from the point cloud), free (rays from the camera to the point passed through the voxel), and unsampled (does not contain points and no rays passed through the area).  Voids in the voxel space are manifested as unsampled voxels.  A similar line-of-sight analysis can then be used to pinpoint locations at aircraft altitude at which the voids in the point clouds could theoretically be imaged.  This work is based on the assumption that inclusion of more images of the void areas in the 3D reconstruction process will reduce the number of voids in the point cloud that were a result of lack of coverage.  Voids resulting from texturally flat areas will not benefit from more imagery in the reconstruction process, and thus are identified and removed prior to the determination of future potential imaging locations.
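
A minimal sketch of the occupied/free/unsampled labeling described above, using a simple ray-stepping traversal over a regular voxel grid; the per-point camera pairing and the step size are simplifying assumptions, since the actual analysis uses the full collection geometry.

    import numpy as np

    UNSAMPLED, FREE, OCCUPIED = 0, 1, 2

    def classify_voxels(points, cameras, origin, voxel_size, grid_shape):
        """Label voxels as occupied, free, or unsampled from points and camera centers.

        points:  (N, 3) reconstructed 3D points
        cameras: (N, 3) camera center assumed to have observed the matching point
        """
        labels = np.full(grid_shape, UNSAMPLED, dtype=np.uint8)

        def to_index(xyz):
            idx = np.floor((xyz - origin) / voxel_size).astype(int)
            return tuple(idx) if np.all((idx >= 0) & (idx < np.array(grid_shape))) else None

        # occupied: voxels that contain reconstructed points
        for p in points:
            idx = to_index(p)
            if idx is not None:
                labels[idx] = OCCUPIED

        # free: voxels pierced by camera-to-point rays (clear line of sight)
        for cam, p in zip(cameras, points):
            direction = p - cam
            length = np.linalg.norm(direction)
            if length == 0:
                continue
            for t in np.arange(0.0, 1.0, voxel_size / (2.0 * length)):
                idx = to_index(cam + t * direction)
                if idx is not None and labels[idx] == UNSAMPLED:
                    labels[idx] = FREE

        return labels   # any voxel still UNSAMPLED marks a candidate void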

 

April 30, 2015 at 9:00am - Imaging Science MS Thesis Defense - DENGYU LIU - Efficient Space-Time Sampling with Pixel-wise Coded Exposure for High Speed Imaging

Carlson Fishbowl 76-1275
April 30, 2015 at 9:00am
DENGYU LIU
Efficient Space-Time Sampling with Pixel-wise Coded Exposure for High Speed Imaging
Imaging Science MS Thesis Defense
Abstract: 


Cameras face a fundamental tradeoff between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase in hardware costs. In this work, we propose techniques for sampling, representing, and reconstructing the space-time volume in order to overcome this tradeoff. Our approach has two important distinctions compared to previous works: (1) we achieve sparse representation of videos by learning an over-complete dictionary on video patches, and (2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate the components of our approach (the sampling function and the sparse representation) by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a Liquid Crystal on Silicon (LCoS) device. System characteristics such as field of view and Modulation Transfer Function (MTF) are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
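
The reconstruction step pairs a per-pixel exposure code with a learned over-complete dictionary. The sketch below shows a generic sparse-recovery step for a single space-time patch using orthogonal matching pursuit; the sensing model and the dictionary are placeholder assumptions standing in for the learned components and hardware-feasible codes described in the thesis.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def sensing_matrix(code):
        """Build S (n_pixels x n_voxels) from a binary exposure code of shape (T, H, W).

        Each coded-image pixel integrates the frames during which its code is 1.
        """
        T, H, W = code.shape
        n_pix = H * W
        S = np.zeros((n_pix, T * n_pix))
        for t in range(T):
            S[np.arange(n_pix), t * n_pix + np.arange(n_pix)] = code[t].ravel()
        return S

    def recover_patch(coded_patch, code, dictionary, n_nonzero=10):
        """Sparse recovery of one space-time patch: y = S D a, x = D a.

        dictionary: (T*H*W, n_atoms) over-complete dictionary of space-time atoms
        (learned in the thesis; any placeholder works for this sketch).
        """
        A = sensing_matrix(code) @ dictionary
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
        omp.fit(A, coded_patch.ravel())
        return (dictionary @ omp.coef_).reshape(code.shape)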

 

April 27, 2015 at 8:00am - Ph.D. Dissertation Defense - Madhurima Bandyopadhyay - Quantifying the urban forest environment using dense discrete return LiDAR and aerial color imagery for segmentation and object-level assessment

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
April 27, 2015 at 8:00am
Madhurima Bandyopadhyay
Quantifying the urban forest environment using dense discrete return LiDAR and aerial color imagery for segmentation and object-level assessment
Ph.D. Dissertation Defense
Abstract: 
The urban forest is becoming increasingly important in the contexts of urban green space, carbon sequestration and offsets, and socio-economic impacts. In addition to their aesthetic value, these green spaces remove airborne pollutants, preserve natural resources, and mitigate adverse climate change, among other benefits. A great deal of attention has recently been paid to urban forest management. However, the comprehensive monitoring of urban vegetation for carbon sequestration and storage is an under-explored research area. Often such assessment requires information at the individual tree level, necessitating the proper masking of vegetation from the built environment, as well as delineation of individual tree crowns. As an alternative to expensive and time-consuming manual surveys, remote sensing can be used effectively in characterizing the urban vegetation and man-made objects. 
Many studies in this field have made use of aerial and multispectral/hyperspectral imagery over cities. The emergence of light detection and ranging (LiDAR) technology has provided new impetus to the effort of extracting objects and characterizing their 3D attributes; LiDAR has been used successfully to model buildings and urban trees. However, challenges remain when using such structural information only, and researchers have investigated the use of fusion-based approaches that combine LiDAR and aerial imagery to extract objects, so that the complementary characteristics of the two modalities can be utilized. 
In this study, a fusion-based classification method was implemented between aerial color (RGB) imagery and co-registered LiDAR point clouds to classify urban vegetation and buildings from other urban classes/cover types. Structural as well as spectral features were used in the classification method, including height, flatness, and the distribution of normal vectors from LiDAR data, along with a non-calibrated LiDAR-based vegetation index derived from combining LiDAR intensity at 1064 nm with the red channel from the RGB imagery. This novel index was dubbed the LiDAR-infused difference vegetation index (LDVI). Classification results indicated good separation between buildings and vegetation, with an overall kappa coefficient of 85%.  
A multi-tiered delineation algorithm was designed to extract individual tree crowns from the tree clusters, and species-independent biomass models were developed using LiDAR-derived tree attributes in regression analysis. These LiDAR-based biomass assessments were conducted for individual trees, as well as for clusters of trees, in cases where proper delineation of individual trees was impossible. The LiDAR-derived biomass estimates were validated against allometry-based biomass estimates that were computed from field-measured tree data. The best biomass models for the tree clusters and the individual trees showed adjusted R2 values of 0.93 and 0.58, respectively. 
The results of this study showed that the fusion-based classification approach using LiDAR and aerial color (RGB) imagery is capable of producing good object detection accuracy. It was concluded that the LDVI can be used in vegetation detection and can act as a substitute for the normalized difference vegetation index (NDVI), where multiband imagery is not available. Furthermore, the utility of LiDAR for characterizing the urban forest and associated biomass was proven. This work could have significant impact on the rapid and accurate assessment of urban green spaces and associated monitoring and management.  
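
The abstract defines the LDVI only as a combination of the 1064 nm LiDAR intensity and the red channel; the sketch below assumes an NDVI-style normalized-difference form for illustration, with the exact formulation and any thresholds left to the dissertation.

    import numpy as np

    def ldvi(lidar_intensity_1064, red, eps=1e-6):
        """Assumed NDVI-style form of the LiDAR-infused difference vegetation index:
        the uncalibrated 1064 nm LiDAR intensity stands in for the NIR band."""
        i = lidar_intensity_1064.astype(float)
        r = red.astype(float)
        return (i - r) / (i + r + eps)

    # Vegetation mask from a hypothetical threshold; tuning would be data-dependent.
    # veg_mask = ldvi(intensity_raster, rgb_image[..., 0]) > 0.2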

April 1, 2015 at 12:00pm - Ph.D. Dissertation Defense - Amanda K. Ziemann - A manifold learning approach to target detection in high-resolution hyperspectral imagery

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
April 1, 2015 at 12:00pm
Amanda K. Ziemann
A manifold learning approach to target detection in high-resolution hyperspectral imagery
Ph.D. Dissertation Defense
Abstract: 


Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying “targets” such as vehicles, buildings, or boats. Sensors that process hyperspectral images (HSI) provide the high-dimensional spectral information necessary to perform such analyses. However, research has shown that for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of non-linear manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data when implementing traditional target detection approaches, and their limitations are well-documented. Here, we present an approach to target detection in HSI that is instead based on a graph theory model of the data and a manifold learning transformation, thereby avoiding these restrictive assumptions. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, transformed manifold coordinates. Then, target detection is performed in the manifold space. Target detection results will be shown using laboratory-measured, field-measured, and in-scene target spectra across multiple hyperspectral data sets.
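
The pipeline above (an adaptive graph on the data, an LLE-style embedding, and detection in the manifold coordinates) can be gestured at with the standard locally linear embedding in scikit-learn. In this sketch the adaptive graph construction and the artificially induced target manifold, which are the dissertation's contributions, are reduced to simply appending target spectra before embedding; treat it as an illustration of the idea, not the method itself.

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    def manifold_target_scores(background_pixels, target_spectra,
                               n_neighbors=20, n_components=10):
        """Embed background pixels with injected target spectra, then score pixels
        by distance to the target cluster in the manifold coordinates.

        background_pixels: (N, d) hyperspectral pixels; target_spectra: (K, d).
        Standard LLE is used here as a stand-in for the adaptive LLE in the thesis.
        """
        X = np.vstack([background_pixels, target_spectra])
        lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
        Y = lle.fit_transform(X)
        y_bg, y_tgt = Y[:len(background_pixels)], Y[len(background_pixels):]
        target_center = y_tgt.mean(axis=0)
        # smaller distance -> more target-like in the transformed space
        return np.linalg.norm(y_bg - target_center, axis=1)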

 

January 22, 2015 at 2:00am - M.S. Thesis Defense - XINGCHAO YU - Studies of Gas Absorption In Infrared Spectra of Carbon-Rich AGB Stars

76-1275
January 22, 2015 at 2:00am
XINGCHAO YU
Studies of Gas Absorption In Infrared Spectra of Carbon-Rich AGB Stars
M.S. Thesis Defense

M.S. Thesis Defense

Xingchao YU

Studies of Gas Absorption In Infrared Spectra of Carbon-Rich AGB Stars

 

Advisor: Dr. Joel Kastner & Dr. Ben Sargent

 

January 22nd, 2:00pm

Carlson Auditorium 76-1275

Abstract: 


 

An asymptotic giant branch (AGB) star is a dying, Sun-like star that actively expels its mass into an envelope around the star, forming a circumstellar shell.  Infrared spectra can reveal the composition of the material within this shell, informing studies of the recycling of the products of stellar nuclear processing in the Universe. In this study, we aim to identify how the differing metallicities of our own, relatively metal-rich Milky Way galaxy and the nearby, lower-metallicity Large Magellanic Cloud (LMC) galaxy affect gas composition in the circumstellar shells of carbon-rich AGB stars.

 

Radiative transfer models are created to simulate spectra of 4 carbon-rich AGB stars chosen from the Milky Way and 4 chosen from the LMC.  Different gas species, whose model spectra are computed using line lists obtained from the HITRAN database, are added to the models, including C2H2, HCN and CS. By comparing to spectra obtained using the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope (for the LMC) and to spectra obtained using the Short Wavelength Spectrometer (SWS) on board the Infrared Space Observatory (ISO; for the Milky Way), we determine basic physical characteristics, such as gas temperature and shell radius, for each star, as well as the relative amounts of gas species for each star, by matching models to the observed spectra.

 

Results confirm that infrared spectra of Milky Way AGB stars typically suggest more than one of the molecules C2H2, HCN, and CS in their shells, whereas infrared spectra of LMC stars only suggest the presence of C2H2. This suggests a correspondence between metallicity and the abundances of specific gas species in circumstellar shells. Future work includes developing a more accurate radiative transfer model, using and possibly developing more accurate line lists, and modeling more stars, to confirm and improve upon the present results.

December 16, 2014 at 10:00am - MS Thesis Defense - CHAO ZHANG - Optical simulation of terahertz antenna using finite difference time domain method

Carlson 76-1275
December 16, 2014 at 10:00am
CHAO ZHANG
Optical simulation of terahertz antenna using finite difference time domain method
MS Thesis Defense

Chester F Carlson Center for Imaging Science

Advisor: Dr. Zoran Ninkov

  

Abstract: 


 

Terahertz science is a promising and rapidly developing research area.  However, solid-state terahertz detectors of high performance are still needed. An antenna is needed within each pixel of these detectors to couple more of the incident radiation into the detector. In this thesis, a software package called Lumerical FDTD Solutions is used to optimize the terahertz antenna design. The ultimate goal is to design broadband antennas that work efficiently over desired frequency bands. The transmission/absorption characteristics of various bowtie antennas were modeled using the software. For absorption modeling, an equivalent resistor was added to load the antenna and absorb the terahertz energy.  The effects of various parameters, including geometrical shape, boundary conditions, and material index, were considered. A fat bowtie was chosen as the optimum design for a 215 GHz antenna. Optimization was carried out to check how the gap, slot, and distance between metal contacts would affect the performance of the antenna. A transmission experiment was designed to verify the validity of these simulations using a 188 GHz source.  Finally, some tests of the angular response of the silicon/air interface and the dipole antenna were performed, in order to ascertain the efficiency of coupling between the optical telescope used to collect the THz radiation and the antenna/detector combination.

 

 

 

December 5, 2014 at 9:00am - Ph.D. Thesis Defense - Bin Chen - Multispectral Image Road Extraction Based Upon Automated Map Conflation

76-1275
December 5, 2014 at 9:00am
Bin Chen
Multispectral Image Road Extraction Based Upon Automated Map Conflation
Ph.D. Thesis Defense
Abstract: 
 
Road network extraction from remotely sensed imagery enables many important and diverse applications such as vehicle tracking, drone navigation, and intelligent transportation studies. There are, however, a number of challenges to road detection from an image. Road pavement material, width, direction, and topology vary across a scene. Complete or partial occlusions caused by nearby buildings, trees, and the shadows cast by them, make maintaining road connectivity difficult. The problems posed by occlusions are exacerbated with the increasing use of oblique imagery from aerial and satellite platforms. Further, common objects such as rooftops and parking lots are made of materials similar or identical to road pavements. This problem of common materials is a classic case of a single land cover material existing for different land use scenarios. 
This work addresses these problems in road extraction from geo-referenced imagery by leveraging the OpenStreetMap digital road map to guide image-based road extraction. The crowd-sourced cartography has the advantage of worldwide coverage that is constantly updated. The derived road vectors follow only roads and so can serve to guide image-based road extraction with minimal confusion from occlusions and changes in road material. On the other hand, the vector road map has no information on road widths, and misalignments between the vector map and the geo-referenced image are small but nonsystematic. Properly correcting misalignment between two geospatial datasets, also known as map conflation, is an essential step. 
A generic framework requiring minimal human intervention is described for multispectral image road extraction and automatic road map conflation. The approach relies on generating two road features: a binary mask and a corresponding curvilinear image. A method for generating the binary road mask from the image by applying a spectral measure is presented. The spectral measure, called the anisotropy-tunable distance (ATD), differs from conventional measures and is created to account for both changes of spectral direction and spectral magnitude in a unified fashion. The ATD measure is particularly suitable for differentiating urban targets such as roads and building rooftops. The curvilinear image provides estimates of the width and orientation of potential road segments. Road vectors derived from OpenStreetMap are then conflated to image road features by applying junction matching and intermediate point matching, followed by refinement with mean-shift clustering and morphological processing to produce a road mask with piecewise width estimates.  
The proposed approach is tested on a set of challenging, large, and diverse image data sets and the performance accuracy is assessed. The method is effective for road detection and width estimation of roads, even in challenging scenarios when complete occlusion occurs.
 

November 24, 2014 at 2:00am - Ph.D. Dissertation Defense - Jiangqin Sun - Temporal signature modeling and analysis

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
November 24, 2014 at 2:00am
Jiangqin Sun
Temporal signature modeling and analysis
Ph.D. Dissertation Defense
Abstract: 
A vast amount of digital satellite and aerial imagery is collected over time, which calls for techniques to extract useful high-level information, such as recognizable events. One part of this thesis proposes a framework for streaming analysis of time series data, which can recognize events without supervision and memorize them by building temporal contexts. The memorized historical data is then used to predict the future and detect anomalies. A new incremental clustering method is proposed to recognize events without training. A memorization method of double localization, including relative and absolute localization, is proposed to model the temporal context. Finally, the predictive model is built based on the method of memorization. The “Edinburgh Pedestrian Dataset”, which offers about 1000 observed trajectories of pedestrians detected in camera images each working day for several months, is used as an example to illustrate the framework.
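
The streaming recognition step can be illustrated with a generic leader-style incremental clusterer: each new observation joins the nearest existing cluster if it is close enough, otherwise it starts a new one. This is a sketch of unsupervised incremental clustering in general, not the specific method proposed in the thesis.

    import numpy as np

    class IncrementalClusterer:
        """Leader-style online clustering: no training pass; clusters grow with the stream."""

        def __init__(self, threshold):
            self.threshold = threshold
            self.centers = []   # running cluster centroids
            self.counts = []

        def update(self, x):
            """Assign one streaming observation to a cluster, creating one if needed."""
            x = np.asarray(x, dtype=float)
            if self.centers:
                dists = [np.linalg.norm(x - c) for c in self.centers]
                k = int(np.argmin(dists))
                if dists[k] < self.threshold:
                    self.counts[k] += 1
                    # incremental mean keeps the centroid current without re-clustering
                    self.centers[k] += (x - self.centers[k]) / self.counts[k]
                    return k
            self.centers.append(x.copy())
            self.counts.append(1)
            return len(self.centers) - 1
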
Although a large amount of image data is captured, most of it is not available to the public. The other part of this thesis developed a method of generating spatial-spectral-temporal synthetic images by enhancing the capacity of a current tool called DIRSIG (Digital Imaging and Remote Sensing Image Generation). Currently, DIRSIG can only model limited temporal signatures. In order to observe general temporal changes in a process within the scene, a process model, which links the observable signatures of interest temporally, should be developed and incorporated into DIRSIG. The sub-process models fall into two types: one drives the properties of each facet of an object as they change over time, and the other drives the geometric location of an object in the scene as a function of time. Two example process models are used to show how process models can be incorporated into DIRSIG. 

October 22, 2014 at 10:00am - Ph.D. Thesis Defense - Monica J. Cook - Atmospheric Compensation for a Landsat Land Surface Temperature Product

Carlson Bldg. (76) - Room 3215 (DIRS Lab)
October 22, 2014 at 10:00am
Monica J. Cook
Atmospheric Compensation for a Landsat Land Surface Temperature Product
Ph.D. Thesis Defense
CHESTER F. CARLSON Center for Imaging Science
Ph.D. Thesis Defense
 
Monica J. Cook
Atmospheric Compensation for a Landsat Land Surface Temperature Product
Advisor: Dr. John R. Schott
 
Wednesday, October 22nd, 2014, 10:00am
Carlson Bldg. (76) - Room 3215 (DIRS Lab)
 
Abstract: 
The Landsat series of satellites provides the longest set of continuously acquired moderate resolution multispectral satellite imagery collected on a single maintained family of instruments.  The data are very attractive because the entire archive has been radiometrically calibrated and characterized so that the sensor-reaching radiance values are well known.  Because of the spatial and temporal coverage provided by Landsat, it is an intriguing candidate for a land surface temperature (LST) product.  The entire archive has been calibrated, but effective spectral radiance values are not intuitively applied, so this dataset has not been utilized to its fullest potential.  Land surface temperature is an important earth system data record for a number of fields including numerical weather prediction, climate research, and various agricultural applications.  The Landsat LST product will make an already existing, but largely untapped, dataset truly useful to the remote sensing community.
 
Using the Landsat LWIR thermal band, LST can be derived with a well-characterized atmosphere and known surface emissivity.  This work focuses on atmospheric compensation at each Landsat pixel, which will later be used with ASTER derived emissivity data from JPL to perform LST retrievals.  
 
We develop a method to automatically generate the effective in-band radiative transfer parameters (transmission, upwelled radiance, and downwelled radiance) for each pixel by using the North American Regional Reanalysis dataset as atmospheric profile data in MODTRAN.  Due to differences in temporal and spatial sampling and computational limitations, a number of interpolations are required.  We validate our methodology by comparing our predicted apparent temperatures to ground truth water temperatures derived from buoy data at a number of validation sites around the continental United States.  Initial results show a mean error of -0.267 K and a standard deviation of 0.900 K for cloud-free scenes in the validation dataset.  Based on the same validation dataset, we explored multiple options for developing a confidence metric for the product.  Our current best expectation for a confidence metric for the final product involves categorizing each pixel as cloudy, clouds in the vicinity, or cloud free, based on the incorporation of a Landsat cloud product.  The mean and standard deviation of the errors associated with each category will be included as a quantitative basis for each category.
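
With per-pixel transmission, upwelled radiance, and downwelled radiance in hand, land surface temperature follows from inverting the standard single-band radiative transfer relation L_obs = tau * (eps * B(T) + (1 - eps) * L_down) + L_up. A minimal sketch is given below, assuming a single representative wavelength and a known emissivity; the operational product instead uses MODTRAN-derived band-effective quantities and, as noted above, ASTER-derived emissivity.

    import numpy as np

    H = 6.626e-34   # Planck constant [J s]
    C = 2.998e8     # speed of light [m/s]
    K = 1.381e-23   # Boltzmann constant [J/K]

    def inverse_planck(L, wavelength):
        """Temperature [K] whose blackbody spectral radiance at `wavelength` [m] equals L."""
        return (H * C / (wavelength * K)) / np.log(1.0 + 2.0 * H * C**2 / (wavelength**5 * L))

    def surface_temperature(L_obs, tau, L_up, L_down, emissivity, wavelength=10.9e-6):
        """Invert L_obs = tau * (eps * B(T) + (1 - eps) * L_down) + L_up for T.

        A single representative wavelength is an assumption of this sketch; the
        operational product works with band-effective parameters instead.
        """
        L_surface = (L_obs - L_up) / tau                     # remove the path terms
        L_emitted = L_surface - (1.0 - emissivity) * L_down  # remove reflected downwelling
        return inverse_planck(L_emitted / emissivity, wavelength)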
 
To support future work we explored the extension to a global dataset.  Using a small sample of scenes, we justify moving forward with the use of the MERRA product for a global dataset by comparing to ground truth, NARR results, and another global source.  We also consider possible improvements to the atmospheric compensation by more closely exploring the column water vapor contributions to error.  Finally, we acknowledge the need for a more formal incorporation of the cloud product, and possibly improvements, in order to finalize the confidence metric for the atmospheric compensation component of the product.

October 21, 2014 at 3:15pm - Master’s Thesis Defense - Ming Li - Building Model Reconstruction from Point Clouds Derived from Oblique Imagery

Carlson Bldg. (76) - room 3215 (DIRS LAB)
October 21, 2014 at 3:15pm
Ming Li
Building Model Reconstruction from Point Clouds Derived from Oblique Imagery
Master’s Thesis Defense
Abstract: 
Chester F Carlson Center for Imaging Science
Master’s Thesis Defense
 
Ming Li
Building Model Reconstruction from Point Clouds Derived from
Oblique Imagery
Advisor: Dr. John Kerekes
 
Tuesday, October 21st 2014, 3:15 PM
Carlson Bldg. (76) - room 3215 (DIRS LAB)
 
 
 
Abstract
 
The increasing availability of high-resolution airborne imagery improves the accuracy of building modeling in urban scenes. This accuracy offers a strong 3D reference for disaster recovery and asset evaluation applications. With the advantage of having more façade information, this thesis addresses building modeling from airborne oblique imagery.
 
Building on previous work, this thesis presents two schemes to construct building models from point clouds derived from oblique imagery. Under the assumption that buildings are cubic in shape, the first scheme consists of three steps. Plane estimation aims at identifying dominant surfaces; edge extraction helps in detecting and simplifying in-plane edges in each identified surface; model construction assembles the surfaces and edges together and produces a model in a universally accepted format. We find this scheme works well with complete point clouds covering all sides of the building. Another method, based on a minimum bounding box, is proposed to handle the complications that arise when the point clouds do not represent all sides of the building.
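
As an illustration of the plane-estimation step, the dominant surfaces can be peeled off one at a time with a RANSAC plane-segmentation loop; the sketch below uses Open3D's segment_plane as a convenient stand-in, with placeholder thresholds rather than the parameters used in the thesis.

    import numpy as np
    import open3d as o3d

    def dominant_planes(points, max_planes=6, min_inliers=200, dist_thresh=0.05):
        """Iteratively extract dominant planar surfaces (e.g., roof and facades).

        points: (N, 3) array from an image-derived point cloud. Thresholds are
        placeholder assumptions; returns a list of (plane_model, inlier_points).
        """
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(points)
        planes = []
        for _ in range(max_planes):
            if len(pcd.points) < min_inliers:
                break
            model, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                               ransac_n=3, num_iterations=1000)
            if len(inliers) < min_inliers:
                break
            planes.append((model, np.asarray(pcd.select_by_index(inliers).points)))
            pcd = pcd.select_by_index(inliers, invert=True)  # remove inliers, continue
        return planes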
 
The schemes are tested on point cloud data sets from multiple sources, including both image-derived and LiDAR-derived point clouds. The surface-based approach and the minimum-bounding-box approach both show the capability of reconstructing models, though each has its disadvantages. The limitations of these approaches and recommendations for future work are also discussed.
 

October 14, 2014 at 2:00am - Master’s Thesis Defense - VIRAJ R. ADDURU - Ultrasound Guided Robot for Human Liver Biopsy using High Intensity Focused Ultrasound for Hemostasis

Carlson Bldg. (76) - room 3215 (DIRS LAB)
October 14, 2014 at 2:00am
VIRAJ R. ADDURU
Ultrasound Guided Robot for Human Liver Biopsy using High Intensity Focused Ultrasound for Hemostasis
Master’s Thesis Defense

Chester F Carlson Center for Imaging Science

Master’s Thesis Defense

 

Viraj R. Adduru

Ultrasound Guided Robot for Human Liver Biopsy using High Intensity Focused Ultrasound for Hemostasis

Advisor: Dr. Rao Navalgund

 

Tuesday, October 14th 2014, 2:00 PM

Carlson Bldg. (76) - room 3215 (DIRS LAB)

 

Abstract: 

 


 

Percutaneous liver biopsy is the gold standard among clinicians' tools to diagnose and guide subsequent therapy for liver disease. Ultrasound image guidance is being increasingly used to reduce associated procedural risks, but post-biopsy complications still persist. The major complication is hemorrhage, which is highly unpredictable and may sometimes lead to death. Non-invasive methods to stop bleeding exist, such as electro-cautery, microwave, RF, and High Intensity Focused Ultrasound (HIFU). All of these methods except HIFU require direct exposure of the needle puncture site for hemostasis.

To reduce human error in focusing HIFU, we have designed and developed an ultrasound-guided prototype robot for accurate targeting. The robotic system performs percutaneous needle biopsy, and a 7.5 cm focal length HIFU transducer is fired at the puncture point when the needle tip retracts to the liver surface after sample collection. The robot has 4 degrees of freedom (DOF) for biopsy needle insertion, HIFU positioning, needle angle alignment, and US probe image plane orientation. As the needle puncture point is always in the needle path, mechanically constraining the HIFU to focus on the needle reduced its functionality significantly. Two mini C-arms are designed for needle angle alignment and US probe image plane orientation. This reduced the contact footprint of the robot over the patient, providing greater dexterity for positioning the robot. The robot was validated for HIFU hemostasis by a series of experiments on chicken breasts.

HIFU-initiated hemorrhage control with robotic biopsy ensures arrest of post-biopsy hemorrhage and decreases patient anxiety, hospital stay, morbidity, procedure time, and cost. The approach can also be extended to other organs such as the kidneys and lungs.

This research opens greater scope for further size reduction of the robot and increased automation, making it a physician-friendly tool for eventual clinical use.

August 5, 2014 at 10:00am - M.S. Thesis Defense - Colin M. Fink - Glint Avoidance and Removal in the Maritime Environment

Carlson Bldg. (76) – Room 2155
August 5, 2014 at 10:00am
Colin M. Fink
Glint Avoidance and Removal in the Maritime Environment
M.S. Thesis Defense

Advisor: Dr. Michael G. Gartley

Abstract: 

In-scene glint greatly affects the usability of maritime imagery, and several glint removal algorithms have been developed that work well in some situations.  However, glint removal algorithms produce several unique artifacts when applied to very high resolution systems, particularly those with temporally offset bands.  The optimal solution to avoid these artifacts is to avoid imaging in areas of high glint.  The Glint Avoidance Tool (GAT) was developed to avoid glint conditions and provide a measure of parameter detectability.  This work recreates the GAT using HydroLight as a validation of the work done by Dr. Adam Goodenough.  Because avoiding glint is not always possible, this research concentrates on the impact of glint and residual artifacts using RIT's Digital Imaging and Remote Sensing Image Generation (DIRSIG) dynamic wave model and HydroLight back-end to create accurate Case-I synthetic imagery.  The synthetic imagery was used to analyze the impact of glint on automated anomaly detection, glint removal, and the development of a new glint compensation technique for sensors with temporally offset bands.
