User cooperation via half-duplex relaying yields up to a 40% increase in throughput over both direct and multi-hop communication. To realize this gain, we propose an LDPC coding scheme for the half-duplex relay channel. Our scheme mimics the information-theoretically optimal random coding scheme for decode-and-forward relaying. An important advantage of our coding scheme is that it is composed entirely of single-user LDPC codes, despite the fact that the relay channel combines a broadcast channel and a multiple-access channel. Achieving the relay gain using single-user LDPC codes introduces new challenges in code profile optimization. We address these challenges by including additional constraints in the density evolution algorithm. We also modify the Gaussian approximation of density evolution to obtain good profiles with much less computation. The performance of our codes is less than 0.4 dB away from the theoretical limit.
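Density evolution is easiest to see on the binary erasure channel, where it reduces to a one-dimensional recursion over the degree profiles; a minimal sketch is below (illustrative background only — the scheme above tracks Gaussian-approximated densities and adds relay-specific profile constraints, neither of which is shown here).

```python
def evolve(eps, lam, rho, iters=10000, tol=1e-12):
    """Track the erasure probability x_l of an LDPC ensemble on the BEC:
    x_{l+1} = eps * lambda(1 - rho(1 - x_l)), where lam and rho map
    degrees to edge-perspective fractions."""
    x = eps
    for _ in range(iters):
        y = 1 - sum(r * (1 - x) ** (j - 1) for j, r in rho.items())
        x_new = eps * sum(l * y ** (i - 1) for i, l in lam.items())
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def threshold(lam, rho, steps=30):
    """Bisect for the largest channel erasure rate eps that still decodes."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = (lo + hi) / 2
        if evolve(mid, lam, rho) < 1e-6:
            lo = mid
        else:
            hi = mid
    return lo

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5.
lam, rho = {3: 1.0}, {6: 1.0}
print(round(threshold(lam, rho), 3))   # 0.429 (the known threshold is ~0.4294)
```

Profile optimization searches over `lam` and `rho` to maximize this threshold subject to rate and (here) relay constraints.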
Typical wireless communication systems rely on receiver-side channel estimation to compensate for the effects of the fading channel. We demonstrate that providing a channel estimate to the transmitter dramatically improves outage performance, making transmitter adaptation an essential design criterion for emerging high-performance systems. Though some existing systems do include power control, these mechanisms are generally rudimentary and fail to realize the full potential of transmitter adaptation. The transmitter has two methods for obtaining channel state information (CSI): training and feedback. Both methods require the allocation of system resources to estimating the channel. We show that, in the absence of any receiver training, minimal resources devoted to transmitter training yield exponentially decaying error probability. In contrast, systems that only provide CSI to the receiver can do no better than a linear decrease in error probability with increasing SNR. Thus, for equal resource expenditure, transmitter training is much more effective than receiver training. Partial feedback of the receiver's channel estimate produces polynomial decay whose order grows with the number of feedback bits. This result is similar to the receiver-only CSI case, but the slope of the error probability curve (the diversity order) is significantly better. Based on these results, we conclude that new physical layer designs must provide channel state information to the transmitter in order to achieve high data rates with high reliability.
This poster presents the design of a custom platform for research in advanced wireless algorithms and applications. The platform consists of both custom hardware and FPGA implementations of key communications blocks. The hardware consists of FPGA-based processing boards coupled to wideband radios and other I/O interfaces; the algorithm implementations already include a flexible OFDM physical layer. Both the hardware design and algorithm implementations will be freely available to academic researchers in hopes of developing a widely disseminated, highly capable platform for wireless research.
Rapid prototyping offers verification of cutting-edge algorithms in a realistic setting before their deployment. Most high-speed wireless communication algorithms exhibit a high degree of complexity and parallelism. The programmability and inherent parallelism provided by Field Programmable Gate Arrays (FPGAs) make them well suited for prototyping this class of algorithms. Xilinx System Generator is a bit-true and cycle-true simulation tool that allows designs to be composed from a variety of building blocks. As an application example, we present an FPGA-based SISO OFDM transceiver implementation with adaptive transmission.
Classical coding schemes such as channel coding and Slepian-Wolf coding assume known statistics and rely on asymptotically long sequences. However, in practice the statistics are unknown, and the input sequences are of finite length n. In this finite regime, we must allow a non-zero probability of coding error and also pay a penalty in the rate. The penalty manifests itself as redundancy in source coding and a gap to capacity in channel coding. Our contributions are two-fold. First, we characterize the penalty due to universality and show that it is proportional to sqrt(n) bits. We develop a variable-rate coding scheme with feedback for which the penalty achieves this limit. Prior art shows that even coding schemes with known statistics incur a penalty on the order of sqrt(n) bits; hence no universal scheme can achieve a lower penalty. Second, we derive the penalty incurred in variable-rate universal coding when we restrict the number of rounds of feedback. We show that 2-3 rounds of feedback are sufficient in practice. This result is valuable for practical communication systems.
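The scaling described above can be summarized as follows (a hedged paraphrase of the abstract's claim; H(P) denotes the source entropy rate and c a scheme-dependent constant, both our notation rather than the poster's):

```latex
% Total expected description length of a variable-rate universal
% scheme with feedback, per the sqrt(n)-bit penalty stated above:
L_n \approx n\,H(P) + c\sqrt{n},
\qquad
\frac{L_n}{n} - H(P) = \Theta\!\left(\frac{1}{\sqrt{n}}\right)
\quad \text{(per-symbol redundancy).}
```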
Multiple frequency channels have been utilized primarily for improving the aggregate throughput of a multi-hop wireless network. However, it is not evident that channelization can address the starvation phenomena that arise when CSMA random access protocols are employed in a multi-hop environment. We present a distributed medium access protocol called Asynchronous Multi-channel Coordination Protocol (AMCP) that not only improves aggregate throughput but, more importantly, mitigates starvation by providing per-flow minimum rate guarantees. We first identify the root cause of starvation as a generic coordination problem in which flows experience extended sensing periods and/or high collision probability due to misaligned interfering transmissions. Simply scheduling such transmissions on orthogonal channels does not necessarily alleviate the problem. Instead, we show that multiple channels introduce new coordination problems that may lead to further performance degradation. AMCP resolves the above coordination problems by exchanging messages over a dedicated control channel in an asynchronous manner. Since a common control channel may become a performance bottleneck, we derive the maximum number of data channels that can be driven by our protocol. We also model its channel skipping pattern as an aggregate Poisson process to derive an analytical lower bound on the throughput of any flow in an arbitrary topology. Our experiments demonstrate that the per-flow throughput achieved by AMCP approaches the analytical lower bound in a heavily congested contention region, while in arbitrary multi-hop topologies it is much higher due to spatial reuse.
Multihop wireless mesh networks can provide Internet access over a wide area with minimal infrastructure expenditure. In this work, we investigate key issues involved in deploying such networks including individual link characteristics, multihop application layer performance, and network-wide reliability and throughput. We use extensive measurements from a two-tier urban scenario to parameterize the propagation environment and correlate received signal strength with link and application performance. We next measure the behavior of independent and simultaneous multihop flows in a parking lot topology, in which all traffic originates or terminates at the wireline Internet entry point. We examine a number of traffic scenarios and characterize how the achievable rates fall off with path length and contention. In our topology study, we incorporate our single link and multihop findings into a computational model to guide network deployment decisions. Finally, we explore the impact of node density, the ratio of wired-to-wireless mesh nodes, random wire locations, and randomness in mesh node placement.
Despite the remarkable advantages of wavelets for analyzing and processing one-dimensional signals, a surprising realization of the past few years is their inability to capitalize in a similar way on higher-dimensional signals containing singularities. The confounding aspect of these singularities is geometry: the singularities are often localized along smooth, lower-dimensional manifolds with strong geometric structure. Examples of geometric singularities include edges in images, object motion in video, and interfaces in volumetric data. This poster will overview some of our recent progress in dealing with multi-dimensional, multiscale geometry; in particular, we introduce new hypercomplex wavelet transforms with applications in image analysis and seismic processing.
Natural signals often contain structure that allows them to be compressed, or represented in a concise manner. Compression is critical in applications such as sensor networks, where communication consumes much of the available power. Interestingly, in such scenarios, the multiple signals observed by the sensors will also share a certain inter-signal correlation, which should permit efficient joint compression if the sensors were able to collaborate. Unfortunately, this extra collaboration would only consume additional communication resources and thus more power.
We present a new framework for distributed source coding, which we term Distributed Compressed Sensing (DCS). In our setting, each sensor computes linear projections of its signal onto random subspaces and then independently transmits these coefficients to a central receiver. A joint decoder then reconstructs the ensemble of signals via optimization techniques. Not only is this strategy collaboration-free, but it can be applied in any setting where signals are sparse in some basis (which could even be unknown to the sensors). The emerging field of Compressed Sensing (CS) provides a precedent for recovery from this incoherent information; we extend the CS framework to joint recovery of multiple signals by exploiting their joint sparsity structure. In some settings, our results are the best possible -- each sensor transmits no more information than would be required using collaboration.
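As a toy illustration of recovery from random projections, here is a single-signal CS sketch using orthogonal matching pursuit, one standard recovery technique (all names and parameters are illustrative; DCS itself performs joint recovery across multiple sensors by exploiting joint sparsity, which this single-signal sketch omits).

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(idx)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
N, M, K = 64, 32, 4                              # length, measurements, sparsity
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random projections
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = Phi @ x                                      # the "sensor's" transmitted data
x_hat = omp(Phi, y, K)
print(np.linalg.norm(x - x_hat))                 # near zero when recovery succeeds
```

Note that the sensor side is just a matrix-vector product; all the work happens at the joint decoder.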
Many of today's most important signal processing problems demand fault-tolerant, data-driven distributed processing of sensed information under very limiting constraints. Biological sensorineural systems have already evolved a solution to this problem: Using attention, neural systems focus their processing on "important" events occurring in the sensory scene. Attention acts through a combination of adaptation and feedback, which allows resources to be allocated dynamically as new events arise and old ones disappear. In our research, we are trying to characterize mathematically the neural mechanisms of attention while simultaneously applying those principles to designing new distributed information processing systems.
One of the driving challenges of wireless sensor networks is to design, analyze, and implement distributed signal processing algorithms in a resource-scarce environment. It is a challenge, of course, that requires a deep and broad understanding of all aspects of the problem, from understanding how to decentralize known algorithms to understanding the interplay between these algorithms and the imposed constraints. But while much progress has been made in designing highly constrained distributed algorithms, comparatively little attention has been paid to the fundamental problem of understanding the impact of a distributed system's architecture (i.e., the pattern of communication links between nodes) on the performance of signal processing algorithms. Traditionally, communication considerations largely dictated network architectures. The presence or absence of a link had more to do with its availability than with whether the communicated data contributed to (or impaired) the signal processing task being performed. Because many of the currently envisioned sensor networks will carry out most, if not all, of the signal processing tasks ``in-network'' and in a decentralized fashion, it is likely such cooperative algorithms will depend heavily on a network's architecture. Thus, understanding the role architecture plays in distributed signal processing systems (i.e., understanding the impact communication links have on signal processing algorithms) becomes a fundamental problem. Here, we propose an approach to quantify the effect a particular class of cooperative distributed detection architectures has on detection performance. Examples show that within this class, and for Markov-dependent observations, a distributed system's architecture can significantly impact the asymptotic exponential error decay rate of the ultimate decision maker.
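For i.i.d. observations, the best asymptotic exponential error decay rate in binary hypothesis testing is given by the Chernoff information; a small sketch for discrete distributions follows (standard background only — the setting above involves Markov-dependent observations, where the exponent additionally depends on the communication architecture).

```python
import math

def chernoff_information(p, q, grid=1000):
    """Chernoff information C(p, q) = -min_{0<s<1} log sum_x p(x)^s q(x)^(1-s):
    the best achievable error exponent (per sample, in nats) for deciding
    between i.i.d. samples from p and from q."""
    best = float("inf")
    for i in range(1, grid):
        s = i / grid
        val = math.log(sum(pi ** s * qi ** (1 - s) for pi, qi in zip(p, q)))
        best = min(best, val)
    return -best

p = [0.9, 0.1]
q = [0.1, 0.9]
print(chernoff_information(p, q))   # -log(0.6) ~ 0.511 by symmetry (s = 1/2)
```

In a distributed architecture, quantization and limited links generally reduce the achievable exponent below this centralized benchmark.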
Simultaneous digital communication of multiple analog signals represents a formidable design challenge, especially when the channel is very noisy, has limited bandwidth, and introduces significant inter-symbol interference. We show that the optimal communication scheme is achieved by a joint source-channel coding design. This approach jointly optimizes source compression while controlling the individual bit error probabilities to minimize the aggregate end-to-end mean-squared error distortion in reconstructing the analog signals from their compressed digital representations. Multicarrier modulation enables efficient implementation of the joint source-channel coding design criterion. To further improve performance, we developed a new systematic bit and power loading scheme and optimized the constellation design on each subchannel. We will compare our performance results with the theoretically attainable limits from rate-distortion theory.
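A classical ingredient of bit and power loading is water-filling across subchannels; a minimal sketch under the usual assumptions is below (illustrative only — the scheme above additionally weights subchannels by their contribution to end-to-end distortion, which plain water-filling does not capture).

```python
def waterfill(gains, total_power):
    """Classical water-filling: allocate power p_i = max(mu - 1/g_i, 0),
    with the water level mu chosen so the powers sum to total_power."""
    inv = sorted(1.0 / g for g in gains)
    # Try filling the k best subchannels; keep the largest feasible k.
    for k in range(len(inv), 0, -1):
        mu = (total_power + sum(inv[:k])) / k
        if mu > inv[k - 1]:            # all k selected channels get p_i > 0
            break
    return [max(mu - 1.0 / g, 0.0) for g in gains]

gains = [4.0, 1.0, 0.25]               # subchannel power gains (illustrative)
powers = waterfill(gains, 1.0)
print(powers, sum(powers))             # weakest subchannel gets zero power
```

Bit loading then assigns constellation sizes per subchannel from the resulting SNRs.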
Sensor networks have emerged as a promising tool for monitoring and actuating the physical world, employing self-organizing networks of wireless sensors that can sense, process, and communicate. Energy is a critical resource for such networks, and there is both a need and an opportunity to tailor the network architecture to specific applications in order to optimize resource utilization.
Many applications - such as large-scale collaborative sensing, distributed signal processing, and distributed data assimilation - require sensor data to be available at multiple resolutions, or allow fidelity to be traded off for energy efficiency. In this poster, we present COMPASS - an adaptive cross-layer sensor network architecture that enables multi-scale collaboration and communication. Consisting of adaptive protocols for routing management and distributed signal processing, this architecture provides scalability, localization, and resolution tuning. These protocols optimize the energy requirements of the architecture through intelligent co-design of network services and applications.
In this poster, we propose a Redundant Quaternion Wavelet Transform (RQWT) which successfully separates local signal energy and local signal structure into the quaternion magnitude and phases. The RQWT is a natural framework for studying the various statistical models used in existing image denoisers. With the help of the RQWT, we identify a class of noncoherent image denoising algorithms that includes most existing image denoising methods. These noncoherent denoising algorithms exploit only the local signal energy while ignoring the local signal structure. Straightforward signal estimation in the RQWT framework closely matches the state-of-the-art noncoherent image denoisers and provides a natural bound on their performance, thereby showing the importance of exploiting the structure information carried in the quaternion phases.
Self-organized learning is a powerful process in intelligent data analysis. Mimicking observed properties of natural neural maps on the cerebral cortex, Self-Organizing Maps (SOMs, an unsupervised Artificial Neural Network learning paradigm) produce a two-dimensional spatially ordered quantization of a higher-dimensional data space. The quantization prototypes are adaptively determined for optimal approximation of the (unknown) pdf of the data, and distributed on a rigid lattice according to their similarity relations, which facilitates detailed and precise cluster formation. SOMs have proved valuable in data mining. Successful applications to various science, engineering, and other problems have been published in more than 5000 papers over the last 20 years (http://www.cis.hut.fi/research/som-bibl). However, beyond relatively simple (low-dimensional, low-volume) data, automated and precise capture of cluster boundaries from a learned SOM has been a long-standing challenge that to date has only partial solutions. The problem is especially important for high-dimensional and large data sets with many meaningful clusters such as, for example, in hyperspectral images of remote sensing sites or biological tissues, or in genetic microarray data. Such data often also contain interesting rare clusters whose discovery may be the most important finding in the analysis, yet those rare clusters suffer most frequently from the limitations of clustering algorithms. We present two approaches that can help: one is the forcing of a nonlinearly "magnified" mapping of the (unknown) pdf of the data, which can preferentially enlarge the representation of small clusters, thereby making their discovery easier; the other is an automated clustering of the SOM (quantization) prototypes, which is responsive to all cluster sizes and captures rare clusters as well.
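The lattice quantization described above can be sketched in a few lines; a minimal SOM follows (parameters, seeds, and schedules are illustrative toy choices, not the configurations used in the work above).

```python
import math, random

def train_som(data, rows, cols, iters=2000, lr0=0.5, sigma0=None):
    """Minimal Self-Organizing Map: prototypes on a rows x cols lattice,
    winner-take-all update with a Gaussian neighborhood that shrinks over time."""
    random.seed(0)
    dim = len(data[0])
    proto = [[random.random() for _ in range(dim)] for _ in range(rows * cols)]
    sigma0 = sigma0 or max(rows, cols) / 2.0
    for t in range(iters):
        x = random.choice(data)
        # Best-matching unit (the "winner" on the lattice).
        bmu = min(range(len(proto)),
                  key=lambda i: sum((proto[i][d] - x[d]) ** 2 for d in range(dim)))
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        br, bc = divmod(bmu, cols)
        for i in range(len(proto)):
            r, c = divmod(i, cols)
            h = math.exp(-((r - br) ** 2 + (c - bc) ** 2) / (2 * sigma ** 2))
            for d in range(dim):
                proto[i][d] += lr * h * (x[d] - proto[i][d])
    return proto

# Quantize 2-D points drawn near two clusters.
random.seed(1)
data = ([[random.gauss(0.2, 0.05), random.gauss(0.2, 0.05)] for _ in range(100)]
        + [[random.gauss(0.8, 0.05), random.gauss(0.8, 0.05)] for _ in range(100)])
protos = train_som(data, 4, 4)
```

The cluster-extraction problem discussed above begins where this sketch ends: deciding which lattice prototypes belong to which cluster.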
Hyperspectral imagery provides all the discriminating details needed for fine delineation of many material classes. Detailed and precise classification is essential for scientific research ranging from geologic to environmental impact studies. In a supervised classification scenario, a natural question is whether a subset of the input features could be used without compromising classification quality. Feature extraction has been of interest for data modeling, compression, and for making hyperspectral data accessible to traditional classifiers that are of limited use for hundreds of dimensions. Feature extraction models based on PCA or wavelets judge feature importance by the magnitude of their coefficients, rarely leading to an appropriate set of features for uncompromised classification. Benediktsson used a backpropagation neural network to extract the 39 bands of an AVIRIS image needed to achieve the same accuracy in classifying 9 surface units as achieved using all bands. However, this claim is not supported by a benchmark classification. We study a recent neural paradigm, Generalized Relevance Learning Vector Quantization (GRLVQ), to discover input dimensions relevant for classification. GRLVQ substantially extends Learning Vector Quantization (LVQ) by learning relevant input dimensions while incorporating classification accuracy in the cost function. LVQ is the supervised version of Kohonen's unsupervised Self-Organizing Map. As the only requirement for classification success is defining class boundaries, LVQs are a good choice. LVQs iteratively adjust prototype vectors toward class boundaries while minimizing the Bayes risk. Our algorithmic improvements to the original GRLVQ stabilize it for high-dimensional data and increase performance. Using only the relevant spectral channels determined by GRLVQ, we produce as good or better classification accuracy for 23 surface materials as by using all spectral channels of AVIRIS.
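For background, here is a minimal LVQ1 sketch, the simple ancestor of GRLVQ (GRLVQ replaces this heuristic update with gradient descent on a differentiable cost and additionally learns per-dimension relevance weights, neither of which is shown here; data and parameters are toy choices of ours).

```python
import random

def lvq1(data, labels, n_classes, epochs=30, lr0=0.3):
    """LVQ1: one prototype per class, moved toward same-class samples
    and away from other-class samples."""
    random.seed(0)
    dim = len(data[0])
    # Initialize each prototype at the mean of its class.
    proto = []
    for c in range(n_classes):
        members = [x for x, y in zip(data, labels) if y == c]
        proto.append([sum(x[d] for x in members) / len(members) for d in range(dim)])
    idx = list(range(len(data)))
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        random.shuffle(idx)
        for i in idx:
            x, y = data[i], labels[i]
            w = min(range(n_classes),
                    key=lambda c: sum((proto[c][d] - x[d]) ** 2 for d in range(dim)))
            sign = 1.0 if w == y else -1.0   # attract if correct, repel if not
            for d in range(dim):
                proto[w][d] += sign * lr * (x[d] - proto[w][d])
    return proto

def classify(proto, x):
    return min(range(len(proto)),
               key=lambda c: sum((proto[c][d] - x[d]) ** 2 for d in range(len(x))))

random.seed(1)
data = ([[random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)] for _ in range(50)]
        + [[random.gauss(2.0, 0.3), random.gauss(2.0, 0.3)] for _ in range(50)])
labels = [0] * 50 + [1] * 50
proto = lvq1(data, labels, 2)
acc = sum(classify(proto, x) == y for x, y in zip(data, labels)) / len(data)
print(acc)
```

GRLVQ's relevance weights would replace the plain squared distance here with a weighted one, and those learned weights are what identify the relevant spectral channels.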
We support this claim by comparing independent classifiers on the unreduced and reduced feature sets.
Inductance modeling and automated design of interconnect and integrated spiral inductors continue to hinder the realization of mixed-signal systems in system-on-chip (SoC) technology. We present modeling and optimization techniques for interconnect and integrated spiral inductors in mixed-signal systems. For on-chip interconnect, inductive effects can significantly impact the timing and signal integrity of the mixed-signal design. We develop an analytical model of frequency-dependent self-inductance for RLC interconnect that accurately characterizes a wide range of interconnect layouts. With a 2500x performance improvement over field solvers, the model provides a tractable solution for inductance-aware physical synthesis, inductance screening, interconnect synthesis, and optimization. For integrated spiral inductors, design automation tools require efficient modeling and optimization techniques in order to quickly pinpoint appropriate inductor geometries. We introduce a new frequency-dependent model to quickly characterize spiral inductors. The model provides up to four orders of magnitude overall performance improvement. We also develop a scalable multi-level optimization methodology for spiral inductors that utilizes our spiral inductor model. The optimization engine integrates the flexibility of constrained global optimization with the rapid convergence of local nonlinear convex optimization techniques. Results indicate that our methodology locates optimal spiral inductor geometries with significantly fewer function evaluations than current techniques.
As CMOS technology scaling continues to approach basic physical limits and the importance of interconnect continues to grow, the development of alternate technologies becomes crucial for the next generation of high-performance integrated circuits. Conventional copper interconnects are potentially limiting the semiconductor industry's growth due to increasing delay and crosstalk. In recent years, on-chip optical interconnects have been considered as a potential solution to traditional wiring interconnects. Optical interconnect has the natural advantages of high speed, high bandwidth, and high interconnect density, with overall superior performance compared to traditional copper wires. In this poster, we present initial work towards modeling the interconnect medium and designing the optical system. Our modeling of the medium focuses on plasmon waveguides that consist of metal nanoparticles, as they offer higher integration density while also allowing low-loss energy guidance around sharp corners. As a first step towards modeling plasmon waveguides, we have developed an analytical closed-form expression for the frequency resonance and scattering characteristics of single nanoshells. Additionally, at the system level, we present a new knowledge-based synthesis methodology that facilitates the automatic design of individual blocks with guaranteed minimum delay. Our methodology relies on newly developed models that characterize the different performance parameters of each building block to predict the optimum parameters that maximize performance. The newly introduced synthesis methodology for the optical system achieves up to a 50% reduction in delay compared to standard design approaches. It is envisaged that our modeling and design methodology will provide an integrated interconnect solution for future high-performance on-chip optical interconnects.
The ideal architecture for high-performance network interfaces is still an open research question. To explore the problem and validate potential solutions, software-programmable NICs are often used. Their performance when simulating complex hardware architectures leaves much to be desired, however, and their flexibility can be limited by the use of proprietary intellectual property. In this project, a gigabit Ethernet NIC was implemented on an FPGA to obtain a maximally flexible environment for future advanced-architecture research. This system contains two embedded PowerPC processors under software control, providing excellent flexibility for changing application requirements. In addition, the reconfigurable FPGA fabric itself allows particularly intensive computational or logical tasks to be implemented directly in hardware, freeing the embedded processors for more complicated or less frequent tasks. This unique flexibility makes this NIC an ideal tool for studying advanced network processor architectures supporting data rates beyond 1 gigabit per second, as well as for investigating protocol and application offloading from the host PC to the NIC.
This research investigates the performance of block memory operations in the operating system, including memory copies, page zeroing, interprocess communication, and networking. The performance of these common OS operations is highly dependent on the cache state and future use pattern of the data, and no single routine maximizes performance in all situations. Despite this, current systems use a statically selected copy algorithm to perform block memory operations. This research examines various ways of determining the initial state of the cache before a memory operation begins and predicting the way in which the data will be used. It demonstrates that online methods for predicting the state of the cache using a software or hardware probe can achieve high accuracy. Additionally, it proposes a methodology of offline profiling that allows static prediction of both initial cache state and data reuse patterns that can be used for further optimization. By using this memory locality information to select the optimal software algorithm, the performance of kernel copy operations is improved. Furthermore, the behavior of the block memory optimization suggests a variety of potential hardware improvements to enhance performance by lowering the overhead of various software algorithms.
Interconnect topology has a significant impact on overall chip cost in terms of power and area. Topology-based statistical interconnect modeling enables a higher-level synthesis tool to make better optimization decisions more quickly. Results show that with simple heuristics, interconnect power can be reduced significantly by topology optimization during high-level synthesis. Lin Zhong will discuss optimizing interconnect topology and statistically modeling interconnect cost based on the topology throughout the synthesis process, including both high-level and logic synthesis.
With increasing computing power, storage capacity, and better connectivity, smart-phones will evolve into our worldwide digital hubs in the near future. Lin Zhong discusses two smart-phone-based prototypes of personal and community service systems. In the first system, a smart-phone serves as the center of a wireless personal-area network (PAN) that consists of I/O devices, such as health-monitoring sensors and user interface devices. The system provides a platform for services such as continuous personal health monitoring and digital diaries. The second system is based on the infrastructure of 802.11 or cellular network base stations. Each station will send information from trusted sources in its neighborhood to smart-phones within its range. Such information can include safety alerts, advertisements, and community activities.
Pseudo-random channel coding can achieve near-Shannon performance. In this category, there are two main candidates: Turbo codes (TC) and LDPC codes. Although LDPC codes are flexible (adaptive rate) and have excellent error-correcting performance (low block error rate), they suffer from a relatively low convergence speed in comparison with TC: typically 20 iterations vs. 8 iterations. We propose to optimize the message-passing schedule of LDPC codes by performing layered decoding, which can be applied to the rows and/or columns of the parity-check matrix. The convergence speed is thereby improved by about a factor of two. Using one of these optimized schedules, we designed a semi-parallel decoder that achieves a data throughput of up to 1 Gbit/s. The implementation is flexible - different coding rates (from 1/2 to 5/6) and code sizes (up to 2304 bits) are supported, according to the required specifications for the IEEE 802.11n and IEEE 802.16e standards.
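The row-layered schedule can be sketched on a toy parity-check matrix as follows (illustrative only — the hardware decoder above operates on the large structured codes of 802.11n/802.16e with parallel layer processing, and the example code and LLR values are ours).

```python
def layered_min_sum(H, llr, max_iter=10):
    """Layered (row-by-row) min-sum LDPC decoding: each check row updates
    the variable posteriors immediately, so later rows in the same
    iteration see fresher messages than under a flooding schedule."""
    m, n = len(H), len(H[0])
    rows = [[j for j in range(n) if H[i][j]] for i in range(m)]
    L = list(llr)                        # running posterior LLRs
    msg = [[0.0] * n for _ in range(m)]  # stored check-to-variable messages
    for _ in range(max_iter):
        for i in range(m):
            ext = {j: L[j] - msg[i][j] for j in rows[i]}   # variable-to-check
            for j in rows[i]:
                others = [ext[k] for k in rows[i] if k != j]
                sign = 1.0
                for v in others:
                    sign = -sign if v < 0 else sign
                msg[i][j] = sign * min(abs(v) for v in others)
                L[j] = ext[j] + msg[i][j]
        hard = [0 if l >= 0 else 1 for l in L]
        if all(sum(hard[j] for j in rows[i]) % 2 == 0 for i in range(m)):
            return hard                  # all parity checks satisfied: stop early
    return hard

# (7,4) Hamming parity checks; all-zero codeword with one unreliable bit.
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
llr = [-1.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]   # bit 0 looks flipped
print(layered_min_sum(H, llr))                # [0, 0, 0, 0, 0, 0, 0]
```

Because row i's update is visible to row i+1 within the same iteration, this toy example already converges in a single pass, which is the mechanism behind the roughly 2x speedup.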
In this poster we present system-on-a-chip extensions to the Spinach simulation infrastructure for rapidly prototyping heterogeneous and reconfigurable FPGA-based architectures, specifically in the embedded domain. This infrastructure has been successfully used to model various embedded DSP/FPGA system-on-a-chip designs incorporating Texas Instruments C6x series DSPs with tightly coupled FPGA-based coprocessors for computational offloading. As an illustrative example of this toolset's functionality, we show how the toolset can be used for system-on-a-chip partitioning and architectural exploration in the wireless domain.
Orthogonal Frequency Division Multiplexing (OFDM) systems present an attractive solution for meeting the requirements of next generation wireless communications systems. A well-designed OFDM system exhibits resilience to frequency-selective fading, lower multi-path distortion, and high spectral efficiency while enabling efficient hardware implementations. Emerging standards for high data-rate WLANs combine OFDM with multi-antenna (MIMO) techniques to achieve reliable high-throughput links while maintaining low hardware complexity. The major drawback with these systems is their sensitivity to synchronization errors between the receiver and the transmitter. In this work, we address the performance degradation of synchronization schemes in time-varying outdoor channel environments and present extensions to preamble-based schemes in the context of MIMO-OFDM WLAN systems. By performing extensive simulations, we strive to minimize the challenges faced during the hardware implementation phase. The simulation work will be followed by an efficient hardware implementation of the wireless system using field programmable gate arrays (FPGAs) and an integrated high-speed analog interface.
Sharing of centralized resources is becoming common in large companies and institutions. Meeting the response time deadlines and throughput requirements of various workloads contending for these shared resources is often quite challenging. This paper presents a 2-level scheduling framework that meets response time guarantees by dynamically reordering requests while maintaining the throughput given to each workload. Preliminary results show that our approach works well for dynamically varying workloads and is work-conserving.
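One plausible shape for such deadline-driven reordering is sketched below (purely illustrative — the class, policy, and parameter names are ours, not the paper's actual algorithm): level 1 enforces per-workload throughput shares, level 2 serves the chosen workload's most urgent request.

```python
import heapq

class TwoLevelScheduler:
    """Sketch of a 2-level scheduler: level 1 picks the workload whose
    observed throughput share is furthest below its target; level 2 serves
    that workload's earliest-deadline request."""
    def __init__(self, shares):
        self.shares = shares                    # workload -> target fraction
        self.served = {w: 0 for w in shares}
        self.queues = {w: [] for w in shares}   # per-workload deadline heaps

    def submit(self, workload, deadline, req):
        heapq.heappush(self.queues[workload], (deadline, req))

    def next_request(self):
        total = sum(self.served.values()) or 1
        # Level 1: most under-served non-empty workload relative to its share.
        candidates = [w for w in self.queues if self.queues[w]]
        if not candidates:
            return None
        w = min(candidates, key=lambda w: self.served[w] / total - self.shares[w])
        self.served[w] += 1
        # Level 2: earliest deadline first within the chosen workload.
        return heapq.heappop(self.queues[w])[1]

sched = TwoLevelScheduler({"A": 0.5, "B": 0.5})
sched.submit("A", deadline=30, req="a1")
sched.submit("A", deadline=10, req="a2")
sched.submit("B", deadline=20, req="b1")
print(sched.next_request())   # "a2": workload A chosen, then earliest deadline
```

The scheme is work-conserving in the same sense as above: it never idles while any queue is non-empty.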
The surface plasmon resonances of noble metal nanoparticles are highly sensitive to their local dielectric environment. The binding of analytes to acceptor molecules attached to a nanoparticle surface has been shown to alter this environment, thus enabling biological and chemical detection through spectroscopic measurements. When the plasmon resonances of individual nanoparticles are observed, the sensitivity reaches hundreds to thousands of molecules, and may approach single-molecule detection for large analytes. Gold nanostars may serve as exceptionally versatile plasmon resonance sensors due to their high aspect ratios, tunable plasmon resonance energies, rational synthesis, and well-developed surface chemistry. We have measured the plasmon resonance energy of single gold nanostars as a function of dielectric environment, and shown functionalization of a single gold nanostar with 16-mercaptohexadecanoic acid. These results, as well as progress towards biological functionalization of gold nanostars, will be discussed.
Large electric-field enhancements occur in nanoparticle junctions near their plasmon frequencies. The nanoshell dimer, for instance, has enhancements as high as three orders of magnitude. These enhancements enable more efficient molecular spectroscopy and may allow single molecule detection. Plasmon hybridization is a method that can be used to express the fundamental plasmon modes of complex nanosystems in terms of simpler composite plasmons. We apply the plasmon hybridization method to multi-spherical particle systems in order to understand these plasmon modes in a more intuitive way and compare these results to results from the Finite Difference Time Domain (FDTD) method.
We show that the interaction of a metallic nanoparticle's plasmon resonances with the surface plasmons of a metallic film is an electromagnetic analog of the spinless Anderson-Fano model. This is the same model used to describe the interaction of a localized electronic state with a continuous band of electronic states. The three characteristic regimes of this model are realized here, where the energy of the nanoparticle plasmon resonance lies above, within, or below the energy band of surface plasmon states. These three interaction regimes are controlled by the film thickness. The latter regime is experimentally observed and identified.
Fabrication methods that allow us to pattern different nanoparticle configurations on a substrate are essential to the development of more complex plasmonic nanodevices. One such structure is the nanoshell dimer. Strong electromagnetic field enhancement in the junctions of nanoparticles has been both predicted theoretically and verified experimentally, making nanoshell dimers highly promising plasmonic nanostructures for achieving high sensitivity in surface enhanced spectroscopy. Here we develop two bench-top methods for the fabrication of nanoshell homodimers and heterodimers.
Using plasmonic nanostructures to control and optimize surface enhanced spectroscopies has been the focus of increasing attention over the past few years, as spectroscopy-based molecular identification methods have widespread applications in chemical and biomolecular sensing. Efforts to develop, control, and optimize surface enhanced Raman spectroscopy (SERS) as an analytical tool depend on methods for fabricating substrates with high sensitivity, stability, and reproducibility of the SERS signals. Using periodic nanostructured substrates for SERS measurements can provide quantitative correlations between surface structures and SERS enhancements. We exploit a convenient and cost-effective approach to the self-assembly of cetyltrimethylammonium bromide (CTAB)-capped Au nanoparticles into highly ordered, close-packed Au nanoparticle arrays on solid substrates using a solvent evaporation method. The as-fabricated nanoparticle arrays display an intense plasmon band in the near-infrared region, which primarily arises from interparticle plasmon coupling between adjacent nanoparticles. Such interparticle electromagnetic coupling produces enormous near-field enhancements at the junctions between neighboring nanoparticles, creating uniform periodic densities of well-defined hot spots that can be exploited for large SERS enhancement. From the electromagnetic point of view, these nanoparticle arrays can be regarded as inverse Van Duyne lattices, possessing similar but complementary near-field properties to the triangle arrays fabricated using nanosphere lithography. Moreover, the high periodicity of the arrays also provides a useful model for detailed studies of the correlations between localized near-field electromagnetic properties and spectroscopic enhancements. The SERS performance of the close-packed Au nanoparticle arrays is quantitatively and systematically evaluated using para-mercaptoaniline (pMA) and dye molecules as probes.
Roughened subwavelength nanostructures have been attracting widespread interest in both fundamental research and technological applications. We developed two facile and controllable methods for texturing the surface topography of silica-Au core-shell nanoparticles, based respectively on site-selective chemical etching of the polycrystalline Au nanoshell surface by cysteamine and on seed-mediated growth of nanoscale bumps on the surface of smooth nanoshells. These surface texturing processes systematically introduce nanoscale roughness on the surface of Au nanoshells and have dramatic effects on their plasmonic properties. The modification of the plasmonic properties of nanoshells as a function of increased surface roughness was examined experimentally and modeled theoretically using three-dimensional Finite Difference Time Domain (FDTD) simulations. We also discovered that smooth and bumpy nanoshells display significantly different angle-dependent light scattering patterns under dipole- and quadrupole-resonant illumination.
Various spectroscopic techniques (IR, Raman) have tremendous potential for chemical sensing, but designing substrates for chemical sensing in the mid-infrared to far-infrared regime is challenging. One of the major advances in this field has been the discovery of SERS, which uses plasmonic nanostructures (nanoshells, metal island films, metallic particles, etc.) as substrates that enhance the incident electromagnetic field. However, SERS is limited to the visible to near-infrared (NIR) region of the electromagnetic spectrum.
Unlike SERS, surface enhanced infrared absorption (SEIRA) can be used for chemical sensing in the NIR to far-infrared region. The electromagnetic interactions of the incident photon field with the metal and the molecules play a major role in SEIRA. We have exploited the tunability of nanoshells to use them as SEIRA substrates. Silica-core/gold-shell (SiO2-Au) nanoparticles have been fabricated with the dipole plasmon resonance tuned to the mid-infrared region (7 microns). These particles act as a potential metal surface to which analyte molecules with specific functional groups are attached. We have designed and fabricated nanoshells resonant in the mid-infrared regime, a first step towards developing reproducible SEIRA probes.
There is tremendous interest in the enhancement of electromagnetic fields near metal surfaces for applications such as surface enhanced Raman spectroscopy (SERS). The Raman scattered intensity of a fluorophore should increase in the enhanced optical near field of a metal surface.
Gold nanoshells are spherical metallodielectric particles composed of a silica core covered by a gold shell, which exhibit tunable plasmon resonances. At these resonances, the electromagnetic field surrounding the nanoshells is strongly enhanced. Experimentally, we have probed the decay of this fringing field by fabricating gold nanoshell-fluorescein conjugates. The fluorescein molecules are placed at controlled distances from the gold nanoshell surface using poly-adenine DNA strands as tethers; by varying the number of adenine bases in the tethers, we can vary the nanoshell-fluorescein distance. The measured dependence of the SERS intensity on DNA tether length, and its correlation with theoretical models, are reported.
Metallic nanostructures, such as nanoshells, are of considerable interest as substrates for surface enhanced spectroscopies such as Surface Enhanced Raman Spectroscopy (SERS), Surface Enhanced Infrared Absorption (SEIRA), and Surface Enhanced Raman Optical Activity (SEROA). The resonant excitation of plasmons in the metallic substrates can create large local electromagnetic field enhancements in the vicinity of the nanoparticle surfaces, which enables drastic increases in the spectroscopic cross sections. In this work we investigate the far- and near-field properties of nanoshells with a non-concentric core, known as nanoeggs.
Interest in sources and systems for terahertz (1 THz = 10^12 Hz) radiation has grown rapidly in recent years, spurred in part by the advent of numerous new techniques for generating and detecting radiation in this spectral range. THz radiation bridges the gap between the microwave and optical regimes and offers great scientific and technological potential in many fields. Numerous uses of THz radiation have been explored, in areas such as trace gas detection, medical diagnosis, security screening, and defect analysis in complex materials such as space shuttle tiles. However, waveguiding in this intermediate spectral region remains a challenge: neither the conventional metal waveguides used for microwave radiation nor the dielectric fibers used for visible and near-infrared radiation can guide THz waves over long distances, owing to high loss and large dispersion. Here we show some recent results of our exploration of novel THz waveguides and their applications. A metal wire waveguide for broadband THz pulses is demonstrated, which exhibits virtually no dispersion, extremely low attenuation, and facile manipulation of guided waves. Based on these remarkable features, we have built the first THz endoscope. A new type of photoconductive THz transmitter is also demonstrated, which dramatically improves the input coupling into the metal wire waveguide.
Photonic crystals are materials with periodic dielectric functions in one or more directions. They have been widely studied for their various potential applications, such as waveguides, filters, sensors, and wavelength division multiplexing (WDM). Here we consider two-dimensional photonic crystals at terahertz frequencies, a frequency range that has become accessible only recently. We investigate the propagation of light along the direction perpendicular to the plane of periodicity. At low frequencies, we find a large dispersion with a complicated spectral dependence. At high frequencies, where the lattice parameter exceeds the wavelength, we find that the group delay is equal to that of empty space, despite the fact that the volume-weighted average dielectric constant of the slab is 3.8. We have employed the Finite Element Method (FEM) and the Finite-Difference Time-Domain (FDTD) method to study the slab, and excellent agreement between simulations and experiments is achieved.
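To see why the high-frequency result is striking, the sketch below compares the group delay the naive effective-medium picture would predict against that of empty space. Only the ε = 3.8 volume-weighted average comes from the abstract; the 2 mm slab thickness is an assumed illustrative value.

```python
import numpy as np

c = 299_792_458.0   # speed of light, m/s
eps_avg = 3.8       # volume-weighted average dielectric of the slab (from the text)
d = 2e-3            # assumed slab thickness, 2 mm (illustrative value only)

n_eff = np.sqrt(eps_avg)    # naive effective refractive index
t_free = d / c              # transit time through empty space
t_slab = n_eff * d / c      # naive transit time through the slab

# The naive effective-medium picture predicts roughly 95% excess group
# delay, whereas the measured high-frequency delay matches empty space.
excess = t_slab / t_free - 1.0
print(f"naive excess group delay: {excess:.1%}")
```

The discrepancy between this simple estimate and the measurement is exactly what the full FEM/FDTD treatment of the periodic structure is needed to explain.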
Simple metal wires were recently found to be effective terahertz waveguides, exhibiting very low loss and dispersion. The THz radiation propagates along the surface of the wire's cylindrical geometry, in a manner very similar to the wave phenomena described by Sommerfeld. Sommerfeld's theory, however, does not agree with experimental studies of the attenuation versus frequency. We employ the Finite Element Method (FEM) to study the propagation of terahertz radiation along metal wires. In our simulations, radially polarized waves at THz frequencies are launched down metal wires. The variation of attenuation with increasing frequency is examined for straight metal wires and compared to Sommerfeld's results. His theory predicts that the dominant loss mechanism for a metal wire is its finite conductivity; we study this loss by performing simulations in which the conductivity is varied. Surface modifications of the wire, including an outer dielectric coating and surface gratings, are modeled; such modifications are predicted to affect the confinement of the guided wave about the wire. The effects of varying the wire radius are also modeled. Where possible, these results are compared with experimental ones. The Finite Element Method is shown to be a powerful tool for studying the propagation of guided terahertz radiation.
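To make the finite-conductivity loss mechanism concrete, here is a minimal sketch (not the authors' simulation code) of the classical skin-effect surface resistance, which governs the ohmic loss of a wave guided on a metal wire; the copper conductivity and 1 THz frequency are assumed example values.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def surface_resistance(freq_hz, sigma):
    """Skin-effect surface resistance R_s = sqrt(pi * f * mu0 / sigma),
    in ohms per square.  R_s grows as sqrt(f) and shrinks as 1/sqrt(sigma),
    which is why ohmic loss on a metal wire rises with frequency and falls
    with conductivity."""
    return np.sqrt(np.pi * freq_hz * MU0 / sigma)

rs_cu = surface_resistance(1e12, 5.8e7)        # copper at 1 THz (assumed values)
rs_half = surface_resistance(1e12, 5.8e7 / 2)  # halving sigma raises R_s by sqrt(2)
```

Sweeping the conductivity in this expression mirrors, in a crude analytic way, the conductivity-variation study performed with the FEM simulations.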
Low-dimensional semiconductor structures offer interacting many-electron systems in which to observe and manipulate novel quantum coherent phenomena. Spin-related quantum coherence has been intensively studied in various confined geometries during the past several years, opening up new device possibilities for spintronics and quantum information processing. Here we demonstrate charge-based quantum coherence lasting as long as 50 ps in a high-mobility two-dimensional electron gas in a GaAs/AlGaAs structure. The quantum coherence manifests itself as time-domain cyclotron resonance oscillations, measured with a coherent THz spectroscopy system combined with a 1.5 K, 10 T superconducting magnet cryostat with optical windows. These oscillations can be thought of as the free induction decay of a coherent superposition, induced by the incident THz pulse, between the lowest unfilled Landau level and the highest filled Landau level. From the decay time and oscillation frequency obtained at each magnetic field, we can directly determine the magnetic field dependence of the phase coherence time (T2 or T2*) and the cyclotron frequency (ωc). Using the 0 T data as a reference, we can also extract the real and imaginary parts of the dynamic conductivity σ(ω) for different magnetic fields in the frequency domain. Using this system, we can conduct further coherent and quantum optical experiments such as cyclotron echoes and Rabi flopping.
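The referenced conductivity extraction can be sketched as follows. This is a hypothetical illustration, assuming the standard thin-conducting-film approximation that relates the complex ratio of the reference and signal spectra to the sheet conductivity; the substrate index n_sub = 3.6 is an assumed GaAs-like value, not a number from the measurement.

```python
import numpy as np

def sheet_conductivity(trace_ref, trace_sig, n_sub=3.6):
    """Complex sheet-conductivity change from two time-domain THz traces.

    Assumes the thin-conducting-film approximation
        sigma_s(w) = (1 + n_sub) / Z0 * (E_ref(w) / E_sig(w) - 1),
    with the 0 T trace as the reference.  n_sub = 3.6 is an assumed
    GaAs-like substrate index (illustrative only).
    """
    Z0 = 376.730                     # impedance of free space, ohms
    E_ref = np.fft.rfft(trace_ref)   # reference spectrum (0 T)
    E_sig = np.fft.rfft(trace_sig)   # spectrum at finite magnetic field
    return (1.0 + n_sub) / Z0 * (E_ref / E_sig - 1.0)

# sanity check: identical traces imply zero conductivity change
t = np.exp(-np.linspace(-2.0, 2.0, 256) ** 2)
sigma = sheet_conductivity(t, t)
```

Both the real and imaginary parts of σ(ω) fall out of the complex spectral ratio, which is what makes the 0 T reference measurement sufficient.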
We present time-resolved magneto-optical transmission measurements on micelle-wrapped, individually suspended single-walled carbon nanotubes in μs pulses of ultrahigh magnetic fields up to 150 T. Polarized optical transmission data taken in the Voigt geometry reveal time-dependent optical anisotropy, indicating that the nanotubes can dynamically align in response to the μs-time-scale pulsed magnetic fields. Unlike previous measurements with DC and ms-pulsed high magnetic fields, however, the transmission dynamics clearly show hysteretic behavior as a function of magnetic field, suggesting that nanotube alignment and de-alignment do not follow the μs field pulses faithfully. Our quantitative data analysis shows that the transient value of the nematic order parameter (S) at 150 T is only ~4%, whereas almost complete alignment (S ~ 92%) would be expected in a DC 150 T field based on the previously determined magnetic susceptibility anisotropy of semiconducting nanotubes with ~1 nm diameter. These results strongly suggest that nanotube inertia, as well as the viscosity of the surrounding solution, must be taken into account to correctly understand and predict the magneto-alignment dynamics of nanotubes.
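For reference, the nematic order parameter quoted above can be obtained from polarized absorbances. The sketch below uses the generic formula for an ensemble of one-dimensional absorbers with made-up example absorbance values; it is an illustration, not the authors' analysis code.

```python
def order_parameter(a_par, a_perp):
    """Nematic order parameter of 1D absorbers from polarized absorbances:
        S = (A_par - A_perp) / (A_par + 2 * A_perp)
    S = 0 for an isotropic suspension, S = 1 for complete alignment."""
    return (a_par - a_perp) / (a_par + 2.0 * a_perp)

# an isotropic suspension shows no dichroism
s_iso = order_parameter(1.0, 1.0)

# made-up absorbances that happen to give the ~4% transient alignment
# quoted in the abstract
s_partial = order_parameter(1.08, 0.96)
```

The small dichroism implied by S ≈ 0.04 underlines how far the μs-pulse response falls short of the near-complete DC-field alignment.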
New DSP-based architectures have been developed for semiconductor diode laser trace gas sensors, resulting in ultra-compact sensor designs that can match or exceed current systems in terms of detection limit. This work presents architectures based on custom digital signal processing (DSP) technology. A recently developed system based on pulsed QC laser absorption spectroscopy is being optimized to fully utilize the acquisition and associated processing resources in order to reach the highest performance of the system. By utilizing fast wavelength scans, maximum laser pulse rates, and a data oversampling technique, such a sensor system provides faster convergence, high immunity to low-frequency noise, and an improved detection limit. Another platform that will benefit from low-power signal processing technology is quartz-enhanced photoacoustic spectroscopy (QEPAS). QEPAS allows a significant decrease in the size of the sensor platform, which, in combination with a compact low-power DSP-based data acquisition and control system, can significantly reduce the size of the final instrument. Cost-effective sensor networks can then be developed to allow the rapid detection of toxic gases as well as wide-area gas monitoring: numerous battery-powered sensors designed for minimal power consumption can be scattered throughout the region of interest and communicate wirelessly, by optical or radio means, to determine real-time gas concentration maps and/or diffusion rates. Recent results obtained with the pulsed QC laser spectrometer, and design work for the implementation of such future low-power QEPAS architectures, will be discussed.
Chronic obstructive pulmonary disease (COPD) is the fourth-ranked cause of death in the United States and currently kills more than 100,000 Americans each year. Diffuse airway inflammation is observed in COPD and can be related to the deterioration of lung function. Nuclear transcription factor κB (NF-κB) and other inflammatory factors may be linked to this diffuse airway inflammation. Curcumin (diferuloylmethane) is a polyphenol found in turmeric that has been shown to suppress transcription factors such as NF-κB. Nitric oxide (NO) is released into the human airway in normal individuals and occurs at elevated levels in patients with chronic inflammatory lung diseases such as COPD. Our hypothesis is that curcumin will decrease indices of inflammation in stable COPD patients with severe disease. We will present details of a recently initiated pilot study, a dose-escalating, open-label trial of 30 subjects with severe COPD conducted in collaboration with BCM. The subjects will be seen monthly for three months, receiving 1 gram twice daily (bid) for the first month, 1.5 grams bid for the second month, and 2 grams bid for the final month. We will measure NO concentrations in exhaled breath samples offline using quantum cascade laser based integrated cavity output spectroscopy at 5.45 microns. Several patients have begun the study, and example NO spectra will be presented.
INNOVATE is an annual conference for undergraduate and graduate technical students that examines the relationship between technology, globalization, and leadership in the contemporary economy, giving students a high-level understanding of these concepts. Student delegates to the 2005 conference spent five days each in Singapore and Tokyo, participating in meetings with key business, academic, and government leaders and conducting professional visits to leading technology companies. INNOVATE 2005 involved 51 students and 10 advisors from 14 universities and 14 different countries on 4 continents. Beyond the formal sessions of the program, conversations during dinner and sightseeing excursions were also excellent opportunities for delegates to reflect on what they had learned. This conference challenges emerging technical leaders to grapple with critical issues of technology and globalization and challenges their thinking about their own cultures, not just in the abstract but through experience. The venues for INNOVATE change annually, allowing students to visit the key centers of technology in Asia. The 2006 conference will take place in Shanghai, China and Kansai, Japan.
The faculty of the Electrical and Computer Engineering Department strives to provide high-quality degree programs that emphasize fundamental principles, respond to the changing demands and opportunities of technology, challenge the exceptional abilities of Rice students, and prepare these students for roles of leadership in their chosen careers. In support of this goal, the department has adopted a set of program objectives and educational outcomes for its graduates. The department continuously assesses how well it is achieving its objectives and outcomes, and uses the results to improve its curriculum, course content, and degree requirements. As employers of our graduates and/or as alumni, we seek your input on our program objectives, educational outcomes, and graduates. Please stop by our poster and give us your comments.
The Rice chapter of IEEE is a student organization dedicated to informing ECE students of events in the department, preparing undergraduate ECE majors for life "beyond the hedges," encouraging freshmen and sophomores to major in Electrical Engineering, and creating a greater sense of community among ECE students. The student branch holds weekly lunches for ECE undergrads with informal talks by professors, graduate students, and industry speakers. Rice IEEE also provides ECE undergrads with a monthly newsletter featuring articles on research, careers, and the ECE degree process, and hosts presentations by companies interested in recruiting ECE undergrads. Industry representatives and faculty members who are interested in giving a talk, writing an article, or contributing to IEEE events can contact co-presidents Gina Upperman and Brian VanOsdol at email@example.com.
ECE Affiliates Meeting