Search results
 Title
 Identification of the Inertial Parameters of Manipulator Payloads.
 Creator

Reyes, Ryan David, Department of Electrical and Computer Engineering
 Abstract/Description

Momentum-based motion planning allows small and lightweight manipulators to lift loads that exceed their rated load capacity. One such planner, Sampling Based Model Predictive Optimization (SBMPO), developed at the Center for Intelligent Systems, Control, and Robotics (CISCOR), uses dynamic and kinematic models to produce trajectories that take advantage of momentum. However, the inertial parameters of the payload must be known before the trajectory can be generated. This research utilizes a method based on least squares techniques for determining the inertial parameters of a manipulator payload. It is applied specifically to a two-degree-of-freedom manipulator. A set of exciting trajectories, i.e., trajectories that sufficiently excite the manipulator dynamics, is commanded to the system in task space. Inverse kinematics are then used to determine the desired angle, angular velocity, and angular acceleration for the manipulator joints. Using the sampled torque, joint position, velocity, and acceleration data, the least squares technique produces an estimate of the inertial parameters of the payload. This paper focuses on determining which trajectories produce sufficient excitation so that an adequate estimate can be obtained.
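The least-squares step this abstract describes can be sketched as follows. Manipulator dynamics are linear in the inertial parameters, so stacking samples of an exciting trajectory gives an overdetermined system. The 1-DOF regressor, link length, payload values, and noise level below are hypothetical illustrations, not the thesis's actual model.

```python
import numpy as np

# Dynamics linear in the unknown inertial parameters: tau = Y(q, qd, qdd) @ theta.
# Stack Y and tau over many samples of an "exciting" trajectory, then solve
# by least squares. Illustrative 1-DOF example with hypothetical values.
g = 9.81
true_theta = np.array([0.8, 0.12])    # hypothetical payload mass, mass*offset

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
q = 1.5 * np.sin(2 * np.pi * t)       # exciting joint trajectory
qd = np.gradient(q, t)                # numerically differentiated, as in practice
qdd = np.gradient(qd, t)

L = 0.5                               # assumed link length (m)
# Illustrative regressor columns: one multiplies the mass, one the mass*offset
Y = np.column_stack([L**2 * qdd + g * L * np.cos(q),
                     qdd + g * np.cos(q)])

tau = Y @ true_theta + 0.01 * rng.standard_normal(len(t))  # noisy torque samples

theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
print(theta_hat)   # close to true_theta when the trajectory is exciting
```

A poorly exciting trajectory makes the columns of Y nearly collinear, which is exactly why the thesis studies which trajectories yield an adequate estimate.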
 Date Issued
 2014
 Identifier
 FSU_migr_uhm0418
 Format
 Thesis
 Title
 Simulation of Li-Ion Coin Cells Using COMSOL Multiphysics.
 Creator

Chepyala, Seshuteja, Moss, Pedro L., Weatherspoon, Mark H., Andrei, Petru, Florida State University, FAMU-FSU College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

Lithium batteries have played an important role since the early 1980s in providing energy for small portable devices. Due to the increasing demand for and limited availability of fossil fuels, there is a need to shift to renewable energy. In this thesis, the fabrication procedure for the lithium-ion coin cell is extensively analyzed. A brief introduction to the lithium-ion battery is given, and the physics and chemistry of the materials are explained. Emphasis is placed on the importance of calendering an electrode. LiFePO4 was mixed with Super P, PVDF, and NMP in appropriate stoichiometric amounts, and half coin cells were produced with lithium foil as the reference electrode. The effects of calendering on discharge capacity, density profile, and AC impedance were analyzed. The resulting material samples were analyzed in two parts: Sample A was left as is and Sample B was calendered. The calendered electrode exhibited a lower impedance in the impedance test. The calendered electrode also exhibited a higher discharge capacity of about 162 mAh/g at a C/10 rate, compared to about 152 mAh/g at C/10 for the uncalendered electrode. The experimental results were then compared to a simulated model constructed in COMSOL Multiphysics. The coin cell model in COMSOL started from the existing model for cylindrical cells. The parameters and equations required for the setup are analyzed and discussed. The comparison of experimental versus simulated results yielded some preliminary information; however, this work is still in progress, with further models to be built for coin cells with different materials.
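As a quick check on the quoted rates: a C/n discharge draws the cell's capacity over n hours, so taking the measured capacities above as the basis (an illustrative simplification; C-rates are normally defined against nominal capacity), the implied specific currents are:

```python
# C/n rate: discharge the full capacity in n hours, so I = capacity / n.
def specific_current_mA_per_g(capacity_mAh_per_g: float, n_hours: float) -> float:
    return capacity_mAh_per_g / n_hours

calendered = specific_current_mA_per_g(162.0, 10.0)    # mA per gram of active material
uncalendered = specific_current_mA_per_g(152.0, 10.0)
print(calendered, uncalendered)   # 16.2 and 15.2 mA/g over a 10-hour discharge
```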
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Chepyala_fsu_0071N_14110
 Format
 Thesis
 Title
 Estimation of Power Density of Modular Multilevel Converter Employing Set Based Design.
 Creator

Toshon, Tanvir Ahmed, Faruque, Md Omar (Professor of Electrical and Computer Engineering), Foo, Simon Y., Bernadin, Shonda Lachelle, Soman, Ruturaj, Florida State University, FAMU-FSU College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

The Medium Voltage DC (MVDC) system is becoming a compelling alternative for designing the All-Electric Ship (AES) for the US Navy. The Modular Multilevel Converter (MMC) is considered an essential component of MVDC systems for its scalability and efficacy. Designing such a power electronic converter is challenging given the volume constraints of an electric ship. Preliminary naval ship design used point-based spiral design techniques, but the complexity and disadvantages of such techniques mean they do not necessarily produce the most feasible, cost-effective design. To overcome this issue, the US Navy is exploring the application of Set-Based Design (SBD) to naval architecture through Smart Ship System Design (S3D) to aid early-stage ship design. This thesis explores SBD to gain a better understanding of the design technique. This is accomplished through a design exercise employing SBD for an essential component of the MVDC breakerless architecture, the Modular Multilevel Converter. The effort begins with investigating scaling factors for the MMC and applying them to estimate the power density of the converter through SBD exploration. The outcome of this work is expected to aid early-stage ship design exercises using S3D and to provide a guideline for integrating SBD concepts into ship system design.
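A minimal sketch of the set-based idea: evaluate broad sets of candidate designs, discard those violating constraints, and carry the whole feasible set forward rather than committing to a single point design. Every design variable, scaling law, and constraint value below is hypothetical, invented for illustration, and not taken from the thesis.

```python
from itertools import product

# Hypothetical MMC design variables: submodules per arm, submodule
# capacitance (mF), switching frequency (kHz).
n_sm_set = [10, 14, 18, 22]
cap_mF_set = [2.0, 4.0, 6.0]
fsw_kHz_set = [1.0, 2.0, 5.0]

def volume_liters(n_sm, cap_mF, fsw_kHz):
    # Illustrative scaling law: volume grows with submodule count and
    # capacitance, shrinks with switching frequency.
    return 0.8 * n_sm * cap_mF / fsw_kHz

def power_kW(n_sm):
    return 50.0 * n_sm   # illustrative: power scales with submodule count

feasible = []
for n_sm, cap, fsw in product(n_sm_set, cap_mF_set, fsw_kHz_set):
    vol = volume_liters(n_sm, cap, fsw)
    if vol <= 60.0:                                  # hypothetical volume budget
        feasible.append((n_sm, cap, fsw, power_kW(n_sm) / vol))  # kW/L

# SBD keeps the whole feasible set for later narrowing instead of
# locking in one point design up front.
best = max(feasible, key=lambda d: d[3])
print(len(feasible), best)
```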
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_TOSHON_fsu_0071N_14095
 Format
 Thesis
 Title
 Low Voltage Ride-Through for Photovoltaic Systems Using Finite Control-Set Model Predictive Control.
 Creator

Franco, Fernand Diaz, Edrington, Christopher S., Ordóñez, Juan Carlos, Faruque, Md Omar (Professor of Electrical and Computer Engineering), Foo, Simon Y., Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

Grid codes impose immunity requirements on generation systems connected to transmission lines. Immunity refers to the generator's capability to ride through abnormal grid conditions. One of the requirements is to remain connected for a certain time when a fault, such as a voltage sag, is present. During the fault, a generator unit should remain connected for a predetermined amount of time and also provide reactive power to support the grid voltage. This is called low-voltage ride-through (LVRT). Initially, LVRT requirements were imposed on large generator units, such as wind farms connected to the transmission network; however, due to the increased penetration of distributed generation (DG) on the distribution system, new grid codes extend this capability to generator units connected to the distribution grid. Due to matured photovoltaic (PV) technology and the decreasing price of PV panels, grid-tied PV installations are proliferating in utility grids, creating new challenges related to voltage control. In the past, DG such as PV was allowed to trip from the grid when a fault or unbalance occurred and to reconnect within several seconds (sometimes minutes) once the fault had been cleared. Nevertheless, with today's high PV penetration, the same approach cannot be used, because it would further deteriorate power quality and could potentially end in a blackout. Different approaches have been considered to fulfill the LVRT requirement in PV systems. A large body of literature focuses on control of the grid-side converter of the PV installation rather than control of PV operation during the fault, and most control designs applied to the grid side follow classical control methods. Moreover, the effects of a grid fault on the generator side pose a challenge for controlling PV systems, since the quality of the synthesized converter voltages and currents depends on the dc-link power/voltage control.
This document proposes a model-based predictive control (MPC) scheme for controlling a two-stage PV system to fulfill LVRT requirements. MPC offers important advantages over traditional linear control strategies, since the MPC cost function can include constraints that are difficult to enforce in classical control. Special attention is given to implementation of the proposed control algorithms. Simplified MPC algorithms that compromise neither converter performance nor the immunity requirements are discussed.
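The finite control-set flavor of MPC named in the title works by enumerating the converter's discrete switching states each sample, predicting one step ahead with a discretized plant model, and applying the state that minimizes a cost function. The single-phase RL filter and all parameter values below are hypothetical, a generic sketch rather than the thesis's controller.

```python
# Finite control-set MPC, one step: enumerate discrete switching states,
# predict the next current sample, apply the state minimizing the cost.
# Hypothetical single-phase RL grid filter and three-level converter leg.
R, L_f, Ts, Vdc = 0.5, 5e-3, 50e-6, 400.0
states = [-1, 0, 1]                 # leg output: -Vdc, 0, +Vdc

def predict(i_now, v_grid, s):
    v_conv = s * Vdc
    # Forward-Euler discretization of L di/dt = v_conv - v_grid - R*i
    return i_now + Ts / L_f * (v_conv - v_grid - R * i_now)

def fcs_mpc_step(i_now, i_ref, v_grid):
    # Cost: squared tracking error of the predicted current; LVRT schemes
    # add further terms (e.g. reactive current injection) to this cost.
    costs = {s: (predict(i_now, v_grid, s) - i_ref) ** 2 for s in states}
    return min(costs, key=costs.get)

# One control step: current below reference, positive grid voltage
best = fcs_mpc_step(i_now=5.0, i_ref=8.0, v_grid=150.0)
print(best)   # +1: apply +Vdc to drive the current toward the reference
```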
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_DiazFranco_fsu_0071E_14045
 Format
 Thesis
 Title
 Modeling and Application of Effective Channel Utilization in Wireless Networks.
 Creator

Ng, Jonathan, Yu, Ming (Professor of Scientific Computing), Zhang, Zhenghao, Harvey, Bruce A., Andrei, Petru, Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

As a naturally scarce resource in wireless networks, radio spectrum is a major investment in network deployment. How to improve the channel utilization (CU) of the spectrum is a challenging topic in recent research. In a network environment, the utilization of a channel is measured by the effective CU (ECU), i.e., the time the channel is effectively used for transmission, or sensed busy, over its total operation time. However, existing work does not provide a valid model for ECU. We investigate the relationship between ECU and the interference from other transmission nodes in a wireless network, as well as from potentially malicious interfering sources. By examining the transmission-time and co-transmission-time ratios between two or more interferers, we propose a new model based on the channel occupation time of all nodes in a network. The model finds its mathematical foundation in set theory. By eliminating overlapping transmission-time intervals instead of simply adding the transmission times of all interferers together, the model obtains the expected total interference time by properly combining the transmission time of each individual node with the time when two or more nodes transmit simultaneously. By dividing the interferers into groups according to the strength of their received interference power at the node of interest, less significant interfering signals can be ignored to reduce complexity when investigating real scenarios. The model provides an approach to a new detection method for jamming attacks in wireless networks, based on a criterion that combines ECU and CU. In the experiments, we find a strong connection between ECU and the received interference power and time. In many cases, strong and frequent interference is accompanied by a decline in ECU, though the descending slope may be steep or flat.
When the decrease in ECU is not significant, a sharp drop in CU can be observed instead. Therefore the two metrics, ECU and CU, properly combined, form an effective measure for judging strong interference. In addition, relating this to other jamming detection methods in the literature, we build a mathematical connection between the new jamming detection conditions and the Packet Delivery Ratio (PDR), which previous researchers have proved effective. The correlation between the new criteria and PDR thus supports the validity of the former by relating it to a tested mechanism. Both the ECU model and the jamming detection method are thoroughly verified with OPNET through simulation scenarios. The experiment scenarios are described with configuration data and collected statistics. In particular, the radio jamming detection experiments simulate a dynamic radio channel allocation (RCA) module with a user-friendly graphical interface, through which the interference, the jamming state, and the channel-switching process can be monitored. The model can further be applied to other problems, such as global performance optimization based on the total ECU of all nodes in a wireless communications environment, because ECU treats one node's transmission as interference for other nodes on the same channel; this is planned as our next step. We would also like to compare the method's effectiveness against other jamming detection methods through more extensive experiments.
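The overlap-elimination idea at the heart of the model can be illustrated with a small interval-union computation: summing interferers' on-air durations double-counts simultaneous transmissions, while merging the intervals (inclusion-exclusion) gives the true channel-busy time. The intervals below are hypothetical.

```python
def total_interference_time(intervals):
    """Union length of (start, end) transmission intervals.

    Merging overlapping intervals removes the time that would be
    double-counted when two or more nodes transmit simultaneously.
    """
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)   # extend the current run
        else:
            merged.append([start, end])
    return sum(end - start for start, end in merged)

# Two interferers, hypothetical on-air intervals (ms)
node_a = [(0, 4), (10, 14)]
node_b = [(2, 6), (12, 13)]

naive = sum(e - s for s, e in node_a + node_b)       # double counts overlap
actual = total_interference_time(node_a + node_b)    # union of busy time
print(naive, actual)
```

Here the naive sum is 13 ms, but the channel is actually occupied for only 10 ms, because (0,4)/(2,6) overlap and (12,13) lies entirely inside (10,14).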
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Ng_fsu_0071E_14083
 Format
 Thesis
 Title
 Investigation of Alternative Cryogenic Dielectric Materials and Designs for High Temperature Superconducting Devices.
 Creator

Cheetham, Peter Graham, Pamidi, Sastry V., Ordóñez, Juan Carlos, Edrington, Christopher S., Graber, Lukas, Foo, Simon Y., Florida State University, FAMU-FSU College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

The consumption of electricity is seen by society as a certainty; however, there are several uncertainties about how the topology of the electrical grid will look in the future. For instance, the demand for electricity is expected to increase considerably, there will be greater incorporation of renewable generation sources, and society will call for a decrease in the spatial footprint of the electrical power grid. To address these uncertainties, new technology has been proposed to replace the conventional copper devices currently in use. One of the new technologies that has shown great promise over the last decade is superconducting power devices. The appeal of superconducting technology lies in its ability to operate at significantly higher current densities than equivalently sized copper or aluminum technologies. This increase in current density will potentially allow the electrical power grid to operate at higher capacity and greater efficiency. In order to develop superconducting devices for high-power applications, the critical boundaries with regard to temperature, current, and magnetic field need to be studied. High-voltage engineering principles also need to be studied to ensure that an optimal design is produced for the superconducting power device. These theoretical and practical challenges of designing superconducting power devices are discussed in Chapter 1. Chapter 2 focuses on the high-voltage engineering and dielectric design aspects of a specific superconducting power device, the HTS power cable. In particular, this chapter discusses the different dielectric design topologies and cable layouts, and reviews successfully demonstrated HTS power cables. One of the current limitations in designing superconducting power devices is the lack of dielectric materials compatible with cryogenic temperatures, and this area has been the focus of my research. The main focus of my Ph.D. is the investigation of new cryogenic dielectric materials and designs, which can be separated into two main areas. The cryogenic studies on increasing the dielectric strength of gaseous helium (GHe) focused on the addition of a small mol% of various gases such as nitrogen (N2), hydrogen (H2), and neon (Ne) to GHe (Chapter 4). The studies on increasing the partial discharge inception voltage of GHe-cooled high-temperature superconducting (HTS) power cables focused on using a polyethylene terephthalate heat shrink to individually insulate HTS tapes (Chapter 6), as well as the development of a novel HTS cable design referred to as the Superconducting Gas-Insulated Transmission Line (SGIL) (Chapter 7). While the research can be split into different categories, the experimental techniques for preparing samples and performing measurements are consistent and are discussed in Chapter 3. This research produced several key findings that will help advance the development of GHe-cooled superconducting devices, summarized as follows:
• The addition of 4 mol% hydrogen gas to GHe increases the dielectric strength by 80% relative to pure GHe at all pressures. This trend was seen with both AC and DC voltages, and DC breakdown strengths were approximately 1.4 times higher than AC, as expected.
• Measuring the breakdown strength of 1, 2, and 4 mol% hydrogen gas mixed with GHe shows a linear relationship between hydrogen mol% and breakdown strength. The saturation limit does not appear to have been reached, so there is potential for higher breakdown strengths at higher hydrogen mol%. However, potential safety concerns regarding flammability need to be considered for higher-mol% hydrogen mixtures.
• Ternary mixtures containing 8 mol% nitrogen gas and 4 mol% hydrogen gas mixed with GHe yielded approximately a 400% increase in dielectric strength compared to GHe. With the introduction of nitrogen gas to the mixture, the maximum operating pressure was limited to approximately 0.85 MPa before condensation occurred.
• The partial discharge inception voltage (PDIV) of a cable measured in the 4 mol% hydrogen mixture was 25% higher than in pure GHe. This improvement in PDIV is not as great as the 80% improvement seen in the breakdown measurements.
• The polyethylene terephthalate heat shrink selected to individually insulate HTS tapes did not allow a high operational voltage when used as the insulation method for an HTS cable, as breakdown occurred between 1 and 2 kV.
• The development of the SGIL allows the full benefits of increasing the dielectric strength of GHe to be exploited.
• The SGIL will allow higher operating voltages and better thermal characteristics than currently available GHe superconducting power cables.
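Given the reported linear relationship and the 80% increase at 4 mol% H2, the implied slope is about 20 percentage points per mol%. This assumes the trend line passes through 0% increase at 0 mol%, an inference from the abstract's figures rather than a value stated in the dissertation:

```python
# Reported: dielectric strength rises linearly with H2 mol%, reaching an
# 80% increase over pure GHe at 4 mol%. Assuming the line passes through
# 0% increase at 0 mol% (an assumption, not stated in the dissertation):
slope = 80.0 / 4.0                       # percentage-point increase per mol% H2
predicted = {m: slope * m for m in (1, 2, 4)}
print(slope, predicted)                  # 20.0 {1: 20.0, 2: 40.0, 4: 80.0}
```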
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Cheetham_fsu_0071E_13956
 Format
 Thesis
 Title
 Study of 2D Square Rod-in-Air Photonic Crystal Optical Switch and Design of Fast Planar Laser Shutter.
 Creator

Wang, Huazhong, Zheng, Jim P., Cao, Jim, Andrei, Petru, Li, Hui, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

This dissertation consists of two parts. Part I, Chapters 1 to 7, covers a 2D square rod-in-air photonic crystal optical switch; Part II, Chapters 8 to 12, covers the design of a fast laser shutter. A photonic crystal is a material with a periodic structure whose lattice constant is on the same scale as the wavelength of electromagnetic waves. One of the photonic crystal's distinctive properties is that only allowed electromagnetic wave states can propagate inside it. This property offers a new way of controlling the propagation of light inside materials. In this work, a 2D photonic crystal optical switch is proposed. It is a rods-in-air structure formed by removing two crossed lines of rods from a 2D square-rod photonic crystal. The switching function is achieved by inserting a single rod along the line segment from (-0.7, -0.7) to (0.7, 0.7) in coordinates; this line segment is the diagonal of the intersection area of the two removed crossed lines of rods. The position of the inserted rod determines how much of the total source energy propagates into the upper channel. For a transverse magnetic Gaussian point source, up to 41.38% of the total source energy goes into the upper channel, as shown by time-domain simulation. It is also found that the magnitude of the reflected wave in the left channel varies greatly with the spatial position of the inserted rod: the larger the magnitude of the reflected wave in the left channel, the less energy goes into the upper channel. The time delay between the incident and reflected waves in the left channel is also related to the position of the inserted rod. In addition, the extremely large time delay between the incident and reflected waves in the left channel shows that the reflected wave undergoes many reflections with the walls of the left channel, rather than being reflected back from the inserted rod directly.
Simulations also demonstrate that the control effect of this 2D photonic crystal optical switch exists for Gaussian and continuous waves and for point and line sources. The advantage of the photonic crystal optical switch presented here is operational simplicity, because changing the position of only one rod completes the switching function. This operational simplicity is critical in micro-opto-electro-mechanical systems (MOEMS) devices. Consequently, this 2D photonic crystal optical switch is an attractive design for integrated optical circuits. The goal of Part II is to design a laser shutter to protect eyes from fast laser pulses. The width of the targeted laser pulse is 30 ns. It is proposed to use a Pockels cell intensity modulator in longitudinal configuration to block the laser pulse. The Pockels cell material is deuterated potassium dihydrogen phosphate, KD2PO4 (DKDP), because its electro-optic coefficient, r63, is the highest among popular nonlinear electro-optic materials. The laser shutter is triggered by a semiconductor photon sensor: when the sensor detects the laser pulse, the shutter begins to block it. The performance of the laser shutter is also investigated under varying conditions: laser pulse intensity, semiconductor carrier lifetime, and Pockels cell size.
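For a longitudinal Pockels cell of the kind described, the half-wave voltage follows the standard relation V_pi = lambda / (2 * n_o^3 * r63), independent of crystal length. With nominal literature values for DKDP (assumed here for illustration, not quoted from the dissertation), it comes out in the few-kilovolt range, which is why a large r63 matters:

```python
# Half-wave voltage of a longitudinal Pockels cell: V_pi = lam / (2 * n_o**3 * r63).
# Material values are nominal literature numbers for DKDP, assumed for illustration.
lam = 1064e-9        # wavelength (m), assumed
n_o = 1.49           # ordinary refractive index of DKDP (nominal)
r63 = 26.4e-12       # electro-optic coefficient (m/V, nominal)

V_pi = lam / (2 * n_o**3 * r63)
print(round(V_pi))   # a few kV of drive voltage to switch the polarization
```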
 Date Issued
 2009
 Identifier
 FSU_migr_etd1259
 Format
 Thesis
 Title
 Formation of RuO₂·xH₂O Supported Pt Anode Electrode for Direct Methanol Fuel Cells.
 Creator

Tiwari, Vivek, Zheng, Jim P., Li, Hui, Andrei, Petru, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

This thesis starts with an introduction and literature review of energy storage and conversion devices, which lay the background and motivation for this research. The fundamental background for this work was laid by Zheng et al. in 1995 [1], wherein they proposed a new charge storage mechanism for the hydrous form of ruthenium oxide (i.e., RuO2·xH2O). They proposed this material as a prospective material for supercapacitors and direct methanol fuel cells (DMFCs). Later, Wang et al. proposed a monolithic hybrid direct methanol fuel cell employing a layer of RuO2·xH2O between the anode catalyst and the membrane [3]. In the same paper, they discussed the possibility of a RuO2·xH2O-supported Pt anode catalyst material. The first section of this work, covered in Chapter 3, comprises fundamental research: proposing and developing an electrode catalyst material for the DMFC. It explains the heuristic approach leading toward the methodical approach, which finally leads to the development of a catalyst material that, in addition to its remarkably high specific capacitance, compares well with commercially available materials. The same section also covers the extensive study, testing, and electrochemical characterization of this catalyst material, including cyclic voltammetry, TEM, XRD, and EDAX tests and results. The later segment of this work covers the application of this catalyst material in a DMFC. The results show a significant improvement in the dynamic response of the DMFC prepared with the proposed catalyst material, compared to one prepared with a commercial catalyst. The steady-state response, on the other hand, is slightly degraded. The probable reasons for the reduced steady-state response of the monolithic hybrid DMFC prepared with the new catalyst material are discussed.
High charge-transfer resistance, poor mass transfer, poor dispersion, and poor porosity could be among the reasons for the reduced steady-state performance of the DMFC. Finally, we conclude that since the improved dynamic response of the DMFC is evident with this catalyst material, combined with the facts that it exhibits excellent electrochemical surface area, good methanol oxidation activity, high specific capacitance, and small particle size, one could very well extend this research to address the aforementioned shortcomings. The need to extend this research can be gauged from the fact that, once commercially realized, DMFCs could easily replace rechargeable batteries in automobiles and other applications. Unlike batteries, which are energy storage devices, DMFCs are energy conversion devices that run directly on fuel (methanol) and do not need to be recharged. A few of the issues hindering the commercialization of DMFCs are low power density (poor energy density and poor dynamic response) and size. The catalyst material developed in this research is easy to synthesize in the laboratory and promises to improve the dynamic response of the DMFC, eliminating the need for an external battery or supercapacitor for instantaneous power demands, hence reducing its size and weight.
 Date Issued
 2009
 Identifier
 FSU_migr_etd1355
 Format
 Thesis
 Title
 Bio-Inspired Stereoscopic Ranging Imager for Robot Obstacle Avoidance.
 Creator

Stegeman, Thomas, DeBrunner, Victor, Roberts, Rodney, Foo, Simon, Brooks, Geoffery, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

This thesis presents and evaluates a bio-inspired vision system design to increase the depth field of a stereoscopic ranging imager. Two key attributes of the human vision system are leveraged in this design. The first attribute is image stabilization similar to the inner ear's semicircular canals and neck muscles. To accomplish this, an accelerometer and servos were used to stabilize the imager platform. The second human vision attribute used by the design is the ability to change the focal vector. This is accomplished by a servo that tilts the imagers in unison and separate servos that enable each imager to pan independently. The performance metrics of depth field size and resolution of this method are compared to a system with statically mounted imagers and a fixed video platform.
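For the baseline parallel-camera case, stereoscopic range follows the standard triangulation relation Z = f * B / d (focal length times baseline over disparity); panning the imagers to change the focal vector, as in this design, keeps disparities measurable over a larger depth field. The camera parameters below are hypothetical:

```python
# Standard stereo triangulation for parallel cameras: Z = f * B / d.
# Hypothetical camera parameters, for illustration only.
f_px = 800.0        # focal length expressed in pixels
baseline_m = 0.12   # separation between the two imagers (m)

def depth_m(disparity_px: float) -> float:
    return f_px * baseline_m / disparity_px

near = depth_m(48.0)   # large disparity -> close object
far = depth_m(4.0)     # small disparity -> distant object
print(near, far)       # 2.0 m and 24.0 m for these hypothetical values
```

The relation also shows the resolution limit: beyond some range, disparity falls below one pixel and depth becomes unmeasurable, which is the motivation for extending the depth field.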
 Date Issued
 2011
 Identifier
 FSU_migr_etd1562
 Format
 Thesis
 Title
 A 'Proton-Free' Coil for Magnetic Resonance Imaging of Porous Media.
 Creator

Seshadhri, Madhumitha, Foo, Simon Y., Brey, William W., Andrei, Petru, Arora, Rajendra K., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Nuclear Magnetic Resonance Imaging, or MRI, is a noninvasive imaging technique which exploits the inherent magnetic field produced by nuclei of atoms with a nonzero spin. Since hydrogen is the most abundant atom, it forms the basis of NMR imaging techniques. The transparency of many materials to RF irradiation, coupled with access to a large variety of contrast parameters and the nondestructiveness of the method, may make it highly useful in materials imaging. In a number of situations there is a critical need to evaluate the distribution of small amounts of water adsorbed throughout a solid sample. One of these pertains to Spray-On Foam Insulation (SOFI), a thermal insulation material used on liquid hydrogen and oxygen tanks on space shuttles. The basic components of an NMR spectrometer are the magnet, amplifiers, transceiver and imaging coils. In MRI, imaging coils are radio-frequency coils that serve two purposes: the excitation of nuclear spins and the detection of nuclear precession. This thesis aims to design an RF coil for 1H imaging of foam and the water trapped within it. The single-turn solenoid is probably the simplest and most efficient RF coil design. This type was selected because it has high sensitivity and uniform homogeneity throughout the volume of the coil. The coil has been optimized in terms of dimension, feasibility, strategies for tuning and matching, and performance.
 Date Issued
 2011
 Identifier
 FSU_migr_etd1806
 Format
 Thesis
 Title
 Processing, Microstructure, and Critical Current Density of Ag-Sheathed Bi₂Sr₂CaCu₂Oₓ Multifilamentary Round Wire.
 Creator

Shen, Tengming, Hellstrom, Eric, Schwartz, Justin, Larbalestier, David, Andrei, Petru, DeBrunner, Victor, Zheng, Jim P., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Ag-sheathed multifilamentary Bi2Sr2CaCu2Ox round wire is one of the leading high-temperature superconductors that can generate a magnetic field exceeding the maximum of ~23 T available in present Nb-based low-temperature superconducting magnet technology. However, magnet fabrication with powder-in-tube (PIT) Ag-Bi2Sr2CaCu2Ox multifilamentary round wire that develops a critical current density Jc > 10⁵ A/cm² in magnetic fields up to 45 T is difficult, due to complicated material processing, an as-yet incompletely understood microstructure, and the problem that Jc is sensitive to high-temperature reactions. This thesis analyzed the critical steps of melt processing PIT Bi2Sr2CaCu2Ox multifilamentary wires, systematically investigating the relationships between processing, microstructure, and conductor and magnet performance. The phase transformation and microstructure development during the melt processing of Bi2Sr2CaCu2Ox wires were thoroughly examined using a brine-quench technique that preserves the high-temperature microstructures. On heating to the maximum temperature (~890 °C), Bi2Sr2CaCu2Ox powder melts incongruently, producing a mixture of liquid and secondary solid phases. On subsequent cooling, the liquid reacts with the solid phases and Bi2Sr2CaCu2Ox reforms. The phase reaction to Bi2Sr2CaCu2Ox is often incomplete, leaving remnant nonsuperconducting phases from the melt as well as intergrowths in the superconducting matrix, all of which become current-limiting mechanisms (CLMs) and block current flow. Moreover, the gas between precursor powder grains accumulates into large pores upon melting, which divide the filament into segments. The consequence of having large pores in the melt is that the pore regions may become bottlenecks for current flow in fully reacted wires. 
The high population of CLMs strongly indicates that the fraction of oxide filament area that is effectively used for carrying current is low, and that increasing the connectivity is the key to improving the Jc of Bi2Sr2CaCu2Ox wires. The formation mechanisms of the filament bridges that populate melt-processed PIT multifilamentary wires were studied. Two types of filament bridges were found. Type-A bridges are single-grain Bi2Sr2CaCu2Ox that couple multiple filaments. Type-A bridges were suggested to enable an inter-filament current flow that may be important for increasing the superconducting cross-section effectively carrying current. Type-A bridges form because filaments can bond to adjacent filaments in the melt by Ag preferentially dissolving into the liquid at Ag grain boundaries. This discovery of filament bonding and Ag-liquid transport has general application to the design and optimization of multifilamentary Bi2Sr2CaCu2Ox wires. Jc development through the melt processing was examined. The Jc of wires doubled during the final cooling stage to room temperature. The fundamental cause of this Jc increase was identified as oxygen overdoping, which reduces the superconducting transition temperature but increases the flux pinning and, most importantly, improves the grain boundary current transport and connectivity. A critical limitation of Bi2Sr2CaCu2Ox for magnet fabrication is that melt processing yields an optimum Jc only within a narrow processing window (both the maximum temperature Tmax and the soaking time tmax need to be precisely controlled), which makes uniform heat treatment of large coils with large thermal mass difficult. The systematics of this temperature and time dependence were probed by examining the microstructure evolution and Jc of wires prepared at various tmax and Tmax using different melt processing schedules. 
The final Jc of wires was found to correlate weakly with Tmax or tmax, but strongly with tmelt, a hidden processing parameter that measures how long the conductor spends in the melt state. This strong correlation between Jc and tmelt suggests that the Jc of wires is dominantly controlled by tmelt, not by Tmax or tmax, and careful control of tmelt creates a wider processing window for coils. Raising tmelt above an optimum time caused a decrease in connectivity and Jc. This Jc degradation was found to be associated with a lower Bi2Sr2CaCu2Ox nucleation temperature and inhomogeneous Bi2Sr2CaCu2Ox nucleation. The fundamental cause of Jc decreasing with extended tmelt appears to be Cu loss from the Bi2Sr2CaCu2Ox melt. There are three known ways of Cu loss in the literature; the new finding in this study is that Cu can be lost from the wire by a fourth mechanism: Cu diffuses through the Ag from the filament to the surface of the wire, where it evaporates as a copper oxide. Cu loss is a pervasive process during melt processing that can explain why nearly all wind-and-react Bi2Sr2CaCu2Ox coils have Jc values much lower than those of bare, short samples of Bi2Sr2CaCu2Ox wire melt processed at the same time. The study indicates that eliminating Cu loss would potentially raise the Jc of Bi2Sr2CaCu2Ox coils.
 Date Issued
 2010
 Identifier
 FSU_migr_etd1778
 Format
 Thesis
 Title
 Adaptive Subband Array Techniques for Structural Health Monitoring.
 Creator

Medda, Alessio, DeBrunner, Victor E., Chicken, Eric, DeBrunner, Linda, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Traditional modal-based Structural Health Monitoring techniques are limited by several factors – including a poorly-formed aggregate system model, very low SNR, and unrealistic boundary conditions. Moreover, global techniques often rely on modal damage indicators that are not sensitive to localized damage. In this dissertation, the author proposes a new damage detection technique that addresses the space-frequency localization of damage artifacts in both a reference and a no-reference framework. For the first situation of referenced damage detection, the author employs compactly supported subband space/frequency and time/frequency analysis using local vibration characteristics, overcoming the signal-to-noise ratio problem with a near-field adaptive beamformer filter bank. The beamformer filter bank operates on the subband space and provides accurate spatial selectivity and a high signal-to-noise ratio for any given scan direction. Subband analysis is performed using wavelet packets and Daubechies mother wavelets. The system is simulated using a one-dimensional Finite Element model of a simply supported beam with simple constraints as a good approximation of a real situation. The local damage is simulated as a reduction of the Young's modulus over a selected group of elements. The damage detection uses as a damage feature the subband energy for any given scan direction and for each subband center frequency. The energy signature for every location/frequency is compared to the energy signature obtained for the equivalent undamaged structure. The obtained results are validated against the analysis obtained before the beamforming stage, and the algorithm localizes the damage in areas of high probability around the direction of the simulated discontinuity. Moreover, the proposed technique shows very high accuracy and is able to detect variations in the structure parameters as low as 1%, with a signal near the noise level. 
For the second situation, damage detection performed without an undamaged reference for the analysis, the author proposes a new statistical method based on density estimation of the vibration signal. This technique is based on Gaussian mixture estimation of the probability density function of the vibration signal, using a greedy EM approach with a new model order selection criterion. This model order is based on a global measurement on the cumulative density function as well as on local measurements on density indicators, such as the Kullback-Leibler divergence and the estimated correlation coefficient. The technique is used to estimate the density of the time-domain signal and the frequency-domain signal. As damage indicators, the technique uses the first two principal components from measurements of standard deviation, kurtosis, skewness and entropy on the estimated density. The obtained damage indicators perform better in the frequency domain, and damage as low as 30% can be detected in a noisy environment.
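The Gaussian-mixture density-estimation step described in the abstract can be sketched with plain EM in one dimension. This is a minimal illustration only: the dissertation's greedy component insertion and its new model-order selection criterion are omitted, and the initialization (means at spread-out quantiles) is an assumption for the sketch.

```python
import math

def gmm_em_1d(data, k=2, iters=100):
    """Fit a k-component 1-D Gaussian mixture to `data` by plain EM.

    A minimal sketch of the density-estimation step; means are
    initialized at spread-out quantiles to avoid degenerate starts.
    """
    xs = sorted(data)
    n = len(xs)
    mu = [xs[int(j * (n - 1) / max(1, k - 1))] for j in range(k)]
    mean = sum(xs) / n
    var = [max(1e-6, sum((x - mean) ** 2 for x in xs) / n)] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in data:
            p = [w[j] * _gauss_pdf(x, mu[j], var[j]) for j in range(k)]
            s = sum(p) or 1e-300
            resp.append([pj / s for pj in p])
        # M-step: update weights, means and variances from responsibilities.
        for j in range(k):
            nj = max(1e-300, sum(r[j] for r in resp))
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(1e-6, sum(r[j] * (x - mu[j]) ** 2
                                   for r, x in zip(resp, data)) / nj)
    return w, mu, var

def _gauss_pdf(x, m, v):
    """Normal density with mean m and variance v."""
    return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
```

Statistics such as skewness, kurtosis or entropy of the fitted density could then be computed from the returned `(w, mu, var)` triplet and fed to the damage indicators.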
 Date Issued
 2009
 Identifier
 FSU_migr_etd2507
 Format
 Thesis
 Title
 Microscopic Observations of Quenching and the Underlying Causes of Degradation in YBa₂Cu₃O₇₋δ Coated Conductor.
 Creator

Song, Honghai, DeBrunner, Victor, Schwartz, Justin, Larbalestier, David C., Andrei, Petru, Baldwin, Thomas, Zheng, Jim P., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Significant advances have been made in the processing and scale-up of YBa2Cu3O7−δ (YBCO) coated conductor (CC), and sufficient lengths, up to 1 km, for YBCO coils are now available. This progress is very promising for a wide range of applications, including high-field magnets and electric power systems. One of the remaining issues in the transition from long-length conductor to functional coils which needs to be addressed, however, is quench protection, which requires a detailed understanding of the dynamic thermal and nonlinear electromagnetic behaviors during a quench. These behaviors remain poorly understood. Foremost, to obtain a complete description of the macroscopic behavior of YBCO using traditional voltage and temperature characterization, from high temperature to 4.2 K, and from short, straight samples to long-length coils, we carried out measurements on short straight YBCO CC at 4.2 K and on conduction-cooled YBCO CC pancake coils. We found that, for the same fraction of critical current (I/Ic) at 4.2 K, YBCO CCs have a minimum quench energy (MQE) and normal zone propagation velocity (NZPV) similar to those of Ag-alloy clad Bi2Sr2CaCu2Ox wires, and a significantly higher MQE and lower NZPV than MgB2 round wires of similar Ic(4.2 K). In the conduction-cooled YBCO coils, the longitudinal NZPV (10 – 40 mm/s) is about one order of magnitude larger than the transverse NZPV (1 – 2 mm/s). Moreover, when the coil results are compared to those of a short straight sample at 50 K, the longitudinal propagation in the short sample is significantly faster than in the coil. This is due to transverse heat conduction (transverse propagation), which reduces the temperature gradients in the coil but also slows down the longitudinal propagation. 
Moreover, we simultaneously observed normal zone propagation during a heater-induced quench using a high-speed, high-resolution CCD camera with magneto-optical imaging (MOI) while monitoring the voltage and temperature distribution as a function of time. We present, for the first time, the real-time, dynamic observation of magnetic field redistribution during a thermal disturbance via magneto-optical imaging. The optical images are converted to a two-dimensional, time-dependent data set that is then analyzed quantitatively. We found that the normal zone propagates nonuniformly in two dimensions within the YBCO layer and that its normal front has a diagonal shape while propagating along the length. Two stages of normal zone propagation are observed. The normal zone propagation velocity at 45 K, I = 50 A (~50% Ic), is determined to be 22.7 mm/s using the time-dependent optical light intensity data. If the time for current redistribution can be reduced, the quench propagation velocity is likely to be increased. Lastly, to understand the failure mechanisms during quenching, YBCO coated conductors were quenched such that the samples were degraded to different levels, and the microstructure was locally evaluated in the degraded zones. To evaluate the microstructures, the Cu and Ag layers were etched from both quenched and unquenched control samples for comparison. In the control samples, the YBCO layer is found to have some porosity and a few distributed particles in and above the YBCO surface. Two types of quenched samples were prepared. One quenched sample showed nearly no reduction in end-to-end Ic after the quench series and the other was somewhat damaged. In the sample without any reduction in Ic, the Ag cap was nevertheless found to be partially broken, which is likely due to inhomogeneous heat deposition from the quench heater. 
In the sample with degraded superconducting properties, although the degradation zone was eroded by the etchant, the reactants became a signature of the Ag delamination where there is degradation. Quench-propagation-induced damage is believed to originate from pre-existing edge damage in the YBCO CC. The damage propagation starting from the damaged edge was influenced by a thermal gradient along the normal zone. Meanwhile, more damage is likely to happen to the other edge of the conductor due to current redistribution starting from both edges. For the first time, we report that quench-induced degradation has a dendritic structure, which may come from the thermomagnetic instability in the YBCO layer during quenching. The presence of the dendritic structure implies that the YBCO has delaminated from the Ag/Cu layers. The damage caused by point defects in the YBCO structure is circular in shape and is not dependent on normal zone propagation. Another interesting feature among the results is that melted Ag particles were found in the degradation zone, which implies that the local temperature at defects is very high during quenching.
 Date Issued
 2010
 Identifier
 FSU_migr_etd1628
 Format
 Thesis
 Title
 Real-Time Switched Reluctance Machine Emulation via Magnetic Equivalent Circuits.
 Creator

Fleming, Fletcher, Edrington, Chris S., Ordonez, Juan, Foo, Simon, Meyer-Baese, Uwe, Weatherspoon, Mark H., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Electrical power systems utilizing electromagnetic devices, namely those of electric ships, are subject to nonlinearities from regenerative loads, distributed energy storage systems, and onboard loads such as air handling and fluid pumps. Thus, accurate and timely electromagnetic (EM) device models are required in order to fully assess the impact of such transient and/or nonlinear activity. Specifically, by exploiting an often overlooked technique, i.e., the magnetic equivalent circuit (MEC) modeling method, a solution of adequate granularity for the EM device may be attained while still meeting a faster time commitment when compared to the simulation standard for EM devices, the finite element analysis (FEA) technique. The Hardware-in-the-Loop (HIL) concept synergizes with expedient modeling methods, potentially allowing a wider range of dynamics to be observed in large-scale simulations or even tested hardware systems. By scaling down the next-generation all-electric ship's integrated power system (NGIPS) to a power level suitable for an academic laboratory environment, the nonlinear effects of EM devices may be investigated via the HIL concept and the MEC modeling method, given that the runtime is acceptable. This work proposes to develop a novel "real-time" MEC (RT MEC) machine model to ensure the aforementioned runtime. A switched reluctance machine (SRM) is used as a case-study device due to both its inherent nonlinearity and its providing an ideal foundation for incorporating various characteristics of the MEC modeling technique. The proposed RT MEC concept will be implemented on a field programmable gate array (FPGA). The advantages of FPGA realization include its inherently parallel nature and a substantially cheaper real-time (RT) platform when compared to computationally efficient FEA methods that require dedicated, elaborate resources and application-specific hardware. 
Furthermore, FPGA realization provides a fully customizable solution in terms of numerical methods, time step, HIL interfacing and system expansion. The primary contribution of this work is the RT MEC methodology; more specifically, a high-fidelity, real-time platform exploited for dynamic modeling of the SRM, an undoubtedly nonlinear device. RT MEC contributes higher accuracy and lighter computational loads when compared to commercially available modeling techniques adhering to similar time constraints; ultimately, this yields faster simulation times and more accurate HIL simulations or Power Hardware-in-the-Loop (PHIL) emulations. Further exercising the RT MEC concept, a variety of novel applications can arise that are uniquely capable of accentuating the nonlinear intricacies and effects assimilated into machine-connected systems. Expanding on this, RT MEC can provide a state-of-the-art tool useful for assessing overall system impact when subjected to electromechanical transients, control strategies and power electronics, providing pertinence and merit to the principal contribution. Potential applications include investigating the nonlinear effects of loading the NGIPS via a PHIL implementation, emulated via SRM winding pulses, or utilizing SRM RT MEC models with large-scale wind system simulations to study the impact an SRM motor type has on wind farm design.
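The core idea of MEC modeling can be sketched in a few lines: the magnetic path is treated like a resistor network, with magnetomotive force (MMF = N·i) playing the role of voltage, flux the role of current, and reluctance the role of resistance. All dimensions and material values below are illustrative assumptions, not parameters of the thesis's SRM model.

```python
MU0 = 4e-7 * 3.141592653589793  # permeability of free space [H/m]

def reluctance(length_m, area_m2, mu_r=1.0):
    """Reluctance of one path segment: R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

def series_flux(n_turns, current_a, segments):
    """Flux through a single series magnetic loop (very roughly, one
    excited pole pair): phi = MMF / (sum of segment reluctances)."""
    r_total = sum(reluctance(*seg) for seg in segments)
    return n_turns * current_a / r_total

# Hypothetical path: stator iron, two air gaps, rotor iron.
segments = [
    (0.20, 4e-4, 2000.0),  # stator core: 20 cm, 4 cm^2, mu_r = 2000
    (0.0005, 4e-4, 1.0),   # air gap 1: 0.5 mm
    (0.0005, 4e-4, 1.0),   # air gap 2: 0.5 mm
    (0.10, 4e-4, 2000.0),  # rotor core: 10 cm
]
phi = series_flux(n_turns=100, current_a=5.0, segments=segments)
```

Note that the air-gap reluctances dominate the total, which is why an MEC can capture the rotor-position dependence of an SRM (the gap geometry changes with rotation) with a handful of network elements instead of the thousands of mesh cells an FEA model requires.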
 Date Issued
 2014
 Identifier
 FSU_migr_etd8987
 Format
 Thesis
 Title
 A Particle Swarm Optimization Based Maximum Torque Per Ampere Control for a Switched Reluctance Motor.
 Creator

Griffin, Lee, Edrington, Chris S., Andrei, Petru, Moss, Pedro, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

The Switched Reluctance Machine (SRM) is known for being one of the oldest electric machine designs. Unfortunately, it is usually assumed that this implies that the machine is outdated. However, with the advent of microprocessors, the SRM has become a suitable option for a number of applications because the shortcomings of the machine can be mitigated with control. Compared to other machines, the SRM is more rugged, has a simpler structure, and is less expensive to manufacture. The machine has two control regions: when the speed of the machine is beneath a value called the base speed and when the speed is above the base speed. The base speed is the speed at which the back electromotive force (EMF) of the motor becomes substantial when compared to the source voltage. In both regions, the turn-on and turn-off angles of the machine can be used to control the machine. This thesis proposes a method of generating optimal turn-on and turn-off angles. The method presented in this thesis is concerned with finding the turn-on and turn-off angles needed to generate maximum torque per ampere (MTA). The strategy applies a particle swarm optimization (PSO) technique that searches for the angles that maximize the inductance of the SRM in order to achieve MTA. The inductance function was obtained via Finite Element Analysis (FEA) and experimentally. The method was applied to a 4-phase 8/6 SRM. The proposed strategy was found to be effective at both low speeds (beneath the base speed) and high speeds (above the base speed), but MTA could only be asserted for low speeds.
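The PSO search over turn-on/turn-off angles can be sketched as follows. The objective below is a toy surrogate for the inductance seen over a conduction window, not the thesis's FEA-derived inductance map; the aligned-position angle and all constants are assumptions for illustration.

```python
import math
import random

def pso_maximize(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimization: each particle tracks its
    personal best, the swarm tracks a global best, and velocities blend
    inertia with cognitive and social pulls toward those bests."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def avg_inductance(angles):
    """Toy surrogate: peaks when the conduction window [theta_on,
    theta_off] (degrees) is wide and centered on an assumed aligned
    position at 30 degrees."""
    theta_on, theta_off = angles
    if theta_off <= theta_on:
        return 0.0
    mid = 0.5 * (theta_on + theta_off)
    return math.exp(-((mid - 30.0) / 10.0) ** 2) * (theta_off - theta_on) / 30.0

best_angles, best_val = pso_maximize(avg_inductance, [(0.0, 30.0), (0.0, 60.0)])
```

In the thesis's setting, `avg_inductance` would be replaced by the FEA- or experimentally-derived inductance profile of the 8/6 SRM.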
 Date Issued
 2014
 Identifier
 FSU_migr_etd8994
 Format
 Thesis
 Title
 A Sampling-Based Model Predictive Control Approach to Motion Planning for Autonomous Underwater Vehicles.
 Creator

Caldwell, Charmane Venda, Collins, Emmanuel G., Roberts, Rodney G., Cartes, David, DeBrunner, Linda S., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

In recent years there has been a demand from the commercial, research and military industries to complete tedious and hazardous underwater tasks. This has led to the use of unmanned vehicles, in particular autonomous underwater vehicles (AUVs). To operate in this environment the vehicle must display kinematically and dynamically feasible trajectories. Kinematic feasibility is important to allow for the limited turn radius of an AUV, while dynamic feasibility can take into consideration limited acceleration and braking capabilities due to actuator limitations and vehicle inertia. Model Predictive Control (MPC) is a method that has the ability to systematically handle multi-input multi-output (MIMO) control problems subject to constraints. It finds the control input by optimizing a cost function that incorporates a model of the system to predict future outputs subject to the constraints. This makes MPC a candidate method for AUV trajectory generation. However, traditional MPC has difficulty computing control inputs in real time for processes with fast dynamics. This research applies a novel MPC approach, called Sampling-Based Model Predictive Control (SBMPC), to generate kinematically or dynamically feasible system trajectories for AUVs. The algorithm combines the benefits of sampling-based motion planning with MPC while avoiding some of the major pitfalls facing both traditional sampling-based planning algorithms and traditional MPC, namely large computation times and local minimum problems. SBMPC is based on sampling (i.e., discretizing) the input space at each sample period and implementing a goal-directed optimization method (e.g., A*) in place of standard nonlinear programming. SBMPC can avoid local minima, has only two parameters to tune, and has small computational times that allow it to be used online for fast systems. 
A kinematic model, a decoupled dynamic model and a full dynamic model are incorporated into SBMPC to generate kinematically and dynamically feasible 3D paths. Simulation results demonstrate the efficacy of SBMPC in guiding an autonomous underwater vehicle from a start position to a goal position in regions populated with various types of obstacles.
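The input-sampling idea behind SBMPC can be sketched in 2-D: at each sample period, candidate control inputs are drawn, propagated through a model, and the best collision-free successor is kept. This sketch uses a greedy distance-to-goal selection as a stand-in for the thesis's A*-style goal-directed search, and a simple constant-speed kinematic model; all of that is an illustrative assumption.

```python
import math
import random

def sbmpc_step(state, goal, obstacles, n_samples=16, dt=1.0, speed=1.0, rng=None):
    """One receding-horizon step: sample heading inputs, propagate a 2-D
    constant-speed kinematic model one step, reject samples that land
    inside a circular obstacle (cx, cy, r), and keep the candidate
    closest to the goal."""
    rng = rng or random.Random(0)
    x, y = state
    best = None
    for _ in range(n_samples):
        heading = rng.uniform(-math.pi, math.pi)  # sampled control input
        nx = x + speed * dt * math.cos(heading)
        ny = y + speed * dt * math.sin(heading)
        if any(math.hypot(nx - cx, ny - cy) < r for cx, cy, r in obstacles):
            continue  # infeasible: candidate collides
        cost = math.hypot(goal[0] - nx, goal[1] - ny)
        if best is None or cost < best[0]:
            best = (cost, (nx, ny))
    return best[1] if best else state  # stay put if every sample collided

def plan(start, goal, obstacles, max_steps=100, tol=1.0):
    """Repeat the sampling step until the goal is within `tol`."""
    path, state = [start], start
    rng = random.Random(42)
    for _ in range(max_steps):
        state = sbmpc_step(state, goal, obstacles, rng=rng)
        path.append(state)
        if math.hypot(goal[0] - state[0], goal[1] - state[1]) < tol:
            break
    return path
```

The full SBMPC algorithm would additionally expand several candidates per node under an A*-style heuristic and replan over a horizon, which is what lets it escape the local minima a purely greedy selection can fall into.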
 Date Issued
 2011
 Identifier
 FSU_migr_etd4518
 Format
 Thesis
 Title
 Experimental and Mathematical Modeling Studies on Current Distribution in High Temperature Superconducting DC Cables.
 Creator

Pothavajhala, Venkata, Edrington, Chris, Graber, Lukas, Andrei, Petru, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

High-temperature superconducting power cables have the advantages of high current density and low losses over conventional cables. One of the factors that affect the stability and reliability of a superconducting cable is the distribution of current among the tapes of the cable. Current distribution was investigated as a function of variations in contact resistance, individual tape critical current (Ic), and index value (n-value) of individual tapes. It has been shown that, besides contact resistances, variations in the other superconducting parameters affect current distribution. Variations in critical current and n-value become important at low contact resistances. The effects of collective variations in contact resistances, individual tape Ic, and n-values were studied through simulations using the Monte Carlo method. Using an experimentally validated mathematical model, 1000 cables were simulated with normally distributed random values of contact resistances, individual tape Ics, and n-values. The current distribution in the 1000 simulated cables demonstrated the need to select tapes with a narrow distribution in the superconducting parameters to minimize the risk of catastrophic damage to superconducting cables during operation. It has been demonstrated that there is a potential danger of pushing some tapes close to their Ic before the current in the cable reaches its design critical current. The mathematical models were also used to study the effect of longitudinal variations in the tape parameters on the superconducting cable using Monte Carlo simulations. Each tape of a 30-meter-long, 3 kA model cable with 30 tapes was considered to have longitudinal variations in Ic and n-value for every 1 cm section, thus generating a particular standard deviation in Ic and n for all 3000 sections of each tape. 
The results indicate that the apparent critical current and index value of the cable are reduced by a percentage that depends on the extent of the variation in the characteristics along the length of the tapes.
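The Monte Carlo procedure described above can be sketched as follows. The current-sharing model here is deliberately simplified to ohmic division by contact resistance (the thesis's experimentally validated model also includes the nonlinear n-value E-J characteristic of each tape), and every numeric parameter below is an illustrative assumption.

```python
import random

def simulate_cable(n_tapes=30, i_total=3000.0, seed=None,
                   r_mean=100e-9, r_sd=20e-9,   # contact resistance [ohm]
                   ic_mean=110.0, ic_sd=5.0):   # tape critical current [A]
    """One Monte Carlo cable: draw normally distributed contact
    resistances and tape Ics, split the cable current among tapes
    inversely to contact resistance, and return the fraction of its Ic
    reached by the most heavily loaded tape."""
    rng = random.Random(seed)
    r = [max(1e-12, rng.gauss(r_mean, r_sd)) for _ in range(n_tapes)]
    ic = [max(1.0, rng.gauss(ic_mean, ic_sd)) for _ in range(n_tapes)]
    # Parallel conductances at a common joint: I_k proportional to 1/R_k.
    g = [1.0 / rk for rk in r]
    g_sum = sum(g)
    loads = [(i_total * gk / g_sum) / ick for gk, ick in zip(g, ic)]
    return max(loads)

def overload_fraction(n_cables=1000, threshold=1.0, seed=12345):
    """Fraction of simulated cables in which at least one tape is pushed
    past `threshold` of its Ic before the cable reaches its design current."""
    rng = random.Random(seed)
    hits = sum(simulate_cable(seed=rng.random()) > threshold
               for _ in range(n_cables))
    return hits / n_cables
```

Even with this crude sharing model, a 20% spread in contact resistance routinely drives at least one of the 30 tapes past its Ic at the 3 kA design current, which is the qualitative risk the abstract describes.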
 Date Issued
 2014
 Identifier
 FSU_migr_etd9071
 Format
 Thesis
 Title
 Spectrum Management in Wireless Networks.
 Creator

Ma, Xiaoguang, Yu, Ming, Duan, Zhenhai, Harvey, Bruce A., Kwan, Bing W., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

The limited spectrum provided by the IEEE 802.11 standard is not efficiently utilized in existing wireless networks. The inefficiency comes from three issues in spectrum management. First, the utilization of the available non-overlapping channels is not evenly distributed; that is, closely deployed users tend to congregate in the same or interfering channels. This issue incurs an excessive amount of co-channel interference (CCI), causing collisions, and thus decreases network throughput. Second, the dynamic radio channel allocation (RCA) problem is non-deterministic polynomial-time hard (NP-hard). The employed heuristic optimization methods cannot efficiently find a global optimum, including simple minimization or maximization processes, or certain slow learning processes. Third, the default transmission power of a user reserves unnecessarily large deference areas, in which the collision avoidance (CA) mechanisms prohibit simultaneous transmissions in a given channel. Consequently, the spatial channel reuse is significantly reduced. For the first issue, many RCA algorithms have been proposed. The objective is to minimize CCI among co-channel users while increasing network throughput. Most RCA algorithms use heuristic optimization methods, whose performance is limited by one or more of the following aspects. 1) Their evaluation variables may not properly reflect the CCI levels in a network, e.g., the number of co-channel users, the local energy levels, etc. 2) The dynamic RCA problem is NP-hard, and the employed heuristic optimization methods cannot efficiently find a global optimum, e.g., simple minimization or maximization processes, or certain slow learning processes. 3) The information gathering and processing approaches in these RCA algorithms require prohibitive overheads, such as a common control channel or a central controller. 
4) Some unrealistic premises are used, e.g., that all users in the same channel can hear each other. 5) Most RCA algorithms are designed for specific networks. For example, an algorithm designed for organized-or-information-sharing (OIS) networks does not work properly in non-organized-nor-information-sharing (NOIS) networks. For the second issue, it is worth pointing out that the complexity of the existing distributed RCA algorithms has not been studied. For the third issue, various power control algorithms, including courtesy algorithms and opportunistic algorithms, have been introduced to restrain transmission power and thus minimize deference areas, which in turn maximizes spatial channel reuse. The courtesy algorithms assign a node a specific power level according to the link length, which is the distance between the transmitter and the receiver, and the noise and interference power level. These algorithms can be further classified into linear and nonlinear power assignment algorithms. The linear power assignment algorithms are so aggressive that they may introduce extra hidden terminals, which cause additional unregulated collisions, whereas the nonlinear algorithms are too conservative to maximize the power control benefits. The opportunistic power control algorithms allow conditional violations of the CA mechanisms, i.e., a deferring node can initiate a transmission with a deliberately calculated transmission power so that the ongoing transmission will not be affected. However, the power calculation is based on constants that are only valid in certain wireless scenarios. Related to this issue, a more difficult problem is how to improve network throughput when the demanded data rate within a certain area exceeds the limit of throughput density, which is defined as the upper limit of the total throughput constrained by the modulation techniques and CA mechanisms in the area. 
Note that no existing algorithm, neither RCA nor power control, is able to solve this problem. In this work, we focus our study on the above issues in the spectrum management of wireless networks. Our contributions can be summarized as follows. Firstly, to solve the first issue, we propose an annealing Gibbs sampling (AGS) based distributed RCA (ADRCA) algorithm. The ADRCA algorithm has the following advantages: 1) It uses average effective channel utilization (AECU) to evaluate the channel condition. AECU has a simple relationship with CCI and can accurately reflect channel congestion conditions. 2) It employs the AGS optimization method, which divides a global optimization problem into a set of distributed local optimization problems. Each of those problems can be solved by simulating a Markov chain, and the stationary distribution of the Markov chains is a globally optimized solution. 3) It includes three different cases, namely AGS1, AGS2, and AGS3, which adapt to various types of wireless networks with different optimization objectives. AGS1 is designed to search for a globally optimal channel assignment in OIS networks; AGS2 is proposed to work in NOIS networks and pursue maximum individual performance. Adding a prerequisite to the RCA procedures, AGS3 focuses on cost-effectiveness, reduces channel reallocation attempts, and enhances system stability without significantly downgrading its optimization performance. To further study the cost-effectiveness of ADRCA, an upper limit of the computational scale (CS) is found for AGS3 based on an innovative neighboring relationship model in a practical network scenario. Secondly, to solve the second issue, we propose a hybrid approach to study the CS, which is defined as the number of channel reallocations until a network reaches a convergent state. First, we propose a simple relationship model to describe the interference relation between an AP and its neighboring APs. 
Second, for one of the simplest cases in the relationship model, we find an analytical solution for the CS and validate it by simulations. Third, for more general cases, we combine the cases with similar CS means by using one-way analysis of variance (ANOVA) and find the upper bound of the CS with extensive simulations. The simulation results demonstrate that the hybrid approach is simple and accurate compared to traditional intuitive comparison methods. Based on the aforementioned hybrid approach, an upper limit of the CS is found for AGS3 in a practical network scenario. Thirdly, to solve the third issue and also raise the limit of throughput density, we propose the channel allocation with power control (CAP) strategy, which integrates the ADRCA algorithm and the digitized adaptive power control (DAPC) algorithm to achieve a synergistic benefit between power control and RCA that is not considered by the existing RCA algorithms. The synergy comes from the following two aspects: • By reducing the transmission power of each node, DAPC can lower CCI levels, allow more simultaneous transmissions within a certain area, increase spatial reuse, and raise the limit of the throughput density. It also reduces the number of nodes competing for a given channel, and thus significantly decreases the CS of ADRCA. • By striving to assign interfering neighbors to non-overlapping channels, ADRCA minimizes the number of hidden terminals introduced by the power control processes. The integration causes two potential problems. First, since most RCA algorithms are heuristic, after a system converges, any change in transmission power may trigger unnecessary channel reallocation processes, which would lead to extra computational costs. Second, channel reallocations can also invalidate current transmission power assignments. These two problems significantly impair system stability. 
The CAP strategy overcomes these two problems as follows: to mitigate the impact of the first problem, a node estimates the conditions of a new channel and uses adaptive transmission power accordingly; for the second problem, the node calculates the transmission power using linear power control algorithms and rounds it up to the next larger level in a given set of predetermined power levels. Several statistical methods are applied in our study, including the Markov chain Monte Carlo (MCMC) method, distribution model fitting, the paired t-test, and the ANOVA test. They are more accurate and efficient than traditional intuitive comparison methods, making this study an important cornerstone for further research. In this work, we have conducted extensive simulations to demonstrate the effectiveness of the proposed methods. Our simulation results show that AGS1 can achieve a global optimum in most OIS network scenarios. With a 95% confidence level, it achieves 99.75% of the global maximum throughput. AGS2 performs on par with AGS1 in NOIS networks. AGS3 reduces the CS by as much as 98% compared to AGS1 and AGS2. The simulation results also demonstrate that compared with the standard MAC protocol, CAP increases the overall throughput by up to 9.5 times and shortens the end-to-end delay by up to 80% for UDP traffic.
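The annealed-Gibbs-sampling idea this abstract describes, in which each node locally re-samples its channel from a Boltzmann distribution while a temperature decays, can be illustrated with a minimal sketch. All names, the toy interference cost, and the annealing schedule below are assumptions for illustration; this is not the ADRCA algorithm itself.

```python
import math
import random

def gibbs_channel_step(ap, channels, assignment, neighbors, temperature):
    """One local update: the AP re-samples its channel from a Boltzmann
    distribution over a toy interference cost (number of conflicting neighbors)."""
    costs = [sum(1 for n in neighbors[ap] if assignment[n] == c) for c in channels]
    weights = [math.exp(-cost / temperature) for cost in costs]
    r = random.random() * sum(weights)
    acc = 0.0
    for c, w in zip(channels, weights):
        acc += w
        if r <= acc:
            return c
    return channels[-1]

def anneal_allocate(aps, channels, neighbors, rounds=200, t0=2.0, cooling=0.98):
    """Annealed Gibbs sampling: APs update in turn while the temperature
    decays, driving the network toward a low-interference assignment."""
    assignment = {ap: random.choice(channels) for ap in aps}
    t = t0
    for _ in range(rounds):
        for ap in aps:
            assignment[ap] = gibbs_channel_step(ap, channels, assignment, neighbors, t)
        t = max(t * cooling, 1e-3)
    return assignment
```

Once the temperature has cooled, each update almost surely picks a conflict-free channel when one exists, so small topologies settle into proper channel assignments.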
 Date Issued
 2010
 Identifier
 FSU_migr_etd2818
 Format
 Thesis
 Title
 Dynamic Resource Management in Wireless Networks.
 Creator

Malvankar, Aniket A. (Aniket Ashok), Yu, Ming, Duan, Zhenhai, Harvey, Bruce, Foo, Simon, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Wireless communication has been a rapidly growing industry over the past decade. The mobile and portable device market has boomed with the advent of new data, multimedia, and voice technologies. The technical advances in mobile and personalized computing have accelerated wireless communication into a crucial segment of the communication industry. The introduction of smart phones and handheld devices with internet browsing, email, and multimedia services has made it essential to add features such as security and reliability over the wireless network. Wireless sensor networks, a subset of wireless ad hoc networks, have been deployed in various military and defense applications. The popularity of 802.11 technologies has led to large-scale manufacturing of 802.11 chipsets and reduced their cost drastically, enabling the deployment of large-scale Wi-Fi networks resembling sensor environments. Because wireless communication uses the air interface, it is challenging to support such advanced QoS (Quality of Service) features in the presence of external interference. Typical sources of interference include other electronic devices such as microwaves, environmental effects such as rain, and physical structures such as buildings. It is also well known that battery technology has not kept pace with the electronics industry. Consequently, to make these wireless devices portable, it has become essential to cut down on the energy sources embedded within them. Hence, wireless equipment designers have to combat interference with minimal power expenditure. To best utilize the limited resources of these wireless devices and guarantee QoS, it is essential to design specialized algorithms spanning all layers of the network. These algorithms should not only take into account the network parameters but also dynamically adapt to changes in the network configuration, traffic, etc. 
The complete set of such techniques constitutes what can be described as dynamic resource management in wireless networks. The proposed research aims to design techniques such as dynamic channel allocation, energy-efficient clustering, and reliable power-aware routing. Clustering is one of the energy-efficient architectures in wireless ad hoc networks and is used more specifically in sensor-network-like environments. Clustering is achieved by grouping devices together based on location, traffic generation, etc. Clustering not only limits the energy devices spend on communication but also aids in better utilization of the channel by avoiding collisions. Clustering ensures that devices communicate with their respective cluster head at the minimal required power, thereby causing very little interference to devices in neighboring clusters. It also makes it possible to combine and compress the information at the CH (Cluster Head) before relaying it to the central collection point or base station.
 Date Issued
 2008
 Identifier
 FSU_migr_etd2762
 Format
 Thesis
 Title
 Luminous Intensity Measurements for LED Related Traffic Signals and Signs.
 Creator

Jiang, Zhaoning, Zheng, Jim P., Tung, Leonard J., Kwan, Bing W., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

The proper intensity and chromaticity of traffic signals and signs play a key role in the safe management of the traffic environment. The Light Emitting Diode (LED) has become the most important light-emitting device for traffic signals and signs. This thesis describes an experimental measurement system that measures the luminous intensity of several types of LED-based traffic signals and signs. Although chromaticity measurement is mentioned, the thesis focuses on luminous intensity measurement. While there are many different types of traffic signals, this thesis concentrates on the current measurement procedure for the 12-inch traffic signal and the improvement of that procedure. The measurement procedures for other types of LED-related signals and future development are also discussed.
 Date Issued
 2004
 Identifier
 FSU_migr_etd3516
 Format
 Thesis
 Title
 Some Theory and an Experiment on the Fundamentals of Hirschman Uncertainty.
 Creator

Ghuman, Kirandeep, DeBrunner, Victor E., Srivastava, Anuj, DeBrunner, Linda S. (Linda Sumners), Harvey, Bruce A., Roberts, Rodney G., Florida State University, FAMU-FSU College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

The Heisenberg Uncertainty principle is a fundamental concept from quantum mechanics that also describes the Fourier Transform. Unfortunately, it does not directly apply to digital signals. However, it can be generalized if we use entropy rather than energy to form an uncertainty relation. This form of uncertainty, called the Hirschman Uncertainty, uses the Shannon entropy. The Hirschman Uncertainty is defined as the average of the Shannon entropies of a discrete-time signal and its Fourier Transform. The functions that minimize this uncertainty are not the well-known Gaussians from the Heisenberg theory, but the picket fence functions first noticed in wavelet denoising. This connection suggests, but does not conclusively establish, that the Hirschman Uncertainty is fundamental. In this research, we develop two new uncertainty measures derived from the Hirschman Uncertainty and use them to explore its fundamental nature. In the first case, we replace the Shannon entropy with the Rényi entropy and study the impact of varying the Rényi order on the uncertainty of various digital signals. We call this new measure the Hirschman-Rényi uncertainty, denoted U[alpha over ½](x). We find that the derived uncertainty measure is invariant to the Rényi order for picket fence signals and varies for other digital signals such as rectangular, cosine, and square-wave signals. This new uncertainty measure decays as the Rényi order increases. Given the invariance of the uncertainty for picket fence signals, we can use either the Shannon or the Rényi entropy, with any value of the Rényi order, to calculate the Hirschman Uncertainty. In the second case, we derive an uncertainty measure that replaces the Fourier Transform with the Fractional Fourier Transform. 
The Hirschman Uncertainty using the dFRT, denoted U[alpha over ½](x), is explored with the help of the minimizers of the Hirschman Uncertainty (the picket fence signals) along with other digital signals. In this case, we find that the degree of rotation in the Fractional Fourier Transform does impact the uncertainty at integer values of the transform order, but for non-integer values of the transform order, the uncertainty variations are greatly reduced or minimal. Finally, to help verify our theory, we perform a classical texture recognition experiment. We find that the recognition performance follows directly as our Hirschman-Rényi Uncertainty and dFRT-based Hirschman Uncertainty theory suggests. Additionally, it appears that a predictive solution for the proper selection of the Rényi order and the rotation angle can be developed that could significantly aid in image analysis. Our recognition results are consistent with the entropic invariance theory for the two uncertainty measures. These results suggest that the Hirschman Uncertainty may be a fundamental characteristic of digital signals.
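For readers unfamiliar with the quantity, the Hirschman Uncertainty described above, the average of the Shannon entropies of a discrete signal and its unitary DFT, can be sketched in a few lines; the picket fence minimizer attains ½·log N. Function names and the direct O(N²) DFT are illustrative choices, not code from the dissertation.

```python
import cmath
import math

def shannon_entropy(p):
    """Shannon entropy (nats) of a probability vector, skipping zero bins."""
    return -sum(pi * math.log(pi) for pi in p if pi > 1e-15)

def dft(x):
    """Unitary discrete Fourier transform (direct O(N^2) form)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            / math.sqrt(n) for j in range(n)]

def hirschman_uncertainty(x):
    """Average of the Shannon entropies of |x|^2 and |DFT(x)|^2,
    each normalized to a probability distribution."""
    X = dft(x)
    px = [abs(v) ** 2 for v in x]
    pX = [abs(v) ** 2 for v in X]
    sx, sX = sum(px), sum(pX)
    return 0.5 * (shannon_entropy([v / sx for v in px])
                  + shannon_entropy([v / sX for v in pX]))

# A picket fence (impulses every 4 samples, N = 16) attains 0.5 * log(N):
# its DFT is again a picket fence, so both entropies equal log(4).
picket = [1.0 if k % 4 == 0 else 0.0 for k in range(16)]
```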
 Date Issued
 2015
 Identifier
 FSU_2015fall_Ghuman_fsu_0071E_12257
 Format
 Thesis
 Title
 Antenna Array Synthesis Using the Cross Entropy Method.
 Creator

Connor, Jeffrey D. (Jeffrey David), Foo, Simon Y., Weatherspoon, Mark H., Chan-Hilton, Amy, Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

This dissertation addresses the synthesis of antenna arrays using the Cross-Entropy (CE) method, marking the first application of the CE method to solving electromagnetic optimization problems. The CE method is a general stochastic optimization technique for solving both continuous and discrete multi-extremal, multi-objective optimization problems. It is an adaptive importance sampling technique derived from an associated stochastic problem (ASP) for estimating the probability of a rare-event occurrence. The estimation of this probability is determined using a log-likelihood estimator governed by a parameterized probability distribution. The CE method adaptively estimates the parameters of the probability distribution to produce a random-variable solution in the neighborhood of the globally best outcome by minimizing cross entropy. In this work, single- and multi-objective optimization using both continuous and combinatorial forms of the CE method is performed to shape the sidelobe power, mainlobe beamwidth, and null depths and locations, as well as the number of active elements, of linear array antennas by controlling the spacings and complex excitations of each element in the array. Specifically, aperiodic arrays are designed through both nonuniform element spacings and thinning of active array elements, while phased array antennas are designed by controlling the complex excitation applied to each element of the array. The performance of the CE method is demonstrated by considering different scenarios adopted from the literature addressing more popular stochastic optimization techniques such as the Genetic Algorithm (GA) or Particle Swarm Optimization. The primary technical contributions of this dissertation are the simulation results computed using the Cross-Entropy method for the different scenarios adopted from the literature. 
Cursory comparisons are made to the results from the literature, but the overall goal of this work is to expose the tendencies of the Cross-Entropy method for array synthesis problems and help the reader make an educated decision when considering the Cross-Entropy method for their own problems. Overall, the CE method is a competitive alternative to these more popular techniques, possessing attractive convergence properties but requiring larger population sizes.
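The continuous CE loop described above, sample from a parameterized distribution, keep an elite fraction, refit the distribution to the elites, repeat, can be sketched as follows. The Gaussian family, the toy cost standing in for an array objective, and all parameter values are assumptions for illustration, not the dissertation's formulation.

```python
import random
import statistics

def cross_entropy_minimize(cost, dim, iters=60, pop=100, elite_frac=0.1):
    """Continuous CE method: sample candidates from an independent Gaussian,
    keep the elite fraction with the lowest cost, refit the Gaussian, repeat."""
    mu = [0.0] * dim
    sigma = [2.0] * dim
    n_elite = max(2, int(pop * elite_frac))
    for _ in range(iters):
        samples = [[random.gauss(m, s) for m, s in zip(mu, sigma)]
                   for _ in range(pop)]
        samples.sort(key=cost)          # best (lowest cost) first
        elites = samples[:n_elite]
        for d in range(dim):            # refit each coordinate to the elites
            col = [e[d] for e in elites]
            mu[d] = statistics.fmean(col)
            sigma[d] = max(statistics.pstdev(col), 1e-6)
    return mu
```

Swapping the toy cost for a sidelobe-level objective, and the Gaussian family for a Bernoulli family over element on/off states, would give continuous and combinatorial variants in the spirit of those mentioned above.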
 Date Issued
 2008
 Identifier
 FSU_migr_etd3444
 Format
 Thesis
 Title
 Performance Analysis of a Synchronous Coherent Optical Code Division Multiple Access Networks.
 Creator

Chanila, Mohan, Arora, Krishna, Foo, Simon, Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

In this master's thesis, a performance analysis of Optical CDMA (OCDMA) systems is conducted. This is accomplished by simulating and analyzing various CDMA code sequences, which are modified and implemented for all-optical networks in OCDMA systems. These include m-sequences, Gold codes, Prime sequences, and Modified Prime code sequences. A simulation model of an OCDMA network for the analysis and performance of these sequences is developed and analyzed. These studies show that for a large number of simultaneous users in the network, Modified Prime codes give the best system performance. Also, for a coherent system, bipolar codes have significantly better performance. The hardware technology used in the implementation of an OCDMA system is studied, including the transmitter structures and receivers with various modifications that can be implemented. Synchronous and asynchronous techniques, coherent and noncoherent, with electrical or optical processing, are examined for CDMA.
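As an illustration of one code family named above, a prime code over GF(p) places one pulse in each of p blocks of length p; distinct codes then overlap in exactly one chip at zero shift, which is what keeps multiple-access interference low. The sketch below follows the common construction; function names are illustrative, not from the thesis.

```python
def prime_code(i, p):
    """Prime code C_i over GF(p): p blocks of length p, with one pulse per
    block, placed in block j at chip position (i * j) mod p."""
    word = [0] * (p * p)
    for j in range(p):
        word[j * p + (i * j) % p] = 1
    return word

def correlation(a, b):
    """In-phase (zero-shift) correlation of two 0/1 code words."""
    return sum(x * y for x, y in zip(a, b))
```

For distinct i1, i2 the pulses coincide only in block j = 0, so the in-phase cross-correlation is 1 while the autocorrelation peak is p.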
 Date Issued
 2006
 Identifier
 FSU_migr_etd3889
 Format
 Thesis
 Title
 Guiding the Selection of Physical Experiments for the Validation of a Model Designed to Study Grounding in DC Distribution Systems.
 Creator

Infante, Diomar, Edrington, Chris S., Baldwin, Tom, Steurer, Mischa, Foo, Simon Y., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

The following work establishes a process for model validation and its application to the study of grounding in DC shipboard power systems. The aim of the thesis is to create a general procedure detailing how to appropriately select physical experiments that validate the simulation model in use. The procedure presented can be applied to any physical system. In the work presented here, the procedure is implemented on a physical setup representative of a shipboard power system, which is used for the study of grounding. Grounding, in the context of this work, refers to the intentional physical connection from the power-carrying elements in the electrical system to the ship hull. Grounding practices are generally well understood for AC shipboard power systems. However, the same cannot be said for MVDC systems. There is a growing interest in implementing medium-voltage DC systems on shipboard power systems, and this new type of distribution system poses many unanswered questions. One of those questions regards the selection of the grounding scheme. The selection of the grounding scheme for an MVDC system is a question of optimization once the designer has a good understanding of the key parameters found in the system, namely those that have a large impact on the system's responses. This work provides the designer with a tool to assess the impact each of those parameters has on the responses of the system. These system responses can be labeled as metrics and are encapsulated under the two main objectives of shipboard power system grounding: safety and continuity of service. Thus, the aim of this work is to establish a procedure that, regardless of the shipboard power system under study, can deliver a validated simulation model for the designer to optimize. The proposed procedure was applied to a representative physical setup of a shipboard power system. 
The physical model contains most of the requirements needed to understand the issues associated with DC shipboard grounding. Certain aspects of DC shipboard power systems have not been implemented in the physical model due to material constraints. However, the physical system still holds enough value to gain insight into what happens in a DC shipboard power system. In addition, the physical model has enough complexity to serve as a test case for the application of the proposed procedure. The work presented herein focuses on the selection of physical experiments in order to validate the simulation model in a qualitative fashion. The process is presented and its major components are discussed. It is important to note that the design process yields, as a byproduct, insight into the system under study. In the case presented here, in which the process is used with a focus on system grounding, a good explanation of rail-to-ground harmonics is obtained. In summary, the contributions of this work are twofold. The thesis provides a layout for obtaining confidence in the models being used for the system under study. In addition, this work establishes the important parameters regarding grounding in DC shipboard power systems based on the simplified model used.
 Date Issued
 2011
 Identifier
 FSU_migr_etd3875
 Format
 Thesis
 Title
 Kernel Methods and Component Analysis for Pattern Recognition.
 Creator

Isaacs, Jason C., Foo, Simon Y., Meyer-Baese, Anke, Liu, Xiuwen, Chan-Hilton, Amy, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Kernel methods, as alternatives to component analysis, are mathematical tools that provide a higher-dimensional representation for feature recognition and image analysis problems. In machine learning, the kernel trick is a method for converting a linear classification learning algorithm into a nonlinear one by mapping the original observations into a higher-dimensional space, so that the use of a linear classifier in the new space is equivalent to a nonlinear classifier in the original space. In this dissertation we present the performance results of several continuous distribution function kernels, lattice oscillation model kernels, Kelvin function kernels, and orthogonal polynomial kernels on select benchmarking databases. In addition, we develop methods to analyze the use of these kernels for projection analysis applications: principal component analysis, independent component analysis, and optimal projection analysis. We compare the performance results with known kernel methods on several benchmarks. Empirical results show that several of these kernels outperform previously suggested kernels on these data sets. Additionally, we develop a genetic-algorithm-based kernel optimal projection analysis method which, through extensive testing, demonstrates a ten percent average improvement in performance on all data sets over the kernel principal component analysis projection. We also compare our kernel methods for kernel eigenface representations with previous techniques. Finally, we analyze the benchmark databases used here to determine whether we can aid in the selection of a particular kernel that would perform optimally based on the statistical characteristics of each database.
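The kernel trick described above can be made concrete with a degree-2 polynomial kernel: evaluating k(x, y) = (x·y + 1)² is exactly an ordinary inner product in an explicit 6-dimensional feature space (for 2-D inputs), so a linear classifier on the features is a nonlinear classifier on the inputs. The sketch shows a standard identity, not code from the dissertation.

```python
import math

def poly_kernel(x, y):
    """Degree-2 polynomial kernel: k(x, y) = (x . y + 1)^2."""
    return (sum(a * b for a, b in zip(x, y)) + 1.0) ** 2

def poly_features(x):
    """Explicit feature map for 2-D inputs whose ordinary inner product
    reproduces poly_kernel: monomials up to degree 2, suitably weighted."""
    x1, x2 = x
    s = math.sqrt(2.0)
    return [1.0, s * x1, s * x2, x1 * x1, x2 * x2, s * x1 * x2]
```

The point of the trick is that poly_kernel never forms the 6-dimensional vectors, which matters when the implicit feature space is much larger, or infinite-dimensional as with an RBF kernel.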
 Date Issued
 2007
 Identifier
 FSU_migr_etd3861
 Format
 Thesis
 Title
 Efficient Hardware Implementation Techniques for Digital Filters.
 Creator

Guo, Rui, DeBrunner, Linda, Harvey, Bruce, DeBrunner, Victor, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

This dissertation addresses the development of efficient digital filter implementation techniques. Measures of area, latency, and throughput are used to quantify the benefits of the proposed implementation schemes, along with consideration of the digital signal processing algorithm performance. Multiple-constant multiplication (MCM) is a popular approach for implementing fixed-coefficient finite impulse response (FIR) filters. We propose two methods for truncating addition results in an MCM implementation that reduce the required area while decreasing latency. The effects of filter order and coefficient quantization are explored by the proposed search technique, which reduces the computations required by an MCM implementation. Two new adaptive filter implementation techniques based on distributed arithmetic (DA) are proposed that provide reduced area and increased speed without loss of filter performance. Adaptive filter implementations can also be based on real-time conversion of the adapted coefficients into a canonical-signed-digit (CSD) representation; we propose a new conversion circuit that reduces both latency and area. Fine-grained parallelism and relaxed look-ahead techniques are applied to develop a pipelined Gauss-Seidel fast affine projection (GSFAP) adaptive filter implementation that trades some adaptive filter performance and additional area for significantly faster operation. These proposed techniques for fixed-coefficient and adaptive filters can be used in applications where low-area and high-speed implementations are required.
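The canonical-signed-digit representation mentioned above recodes a coefficient with digits in {−1, 0, 1} so that no two nonzero digits are adjacent, which minimizes the number of add/subtract terms in a constant multiplier. A minimal software sketch of the recoding (the dissertation's contribution is a hardware conversion circuit, which this does not attempt to reproduce):

```python
def to_csd(n):
    """Canonical signed-digit (non-adjacent form) recoding of an integer:
    digits in {-1, 0, 1}, least significant first, no two adjacent nonzeros."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_digits(digits):
    """Evaluate a signed-digit string back to an integer."""
    return sum(d << i for i, d in enumerate(digits))
```

For example, the coefficient 7 = 8 − 1 recodes to two nonzero digits instead of binary's three, saving one adder in the multiplier.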
 Date Issued
 2011
 Identifier
 FSU_migr_etd3897
 Format
 Thesis
 Title
 Optimization of a Parallel Cordic Architecture to Compute the Gaussian Potential Function in Neural Networks.
 Creator

Chandrasekhar, Nanditha, Meyer-Baese, Anke, Meyer-Baese, Uwe, Foo, Simon, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Many pattern recognition tasks employ artificial neural networks based on radial basis functions, in which the neural networks determine the statistical characteristics of the pattern-generating processes. The Gaussian potential function is the most common radial basis function considered, and it involves square and exponential function calculations. The Coordinate Rotation Digital Computer (CORDIC) algorithm, used here to compute the exponential function and the exponent, was first derived by Volder in 1959 for calculating trigonometric functions and conversions between rectangular and polar coordinates, and was later developed by Walther. CORDIC is a class of shift-add algorithms for rotating vectors in a plane. In a nutshell, the CORDIC rotator performs a rotation using a series of specific incremental rotation angles, selected so that each is performed by a shift and add operation. This thesis focuses on the implementation of a new parallel hardware architecture to compute the Gaussian potential function in neural basis classifiers for pattern recognition. The proposed hardware computes the exponential function and the exponent simultaneously in parallel, thus reducing the computational delay in the output function. The new CORDIC is synthesized with Altera's MAX+PLUS II software for a FLEX 10K device and adapted for Radix-4 calculation. Case studies are presented comparing the performance of the Radix-2 and Radix-4 designs based on speed and occupied area, respectively. It is observed that although Radix-4 occupies more area than Radix-2, it offers a desirable speed improvement.
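The shift-add rotation idea can be sketched in software: each micro-rotation through atan(2⁻ⁱ) needs only shifts and adds in fixed-point hardware, and the aggregate scaling is folded into a single constant gain at the end. This floating-point sketch illustrates rotation-mode (Radix-2) CORDIC only; it is not the thesis's parallel Radix-4 architecture, and all names are assumptions.

```python
import math

# Micro-rotation angles atan(2^-i) and the aggregate gain correction K
_ANGLES = [math.atan(2.0 ** -i) for i in range(24)]
_K = 1.0
for _i in range(24):
    _K /= math.sqrt(1.0 + 2.0 ** (-2 * _i))

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC: drive the residual angle z to zero by rotating
    (1, 0) through +/- atan(2^-i) steps; valid for |theta| < ~1.74 rad.
    Each step uses only a shift (scaling by 2^-i) and adds. Returns (cos, sin)."""
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(_ANGLES):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x * _K, y * _K
```

With 24 micro-rotations the result is accurate to roughly one part in 2²³, one bit of precision per iteration, which is why CORDIC maps so cleanly onto shift-add hardware.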
 Date Issued
 2005
 Identifier
 FSU_migr_etd3905
 Format
 Thesis
 Title
 Performance of Wide Temperature Range Electrolytes in Lithium-Ion Capacitor Pouch Cells.
 Creator

Cappetto, Anthony, Zheng, Jianping P., Andrei, Petru, Foo, Simon Y., Weatherspoon, Mark H., Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

The performance of lithium-ion capacitors (LICs) is greatly influenced by operating temperature, and cell design factors such as electrolyte formulation and electrode material composition largely determine that performance. Today's commercial LICs do not reach the temperatures needed for extreme-temperature applications. Research was completed to develop alternative electrolytes for wide temperature range applications, and along the way the side effects of lithium plating and stripping in anode materials were explored. The performance metrics used for the LICs were capacity, capacitance and ESR, cycle-life retention, and electrochemical impedance spectroscopy (EIS). Wide temperature range electrolytes were developed for operation from 70°C to −40°C, and lithium plating in different anode materials was mitigated.
 Date Issued
 2016
 Identifier
 FSU_2016SU_Cappetto_fsu_0071N_13438
 Format
 Thesis
 Title
 Realization of Swarm Behavior in Wireless Communication Systems and Ad Hoc Sensor Networks.
 Creator

Hoang, Hai H. (Hai Hung), Kwan, Bing W., Liu, Xiuwen, Roberts, Rodney, Foo, Simon, Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Swarm behavior refers to the direct or indirect interactions among relatively simple agents that together perform a task unknown to any individual agent. The study of swarm behavior originates from research on swarms in nature, such as schools of fish or flocks of birds. Its attraction lies in the collective behavior of independent simple agents, each responding to local information without supervision, producing a global behavior of the entire swarm. The first part of the dissertation investigates the incorporation of swarm behavior into particle filtering to improve channel estimation in narrowband multiple-input multiple-output (MIMO) wireless communication systems. Channel estimation is required at the receiver to coherently detect the transmitted symbols and has a significant impact on system reliability. The particle filter is a powerful method for approximating the posterior distribution of the channel information given the received signals in nonlinear, non-Gaussian systems. However, particle filtering based on importance sampling suffers from the problems of importance density selection and noise uncertainty; the suboptimal particle filters with swarm behavior proposed in Part I overcome these problems. Part II of the dissertation focuses on a class of wireless sensor networks that use ultra-wideband (UWB) technology in the physical layer. UWB technology has potential applications in wireless sensor networks owing to attractive features such as low cost, low complexity, low power, and multiple-access efficiency. A network of a large number of essentially identical sensor nodes is considered, in which each node has limited resources and capabilities; there are clear similarities between a sensor node in such a network and an agent in a swarm. The second part proposes a self-organizing protocol based on swarm behavior for UWB wireless sensor networks. The sensor nodes are grouped into clusters, each of which transfers its information toward a data-collecting node along a steepest-descent path.
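To make the particle-filtering step above concrete, here is a minimal bootstrap particle filter tracking a single scalar channel gain from known pilot symbols. The random-walk channel model, noise levels, and BPSK alphabet are simplified assumptions for illustration, not the dissertation's MIMO setup, and the swarm-behavior modification is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 500                    # pilot symbols, particles
sig_w, sig_v = 0.01, 0.1          # process / measurement noise std (assumed)
s = rng.choice([-1.0, 1.0], T)    # known BPSK training symbols

h = 1.0                           # true (slowly drifting) channel gain
particles = rng.normal(0.0, 1.0, N)   # particle cloud over the gain
est = []
for t in range(T):
    h += sig_w * rng.standard_normal()                        # channel random walk
    y = h * s[t] + sig_v * rng.standard_normal()              # received sample
    particles = particles + sig_w * rng.standard_normal(N)    # propagate particles
    w = np.exp(-0.5 * ((y - particles * s[t]) / sig_v) ** 2)  # likelihood weights
    w /= w.sum()
    est.append(float(w @ particles))                          # MMSE channel estimate
    particles = rng.choice(particles, N, p=w)                 # bootstrap resampling
```

The resampling step is where degeneracy problems arise; the dissertation's swarm-based filters target exactly this weakness of importance sampling.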
 Date Issued
 2010
 Identifier
 FSU_migr_etd4016
 Format
 Thesis
 Title
 Physical Based Modeling and Simulation of LiFePO₄ Secondary Batteries.
 Creator

Greenleaf, Michael, Zheng, Jim P., Andrei, Petru, Li, Hui, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

LiFePO₄ batteries have existed since 1997 through the work of Padhi et al. and showed early promise due to the material's abundance, low cost, low environmental impact, high temperature tolerance, and theoretical capacity of 170 mAh/g. Much work has been done to optimize the cathode, electrolyte, and anode materials, with excellent results. In contrast, little work has been done on developing an accurate, physical-based model of LiFePO₄ behavior. Most "models" are fitted to data obtained from electrochemical impedance spectroscopy (EIS). Because it is easy to build several vastly different models that describe the same data, it is important to define a model based on the physical properties of the cell. Done correctly, it may then be possible to simplify that model while maintaining precision, making it more useful to other fields. In this work, EIS measurements were taken at uniform states of charge (SOC), and from the developed physical-based model a LiFePO₄ battery was accurately described under steady-state conditions.
 Date Issued
 2010
 Identifier
 FSU_migr_etd3991
 Format
 Thesis
 Title
 Performance Analysis of Distributed Control Algorithms Using a Hardware in the Loop Testbed.
 Creator

Sundararajan, Sindhuja, Edrington, Christopher S., Steurer, Michael Morten, Moss, Pedro L., Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

The benefits of a smart grid system depend greatly on efficient implementation of the power delivery system atop the data communication infrastructure, which makes a co-simulation platform necessary for testing the enabling technology. The hardware-in-the-loop testbed (HIL-TB) at the Center for Advanced Power Systems (CAPS) is a cyber-physical testbed that provides a real-time co-simulation platform for testing smart grid operations and control. Due to the inherent complexity of initializing and running the individual components of the HIL-TB, the testbed is typically inaccessible and is mostly used to demonstrate only a single test scenario; because the test setup involves manual intervention, repeatability is lost. The aim of this thesis is to address these concerns. The objective is to develop a methodology for comprehensive testing and analysis of distributed control algorithms developed for smart grid power systems. Metrics are developed to quantify the effect of an algorithm on the underlying power system, which also allows different algorithms to be compared and their effects on different feeder configurations to be assessed. To verify system-level functionality under different operating conditions, the factors affecting system performance are determined, and their values must be chosen intelligently to maximize accuracy while minimizing the number of experiments. The HIL-TB validation framework presented in this thesis is therefore built on the principles of design of experiments. The framework provides a platform for assessing control algorithms that helps de-risk the effects of new techniques on the power system.
 Date Issued
 2016
 Identifier
 FSU_2016SP_Sundararajan_fsu_0071N_13264
 Format
 Thesis
 Title
 Edge Detection of Noisy Images Using 2D Discrete Wavelet Transform.
 Creator

Chaganti, Venkata Ravikiran, Foo, Simon Y., Meyer-Baese, Anke, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Wavelets are mathematical functions that cut data up into different frequency components so that each component can be studied with a resolution matched to its scale. They are an extremely useful tool for coding images and other real-world signals: because the wavelet transform is local in both time (space) and frequency, it localizes information very well compared with other transforms, coding transient phenomena such as edges efficiently, typically in just a few coefficients. This thesis deals with different types of edge detection techniques, concentrating on the two major categories, gradient and Laplacian. The gradient method detects edges by looking for maxima and minima in the first derivative of the image, while the Laplacian method searches for zero-crossings in the second derivative. Given the wavelet transform values, analysis can be carried out in the wavelet domain by comparing the wavelet coefficients that account for the edges; detecting the maxima or inflection points is generally a key step in analyzing the characteristics of non-stationary signals. The wavelet transform has proved to be a very promising technique for multiscale edge detection applied to both 1D and 2D signals. Here, the dyadic wavelet transforms at two adjacent scales are multiplied to form a product function that magnifies edge structures and suppresses noise. Unlike many multiscale techniques that first form edge maps at several scales and then synthesize them, we determine the edges as the local maxima directly in the scale product after an efficient thresholding. It is shown that scale multiplication achieves better results than either scale alone, especially in localization performance. The thesis compares edge detection using traditional operators (Prewitt, Sobel, Frei-Chen, and Laplacian of Gaussian) with the discrete wavelet transform (DWT) using Haar, Daubechies, Coifman, and biorthogonal wavelets. It also addresses edge detection in noisy images and the optimization of the wavelets for edge detection.
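The adjacent-scale product idea can be sketched in 1D. Below, box-smoothed Haar-like differences stand in for a true dyadic wavelet transform, and the noisy step signal and scale choices are invented for illustration; multiplying the details at two adjacent scales reinforces the edge response while uncorrelated noise tends to cancel.

```python
import numpy as np

def dyadic_detail(x, scale):
    # crude dyadic wavelet detail at one scale: smooth with a box filter,
    # then difference over `scale` samples (a dilated Haar-like wavelet)
    sm = np.convolve(x, np.ones(scale) / scale, mode="same")
    d = np.zeros_like(sm)
    d[:-scale] = sm[scale:] - sm[:-scale]
    return d

rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(64), np.ones(64)]) + 0.05 * rng.standard_normal(128)
w1, w2 = dyadic_detail(x, 2), dyadic_detail(x, 4)
p = w1 * w2                        # adjacent-scale product: edge grows, noise shrinks
edge = int(np.argmax(np.abs(p)))   # single strong maximum near the true step at n = 64
```

A threshold on |p| followed by local-maximum detection, as in the thesis, would turn this into a full edge map.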
 Date Issued
 2005
 Identifier
 FSU_migr_etd3948
 Format
 Thesis
 Title
 The Modular Multilevel Converter and Fault Current Management in Medium Voltage DC System of an Electric Ship.
 Creator

Sun, Ke, Faruque, Md Omar, Edrington, Christopher S., Foo, Simon Y., Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

The modular multilevel converter (MMC) is a potential candidate for power conversion in a medium voltage DC (MVDC) based electric ship. A major advantage of using an MMC in an MVDC environment is its capability of limiting DC-side fault current and restarting quickly, because the MMC cells do not need to be re-energized. MMC cells have various configurations, e.g., half-bridge and full-bridge; the full-bridge MMC is more suitable for the MVDC system and for fault current handling. However, the modeling, control, coordination in a multi-MMC system, and fault handling of a full-bridge MMC based MVDC system are still not fully investigated and understood. This thesis focuses on the key issues of full-bridge MMC control and modeling in an MVDC environment and on fault current limiting using multiple MMCs. The fundamental characteristics of the MMC topology are also discussed. Following the single-MMC control design, the MMC control scheme for the MVDC system is designed to provide a fast and controllable DC voltage and current. To reduce the complexity of the MMC circuit, two simple averaged models of the MMC are proposed; to verify their accuracy, the simulation results are compared with results from controller hardware-in-the-loop (CHIL) tests. The comparison shows that both proposed averaged models predict the steady-state values with very good accuracy. To study the behavior of a multi-MMC based MVDC system under DC-side fault scenarios, an MVDC test system is proposed in this work. For comparison purposes, a real-time system model and an offline model are developed: the offline MMC model uses individual IGBT components from the MATLAB/Simulink/SimPowerSystems package, whereas the real-time model is built using the library provided by OPAL-RT. The multi-cell circuit, which has many nodes, is simplified by applying the Thévenin equivalent into a two-node voltage source in series with an equivalent resistance. The thesis also discusses the challenges of determining the sampling time and of grouping the MVDC system component models so that they can run on a multi-core real-time simulator. Besides the modeling of the MVDC system components (e.g., the MMCs and the loads), a fault current limiting strategy is also proposed. The thesis puts forward an operating mode for the multi-MMC system in which only one MMC runs in voltage-controlled mode while the other MMCs run in power-controlled mode. With this operating mode, the fault current can be limited in a DC-side fault scenario, and no mode switching is needed because the same mode also works in normal operation. The proposed fault current limiting strategy also specifies the sequence of converter actions. Five simulation cases are designed to test the proposed fault handling strategy. The results show that the peak fault current depends on the operating conditions, e.g., the pre-fault load current carried by the MMCs, and that the MMC control has some effect on mitigating the peak fault current. The proposed strategy is able to limit the fault current to a certain level in MVDC systems made up of one, two, and four MMCs under different load conditions.
 Date Issued
 2016
 Identifier
 FSU_2016SP_Sun_fsu_0071N_13261
 Format
 Thesis
 Title
 Wavelet Transform Based Image Compression on FPGA.
 Creator

Iqbal, Faizal, Foo, Simon Y., Meyer-Baese, Uwe, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

The wavelet transform has been applied successfully in fields ranging from pure mathematics to the applied sciences. Numerous studies have proven its advantages in image processing and data compression, and recent progress has made it the basic encoding technique in data compression standards. Pure software implementations of the discrete wavelet transform, however, tend to be the performance bottleneck in real-time systems, so hardware acceleration of the discrete wavelet transform has become a topic of interest. The goal of this work is to investigate the feasibility of hardware acceleration of the discrete wavelet transform for image compression applications and to compare the performance improvement against a software implementation. In this thesis, a design for efficient hardware acceleration of the discrete wavelet transform is proposed. The hardware is designed to be integrated as an extension to a custom-computing platform and can be used to accelerate multimedia applications such as JPEG2000 or MPEG-4.
 Date Issued
 2004
 Identifier
 FSU_migr_etd3864
 Format
 Thesis
 Title
 Inferences in Shape Spaces with Applications to Image Analysis and Computer Vision.
 Creator

Joshi, Shantanu H., Srivastava, Anuj, Meyer-Baese, Anke, Klassen, Eric, Roberts, Rodney, Foo, Simon Y., Fisher, John W., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Shapes of boundaries can play an important role in characterizing objects in images. Shape analysis involves choosing mathematical representations of shapes, deriving tools for quantifying shape differences, and characterizing imaged objects according to the shapes of their boundaries. We describe an approach to the statistical analysis of shapes of closed curves using ideas from differential geometry. In this thesis, we initially focus on characterizing shapes of continuous curves, both open and closed, in R^2, and then propose extensions to more general elastic curves in R^n. Under appropriate constraints that remove shape-preserving transformations, these curves form infinite-dimensional, nonlinear spaces, called shape spaces. We impose a Riemannian structure on the shape space and construct geodesic paths under different metrics. Geodesic paths are used to accomplish a variety of tasks, including defining a metric to compare shapes, computing intrinsic statistics for a set of shapes, and defining intrinsic probability models on shape spaces. Riemannian metrics allow the development of tools for computing intrinsic statistics for a set of shapes and for clustering them hierarchically for efficient retrieval. Pursuing this idea, we also present algorithms to compute simple shape statistics (means and covariances) and derive probability models on shape spaces using local principal component analysis (PCA), called tangent PCA (TPCA). These concepts are demonstrated in a number of applications: (i) unsupervised clustering of imaged objects according to their shapes, (ii) statistical shape models of human silhouettes in infrared surveillance images, (iii) interpolation of endo- and epicardial boundaries in echocardiographic image sequences, and (iv) shape statistics for testing phylogenetic hypotheses. Finally, we present a framework for incorporating prior information about high-probability shapes into contour extraction and object recognition in images. Here shapes are studied as elements of an infinite-dimensional, nonlinear quotient space, and statistics of shapes are defined and computed intrinsically using the differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. As in past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate prior shape knowledge in the form of gradient fields on curves. Through experimental results, we demonstrate the use of prior shape models in estimating object boundaries and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition and classification. This Bayesian shape extraction approach yields a significant improvement in the detection of objects in the presence of occlusions or obscurations.
 Date Issued
 2007
 Identifier
 FSU_migr_etd3697
 Format
 Thesis
 Title
 Minimizing FIR Filter Designs Implemented in FPGAs Utilizing Minimized Adder Graph Techniques.
 Creator

Howard, Charles D., DeBrunner, Linda S., DeBrunner, Victor, Harvey, Bruce A., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Multiple constant multiplication (MCM) is an optimization technique that is well suited to DSP implementations. Using MCM, all coefficient multiplications are grouped into one efficient block of wired shifts and adds. A disadvantage of MCM is that the filter coefficients must be known a priori, so MCM optimizations cannot be used in many applications. We propose a programmable adder graph (PAG) circuit that implements multiplication using shift-and-add techniques without prior knowledge of the multiplier value. The PAG circuit allows any programmable device to be optimized using MCM for a wide range of DSP applications, including adaptive filters.
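The wired shift-and-add idea can be sketched for a hypothetical pair of coefficients (45 and 51, chosen purely for illustration), sharing cheap partial products the way an adder graph would, instead of instantiating two full multipliers:

```python
def mcm_45_51(x):
    """Adder-graph-style multiple constant multiplication: compute 45*x and 51*x
    from shared shift-add partial products, with no multiply operations."""
    t5 = (x << 2) + x          # 5x  = 4x + x
    t3 = (x << 1) + x          # 3x  = 2x + x
    y45 = (t5 << 3) + t5       # 45x = 8*(5x) + 5x
    y51 = (t3 << 4) + t3       # 51x = 16*(3x) + 3x
    return y45, y51
```

Each line corresponds to one adder and some wiring in hardware; an MCM optimizer searches for such sharings across all filter coefficients at once, which is why the coefficients must be known a priori.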
 Date Issued
 2009
 Identifier
 FSU_migr_etd3725
 Format
 Thesis
 Title
 Statistical Modeling of SmallScale Fading Channels.
 Creator

Hekeno, Mahinga, Kwan, Bing W., Yu, Ming, Arora, Krishna, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

With the growth of wireless networks, consumers are increasingly aware of the importance and convenience of wireless technology. Wireless technologies such as WLANs, mobile phones, Bluetooth, and PCS rely on a range of mechanisms to provide high quality of service (QoS), at the core of which is accurate modeling of the wireless channel. The radio channel exhibits time-variant linear channel characteristics. In this research, the statistics of the underlying channel behavior are analyzed using a physics-based channel model developed to characterize the small-scale fading behavior of wireless channels. Specifically, we investigate flat slow-fading, flat fast-fading, frequency-selective slow-fading, and frequency-selective fast-fading propagation channels. This thesis provides a computer simulation of a physics-based channel model to define the essential channel parameters and then reproduces the characterized channel by using an autoregressive process to remodel the obtained channel data. The principal method of this study is the Levinson-Durbin recursion, used to build a signal model for channel analysis. The motivation is that, given a set of channel parameters obtained from the physics-based channel model, the proposed autoregressive signal model can reproduce the physical channel parameters and accurately predict the nature of the small-scale fading present in a channel, whether flat slow fading, flat fast fading, frequency-selective slow fading, or frequency-selective fast fading. Performance comparisons are then made between the generated physical properties of the channel and the simulation results of the constructed autoregressive model, using statistical comparisons such as autocorrelation properties to demonstrate the merits of the approach. The manuscript is organized as follows. Chapter 1 provides an introduction and background on communication systems. Chapter 2 describes random time-varying channels and the parameters affecting signal propagation in the communication channel, discussing phenomena such as Doppler shift and multipath delay; the physics-based channel model is also developed there. Chapter 3 discusses the parameters used to categorize wireless channels and the types of multipath fading that can occur in a wireless channel. Autoregressive channel modeling using the Levinson-Durbin recursion is discussed in Chapter 4. Simulation results of the developed model are provided and discussed in Chapter 5. Chapter 6 concludes and discusses areas where further study is needed.
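The Levinson-Durbin recursion at the heart of the autoregressive modeling above can be sketched compactly; this is a generic textbook formulation that fits AR coefficients from an autocorrelation sequence, not the thesis's exact code.

```python
import numpy as np

def levinson_durbin(r, p):
    """Fit AR(p) coefficients phi from autocorrelations r[0..p] by solving the
    Yule-Walker equations recursively in O(p^2) instead of O(p^3).
    Convention: x[n] = phi[0]*x[n-1] + ... + phi[p-1]*x[n-p] + w[n].
    Returns (phi, final prediction-error power)."""
    phi = np.zeros(p)
    e = r[0]
    for m in range(1, p + 1):
        k = (r[m] - phi[:m-1] @ r[m-1:0:-1]) / e     # reflection coefficient
        new = phi.copy()
        new[m-1] = k
        new[:m-1] = phi[:m-1] - k * phi[:m-1][::-1]  # update lower-order taps
        phi, e = new, e * (1 - k * k)                # error power shrinks each order
    return phi, e

# AR(1) with phi = 0.8 and unit noise: r[m] = 0.8^m / (1 - 0.64)
phi, e = levinson_durbin(np.array([1 / 0.36, 0.8 / 0.36]), 1)  # phi ≈ [0.8], e ≈ 1.0
```

Given the autocorrelations produced by the physics-based channel model, the recovered coefficients define the autoregressive filter that regenerates channel realizations with matching second-order statistics.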
 Date Issued
 2008
 Identifier
 FSU_migr_etd4134
 Format
 Thesis
 Title
 Modeling, Simulation, and Experimental Verification of Impedance Spectra in Li-Air Batteries.
 Creator

Mehta, Mohit Rakesh, Andrei, Petru, Schlenoff, Joseph B., Zheng, Jianping P., Moss, Pedro L., Li, Hui, Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

There has been a growing interest in electrochemical storage devices such as batteries, fuel cells and supercapacitors in recent years. This interest is due to our increasing dependence on portable electronic devices and on the high demand for energy storage from the electric transport vehicles and electrical power grid industries. As we transition towards cleaner renewable fuel sources such as solar, wind, tidal, etc. our dependence on energy storage devices will continue to grow. Liair...
Show moreThere has been a growing interest in electrochemical storage devices such as batteries, fuel cells and supercapacitors in recent years. This interest is due to our increasing dependence on portable electronic devices and on the high demand for energy storage from the electric transport vehicles and electrical power grid industries. As we transition towards cleaner renewable fuel sources such as solar, wind, tidal, etc. our dependence on energy storage devices will continue to grow. Liair offers much higher energy density than all other batteries based on electrochemical storage. However, these batteries currently suffer from a number of issues such as a low cyclability and a reduced practical energy density compared to the theoretical energy density. The deposition of lithium peroxide on the surface of the cathode is one of the main causes for the low practical specific capacity of lithiumair batteries with organic electrolyte. Electrochemical impedance spectroscopy (EIS) has been used in the past to extract physical parameters such as chemical diffusion coefficient, effective diffusion coefficient, Faradaic reaction rate, degradation and stability of an electrochemical device. In this dissertation, a physics based analytical model is developed to study the EIS of Liair batteries, in which the mass transport inside the cathode is limited by oxygen diffusion, during charge and discharge. The model takes into consideration the effects of double layer, Faradaic processes, and oxygen diffusion in the cathode, but neglects the effects of anode, separator, conductivity of the deposit layer, and Liion transport. The analytical model predicts that the effects of Faradaic impedance can be hidden by the double layer capacitance. 
Therefore, the dissertation focuses separately on two cases: 1) the case when the Faradaic process and the double layer capacitance are separate and can be observed as two different semicircles on the Nyquist plot and 2) the case when the Faradaic process is shadowed by the double layer capacitance and shows up as only one large semicircle on the Nyquist plot. A simple expression is developed to extract physical parameters such as the values of the diffusion coefficient of oxygen and Faradaic reaction rate from experimental impedance spectrum for each of the two cases. The diffusion coefficient can be determined by using the resistances (real impedance intercept on the Nyquist plot) of both the semicircles for the first case and by using the combined resistance for the second case. Once, the effective oxygen diffusion coefficient is estimated, it can be used to estimate the value of the reaction constant. This method of extracting the values of the diffusion coefficient and reaction constant can serve as a tool in identifying an effective electrolyte or cathode material. It can also serve as a noninvasive technique to identify and also quantify the use of the catalyst to improve the reaction kinetics in an electrochemical system. Finally, finite element simulations are used to validate the analytical models and to study the effects of discharge products on the impedance spectra of Liair batteries with organic electrolyte. The finite element simulations are based on the theory of concentrated solutions and the complex impedance spectra are computed by linearizing the partial differential equations that describe the mass and charge transport in Liair batteries. These equations include the oxygen diffusion equation, the Li driftdiffusion equation, and the electron conduction equation. The reaction at the anode and cathode are described by ButlerVolmer kinetics. 
The total impedance of a Li-air battery increases by more than 200% when the response is measured near the end of the discharge cycle compared to a fresh battery. The resistivity of the deposition layer significantly affects the deposition profile and the total impedance. Using electrolytes with high oxygen solubility and concentrated O2 gas at high pressures will reduce the total impedance of Li-air batteries.
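The two-semicircle picture in case 1 can be illustrated with a minimal equivalent-circuit sketch. All component values below are invented for demonstration and are not taken from the dissertation; the point is only that the width of each semicircle on the real axis of the Nyquist plot equals the corresponding resistance, the quantity the analytical expressions use to back out the oxygen diffusion coefficient and reaction rate.

```python
import numpy as np

# Hypothetical Nyquist spectrum with two time constants (double layer and
# Faradaic process), as in case 1 above.  Values are illustrative only.
R_s, R_dl, C_dl, R_f, C_f = 5.0, 20.0, 1e-5, 60.0, 1e-2  # ohms, farads

def impedance(freq_hz):
    """Series resistance plus two parallel R-C arcs."""
    w = 2 * np.pi * freq_hz
    z_dl = R_dl / (1 + 1j * w * R_dl * C_dl)
    z_f = R_f / (1 + 1j * w * R_f * C_f)
    return R_s + z_dl + z_f

freqs = np.logspace(-3, 6, 2000)
z = impedance(freqs)

# Real-axis intercepts: Re(Z) -> R_s at high frequency and
# Re(Z) -> R_s + R_dl + R_f at low frequency, so the combined
# semicircle width on the real axis recovers R_dl + R_f.
width_total = z.real.max() - z.real.min()
print(width_total)  # ≈ R_dl + R_f = 80 ohms
```

With well-separated time constants (as here), each arc's individual width can be read off the same way, which is the "two resistances" input to the parameter-extraction expressions described above.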
 Date Issued
 2015
 Identifier
 FSU_2015fall_Mehta_fsu_0071E_12827
 Format
 Thesis
 Title
 Coupled Subspace Analysis and PCA Variants: A Computer Vision Application.
 Creator

Nelson, Richard A., Roberts, Rodney G., Foo, Simon Y., Tung, Leonard J., Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

In numerous applications involving high-dimensional data, subspace techniques such as principal components analysis (PCA) may be utilized for feature extraction. Often, PCA can reduce the dimensionality while retaining most of the significant information of the original data. This is beneficial not only for representing the data more compactly (compression), but also for transforming the data into a more useful form for applications involving feature extraction and classification. Relatively recent developments extend conventional principal components analysis to newer variants of PCA which appear particularly useful in computer vision and image applications: (1) two-dimensional PCA ("2D PCA"), and (2) bidirectional or bilateral two-dimensional PCA ("B2DPCA", "Bi2DPCA", or "(2D)² PCA"). The latter category includes an iterative version, which is an example of coupled subspace analysis or "CSA"; the non-iterative version is known as projective Bi2DPCA. In this thesis, these PCA variants are considered as special cases of the more general CSA. The theoretical advantages of 2D PCA and bidirectional PCA over conventional PCA arise from the fact that significant information about the spatial relationship between image pixels may be discarded in conventional PCA, where the image is represented by a large column vector, whereas 2D PCA and bidirectional PCA can preserve more of this information by representing the image as a matrix rather than a long vector. The problems of small sample size and the curse of dimensionality are also alleviated to some extent, particularly in the cases of B2DPCA and iterated CSA. Some of these PCA variants have recently been proposed for various image recognition applications, including biometric identification using iris texture, face images, and palm prints, and categorization of wood species based on wood grain texture, to name a few examples. 
While much focus has been placed on feature extraction methods such as Gabor wavelets or similar techniques for applications such as iris recognition, some subspace techniques, including these PCA variants, have shown promise in conjunction with image preprocessing techniques for removal of uneven background illumination and for contrast enhancement. In this thesis, the application of biometric iris recognition is chosen as the means of evaluating the potential advantages of these newer PCA variants, including CSA, in the context of feature extraction and classification. The rich texture information of these images, and the utilization of effective image registration techniques, yield images which are well suited for this purpose. As the primary focus of this thesis, these PCA variants are evaluated in closed-set identification test mode and are compared using a Euclidean-distance single-nearest-neighbor classifier; images are preprocessed using top-hat filtering and contrast-limited adaptive histogram equalization (CLAHE). The use of multiple test (probe) images is considered, and the impact on performance is also assessed for training image sets with 2, 3, and 4 sample images per class. Concurrently, the application of iris image recognition is addressed in detail. Other applications for which these PCA variants and preprocessing techniques may be beneficial are discussed in the concluding section.
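As a concrete sketch of the matrix-based idea behind 2D PCA (one of the variants discussed above), the following computes the image scatter matrix and projects each image onto its leading eigenvectors. The random arrays stand in for iris images, and the dimensions and number of retained components are arbitrary choices for illustration.

```python
import numpy as np

# Minimal 2D PCA sketch: features are computed directly on image matrices
# instead of flattened vectors, preserving row/column structure.
rng = np.random.default_rng(0)
images = rng.standard_normal((50, 32, 24))   # 50 "images", 32x24 pixels

mean_img = images.mean(axis=0)
centered = images - mean_img

# Image scatter matrix: average of (A - mean)^T (A - mean), size 24x24.
G = sum(a.T @ a for a in centered) / len(centered)

# Keep the leading d eigenvectors as the projection basis.
d = 5
eigvals, eigvecs = np.linalg.eigh(G)         # eigenvalues in ascending order
X = eigvecs[:, -d:]                          # 24 x d basis

features = centered @ X                      # each image -> 32 x d feature matrix
print(features.shape)
```

The bidirectional (B2DPCA/CSA) variants additionally project from the left with a second basis computed from the row scatter matrix, compressing both image dimensions; classification then compares the resulting feature matrices, e.g. by Frobenius distance to a nearest neighbor.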
 Date Issued
 2015
 Identifier
 FSU_2015fall_Nelson_fsu_0071N_12964
 Format
 Thesis
 Title
 SAS Yaw Motion Compensation Using Along-Track Phase Filtering.
 Creator

Joshi, Shantanu H., Gross, Frank B., Arora, Krishna R., Roberts, Rodney R., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

In order to image or map targets on the ocean floor, a synthetic aperture sonar platform is moved underwater over the ocean floor. The platform pings, or transmits acoustic signals, which reflect off the target back to the receiver. A target image is generated after applying a focusing or beamforming algorithm to the processed received signal. However, the moving platform, when pinging, undergoes motions such as yaw, sway, and surge, which produce distortions in the final target image. The main objective of this thesis is to geometrically model yaw motion and apply a motion compensation scheme to correct for the target image distortion caused by yaw. The compensation scheme makes use of phase filtering of the received signals to improve the target image quality. The results obtained demonstrate the effectiveness of the method in compensating for the target image distortion due to yaw motion.
 Date Issued
 2002
 Identifier
 FSU_migr_etd3691
 Format
 Thesis
 Title
 Alternative Measurement Approach Using Inverse Scattering Theory to Improve Modeling of Rotating Machines in Ungrounded Shipboard Power Systems.
 Creator

Breslend, Patrick Ryan, Edrington, Christopher S., Graber, Lukas, Steurer, Michael, Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

The Navy has proposed to use a shipboard power system operating at medium-voltage direct current to distribute power for its all-electric ship. The power is generated by electric machines as alternating current and requires power electronic rectifiers to output direct current. Power electronic converters are then needed to convert the direct current back to alternating current for ship propulsion and service loads. An increase in the use of fast-switching power electronics is expected in future ships. The faster voltage rise times of such switches are known to produce unwanted high frequencies with corresponding wavelengths of the same order of magnitude as the length of the ship hull. These high-frequency transients can cause the ship system to couple with the surrounding ship hull, causing adverse effects. The amount of high-frequency content, and the impact it has on ship system performance, is difficult to calculate with current models. Increased voltage and performance requirements for power electronics have led to advancements in switching frequencies into the tens to hundreds of kilohertz and to increased voltage edge rates. The faster switching corresponds to higher-frequency responses from the shipboard power system. Research has shown that high-frequency content in electrical power systems is responsible for parasitic coupling and ultimately damage to equipment. Electric machines, for instance, exhibit increased winding and iron losses, overvoltages at the terminals, and even bearing currents via shaft voltages. The Navy is interested in simulating ship systems to test their electromagnetic compatibility before implementing or committing to a specific design. There are numerous techniques used to acquire machine parameters that have proven useful in modeling electric machine behavior. 
The approaches were compared in terms of the amount of proprietary information needed to acquire accurate results, the complexity of the modeling methods, and the overall time required for implementation. A majority of system simulations gravitate toward simple solutions for machine behavior, which require assumptions that deviate from the actual machine behavior. Exact inner dimensions, winding layouts, end-winding dimensions, insulation thickness, and other such information are proprietary and often not accurate representations of the physical machine once built. It is time consuming to obtain an accurate working model when assumptions must be made or when detailed computer-aided design models are needed to calculate machine response quantities. The modeling approach put forth in this paper is not aimed at capturing the steady-state behavior of the machine. It is shown that a detailed understanding of the motor may not be necessary to accurately model the high-frequency effects. It is the transient behavior at non-operating frequencies that needs to be modeled correctly to develop new models of shipboard power systems for grounding research. The frequency-dependent information is most useful for determining frequencies of interest that other modeling techniques are less likely to capture. Previously suggested measurement techniques have been considered useful in determining machine parameters, but they are not always accurately implemented without in-depth knowledge of the motor that may be proprietary. Lumped-parameter models are based on extracting information at transitional frequencies or on the slope of a variable over a frequency range. These models tend to be oversimplified representations of the component because they average the parameters over given ranges. In reality, a machine's impedance varies with all frequencies. 
Lumped-parameter models typically oversimplify the grounding behavior of the machine by not varying the impedance as a function of frequency. The technique used in this research is based on scattering parameters, a way of determining the terminal behavior of the machine without knowledge of its actual inner workings. The inverse scattering technique uses steady-state stimuli to calculate reflection and transmission coefficients of system components, allowing the device to be treated as a black box. This can be understood as a set of electrical snapshots of how the machine would respond when subjected to a range of spectral content. The approach could have a significant impact on the modeling of ground interactions with machines: the machine can now be measured and characterized with no prior knowledge of its construction. The measurements are placed in simulation software in the typical measurement configurations used in other approaches to extract parametric data. It was discovered that these different configuration setups could now be measured in software without the need to physically reconfigure the machine's wiring for each measurement. This modeling approach was coined 'virtual measurement modeling.' To the best of the author's knowledge, there are no known techniques for fast model prototyping of electric machines that cover a broad range of frequencies with high accuracy. This thesis presents a possible solution for consideration in future models developed for grounding studies. The approach outlines a promising technique that can be easily implemented with high accuracy and reproducibility. It was derived from inverse scattering theory and was implemented on electric machines for characterizing high-frequency behaviors.
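For a single terminal pair, the black-box idea reduces to the standard relation between a reflection coefficient and a terminal impedance, Z = Z₀(1 + S₁₁)/(1 − S₁₁). The sketch below uses illustrative values only (the thesis works with multi-port measurements on real machines); it simply demonstrates the round trip between the two representations over frequency.

```python
import numpy as np

Z0 = 50.0  # reference impedance of the measurement system, ohms

def s11_to_impedance(s11):
    """Map a one-port reflection coefficient to a terminal impedance."""
    return Z0 * (1 + s11) / (1 - s11)

def impedance_to_s11(z):
    """Inverse mapping: terminal impedance to reflection coefficient."""
    return (z - Z0) / (z + Z0)

# Round trip for a frequency-dependent impedance (a series R-L as a
# stand-in for one machine terminal over frequency; invented values).
freqs = np.logspace(3, 7, 5)                      # 1 kHz .. 10 MHz
z_true = 2.0 + 1j * 2 * np.pi * freqs * 1e-6      # R = 2 ohm, L = 1 uH
s11 = impedance_to_s11(z_true)
z_back = s11_to_impedance(s11)
print(np.allclose(z_back, z_true))  # True
```

Because the mapping is exact and invertible, a table of measured S-parameters over frequency fully captures the terminal behavior at every measured frequency, which is what makes the "virtual measurement" reconfiguration in software possible.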
 Date Issued
 2015
 Identifier
 FSU_2015fall_Breslend_fsu_0071N_12834
 Format
 Thesis
 Title
 A Study of Despread-Respread Multitarget Adaptive Algorithms in an AWGN Channel.
 Creator

Connor, Jeffrey D., Gross, Frank B., Foo, Simon, Kwan, Bing W., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Typical adaptive algorithms attempt to exploit some characteristic of a desired mobile user's signal incident upon an array of antenna elements to form a blind estimate of the user's signal; this estimate is then used to update the weights applied to each element of the array in order to perform beamsteering. Generally, when mobile users operate in a CDMA environment, two particular characteristics are exploited: 1) minimizing the mean square error (MSE) between the array output and the blind estimate of the desired user, and 2) restoring the constant modulus to the output of the adaptive array corrupted by noise in the channel. These typical adaptive algorithms do not utilize knowledge of the spreading sequences used in a CDMA system, which separate users occupying the same frequency and time channels. However, this knowledge is exploited by Despread-Respread Multitarget Arrays (DRMTA). The four DRMTA algorithms which currently exist are: 1) the Least Squares Despread-Respread Multitarget Constant Modulus Array (LS-DRMTCMA), 2) the Least Squares Despread-Respread Multitarget Array (LS-DRMTA), 3) the Block-Based RLS Despread-Respread Multitarget Array (B-RLS-DRMTA), and 4) the Despread-Respread Kalman Predictor Multitarget Array (DR-KPMTA). The objective of this thesis is to develop a comparison between these four algorithms for a stationary, additive white Gaussian noise (AWGN) channel in a CDMA mobile environment using MATLAB computer simulations for the following metrics: 1) array factor patterns (beampatterns), 2) signal-to-interference-plus-noise ratio (SINR), 3) convergence degree of weight (CDW), and 4) bit error rate (BER). These comparisons are performed for several different scenarios: a highly corruptive AWGN channel; a low-SINR environment; response to poor initial conditions; measurement of convergence characteristics; a number of users greater than or equal to the number of array elements; response to a sudden increase in the total number of users in the environment; reduced orthogonality of the spreading sequences; and minimizing MSE by maximizing CDW.
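A toy sketch of the despread-respread principle common to these algorithms is shown below. It covers one user and one bit period with a simplified y = wᵀx array model and invented parameters; it is not any of the four thesis algorithms as specified, only the reference-generation step that distinguishes them from blind MSE-only or constant-modulus approaches.

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem, spread_len = 4, 31
code = rng.choice([-1.0, 1.0], size=spread_len)   # user's known spreading code

# Received chips over one bit period: rank-one signal plus channel noise.
bit = 1.0
steer = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(0.3))  # array response
X = np.outer(steer, bit * code)                               # n_elem x spread_len
X += 0.05 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

w = np.ones(n_elem, dtype=complex) / n_elem       # initial weights
y = w @ X                                         # array output (y = w^T x model)

# Despread with the known code, hard-decide the bit, then respread to get
# the reference signal -- the key DRMTA step.
bit_hat = np.sign((y * code).real.sum())
reference = bit_hat * code

# Least-squares weight update: fit the array output to the reference.
w, *_ = np.linalg.lstsq(X.T, reference, rcond=None)
print(bit_hat)
```

In the multitarget setting, one such despread-respread branch runs per spreading code, so the array simultaneously steers beams toward all users of interest.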
 Date Issued
 2005
 Identifier
 FSU_migr_etd3454
 Format
 Thesis
 Title
 Gallium Arsenide MESFET Small-Signal Modeling Using Backpropagation & RBF Neural Networks.
 Creator

Langoni, Diego, Weatherspoon, Mark H., Meyer-Bäse, Anke, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

The small-signal intrinsic ECPs (equivalent circuit parameters) of a 4×50 µm gate width, 0.25 µm gate length GaAs (gallium arsenide) MESFET (metal-semiconductor field-effect transistor) were modeled versus bias (voltage and current) and temperature using backpropagation and RBF (radial basis function) ANNs (artificial neural networks). The resulting ANNs consisted of 3-input, 8-output models of the MESFET ECPs and were compared to each other in terms of memory usage, convergence speed, and accuracy. Each network's performance was also evaluated under "normal" training conditions (75% training data with a uniform distribution) and "stressed" training conditions (50% and 25% training data with a uniform distribution; 75%, 50%, and 25% training data with a skewed distribution). The results showed that the RBF network achieved much better overall convergence speed, as well as better accuracy, under both "normal" and "moderately stressed" training conditions. However, the backpropagation network yielded better accuracy under the "extremely stressed" training conditions and better overall memory usage.
 Date Issued
 2005
 Identifier
 FSU_migr_etd3286
 Format
 Thesis
 Title
 Investigation and Development of Li-air and Li-air Flow Batteries.
 Creator

Chen, Xujie, Zheng, Jianping P., Liu, Tao, Moss, Pedro L., Andrei, Petru, Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

This dissertation is mainly focused on the investigation of the cathode in Li-air batteries using organic electrolyte and on the development of high-rate rechargeable Li-air flow batteries. A Li-air battery using organic electrolyte, with an air electrode made from a mixture of carbon nanotubes (CNT) and carbon nanofibers (CNF), is utilized to investigate the capacity limitation effects of the cathode using a multiple-discharge method. Scanning electron microscopy (SEM) images show that the discharge product mainly forms at the air side of the cathode due to low oxygen solubility and diffusivity in the organic electrolyte. This inhomogeneous distribution of discharge product indicates that the Li-air cell falls short of the maximum capacity of the air electrode. Electrochemical impedance spectra (EIS) demonstrated that during discharge at high current density (1 mA/cm²) pore blocking is the major factor that limits capacity; however, during discharge at low current density (0.2 mA/cm²) both pore blocking and impedance rise contribute to the capacity limitation. It has been confirmed that the cathode is the dominant limitation on the discharge capacity. Also, a gradient-porosity cathode structure is able to increase the capacity based on the weight of carbon, but the electrolyte loading needs to be optimized to achieve high cell energy density. A novel rechargeable Li-air flow battery is also demonstrated. It consists of a lithium-ion-conducting glass-ceramic membrane sandwiched between a Li-metal anode in organic electrolyte and a carbon nanofoam cathode through which oxygen-saturated aqueous electrolyte flows. It features a flow cell design in which the aqueous electrolyte is bubbled with compressed air and is continuously circulated between the cell and a storage reservoir to supply sufficient oxygen for high power output. It shows high rate capability (5 mA/cm²) and delivers a power density of 7.64 mW/cm² at a constant discharge current density of 4 mA/cm². 
With RuO₂ added as a catalyst in the cathode, the battery showed a high round-trip efficiency (ca. 83%), with an overpotential of 0.67 V between charge and discharge at a current density of 1 mA/cm². A Li-air flow battery using graphite as the anode is also demonstrated for several cycles.
 Date Issued
 2014
 Identifier
 FSU_migr_etd9156
 Format
 Thesis
 Title
 Consensus-Based Distributed Control for Economic Dispatch Problem with Comprehensive Constraints in a Smart Grid.
 Creator

Cao, Jianwu, Yu, Ming, Meyer-Baese, Anke, Andrei, Petru, Li, Hui, Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

Over the past few decades, smart grid technology has developed rapidly due to its main features of greater customer involvement and the ability to accommodate renewable energy and distributed storage. More importantly, it offers improved reliability, power quality, and self-healing capability. However, there are many problems and challenges associated with the development of the smart grid. For example, the economic dispatch problem (EDP) in a smart grid has become more complex and challenging due to the special characteristics of smart grids; in particular, one of their major characteristics is plug-and-play operation arising from the accommodation of distributed energy. Economic dispatch is the short-term determination of the optimal output of a number of electricity generation facilities to meet the system load at the lowest possible cost, subject to transmission line losses and generation constraints. In short, the EDP is an optimization problem whose aim is to reduce the total operation cost. Various mathematical and optimization methods have been developed to solve the EDP in power systems. Most of the conventional methods collect global information and process commands in a centralized controller. In a smart grid, it is expensive and unreliable for these conventional centralized methods to achieve a minimum cost when generating a certain amount of power within certain power constraints. There are several reasons why centralized methods are not suitable for the EDP in a smart grid. First, the centralized controller requires a high level of connectivity to collect all the information from the power generators; a failure or error may impair the effectiveness of the centralized controller. Second, the topologies of the smart grid and the communication network are likely to be variable, so a small change in the smart grid may lead to reconfiguration of the centralized algorithm. 
Third, the centralized controller is not able to accommodate the plug-and-play characteristic of the smart grid. In this work, we propose a distributed controller based on a consensus algorithm to solve the EDP in a smart grid. The consensus algorithm is based on graph theory from the area of communication. Compared with the centralized method, the distributed algorithm features the advantages of lower information requirements, robustness, and scalability. In order to present a more practical scenario of the EDP, a quadratic cost function and comprehensive constraints are assumed in the problem definition; the valve point effect of the generation units is assumed negligible. Different from the centralized approach, the proposed algorithm enables each generator to collect the mismatch between power demand and power generation in a distributed manner. The mismatch power is used as feedback for each generator to adjust its power generation. In order to implement the consensus algorithm, the incremental cost of each generator is selected as the consensus quantity and will eventually converge to a common value. Simulation results of different case studies are provided to show the effectiveness of the proposed algorithm. The effects of power constraints, communication topology, and generator dynamics on the convergence and iteration speed of the proposed algorithm are also examined. These case studies are simulated and analyzed in Matlab/Simulink. The convergence speed and total generation cost of the proposed algorithm are also compared with conventional algorithms such as the lambda iteration method and particle swarm optimization; the consensus algorithm has a better combined performance of convergence and total generation cost than either. Finally, to validate the consensus algorithm, an IEEE 14-bus system with the proposed algorithm is established in PSCAD/EMTDC and verified by comparison with the analytical results.
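The incremental-cost consensus idea can be sketched in a few lines. This is a deliberately simplified version: three generators with hypothetical quadratic costs, a fixed consensus matrix, no generation limits, and the power mismatch assumed known to every node rather than collected through neighbor-to-neighbor exchanges as in the thesis.

```python
import numpy as np

# Quadratic costs C_i(P) = a_i P^2 + b_i P; incremental cost is
# lambda_i = dC_i/dP = 2 a_i P + b_i.  Values are illustrative.
a = np.array([0.10, 0.12, 0.15])
b = np.array([2.0, 1.5, 1.8])
demand = 100.0

# Row-stochastic consensus matrix for a fully connected 3-node graph.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

lam = b.copy()              # start from zero output at every generator
eps = 0.05                  # feedback gain on the power mismatch
for _ in range(500):
    P = (lam - b) / (2 * a)              # each generator's output at its lambda
    mismatch = demand - P.sum()          # demand/generation mismatch feedback
    lam = W @ lam + eps * mismatch       # consensus step + mismatch correction

print(np.round(lam, 3), round(P.sum(), 2))
```

At convergence all incremental costs agree and the mismatch vanishes, which is exactly the equal-incremental-cost optimality condition for the unconstrained quadratic dispatch problem.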
 Date Issued
 2014
 Identifier
 FSU_migr_etd9153
 Format
 Thesis
 Title
 A Framework for Comparing Shape Distributions.
 Creator

Henning, Wade, Srivastava, Anuj, Alamo, Rufina G., Huffer, Fred W. (Fred William), Wu, Wei, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

The problem of comparing shape populations is present in many branches of science, including nanomanufacturing, medical imaging, particle analysis, fisheries, seed science, and computer vision. Researchers in these fields have traditionally characterized the profiles in these sets using combinations of scalar-valued descriptor features, like aspect ratio or roughness, whose distributions are easy to compare using classical statistics. However, there is a desire in this community for a single comprehensive feature that uniquely defines these profiles; the shape of the profile itself is such a feature. Shape features have traditionally been studied as individuals, and comparing distributions underlying sets of shapes is challenging. Since the data come in the form of samples from shape populations, we use kernel methods to estimate the underlying shape densities. We then take a metric approach to define a proper distance, termed the Fisher-Rao distance, to quantify differences between any two densities. This distance can be used for clustering, classification, and other types of statistical modeling; however, this dissertation focuses on comparing shape populations via a classical two-sample hypothesis test with populations characterized by their respective probability densities on shape space. Since we are interested in the shapes of planar closed curves, and the space of such curves is infinite dimensional, there are theoretical issues in defining and estimating densities on this space. We therefore use a spherical multidimensional scaling algorithm to project shape distributions to the unit two-sphere, which allows us to use a von Mises-Fisher kernel for density estimation. The estimated densities are then compared using the Fisher-Rao distance, which, in turn, is estimated using Monte Carlo methods. This distance estimate is used as the test statistic for the two-sample hypothesis test mentioned above. 
We use a bootstrap approach to perform the test and to evaluate population classification performance. We demonstrate these ideas using applications from industrial and chemical engineering.
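The last two steps of the pipeline (von Mises-Fisher kernel density estimation on the sphere and a Monte Carlo estimate of the density distance) can be sketched as follows. The populations, kernel concentration, and sample sizes are all invented for illustration, and the distance here is the geodesic arccos of the Bhattacharyya coefficient, which gives the Fisher-Rao distance up to a constant factor under the square-root-density representation.

```python
import numpy as np

rng = np.random.default_rng(2)

def unit(v):
    """Normalize vectors to the unit sphere."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def vmf_density(x, centers, kappa=20.0):
    """Kernel density estimate on S^2 with vMF kernels at the samples."""
    c = kappa / (4 * np.pi * np.sinh(kappa))           # vMF normalizer on S^2
    return np.mean(c * np.exp(kappa * (x @ centers.T)), axis=1)

# Two "shape populations" already projected to the sphere, concentrated
# around different directions (stand-ins for spherical MDS output).
pop1 = unit(rng.normal([3.0, 0.0, 0.0], 1.0, (40, 3)))
pop2 = unit(rng.normal([0.0, 3.0, 0.0], 1.0, (40, 3)))

# Monte Carlo estimate of the Bhattacharyya coefficient over S^2:
# uniform points on the sphere, surface area 4*pi.
u = unit(rng.standard_normal((20000, 3)))
bc = 4 * np.pi * np.mean(np.sqrt(vmf_density(u, pop1) * vmf_density(u, pop2)))
bc = min(bc, 1.0)            # guard against Monte Carlo noise above 1

d_fr = np.arccos(bc)         # geodesic distance between the two densities
print(round(float(d_fr), 2))
```

In the testing procedure described above, this distance would then be compared against its bootstrap null distribution, obtained by repeatedly pooling and resplitting the two samples.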
 Date Issued
 2014
 Identifier
 FSU_migr_etd9185
 Format
 Thesis
 Title
 Throughput Improvement in Multihop Ad Hoc Network Using Adaptive Carrier Sensing Range and Contention Window.
 Creator

Acholem, Onyekachi, Harvey, Bruce, Zhang, Zhenghao, Srivastava, Anuj, Roberts, Rodney, Foo, Simon, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Demand for decentralized, wireless, ad hoc systems, in which hosts are free to leave or join, to replace wired communication systems has seen phenomenal growth. Such networks need little or no infrastructure support to operate. Deploying these networks, such as wireless sensor networks (WSNs), opens new frontiers and opportunities to collect and process data from remote locations. The large number of nodes in these wireless networks invariably results in higher node densities and increased levels of network interference. Interference mitigation is therefore crucial to ensuring these networks operate efficiently. Often the lack of network planning and regulation for such networks requires the targeted access strategy to be distributed and adaptive to network conditions. The goal of this research is to design an algorithm, employing mathematical tools, for optimizing spatial reuse among nodes in the ad hoc network so that multiple communications between nodes can proceed simultaneously, thereby maximizing the network throughput. To maximize spatial reuse, the IEEE 802.11 Medium Access Control (MAC) protocol is modified so that each transmitting node can fine-tune its data rate and carrier sense range adaptively, depending on minimal local receiver response data. All nodes must be able to detect and communicate with their neighbors in order to determine the network structure, execute network functions, and transmit collated information back to the remote node. The network topology will be discovered using clustering schemes such as the K-means technique, which minimizes the Euclidean distance between random nodes. Each cluster will have a cluster head that keeps track of local information about the nodes in its cluster. 
A further goal of this research is to demonstrate that the physical carrier sensing incorporated in the 802.11 MAC protocol can adaptively optimize the sensing threshold of the nodes and minimize interference within the network without the benefit of the request-to-send (RTS) and clear-to-send (CTS) handshake of virtual carrier sensing. Considerable nodal energy and packet overhead would be saved by turning off the RTS/CTS handshake process. An analytic design will be presented for acquiring the optimal sensing threshold given a network topology, the data rate, and the transmit/receive power of the nodes. Two major issues to be addressed in improving spatial reuse are: 1) the optimal range of transmit data rate and carrier sense threshold for maximum network capacity, and 2) the relationship between the carrier sense threshold and the contention window. Furthermore, results from this research will show that tuning the carrier sense threshold and contention window offers several advantages, including delivering considerably more aggregate throughput than is obtained from a static carrier-sense-threshold network with no previous knowledge of the network topology. This will enable nodes to sustain a high data rate while keeping the adverse effect of collisions on neighboring simultaneous communications to a minimum. In the end, the communication protocol will be improved to achieve better utilization of the scarce wireless spectrum. The simulation and performance evaluation tools required for this work are the Network Simulator 2 (NS-2), and the AWK and Perl programming languages.
 Date Issued
 2010
 Identifier
 FSU_migr_etd0108
 Format
 Thesis
 Title
 Analysis of Aftereffect Phenomena and Noise Spectral Properties of Magnetic Hysteretic Systems Using Phenomenological Models of Hysteresis.
 Creator

Adedoyin, Ayodeji Adeoye, Andrei, Petru, Chiorescu, Irinel, Arora, Rajendra K., Foo, Simon Y., Zheng, Jim P., Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

A robust and computationally efficient Monte Carlo based technique is developed to analyze the magnetic aftereffect and noise passage phenomena in magnetic hysteretic systems using phenomenological models of hysteresis. The technique is universal and can be applied to model the aftereffect and noise passage phenomena in the framework of both scalar and vector models of hysteresis. Using this technique, we analyze a variety of magnetic viscosity phenomena. Numerical results related to the decay of the magnetization as a function of time, as well as to the viscosity coefficient, are presented. It is shown that a log t (logarithmic time) type dependence of the average value of the magnetization can be predicted qualitatively in the framework of phenomenological models of hysteresis, such as the Preisach, Energetic, Jiles-Atherton, and Hodgdon models. The basic assumption of the techniques developed in this dissertation is that the total applied field is equal to the external applied field plus a random perturbation field. The total magnetic field is used as input to the scalar or vector models of hysteresis (vector models of hysteresis are defined in this dissertation as a superposition of scalar models of hysteresis distributed along all possible spatial directions). A statistical approach is developed to compute the average value and direction of the magnetization vector as a function of time. Whereas in the case of isotropic materials the magnetization vector usually moves on a straight line oriented towards the direction of the applied field, in the case of anisotropic materials the magnetization vector can switch from one easy axis to another and cross the direction of the applied field. It is shown that, depending on the initial hysteretic state, the trajectory of the magnetization vector can deviate substantially from the straight line, which is a purely vectorial relaxation effect. 
The vectorial properties of magnetic viscosity and data collapse phenomena are also investigated. The definition of the viscosity coefficient, which has traditionally been used to model aftereffect phenomena in scalar magnetic systems, is generalized to describe three-dimensional systems, in which both the direction and the magnitude of the magnetization vector can change in time. Using this generalized vector viscosity coefficient, we have analyzed data collapse phenomena in vectorial magnetization processes. It was found that the traditional bell-shaped curve of the scalar viscosity coefficient as a function of the applied field can have one or more maxima in the case of vectorial systems. Data collapse seems to apply to simple magnetization processes (such as first-order rotational reversal curves); however, it cannot be generalized to more complex magnetization processes because of the relatively complicated magnetization dynamics. In the final part of this dissertation we present a statistical technique based on Monte Carlo simulations, which we developed to compute the spectral densities of the output variable in phenomenological models of hysteresis. The input signal is described by an Ornstein-Uhlenbeck process and the magnetization is computed using various phenomenological models of hysteresis: the Energetic, Jiles-Atherton, and Preisach models. General qualitative features of these spectral densities are examined and their dependence on various parameters is discussed. For values of the diffusion coefficient near and smaller than the coercive field, the output spectra deviate significantly from the Lorentzian shape characteristic of the input process. The intrinsic differences between transcendental, differential, and integral modeling of hysteresis yield significantly different spectra in the low-frequency region, which reflect the long-time correlation behavior.
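The abstract's basic assumption — total field = external field + random perturbation field, applied to an ensemble of hysteretic units — can be illustrated with a toy Monte Carlo sketch. This is not the dissertation's model: it replaces the Preisach/Energetic/Jiles-Atherton models with the simplest possible hysteretic element, a rectangular bistable loop, and all distribution parameters below are assumptions chosen for the example. The broad spread of switching fields is what produces the slow, roughly logarithmic relaxation the abstract describes.

```python
import random

def aftereffect_decay(n_units=1000, steps=300, h_ext=0.0, sigma=0.5, seed=1):
    """Toy Monte Carlo aftereffect simulation: an ensemble of rectangular
    (bistable) hysteresis units with distributed switching fields, driven
    each time step by the external field plus a Gaussian perturbation
    field.  Returns the average magnetization trace over time."""
    rng = random.Random(seed)
    # Switching fields drawn from an assumed broad distribution;
    # all units start in the "up" (+1) state, i.e. fully magnetized.
    hc = [abs(rng.gauss(1.0, 0.4)) for _ in range(n_units)]
    state = [1.0] * n_units
    trace = []
    for _ in range(steps):
        for i in range(n_units):
            # Basic assumption from the abstract: total field is the
            # external field plus a random perturbation field.
            h = h_ext + rng.gauss(0.0, sigma)
            if h > hc[i]:
                state[i] = 1.0
            elif h < -hc[i]:
                state[i] = -1.0
        trace.append(sum(state) / n_units)
    return trace
```

Units with small switching fields relax almost immediately, while units with large switching fields relax only rarely; the superposition of these widely spread relaxation times is what makes the ensemble-average decay look logarithmic in time rather than exponential.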
 Date Issued
 2009
 Identifier
 FSU_migr_etd0119
 Format
 Thesis
 Title
 Real-Time Small Signal Stability Assessment of the Power Electronic-Based Components in Contemporary Distribution Systems.
 Creator

Salmani, M. Amin, Edrington, Christopher S., Ordonez, Juan Carlos, Andrei, Petru, Foo, Simon Y., Florida State University, College of Engineering, Department of Electrical and Computer Engineering
 Abstract/Description

Power Electronic-based Distribution Systems (PEDS) can provide excellent features such as load regulation, high power factor, and good transient performance, especially in large-scale grids that are highly penetrated with renewable energy resources and innovative Power Electronic-based Components (PECs) such as Solid State Transformers (SSTs), Fault Isolation Devices (FIDs), machine drives, and inverters. Conversely, they are prone to exhibit negative-impedance instabilities due to the regulated output voltage, high power factor, and constant-power nature of the individual components in the system. Therefore, small-signal and large-signal stability assessments of the PEDS play a prominent role in the different stages of system analysis, such as the pre-operational (design), operational, and post-operational stages. Herein, various stability analysis techniques, along with their pros and cons, are described. This work proposes a novel "real-time" stability analysis criterion and technique to assess small-signal stability of the PECs in contemporary distribution systems. This consists of a new small-signal stability criterion, as well as an appropriate technique to assess small-signal stability of the PECs based on the proposed criterion. The proposed criterion is developed from the dq impedance measurement technique and the Nyquist criterion. The advantages of the proposed criterion and technique include suitability for real-time applications, simplicity of development in software and hardware, and the use of a powerful algorithm to address small-signal stability of the PEDS. The primary contribution of this work is the real-time stability analysis methodology; more specifically, the capability of the proposed criterion and technique to be implemented on a real-time platform. The parallel perturbation of source and load is one of the key features of the proposed method that enables real-time capability. 
In addition, the proposed stability criterion, based on impedance measurement and the Nyquist stability criterion, contributes higher accuracy in small-signal stability assessments by providing a complete Nyquist contour of the system's return-ratio matrix. Ultimately, this yields lighter computational loads, faster computation times, and more accurate evaluation of the system's stability, in a way that enables assessment of both the relative and absolute stability of the PEDS. Another advantage of the proposed technique is that it takes part of the system's nonlinearities into account by perturbing the system with a chirp signal over a range of frequencies, rather than at the fundamental frequency exclusively. Hardware development and experimental implementation are also presented in this work. In the experimental implementation, an Impedance Measurement Unit (IMU) is developed via a Power Hardware-in-the-Loop (PHIL) experiment and measures source and load impedances in real time. Subsequently, the proposed stability criterion is implemented on a Real-Time Digital Simulator (RTDS) and, utilizing information from the developed IMU, the small-signal stability of the test bed is investigated in real time.
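The impedance-based idea underlying criteria of this family can be sketched in a few lines. The dissertation's criterion works on the full dq return-ratio matrix and its complete Nyquist contour; the code below shows only the much simpler scalar screen (a Middlebrook-style gain-margin check on the minor loop gain Z_source/Z_load), as a hypothetical illustration of why measured source and load impedances are enough to flag negative-impedance instability. The impedance values in the usage are made up.

```python
import math

def minor_loop_gain(z_source, z_load):
    """Return the minor loop gain L(jw) = Z_source / Z_load at each
    measured frequency point (lists of complex impedances)."""
    return [zs / zl for zs, zl in zip(z_source, z_load)]

def impedance_ratio_stable(z_source, z_load, margin_db=6.0):
    """Simplified scalar screen for impedance-ratio stability: if
    |Z_source| stays below |Z_load| by the gain margin at every
    frequency, the minor loop gain cannot reach -1, so the Nyquist
    contour cannot encircle it.  (Sufficient, not necessary; the
    full criterion examines the complete Nyquist contour instead.)"""
    gm = 10 ** (-margin_db / 20.0)
    return all(abs(l) < gm for l in minor_loop_gain(z_source, z_load))
```

A stiff load fed from a low source impedance passes the screen; once the source impedance magnitude approaches the load's (as with a constant-power load presenting a small incremental input impedance), the screen fails and the full Nyquist contour must be examined.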
 Date Issued
 2014
 Identifier
 FSU_migr_etd9242
 Format
 Thesis
 Title
 Techniques to Improve the Accuracy of System Identification in Non-Gaussian and Time-Varying Environments.
 Creator

Ta, Minh Quang, DeBrunner, Victor, Chicken, Eric, DeBrunner, Linda, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Estimation of a dynamical system under unknown influences is always subject to uncertainty. Thus, reducing the estimation variance under external influences is highly desirable and motivates the field of System Identification. In this dissertation, the author proposes new techniques for system identification in two general situations: offline estimation of fixed systems under unknown, non-Gaussian distributed measurement noise, and online estimation of time-varying systems undergoing systematic (long-term correlated) changes. For the first situation, a technique called Minimum Entropy Estimation is employed, which promises to be better than the traditional Least Squares (LS) estimation method due to its ability to simultaneously estimate the system and the statistical properties of the unknown measurement noise sequence. This method gives rise to two novel classes of generalized offline estimation algorithms proposed in this dissertation: a method for estimating Multiple-Input Multiple-Output (MIMO) systems under unknown, independent and identically distributed (i.i.d.) non-Gaussian measurement noise, and a more general method for estimating a feedback structure under unknown, possibly colored, non-Gaussian distributed measurement noise. For the second situation, a new Parameter-Filtering Adaptation (PFA) algorithm is proposed for the first time as an attempt to solve this problem and improve the estimation quality. Instead of updating the parameter based on the prediction error and an estimate of the parameter at a single time iteration (the one before the current iteration), as in traditional adaptive algorithms, the new method improves the estimation quality of the system parameter by incorporating its prediction from all previous estimated values. 
The parameter prediction transfer function itself is also updated adaptively. The PFA algorithm is first considered in the context of IIR filter estimation to show the benefit of a better local quadratic approximation of the time-varying, non-quadratic error surface. Its application to the spectral estimation of time-varying chirps using Adaptive Notch Filters has shown a substantially better estimate of the instantaneous frequency. In the context of estimating time-varying systems using FIR filters, it is discovered that the PFA has a filtering effect on the sequence of the (estimated) parameters. Consequently, it is shown in the dissertation that the sparser the parameter variations are in the frequency domain (the less frequency bandwidth they occupy), the better their estimation quality. Simulations on tracking sinusoidal time-varying systems, as well as periodically switching systems, show that the PFA has superior estimation quality with virtually no lag compared to traditional tracking methods.
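The filtering effect on the parameter sequence can be illustrated with a deliberately simplified stand-in: scalar LMS identification with a fixed exponential low-pass filter applied to the trajectory of parameter estimates. This is not the PFA algorithm itself (whose prediction filter is adapted online and incorporates all previous estimates); it only shows the underlying intuition that smoothing the parameter sequence reduces estimation variance when the true parameter varies slowly relative to the adaptation noise. The signal model, step sizes, and noise level below are assumptions for the example.

```python
import random

def lms_with_parameter_filtering(x, d, mu=0.05, alpha=0.1):
    """Scalar LMS identification of w in d = w*x + noise, with an extra
    exponential low-pass filter on the sequence of parameter estimates —
    a toy stand-in for the filtering effect described in the abstract.

    Returns (raw, filt): the raw LMS estimate trajectory and the
    low-pass-filtered trajectory."""
    w = 0.0            # raw LMS estimate
    w_f = 0.0          # filtered estimate
    raw, filt = [], []
    for xn, dn in zip(x, d):
        e = dn - w * xn            # prediction error
        w += mu * e * xn           # standard LMS update
        w_f += alpha * (w - w_f)   # low-pass the parameter trajectory
        raw.append(w)
        filt.append(w_f)
    return raw, filt
```

For a parameter that is constant (zero bandwidth, the extreme case of spectral sparsity), the filtered trajectory tracks the same value with visibly less variance than the raw LMS estimate; the trade-off is added lag when the parameter varies quickly, which is exactly what PFA's adaptive prediction filter is designed to manage.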
 Date Issued
 2008
 Identifier
 FSU_migr_etd0311
 Format
 Thesis