Theses - Engineering

Recent Submissions

Now showing 1 - 20 of 1091
  • ItemOpen Access
    Polyhedral Computation for Differential System Analysis and Control
    Kousoulidis, Dimitris; Kousoulidis, Dimitris [0000-0002-1508-2403]
    In this thesis we investigate the use of polyhedra in the analysis and design of dynamical systems. The main motivation behind the use of polyhedra in this context is that they can, in principle, provide arbitrarily tight conditions on stability, monotonicity, and some system gains for a large class of systems. However, finding a suitable polyhedron is a difficult problem. This is to a large extent inevitable, since many of the above problems are known to be computationally intractable. Despite this, the conditions that a polyhedron must satisfy in the above problems have a strong geometric intuition and a fundamental connection to Linear Programming (LP), allowing for the development of effective and sound heuristics. These can be very valuable because they allow us to better leverage all computational power available and because for many practical scenarios, especially those involving design, tight results might not be necessary but any improvements over existing relaxations are still beneficial. The main contribution of this thesis is the development, presentation, and evaluation of such heuristics for variations of the aforementioned problems. A central idea is the use of LP not only to verify conditions for a given polyhedron, but also to iteratively refine a candidate polyhedron through a local optimisation procedure. This allows for a fine-tuned trade-off between conservativeness and computational tractability and can be used for both analysis and design. For each of the problems considered we also include numerical case studies that demonstrate the effectiveness of this idea in practice. However, more work is necessary to establish theoretical guarantees about the performance and convergence of this approach. We also provide a unified exposition on polyhedra with a focus on computational considerations and the differences between their two representations, including a novel characterisation of the subdifferential of polyhedral functions in one of the representations that leads to novel dissipativity conditions for bounding the L1 gain of systems. Differential analysis is used throughout to link the conditions on the polyhedra to the resulting system behaviour. We hope that this research broadens the applicability of polyhedral computation in systems and control theory and opens a promising avenue for future research.
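    A minimal illustration of the LP connection described above, under stated assumptions: it checks positive invariance of a polytope {x : Hx ≤ 1} for a linear system ẋ = Ax by solving one linear program per facet (a sub-tangentiality check). The matrices below and the use of scipy.optimize.linprog are illustrative choices, not the iterative refinement procedure developed in the thesis.
```python
# Sketch: certify positive invariance of the polytope P = {x : Hx <= 1} for the
# linear system xdot = A x by solving one LP per facet. P is invariant iff, on
# each facet {x in P : H_i x = 1}, the flow does not point outward, i.e.
#   max { H_i A x : H x <= 1, H_i x = 1 } <= 0.
import numpy as np
from scipy.optimize import linprog

def facet_lps_certify_invariance(H, A, tol=1e-9):
    """True if every facet LP certifies sub-tangentiality of xdot = A x on P."""
    n_rows, n = H.shape
    for i in range(n_rows):
        c = -(H[i] @ A)                       # linprog minimises, so negate to maximise H_i A x
        res = linprog(c, A_ub=H, b_ub=np.ones(n_rows),
                      A_eq=H[i:i + 1], b_eq=[1.0],
                      bounds=[(None, None)] * n)
        if res.status == 2:                   # facet empty (redundant inequality): nothing to check
            continue
        if -res.fun > tol:                    # flow exits through facet i
            return False
    return True

# Illustrative data (not from the thesis): a stable system and the unit box.
A = np.array([[-1.0, 0.5],
              [ 0.5, -1.0]])
H = np.array([[ 1.0,  0.0], [-1.0,  0.0],
              [ 0.0,  1.0], [ 0.0, -1.0]])    # |x1| <= 1, |x2| <= 1
print(facet_lps_certify_invariance(H, A))     # True: the box is invariant for this A
```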
  • ItemOpen Access
    Development and Optimisation of an Optofluidic Evanescent Field Nano Tweezer System for Trapping Nanometre Crystals for Synchrotron X-Ray Diffraction Experiments
    Diaz, Alexandre
    X-ray crystallography (the analysis of the diffraction pattern produced when a high-power X-ray beam passes through a crystal) has become the preferred method for investigating complex molecular structures. Research has continued to improve both the hardware and software elements of this methodology, refining critical factors such as detectors, beam generation and post-modelling computation to keep increasing performance. One overlooked yet critical factor is increasingly impacting the developmental progress of this technology: sample loading. This is currently achieved via expensive electromechanical and robotic systems, but technological advancement could increase measurement accuracy by improving positional accuracy, maintain crystal integrity by keeping crystals in their native solution, and reduce experimentation times through faster loading; these factors currently limit the minimum sample crystal size to >5 µm. A by-product would be a reduction in investigation costs through shorter beamline set-up and testing times, potentially opening the technique to other research fields, e.g. biomedical, archaeological or engineering. This work focuses on a specific sub-section of the loading challenge, introducing a method which could achieve reliable sample loading of crystals below 5 µm, thus opening up the low-micro and nano crystallography frontier. The solution proposed in this thesis implements a novel sample loading method: an optofluidic system which combines the advantages of both evanescent field optical tweezing and microfluidics. A low-cost commercial Nanotweezer system and optofluidic chip technology were assessed via interferometry and SEM microscopy to determine the microfluidic and waveguide architecture. It was shown that across several chip batches the waveguide dimensions remained similar but with notable structural variations due to imperfect production and poor post-manufacturing modifications, highlighting the need for an alternative chip design and manufacturing process. Factors affecting sample generation and stabilisation, crystallisation and microfluidic parameters were also investigated through a series of tweezing trials on latex nanospheres and lysozyme crystals. Results indicated that an operational temperature of 2°C and a horizontal chip orientation (0° being optimal, but effective up to 60°) were the most critical factors. COMSOL modelling highlighted that both the evanescent field form and intensity could be tailored to the crystal size and shape via the addition of either “doughnut” or “bullseye” nano-plasmonic antennas on the surface of the optofluidic chip waveguides. When combined with a 1-D PhC resonator array, they formed a hybrid waveguide that increased the electric field intensity from 1.10 × 10⁷ V/m to 1.70 × 10⁷ V/m. A prototype manufacturing route for the refined architecture was evaluated. A first batch of optofluidic chips was upgraded using focused ion beam assisted gas deposition to generate the platinum plasmonic antennas. This method, however, proved unsuccessful, so the remaining batch was upgraded using e-beam lithography to generate gold plasmonic antennas. While satisfactory launching of the newly upgraded chips was not achieved due to technical limitations, characterisation testing of the default waveguide demonstrated microscale (2 µm) and sub-micron (0.8 µm) tweezing, and that such performance could theoretically be enhanced with the addition of the plasmonic antenna structures.
  • ItemOpen Access
    Learning Monocular Cues in 3D Reconstruction
    Bae, Gwangbin; Bae, Gwangbin [0000-0003-4189-3493]
    3D reconstruction is a fundamental problem in computer vision. While a wide range of methods has been proposed within the community, most of them have fixed input and output modalities, limiting their usefulness. In an attempt to build bridges between different 3D reconstruction methods, we focus on the most widely available input element -- *a single image*. While 3D reconstruction from a single 2D image is an *ill-posed* problem, monocular cues -- e.g. texture gradients and vanishing points -- allow us to build a plausible 3D reconstruction of the scene. The goal of this thesis is to propose new techniques to learn monocular cues and demonstrate how they can be used in various 3D reconstruction tasks. We take a data-driven approach and learn monocular cues by training deep neural networks to predict pixel-wise surface normal and depth. For surface normal estimation, we propose a new parameterisation for the surface normal probability distribution and use it to estimate the aleatoric uncertainty associated with the prediction. We also introduce an uncertainty-guided training scheme to improve the performance on small structures and near object boundaries. Surface normals provide useful constraints on how the depth should change around each pixel. By using surface normals to propagate depths between pixels, we demonstrate how depth refinement and upsampling can be formulated as a classification of choosing the neighbouring pixel to propagate from. We then address three challenging 3D reconstruction tasks to demonstrate the usefulness of the learned monocular cues. The first is multi-view depth estimation, where we use single-view depth probability distribution to improve the efficiency of depth candidate sampling and enforce the multi-view depth to be consistent with the single-view predictions. We also propose an iterative multi-view stereo framework where the per-pixel depth distributions are updated via sparse multi-view matching. We then address human foot reconstruction and CAD model alignment to show how monocular cues can be exploited in prior-based object reconstruction. The shape of the human foot is parameterised by a generative model while the CAD model shape is known *a priori*. We substantially improve the accuracy for both tasks by encouraging the rendered shape to be consistent with the single-view depth and normal predictions.
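    As a hedged sketch of the geometric step behind normal-guided depth propagation mentioned above: under a pinhole camera model, a neighbouring pixel's depth and predicted surface normal define a tangent plane, and intersecting the target pixel's viewing ray with that plane yields a propagated depth. The intrinsics, pixel coordinates and normal below are illustrative; the learned classification over neighbouring pixels is not reproduced.
```python
# Sketch: plane-based depth propagation under a pinhole model. The target pixel
# is assumed to lie on the tangent plane defined by the neighbour's 3D point and
# its predicted surface normal.
import numpy as np

K = np.array([[500.0,   0.0, 320.0],          # illustrative camera intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

def propagate_depth(u_target, u_neighbour, d_neighbour, n_neighbour):
    """Depth at u_target implied by the neighbour's depth and surface normal."""
    r_n = K_inv @ np.array([*u_neighbour, 1.0])              # neighbour's viewing ray
    r_t = K_inv @ np.array([*u_target, 1.0])                 # target's viewing ray
    p_n = d_neighbour * r_n                                  # neighbour's 3D point
    return float(n_neighbour @ p_n / (n_neighbour @ r_t))   # ray-plane intersection

# Illustrative values: a surface tilted away from the camera, about 2 m away.
n = np.array([0.2, 0.0, -1.0]); n /= np.linalg.norm(n)
print(propagate_depth(u_target=(330, 240), u_neighbour=(329, 240), d_neighbour=2.0, n_neighbour=n))
```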
  • ItemOpen Access
    On Aerothermal Optimization of Low-Pressure Steam Turbine Exhaust System
    Cao, Jiajun
    This thesis addresses two challenges in the aerothermal optimization of the Low-Pressure Steam Turbine Exhaust System (LPES). The first is the high computational cost due to the complexity of LPES geometry. To make things worse, designers have to consider the extra cost caused by changes of constraints and by multi-objective and multi-disciplinary optimization. The second is the vulnerability of optimization due to the lack of comprehensive validation of numerical simulation. Sparse experimental data from an LPES rig can only give limited and sometimes misleading information for validation. To reduce the computational cost, a commonly used approach is to build a surrogate model. However, manual parametrization of high-dimensional geometries like the LPES is unreliable. Thus, a Non-Parametric Surrogate Model (NPSM) is developed, which directly builds a mapping relationship between the surface mesh and the two-dimensional distribution of fluid variables. It can select sensitive geometric features from surface meshes with a Graph Neural Network (GNN) encoder according to the back-propagated prediction error, which reduces uncertainties caused by manual parameterization and gains the ability to process designs defined by different geometry generation methods. Based on the NPSM, a non-parametric sensitivity analysis is conducted, which can calculate the distribution of sensitivity on the surface meshes. It can help users to identify important geometric features and redistribute the control points of the geometry. Furthermore, a design classifier is built to detect predictable designs for the NPSM, thereby preventing compromises to the robustness of the optimization. To enhance the robustness of optimization, validation of numerical simulation by experiment is essential, but the sparsity of experimental data due to the large volume of the LPES prevents a comprehensive comparison. This thesis demonstrates a Physics-Informed Neural Networks (PINNs)-based method to reconstruct sparse data, which has much better performance than interpolation. In addition, it can be used to detect anomalies, which prevents data contamination due to mistakes in experiments. A Non-Uniform Rational B-spline (NURBS)-based optimization algorithm is also presented in this thesis. It generates surface meshes for the NPSM and volume meshes for the CFD solver based on control points given by the optimizer. The conversion process is achieved by the evaluation of NURBS surfaces and a parabolic mesh generator, which provides more degrees of freedom and keeps mesh generation robust.
  • ItemEmbargo
    High Bandwidth Rogowski Coils, Commutation Loop Stray Inductance, and the Si IGBT and the SiC MOSFET Based Hybrid Concept
    Zhang, Tianqi
    Wide bandgap (WBG) semiconductor power devices are attracting increasing attention due to their superior performance across various applications. For accurate measurement of these devices, a current sensor with a minimum bandwidth of 70 MHz is required. This thesis thoroughly explores the design and analysis of both toroidal and solenoidal printed circuit board (PCB) Rogowski coils. The associated parameters of these Rogowski coils are meticulously extracted using ANSYS Q3D Extractor and subsequently measured with the Tektronix TTR500 vector network analyser (VNA). Following this, the thesis presents the design of an op-amp-based integrator, with parameters imported into LTspice for simulation. The bandwidths of the Rogowski coil-integrator assemblies are measured using the VNA, which reveals that the bandwidth of a 10-turn solenoidal PCB Rogowski coil impressively exceeds 300 MHz. Moreover, a comparative analysis of handmade coils and PCB coils is conducted. A relay-based edge generator, designed to facilitate time-domain testing of Rogowski coils, is capable of producing a voltage rise from 10% to 90% in just approximately 6 ns. The Rogowski coil measurement closely correlates with the onboard current measurement, thus confirming the validity and effectiveness of the design approaches presented within this thesis. This thesis also provides a comprehensive introduction to the fundamental aspects of the metal-oxide-semiconductor field-effect transistor (MOSFET) and the insulated-gate bipolar transistor (IGBT), deepening the reader’s understanding of these essential components in power electronics. It then focuses on the current commutation loop inductance, underscoring the importance of minimising this parameter to optimise the design and efficiency of converters and modules, especially when employing WBG power devices in high-speed applications. The thesis introduces various strategies to mitigate loop stray inductance, emphasising the use of compact PCB layouts and laminated bus bars. Additionally, it presents an experimental methodology for extracting loop inductance values, illustrated with a practical example that demonstrates the extraction process. To fully capitalise on the advantages of the Si IGBT’s low conduction loss and the silicon carbide (SiC) MOSFET’s low switching loss, a hybrid parallel connection of a Si IGBT and a SiC MOSFET is proposed. Six gate signal control strategies are introduced and evaluated. The total power loss and device junction temperatures across various load currents and switching frequencies are simulated, investigated and analysed using LTspice and PLECS. For light loads, the SiC MOSFET operates independently. In the case of heavy loads, the Si IGBT takes charge of conduction, while the SiC MOSFET serves as a current bypass to assist in IGBT switching. Simulation results indicate that the hybrid parallel connection can significantly reduce both the total loss and thermal requirements. Moreover, the hybrid parallel connection can also minimise the impact of the freewheeling diode’s (FWD) reverse recovery during the IGBT’s turn-on phase.
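    For context, a minimal sketch of the measurement principle behind the Rogowski coil and integrator assembly: the coil output is v(t) = M·di/dt, and (ideally) integrating this voltage and dividing by the mutual inductance M recovers the current. The mutual inductance, edge shape and noise-free integration below are illustrative assumptions, not the designed coils or op-amp integrator.
```python
# Sketch of the Rogowski-coil principle: the coil outputs v(t) = M * di/dt, and an
# ideal integrator recovers i(t) = (1/M) * integral(v dt). Values are illustrative.
import numpy as np

M = 50e-9                                             # assumed mutual inductance, 50 nH
t = np.linspace(0.0, 50e-9, 5001)                     # 50 ns window
i = 100.0 * 0.5 * (1 + np.tanh((t - 20e-9) / 2e-9))   # ~100 A edge with a few-ns rise
v = M * np.gradient(i, t)                             # coil voltage, v = M di/dt

# Trapezoidal integration of the coil voltage, scaled by 1/M, recovers the current.
i_rec = np.cumsum(np.concatenate(([0.0], 0.5 * (v[1:] + v[:-1]) * np.diff(t)))) / M
print(np.max(np.abs(i_rec - (i - i[0]))))             # reconstruction error (~0 for the ideal model)
```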
  • ItemOpen Access
    Active and Semi-Supervised Learning for Speech Recognition
    Kreyssig, Florian
    Recent years have seen significant advances in speech recognition technology, which can largely be attributed to the combination of the rise in deep learning in speech recognition and an increase in computing power. The increase in computing power enabled the training of models on ever-expanding data sets, and deep learning allowed for the better exploitation of these large data sets. For commercial products, training on multiple thousands of hours of transcribed audio is common practice. However, the manual transcription of audio comes with a significant cost, and the development of high-performance systems is typically limited to commercially viable tasks and languages. To promote the use of speech recognition technology across different languages and make it more accessible, it is crucial to minimise the amount of transcribed audio required for training. This thesis addresses this issue by exploring various approaches to reduce the reliance on transcribed data in training automatic speech recognition systems through novel methods for active learning and for semi-supervised learning. For active learning, this thesis proposes a method based on a Bayesian framework termed NBest-BALD. NBest-BALD is based on Bayesian Active Learning by Disagreement (BALD). NBest-BALD selects utterances based on the mutual information between the prediction for the utterance and the model parameters, i.e. I[θ, w|Dl, Xi]. Monte-Carlo Dropout is used to approximate sampling from the posterior of the model parameters and an N-Best list is used to approximate the entropy over the hypothesis space. Experiments on English conversational telephony speech showed that NBest-BALD outperforms random sampling and prior active learning methods that use confidence scores or the NBest-Entropy as the informativeness measure. NBest-BALD increases the absolute Word Error Rate (WER) reduction obtained from selecting more data by up to 14% as compared to random selection. Furthermore, a novel method for encouraging representativeness in active data selection for speech recognition was developed. The method first builds a histogram over the lengths of the utterances. In order to select an utterance, a word length is sampled from the histogram, and the utterance with the highest informativeness within the corresponding histogram bin is chosen. This ensures that the selected data set has a similar distribution of utterance lengths to the overall data set. For mini-batch acquisition in active learning on English conversational telephony speech, the method significantly improves the performance of active learning for the first batch. The histogram-based sampling increases the absolute WER reduction obtained from selecting more data by up to 57% as compared to random selection and by up to 50% as compared to an approach using informativeness alone. A further contribution to active learning in speech recognition was the definition of a cost function, which takes into account the sequential nature of conversations and meetings. The level of granularity at which data should be selected given the new cost function was examined. Selecting data on the utterance-level, as fixed-length chunks of consecutive utterances, as variable-length chunks of consecutive utterances and on the side-level were examined. The cost function combines a Real-Time Factor (RTF) for the utterance length (in seconds) with an overhead for each utterance (t1) and an overhead for a chunk of consecutive utterances (t2). 
The overhead t2 affects the utterance-level selection method (which previous methods in the literature rely on) the most, and this level of granularity yielded the worst speech recognition performance. This result showed that it is crucial to focus on methods for selection that can take a better cost function into account. For semi-supervised learning, the novel algorithm cosine-distance virtual adversarial training (CD-VAT) was developed. Whilst not directed at speech recognition, this technique was inspired by initial work towards using consistency-regularisation for speech recognition. CD-VAT allows for semi-supervised training of speaker-discriminative acoustic embeddings without the requirement that the set of speakers is the same for the labelled and the unlabelled data. CD-VAT is a form of consistency-regularisation where the supervised training loss is interpolated with an unsupervised loss. This loss is the CD-VAT-loss, which smoothes the model’s embeddings with respect to the input as measured by the cosine-distance between an embedding with and without adversarial noise. For a large-scale speaker verification task, it was shown that CD-VAT recovers 32.5% of the Equal Error Rate (EER) improvement that would be obtained when all speaker labels are available for the unlabelled data. For semi-supervised learning for speech recognition, this thesis proposes two methods to improve the input tokenisation that is used to derive the training targets that are used in masked-prediction pre-training; a form of self-supervised learning. The first method is biased self-supervised learning. Instead of clustering the embeddings of a model trained using unsupervised training, it clusters the embeddings of a model that was finetuned for a small number of updates. The finetuning is performed on the small amount of supervised data that is available in any semi-supervised learning scenario. This finetuning ensures that the self-supervised learning task is specialised towards the task for which the model is supposed to be used. Experiments on English read speech showed that biased self-supervised learning can reduce the WER by up to 24% over the unbiased baseline. The second method replaces the K-Means clustering algorithm that was previously used to tokenise the input with a Hidden Markov Model (HMM). After training, the tokenisation of the input is performed using the Viterbi algorithm. The result is a tokenisation algorithm that takes the sequential nature of the data into account and can temporally smooth the tokenisation. On the same English read speech task, the HMM-based tokenisation reduces the WER by up to 6% as compared to the tokenisation that uses K-Means.
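    A minimal sketch of the histogram-based acquisition described above: a length bin is sampled from the pool's length histogram, and the most informative not-yet-selected utterance in that bin is chosen, so that the selected set follows the overall length distribution. The informativeness scores and pool statistics below are synthetic placeholders (in practice they would come from NBest-BALD or another informativeness measure).
```python
# Sketch: representativeness-aware active data selection. Sample a length bin from
# the pool's histogram, then pick the most informative remaining utterance in it.
import numpy as np

def histogram_select(lengths, informativeness, n_select, n_bins=20, seed=0):
    rng = np.random.default_rng(seed)
    lengths = np.asarray(lengths, dtype=float)
    informativeness = np.asarray(informativeness, dtype=float)
    counts, edges = np.histogram(lengths, bins=n_bins)
    bin_of = np.clip(np.digitize(lengths, edges[1:-1]), 0, n_bins - 1)
    selected, remaining = [], set(range(len(lengths)))
    while remaining and len(selected) < n_select:
        b = rng.choice(n_bins, p=counts / counts.sum())       # sample a length bin
        cand = [i for i in remaining if bin_of[i] == b]
        if not cand:                                          # exhausted bin: resample
            continue
        best = max(cand, key=lambda i: informativeness[i])    # most informative in that bin
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy pool: 1000 utterances with synthetic lengths (s) and informativeness scores.
rng = np.random.default_rng(1)
print(histogram_select(rng.gamma(2.0, 3.0, 1000), rng.random(1000), n_select=5))
```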
  • ItemOpen Access
    Atomically thin microwave biosensors
    Gubeljak, Patrik; Gubeljak, Patrik [0000-0001-6955-419X]
    This thesis describes the development of a new type of biosensor based on the integration of atomically thin graphene in a broadband radio-frequency (RF) coplanar waveguide (CPW). To combine both electrochemical and dielectric sensing concepts, a graphene channel is inserted into the CPW, which can then be functionalised using standard chemical processes developed for graphene direct-current (DC) sensors. The new RF sensors shown in this work inherit the strong response of graphene at low analyte concentrations, while enabling RF methods at concentrations not possible before. This is shown by a limit of detection for glucose, based on RF measurements, that is several orders of magnitude lower than previously reported in the literature, enabling measurements of glucose at levels found in human sweat. The combined effect of the two sensing responses can be seen as the high sensitivity at low concentrations, 7.30 dB/(mg/L), significantly higher than that of state-of-the-art metallic RF sensors, which gradually decreases as the graphene saturates and the response at higher concentrations aligns with that of previously reported metal sensors. This shows the extended operating range of the new sensors compared to the literature state-of-the-art. Similarly, for DNA strands, the graphene CPW sensor was able to distinguish between different types of DNA (perfect match, single mutation, and complete mismatch) at the limit of detection, 1 aM, improving the concentration response by orders of magnitude compared to existing RF DNA sensors and allowing for distinction of the types at lower concentrations than those in contemporary DC graphene sensors. An additional effect of the graphene conductivity being tunable independently of the analyte is the creation of multidimensional datasets for each *S*-parameter component by combining the frequency and potential responses. The resultant sensor response surfaces contain more information that can be used to determine the concentration and type of the analyte. In particular for DNA, the multidimensional approach, by considering the frequency and biasing condition giving the largest joint changes in the parameters, allowed for direct classification of the three DNA analyte types based on the sign of the changes in the parameters ($ℜ/ℑS_{ij}$ or $mag/∠S_{ij}$). Because of the multidimensional nature of the dataset, machine learning algorithms could be applied to extract features used for determining concentrations based on the raw measurements. For glucose, the concentration was predicted with greater than 98% confidence, while the more feature-rich surfaces of the DNA response allowed the determination of the analyte type at 1 aM even with a simulated signal-to-noise ratio of 10 dB, proving the utility and resilience of the developed devices and measurement approach. The above approaches were only enabled by the new type of sensor combining both graphene's electronic properties and RF sensing concepts in a single device.
  • ItemOpen Access
    Humanitarian Sheltering: Analysing Global Structures of Aid
    George, Jennifer; George, Jennifer [0000-0002-0580-7386]
    The provision of shelter is an integral part of humanitarian response, in aiding communities affected by crises, in post-disaster, post-conflict and complex situations. Indeed, prior research has identified the wider impacts that shelter can have in these situations, across health, livelihoods, economic stimulation, education, food and nutrition, and reducing vulnerability. However, there is still a lack of understanding of the processes involved in shelter programming, the key decisions directing shelter response, and the influence that different stakeholders hold over those decisions. This thesis analyses the global shelter sector through a systems-thinking approach, including the structures which affect behaviour in this system, the relationship between different actors, and the relationship between decisions taken over time. It identifies key decision makers and decision moments which directly and indirectly influence outputs in shelter projects, analysing controls on decision-making and the complexity of humanitarian governance in shelter projects. This is achieved through expert interviews, analysis of historic cases of shelter and current guidance, and participant observation. This research reveals that community involvement in decision-making is often a very constrained exercise, despite repeated rhetoric over its necessity for project success. It also illustrates the top-down power dynamics that exist in decision-making, oftentimes hidden behind the supposed technocratic focus of the shelter and settlements sector. This includes influence over projects by donors, governments, the humanitarian system, private sector, and public opinion. It examines perceived constraints in depth, including donor policies and funding timelines, political priorities of national governments, humanitarian mandates and priorities, private sector partnerships, iconography of shelter, and the role of affected populations themselves. This thesis will show that humanitarian shelter should be re-defined at a policy level as ‘an enabled process to facilitate a living environment with crisis-affected communities and individuals to meet their current and future needs, whilst also having due consideration for the needs of the host communities and environment’. This is required to shift perceptions of shelter across actors who are traditionally outside of the shelter sector and incorporate the learning in shelter and settlements that has occurred over the last forty years.
  • ItemEmbargo
    Conductive PEDOT:PSS fibres for modelling and assessing oligodendrocyte ensheathment
    Liu, Ruishan
    Oligodendrocytes are recognised for their capacity to ensheath neuronal axons with tightly packed, multi-layered cell membranes, forming the myelin sheath. This myelin sheath plays a pivotal role as an insulating layer surrounding neuronal axons. It effectively segregates the conductive environments inside and outside neuronal axons, thereby preventing ion leakage and minimising signal loss. The loss of the myelin sheath can lead to physical and mental health issues, including disabilities and depression. Consequently, there is a pressing need for in-vitro models to investigate oligodendrocyte ensheathment behaviour. Current models often employ engineered nano- or micro-fibres to simulate neuronal axons but frequently lack electrical conductivity. To address this gap, this thesis introduces novel conductive fibres made from Poly(3,4-ethylenedioxythiophene):Poly(Styrene Sulfonate) (PEDOT:PSS). These biocompatible fibres possess electrical conductivity and mechanical stiffness, mimicking the diameter of axons while facilitating electrical stimulation and real-time impedance measurements. Examination of Scanning Electron Microscopy (SEM) images reveals that oligodendrocytes adhere to these fibres, extend their cell membranes, and ensheath the fibres. Immunofluorescence images further indicate that the oligodendrocyte expresses Myelin Basic Protein (MBP), which is a characteristic of myelinating oligodendrocytes and has the ability to compact multiple cell membrane layers. Additionally, the author has derived an equivalent circuit to interpret device-level impedance results.
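    As a loosely related illustration of how device-level impedance data of this kind are interpreted, the sketch below evaluates the frequency response of a simple assumed equivalent circuit (a series resistance with a parallel RC element). Both the topology and the component values are assumptions for illustration only; the equivalent circuit derived in the thesis is not reproduced here.
```python
# Sketch: impedance spectrum of an assumed equivalent circuit,
# Z(w) = R_s + R_m / (1 + j*w*R_m*C_m). All values are illustrative.
import numpy as np

R_s, R_m, C_m = 1e3, 50e3, 10e-9          # ohm, ohm, farad (illustrative)
f = np.logspace(1, 6, 6)                  # 10 Hz to 1 MHz
w = 2 * np.pi * f
Z = R_s + R_m / (1 + 1j * w * R_m * C_m)  # series resistance + parallel RC element
for fi, zi in zip(f, Z):
    print(f"{fi:10.1f} Hz   |Z| = {abs(zi):10.1f} ohm   phase = {np.degrees(np.angle(zi)):7.2f} deg")
```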
  • ItemControlled Access
    From Pine Cones to Minimal Surfaces: The Geometry and Mechanics of Morphing Bilayers
    Salsby, Barney
    Ubiquitous in nature is an inherent ability to program form to fulfill function. In thin biological structures, such as leaves and plants, growth, coupled with their slenderness, allows for a wealth of complex geometries to emerge. Underpinning these sophisticated phenomena is a problem of mechanics, in which growth, or more specifically volume changes, impart inhomogeneous strain profiles which frustrate the structure and trigger deformations by way of buckling or wrinkling. Inspired by this, researchers have mathematically and physically characterised these phenomena, enabling 2D sheets to be programmed into 3D shapes and to reconfigure between their different stable forms. This would have wide-ranging applications in a variety of contexts, such as robotics, where smart actuators and the mimicking of live tissue are needed, or deployable structures in aeronautics. A subset of this type of growth is that of bilayers, where the Uniform Curvature model has enabled researchers to successfully investigate problems of buckling and multistability. However, owing to the free edge, this model fails to capture higher-order effects pertaining to a boundary layer phenomenon, in which moments must dissipate, and hence the geometry must vary beyond the quadratic terms near the boundary. Traditionally, researchers have neglected their effect in view of the bulk behaviour. However, the resulting linearly scaled Gauss for the stretching energy does not distinguish between planform geometries, and given recent findings concerning the influence of edge effects on preferred bending direction, the validity of the Uniform Curvature model has been put into question. By introducing a ‘fictitious’ edge moment rotation, the energy function is reduced commensurately with the dissipation that occurs within this boundary layer. These reduction terms are a function of the curvatures and we obtain a system of algebraic equations. By consideration of the neutrally stable shell, we observe the altering of the preferred bending configuration and stability properties due to planform geometry effects, which we validate by way of a physical prototype. By introducing a non-linear scaling of Gauss for the stretching energy, we further investigate the cessation of multistability into monostability as aspect ratio is varied. By further coupling this non-linear scaling term with edge effects, we uncover a novel tristable structure and demonstrate how it can be straightforwardly fabricated in a table-top experiment. By use of soft elastomers, we investigate the one- and two-dimensional de-localisation of the boundary layer, noting a minimal surface for opposite-sense prestressing for two-dimensional de-localisation. This dissertation thus provides an insight into the role edge effects play on bilayers in the context of disparate planform geometries, which we further combine with a non-linear variation of the stretching energy, to see how the coupling of these accounts for the multistable properties and geometry as aspect ratio is varied. Beyond the insights expounded, the approach extends the Uniform Curvature model for the study, design and fabrication of morphing bilayers and their subsequent applications.
  • ItemEmbargo
    Flow visualisation and fundamental aspects of two-dimensional turbulent plumes
    Webb, James; Webb, James [0000-0002-1194-5582]
    Understanding the behaviour of turbulent, buoyant plumes is fundamental to modelling a multitude of fluid dynamics problems, for example, indoor airflows. Specifically, this thesis focuses on the turbulent plumes which develop above sources which directly supply buoyancy, but zero momentum, to the fluid. Sources of this type are particularly relevant to modelling the convective flows driven by underfloor heating in rooms. Indeed, this form of heating underpinned the interest of the financial sponsor of the research presented herein. Remarkably, whilst such flows have been the subject of a considerable body of research, this work begins by uncovering the fact that the far-field plume that forms above a heated plate is not well modelled by existing plume theory. The principal means of flow visualisation employed in this research to observe these plumes is shadowgraphy. Perhaps surprisingly, much of the underpinning theory of this method has, to date, been restricted to the case where the flow field of interest is illuminated by collimated light (i.e. illuminating light rays are parallel to each other). This is despite the fact that practical light sources produce a beam of light which diverges. The turbulent convective flow in the vicinity of a uniformly heated semi-infinite plate is investigated experimentally and theoretically. Guided by bespoke experimental observations, governing equations representing the conservation of mass, momentum and energy for this geometry are posed. On making the Boussinesq assumption, it is demonstrated that a so-called 'horizontal plume' grows linearly from the plate edge. It is shown that in this near-plate region the mean horizontal velocity and mean buoyancy scale proportionally and inversely proportionally to the cube root of the distance from the plate edge, respectively. A theoretical model for the far-field flow above a uniformly heated two-dimensional plate is developed by considering the merger of two 'horizontal plumes', which grow linearly from both edges of the two-dimensional plate. New experimental data is combined with the analytical model to identify an 'apparent source', from which the results of classic plume theory can be applied. As a result, the applicability of existing analytical plume theory is extended to an entire class of buoyancy sources to which it previously could not be applied; namely, finite-area sources that supply zero momentum to the fluid, for example heated surfaces. The new ability to model the convective flow above a heated surface is a fundamental step towards developing accurate, analytical predictive models for the environmental conditions resulting from underfloor heating. Whilst this was of particular interest to the financial sponsor, the wider applications encompass modelling the flow that develops from any two-dimensional area source of buoyancy, including the convective flow above long sections of road heated by the sun, or the cool plumes that descend below chilled ceiling beams, for example. In order to aid the interpretation of thermal plumes visualised using the shadowgraph method, including those of interest herein, existing shadowgraph models are extended to the practical case where diverging incident light is used to illuminate the flow field. Contrary to the standard case where collimated incident light is used, this results in shadowgrams which contain information about the spatial gradients of the refractive index field in three, rather than two, directions.
The role of a co-flowing ambient fluid on the behaviour of turbulent planar plumes is then examined. Conservation equations are derived and their solution demonstrates that the plume behaviour is governed by a plume source Richardson number and a non-dimensional co-flow strength. For planar plumes emanating into a quiescent, unstratified environment there is a single source Richardson number that results in an invariant local Richardson number at all heights. This result is extended to the case where a co-flow is present and it is demonstrated that this dynamical invariance is achieved for any pair of scaled source Richardson number and co-flow strength that sum to unity. It is also demonstrated that under certain conditions, the co-flow reduces the dilution of a plume, in some cases causing a concentration of buoyancy. A potential application of this is delivering cool air from a chilled beam at ceiling level to occupants within a room, whilst minimising the dilution (and warming) of this air.
  • ItemEmbargo
    Numerical Study of Acoustophoretic and Thermophoretic Aggregation of Micro- and Nano-Sized Particles
    Dong, Jing
    Precise manipulation of bio-micro/nanoparticles, including cells, platelets, bacteria, and extracellular vesicles, is critical for tumour diagnostics, infectious disease detection, and cell analysis. The acoustofluidic aggregation method offers significant advantages for microparticle aggregation: it is highly scalable, label-free, contact-free, biocompatible and non-invasive. For nanoparticle enrichment, thermophoresis has recently been proposed as an efficient way to accumulate nanovesicles in biomedical applications. Although numerous experimental and analytical studies have been undertaken to study the micro/nanoparticle aggregation mechanisms, few studies have been conducted to optimise the acoustophoresis or thermophoresis device design. Therefore, this dissertation aims to improve the efficiency and accuracy of micro/nanoparticle aggregation devices through numerical modelling using the finite element method. This research first conducts a numerical investigation of the acoustophoresis of microparticles suspended in a compressible liquid. The wall of the rectangular microchannel is made of Polydimethylsiloxane (PDMS), and Standing Surface Acoustic Waves (SSAW) are introduced into the channel from the bottom wall. The relative amplitude of the acoustic radiation force and the viscous drag force is evaluated for particles of different radii ranging from 0.1 μm to 15 μm. Only when the particle size is larger than a critical value can the particles accumulate at acoustic pressure nodes (PNs). While the displacement amplitude of the SSAW impacts the time scale of particle movement, it does not influence the final position of the particles. The efficiency of the particle accumulation depends on the microchannel height, so an extensive parametric study is then undertaken to identify the optimum microchannel height. The optimum height, when normalised by the acoustic wavelength, is found to be between 0.57 and 0.82. Second, a numerical model is established to investigate the effect of laser heating parameters on the thermophoretic enrichment of nanoparticles. In the thermophoresis enrichment system, a microchamber containing a particle/fluid mixture is sandwiched between a glass top, through which an infrared laser heat source is introduced, and a sapphire bottom, which has a high heat conductivity to prevent overheating. The radius of the final nanoparticle distribution is found to be approximately 1.25 times the laser spot radius. A reduction in the laser attenuation length leads to a reduction of the time taken by the nanoparticles to reach the steady state, but an enlarged final area over which nanoparticles are concentrated. There exists an optimum range of the attenuation length, depending on the size of the target area. We have determined the threshold particle size, which decides whether the particle motion is convection-dominated or thermophoresis-dominated. Furthermore, an increase in the laser power reduces the accumulation time of nanoparticles. It is found from the second part of the research that the enrichment time for nanoparticles can be prolonged due to convection caused by local heating. To address this issue, a finite element (FE) model which incorporates SSAW with thermophoresis is developed in the third part. Based on the thermophoretic model from the second part, SSAW is introduced at the top of the microchamber by two pairs of interdigitated transducers (IDTs).
The SSAW-induced thermoacoustic streaming can be properly controlled to move in the opposite direction to the convection, optimising its impact on thermophoresis and consequently reducing nanoparticle enrichment time. A parametric study is then conducted to examine the influence of the acoustic field on particle enrichment time with a laser power of 194 mW. With the optimised actuation condition of the SSAW, the enrichment time of nanoparticles can be reduced by 61% compared to thermophoretic enrichment without SSAW. Similar studies are then conducted with different laser powers ranging from 194 mW to 248 mW. A time reduction of about 61% can be achieved for all the tested cases. The optimum magnitude of the maximum acoustic pressure increases slightly with laser power. These findings provide insights into the design of micro/nanoparticle aggregation devices.
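    A hedged sketch of the force balance underlying the critical particle size discussed above: the acoustic radiation force scales with the particle radius cubed while the Stokes drag scales linearly with radius, so only sufficiently large particles focus at the pressure nodes. Textbook 1D standing-wave (Gor'kov/Bruus) expressions and illustrative water/polystyrene values are used; this is not the SSAW/PDMS finite element model of the dissertation.
```python
# Sketch: ratio of the peak standing-wave radiation force to Stokes drag as a
# function of particle radius, using textbook expressions and illustrative values.
import numpy as np

rho0, c0, eta = 997.0, 1497.0, 0.89e-3        # water: density, sound speed, viscosity
rhop, cp      = 1050.0, 2350.0                # polystyrene: density, sound speed
f, p_a        = 10e6, 0.2e6                   # 10 MHz wave, 0.2 MPa pressure amplitude
u_drag        = 50e-6                         # representative relative (streaming) velocity, m/s

k     = 2 * np.pi * f / c0
E_ac  = p_a**2 / (4 * rho0 * c0**2)           # acoustic energy density
kappa = (rho0 * c0**2) / (rhop * cp**2)       # compressibility ratio kappa_p / kappa_0
rho_t = rhop / rho0
Phi   = (1 - kappa) / 3 + (rho_t - 1) / (2 * rho_t + 1)   # acoustic contrast factor

for a in np.array([0.1, 0.5, 1.0, 2.0, 5.0, 15.0]) * 1e-6:
    F_rad  = 4 * np.pi * Phi * k * a**3 * E_ac            # peak radiation force, ~ a^3
    F_drag = 6 * np.pi * eta * a * u_drag                  # Stokes drag, ~ a
    print(f"a = {a*1e6:5.1f} um   F_rad/F_drag = {F_rad/F_drag:8.3f}")
```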
  • ItemOpen Access
    The Role of Swirl in the Flow Structure and Response of Premixed Flames
    Kallifronas, Dimitrios Pavlos; Kallifronas, Dimitrios [0000-0002-1294-7272]
    Gas turbines have an important role in modern engineering, and the development of low-emission propulsion systems is a multi-dimensional challenge with no single solution. Combustion needs to be optimised, and lean premixed combustion is an attractive approach to face this challenge. Unfortunately, lean premixed flames are often unstable as they may operate close to the flame blow-off limit. Therefore, a suitable method of stabilisation is required. Swirlers or bluff bodies help in stabilising the flames, by creating recirculation zones of hot combustion products, and are common in practical applications. Another issue is the coupling of acoustic waves and heat release rate oscillations, which creates combustion instabilities. These instabilities can result in very large pressure fluctuations which may cause significant structural damage to the gas turbine. Recent computational advances allow the flame behaviour and characteristics to be predicted with high accuracy, reducing the time and cost of product development. This is achieved through the use of complex turbulence and combustion models, and also increased computational resources. The Large Eddy Simulation framework, where the large scales of turbulence are resolved and the small ones are modelled, is now widely used in the scientific community to explore complex flows that posed challenges in the past. This thesis employs this framework to focus on the influence of swirl on both the flow structure and the response of the flame subjected to acoustic perturbations. To achieve those goals, a range of bluff-body-based configurations with swirl numbers ranging from 0.30 to 0.97 is considered. Initially, the recirculation zone and flow structure are compared under isothermal and reacting conditions. The recirculation zone created through vortex breakdown mechanisms is found to interact with the central recirculation zone of an upstream bluff body, and this leads to a complex flow behaviour that depends on the blockage ratio and swirl number. In isothermal flows, as the swirl number or blockage ratio is increased, the vortex breakdown bubble moves upstream, eventually merging with that central recirculation zone. The effect of heat release leads to considerable differences in the flow characteristics as the vortex breakdown bubble is pushed downstream due to dilatation. The critical swirl number, at which the vortex breakdown bubble and central recirculation zone merge, is observed to be higher in reacting flows for the same blockage ratio. The flame describing functions of those swirling flames show typical characteristics involving gain minima and maxima in the frequency space, and the swirl number can alter the frequencies at which they are encountered. An attempt is then made to scale the flame describing functions using Strouhal numbers based on two different flame length scales. A length scale based on the axial height of the maximum heat release rate per unit length leads to a good collapse of the flame describing function gain curves. However, it is also observed that flow instabilities present in the flow can affect the flame describing function scaling, leading to an imperfect collapse. Furthermore, it is found that when the forcing convective wavelength is comparable to the flame length, swirl can affect the non-linear characteristics of the flame by altering the flame roll-up mechanisms. This is related to the variation of the local swirl number in space and time.
For frequencies where the convective wavelength is large compared to the flame length, the effect of swirl is small.
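    A small illustration, under stated assumptions, of the Strouhal-number rescaling referred to above: gain curves measured against forcing frequency f are replotted against St = f·L_f/U_b, where L_f is a flame length scale (for example the axial height of the maximum heat release rate per unit length) and U_b a bulk velocity. The numerical values are illustrative, not data from the thesis.
```python
# Sketch: convert forcing frequencies to Strouhal numbers, St = f * L_f / U_b,
# for two flames with different length scales. All values are illustrative.
import numpy as np

U_b = 15.0                                        # assumed bulk velocity, m/s
cases = {"swirl number 0.30": 0.060,              # assumed flame length scales L_f, m
         "swirl number 0.97": 0.042}
f = np.linspace(20.0, 400.0, 5)                   # forcing frequencies, Hz
for name, L_f in cases.items():
    print(name, np.round(f * L_f / U_b, 3))       # Strouhal numbers for each frequency
```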
  • ItemOpen Access
    High-Definition Holographic Head-Up Displays
    Skirnewskaja, Jana
    Real-time obstacle scanning and projection are crucial to decrease road accident rates worldwide. Current display advances in modern vehicles increase the risk of driver distraction and endanger safety. Conventional head-up displays require the driver to shift the field of view from the far field of the road towards a small region on the windscreen. Holographic 3D Head-Up Displays (HUDs) can provide a basis for the transportation sector to build on accessible and inclusive design strategies. However, current holographic HUDs lack a full-parallax effect, the number of pixels needed to recreate duplicate Augmented Reality (AR) images of the original object, and the display capabilities to reach real-time projections. Panoramic holographic projections that present objects at different depths, creating a virtual-reality-like experience in the driver’s field of view, help to prevent driver distraction. In this thesis, holographic setups were developed to display 2D and 3D Ultra-High Definition (UHD) projections using Light Detection and Ranging (LiDAR) data in the driver’s field of view (eye box). Different light sources such as a HeNe laser and an Nd:YVO4 laser were compared in terms of accuracy, precision, and ease of use in the HUD applications. A UHD Spatial Light Modulator (SLM) with a panel resolution of 3840 × 2160 and a ferroelectric reflective SLM were utilised for the replay field projections. Both setups were compared, and the laser sources were used to illuminate Computer-Generated Holograms (CGHs) and generate the projection by reconstructing the object image in the far (replay) field. Graphics Processing Unit (GPU) acceleration produced real-time holograms 16.6 times faster than the equivalent CPU processing time. A virtual Fresnel lens was used to enlarge the driver’s eye box to 25 mm × 36 mm. Road obstacles scanned in real time from different perspectives provide the driver with a full view of risk factors, offering generated depth in 3D mode and the ability to project any scanned object from different angles across 360°. The 3D holographic projection technology allows the driver to maintain focus on the road instead of the windshield. It enables assistance by projecting road obstacles hidden from the driver’s field of view. A multicolour 3D reconstruction method accelerated with a GPU is introduced into the 360° LiDAR-based HUD architecture with single-mode fibre pigtailed laser diodes. An important aspect of this work is the integration of the driver into the method. The LiDAR data collection and real-time processing were based on iPhone-integrated LiDAR sensors to be utilised during driving. Architectures including light sources, detectors, and rotating transmitters were previously accommodated to capture a panoramic scene with LiDAR. The data processing method was optimised with a point cloud algorithm, accelerating computation time. Furthermore, experimental results of dynamic AR reconstructions were discussed, verifying the application for real-time colour 3D HUDs.
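    As a hedged sketch of the kind of computation involved, the code below generates a phase-only CGH with the standard Gerchberg-Saxton loop (far-field propagation via FFT) and then adds the quadratic phase of a virtual Fresnel lens of the sort used to enlarge the eye box. The resolution, wavelength, pixel pitch and focal length are illustrative, and this is not the GPU-accelerated pipeline of the thesis (the UHD panel itself is 3840 × 2160).
```python
# Sketch: phase-only CGH via the Gerchberg-Saxton loop (far-field propagation by
# FFT), plus a virtual Fresnel lens phase added to the hologram. A reduced
# 1920x1080 grid is used here for speed; all parameters are illustrative.
import numpy as np

ny, nx = 1080, 1920
wl, dx = 532e-9, 3.74e-6                      # wavelength and SLM pixel pitch (assumed)
target = np.zeros((ny, nx))
target[ny//2-100:ny//2+100, nx//2-100:nx//2+100] = 1.0     # simple square target image
target /= np.sqrt((target**2).sum())          # normalise target amplitude

rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 2*np.pi, (ny, nx))   # random initial hologram phase
for _ in range(20):                           # Gerchberg-Saxton iterations
    replay = np.fft.fftshift(np.fft.fft2(np.exp(1j * phase)))
    replay = target * np.exp(1j * np.angle(replay))         # impose target amplitude
    field  = np.fft.ifft2(np.fft.ifftshift(replay))
    phase  = np.angle(field)                                 # keep phase only (phase-only SLM)

f_lens = 0.5                                  # virtual Fresnel lens focal length, m
y, x   = np.mgrid[:ny, :nx]
r2     = ((x - nx/2)*dx)**2 + ((y - ny/2)*dx)**2
hologram = np.mod(phase - np.pi * r2 / (wl * f_lens), 2*np.pi)   # add the lens phase
print(hologram.shape, float(hologram.min()), float(hologram.max()))
```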
  • ItemControlled Access
    Closing the Acceptance Gap: In-Car Training for Drivers to Utilise Assistance Systems
    Caber, Nermin; Caber, Nermin [0000-0002-1792-6059]
    Road traffic accidents place a humanitarian and financial burden on society. Advanced Driver Assistance Systems (ADAS) can mitigate this as they increase driving safety. However, drivers’ acceptance rates are low and thus contradict drivers’ high appreciation of ADAS’ safety-enhancing effects, revealing a gap and impeding ADAS’ benefits. While several measures to close this gap have been suggested, no measure has fully achieved this. The primary objective of this research was to establish an appropriate measure to close the aforementioned gap. Here, a mixed-methods approach embedded in the Design Research Methodology was used. First, a literature review on technology acceptance delivered numerous factors influencing ADAS acceptance without clarifying their precise contribution. Second, an online survey and a focus group determined the key factors causing low ADAS acceptance, i.e. awareness and mental models, and pointed to training as a potential measure to manipulate these factors. Third, a literature review on ADAS training highlighted an inconsistency of ADAS training’s effect on technology acceptance and added experience as another key factor. It also revealed the shortcomings of pre-usage training materials, i.e. neglecting awareness and experience. Fourth, a simulator study provided qualitative data on the effect of ADAS training on mental model accuracy. Here, it deduced that once an adequate mental model is reached a more accurate model does not increase ADAS acceptance further. Fifth, an on-road study indicated the possibility of dynamic in-car training by identifying factors causing high workload and showing the attainability of a workload prediction system. Sixth, an online survey revealed drivers’ appreciation of in-car training and their division over dynamic training. The collected data on drivers’ preferences was used to develop a high-level concept of an in-car training system. Finally, expert interviews refined this concept by determining dynamic training to be driver-initiated. This thesis is the first research report to describe a concept of an in-car training system for ADAS and to present a comprehensive set of design guidelines for ADAS training. The suggested concept gives drivers control of their dynamic training, provides static training with a focus on ADAS’ benefits and raises awareness through targeted notifications. The guidelines advise on the training’s content, modality, and interaction design. They also recommend the usage of multiple technologies, e.g. smartphones and cars.
  • ItemOpen Access
    Identifiable Causal Representation Learning: Unsupervised, Multi-View, and Multi-Environment
    von Kügelgen, Julius; von Kügelgen, Julius [0000-0001-6469-4118]
    This thesis brings together ideas from causality and representation learning. Causal models provide rich descriptions of complex systems as sets of mechanisms by which each variable is influenced by its direct causes. They support reasoning about manipulating parts of the system, capture a whole range of interventional distributions, and thus hold promise for addressing some of the open challenges of artificial intelligence (AI), such as planning, transferring knowledge in changing environments, or robustness to distribution shifts. However, a key obstacle to a more widespread use of causal models in AI is the requirement that the relevant variables need to be specified a priori, which is typically not the case for the high-dimensional, unstructured data processed by modern AI systems. At the same time, machine learning (ML) has proven quite successful at automatically extracting useful and compact representations of such complex data. Causal representation learning (CRL) aims to combine the core strengths of ML and causality by learning representations in the form of latent variables endowed with causal model semantics. In this thesis, we study and present new results for different CRL settings. A central theme is the question of identifiability: Given infinite data, when are representations satisfying the same learning objective guaranteed to be equivalent? This is arguably an important prerequisite for CRL, as it formally characterises if and when a learning task is, at least in principle, feasible. Since learning causal models—even without a representation learning component—is notoriously difficult, we require additional assumptions on the model class or rich data beyond the classical i.i.d. setting. For unsupervised representation learning from i.i.d. data, we develop independent mechanism analysis, a constraint on the mixing function mapping latent to observed variables, which is shown to promote the identifiability of independent latents. For a multi-view setting of learning from pairs of non-independent observations, we prove that the invariant block of latents that are always shared across views can be identified. Finally, for a multi-environment setting of learning from non-identically distributed datasets arising from perfect single-node interventions, we show that the latents and their causal graph are identifiable. By studying and partially characterising identifiability for different settings, this thesis investigates what is possible and impossible for CRL without direct supervision, and thus contributes to its theoretical foundations. Ideally, the developed insights can help inform data collection practices or inspire the design of new practical estimation methods and algorithms.
  • ItemOpen Access
    Implementation of an anisotropic plasticity model and its application to pile foundations problems
    Alswaity, Eman
    Pile foundations are used in increasingly challenging site conditions, such as soft soils, to ensure both sufficient capacity and desired performance. As construction moves to more problematic locations and demands increase in magnitude and complexity, pile foundations have been required to accommodate these additional requirements. While the behaviour of pile foundations under vertical and lateral load combinations has been the subject of numerous studies, the response under combined horizontal and torsional loading has received considerably less attention. The emphasis of this research is on using numerical simulation to investigate the response of pile foundations in soft clay under different loading conditions. The procedure is based on nonlinear Finite Element Analysis (FEA) and advanced modelling of geomaterials to accurately describe the realistic response of the soil. In general, natural clay exhibits particularly complex behaviour, which is challenging to address in a single constitutive model while retaining some ease of use. Thus, it is crucial to identify the essential aspects of soil behaviour which may govern a particular problem and to opt for a suitable constitutive model that captures those aspects. Here, attention is given to the implementation of an advanced clay constitutive model that accounts for initial stress anisotropy and its continuous evolution with plastic deformation. The effective stress elastoplastic Simple Anisotropic Clay model, SANICLAY, which was developed by Dafalias et al. (2006) as an extension of the Modified Cam Clay model with a minimal number of parameters, is used in this study. The SANICLAY model is implemented into the general-purpose finite element program ABAQUS via the user-defined material subroutine UMAT. The integration of the stress-strain relationship was based on an explicit method with automatic substepping and error control algorithms. The performance of the numerical implementation was verified and validated against published model simulation results based on data from experimental tests. The implemented model was further applied to describe clay behaviour in bearing capacity boundary value problems of different geometries and levels of loading complexity. In particular, the implemented model was first tested on the well-known undrained vertical bearing capacity problems of shallow strip footings and deep pile foundations. The model was then used to explore the effect of anisotropy on the undrained lateral bearing capacity of pile group and piled raft foundations, and the undrained combined lateral-torsional response of pile group foundations. The finite element results demonstrated that accounting for soil anisotropy results in a significantly lower vertical bearing capacity of shallow and deep foundations, compared to the Modified Cam Clay, Tresca, and plasticity-based solutions. SANICLAY also provided considerably lower lateral bearing capacities for pile group and piled raft problems, compared to those obtained with an isotropic Modified Cam Clay, although the two analyses displayed the same trend. The results from both models confirmed that the lateral resistance of a pile group is generally smaller than the sum of the analytical lateral capacities of the individual piles. Based on the FEA results, it is concluded that ignoring soil anisotropy may lead to a lower factor of safety in design procedures.
The FE results for a single pile under pure torsion were in satisfactory agreement with analytical predictions for both constitutive models. In contrast, the individual piles in a group under pure torsion showed a more complicated response, since they translate as well as rotate. This deflection-torsion coupling increased the overall torsional resistance of the group, especially for larger group sizes. Anisotropy mainly affected the lateral resistance rather than the torsional response of the individual piles in the group, so the overall torsional capacity of the group was lower with SANICLAY than with Modified Cam Clay. The results further demonstrated the substantial interaction between lateral and torsional capacities, and the failure envelopes indicated that the lateral resistance of both the single pile and the pile group foundations is significantly reduced by torsional moments. Anisotropy led to a further reduction of the lateral resistance under combined lateral-torsional loading of pile group foundations. The corresponding interaction (failure) envelopes, with mathematical expressions and graphical representations, were developed for both isotropic and anisotropic clay responses. Additionally, the eccentric lateral loading results illustrated the significant effect of high eccentricity in reducing the lateral capacity of pile group foundations. This research offers a practical tool for advanced numerical modelling and finite element analysis of complex geotechnical boundary value problems. Furthermore, the outcomes provide clear evidence of the need to account for soil anisotropy in pile foundation analysis and design.
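For readers unfamiliar with the explicit substepping scheme referred to above, the following is a minimal sketch (written in Python for readability) of a modified-Euler stress integration with automatic substepping and local error control, of the kind commonly used to implement critical-state models in a UMAT. The interface (`rate_fn`, the state dictionary, the tolerance values) is assumed here for illustration only and does not reproduce the thesis's actual Fortran subroutine.

```python
import numpy as np

def integrate_stress(sigma, state, deps, rate_fn, tol=1e-4, dT_min=1e-6):
    """Explicit modified-Euler stress integration with automatic substepping
    and local error control (illustrative sketch; interfaces are assumed).

    sigma   : current stress vector (e.g. 6-component Voigt form)
    state   : dict of hardening / anisotropy variables
    deps    : total strain increment applied over this step
    rate_fn : function (sigma, state, deps_sub) -> (dsigma, dstate)
              evaluating the elastoplastic rate equations of the model
    """
    T, dT = 0.0, 1.0                      # pseudo-time across the increment
    while T < 1.0:
        sub = dT * deps

        # First-order (forward Euler) estimate over the substep
        ds1, dq1 = rate_fn(sigma, state, sub)
        # Second estimate using the end-of-substep configuration
        ds2, dq2 = rate_fn(sigma + ds1, _advance(state, dq1), sub)

        sigma_trial = sigma + 0.5 * (ds1 + ds2)       # modified Euler
        err = 0.5 * np.linalg.norm(ds2 - ds1) / max(np.linalg.norm(sigma_trial), 1e-12)

        if err <= tol or dT <= dT_min:                # accept the substep
            sigma = sigma_trial
            state = _advance(state, {k: 0.5 * (dq1[k] + dq2[k]) for k in dq1})
            T += dT

        # Adapt the substep size from the local error estimate
        dT = max(min(0.9 * dT * np.sqrt(tol / max(err, 1e-16)), 4.0 * dT, 1.0 - T), dT_min)
    return sigma, state

def _advance(state, dstate):
    """Return a copy of the state advanced by the given increments."""
    return {k: state[k] + dstate.get(k, 0.0) for k in state}
```

In a production UMAT the same accept/reject loop would be coded in Fortran, with `rate_fn` replaced by an evaluation of the model's elastoplastic moduli and hardening laws.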
  • ItemEmbargo
    Application of Diagnostic Methods to Assess Ageing Processes in Polymers
    Raheem, Hamad
    This work aims to contribute to the understanding and assessment of ageing in semi-crystalline polymers through destructive and non-destructive tests, and to study the feasibility of acoustic methods for future age-monitoring systems. This called for a thorough evaluation of the governing material properties that can be practically used to infer material alteration, i.e., ageing, through systematic induced-ageing experiments that exposed polyethylene-based specimens to gaseous and supercritical CO2 at elevated temperatures (up to 90 ℃) and pressures (up to 400 barg). Polymer ageing is an umbrella term covering all irreversible chemical or physical alterations that lead to a change in measurable material properties. The superstructure of polyethylene is presented with regard to polymer crystallinity and its measurement methods, thermal expansion, elasticity, relaxation processes and acoustic attenuation, together with literature-reported effects of supercritical CO2 on polymer plasticization. The first type of induced-ageing experiment comprised long-term (up to 60 days) continuous-flow permeation tests, conducted on polyethylene of raised temperature (PE-RT) and polyvinylidene fluoride specimens exposed to CO2. The tests used a differential-pressure setup, with a polymer membrane exposed to a high-pressure gas inlet on one side and a gas chromatograph measuring the permeant content on the other. The analysis of the fluid transport coefficients of permeability, diffusivity and solubility aimed to study the flux behaviour at various pressure steps and to infer from it signs of membrane ageing (a worked illustration of the classical relations follows this abstract). It was shown that the diffusion and permeation coefficients of supercritical CO2 in polyethylene agreed with the equations of Fickian diffusion and the classical permeation model, and that both coefficients generally decreased with increasing feed pressure. The second type of induced-ageing experiment comprised systematic autoclave exposures to supercritical CO2 at a constant 90 ℃ and 100 barg using polyethylene specimens of different grades. The aim was to compare the material properties of the different polyethylene grades before and after prolonged exposure to supercritical CO2 using destructive tests (e.g., differential scanning calorimetry, dynamic mechanical analysis, tensile tests) and non-destructive tests (e.g., visual inspection, density, ultrasound broadband spectroscopy). The collective analysis of the experimental outcomes showed subtle differences in the degree of crystallinity between aged and unaged specimens, and a similar profile for the melt temperatures. Comparison of first and second melt cycles revealed permanent structural alterations in some specimens. Elastic moduli calculated from lamellar thickness, static tensile tests, dynamic mechanical analysis, and acoustic spectroscopy suggested that a distinction could be drawn between aged and unaged specimens, with elastic moduli generally increasing after exposure to ageing conditions. Finally, the empirical elastic modulus of the various polyethylene grades correlated well with the degree of crystallinity, in agreement with the literature. Furthermore, given the promising results of acoustic spectroscopy, a bespoke piezoelectric micromachined ultrasound transducer was designed and fabricated with a view towards developing in-situ assessment of polymer ageing. The design and fabrication processes are presented in detail.
Initial probing of various materials with the micro-transducers showed distinct signatures for polymeric and metallic specimens, reflecting their different material properties. This demonstrates the potential of the micro-transducers for future age-monitoring of semi-crystalline polymers, even though initial probing of aged versus unaged specimens did not reveal distinctive relationships with the current air-coupled design. The work carried out contributes to the understanding of ageing in semi-crystalline polymers and towards their deployment in high-pressure applications. It also presents a novel outlook on the feasibility of material-resonance-based analysis methods, such as piezoelectric micromachined ultrasound transducers, for non-destructive evaluation and age-monitoring of solid specimens in general and semi-crystalline polymers in particular.
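As a compact illustration of the classical permeation relations mentioned above, the snippet below extracts permeability, diffusivity and solubility coefficients from a constant-pressure permeation test using the steady-state flux and the time-lag method. All numerical values are placeholders, and the thesis's continuous-flow analysis with gas chromatography may differ in detail.

```python
# Illustrative (hypothetical numbers): extracting transport coefficients from a
# constant-pressure permeation test using the classical time-lag analysis.
#   P = J_ss * L / dp        (permeability from the steady-state flux)
#   D = L**2 / (6 * theta)   (diffusivity from the time lag, theta)
#   S = P / D                (solubility, solution-diffusion model)

L_m   = 2.0e-3   # membrane thickness [m] (assumed)
dp    = 50.0e5   # feed-to-permeate pressure difference [Pa] (assumed)
J_ss  = 3.0e-6   # steady-state molar flux [mol m^-2 s^-1] (assumed)
theta = 1.8e4    # time lag from the cumulative-permeation plot [s] (assumed)

P = J_ss * L_m / dp           # permeability  [mol m^-1 s^-1 Pa^-1]
D = L_m**2 / (6.0 * theta)    # diffusivity   [m^2 s^-1]
S = P / D                     # solubility    [mol m^-3 Pa^-1]

print(f"P = {P:.3e} mol/(m s Pa), D = {D:.3e} m^2/s, S = {S:.3e} mol/(m^3 Pa)")
```

Repeating this extraction at each pressure step gives the pressure dependence of the coefficients, which is the trend reported in the abstract.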
  • ItemEmbargo
    Customisable Magnetic Components Using Nanocrystalline Flake Ribbon
    Li, Xinru; Li, Xinru [0000-0002-8087-8301]
    Power electronics plays a key role in addressing today's energy challenges. The advancement of wide bandgap (WBG) semiconductors enables power electronic devices to operate at elevated switching frequencies, withstand higher temperatures, achieve greater power density, and deliver improved system efficiency. In line with this trend, there is a growing demand for customised high-performance magnetic components with reduced losses, compact dimensions, lighter weight, and lower cost. Traditional methods of core customisation are cumbersome, time-consuming, and costly when producing small series of magnetic cores, and magnetic cores recently customised through additive manufacturing (AM) have not demonstrated magnetic properties suitable for high-power-density power electronics. In this thesis, a novel nanocrystalline flake ribbon (NFR) is proposed for customising high-performance magnetic components for power electronics cost-effectively and without high-energy equipment. Conventional nanocrystalline alloys offer much higher saturation flux density and lower core losses than soft ferrites within certain frequency ranges; however, their high-frequency performance is limited by elevated eddy current losses and uneven resin distribution within the laminations. To improve the magnetic properties of nanocrystalline materials and open new possibilities for customising magnetic components, NFR obtained by mechanically crushing conventional nanocrystalline ribbon is introduced. The magnetic properties of NFR, including the DC magnetisation curve, permeability, B-H curve and core losses, are examined in detail. Core loss separation is performed based on Bertotti's theory, and the classical eddy current loss and excess loss of NFR are compared with those of conventional nanocrystalline ribbons. To demonstrate the design of high-performance magnetic components using NFR, this thesis first presents a sandwich-structured inductor manually made from NFR. This inductor achieves a rare combination of high current capacity and low profile, which is challenging to obtain even with high-energy tools. Analytical models of the inductance and conduction losses are developed and validated by finite element analysis (FEA) and experiments. A 5 V to 1 V, 50 A voltage regulator (VR) is built to compare the performance of the fabricated NFR sandwich inductor with the latest commercial counterpart. Thermal performance is investigated, and stability tests under soldering-iron heat and reflow temperatures are performed. A second demonstration introduces a tape-wound gapless transformer based on NFR and compares it with two transformers of similar specifications that use ferrite and conventional nanocrystalline materials. The comparison is made by testing the built transformers in a 5.5 kW, 100 kHz dual active bridge (DAB) converter. Loss separation is performed by measuring the core losses and conduction losses individually, and possible sources of error in the loss measurements are analysed in detail. Additionally, the transformer losses during the circuit test are measured using a high-precision power analyser, and the measurement errors are discussed. Finally, the sum of the measured core and winding losses is compared with the measured transformer losses to verify the accuracy of the measurements.
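As a concrete illustration of the Bertotti-type loss separation mentioned above, the sketch below fits hysteresis, classical eddy-current and excess loss coefficients to a small set of hypothetical core-loss measurements by linear least squares, assuming sinusoidal excitation and a hysteresis exponent of 2 for simplicity. The data values, exponent and coefficient names are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np

# Hypothetical measured core-loss data: (frequency [Hz], peak flux density [T],
# specific loss [W/kg]). Placeholder values, not measurements from the thesis.
data = np.array([
    # f,      B,    P
    [20e3,  0.10,  1.2],
    [20e3,  0.20,  4.1],
    [50e3,  0.10,  3.5],
    [50e3,  0.20, 12.0],
    [100e3, 0.10,  8.2],
    [100e3, 0.20, 28.5],
])
f, B, P = data.T

# Bertotti separation (sinusoidal excitation, hysteresis exponent fixed at 2):
#   P = k_h * f * B**2  +  k_cl * f**2 * B**2  +  k_ex * f**1.5 * B**1.5
A = np.column_stack([f * B**2, f**2 * B**2, f**1.5 * B**1.5])
(k_h, k_cl, k_ex), *_ = np.linalg.lstsq(A, P, rcond=None)

print(f"k_h = {k_h:.3e}, k_cl = {k_cl:.3e}, k_ex = {k_ex:.3e}")
# The fitted k_cl term can then be compared between NFR and conventional ribbon
# to quantify the reduction in classical eddy-current loss.
```

With real measurements, a non-negativity constraint on the fitted coefficients (for example via scipy.optimize.nnls) and a fitted rather than fixed hysteresis exponent are often preferable.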
  • ItemRestricted
    Using Thin-film Electronics to Interface with the Spinal Cord
    Woodington, Ben
    [Restricted]