Ingegneria industriale e dell'informazione
Browsing Ingegneria industriale e dell'informazione by Title
Now showing 1 - 20 of 80
- Publication: O3-DPACS: an open source solution for critical PACS systems integrated in the O3 Consortium project (Università degli studi di Trieste, 2008-03-18)
  Beltrame, Marco; Accardo, Agostino
  ABSTRACT: The student started his work toward the PhD in 2005, joining the Bioengineering and ICT group of the University of Trieste, whose core research was in the e-health area. The personal research project conducted by the PhD student was part of the O3 Consortium research project, which aimed to propose a complete solution for the adoption of open technology in the healthcare environment. The solution should become a whole new model for e-health applications and include new products integrated with a development, support and business model. The PhD student contributed to the project by conceiving and designing all the aspects of the complete solution presented in this thesis, including the development and business models, as well as by developing the products and by writing and publishing the results of the successive steps of this work. He personally contributed the original idea of the support model and designed and implemented a product to test the proposed model. The student had an intermediate step in his personal project: he built a state-of-the-art server for the management of DICOM data and imaging objects (PACS, Picture Archiving and Communication System), with the aim of easing the adoption of e-health technology and of developing a product on which the model would be tested. The research was conducted in the following way: first, based on the analysis of real-world needs, literature and past experience, the O3 Consortium project software design guidelines (called "requirements" in the thesis) were defined. Then the product was designed and a new release of the PACS system was developed, implementing original solutions to best meet all those "requirements". The technological choices and the original aspects with respect to the state of the art are discussed and underlined throughout the entire thesis, such as the compliance with all the requirements and the choices for portability, project organization, standard implementation and performance. The idea of O3-DPACS as an integrated project, system plus support model, is also presented and discussed in the thesis. It should be remarked again that the O3 Consortium means not only software development but also new procedures in both technology and service delivery. Moreover, the PhD student performed the validation of the software and of the model, which was needed to verify the assumptions and to obtain the first results in the O3 Consortium research on e-health adoption. The originality of the O3 Consortium proposal, a complete application model for the healthcare real world based on open source software, should be noted: no other open source solution makes a complete proposal covering all the topics of development, design, software architecture, support and business opportunity. Thus, a real research interest exists in testing and validating the model.
- Publication: A hardware field simulator for photovoltaic materials applications (Università degli studi di Trieste, 2008-04-23)
  Massi Pavan, Alessandro; Roitti, Sergio
  This work describes a photovoltaic field simulator (hereafter, the simulator). The simulator is an electronic power converter which, fed from the mains, reproduces the voltage-current characteristic of a photovoltaic field (a set of photovoltaic modules connected in series and in parallel) operating under arbitrary temperature and irradiance conditions. The new device will be used in the photovoltaic laboratory connected to the plant under construction on the roof of the building hosting the Department of Materials and Natural Resources of the University of Trieste. The simulator is proposed as a useful tool for designers of solar devices operating in grid-connected photovoltaic systems. In particular, it will make it possible to predict the behaviour of new photovoltaic modules operating under arbitrary shading conditions within a real photovoltaic system. The simulator will be especially effective for simulating thin-film technologies such as amorphous silicon, cadmium telluride, etc. It will also be needed to test the components of a grid-connected photovoltaic system, with particular reference to power conditioning systems (also called inverters). Besides converting the DC voltage produced by the photovoltaic modules into a voltage compatible and synchronized with that of the grid, these systems must guarantee, instant by instant, the tracking of the maximum power point of the photovoltaic field to which they are connected. The work is divided into five chapters. The first chapter gives a brief description of the state of the art and of some economic aspects of photovoltaic technology. The second chapter recalls the classical model of a solar cell and the definitions of its main characteristics (maximum power point, efficiency, fill factor, etc.). In the same chapter, an overview of the materials and technologies used in photovoltaic devices divides solar cells, as suggested by Martin Green, into three generations: the first comprises devices made of crystalline silicon (mono- and polycrystalline); the second, thin-film devices (amorphous silicon, cadmium telluride CdTe, copper indium diselenide CIS, copper indium gallium diselenide CIGS, copper indium gallium sulphur diselenide CIGSS) and Graetzel cells; the third, multi-junction, intermediate-band and organic cells. Chapter three describes the components of a grid-connected photovoltaic system and proposes a new method for determining the current-voltage and power-voltage characteristics of photovoltaic devices. The method is effective because it requires no experimental measurements on the individual devices: the data reported in the ordinary datasheets supplied with photovoltaic modules are sufficient to determine their behaviour as the operating temperature and the solar radiation level vary. The efficiency of a photovoltaic system (Balance of System, BOS) is computed in chapter four. Particular emphasis is given to the mismatching effect, which becomes more important as the level of shading on the plane of the photovoltaic modules increases. Finally, the last chapter describes the simulator and its applications.
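  The datasheet-based characterization of chapter three is not reproduced in this abstract; as an illustration of the kind of model involved, the sketch below evaluates the classic five-parameter single-diode I-V equation of a module. It is a standard textbook model with made-up parameter values, not the method or the data of the thesis:

```python
import numpy as np

def iv_curve(v, i_ph=8.5, i_0=1e-9, n=1.0, r_s=0.3, r_sh=300.0, cells=60, t_cell=298.15):
    """Module current from the single-diode model,
    I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh,
    solved by damped fixed-point iteration."""
    k, q = 1.380649e-23, 1.602176634e-19
    a = n * cells * k * t_cell / q            # modified ideality factor [V]
    i = np.full_like(v, i_ph)                 # initial guess: photogenerated current
    for _ in range(200):
        vd = v + i * r_s                      # junction voltage
        i = 0.5 * i + 0.5 * (i_ph - i_0 * (np.exp(vd / a) - 1.0) - vd / r_sh)
    return np.clip(i, 0.0, None)

v = np.linspace(0.0, 38.0, 400)               # module voltage sweep [V]
i = iv_curve(v)
p = v * i
print(f"Isc = {i[0]:.2f} A, Pmax = {p.max():.0f} W at V = {v[p.argmax()]:.1f} V")
```

  A hardware simulator of the kind described above reproduces curves of this family at its output terminals, including the distortions introduced by shading.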
- Publication: A novel approach to the experimental study of thermoplastic composites fatigue behaviour (Università degli studi di Trieste, 2009-04-22)
  Crevatin, Andrea; Spizzo, Fabio; Meriani Merlo, Sergio; Celotto, Monica
  This work concerns the development of a new investigation method for describing the fatigue behaviour of composite plastic materials. The work was completed with the formulation of a mathematical model able to describe the behaviour of the materials.
- Publication: Application of linear and nonlinear methods for processing HRV and EEG signals (Università degli studi di Trieste, 2015-04-13)
  Fornasa, Elisa; Accardo, Agostino
  Biomedical signal processing is fundamental to the objective interpretation of physiological systems: it allows the extraction and quantification of the information contained in the signals generated by the systems under study. A large number of algorithms, originally developed in different research fields, have been introduced to analyse biomedical signals. In recent decades, the classical linear approach, mainly based on spectral analysis, has been successfully complemented by methods and techniques derived from the theory of nonlinear dynamics and, in particular, from that of deterministic chaos. The aim of this thesis is to evaluate the results of applying different linear and nonlinear processing methods to specific clinical studies based on the heart rate variability (HRV) signal and on the electroencephalographic (EEG) signal. These signals indeed show behaviours attributable to systems whose nature may be either linear or nonlinear, depending on the conditions under which the systems are analysed. The first part of the thesis presents the two signals under study (HRV and EEG) and the analysis techniques used. Chapter 1 describes their physiological meaning, the requirements for data acquisition and the signal pre-processing methods. Chapter 2 presents the methods and algorithms used in this thesis to characterize the different experimental conditions in which HRV and EEG were studied, with particular attention to nonlinear analysis techniques. The following chapters (chapters 3-7) present the five applications of HRV and EEG signal analysis examined during the doctorate: the first three concern heart rate variability, the other two the EEG signal. Regarding the HRV signal, the first study analyses the variations of spectral and fractal properties in healthy subjects of different ages; the second focuses on the importance of the nonlinear approach in the analysis of the HRV signal obtained from polysomnographic recordings of patients with severe sleep apnea; the third presents the differences in the spectral and nonlinear characteristics of heart rate variability in patients with heart failure of different etiologies. For the EEG signal, the first study analyses the alterations of spectral and nonlinear indices in patients with subjective and mild cognitive impairment, while the second evaluates the effectiveness of a new protocol for the rehabilitation of Parkinson's disease through the quantification of the spectral parameters of the EEG.
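  As a flavour of the linear branch of such an analysis, the sketch below computes the classical LF and HF spectral indices of an RR-interval series with Welch's method. This is a generic textbook procedure applied to a synthetic tachogram, not code from the thesis:

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

# Synthetic RR-interval series [s] with 0.1 Hz (LF) and 0.25 Hz (HF) modulations
rng = np.random.default_rng(0)
rr = 0.8 + 0.02 * rng.standard_normal(512)
t = np.cumsum(rr)                                  # beat occurrence times
rr = rr + 0.02 * np.sin(2 * np.pi * 0.1 * t) + 0.03 * np.sin(2 * np.pi * 0.25 * t)

# Resample the unevenly spaced tachogram at 4 Hz before spectral estimation
fs = 4.0
t_even = np.arange(t[0], t[-1], 1.0 / fs)
rr_even = interp1d(t, rr, kind="cubic")(t_even)

f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)                 # low-frequency band
hf_band = (f >= 0.15) & (f < 0.40)                 # high-frequency band
lf = np.trapz(psd[lf_band], f[lf_band])
hf = np.trapz(psd[hf_band], f[hf_band])
print(f"LF = {lf:.2e} s^2, HF = {hf:.2e} s^2, LF/HF = {lf / hf:.2f}")
```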
- Publication: Applications and limits of Raman spectroscopy in the study of colonic and pulmonary malformations (Università degli studi di Trieste, 2008-04-10)
  Codrich, Daniela; Sergo, Valter
  This thesis stems from the collaboration between the Department of Materials and Natural Resources of the University of Trieste and the Department of Pediatric Surgery of the Burlo Garofalo Children's Hospital. The objective of our study group was to evaluate the possible applications of Raman spectroscopy to the study of human tissues, with particular attention to tissues affected by congenital malformations. The interest in Raman spectroscopy, a vibrational spectroscopy based on the inelastic scattering of photons, arises from the fact that this technique can provide precise details on the chemical composition and molecular structure of cells and tissues. During the project, the procedures for the preservation and preparation of the samples, collected with parental consent during surgery, were standardized. Using a Raman spectrometer equipped with a laser emitting monochromatic light at 785 nm, samples of normal colon and lung were analysed, representative of a predominantly layered tissue and of a homogeneous parenchyma, respectively. Malformed lung tissues affected by Congenital Cystic Adenomatoid Malformation (CCAM) and bronchopulmonary sequestrations (BPS) were also studied. The data acquisition and processing procedures are described. After applying a multivariate analysis such as k-means cluster analysis, coloured Raman pseudo-maps were obtained and then compared with the same native, unstained samples mounted on slides. From each cluster the mean Raman spectra were extracted and compared to highlight differences between different areas of the sample. The assignment of the main bands to the different chemical species was made according to the literature. The Raman analysis was able to differentiate the various layers of the colon (serosa, muscle, submucosa, mucosa, nerve plexuses), highlighting subcellular structures in nervous elements such as ganglia. Normal and malformed lung sections showed different clustering and different mean spectra, allowing a differentiation between CCAM and BPS; in one case our analysis, disagreeing with the pathologist's diagnosis, prompted a review of the slides and a reformulation of the diagnosis. The resonance enhancement of the Raman signal by chromophores such as the heme group of hemoglobin under 785 nm radiation is discussed, and a method to minimize the spectral contribution of this molecule is proposed. We also compared the Raman data with data obtained on the same samples at the Institute of Analytical Chemistry of the University of Dresden using another vibrational spectroscopy, infrared spectroscopy. We were granted permission to present these data for the discussion of this thesis, for a comparison between the two techniques in terms of acquisition times and spatial and spectral resolution.
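  The k-means pseudo-mapping step lends itself to a compact illustration. The sketch below, assuming synthetic spectra and scikit-learn's KMeans (the actual processing chain of the thesis is not reproduced), clusters the spectra of a small hyperspectral map and extracts the mean spectrum of each cluster:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic hyperspectral map: 40x40 pixels, 600 Raman-shift channels, two regions
wn = np.linspace(400, 1800, 600)                        # wavenumber axis [cm^-1]
band = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)  # Gaussian band
tissue_a = band(1004, 8) + 0.6 * band(1450, 15)
tissue_b = band(1340, 12) + 0.8 * band(1585, 10)
cube = np.empty((40, 40, wn.size))
cube[:, :20], cube[:, 20:] = tissue_a, tissue_b
cube += 0.05 * np.random.default_rng(1).standard_normal(cube.shape)  # detector noise

X = cube.reshape(-1, wn.size)                           # one spectrum per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
pseudo_map = labels.reshape(40, 40)                     # colour-coded cluster image
mean_spectra = [X[labels == k].mean(axis=0) for k in range(2)]
print("left/right cluster labels:", pseudo_map[0, 0], pseudo_map[0, -1])
```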
- Publication: Applications of x-ray computed microtomography to material science: devices and perspectives (Università degli studi di Trieste, 2008-04-23)
  Favretto, Stefano; Lucchini, Elio
  The three-dimensional visualization of the inner microstructural features of objects and materials is of relevant interest for a wide range of scientific and industrial applications. X-ray computed microtomography (μ-CT) is a powerful non-destructive technique capable of satisfying these needs. Once the complete reconstruction of the sample is available, a quantitative characterisation of the microstructure is essential. Through digital image processing tools, image analysis software or custom-developed algorithms, it is possible to obtain an exhaustive geometrical, morphological and topological description of the features inside the volume, or to extract other parameters of interest (e.g. porosity, void distribution, cell size distribution, average strut length, connectivity between the cells, tortuosity). This thesis was carried out at the third-generation Elettra Synchrotron Radiation Facility (Trieste, Italy), where a hard X-ray imaging beamline is available. The experience developed at this beamline has led scientists to design a complementary state-of-the-art μ-CT facility based on a micro-focus X-ray source, working both in absorption and in phase contrast mode. In this dissertation a detailed description of this facility is given, together with a rigorous characterization of the imaging system capabilities in terms of the actually achievable spatial resolution, in order to optimize the working parameters for the different experiments. The main artefacts that contribute to the degradation of the quality of the reconstructed images have been considered (e.g. beam hardening effects, ring artefacts, uncertainties associated with the cone-beam geometry), and procedures are presented to eliminate, or at least reduce, their causes. The aspects related to the digital image processing of the reconstructed data are developed in depth in this study: appropriate methodologies, capable of dealing with the different three-dimensional data of complex porous media, have been elaborated, providing a correlation between the microstructure and the macroscopic behaviour of the observed materials. Three representative examples obtained with the described methods demonstrate the application of μ-CT, combined with the developed image processing tools, to material science: the geometrical and morphological characterisation of polyurethane foams employed in the automotive industry owing to their vibro-acoustic properties; a new approach to characterizing the microstructure of resonance spruce wood in order to study its acoustical behaviour; and, finally, the possibility of revealing defects in hybrid friction-stir-welded aluminium joints, guiding the optimization of the process parameters.
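  As an example of the quantitative parameters mentioned above, the following sketch computes the porosity and the individual void sizes of a binary segmented volume with scipy.ndimage. The volume is synthetic and the code is a generic illustration, not part of the facility's software:

```python
import numpy as np
from scipy import ndimage

# Synthetic segmented volume: True = solid, False = void (pores)
rng = np.random.default_rng(0)
vol = np.ones((100, 100, 100), dtype=bool)
z, y, x = np.ogrid[:100, :100, :100]
for _ in range(60):                        # carve spherical pores of random size
    c = rng.integers(10, 90, size=3)
    r = int(rng.integers(3, 9))
    vol[(z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2 <= r * r] = False

porosity = 1.0 - vol.mean()                # void fraction of the volume
labels, n_pores = ndimage.label(~vol)      # connected pore regions (6-connectivity)
sizes = ndimage.sum_labels(~vol, labels, index=np.arange(1, n_pores + 1))
print(f"porosity = {porosity:.3f}, {n_pores} pores, "
      f"mean pore volume = {sizes.mean():.0f} voxels")
```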
- Publication: Beamforming techniques for wireless communications in low-rank channels: analytical models and synthesis algorithms (Università degli studi di Trieste, 2008-03-18)
  Comisso, Massimiliano; Mania', Lucio; Vescovo, Roberto
  The objective of this thesis is to discuss the application of multiple antenna technology in some selected areas of wireless networks and fourth-generation telecommunication systems. The original contributions of this study involve, mainly, two research fields in the context of the emerging solutions for high-speed digital communications: the mathematical modeling of distributed wireless networks adopting advanced antenna techniques, and the development of iterative algorithms for antenna array pattern synthesis. The material presented in this dissertation is the result of three years of study performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the course of the Doctorate in Information Engineering. In recent years, wireless communication systems have experienced an enormous increase in traffic, due to a significant growth in the number of users as well as to the development of new high bit rate applications, and it is foreseen that in the near future this trend will be confirmed. This challenging scenario involves not only the well-established market of cellular systems, but also the field of emerging wireless technologies, such as WiMAX (Worldwide interoperability for Microwave Access) for wireless metropolitan area networks, and Wi-Fi (Wireless Fidelity) for wireless local area networks, mobile ad-hoc networks and wireless mesh networks. The rapid diffusion of architectures adopting an ad-hoc paradigm, in which the network infrastructure is totally or partially absent and which can be deployed using low-cost self-configuring devices, has further enlarged the number of systems that have to coexist within a limited frequency spectrum. In such an evolving environment, the development of interference mitigation methods to guarantee communication reliability, and the implementation of proper radio resource allocation schemes for managing user mobility as well as for supporting multimedia and high-speed applications, represent the most relevant topics. Classic approaches focus on the use of the time-frequency resources of the propagation channel. However, to satisfy the increasing demand for network capacity while guaranteeing the necessary quality levels of the offered services, operators and manufacturers must explore new solutions. In this scenario, the exploitation of the spatial domain of the communication channel by means of multiple antenna systems can be a key improvement for enhancing the spectral efficiency of wireless systems. In a rich scattering environment, the use of multiple antennas enables the adoption of diversity and spatial multiplexing techniques for mitigating and, respectively, exploiting multipath fading effects. In propagation environments characterized by small angular spreads, the combination of antenna arrays and beamforming algorithms makes it possible to suppress the undesired sources and to receive the signals incoming from the desired ones. This leads to an increase of the signal-to-interference-plus-noise ratio at the receiver that can be exploited to produce relevant benefits in terms of communication reliability and/or capacity. A proper design of the medium access control layer of the wireless network can enable the simultaneous exchange of packets between different node pairs as well as the simultaneous reception of packets from multiple transmitters at a single node. Switched-beam antennas, adaptive antennas (also referred to as smart antennas) and phased antenna arrays represent some of the available beamforming techniques that can be applied to increase the overall system capacity and to mitigate the interference, in a scenario where several different technologies must share the same frequency spectrum. In the context of distributed wireless networks using multiple antenna systems, the core of this thesis is the development of a mathematical model to analyze the performance of the network in the presence of multipath fading, with particular reference to a scenario in which the signal replicas incoming at the receiver are confined within a small angle and are characterized by small relative delays. This propagation environment, referred to as low-rank, is the typical operating scenario of smart antennas, which require highly spatially correlated channels to work properly. The novel aspects of this study are the theoretical and numerical modeling of sophisticated adaptive antennas in conjunction with a detailed description of the channel statistics and of the IEEE 802.11 medium access control scheme. A theoretical model providing a more realistic perspective may be desirable, considering that, at present, not only cost and competition issues, but also overly optimistic expectations, compared with the first measurements in the field, have induced wireless operators to delay the adoption of smart antenna technology. The presented analysis includes the most relevant elements that can influence the network behavior: the spatial channel model, the fading statistics, the network topology, the access scheme, the beamforming algorithm and the antenna array geometry. This last aspect is numerically investigated considering that the size of the user terminal represents a strict constraint on the number of antennas that can be deployed on the device, and so the maximization of the performance becomes related to the geometrical distribution of the radiators. In ad-hoc and mesh networks, the typical communication devices, such as laptops, palmtops and personal digital assistants, require compact and cheap antenna structures as well as beamforming algorithms that are easy to implement. In particular, the low-cost characteristics have guaranteed a wide popularity to wireless mesh technology, which has encouraged the birth of a new social phenomenon, known as wireless community networks, whose objective is the reduction of the Internet access cost. The adoption of multi-antenna systems is the purpose of the IEEE 802.11n amendment, which, however, not considering modifications of the medium access control layer, provides higher bit rates for the single link but does not allow simultaneous communications between different pairs of nodes. This aspect must be taken into account together with the fact that, nowadays, IEEE 802.11x represents the leading family of standards for wireless local communications, and enhancement proposals have to pay careful attention to backward compatibility issues. The mathematical model presented in this thesis discusses the suitable parameter settings to exploit advanced antenna techniques in 802.11-based networks when the access scheme supports multiple communications at the same time, maintaining a realistic description of the antenna patterns and of the channel behavior. The presentation of two new iterative algorithms for antenna array pattern synthesis represents the core of the last part of this dissertation. The proposed solutions are characterized by implementation simplicity and low computational burden, and do not require the modification of the excitation amplitudes of the array elements. These advantages make the presented algorithms suitable for a wide range of communication systems, while also matching the inexpensiveness of mesh and ad-hoc devices. In particular, phase-only synthesis techniques allow the adoption of cheaper hardware, including only phase shifters, which are available at a reasonable price, while avoiding the use of the more expensive power dividers. The first presented algorithm employs the spatial statistics of the channel to properly place the pattern nulls, in order to suppress the undesired power incoming from a given angular interval. This solution exploits the improved knowledge of the spatial properties of the propagation environment to enhance the interference suppression capabilities at the transmitter and receiver sides. The second algorithm is a phase-only technique able to generate multiple nulls towards the undesired directions and multiple main lobes towards the desired ones. This method makes it possible to perform spatial multiplexing with low-cost electronic components. The thesis is organized in three parts. The first one provides the background material and represents the basis of the following arguments, while the other two parts are dedicated to the original results developed during the research activity. With reference to the first part, the fundamentals of antenna array theory are briefly summarized in the first chapter. The most relevant aspects of the wireless propagation environment are described in the second chapter, focusing on the characteristics of the spatial domain in a low-rank scenario. The third chapter presents a classification of the different multiple antenna techniques according to the channel properties and provides an overview of the most common beamforming algorithms. The fourth chapter introduces the most significant aspects of distributed wireless networks, presenting the main open issues and the current proposals for exploiting the potential offered by antenna array systems. The second part describes the original results obtained in the mathematical modeling of ad-hoc and mesh networks adopting smart antennas in realistic propagation scenarios. In particular, the fifth chapter presents the theoretical analysis for evaluating the number of simultaneous communications that can be sustained by a distributed wireless network using adaptive antennas in the presence of multipath. The sixth chapter extends this model to switched-beam antennas, while addressing the mobility aspects and discussing the cost-benefit tradeoff related to the use of multiple antenna techniques in today's wireless networks. A detailed throughput-delay analysis is performed in the seventh chapter, where the impact of advanced antenna systems on 802.11-based networks is investigated using a Markov chain model. The influence of the antenna array geometry is examined in the eighth chapter, adopting a numerical approach based on a discrete-time simulator able to take into account the details of the channel and of the antenna system behavior. The third part describes the original results obtained in the field of antenna array pattern synthesis. The ninth chapter presents the technique developed to modify the excitation phases of an antenna array in order to reject interferers spread over an angular region according to a given spatial statistic. The tenth chapter describes the iterative algorithm for phased arrays, which is able to produce low side-lobe level patterns with multiple prescribed main lobes and nulls. Finally, the eleventh chapter summarizes the thesis contributions and highlights the most important conclusions. The intent of the work presented hereafter is to examine the benefits deriving from the employment of smart antenna techniques from a realistic perspective, as well as to provide some useful solutions to improve the reliability of communications and to increase the network capacity.
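  To give a concrete flavour of phase-only synthesis (the actual algorithms of chapters nine and ten are not reproduced here), the sketch below steers the pattern of a small uniform linear array and digs a null by adjusting the excitation phases only, using a deliberately crude coordinate-descent search rather than the methods of the thesis:

```python
import numpy as np

# Phase-only synthesis on an 8-element, half-wavelength-spaced linear array:
# steer the main lobe to +20 deg and place a null at -30 deg by adjusting the
# excitation phases only (uniform amplitudes), via coordinate descent.
N, d = 8, 0.5                                     # elements, spacing in wavelengths

def af(phases, theta_deg):
    """Array factor samples at the given angles for unit-amplitude excitations."""
    theta = np.radians(np.atleast_1d(theta_deg))
    n = np.arange(N)[:, None]
    return np.exp(1j * (2 * np.pi * d * n * np.sin(theta) + phases[:, None])).sum(axis=0)

des, null = 20.0, -30.0
cost = lambda p: np.abs(af(p, null))[0] - 0.3 * np.abs(af(p, des))[0]

# Start from conventional phase-only steering towards the desired direction
phases = -2 * np.pi * d * np.arange(N) * np.sin(np.radians(des))
grid = np.linspace(-np.pi, np.pi, 73)
for _ in range(20):                                # sweeps of coordinate descent
    for k in range(N):
        trials = [cost(np.where(np.arange(N) == k, g, phases)) for g in grid]
        phases[k] = grid[int(np.argmin(trials))]

print(f"gain at {des} deg: {np.abs(af(phases, des))[0]:.2f} (max {N}), "
      f"residual at {null} deg: {np.abs(af(phases, null))[0]:.3f}")
```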
- Publication: Biomaterials for biotechnological applications: synthesis and activity evaluations (Università degli studi di Trieste, 2010-03-26)
  Perin, Danilo; Grassi, Mario; Grassi, Gabriele; Murano, Erminio
  A biomaterial was defined as any non-living material used in a medical device that interacts with biological systems. Many different applications involve the use of biomaterials: pharmacology, controlled drug release, extracorporeal devices (contact lenses, hemodialysis devices, cardiopulmonary bypass oxygenators), artificial prostheses. One of the most interesting applications of biomaterials concerns the release of nucleic acids and their derivatives as therapeutic agents. These molecules, defined as "nucleic acid based drugs" (NABDs), allow highly targeted modifications of cellular metabolism. The aim of this research project concerns the characterization of biomaterials for biotechnological applications and the evaluation of their activities. In particular, because of the great therapeutic and commercial interest and the delivery problems that are largely unresolved, the attention is focused on the study of new delivery systems for siRNA, proposed as a model system as it represents the most common and best characterized NABD. siRNA has proved useful in the context of in-stent restenosis, a pathology implying the re-occlusion of the artery due to the hyper-proliferation of smooth muscle cells induced by the presence of the stent, a metal prosthesis applied to avoid the elastic recoil of the artery wall after balloon angioplasty. In this system, the siRNA should act as an anti-proliferative agent on smooth muscle cells without interfering with endothelial cells. In order to design an appropriate delivery system, a precise structural and dimensional characterization of the polymeric mesh is crucial. This purpose was achieved by the use of various techniques such as rheology, low-field NMR and cryoporometry. Rheology allows the evaluation of the macroscopic mechanical properties of the system under investigation (Young's modulus and shear modulus, for example). Low-field NMR, instead, allows evaluating the microscopic properties and, coupled with rheology, provides an estimation of the polymeric mesh size distribution. Cryoporometry is another method to assess the mesh size distribution. In vivo release tests represent the final step of the experimental process. The ability of the polymeric system to carry and deliver the liposome-siRNA complexes was tested in culture models of smooth muscle cells and endothelial cells. The attention has been focused on polymeric hydrogels, whose biocompatibility and biodegradability are well known:
  - Alginate (polymer concentration 1%, 2%, 3%), crosslinked by Ca2+ or Cu2+ water solution
  - Pluronic™ F127 18% in water
  - Dextran 5% or 30% methacrylate (respectively D40MA5% and D500MA30%; polymer concentration 5%) crosslinked by UV
  - Gel systems derived from benzofulvene
  and on polymeric blends:
  - Pluronic™-alginate hydrogels, at 18% and 2% in water respectively
  - Dextran methacrylate-alginate, at 5% and 3% in water respectively (A3D40MA5% or A3D500MA30%)
- Publication: Cardiac pacing lead as hemodynamic sensor (Università degli studi di Trieste, 2011-03-31)
  Tomasic, Danko; Accardo, Agostino; Ferek-Petric, Bozidar
  Therapy delivery in modern cardiac electrotherapy systems is based almost exclusively on information about cardiac electrical depolarization. This kind of detection lacks any data about the myocardial contraction. An optimal heart rhythm control should integrate the assessment of the mechanical cardiac activity and of the related hemodynamic parameters into the already existing electrical signal analysis. A hemodynamic sensor integrated in pacing systems would be a valuable instrument for many applications, yet only a few hemodynamic sensors integrated in cardiac electrotherapy systems are currently available on the market. In order to fill the gap, I have explored the possibility of building a hemodynamic sensor for myocardial contraction detection that could be easily integrated in the existing cardiac pacing and defibrillator leads. In this thesis I propose two sensors: one is based on triboelectricity, the other requires the measurement of high-frequency lead parameters. The triboelectric sensor system measures the charge generated, as a result of the lead bending, by the triboelectric effect between one of the lead conductors and the inserted stylet. The measurement system consists of sterile charge amplifiers for use in the sterile operating field and a non-sterile enclosure containing isolation amplifiers and the power supply. Atrial and right-ventricular tensiometric signals were recorded during numerous ovine and human experiments and have shown good results under different measurement conditions. The main downside is the need for additional hardware, namely the chronic insertion of a stylet in the pacing lead lumen. The sensor based on the measurement of high-frequency (HF) pacing lead parameters has its origin in previous extensive in vitro experiments on the HF characteristics of the lead. These experiments have supported the idea of considering any bipolar lead to be a HF transmission line with its characteristic impedance and attenuation. An original study re-evaluating the HF parameters of leads after being soaked for more than a decade in saline solution is presented, together with a parallel study on new, dry leads. The hemodynamic HF sensor is based on the variation of the HF impedance and reflection coefficient of the pacing lead due to its movement during cardiac contractions. The quality of the signal was proven in a series of ovine and human experiments and during dobutamine tests in sheep. Both sensors would be feasible hemodynamic sensors for various applications: capture management, rate responsiveness, heart failure monitoring, CRT optimization, assessment of the hemodynamic stability of tachycardias, AF therapy titration and vasovagal syncope prediction. These two sensors are unique for their simplicity and their universality for all existing endovenous bipolar cardiac leads.
- Publication: Coupling of experimental and computational approaches for the development of new dendrimeric nanocarriers for gene therapy (Università degli studi di Trieste, 2015-03-23)
  Marson, Domenico; Pricl, Sabrina
  Gene therapy is increasingly important in the treatment of different types of maladies. The gene therapy approach can be fundamental in dealing with many kinds of tumors, viral infections (e.g., HIV, HSV) and disorders linked to genetic anomalies. However, the use of nucleic acids is limited by their ability to reach their site of action, the target cell and, often, the inside of its nucleus. Dendrimers, on the other hand, are an interesting kind of polymer, whose general synthetic scheme is of relatively recent development (∼1980). Among the many possible uses of these polymers, they have proved to be excellent nanocarriers for drugs in general, and particularly for genetic material. Many of the properties of these molecules are directly linked to their structure, which is in turn critically influenced by their molecular composition. Exploiting in silico techniques, much information about the atomistic structure of dendrimers can be revealed, some of which is otherwise difficult to gather. The interactions between the carrier and its cargo, and also with all the biological systems interposed between administration and the reaching of the target (e.g., serum proteins, lipid membranes...), are of critical importance in the development of new dendrimers for gene therapy. These interactions can be described and studied at a level of detail once unthinkable, thanks to the in silico simulation of these systems. In this thesis many different molecular simulation techniques are employed to give as precise a characterization as possible of the structure and interactions of new families of dendrimers. In particular, two new families of dendrimers (viologen and carbosilane) are structurally characterized, and their interactions with albumin and with two oligodeoxynucleotides, respectively, are described. The point of view of these interactions is then changed: the interactions between a fifth-generation triethanolamine-core poly(amidoamine) dendrimer (G5 TEA-core PAMAM) and a sticky siRNA are studied, varying the length and chemical composition of the overhangs of the siRNA. While studying dendrimers, new molecular simulation techniques were mastered and employed in other parallel projects: the steered molecular dynamics method is applied in the study of one mutation of the SMO receptor, and the development of biological membrane models (to be used in the future to study the interactions of dendrimers with such membranes) was also used to refine and better characterize the σ1 receptor 3D model previously developed by our research group. A detailed characterization of the putative binding site of this receptor is given, employing this refined model.
- Publication: Cross-layer design and analysis of cooperative wireless networks relying on efficient coding techniques (Università degli studi di Trieste, 2013-03-13)
  Crismani, Alessandro; Babich, Fulvio
  This thesis work aims at analysing the performance of efficient cooperative techniques and of smart antenna aided solutions in the context of wireless networks. In particular, original contributions include a performance analysis of distributed coding techniques for the physical layer of communication systems, the design of practical efficient coding schemes that approach the analytic limiting bound, the cross-layer design of cooperative medium access control systems that incorporate and benefit from advanced physical layer techniques, the study of the performance of such solutions under realistic network assumptions, and, finally, the design of access protocols where nodes are equipped with smart antenna systems.
- Publication: Design and modeling of a digital controller for multi-mode DC-DC converters (Università degli studi di Trieste, 2010-03-30)
  Meola, Marco; Carrato, Sergio; Bodano, Emanuele
  DC-DC converters able to provide high efficiency over a wide load range are employed in all applications involving battery-powered devices. In particular, the optimization of the efficiency of such converters at low load currents is one of the most challenging topics in DC-DC converter design. In multi-mode converters this efficiency is maximized by using different control strategies depending on the output current. In this context, the control of switching converters has traditionally been implemented in the analog domain with dedicated integrated circuits. However, as power systems become more and more complex, often consisting of interacting sub-systems, the classical concept of control has gradually evolved into the more general theme of power management, requiring functionalities that are hard to implement in analog controllers. The high flexibility offered by digital controllers and their aptitude for implementing sophisticated control strategies, together with the programmability of the controller parameters, make digital control an attractive alternative for improving the performance of multi-mode DC-DC converters. However, the most evident weak point of a digital controller lies in the achievable closed-loop dynamic performance. The time taken by the analog-to-digital conversion of the controlled quantity, the computation delays, as well as the delays associated with sampling, place severe limits on the maximum control bandwidth attainable in a digitally controlled converter. Further limitations are imposed by quantization effects in the control chain. For these reasons, the realization of digital controllers able to compete, in terms of dynamic performance, with classical analog solutions is the subject of intense scientific activity as well as of industrial interest. Moreover, although digital control appears capable of satisfying the needs mentioned above, analog-controlled DC-DC converters still dominate the market: digital control of DC-DC converters lacks the solid know-how behind analog controllers and is thus less accessible. This thesis work fits in this context. The main activity concerns the design and simulation of digitally controlled DC-DC converters, with the goal of studying efficiency optimization at low load currents. The structure of a multi-mode digital controller for low-cost, low-power converters is presented. Decision criteria for selecting the control strategy according to the load conditions are proposed and tested on an experimental DC-DC converter prototype in which the digital control is implemented on an FPGA. Moreover, a discrete-time large-signal model of the converter power stage, conceived specifically to describe the converter behaviour in the transition from one control strategy to another, provides a useful tool for designing switching converters on the basis of the proposed decision criteria, highlighting the design issues of power management systems.
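  As background for the control problem discussed above (not the multi-mode controller of the thesis), the sketch below simulates an averaged buck converter regulated by a discrete-time PI compensator, with illustrative component values and gains:

```python
import numpy as np

# Averaged buck converter (no switching ripple) with a discrete-time PI
# compensator updated once per switching period Ts. Illustrative values only.
Vin, L, C, R = 12.0, 22e-6, 100e-6, 2.0        # input voltage, filter, load
fs = 200e3; Ts = 1.0 / fs                      # switching / control frequency
kp, ki, vref = 0.005, 150.0, 3.3               # PI gains and voltage reference

iL = vout = integ = 0.0
steps = 50; dt = Ts / steps                    # inner Euler steps per period
for _ in range(2000):                          # 10 ms of converter operation
    err = vref - vout                          # error sampled once per period
    integ += ki * err * Ts
    duty = float(np.clip(kp * err + integ, 0.0, 1.0))
    for _ in range(steps):                     # duty-averaged power stage
        iL += (duty * Vin - vout) / L * dt
        vout += (iL - vout / R) / C * dt

print(f"steady-state Vout = {vout:.3f} V (target {vref} V, duty = {duty:.3f})")
```

  A multi-mode controller would switch between compensators of this kind (e.g., PWM at heavy load, burst or pulse-skipping schemes at light load) according to the measured output current.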
- Publication: Design and synthesis of functionalized metal nanoparticles for bio-analysis with surface-enhanced Raman scattering (SERS) (Università degli studi di Trieste, 2013-04-09)
  Marsich, Lucia; Bonifacio, Alois; Sergo, Valter
  The objective of this doctoral research is the development and the implementation of SERS-active substrates with biological samples. The substrates consist of coated silver nanoparticles synthesized by chemical reduction of a silver salt. The biological samples are the anionic chromophore bilirubin and two heme proteins, the cationic cytochrome c and the anionic cytochrome b5. In the first part of this work, positively charged nanoparticles were prepared by coating citrate-reduced silver nanoparticles with the cationic polymer poly-L-lysine and were employed with bilirubin in the experiments listed below:
  - detection of nanomolar bilirubin concentrations in aqueous solutions, showing that the SERS intensity increases linearly with concentration in a range from 10 nM to 200 nM, allowing quantitative analysis of bilirubin aqueous solutions;
  - indirect quantification of bilirubin cellular uptake, demonstrating the ability to detect bilirubin also in a buffer solution suitable for cell growth, at pH 7.4. Since the quantification of bilirubin at this pH is no longer possible, the poly-L-lysine was substituted by two polymers with a quaternary nitrogen atom;
  - bilirubin measurement in serum, where TEM images highlighted the formation of an albumin layer around the nanoparticles, blocking the interaction between bilirubin and the nanoparticles. Hence the citrate-reduced silver nanoparticles were coated with a hydrophobic capping and re-dispersed in hexane, to avoid the albumin layer around the nanoparticles.
  In the second part of this doctoral thesis, silver nanoparticles were prepared via a seed-growth method and subsequently coated with chitosan or silica, in order to obtain positively or negatively charged nanoparticles respectively. Such substrates enhance the spectrum of cytochrome c and cytochrome b5 on a polished silver electrode without directly interacting with the protein. Thanks to the presence of chitosan- or silica-coated nanoparticles, cytochrome c and cytochrome b5 can also be detected on a gold substrate.
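  Quantitative analysis over the reported 10-200 nM linear range reduces to a calibration line; the sketch below fits one by least squares. The linear range is from the abstract, but the intensity values are made up for illustration:

```python
import numpy as np

# Hypothetical SERS calibration: band intensity vs bilirubin concentration,
# linear in the reported 10-200 nM range; intensities here are invented.
conc = np.array([10.0, 25.0, 50.0, 100.0, 150.0, 200.0])        # nM
intensity = 12.0 + 0.85 * conc + np.random.default_rng(1).normal(0.0, 4.0, conc.size)

slope, intercept = np.polyfit(conc, intensity, 1)               # least-squares line
unknown = 98.0                                                  # intensity of an unknown sample
print(f"slope = {slope:.3f} counts/nM, "
      f"estimated concentration = {(unknown - intercept) / slope:.0f} nM")
```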
- Publication: Detection and measurement of artefacts in digital video frames (Università degli studi di Trieste, 2012-03-15)
  Abate, Leonardo; Ramponi, Giovanni
  This work presents original algorithms for the measurement of artefacts impairing the quality of digital video sequences. The intended use of these algorithms is the control of the restoration processes performed on the video in advanced monitors for consumer applications. The problem of artefact measurement at this stage of the processing chain differs from the assessment of quality performed in other applications. Quality assessment aimed at the improvement of the encoding operation, for example, can be done using the original sequence for comparison, based on the pixel-by-pixel differences with it. Quality measurements in intermediate stages of the transmission chain, where the sequence is available in compressed form, can employ useful information contained in the bitstream, such as the frequency distribution of the frame content, the bitrate, the quantisation step and the error rate, all factors related to the global quality. In the proposed application, i.e. at the monitor, the measurements of the frame degradation must instead take place on the decoded numerical values of the pixels of the sole altered sequence. In addition, these measurements should require a low computational cost, so that they can be used in real time. In the first part of this work some of the existing methods for quality assessment are briefly overviewed and classified based on the chosen approach to the problem. In this overview three main classes of methods are identified, namely the methods based on the measurement of specific frame and video artefacts, the methods measuring the discrepancies between some statistical properties of the pixel distribution or the sequence parameters and ideal models, and the methods processing highly generic measures with trained classifiers. The first strategy is deemed the most promising in the intended application, due to the good results achieved with relatively little computation and the possibility of avoiding a long and complex training phase. The proposed algorithms are therefore based on the measurement of specific video artefacts. A second part of the work is devoted to the identification of the main potential degradation factors in one of the most recent encoding standards, namely H.264. The main aspects of frame degradation, namely blockiness in smooth areas and texture, edge degradation and blurriness, are identified, and their relationship to the encoding options is briefly examined. Based on this inspection, two of the most common artefacts of the transmitted video, namely blurriness and blockiness, are chosen for measurements estimating the picture quality degradation. The devised algorithms integrate measures of the inter-pixel relationships determined by the artefacts with models of human vision, to quantify their subjective appearance. For the blurriness measurement two methods are proposed, the first acting selectively on object edges, the second uniformly on the frame surface. In the measurement of edge blurriness the hierarchical role of each edge is estimated, distinguishing between the marginal edges of the detail and the edges of the main objects of the frame. The former have reduced contrast and short length compared to the edges of the surrounding shapes, and have little effect on the overall blurriness impression. Conversely, the state of the latter is the main factor responsible for the frame quality appearance. The edge blurriness measure is based on the edge width and steepness, corrected with the edge length and the activity of the surrounding scene. This measure is further corrected with a measure of the local scene clutter, accounting for the fact that in cluttered scenes the perception of the artefact is reduced. The resulting method yields blurriness measurements in local frame parts. The correlation of these measurements with the subjective impression is evaluated in experimental tests. The two metrics acting uniformly on the frame measure the decrement in perceived contrast and the lack of detail, respectively. Used together, they are effective in identifying special types of blurriness resulting in the generation of large areas with few edges and little contrast. These forms of blurriness generally cause a milder degradation of the perceived quality compared to the blurriness caused by encoding. The ability to distinguish among blurriness types and corresponding quality ranges is verified in experimental tests. The artefacts resulting from block-based compression are also analysed with a method acting on the sole edges and another applied to the whole frame. The edge degradation, consisting in an unnatural geometric alteration of the main objects, is measured from the frequency and length of straight edge fractions and from the incidence of square corners. A correction procedure is introduced in order to avoid false alarms caused by natural polygonal objects and by the intrinsic nature of digital pictures. The measurement of the blocking artefact on the frame surface, which appears altered by an unnatural grid, is performed with an original solution especially devised for video frames, aimed at detecting the displacement of the synthetic block edges caused by the motion compensation performed in video encoding. For this purpose very sensitive local blockiness indicators are devised and corrected with models of the human perception of luminance discontinuities, in order to avoid false alarms. Vision models are further integrated in the computation of a global frame blockiness measure consisting in a weighted sum of local measures at detection points. The metric is tested with respect to its constancy on subsequent frames, its robustness to upscaling and its correlation with the quality ratings produced in experiments by a group of human observers.
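  A toy version of a blockiness indicator (not the thesis metric, which adds motion-compensated grid displacement detection and perceptual corrections) compares the luminance discontinuities on the nominal 8x8 block grid with those elsewhere:

```python
import numpy as np

def blockiness(frame, block=8):
    """Ratio of mean |horizontal gradient| on block-boundary columns to the
    mean elsewhere; near 1 for natural images, > 1 when a block grid is present."""
    grad = np.abs(np.diff(frame.astype(float), axis=1))  # diff between columns j and j+1
    cols = np.arange(grad.shape[1])
    on_grid = (cols % block) == block - 1                # boundaries of the 8x8 grid
    return grad[:, on_grid].mean() / grad[:, ~on_grid].mean()

rng = np.random.default_rng(0)
smooth = rng.normal(128.0, 3.0, (64, 64))                # mildly noisy flat frame
blocky = smooth + np.kron(rng.normal(0.0, 8.0, (8, 8)), np.ones((8, 8)))  # per-block offsets
print(f"smooth: {blockiness(smooth):.2f}, blocky: {blockiness(blocky):.2f}")
```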
- Publication: Development of algorithms and methods for three-dimensional image analysis and biomedical applications (Università degli studi di Trieste, 2012-03-15)
  Brun, Francesco; Accardo, Agostino; Mancini, Lucia
  Tomographic imaging is both the science and the tool used to explore the internal structure of objects. The mission is to use images to characterize the static and/or dynamic properties of the imaged object, in order to further integrate these properties into principles, laws or theories. Among the recent trends in tomographic imaging, three-dimensional (3D) methods are gaining preference, and there is a quest to move beyond bare qualitative observation towards the extraction of quantitative parameters directly from the acquired images. To this aim, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), as well as the related micro-scale techniques (μ-CT and μ-MRI), are promising tools for all the fields of science in which non-destructive tests are required. In order to support the interpretation of the images produced by these techniques, there is a growing demand for reliable image analysis methods for the specific 3D domain. The aim of this thesis is to present approaches for effective and efficient three-dimensional image analysis, with special emphasis on porous media analysis. State-of-the-art as well as innovative tools are included in a special software and hardware solution named Pore3D, developed in collaboration with the Italian third-generation synchrotron laboratory Elettra (Basovizza - Trieste, Italy). Algorithms and methods for the characterization of different kinds of porous media are described. The key steps of image segmentation and skeletonization of the segmented pore space are also discussed in depth. Three different clinical and biomedical applications of quantitative analysis of tomographic images are presented. The reported applications have in common the characterization of the micro-architecture of trabecular bone. The trabecular (or cancellous) bone is a 3D meshwork of bony trabeculae and void spaces containing the bone marrow; it can therefore be thought of as a porous medium with an interconnected pore space. More specifically, the first application aims at characterizing a structure (a tissue engineering scaffold) that has to mimic the architecture of trabecular bone. The relevant features of porosity, pore- and throat-size distributions, connectivity and structural anisotropy indexes are automatically extracted from μ-CT images. The second application is based on ex vivo experiments carried out on femurs and lumbar spines of mice exposed to microgravity conditions. Wild-type and transgenic mice were hosted in the International Space Station (ISS) for 3 months, and the observed bone loss due to the near-zero gravity was quantified by means of synchrotron radiation μ-CT image analysis. Finally, the results of an in vivo study on the risk of fracture in osteoporotic subjects are reported. The study is based on texture analysis of high-resolution clinical magnetic resonance (MR) images.
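  Texture analysis of trabecular bone images commonly includes fractal descriptors; the sketch below is a generic box-counting dimension estimator run on a synthetic binary texture (an illustration of the principle, not Pore3D code):

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal (box-counting) dimension of a binary 2D mask."""
    n = mask.shape[0]
    sizes = [s for s in (1, 2, 4, 8, 16, 32) if s < n]
    counts = []
    for s in sizes:
        m = mask[: n - n % s, : n - n % s]              # crop to a multiple of s
        boxes = m.reshape(n // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())                      # boxes containing foreground
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(2)
texture = rng.random((128, 128)) < 0.3                  # synthetic sparse binary texture
print(f"box-counting dimension = {box_counting_dimension(texture):.2f}")
```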
;Bulian, GabrieleFrancescutto, AlbertoParametrically excited roll motion has become a relevant technical issue, especially in recent years, due the increasing number of accidents related to this phenomenon. For this reason, its study has attracted the interest of researchers, regulatory bodies and classification societies. The objective of this thesis is the developing of nonlinear analytical models able to provide simplified tools for the analysis of parametrically excited roll motion in longitudinal regular and irregular long crested waves. The sought models will take into account the nonlinearities of restoring and of damping, in order to try filling the gap with the analytical modelling in beam sea. In addition, semi-empirical methodologies will be provided to try extending the usual static approach to ship stability based on the analysis of GZ curve, in a probabilistic framework where the propensity of the ship to exhibit restoring variations in waves is rationally accounted for. The thesis addresses three main topics: the modelling of parametric roll in regular sea (Chapter 2 to Chapter 5), the modelling of parametric roll motion in irregular long crested sea (Chapter 6 and Chapter 7) and the extension of deterministic stability criteria based on the analysis of geometrical GZ curve properties to a probabilistic framework (Chapter 8). Chapter 1 gives an introduction, whereas Chapter 9 reports a series of final remarks. For the regular sea case an analytical model is developed and analysed both in time domain and in frequency domain. In this latter case an approximate analytical solution for the nonlinear response curve in the first parametric resonance region is provided by using the approximate method of averaging. Prediction are compared with experimental results for four ships, and the analytical model is investigated with particular attention to the presence of multiple stable steady states and the inception of chaotic motions. The influence of harmonic components higher than the first one in the fluctuation of the restoring is also investigated. In the case of irregular sea, the Grim's effective wave concept is used to develop an analytical model for the long crested longitudinal sea condition, that allows for an approximate analytical determination of the stochastic stability threshold in the first parametric resonance region. Experimental results are compared with Monte Carlo simulations on a single ship, showing the necessity of a tuning factor reducing the hydrostatically predicted magnitude of parametric excitation. The non-Gaussianity of parametrically excited roll motion is also discussed. Finally, on the basis of the analytical modelling of the restoring term in irregular waves, an extension of the classical deterministic approach to ship static stability in calm water is proposed, to take into account, although is a semi-empirical form, restoring variations in waves. Classical calm water GZ curve is then extended from a deterministic quantity to a stochastic process. By limiting the discussion to the instantaneous ensemble properties of this process, it is shown how it is possible to extend any static stability criterion based on the geometrical properties of the GZ curve, in a rational probabilistic framework taking into account the actual operational area of the ship and the propensity of the ship to show restoring variations in waves. 
General measures of the restoring variations are also discussed, such as the coefficient of variation of the metacentric height, of the restoring lever and of the area under the GZ curve. Both the short-term and the long-term points of view are considered, and the method is applied to three different ships in different geographical areas.
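As background for the regular-sea modelling summarized above, the simplest classical description of parametric roll (not the thesis's fully nonlinear model, which also includes nonlinear damping and restoring) is a damped Mathieu-type equation in which the restoring term fluctuates at the wave encounter frequency:

    \ddot{\phi} + 2\mu\,\dot{\phi} + \omega_0^{2}\left[1 + h\cos(\omega_e t)\right]\phi = 0

Here \phi is the roll angle, \mu the linear damping coefficient, \omega_0 the natural roll frequency, h the relative magnitude of the restoring (GM) fluctuation and \omega_e the encounter frequency; the first parametric resonance region referred to above develops around \omega_e \approx 2\omega_0.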
- Publication Development of micro electro mechanical devices for the study of mechanosensitive ion channels and mechanical cell properties (Università degli studi di Trieste, 2012-03-29)
;Fior, Raffaella ;Sbaizero, Orfeo
The objectives of this doctoral research involve the development of tools, in particular micro- and nano-devices, for the study of the mechanical properties of single living cells and for the analysis of mechanosensitive ion channels (MSCs). BioMEMS (Biological Micro Electro Mechanical Systems) have been devised and used to investigate MSCs and cell mechanics in a completely new way. Living cells in adhesion can be studied under physiological conditions; the mechanical stretch can be controlled and measured; and MSC activity can be evaluated using different techniques, from patch clamp to AFM (atomic force microscopy) or fluorescence assays. Silicon BioMEMS have been designed and tested to evaluate morphological modifications of the stretched cells, and hysteretic behavior has been assessed. However, since they are not transparent, the use of these devices has been limited. Completely transparent devices have therefore also been designed and microfabricated. These BioMEMS will allow cells to be tested while combining measurements of the mechanical properties, the cell's morphology (with optical systems and atomic force microscopy) and MSC activity (with patch clamp and/or conductive AFM). In this doctoral research, BioMEMS have been devised and realized, the measurement set-up has been optimized and a surface treatment protocol has been developed.
- Publication Distributed Discrete Consensus Algorithms: Theory and Applications for the Task Assignment Problem (Università degli studi di Trieste, 2015-04-13)
;Pedroncelli, Giovanni ;Ukovich, Walter ;Fanti, Maria Pia
Distributed computation paradigms belong to a research field of increasing interest. These algorithms will make it possible to exploit the capabilities of large-scale networks and systems in the near future. The information relevant to the resolution of a problem is distributed among a network of agents with limited memory and computation capability; the problem is solved only by means of local computation and message exchange between neighbouring agents. In this thesis we consider the multi-agent assignment problem addressed through distributed computation: a network of agents has to cooperatively negotiate the assignment of a number of tasks by applying a distributed discrete consensus algorithm which defines how the agents exchange information. Consensus algorithms appear ever more frequently in the scientific literature. Therefore, in the first chapter of this thesis we present a literature review containing some of the most interesting works concerning distributed computation and, in particular, distributed consensus algorithms: some of these works deal with the theory of consensus algorithms, especially their convergence properties, while others deal with applications of these algorithms. In the second chapter the main contribution of this thesis is presented: an iterative distributed discrete consensus algorithm, based on the resolution of local linear integer optimization problems (L-ILPs), to be used for the multi-agent assignment problem. The algorithm is characterized by theorems proving convergence to a final solution and bounding the convergence time in terms of the number of iterations. The chapter concludes with a performance analysis based on the results of simulations performed with Matlab. All the results are presented for two different network topologies, in order to model two different real-life scenarios for the connections among agents. The third chapter presents an interesting application of the proposed algorithm: a network of charging stations (considered as agents) has to reach a consensus on the assignment of a number of Electric Vehicles (EVs) requiring recharge. In this application the algorithm proposed in the previous chapter undergoes several modifications in order to model this case effectively: when the inter-arrival times of vehicles at a charging station are considered, a non-linear element appears in the objective function, and therefore a novel algorithm to be performed before the assignment algorithm is presented; this algorithm defines the order in which the assigned vehicles have to reach a charging station. Moreover, a communication protocol is proposed by which charging stations and vehicles can communicate and exchange information, also allowing charging stations to send each assigned vehicle the maximum waiting time that can pass before the vehicle loses its right to be recharged. The chapter ends with an example of application of the revisited assignment algorithm. In the fourth and last chapter, we present an application in an industrial environment: a network of Autonomous Guided Vehicles (AGVs) in a warehouse modeled as a graph has to perform the distributed discrete consensus algorithm in order to assign among themselves a set of destinations in which tasks are located.
This application deals not only with the task assignment problem but also with the subsequent destination-reaching problem: a distributed coordination algorithm is therefore proposed which allows the AGVs to move through the warehouse while avoiding collisions and deadlocks. An example of the application of the control strategy, involving both the assignment and the coordination algorithms, concludes the chapter.
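To give a feel for how agents can agree on a discrete assignment purely through neighbour-to-neighbour exchanges, here is a toy max-consensus sketch in Python: each agent initially bids its own utility on every task, repeated exchanges propagate the highest bid through the network, and all agents end up agreeing on each task's winner. The graph, utilities and tie-breaking are illustrative, and this deliberately simple scheme stands in for, but is not, the thesis's L-ILP-based algorithm.

    # Toy max-consensus task assignment over a fixed communication graph.
    n_agents, n_tasks = 4, 3
    # utilities[i][k]: value agent i attaches to task k (illustrative numbers)
    utilities = [[3, 1, 4], [2, 5, 1], [4, 2, 2], [1, 3, 5]]
    neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a line graph

    # Local state: for every task, the best (bid, bidder) pair seen so far.
    state = [{k: (utilities[i][k], i) for k in range(n_tasks)}
             for i in range(n_agents)]

    # Synchronous max-consensus rounds; n_agents - 1 rounds suffice on a
    # line graph, since information travels one hop per round.
    for _ in range(n_agents - 1):
        snapshot = [dict(s) for s in state]
        for i in range(n_agents):
            for j in neighbors[i]:
                for k in range(n_tasks):
                    state[i][k] = max(state[i][k], snapshot[j][k])

    # Every agent now holds the same winner for each task.
    assignment = {k: state[0][k][1] for k in range(n_tasks)}
    print(assignment)  # {0: 2, 1: 1, 2: 3}

Unlike this sketch, the algorithm in the thesis iterates local integer optimizations and comes with proven convergence-time bounds, but the communication pattern (local state plus neighbour exchange) is the same idea.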
- Publication Distributed fault detection and isolation of large-scale nonlinear systems: an adaptive approximation approach (Università degli studi di Trieste, 2009-04-20)
;Ferrari, Riccardo ;Parisini, Thomas ;Polycarpou, Marios
The present thesis introduces recent and novel results on the problem of fault diagnosis for distributed, nonlinear and large-scale systems. The problem of automated fault diagnosis and accommodation is motivated by the need to develop more autonomous and intelligent systems that operate reliably in the presence of faults. In dynamical systems, faults are characterized by critical and unpredictable changes in the system dynamics, thus requiring the design of suitable fault diagnosis schemes. A fault diagnosis scheme that has drawn considerable attention and provided remarkable results is the so-called model-based scheme, which relies upon a mathematical model of the healthy behavior of the system being monitored. At each time instant, the model is used to compute an estimate of what the current behavior of the system should be, assuming it is not affected by a fault. If the behavior of the system is characterized by the time evolution of its state vector x(t), and the inputs to the system are denoted by u(t), then the most general nonlinear and uncertain discrete-time model can be represented by x(t + 1) = f(x(t), u(t)) + η(t), where the nonlinear function f represents the nominal model of the healthy system and η(t) is an uncertainty term. A proven way to compute an estimate of the state x(t) is to use a diagnostic observer, so that in healthy conditions the residual between the true and the estimated value is, in practice, close to zero. Should the residual at some point cross a suitable threshold ε̄(t), the observed difference between the model estimate and the actual measurements is explained by the presence of a fault. The model-based scheme outlined so far has shown many interesting properties and advantages over signal-based ones, but poses practical implementation problems when applied to actual distributed, large-scale systems. An implicit assumption of the model-based scheme is that the task of measuring all the components of the state and input vectors, and the task of computing the estimate of x(t), can be carried out in real time by a single, powerful computer. For large enough systems, this assumption cannot be fulfilled by available measurement, communication and computation hardware. This problem constitutes the motivation of the present work. It is solved by developing decomposition strategies that break down the original centralized diagnosis problem into many distributed diagnosis subproblems, tackled by agents called Local Fault Diagnosers that have a limited view of the system but are allowed to communicate with neighboring agents. In order to take advantage of the distributed nature of the proposed schemes, the agents are allowed to cooperate, using consensus techniques, on the diagnosis of parts of the system shared by more than one diagnoser. Chapter 2 introduces the problem of model-based fault diagnosis by presenting recent results on the centralized diagnosis of uncertain nonlinear discrete-time systems. The development of a distributed fault diagnosis architecture is covered in the key Chapter 3, while Chapters 4 and 5 show how this distributed architecture is implemented for discrete- and continuous-time nonlinear and uncertain large-scale systems. Each chapter provides an illustrative example, as well as analytical results that characterize the performance attainable by the proposed architecture.
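The residual test described above is easy to illustrate. The following Python sketch implements the general pattern, a diagnostic observer driven by the nominal model f plus an output-error correction, with the residual compared against a constant threshold; the scalar dynamics, gain, bound and fault profile are invented for the example and are not taken from the thesis, which uses adaptive approximators and time-varying thresholds.

    import math

    def f(x, u):                       # nominal model of the healthy system
        return 0.8 * x + 0.5 * math.sin(u)

    eta_bar = 0.05                     # assumed bound on the uncertainty η(t)
    L = 0.5                            # observer correction gain
    threshold = 3.0 * eta_bar          # crude constant stand-in for ε̄(t)

    x, x_hat = 0.0, 0.0
    for t in range(100):
        u = math.sin(0.1 * t)
        fault = 0.4 if t >= 60 else 0.0         # additive fault from t = 60
        eta = 0.02 * math.cos(7.0 * t)          # bounded uncertainty
        x, x_hat = (f(x, u) + eta + fault,      # true (possibly faulty) plant
                    f(x_hat, u) + L * (x - x_hat))  # diagnostic observer
        residual = abs(x - x_hat)
        if residual > threshold:
            print(f"fault detected at t = {t}, residual = {residual:.3f}")
            break

In healthy conditions the residual stays inside the uncertainty-driven band below the threshold; the additive fault pushes it across the threshold at the first faulty step.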
- Publication Distributed Methods for Estimation and Fault Diagnosis: the case of Large-scale Networked Systems (Università degli studi di Trieste, 2013-03-13)
;Boem, Francesca ;Parisini, Thomas ;Polycarpou, Marios ;Ferrari, Riccardo
The objective of this thesis is the monitoring of complex large-scale systems. The importance of this topic stems from the renewed emphasis on system safety and reliability, which have become fundamental design requirements. Indeed, the growing complexity of modern systems, in which the relationships among the components, with the external world and with the human factor are increasingly important, implies growing attention to the risks and costs of faults and the development of new approaches to control and monitoring. While estimation and fault diagnosis problems have been widely studied in the centralized setting, the development of methodologies specific to distributed, large-scale or networked systems, such as Cyber-Physical Systems and Systems-of-Systems, has begun only in recent years. The physical system is represented as the interconnection of subsystems obtained through a decomposition of the complex system in which overlaps are allowed. The approach is based on the nonlinear dynamic model of the subsystems and on the adaptive approximation of the unknown interconnections among them. The novelty is the proposal of a single architecture that accounts for the multiple aspects constituting modern systems, integrating the physical system, the sensor level and the diagnostic system, and considering the relationships among these environments and the communication networks. In particular, solutions are proposed to the problems arising from the use of communication networks and from considering distributed and networked systems. The measurement process is carried out by a set of sensor networks, decoupling the physical level from the diagnostic one and thereby increasing the scalability and reliability of the overall diagnostic system. A new distributed estimation method for sensor networks is used to filter the measurements, minimizing both the mean and the variance of the estimation error through the solution of a Pareto optimization problem. A method for the re-synchronization of the measurements is proposed to manage multi-rate systems and asynchronous measurements and to compensate for the effect of delays in the communication network between sensors and diagnosers. Since one of the most important problems when considering distributed systems and communication networks is precisely the occurrence of transmission delays and packet losses, a delay compensation strategy is proposed, based on the use of time stamps and buffers and on the introduction of a time-varying consensus matrix, which makes it possible to manage the problem of delays in the communication network among diagnosers. Distributed schemes for fault detection and isolation are developed, guaranteeing the convergence of the estimators and deriving sufficient conditions for detectability and isolability. The proposed time-varying matrix improves these properties by defining less conservative thresholds. Experimental results prove the effectiveness of the proposed method. Finally, the distributed detection and isolation architectures, developed in the discrete-time case, are extended to the continuous-time case and to the scenario in which the state is not completely measurable, both in continuous and in discrete time.
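As a small illustration of the consensus machinery underlying the estimation scheme summarized above, the following Python sketch fuses noisy measurements across a four-node sensor ring by repeated averaging with a doubly stochastic consensus matrix W, so every node converges to the network-wide mean. The Pareto-optimal weight design, the time stamps and buffers, and the time-varying consensus matrix of the thesis are not reproduced; W, the topology and the data are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    true_value = 10.0
    z = true_value + rng.normal(0.0, 0.5, size=4)   # noisy local measurements

    # Doubly stochastic consensus matrix for a 4-node ring topology.
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])

    x = z.copy()
    for _ in range(30):
        x = W @ x        # each node averages with its two ring neighbours
    print(x, z.mean())   # all local estimates converge to the mean of z

Because W is doubly stochastic, the iteration converges to the average of the initial measurements, reducing the measurement variance roughly by a factor of the network size for independent noise; a time-varying W, as proposed in the thesis, serves to retain such guarantees when communication delays and packet losses perturb the exchanges.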