Ingegneria industriale e dell'informazione
Browsing Ingegneria industriale e dell'informazione by Issue Date
Now showing 1 - 20 of 80
- Publication: DEVELOPMENT OF ANALYTICAL NONLINEAR MODELS FOR PARAMETRIC ROLL AND HYDROSTATIC RESTORING VARIATIONS IN REGULAR AND IRREGULAR WAVES (Università degli studi di Trieste, 2006-03-31)
Bulian, Gabriele; Francescutto, Alberto
Parametrically excited roll motion has become a relevant technical issue, especially in recent years, due to the increasing number of accidents related to this phenomenon. For this reason, its study has attracted the interest of researchers, regulatory bodies and classification societies. The objective of this thesis is the development of nonlinear analytical models able to provide simplified tools for the analysis of parametrically excited roll motion in longitudinal regular and irregular long-crested waves. The models take into account the nonlinearities of restoring and damping, in order to fill the gap with the analytical modelling available for the beam-sea case. In addition, semi-empirical methodologies are provided to extend the usual static approach to ship stability, based on the analysis of the GZ curve, to a probabilistic framework in which the propensity of the ship to exhibit restoring variations in waves is rationally accounted for. The thesis addresses three main topics: the modelling of parametric roll in regular seas (Chapter 2 to Chapter 5), the modelling of parametric roll motion in irregular long-crested seas (Chapter 6 and Chapter 7), and the extension of deterministic stability criteria based on the geometrical properties of the GZ curve to a probabilistic framework (Chapter 8). Chapter 1 gives an introduction, whereas Chapter 9 reports a series of final remarks. For the regular-sea case an analytical model is developed and analysed both in the time domain and in the frequency domain. In the latter case an approximate analytical solution for the nonlinear response curve in the first parametric resonance region is obtained by means of the approximate method of averaging. Predictions are compared with experimental results for four ships, and the analytical model is investigated with particular attention to the presence of multiple stable steady states and the inception of chaotic motions. The influence of harmonic components higher than the first in the fluctuation of the restoring is also investigated. For the irregular-sea case, Grim's effective wave concept is used to develop an analytical model for the long-crested longitudinal sea condition that allows an approximate analytical determination of the stochastic stability threshold in the first parametric resonance region. Experimental results are compared with Monte Carlo simulations for a single ship, showing the need for a tuning factor that reduces the hydrostatically predicted magnitude of the parametric excitation. The non-Gaussianity of parametrically excited roll motion is also discussed. Finally, on the basis of the analytical modelling of the restoring term in irregular waves, an extension of the classical deterministic approach to ship static stability in calm water is proposed, to take into account, although in a semi-empirical form, restoring variations in waves. The classical calm-water GZ curve is thus extended from a deterministic quantity to a stochastic process. By limiting the discussion to the instantaneous ensemble properties of this process, it is shown how any static stability criterion based on the geometrical properties of the GZ curve can be extended to a rational probabilistic framework that takes into account the actual operational area of the ship and its propensity to show restoring variations in waves.
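As an editorial aside, the following is a minimal numerical sketch (not the thesis model) of the kind of equation the abstract refers to: a one-degree-of-freedom roll equation with a harmonically fluctuating restoring term and nonlinear damping. All coefficients are illustrative placeholders.

```python
# Toy 1-DOF parametric roll model: a damped Mathieu-type equation with a cubic
# restoring term. Coefficients are illustrative placeholders, not thesis data.
import numpy as np
from scipy.integrate import solve_ivp

mu, delta = 0.02, 0.3          # linear and quadratic damping (assumed)
w0 = 0.5                       # natural roll frequency [rad/s] (assumed)
h, we = 0.3, 2 * w0            # GM fluctuation amplitude, encounter frequency
c3 = 0.4                       # cubic restoring coefficient (assumed)

def roll(t, y):
    phi, dphi = y
    restoring = w0**2 * (1.0 + h * np.cos(we * t)) * (phi - c3 * phi**3)
    damping = 2 * mu * dphi + delta * dphi * abs(dphi)
    return [dphi, -damping - restoring]

sol = solve_ivp(roll, (0, 600), [0.05, 0.0], max_step=0.1)
print(f"late-time roll amplitude ≈ {np.max(np.abs(sol.y[0][-2000:])):.3f} rad")
```

With the encounter frequency set near twice the natural roll frequency (the first parametric resonance region) the small initial disturbance grows and is eventually bounded by the nonlinear terms; the steady amplitude depends strongly on the damping and restoring coefficients, which is exactly the kind of dependence the averaging analysis characterizes.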
General measures of restoring variations are also discussed, such as the coefficient of variation of the metacentric height, of the restoring lever and of the area under the GZ curve. Both the short-term and the long-term points of view are considered, and the method is applied to three different ships in different geographical areas.
- Publication: IMAGE PROCESSING FOR SECURITY APPLICATIONS: DOCUMENT RECONSTRUCTION AND VIDEO ENHANCEMENT (2007-03-26)
Ukovich, Anna; Ramponi, Giovanni
Image and video processing play an important role in the development of technologies for dealing with security issues: surveillance cameras are widely deployed as a means of crime reduction, and image analysis tools are used in the forensics field. In this thesis two problems are considered: the reconstruction of documents that have been reduced to a heap of paper strips by a shredder device, and the enhancement of poorly illuminated surveillance videos. The system architecture we developed for the computer-based re-assembly of shredded documents includes as a first step the acquisition of the strips with a scanner. After a pre-processing step, each strip is represented by a digital image. A binary mask is then generated, which separates the strip from the acquisition background. In order to perform the reconstruction, the visual content of the strips must be properly coded, since the piece shape, commonly used for the reconstruction of jigsaw puzzles or fragments of works of art, does not provide the necessary information. After a first attempt at describing the visual content with MPEG-7 features, we resorted to domain-specific features. We identified the following features as relevant for representing the strip visual content: line spacing, font type, number of lines of text, position of the first line of text and position of the last line of text, which are expedient when the original document contains printed text; a squared-paper index, useful in the case of notebook paper; presence of a marker, ink colour, paper colour, text edge energy and strip border, useful for both handwritten and printed text. We developed the algorithms that automatically extract each of these features from the strip digital image. The algorithms are specifically designed to take into account the peculiarities of shredded strips. On the basis of the features, strips can be grouped in such a way that the strips belonging to the same page of the original documents are assigned to the same group and there are ideally as many groups as there were pages. A hierarchical clustering algorithm has been used for this purpose. The number of groups to be found is automatically selected by the algorithm within an interval provided by the user. The clustering is effective in improving the performance of a computer-aided reconstruction. Moreover, clustering reduces the computational time required for the on-line interaction with the human operator. The computer-aided reconstruction is modelled as an image retrieval task: the user selects one strip, and the ones most similar to it are retrieved (ordered by decreasing similarity) and shown on the monitor. Among them, the user recognizes the correctly matching strips and virtually glues them together. The process is repeated iteratively until the reconstruction has been accomplished. In a fully automatic reconstruction scenario, the correctly matching strips have to be detected by the computer. The information contained in the strip borders, along which the matching is performed, is exploited; namely, the grey-level pixel appearance on the right (or left) strip border is used. The problem is modelled as a combinatorial optimization problem, and its NP-completeness is demonstrated. Since it is NP-complete, suboptimal algorithms must be devised for its solution.
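A minimal sketch of the grouping step described above, assuming each strip has already been reduced to a numeric feature vector; the feature names and values below are hypothetical. Hierarchical clustering groups the strips and a simple score selects the number of groups within a user-provided interval.

```python
# Sketch: group shredded strips by source page via hierarchical clustering of
# per-strip feature vectors. Features and values are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

# one row per strip: [line_spacing, squared_paper_index, ink_hue, edge_energy]
features = np.array([
    [12.1, 0.0, 0.55, 0.31],
    [11.9, 0.0, 0.54, 0.29],
    [ 8.0, 1.0, 0.12, 0.40],
    [ 8.2, 1.0, 0.10, 0.43],
])

Z = linkage(features, method="average", metric="euclidean")

# try every group count in the user-provided interval, keep the best partition
best = None
for k in range(2, 4):
    labels = fcluster(Z, t=k, criterion="maxclust")
    score = silhouette_score(features, labels)
    if best is None or score > best[0]:
        best = (score, k, labels)

print("estimated number of pages:", best[1], "strip assignment:", best[2])
```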
First, a local matching algorithm is proposed: given a piece, the correctly matching one on its right is the one whose left border is the most similar to the given strip's right border. Errors may occur, since the border is noisy due to both the shredding and the digitization processes. A global solution is thus explored, and the problem is modelled as an Assignment Problem: each left border must be assigned a right border, in such a way that the overall similarity is maximized. In conclusion, the original contributions of this thesis concerning the shredded document reconstruction problem are the following: 1. the characterization of the problem; 2. the design of a number of numerical low-level features describing the strip visual content, extracted automatically by the computer; 3. the definition of an algorithm for grouping the strips belonging to the same page; 4. the modelling of the problem as a combinatorial optimization problem and the definition of polynomial sub-optimal algorithms for its automatic solution. The second problem studied during this PhD is the enhancement of poorly or non-uniformly illuminated images and videos. Both low-dynamics and high-dynamics images have been considered. In the latter case, the enhancement is combined with a dynamics reduction, as explained below. High-dynamics images are images that span a large range of luminosity. The Human Visual System has a high-dynamics behaviour: when looking towards a window from indoors we are able to distinguish both the internal and the external details. Common acquisition devices lack this capability, and the resulting pictures can be saturated to white in the outdoor part or too dark in the indoor part. Techniques for the acquisition of high-dynamics images exist. They consist in combining several pictures of the same scene taken with different exposure settings, or in using high-dynamics sensors, such as the logarithmic CMOS sensor or the linear-logarithmic CMOS sensor. However, common display devices have a low dynamics, and a dynamics reduction needs to be performed for visualization. The algorithm for dynamics reduction considered in this thesis is the Locally Adaptive dynamics Reduction algorithm (LARx family). With respect to the existing literature, it has the advantage of being computationally light and thus suitable for real-time applications such as video surveillance and vehicle driving assistance. Like many other image enhancement algorithms, the LAR algorithm is based on the Retinex theory, which states that when we observe an object the image formed in our eye is the product of the illumination and of the object reflectance. It is the illumination that can present a high dynamics, while the reflectance corresponds to the object details and has a low dynamics. Therefore, to enhance the images it suffices to compress the dynamics of the illumination, while keeping unchanged or enhancing the reflectance. The separation of the image into reflectance and illumination is, however, an ill-posed problem, and various solutions have been proposed in the literature. The LAR algorithms estimate the illumination using an edge-preserving smoothing filter. Their implementation by a Recursive Rational Filter results in a computationally light operator. In surveillance, the problem to be solved is the acquisition under low luminosity.
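As a side note to the Retinex-based approach just outlined, here is a minimal sketch of the illumination/reflectance decomposition with illumination compression; the thesis uses an edge-preserving Recursive Rational Filter to estimate the illumination, whereas a plain Gaussian blur is used below purely as a stand-in, and all parameter values are placeholders.

```python
# Sketch of Retinex-style dynamic-range reduction: estimate the illumination
# with a smoothing filter, compress it, and keep the reflectance untouched.
# A Gaussian blur replaces the edge-preserving filter used in the thesis.
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_dynamics(image, sigma=15.0, gamma=0.4, eps=1e-6):
    image = image.astype(np.float64) + eps
    illumination = gaussian_filter(image, sigma=sigma)    # smooth estimate of L
    reflectance = image / illumination                    # R = I / L
    compressed_l = illumination.max() * (illumination / illumination.max()) ** gamma
    return reflectance * compressed_l                     # recombine R and compressed L

hdr = np.random.rand(64, 64) * 1e4                        # toy high-dynamics frame
ldr = compress_dynamics(hdr)
print("output range:", ldr.min(), "-", ldr.max())
```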
The LAR algorithm for video sequences (LARS) has been optimized for this application, where the number of algorithm parameters to be set by the non-expert user should be small, and real-time processing may be necessary. Since cameras are quite noisy at low luminosity, and the noise becomes even more visible after the enhancement process, we developed a different version of the algorithm to handle this problem. In vehicle driving assistance, a high-dynamics camera may be mounted on the vehicle rear mirror. The high-dynamics sensor is particularly suitable for this application because the illumination can change very suddenly while the car is moving, for example when entering a tunnel or in case of direct sunlight. With the idea of processing the videos directly on the camera, a low-cost hardware implementation of the LARS algorithm, tailored to FPGAs, has been developed. Moreover, temporal consistency has been taken into careful consideration to avoid annoying flicker. Though many algorithms for dynamics reduction exist in the literature, the problem of objectively assessing their performance is still unsolved. Usually a subjective qualitative evaluation is performed, strongly tying the algorithm performance to the personal taste of the observer. We developed two novel quality measures, namely a tool based on the co-occurrence matrix and a local contrast measure. Both take as a reference the high-dynamics image and regard as good the algorithms whose output is a low-dynamics image with similar characteristics. The co-occurrence matrix tool describes the spatial distribution of high- and low-dynamics images by means of a visual representation as well as of a number of numerical features. The second measure focuses on the local contrast: based on the local contrast, noise, details and homogeneous parts can be separated, and an algorithm able in particular to enhance the details part is considered to perform well according to this measure. A third methodology for assessing the performance of enhancement algorithms is to set up an experimental environment where the luminosity can be varied. The same scene can then be acquired under good luminosity (reference images) or under poor lighting conditions (images to be processed by the algorithms). In this case a good algorithm, given the badly illuminated images, would output images with a low distance from the reference image. To summarize, the main original contributions of this thesis in the field of image and video enhancement are the following: 1. the LARS algorithm has been improved for surveillance and vehicle driving assistance applications (and an FPGA implementation has been proposed); 2. three novel objective quality measures to assess the performance of dynamics reduction algorithms have been developed.
- Publication: SOLUZIONI INNOVATIVE E RIVESTIMENTI NANOSTRUTTURATI IN FILM FLESSIBILI AD ALTA BARRIERA (2007-05-24)
MARRAS, LUIGI; SBAIZERO, ORFEO
Objective and strategy. The work carried out during the Doctorate focused on the investigation, study and development of new applicative technologies for the production of thin coatings capable of conferring high-level properties on the standard materials used in flexible packaging, especially as regards the barrier against gases and vapours. The work was performed paying particular attention to the real feasibility and practical applicability of the processes and methodologies developed during the research, in view of obtaining an extremely promising product, aimed in particular at replacing the aluminium foil used in multilayer packaging. It starts from a broad analysis of the technologies currently available on the flexible packaging market, of the related problems and of the near-term prospects for their improvement, and from the development of possible new competitive products, candidates, as mentioned, mainly for the replacement of aluminium foil. Together with companies operating in the sector (Metalpack S.r.l. above all, and Poligrafica Veneta S.r.l.) and in collaboration with Italian and foreign research institutes (VITO - Flemish Institute for High Technology, Mol, Belgium; Department of Materials and Environmental Engineering, University of Modena and Reggio Emilia; Materials Department, Oxford University, Oxford, United Kingdom), a path was progressively worked out for the development of highly innovative products and processes with promising performance. The methodological process that guided the research is articulated in three phases developed in parallel: the identification of the problems and the choice of the right materials on which to act, for which wide margins of improvement were identified; the optimization of the technologies existing today; and the development and application of the latest innovative technologies. This last part, in particular, received special attention through the study of nanocomposite and nanostructured thin films and of innovative coating technologies such as those based on atmospheric-pressure plasma. At the end of the work, a product development strategy with extremely promising characteristics was identified, capable of representing a truly significant solution for the flexible packaging market. This product has raised strong interest among companies in the sector. Particular attention in the development of the work was paid to the packaging market in the food field; the results obtained in the study of the various technologies addressed, and the products obtained with them, nevertheless offer important solutions for many other application sectors, from the nearby field of pharmaceutical or medical packaging to the fields of electronics (OLEDs, plasma displays) and biomedicine.
The objective was achieved through the development of flexible packaging based on a common plastic material (low-density polyethylene, LDPE), suitably functionalized, optimally metallized, treated with atmospheric plasma and coated with an extremely thin nanostructured barrier layer and with an organic lacquer based on polyvinyl alcohol with high barrier properties; this solution meets the most severe standards in terms of gas and vapour barrier and offers excellent workability at low cost (it does not require additional processes), making it a candidate, for example, as an optimal choice for the replacement of aluminium in multilayer packaging. This dissertation. The exposition of this work follows the development of the activities and the logical process that guided them. It is articulated in four distinct sections, each in turn subdivided into chapters covering the individual problems and experiences addressed: Scenario; Study and optimization of metallized barrier films; Thin and nanostructured coatings; Conclusions. The first section introduces the world of flexible packaging, with particular regard to the food field. It deals with the problems connected with the present flexible packaging market, in particular for food, and with the possible solutions for obtaining products capable of improving performance in terms of food preservation, also through processes and materials that are actually usable and cost-effective. The result of this phase is the identification of the replacement of aluminium foil in multilayer films as the primary target for a competitive product, and of the realization of suitably optimized and improved ultra-barrier metallized flexible films as the means to reach this objective. The following section is therefore devoted to the optimization of the properties and characteristics of common metallized films; this was done by studying the characteristics of aluminium-metallized films, investigating in particular the main characteristics of thin aluminium films (gas permeation and, especially, the mechanical and adhesion properties) and evaluating methodologies for their improvement (vacuum plasma pre-treatment and atmospheric plasma post-treatment). The topics treated in the chapters of the third section were developed with the intent of investigating the possible solutions offered by modern coating technologies and by the most promising materials, capable of bringing such films to the required performance; in particular, deposition methods and innovative nanocomposite and nanostructured materials were studied and applied for this purpose, in the form of ultra-thin multilayers (through the alternating deposition of organic and inorganic films), of nanocomposite coatings (through the realization of coatings containing dispersions of lamellar fillers with nanometric dimensions) and of thin nanostructured hybrid organic-inorganic coatings, obtained with standard technologies and with the newest technologies based on the use of atmospheric-pressure plasma. Particular attention was devoted to this last class of experiments, judged to be the most promising.
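As an aside on barrier performance, a common way to reason about a multilayer structure like the one described above is the ideal-laminate (series) model, in which the reciprocal transmission rates of the layers add; the sketch below uses this model with purely illustrative layer values, not measurements from the thesis.

```python
# Sketch: ideal-laminate estimate of the oxygen transmission rate (OTR) of a
# multilayer film; for layers in series, 1/OTR_total = sum_i (1/OTR_i).
# Layer values are illustrative placeholders, not thesis data.
layers_otr = {                      # cm^3 / (m^2 * day * bar)
    "LDPE substrate": 4000.0,
    "Al metallization": 1.0,
    "plasma-deposited pre-layer": 5.0,
    "PVA barrier lacquer": 2.0,
}

otr_total = 1.0 / sum(1.0 / otr for otr in layers_otr.values())
print(f"estimated OTR of the laminate: {otr_total:.3f} cm^3/(m^2 day bar)")
```

In this simple model the best layers dominate: the laminate value stays below the lowest individual OTR, which is why adding even a very thin high-barrier coating on a permeable substrate can cut the overall transmission rate dramatically.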
The concluding section was written with the objective of translating the results of the research into the development of a process and of a possible product able to reach the desired performance in an actually feasible way; the conclusion reached in this phase is the combination of standard technologies, such as aluminium metallization and coating with high-performance organic materials (PVA), of new materials (an optimized plastic substrate) and of innovative technologies, such as an atmospheric-plasma post-treatment of the metallized film to increase its surface tension and a highly functional ultra-thin coating produced by atmospheric-plasma-assisted polymerization acting as a pre-layer before the high-barrier coating. The result of this last solution, in terms of oxygen and water-vapour barrier, is shown in the diagram accompanying the thesis.
- Publication: MODELLING THE SELF-ASSEMBLY OF SUPRAMOLECULAR NANOSTRUCTURES ADSORBED ON METALLIC SUBSTRATES (2007-05-30)
COMISSO, ALESSIO; DE VITA, ALESSANDRO
The term Nanotechnology is used to describe a variety of techniques to fabricate materials and devices at the nanoscale. Nano-techniques include those used for the fabrication of nanowires, those used in semiconductor fabrication such as deep ultraviolet and electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition and molecular vapor deposition, and those based on molecular self-assembly. All these methods are still being developed and not all of them were devised with the sole purpose of creating devices for nanotechnology. A number of physical phenomena become noticeably pronounced as the system size decreases. These include statistical effects, as well as quantum effects, where the electronic properties of solids are altered if the particle size is greatly reduced. There are also effects which never come into play when going from macro to micro dimensions, but become dominant when the nanometre scale is reached. Nanotechnology can thus be thought of as an extension of traditional disciplines towards the explicit consideration of all these effects; conversely, traditional disciplines can be re-interpreted as specific applications of nanotechnology. Broadly speaking, nanotechnology is the synthesis and application of ideas from science and engineering towards the understanding and production of novel materials and devices with atomic-scale control. Modern synthetic chemistry has reached the point where it is possible to prepare small molecules of almost any (stable) structure, and methods exist today to produce a wide variety of useful chemicals. A branch of nanotechnology, relevant to the present thesis work, looks for methods to assemble single molecules into supramolecular assemblies arranged in a well-defined manner. These approaches use molecular self-assembly and supramolecular chemistry to automatically arrange the single molecules into interesting and potentially useful structures. The scanning tunneling microscope (STM) is a non-optical microscope that scans an electrical probe (the tip) over a conductive surface to be imaged. It allows scientists to visualize regions of high electron density at the atomic scale, and hence to infer the position of individual atoms and molecules on a material surface. STM is especially suited to the study of the self-assembly of molecules deposited on conductive substrates because it provides direct insight into the assembled structures. However, STM images are often insufficient for a complete description of the phenomena, and computer simulations offer a complementary approach that can effectively complement the experiments. The theoretical investigation of molecular self-assembly aims at understanding the mechanisms involved in the formation of the assemblies. In particular, atomistic simulation can provide information on the geometry of the stable structures, on the nature and strength of the interactions, as well as on the dynamical processes. In this thesis, a combination of first-principles and classical molecular dynamics simulations is used to shed light on the self-assembly of some organic molecules deposited on noble metal substrates. Three cases are discussed: the self-assembly of TMA and of BTA molecules on Ag(111), and the self-assembly of an oxalic amide derivative on Au(111).
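As a purely illustrative aside to the energy balance analysed in this case study (attractive hydrogen bonding competing with electrostatic repulsion between deprotonated, negatively charged molecules), the toy estimate below compares the two contributions; the charges, separation, hydrogen-bond energy and screening factor are all assumed placeholder values, not results from the thesis.

```python
# Toy energy balance: Coulomb repulsion between two singly deprotonated
# molecules vs. an attractive hydrogen-bond contribution. All numbers are
# illustrative placeholders; the screening factor stands in for the
# depolarisation/image-charge effect of the metal substrate.
K_E = 8.9875e9            # Coulomb constant [N m^2 / C^2]
E_CHARGE = 1.602e-19      # elementary charge [C]
EV = 1.602e-19            # 1 eV in joules

def coulomb_ev(q1_e, q2_e, r_nm):
    """Coulomb energy in eV between point charges (in units of e) at r_nm."""
    return K_E * (q1_e * E_CHARGE) * (q2_e * E_CHARGE) / (r_nm * 1e-9) / EV

repulsion = coulomb_ev(-1.0, -1.0, 1.0)      # two anions about 1 nm apart
screening = 0.2                              # assumed substrate screening
h_bond = -0.6                                # assumed H-bond energy [eV]

net = screening * repulsion + h_bond
print(f"bare repulsion {repulsion:+.2f} eV, screened {screening * repulsion:+.2f} eV, "
      f"H-bond {h_bond:+.2f} eV -> net {net:+.2f} eV")
```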
When TMA and BTA molecules are deposited onto a silver surface at a temperature lower than room temperature, they form a regular 2D honeycomb network featuring double hydrogen bonds between carboxylic groups. Even though this bonding makes the network very stable, when these systems are annealed at higher temperatures they undergo irreversible phase transitions into closer-packed supramolecular arrangements. Namely, TMA has a transition from the honeycomb to a high-coverage “quartet” structure, while BTA has two transitions: from the honeycomb to unidimensional stripes and from these to a close-packed monolayer. A combination of experimental and theoretical techniques allowed us to identify the stepwise deprotonation of the carboxylic acid groups as the driving force behind the phase transitions. Our theoretical investigation targeted the electrostatic interaction involved in the formation of the various phases, revealing that a depolarisation of the molecular ions occurs as a consequence of the deprotonation process. Therefore, the repulsive contribution arising from the interaction of negatively charged molecules can be overcome by the attractive hydrogen-bond interaction involving the deprotonated carboxylic groups, resulting in a stable close-packed arrangement. Rather remarkably, this exemplifies how higher-coverage phases can be obtained at each step of a series of phase transitions in a supramolecular assembled system, despite the increasing temperature and the increasing electrostatic repulsive energy cost accompanying deprotonation. The oxalic amide derivative molecules arrange themselves in linear chains both in the molecular solid and when adsorbed on a gold surface. However, the intermolecular distance and the geometry of the chains are different in the two cases. Various relaxed bonding structures between molecules in the chains have been calculated from first principles in the present work. The rationale for the different linkage behaviour between molecules in the two situations has also been investigated: the interaction with the substrate appears to be the main cause of the particular rearrangement observed in the chains. Both experimental observations and theoretical predictions indicate that a conformational change involving the rotation of the phenyl rings of the monomers is necessary for chain formation.
- Publication: Il laser scan a supporto delle analisi geologiche e geomorfologiche (Università degli studi di Trieste, 2008-03-03)
Potleca, Michele; Cucchi, Franco; Zini, Luca
This work presents a general overview of laser scanning techniques, focusing on the systems used in this research and highlighting the physical, technical and application issues relevant to environmental analyses. The methodology adopted for the surveys is then described, together with the procedures and problems involved in processing LIDAR data and with their analysis from an applied geological point of view, illustrating some of the numerous case studies addressed during the research. Several laser scanning instruments were tested, both terrestrial and helicopter-borne. The acquisition methodology, the data processing procedures and techniques, and the methods for analysing and modelling the point clouds are described. For the entire processing phase, terrestrial (TLS) and airborne (ALS) survey techniques were compared. The main steps followed were: 1) design of the scan positions for TLS surveys, or flight planning for airborne surveys; 2) sensor calibration; 3) data acquisition; 4) generation of the point clouds; 5) alignment and targets; 6) georeferencing; 7) filtering; 8) classification; 9) optimization of the point clouds; 10) file production and modelling. The greatest attention was paid to the processing and modelling of the laser data, since these steps are of fundamental importance for the subsequent geological and geomorphological analyses. In the first two years of the research, twelve laser surveys were carried out for geological, geomorphological and hydraulic-modelling purposes, eleven in Friuli Venezia Giulia and one in Veneto. The sites were identified and selected according to their representativeness in terms of hazard and associated risk, the availability of previous information on the ongoing phenomena, and the type of phenomena involved. Subsequently, it was decided to limit the study to the various types of landslide, selecting the five most representative cases; for these, a detailed study was conducted and the instruments and computational codes most suitable for the scenarios under investigation were adapted and tuned. The applications tested on the different landslides gave encouraging results, showing in any case that the future of the laser technique lies in its integration with photogrammetric methods, which provide information complementary to that obtained from laser surveys. It was demonstrated that displacements, morphological changes and deformations in areas affected by landslides can be evaluated by comparing two DTMs, or a DTM and a point cloud, even where vegetation cover is dense or access is difficult. The use of TLS and ALS surveys, and in particular the analysis of multi-temporal DTM series, proved to be an essential tool for monitoring the temporal evolution of complex landslide phenomena.
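A minimal sketch of the multi-temporal DTM comparison mentioned above: two co-registered elevation grids are differenced cell by cell to map elevation change and to estimate eroded and deposited volumes; the grids and cell size below are toy placeholders.

```python
# Sketch: estimate landslide volume change by differencing two co-registered
# DTMs (regular elevation grids). Grid values and cell size are placeholders.
import numpy as np

cell_size = 1.0                                   # grid resolution [m]
dtm_before = np.random.rand(100, 100) * 5 + 400   # elevations [m], toy data
dtm_after = dtm_before.copy()
dtm_after[40:60, 40:60] -= 2.0                    # simulated material loss

dz = dtm_after - dtm_before                       # elevation change map
eroded = -dz[dz < 0].sum() * cell_size**2         # removed volume [m^3]
deposited = dz[dz > 0].sum() * cell_size**2       # accumulated volume [m^3]
print(f"eroded: {eroded:.1f} m^3, deposited: {deposited:.1f} m^3")
```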
The digital approach made it possible to produce surface models quickly and continuously, as reported in the case studies, to carry out comparisons and quantitative evaluations of the volumes involved in the landslides, and to follow the morphological evolution of the slope in order to provide parameters for assessing its stability.
- Publication: New strategies for efficient and practical genetic programming (Università degli studi di Trieste, 2008-03-18)
Fillon, Cyril; Bartoli, Alberto
In the last decades, engineers and decision makers have expressed a growing interest in the development of effective modelling and simulation methods to understand or predict the behaviour of many phenomena in science and engineering. Many of these phenomena are translated into mathematical models for convenience and to allow an easy interpretation. Methods commonly employed for this purpose include, for example, Neural Networks, Simulated Annealing, Genetic Algorithms, Tabu search, and so on. These methods all seek the optimal or near-optimal values of a predefined set of parameters of a model built a priori, so a suitable model must be known beforehand. When the form of this model cannot be found, the problem can be seen from another level, where the goal is to find a program or a mathematical representation which can solve the problem. According to this idea, the modelling step is performed automatically thanks to a quality criterion which drives the building process. In this thesis we focus on the Genetic Programming (GP) approach as an automatic method for creating computer programs by means of artificial evolution, based upon the original contributions of Darwin and Mendel. While GP has proven to be a powerful means of coping with problems in which finding a solution and its representation is difficult, its practical applicability is still severely limited by several factors. First, the GP approach is inherently a stochastic process, which means there is no guarantee of obtaining a satisfactory solution at the end of the evolutionary loop. Second, the performance on a given problem may depend strongly on a broad range of parameters, including the number of variables involved, the quantity of data for each variable, the size and composition of the initial population, the number of generations, and so on. When one uses Genetic Programming to solve a problem, there are two expectations: on the one hand, to maximize the probability of obtaining an acceptable solution and, on the other hand, to minimize the amount of computational resources needed to reach it. We first present innovative and challenging applications related to several fields of science (computer science and mechanical science) which contributed greatly to the experience gained in the GP field. We then propose new strategies for improving the performance of the GP approach in terms of efficiency and accuracy, and we evaluate our approach on a large set of benchmark problems in three different domains. Furthermore, we introduce a new GP-based approach dedicated to the symbolic regression of multivariate data sets whose underlying phenomenon is best characterized by a discontinuous function. These contributions aim to provide a better understanding of the key features and of the underlying relationships which make enhancements successful in improving the original algorithm.
- Publication: Techniques for large-scale automatic detection of web site defacements (Università degli studi di Trieste, 2008-03-18)
Medvet, Eric; Bartoli, Alberto
Web site defacement, the process of introducing unauthorized modifications to a web site, is a very common form of attack. This thesis describes the design and experimental evaluation of a framework that may constitute the basis for a defacement detection service capable of monitoring thousands of remote web sites systematically and automatically. With this framework an organization may join the service by simply providing the URL of the resource to be monitored along with the contact point of an administrator. The monitored organization may thus take advantage of the service with just a few mouse clicks, without installing any software locally or changing its own daily operational processes. The main proposed approach is based on anomaly detection and allows monitoring the integrity of many remote web resources automatically while remaining fully decoupled from them; in particular, it does not require any prior knowledge about those resources. During a preliminary learning phase a profile of the monitored resource is built automatically. Then, while monitoring, the remote resource is retrieved periodically and an alert is generated whenever something "unusual" shows up. The thesis discusses the effectiveness of the approach in terms of accuracy of detection---i.e., missed detections and false alarms. The thesis also considers the problem of misclassified readings in the learning set. The effectiveness of the anomaly detection approach, and hence of the proposed framework, rests on the assumption that the profile is computed from a learning set which is not corrupted by attacks; this assumption is often taken for granted. The influence of learning set corruption on the effectiveness of the framework is assessed, and a procedure aimed at discovering whether a given unknown learning set is corrupted by positive readings is proposed and evaluated experimentally. An approach to automatic defacement detection based on Genetic Programming (GP), an automatic method for creating computer programs by means of artificial evolution, is also proposed and evaluated experimentally. Moreover, a set of techniques that have been used in the literature for designing several host-based or network-based Intrusion Detection Systems are considered and evaluated experimentally, in comparison with the proposed approach. Finally, the thesis presents the findings of a large-scale study on the reaction time to web site defacement. Several statistics indicate the number of incidents of this sort, but a crucial piece of information is still lacking: the typical duration of a defacement. A two-month monitoring activity was performed on more than 62000 defacements in order to determine whether and when a reaction to the defacement takes place. It is shown that this time tends to be unacceptably long---in the order of several days---and with a long-tailed distribution.
- Publication: Performance control of internet-based engineering applications (Università degli studi di Trieste, 2008-03-18)
Vercesi, Paolo; Bartoli, Alberto
Thanks to technologies capable of simplifying the integration between remote programs hosted by different organizations, the scientific and engineering communities are adopting service-oriented architectures to aggregate, share and distribute their computing resources, to manage large amounts of data and to run simulations over the Internet. Web Services, for example, allow an organization to expose the functionality of its systems on the Internet and to make it discoverable and accessible in a controlled way. This technological progress can enable new applications also in the area of design optimization. Current design optimization systems are usually confined within a single organization or department. On the other hand, modern manufactured products are assemblies of components coming from different organizations. By composing the services of the organizations involved, a workflow can be created that describes the model of the composed product. This composed service can in turn be used by an inter-organization optimization system. The design trade-offs that are implicitly incorporated in local architectures must be reconsidered when these systems are deployed at a global scale on the Internet. For example: i) the quality of the connections between nodes can vary unpredictably; ii) third-party nodes retain full control of their resources, including, for example, the right to reduce those resources temporarily and unpredictably. From the point of view of the system as a single entity, one would like to maximize performance, that is, for example, the throughput expressed as the number of candidate designs evaluated per unit of time. From the point of view of the organizations participating in the workflow, one would instead like to minimize the cost associated with each evaluation. This cost can be an obstacle to the adoption of the distributed paradigm, because the participating organizations share their resources (CPU, connections, bandwidth and software licenses) with other, potentially unknown, organizations. Minimizing this cost, while keeping the performance delivered to clients at an acceptable level, can be a powerful factor in encouraging organizations to actually share their resources. The scheduling of workflow instances, i.e. deciding when and where to execute a given workflow, in such a multi-organization, multi-tier and geographically dispersed environment has a strong impact on performance. This work investigates some of the essential performance and cost problems related to this new scenario. To address the identified problems, an adaptive admission control system placed in front of the workflow engine is proposed, which limits the number of concurrent executions. This proposal can be implemented very simply: it treats services as black boxes and does not require any interaction from the participating organizations. The technique has been evaluated over a wide range of scenarios through discrete-event simulation.
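A minimal sketch of the kind of adaptive admission control described above: a bounded pool of execution slots in front of the workflow engine, whose size is adjusted according to the completion times observed on recent workflow instances. The thresholds and the engine call are hypothetical placeholders, not the controller evaluated in the thesis.

```python
# Sketch: adaptive admission control in front of a workflow engine. The
# concurrency limit grows when recent executions are fast and shrinks when
# they lag behind a target latency. All thresholds are placeholders.
import time
from collections import deque

class AdaptiveAdmission:
    def __init__(self, limit=4, target_latency=2.0):
        self.limit = limit                  # current max concurrent executions
        self.running = 0
        self.target = target_latency        # desired completion time [s]
        self.recent = deque(maxlen=20)      # sliding window of latencies

    def try_admit(self):
        if self.running < self.limit:
            self.running += 1
            return True
        return False                        # caller queues or rejects the request

    def on_complete(self, latency):
        self.running -= 1
        self.recent.append(latency)
        avg = sum(self.recent) / len(self.recent)
        # widen the gate when the engine is fast, shrink it when it lags
        if avg < 0.8 * self.target:
            self.limit += 1
        elif avg > 1.2 * self.target and self.limit > 1:
            self.limit -= 1

ctrl = AdaptiveAdmission()
if ctrl.try_admit():
    start = time.time()
    # run_workflow_instance()  # hypothetical call into the workflow engine
    ctrl.on_complete(time.time() - start)
```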
The experimental results suggest that this technique can provide significant benefits, guaranteeing high throughput levels and low costs.
- Publication: Beamforming techniques for wireless communications in low-rank channels: analytical models and synthesis algorithms (Università degli studi di Trieste, 2008-03-18)
Comisso, Massimiliano; Mania', Lucio; Vescovo, Roberto
The objective of this thesis is to discuss the application of multiple antenna technology in selected areas of wireless networks and fourth-generation telecommunication systems. The original contributions of this study mainly involve two research fields in the context of the emerging solutions for high-speed digital communications: the mathematical modelling of distributed wireless networks adopting advanced antenna techniques, and the development of iterative algorithms for antenna array pattern synthesis. The material presented in this dissertation is the result of three years of study performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the course of the Doctorate in Information Engineering. In recent years wireless communication systems have experienced an enormous increase in traffic, due to a significant growth in the number of users as well as to the development of new high-bit-rate applications, and it is foreseen that this trend will be confirmed in the near future. This challenging scenario involves not only the well-established market of cellular systems, but also the field of emerging wireless technologies, such as WiMAX (Worldwide interoperability for Microwave Access) for wireless metropolitan area networks and Wi-Fi (Wireless Fidelity) for wireless local area networks, mobile ad-hoc networks and wireless mesh networks. The rapid diffusion of architectures adopting an ad-hoc paradigm, in which the network infrastructure is totally or partially absent and which can be deployed using low-cost self-configuring devices, has further enlarged the number of systems that have to coexist within a limited frequency spectrum. In such an evolving environment, the development of interference mitigation methods to guarantee communication reliability, and the implementation of proper radio resource allocation schemes for managing user mobility and for supporting multimedia and high-speed applications, represent the most relevant topics. Classic approaches focus on the use of the time-frequency resources of the propagation channel. However, to satisfy the increasing demand for network capacity while guaranteeing at the same time the necessary quality of the offered services, operators and manufacturers must explore new solutions. In this scenario, the exploitation of the spatial domain of the communication channel by means of multiple antenna systems can be a key improvement for enhancing the spectral efficiency of wireless systems. In a rich scattering environment, the use of multiple antennas enables the adoption of diversity and spatial multiplexing techniques for mitigating and, respectively, exploiting multipath fading effects. In propagation environments characterized by small angular spreads, the combination of antenna arrays and beamforming algorithms makes it possible to suppress the undesired sources and to receive the signals arriving from the desired ones. This leads to an increase of the signal-to-interference-plus-noise ratio at the receiver that can be exploited to produce relevant benefits in terms of communication reliability and/or capacity. A proper design of the medium access control layer of the wireless network can enable the simultaneous exchange of packets between different node pairs as well as the simultaneous reception of packets from multiple transmitters at a single node.
Switched-beam antennas, adaptive antennas (also referred to as smart antennas) and phased antenna arrays represent some of the available beamforming techniques that can be applied to increase the overall system capacity and to mitigate the interference, in a scenario where several different technologies must share the same frequency spectrum. In the context of distributed wireless networks using multiple antenna systems, the core of this thesis is the development of a mathematical model to analyse the performance of the network in the presence of multipath fading, with particular reference to a scenario in which the signal replicas arriving at the receiver are confined within a small angle and are characterized by small relative delays. This propagation environment, referred to as low-rank, is the typical operating scenario of smart antennas, which require channels with high spatial correlation to work properly. The novel aspects of this study are the theoretical and numerical modelling of sophisticated adaptive antennas in conjunction with a detailed description of the channel statistics and of the IEEE 802.11 medium access control scheme. A theoretical model providing a more realistic perspective is desirable, considering that, at present, not only cost and competition issues, but also overly optimistic expectations, as compared to the first measurements in the field, have induced wireless operators to delay the adoption of smart antenna technology. The presented analysis includes the most relevant elements that can influence the network behaviour: the spatial channel model, the fading statistics, the network topology, the access scheme, the beamforming algorithm and the antenna array geometry. This last aspect is investigated numerically, considering that the size of the user terminal places a strict constraint on the number of antennas that can be deployed on the device, so that maximizing the performance becomes related to the geometrical distribution of the radiators. In ad-hoc and mesh networks, the typical communication devices, such as laptops, palmtops and personal digital assistants, require compact and cheap antenna structures as well as beamforming algorithms that are easy to implement. In particular, the low-cost characteristics have guaranteed a wide popularity to wireless mesh technology, which has encouraged the birth of a new social phenomenon, known as wireless community networks, whose objective is the reduction of the Internet access cost. The adoption of multi-antenna systems is the purpose of the IEEE 802.11n amendment, which, however, does not consider modifications of the medium access control layer and therefore provides higher bit rates for the single link but does not allow simultaneous communications between different pairs of nodes. This aspect must be taken into account together with the fact that, nowadays, IEEE 802.11x represents the leading family of standards for wireless local communications, and enhancement proposals have to pay careful attention to backward compatibility issues. The mathematical model presented in this thesis discusses the parameter settings suitable for exploiting advanced antenna techniques in 802.11-based networks when the access scheme supports multiple simultaneous communications, while maintaining a realistic description of the antenna patterns and of the channel behaviour. The presentation of two new iterative algorithms for antenna array pattern synthesis represents the core of the last part of this dissertation.
The proposed solutions are characterized by implementation simplicity and low computational burden, and do not require the modification of the excitation amplitudes of the array elements. These advantages make the presented algorithms suitable for a wide range of communication systems, while also matching the inexpensiveness of mesh and ad-hoc devices. In particular, phase-only synthesis techniques allow the adoption of cheaper hardware, including only phase shifters, which are available at a reasonable price, while avoiding the use of more expensive power dividers. The first algorithm employs the spatial statistics of the channel to properly place the pattern nulls, in order to suppress the undesired power arriving from a given angular interval. This solution exploits the improved knowledge of the spatial properties of the propagation environment to enhance the interference suppression capabilities at the transmitter and receiver sides. The second algorithm is a phase-only technique able to generate multiple nulls towards the undesired directions and multiple main lobes towards the desired ones. This method makes it possible to perform spatial multiplexing while adopting low-cost electronic components. The thesis is organized in three parts. The first provides the background material and represents the basis of the following arguments, while the other two parts are dedicated to the original results developed during the research activity. With reference to the first part, the fundamentals of antenna array theory are briefly summarized in the first chapter. The most relevant aspects of the wireless propagation environment are described in the second chapter, focusing on the characteristics of the spatial domain in a low-rank scenario. The third chapter presents a classification of the different multiple antenna techniques according to the channel properties and provides an overview of the most common beamforming algorithms. The fourth chapter introduces the most significant aspects of distributed wireless networks, presenting the main open issues and the current proposals for exploiting the potential offered by antenna array systems. The second part describes the original results obtained in the mathematical modelling of ad-hoc and mesh networks adopting smart antennas in realistic propagation scenarios. In particular, the fifth chapter presents the theoretical analysis used to evaluate the number of simultaneous communications that can be sustained by a distributed wireless network using adaptive antennas in the presence of multipath. The sixth chapter extends this model to switched-beam antennas, while addressing the mobility aspects and discussing the cost-benefit tradeoff related to the use of multiple antenna techniques in today's wireless networks. A detailed throughput-delay analysis is performed in the seventh chapter, where the impact of advanced antenna systems on 802.11-based networks is investigated using a Markov chain model. The influence of the antenna array geometry is examined in the eighth chapter, adopting a numerical approach based on a discrete-time simulator able to take into account the details of the channel and of the antenna system behaviour. The third part describes the original results obtained in the field of antenna array pattern synthesis.
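As an illustration of the phase-only idea referred to above (excitation amplitudes fixed to one, only the phases adjusted), the sketch below computes the array factor of a uniform linear array steered towards a desired direction; the geometry and angles are illustrative placeholders, and it does not reproduce the null-placement optimization developed in the thesis.

```python
# Sketch: array factor of a uniform linear array with phase-only weights
# (unit amplitudes), steering the main lobe towards a chosen direction.
import numpy as np

n, d = 8, 0.5                     # 8 elements, half-wavelength spacing (in λ)
theta_desired = np.deg2rad(20.0)  # desired main-lobe direction

k = 2 * np.pi                     # wavenumber in units of 1/λ
elements = np.arange(n)
phases = -k * d * elements * np.sin(theta_desired)   # phase-only excitations
weights = np.exp(1j * phases)

def array_factor(theta):
    steering = np.exp(1j * k * d * elements * np.sin(theta))
    return np.abs(np.dot(weights, steering))

thetas = np.deg2rad(np.linspace(-90.0, 90.0, 721))
af = np.array([array_factor(t) for t in thetas])
print("peak direction [deg]:", np.rad2deg(thetas[np.argmax(af)]))
```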
The ninth chapter presents the technique developed to modify the excitation phases of an antenna array in order to reject interferers spread over an angular region according to a given spatial statistic. The tenth chapter describes the iterative algorithm for phased arrays, which is able to produce low side-lobe level patterns with multiple prescribed main lobes and nulls. Finally, the eleventh chapter summarizes the thesis contributions and highlights the most important conclusions. The intent of the work presented hereafter is to examine, from a realistic perspective, the benefits that derive from the employment of smart antenna techniques, as well as to provide some useful solutions to improve the reliability of the communications and to increase the network capacity.
- Publication: 03-DPACS: an open source solution for critical PACS systems integrated in the 03 Consortium project (Università degli studi di Trieste, 2008-03-18)
Beltrame, Marco; Accardo, Agostino
The student started his work towards the PhD in 2005, joining the Bioengineering and ICT group of the University of Trieste, whose core research was in the e-health area. The personal research project conducted by the PhD student was part of the O3 Consortium research project, which aimed to propose a complete solution for the adoption of open technology in the healthcare environment. The solution should become a whole new model for e-health applications and include new products integrated with a development, support and business model. The PhD student contributed to the project by conceiving and designing all the aspects of the complete solution presented in this thesis, including the development and business models, as well as by developing the products and by writing and publishing the results of the steps of this work. He personally contributed by proposing the original idea of the support model and by designing and implementing a product to test the proposed model. The student reached an intermediate milestone in his personal project: he built a state-of-the-art server for the management of DICOM data and imaging objects (PACS, Picture Archiving and Communication System), with the aim of easing the adoption of e-health technology and of developing a product on which the model would be tested. The research was conducted in the following way: first, based on the analysis of real-world needs, literature and past experience, the O3 Consortium project software design guidelines (called "requirements" in the thesis) were defined. Then the product was designed and a new release of the PACS system was developed, implementing original solutions to best achieve all those requirements. The technological choices and the original aspects with respect to the state of the art are discussed and underlined throughout the entire thesis, such as the compliance with all the requirements and the choices for portability, project organization, standard implementation and performance. The idea of O3-DPACS being an integrated project, a system plus a support model, is also presented and discussed in the thesis. It should be remarked again that the O3 Consortium means not only software development but also new procedures in both technology and service delivery. Moreover, the PhD student performed the validation of the software and of the model, which was needed to verify the assumptions and to obtain the first results of the O3 Consortium research on e-health adoption. The originality of the O3 Consortium proposal should be noted: a complete application model for the healthcare real world based on open source software. No other open-source solution makes a complete proposal covering all the topics of development, design, software architecture, support and business opportunity. Thus, a real research interest exists in testing and validating the model.
- Publication: Multimedia over wireless IP networks: distortion estimation and applications (Università degli studi di Trieste, 2008-03-18)
D'Orlando, Marco; Vatta, Francesca; Mania', Lucio
This thesis deals with multimedia communication over unreliable and resource-constrained IP-based packet-switched networks. The focus is on estimating, evaluating and enhancing the quality of streaming media services, with particular regard to video services. The original contributions of this study mainly involve the development of three video distortion estimation techniques and the subsequent definition of some application scenarios used to demonstrate the benefits obtained by applying such algorithms. The material presented in this dissertation is the result of the studies performed within the Telecommunication Group of the Department of Electronic Engineering at the University of Trieste during the course of the Doctorate in Information Engineering. In recent years multimedia communication over wired and wireless packet-based networks has been exploding. Applications such as BitTorrent, music file sharing and multimedia podcasting are among the main sources of traffic on the Internet. Internet radio, for example, is now evolving into peer-to-peer television such as CoolStreaming. Moreover, web sites such as YouTube have made publishing videos on demand available to anyone owning a home video camera. Another challenge in the multimedia evolution is inside the home, where videos are distributed over local WiFi networks to many end devices around the house. More generally, we are witnessing an all-media-over-IP revolution, with radio, television, telephony and stored media all being delivered over wired and wireless IP networks. All these applications require very high bandwidth and often low delay, especially in the case of interactive applications. Unfortunately, the Internet and wireless networks provide only limited support for multimedia applications. Variations in network conditions can have considerable consequences for real-time multimedia applications and can lead to an unsatisfactory user experience. In fact, multimedia applications are usually delay-sensitive, bandwidth-intensive and loss-tolerant. In order to overcome these limitations, efficient adaptation mechanisms must be devised to bridge the application requirements and the characteristics of the transport medium. Several approaches have been proposed for the robust transmission of multimedia packets; they range from source coding solutions to the addition of redundancy with forward error correction and retransmissions. Other techniques are based on developing efficient QoS architectures at the network layer or at the data link layer, where routers or specialized devices apply different forwarding behaviours to packets depending on the value of some field in the packet header. Using such a network architecture, video packets are assigned to classes in order to obtain a different treatment by the network; in particular, packets assigned to the most privileged class will be lost with a very small probability, while packets belonging to the lowest-priority class will experience the traditional best-effort service. The key problem in this solution is how to optimally assign video packets to the network classes. One way to perform the assignment is to proceed on a packet-by-packet basis, exploiting the highly non-uniform distortion impact of compressed video. Working on the distortion impact of each individual video packet has been shown in recent years to deliver better performance than relying on the average error sensitivity of each bitstream element.
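A minimal sketch of the packet-by-packet assignment idea discussed above: packets carrying a larger estimated distortion impact are mapped to the more protected network classes. The impact values and the number of classes are illustrative placeholders, not output of the estimators developed in the thesis.

```python
# Sketch: map video packets to priority classes according to their estimated
# distortion impact (the distortion their loss would add at the decoder).
from dataclasses import dataclass

@dataclass
class VideoPacket:
    seq: int
    distortion_impact: float   # estimated quality loss if this packet is lost

def assign_classes(packets, n_classes=3):
    """Return {seq: class}; class 0 is the most protected."""
    ranked = sorted(packets, key=lambda p: p.distortion_impact, reverse=True)
    share = max(1, len(ranked) // n_classes)
    return {p.seq: min(i // share, n_classes - 1) for i, p in enumerate(ranked)}

packets = [VideoPacket(0, 12.3), VideoPacket(1, 0.4),
           VideoPacket(2, 5.1), VideoPacket(3, 0.9)]
print(assign_classes(packets))   # packet 0 (largest impact) goes to class 0
```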
The distortion impact of a video packet can be expressed as the distortion that would be introduced at the receiver by its loss, taking into account the effects of both error concealment and error propagation due to temporal prediction. The estimation algorithms proposed in this dissertation are able to reproduce accurately the distortion envelope deriving from multiple losses on the network, and the computational complexity required is negligible with respect to that of the algorithms proposed in the literature. Several tests are run to validate the distortion estimation algorithms and to measure the influence of the main encoder-decoder settings. Different application scenarios are described and compared to demonstrate the benefits obtained using the developed algorithms. The packet distortion impact is inserted in each video packet and transmitted over the network, where specialized agents manage the video packets using the distortion information. In particular, the internal structure of the agents is modified to allow video packet prioritization using primarily the distortion impact estimated by the transmitter. The results obtained show that, in each scenario, a significant improvement may be obtained with respect to traditional transmission policies. The thesis is organized in two parts. The first provides the background material and represents the basis of the following arguments, while the second is dedicated to the original results obtained during the research activity. In the first part, the first chapter provides an introduction to the principles and challenges of multimedia transmission over packet networks. The most recent advances in video compression technologies are detailed in the second chapter, focusing in particular on aspects that involve resilience to packet loss impairments. The third chapter deals with the main techniques adopted to protect the multimedia flow and mitigate the packet loss corruption due to channel failures. The fourth chapter introduces the more recent advances in network-adaptive media transport, detailing the techniques that prioritize the video packet flow. The fifth chapter reviews the existing distortion estimation techniques in the literature, focusing mainly on their limitations. The second part of the thesis describes the original results obtained in modelling the video distortion deriving from transmission over an error-prone network. In particular, the sixth chapter presents three new distortion estimation algorithms able to estimate the video quality and shows the results of some validation tests performed to measure their accuracy. The seventh chapter proposes different application scenarios where the developed algorithms may be used to quickly enhance the video quality at the end-user side. Finally, the eighth chapter summarizes the thesis contributions and highlights the most important conclusions. It also outlines some directions for future improvements. The intent of the entire work presented hereafter is to develop video distortion estimation algorithms able to predict the user quality deriving from losses on the network, as well as to provide the results of some useful applications able to enhance the user experience during a video streaming session. - PublicationSviluppo di metodi e apparati per la navigazione del robot e per la mappatura simultanea di ambienti non strutturati.(Università degli studi di Trieste, 2008-03-18)
;Lenac, KristijanMumolo, EnzoThe work described in this thesis has been carried out in the context of the exploration of an unknown environment by an autonomous mobile robot. It is rather difficult to imagine a robot that is truly autonomous without being capable of acquiring a model of its environment. This model can be built by the robot exploring the environment and registering the data collected with its sensors over time. In the last decades a lot of progress has been made regarding techniques focused on environments which possess a lot of structure. This thesis contributes to the goal of extending existing techniques to unstructured environments by proposing new methods and devices for mapping in real time. The first part of the thesis addresses some of the problems of ultrasonic sensors, which are widely used in mobile robotics for mapping and obstacle detection during exploration. Ultrasonic sensors have two main shortcomings leading to disappointing performance: uncertainty in target location and multiple reflections. The former is caused by the wide beam width, and the latter produces erroneous distance measurements because it introduces spikes not directly related to the target. With the aim of registering a detailed contour of the environment surrounding the robot, a sensing device was developed by focusing the ultrasonic beam of a common ultrasonic sensor to extend its range and improve the spatial resolution. The extended range makes this sensor much more suitable for mapping outdoor environments, which are typically larger. The improved spatial resolution enables the use of recent laser scan matching techniques on the sonar scans of the environment collected with the sensor. Furthermore, an algorithm is proposed to mitigate some undesirable effects and problems of the ultrasonic sensor. The method processes the acquired raw ultrasonic signal in order to obtain a reliable map of the environment. A single sonar measurement consists of a number of pulses reflected by an obstacle. Across a series of sensor readings taken at different sonar angles, the sequence of pulses reflected by the environment changes according to the distance between the sensor and the environment. This results in an image of sonar reflections that can be built by representing the reading angle on the horizontal axis and the echoes acquired by the sensor on the vertical one. The characteristics of a sonar emission result in a texture embedded in the image. The algorithm performs a 2D texture analysis of the sonar reflection image so that texture continuity is analyzed at the overall image scale, enabling its correction by restoring weak or missing reflections. The first part of the algorithm extracts geometric and semantic attributes from the image in order to enhance and correct the signal. The second part of the algorithm applies heuristic rules to find the leading pulse of the echo and to estimate the obstacle location in points where it would otherwise not be possible due to noise or lack of signal. The method overcomes inherent problems of ultrasonic sensing in the case of high irregularities and missing reflections. It is suitable for map building during mobile robot exploration missions. Its main limitation is a small coverage area; this area, however, increases during exploration as more scans are processed from different positions. Localization and mapping problems are addressed in the second part of the thesis.
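A minimal sketch, assuming a hypothetical sample rate, noise threshold and speed of sound, of how such a sonar reflection image can be assembled and how a leading echo can be extracted for each scan angle; this illustrates only the data layout described above, not the thesis algorithm itself.

import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air (assumed)
SAMPLE_RATE = 100_000.0  # Hz (assumed acquisition rate)

def reflection_image(echoes_per_angle):
    # Stack the raw echo vectors (one per scan angle) into a 2D array:
    # rows are scan angles, columns are echo samples over time.
    return np.vstack(echoes_per_angle)

def leading_edges(image, threshold):
    # For each angle, estimate the range of the first echo above the threshold.
    ranges = []
    for row in image:
        idx = int(np.argmax(row > threshold))  # first sample above threshold (0 if none)
        if row[idx] <= threshold:
            ranges.append(np.nan)              # no echo detected at this angle
        else:
            t_flight = idx / SAMPLE_RATE
            ranges.append(0.5 * SPEED_OF_SOUND * t_flight)  # halve the two-way travel
    return np.array(ranges)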
The main issue in robot self-localization is how to match sensed data, acquired with devices such as laser range finders or ultrasonic range sensors, against reference map information. In particular, scan matching techniques are used to correct the positional error accumulated by dead reckoning sensors such as odometry and inertial sensors, and thus to cancel out the effects of noise on localization and mapping. Given a reference scan taken from a known position and a new scan taken from an unknown or approximately known position, the scan matching algorithm should provide a position estimate close to the true robot position from which the new scan was acquired. A genetic optimization algorithm that solves this problem, called GLASM, is proposed. It uses a novel fitness function based on a look-up table requiring little memory to speed up the search. Instead of searching for corresponding point pairs and then computing the mean of the distances between them, as in other algorithms, the fitness is evaluated directly by matching points which, after projection onto the same coordinate frame, fall in the search window around the previous scan. It has a linear computational complexity, whereas algorithms based on correspondences have a quadratic cost. The GLASM algorithm has been compared with its closest rivals. The results of the comparison are reported in the thesis and show, in summary, that GLASM outperforms them both in speed and in matching success ratio. GLASM is suitable for implementation in feature-poor environments and is robust to high sensor noise, as is the case with the sonar readings used in this thesis, which are much noisier than laser scans. The algorithm does not place a high computational burden on the processor, which is important for real-world applications where power consumption is a concern, and it scales easily on multiprocessor systems. The algorithm does not require an initial position estimate and is suitable for unstructured environments. In mobile robotics it is critical to evaluate the above-mentioned methods and devices in real-world applications on systems with limited power and computational resources. In the third part of the thesis some new theoretical results are derived concerning open problems in the non-preemptive scheduling of periodic tasks on a uniprocessor. These results are then used to propose a design methodology, which is used in an application on a mobile robot. The mobile robot is equipped with an embedded system running a new real-time kernel called Yartek with a non-preemptive scheduler of periodic tasks. The application is described and some preliminary mapping results are presented. The real-time operating system has been developed in a collaborative work for an embedded platform based on a Coldfire microcontroller. The operating system allows the creation and running of tasks and offers dynamic management of contiguous memory using a first-fit criterion. Tasks can be real-time periodic, scheduled with non-preemptive EDF, or non-real-time. In order to improve the usability of the system, a RAM-disk is included: it is actually an array defined in main memory and managed using pointers, so its operation is very fast. The goal was to realize a small autonomous embedded system for implementing real-time algorithms for non-visual robotic sensors, such as infrared, tactile, inertial devices or ultrasonic proximity sensors.
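For the scan matching part summarized above, the following is a simplified sketch in the spirit of the GLASM fitness (not the actual GLASM implementation; grid size, resolution and pose encoding are assumptions): the reference scan is rasterized once into a boolean lookup grid, and a candidate pose is scored in linear time by counting how many new-scan points, transformed by that pose, land on occupied cells. A genetic algorithm would then maximize this score over candidate poses.

import math
import numpy as np

CELL = 0.05  # grid resolution in metres (assumed)

def build_lookup(ref_points, size=512):
    # Rasterize the reference scan into a boolean occupancy grid, built once.
    grid = np.zeros((size, size), dtype=bool)
    half = size // 2
    for x, y in ref_points:
        i, j = int(x / CELL) + half, int(y / CELL) + half
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = True
    return grid

def fitness(grid, new_points, pose):
    # pose = (tx, ty, theta); the score is the number of matched points, O(n).
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    size = grid.shape[0]
    half = size // 2
    score = 0
    for x, y in new_points:
        xr, yr = c * x - s * y + tx, s * x + c * y + ty
        i, j = int(xr / CELL) + half, int(yr / CELL) + half
        if 0 <= i < size and 0 <= j < size and grid[i, j]:
            score += 1
    return score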
The system provides the processing required by non-visual sensors without imposing a computational burden on the main processor of the robot. In particular, the embedded system described in this thesis provides the robot with the environmental map acquired with the ultrasonic sensors. Yartek has a low footprint and low overhead. In order to compare Yartek with another operating system, a porting of RTAI for Linux was performed on the Avnet M5282EVB board and testing procedures were implemented. Tests regarding context switch time, jitter and interrupt latency are reported to describe the performance of Yartek. The contributions of this thesis include the presentation of new algorithms and devices, their applications and also some theoretical results. They are briefly summarized as follows: a focused ultrasonic sensing device is developed and used in mapping applications; an algorithm that processes the ultrasonic readings in order to build a reliable map of the environment is presented; a new genetic algorithm for scan matching called GLASM is proposed; schedulability conditions for non-preemptive scheduling in a hard real-time operating system are introduced and a design methodology is proposed; a real-time kernel for embedded systems in mobile robotics is presented; and a practical robotic application is described, with implementation details and trade-offs explained. - PublicationApplications and limits of Raman spectroscopy in the study of colonic and pulmonary malformations.(Università degli studi di Trieste, 2008-04-10)
;Codrich, DanielaSergo, ValterThis thesis arose from the collaboration between the Department of Materials and Natural Resources of the University of Trieste and the Department of Pediatric Surgery of the Burlo Garofalo Children's Hospital. The objective of our study group was to evaluate the possible applications of Raman spectroscopy to the study of human tissues, with particular attention to tissues affected by congenital malformations. The interest in Raman spectroscopy, a vibrational spectroscopy based on the inelastic scattering of photons, stems from the fact that this technique can provide precise details on the chemical composition and molecular structure of cells and tissues. During the project, the procedures for preserving and preparing the samples, which were collected with parental consent during surgery, were standardized. Using a Raman spectrometer equipped with a laser emitting monochromatic light at 785 nm, samples of normal colon and lung were analysed, representative of a predominantly layered tissue and of a homogeneous parenchyma, respectively. Malformed lung tissues affected by Cystic Adenomatoid Malformation (CCAM) and by Bronchopulmonary Sequestration (BPS) were also studied. The data acquisition and processing procedures are described. After applying a multivariate analysis such as k-means cluster analysis, coloured Raman pseudo-maps were obtained, which were then compared with the same native, unstained samples mounted on glass slides. The mean Raman spectra were extracted from each cluster and compared in order to highlight differences between different areas of the sample. The assignment of the main bands to the different chemical species was made according to the literature. The Raman analysis was able to differentiate the various layers of the colon (serosa, muscle, submucosa, mucosa, nerve plexuses), highlighting subcellular structures in nervous elements such as ganglia. Normal and malformed lung sections showed different clustering and mean spectra, allowing a differentiation between CCAM and BPS; in one case our analysis disagreed with the pathologist's diagnosis and prompted a review of the slides and a revision of the diagnosis. The enhancement of the Raman signal due to the resonance of chromophores, such as the heme group of hemoglobin, with the 785 nm radiation is discussed, and we propose a method to minimize the spectral contribution of this molecule. We also compared the Raman data with data obtained on the same samples at the Institute of Analytical Chemistry of the University of Dresden using another vibrational technique, infrared spectroscopy. For the defence of this thesis we were granted the possibility of presenting those data, allowing a comparison between the two techniques in terms of acquisition times and spatial and spectral resolution. - PublicationApplications of x-ray computed microtomography to material science: devices and perspectives(Università degli studi di Trieste, 2008-04-23)
;Favretto, StefanoLucchini, ElioThe three-dimensional visualization of the inner microstructural features of objects and materials is of relevant interest for a wide range of scientific and industrial applications. X-ray computed microtomography (μ-CT) is a powerful non-destructive technique capable of satisfying these needs. Once the complete reconstruction of the sample is available, a quantitative characterisation of the microstructure is essential. Through digital image processing tools, image analysis software or custom-developed algorithms, it is possible to obtain an exhaustive geometrical, morphological and topological description of the features inside the volume, or to extract other particular parameters of interest (e.g. porosity, void distribution, cell size distribution, average strut length, connectivity between the cells, tortuosity). This thesis was carried out at the third-generation Elettra Synchrotron Radiation Facility (Trieste, Italy), where a hard X-ray imaging beamline is available. The experience developed at this beamline has led scientists to design a complementary state-of-the-art μ-CT facility based on a micro-focus X-ray source, working both in absorption and in phase contrast mode. In this dissertation a detailed description of this facility is given, together with a rigorous characterization of the imaging system capabilities in terms of the actually achievable spatial resolution, in order to optimize the working parameters for the different experiments. The main artefacts that contribute to the degradation of the quality of the reconstructed images have been considered (e.g. beam hardening effects, ring artefacts, uncertainties associated with the cone-beam geometry): procedures are presented in order to eliminate, or at least reduce, the causes of these artefacts. The aspects related to the digital image processing of the reconstructed data are extensively developed in this study: appropriate methodologies have been elaborated, capable of dealing with the different three-dimensional data of complex porous media and providing a correlation between the microstructure and the macroscopic behaviour of the observed materials. Three representative examples obtained with the described methods are used to demonstrate the application of μ-CT, combined with the developed image processing tools, to material science: the geometrical and morphological characterisation of polyurethane foams employed in the automotive industry for their vibro-acoustic properties; a new approach to characterize the microstructure of resonance spruce wood in order to study its acoustical behaviour; and, finally, the possibility of revealing defects in hybrid friction stir welded aluminium joints, guiding the optimization of the process parameters. - PublicationUn nuovo dispositivo per analisi microcalorimetriche nel settore biologico: DSC-MEMS(Università degli studi di Trieste, 2008-04-23)
;Maggiolino, Stefano ;Sbaizero, OrfeoScuor, NicolaThis doctoral thesis concerns the design, modelling and characterization of a MEMS (Micro Electro Mechanical System) sensor for calorimetric measurements on cells. Several devices were designed and, in several cases, the design choices aimed at improving the sensor characteristics were simulated. Choices that could not be applied because of the fabrication technology used, but that could improve the sensor with different technologies, were simulated as well. All the design choices investigated, and in some cases applied to a fabricated device, can be grouped into three categories: choice of materials, technical solutions to reduce heat dissipation, and choices to increase the signal directly at the device. After designing and simulating the devices, a method was studied for detecting the electrical signal, suitable for acquiring very low intensities. The study started from simple amplifiers and then moved to noise-minimizing methods such as a lock-in amplifier coordinated with an optical chopper and with an electrical chopper. These systems were ultimately studied so as to maximize the signal. The fabricated devices were characterized by determining the minimum thermal power they can sense. Using the techniques described above, powers of the order of 25 nW could be measured, a very promising value. The devices were also tested in liquid, and the results show that there are no short-circuit effects between the electrodes, so they can safely be used in an aqueous environment. In conclusion, in this doctoral work a device capable of measuring dissipated thermal powers of the order of tens of nW was developed; self-repairing processes were identified and optimized to restore devices not conforming to the design; methodologies were identified for the characterization of the devices, applicable later to the actual measurement as well; the study of an optimization of the process for fabricating barriers by laser photopolymerization was undertaken; and solutions were proposed, tested only via finite element simulation, that would guarantee remarkable increases in signal intensity. - PublicationPROGETTAZIONE E CARATTERIZZAZIONE DI UN BIOSENSORE MEMS.(Università degli studi di Trieste, 2008-04-23)
;Antoniolli, Francesca ;Sbaizero, OrfeoScuor, NicolaIn recent years, cells have been the subject of in-depth study and, in some cases, of very sophisticated experiments. However, although much is known about their structure, little information is available on cell mechanics and on the cellular response to mechanical stimuli. Cells can in fact sense mechanical forces and convert them into biological responses and, conversely, it has long been known that biological and biochemical signals influence the ability of cells to sense, generate and withstand mechanical forces. In recent years, various mechanisms have been conceived and built to apply mechanical forces to cells and to detect the resulting deformations. These systems, however, have some limitations: - the applied force is not adequate for the phenomenon under investigation; - the study is carried out on an entire cell population; - the force is applied locally and not to the whole cell. The present thesis work, whose primary objective was the development, design and fabrication of a device for the mechanical stimulation of a single cell and the detection of the resulting deformations, focused on the study of devices that could overcome the above limitations. The choice fell on Micro Electro Mechanical Systems since, in addition to having dimensions compatible with cell characteristics and ensuring modest fabrication and operating costs, they guarantee - the possibility of applying forces over a wide range (pN-µN); - the possibility of carrying out studies on a single cell, and in particular on adherent cells; - the possibility of stimulating the whole cell, and not just a local portion of it. The first part of the work was devoted to developing devices which, conceived in a manner analogous to traditional universal testing machines, would allow the anchoring of a single cell onto a platform with different geometries depending on whether a uniaxial, biaxial or multiaxial tensile load or a shear load was to be applied. These devices, however, encountered several problems when operating in saline solutions such as cell culture media. New devices were therefore conceived and developed to overcome the problems found with the first ones: the MEMS was split over two 2x2 mm outlines, one hosting the actuation motor operating in air and the other hosting the platform on which the cell under examination is placed. To complete the operation of these devices, a technique was successfully developed to connect them by means of a carbon fibre anchored to the MEMS by wire bonding.
Finally, the instrumentation and techniques needed to work with living cells were acquired and set up: a material was identified that allows optimal anchoring of the cell and can be used to locally functionalize the cell platform; a cell-culture laboratory was set up at the Department of Materials and Natural Resources; a technique for the manipulation of single cells was developed; and, finally, some preliminary tensile tests on a single cell were carried out. - PublicationA hardware field simulator for photovoltaic materials applications.(Università degli studi di Trieste, 2008-04-23)
;Massi Pavan, AlessandroRoitti, SergioThis work describes a photovoltaic field simulator (hereafter, the simulator). The simulator is an electronic power converter which, supplied by the electrical grid, reproduces the current-voltage characteristic of a photovoltaic field (a set of photovoltaic modules connected in series and in parallel) operating under arbitrary temperature and irradiance conditions. The new device will be used within the photovoltaic laboratory associated with the plant under construction on the roof of the building hosting the Department of Materials and Natural Resources of the University of Trieste. The simulator is proposed as a useful tool for designers of solar devices operating in grid-connected photovoltaic systems. In particular, the simulator will make it possible to predict the behaviour of new photovoltaic modules operating under arbitrary shading conditions and installed in a real photovoltaic system. The use of the simulator will be particularly effective for simulating thin-film technologies such as, for example, amorphous silicon, cadmium telluride, etc. The simulator will also be needed to test the components of a grid-connected photovoltaic system, with particular reference to the power conditioning systems (also called inverters). These systems, besides converting the DC voltage produced by the photovoltaic modules into a voltage compatible and synchronized with that of the grid, must guarantee instant-by-instant tracking of the maximum power point of the photovoltaic field to which they are connected. The work is divided into five chapters. The first chapter gives a brief description of the state of the art and of some economic aspects of photovoltaic technology. The second chapter recalls the classical model of a solar cell and the definitions of its main characteristics (maximum power point, efficiency, fill factor, etc.). In the same chapter, an overview of the materials and technologies used in the fabrication of photovoltaic devices divides solar cells, as suggested by Martin Green, into three generations: the first comprises devices made of crystalline silicon (mono- and polycrystalline); the second comprises thin-film devices (amorphous silicon, cadmium telluride CdTe, copper indium diselenide CIS, copper indium gallium diselenide CIGS, copper indium gallium sulphide-selenide CIGSS) and Graetzel cells; and the third comprises multi-junction, intermediate-band and organic cells. Chapter three describes the components of a grid-connected photovoltaic system and proposes a new method for determining the current-voltage and power-voltage characteristics produced by photovoltaic devices. The method is effective because it does not require experimental measurements on the individual devices: the data given in the common data sheets supplied with photovoltaic modules are sufficient to determine their behaviour as the operating temperature and the solar radiation level vary. The efficiency of a photovoltaic system (Balance Of System, BOS) is calculated in chapter four.
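For reference, the classical model of a solar cell recalled in the second chapter is commonly written as the single-diode equation below (whether the thesis adopts exactly this form is an assumption); the symbols are the conventional ones: photo-generated current I_ph, diode saturation current I_0, series and shunt resistances R_s and R_sh, ideality factor n and thermal voltage V_T. The fill factor listed among the main characteristics is defined from the maximum power point and the open-circuit and short-circuit values:

I = I_{ph} - I_0 \left[ \exp\!\left( \frac{V + I R_s}{n V_T} \right) - 1 \right] - \frac{V + I R_s}{R_{sh}},
\qquad
FF = \frac{V_{mp}\, I_{mp}}{V_{oc}\, I_{sc}}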
Particular emphasis is given to the mismatch effect, which becomes more important as the level of shading on the plane of the photovoltaic modules making up the plant increases. Finally, the last chapter describes the simulator and its applications. - PublicationStudio sperimentale sulla adesione della vetroceramica a substrati tradizionali ed innovativi in clinica protesica.(Università degli studi di Trieste, 2008-04-28)
;Ret, Viviana ;De Stefano, ElettraSchmid, ChiaraEXPERIMENTAL STUDY ON THE ADHESION OF VENEERING GLASS-CERAMIC TO TRADITIONAL AND INNOVATIVE SUBSTRATES IN PROSTHETIC DENTISTRY. Fixed prostheses can successfully restore the health of oral tissues, with aesthetics and function that remain stable over time. Ceramics, introduced into dentistry more than two centuries ago, have played a predominant role in this branch of prosthetic dentistry for fifty years through their application in the metal-ceramic technique; ceramic materials have in fact become synonymous with aesthetics, resistance in the oral environment and biocompatibility, and their intrinsic brittleness has been overcome thanks to the development of supporting substructures with suitable mechanical properties, namely metal alloys. Prosthetic dentistry is currently experiencing a strong push towards so-called "metal-free" or "all-ceramic" restorations and, among the most innovative materials currently used in prosthetic practice, zirconia stands out: a high-strength ceramic material proposed as an alternative to metal as the support for the glass-ceramic veneer in restorations intended for all areas of the oral cavity. Delamination of the glass-ceramic veneer is not a rare event and affects all types of substrates. Such fractures impair function and compromise aesthetics, and their only definitive solution is the replacement of the restoration. The objective of this doctoral thesis was to compare metal-ceramic adhesion and zirconia-glass-ceramic adhesion through the study of a large number of specimens subjected to a series of laboratory tests that are significant for preclinical comparison and predictive of clinical behaviour. A total of 195 metal-ceramic specimens (IPS d.SIGN® 30 alloy, Williams Alloy, U.S.A.; IPS d.SIGN glass-ceramic, Ivoclar-Vivadent, Liechtenstein) were produced by 13 different dental technicians. The specimens, divided into 15 groups differing in the alloy casting and finishing procedures and in the ceramic firing, were subjected to a three-point bending test according to the UNI-ISO 9693 standard [1], second edition, April 2001; the debonded surfaces were examined by scanning electron microscopy. Subsequently, profilometric analysis and microhardness tests were performed on surfaces of the same alloy with different surface finishes. In addition, 25 zirconia specimens (Zirkonzahn®, Brunico (Bz), Italy) were produced, onto which a layer of compatible glass-ceramic (Ice Zirkon Keramic, Zirkonzahn®, Bz, Italy) was applied with the traditional layering technique, analogously to the metal specimens, in order to be subjected to shear tests. The debonded surfaces were then examined by SEM and subjected to EDX analysis and Raman spectroscopy. The results of this research demonstrate the validity of the metal-ceramic bond with a Co-Cr alloy regardless of the operator's manual skill (overall mean 40.15±9.28 MPa), while highlighting its micromechanical component, as already reported in the literature (Hammad IA and Stein RS, 1990; Lubovich RP and Goodkind RJ, 1977; Barghi N et al., 1987); operators must take this aspect into account: when working base-metal alloys for fixed prosthetic restorations, the greatest care must be taken with the surface treatments that precede the application of the ceramic.
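As an illustrative note on the shear tests mentioned above (generic symbols, not values taken from the thesis), shear bond strengths reported in MPa are conventionally obtained by dividing the load recorded at debonding by the bonded area:

\tau = \frac{F_{\max}}{A}

where F_max is the failure load and A is the veneer-substrate bonded area.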
An adequate bond strength between metal and ceramic has been defined by the ISO standard as debonding occurring above 25 MPa, whereas an adequate bond strength for all-ceramic systems has not yet been defined (Al-Dohan HM et al., 2004). The introduction of zirconia as a ceramic material for the construction of substructures for fixed prostheses has drawn attention to the difficulty of achieving a valid bond with the veneering ceramics in use. Yttria-stabilized zirconia has an elastic modulus of 220 GPa (like the Co-Cr alloy) and offers a flexural strength that, depending on the production system, varies between 800 and 1200 MPa, the highest among ceramic substructures, and it is considered suitable for restorations, including multi-unit bridges in the posterior regions (Raigrodski AJ, 2004). Our results on the adhesion between an yttria-zirconia support and the compatible glass-ceramic veneer gave a mean of 29.53 MPa (±13.97), showing behaviour in line with the literature data for the same type of materials and construction technique (Al-Dohan HM et al., 2004; Aboushelib MN et al., 2005). If we compare these values with the metal-ceramic systems, it is evident that many specimens do not exceed the indicated minimum limit of 25 MPa, mainly because of low-load failure within the glass-ceramic, probably due to internal defects (bubbles and porosity). This highlights the great importance of the ceramic application step which, as reported in the literature (Dundar M et al., 2007), requires great technical care since it can heavily influence the final result, especially in a "delicate" system such as zirconia-glass-ceramic, which in general shows lower adhesion values than metal-ceramic systems. References:
1. UNI EN ISO 9693, "Sistemi per restaurazioni dentali di metallo-ceramica", Norma Italiana, second edition, April 2001.
Aboushelib MN, de Jager N, Kleverlaan CJ. Microtensile bond strength of different components of core veneered all ceramic restorations. Dent Mater 2005;21:984-991.
Al-Dohan HM, Yaman P, Dennison JB, Razzoog ME, Lang BR. Shear strength of core-veneer interface in bi-layered ceramics. J Prosthet Dent 2004;91:3439-55.
Barghi N, McKeehan Whitmer M, Aranda R. Comparison of fracture strength of porcelain-veneered to high noble and base metal alloys. J Prosthet Dent 1987;57:23-25.
Dundar M, Ozcan M, Gokce B, Comlekoglu E, Leite F, Valandro LF. Comparison of two bond strength testing methodologies for bilayered all-ceramics. Dent Mater 2007;23(5):630-6.
Hammad IA, Sheldon Stein R. A qualitative study for the bond and colour of ceramometals. Part I. J Prosthet Dent 1990;63:643-53.
Lubovich RP, Goodkind RJ. Bond strength studies of precious, semiprecious and nonprecious ceramic-metal alloys with two porcelains. J Prosthet Dent 1977;37(3):288-99.
Raigrodski AJ. Contemporary materials and technologies for all-ceramic fixed partial dentures: A review of the literature. J Prosthet Dent 2004;92:557-62. - PublicationMultiscale simulation of polymer-clay nanocomposites(Università degli studi di Trieste, 2009-04-08)
;Scocchi, Giulio ;Fermeglia, MaurizioDanani, AndreaThe main subject of this thesis was the development of a multiscale procedure for the simulation of polymer-layered silicate nanocomposites (PLSN). The final objective was to provide concrete support to the component selection stage of the design process for new materials. In particular, the polymer/silicate interface characteristics were studied using MD (Molecular Dynamics) techniques and the aggregated platelet structures (stacks) using the DPD (Dissipative Particle Dynamics) method, while macroscopic models were built and analysed using a FEM (Finite Element Method) based approach. Our sequential multiscale scheme allowed us to successfully predict Young's modulus for different PLSN systems. - PublicationNew approaches for discrete and continuum analysis of the mechanical behaviour of cell(Università degli studi di Trieste, 2009-04-08)
;Codan, Barbara ;Sergo, ValterValdevit, LorenzoThe thesis presented here concerns cell mechanics. In recent years, the attention of the scientific community has been directed to this aspect of cell biology because, as has been demonstrated, mechanical stimuli and biochemical signals are closely correlated inside the cell, yet the mechanism linking them is still unclear. The available techniques fall into two large categories according to the number of cells analysed, that is, they differ depending on whether a cell population or a single cell is studied. After a careful analysis of the available methodologies, it was decided to develop two new methods. The first concerns the deformation of a polyacrylamide gel on which fluorescent particles have been deposited. This method draws inspiration from substrate deformation and traction force microscopy: by studying the displacement of the particles due to the presence of the cell, it is possible to obtain information on the forces applied by the cell itself. A new device was built that made it possible to stretch the gel, thereby deforming a single cell, and to study its response to the deformation. In parallel with these studies, which use a continuous, macroscopic substrate, it was decided to develop a new microelectromechanical device (MEMS), whose most innovative aspect is the presence on the same device of actuators, responsible for deforming the cell, and of sensors, which allow reading the components of the force exerted by the cell in response to the deformation applied by the actuators. The comb-drive structure was chosen for both of these components. The device was designed following the fabrication constraints of the SOIMUMPs® technology, which realizes MEMS devices with SOI technology, one of the most suitable for cell studies. Finite element simulations were carried out, in particular of the sensor, in order to evaluate its sensitivity, which turns out to be of the order of µN. During the design of this device, the problem arose of positioning the cell at the centre of the MEMS. The solution comes from the localization of protein spots, which create anchoring points for the cell. Some works on protein patterning exist in the literature, but none of them satisfies the constraints imposed by a three-dimensional device such as the designed MEMS. A new use of a micro-Raman spectroscope was developed in the context of maskless lithography. This technique makes it possible to produce patterned substrates with submicrometric resolution without the constraint of flat surfaces and without the need for a mask. The technique was used to deposit protein spots. The resistance of fibronectin to the lithographic process and its compatibility with cells after the lithographic treatment were tested with positive results. The final result was the fabrication of protein spots with user-defined geometries and with dimensions comparable to those of the cell adhesion complexes (focal adhesions).
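As an illustrative aside on the comb-drive structure chosen for both the actuators and the sensors, the standard textbook expression for the electrostatic force generated by a lateral comb drive (generic symbols, not design values from the thesis) is

F = \frac{1}{2}\,\frac{\partial C}{\partial x}\,V^{2} = \frac{N\,\varepsilon_0\, t\, V^{2}}{g}

where N is the number of finger pairs, t the structure thickness, g the lateral gap between the fingers, V the applied voltage and \varepsilon_0 the vacuum permittivity.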