
Dissertations / Theses on the topic 'Astronomical imaging'



Consult the top 50 dissertations / theses for your research on the topic 'Astronomical imaging.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wong, Alison. "Artificial Intelligence for Astronomical Imaging." Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30068.

Abstract:
Astronomy is the ultimate observational science. Objects outside our solar system are beyond our reach, so we are limited to acquiring knowledge at a distance. This motivates the need to advance astrophysical imaging technologies, particularly for the field of high contrast imaging, where some of the most highly prized science goals require high fidelity imagery of exoplanets and of the circumstellar structures associated with stellar and planetary birth. Such technical capabilities address questions of both the birth and death of stars, which in turn informs the grand recycling of matter in the chemical evolution of the galaxy and the universe itself. Ground-based astronomical observation primarily relies on extreme adaptive optics systems to extract signals arising from faint structures within the immediate vicinity of luminous host stars. These systems are distinguished from standard adaptive optics systems by faster and more precise wavefront correction, which leads to better imaging performance. The overall theme of this thesis therefore ties together advanced topics in artificial intelligence with the techniques and technologies required for the field of high contrast imaging. This is accomplished with demonstrations of deep learning methods used to improve the performance of extreme adaptive optics systems, deployed and benchmarked with data obtained at the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system operating at the observatory on the summit of Mauna Kea in Hawaii. Solutions encompass both hardware and software, with optimal recovery of scientific outcomes delivered by model fitting of high contrast imaging data with modern machine learning techniques. This broad-ranging study, spanning the acquisition, analysis and modelling of data, aims to yield more accurate and higher fidelity observables, which in turn deliver improved interpretation and scientific return.
2

Leung, Wun Ying Valerie. "Inverse problems in astronomical and general imaging." Thesis, University of Canterbury. Electrical and Computer Engineering, 2002. http://hdl.handle.net/10092/7513.

Abstract:
The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. However, the physical processes have an infinite amount of information, but only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. 
The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as being complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek to either estimate or compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. The bispectrum is also a post-processing method which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only information in the image channel to estimate the phase, which is difficult. An alternative method for phase estimation is from a Shack-Hartmann (SH) wavefront sensing channel. However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from the problem that phase information from only one channel is used. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimation is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis.
An additional advantage of this approach is that since speckle images are imaged in a narrowband, while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method which calculates the centre of gravity of the image is in fact prone to noise and is suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilise elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem where the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This however does not completely remove the original problem; in addition it introduces other difficulties associated with reference star measurements such as anisoplanatism and reduction of valuable observing time. 
In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering a potential to overcome the problem of estimating the magnitude of the object.
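As an illustrative aside (not code from the thesis), the centre-of-gravity spot estimator that this abstract criticises as noise-prone can be sketched in a few lines: the spot position in a Shack-Hartmann subaperture image is simply the intensity-weighted mean pixel coordinate, and any uniform background pulls the estimate toward the frame centre.

```python
def centre_of_gravity(image):
    """Return the (x, y) intensity-weighted centroid of a 2-D pixel grid."""
    total = float(sum(sum(row) for row in image))
    x = sum(j * v for row in image for j, v in enumerate(row)) / total
    y = sum(i * v for i, row in enumerate(image) for v in row) / total
    return x, y

# A clean spot at column 2, row 1 is recovered exactly...
clean = [[0, 0, 0, 0],
         [0, 0, 8, 0],
         [0, 0, 0, 0]]
# ...but a uniform background of 1 count per pixel biases the x estimate
# toward the frame centre, which motivates replacing this estimator with
# a deconvolution-based one.
noisy = [[1, 1, 1, 1],
         [1, 1, 9, 1],
         [1, 1, 1, 1]]
```

Here `centre_of_gravity(clean)` gives (2.0, 1.0), while `centre_of_gravity(noisy)` is pulled to (1.7, 1.0) even though the spot has not moved.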
3

Clare, Richard M. "Wavefront sensing and phase retrieval for astronomical imaging." Thesis, University of Canterbury. Electrical and Computer Engineering, 2004. http://hdl.handle.net/10092/7841.

Abstract:
Images of astronomical objects captured by ground-based telescopes are distorted by the earth's atmosphere. The atmosphere consists of random time-varying layers of air of differing density and hence refractive index. These refractive index fluctuations cause wavefronts that propagate through the atmosphere to become aberrated, resulting in a loss in resolution of the astronomical images. The wavefront aberrations that are induced by the atmosphere can be compensated by either real-time adaptive optics, where a deformable mirror is placed in the optical path, or by computer post-processing algorithms on the distorted images. In an adaptive optics system, the wavefront sensor is the element that estimates the wavefront phase aberration. The wavefront cannot be measured directly, and instead an aberration is introduced to the optical path to produce two or more intensity distributions, from which the wavefront slope or curvature can be estimated. Wavefront sensing is one of the topics of this thesis. A number of computer post-processing algorithms exist to deblur astronomical images, such as phase diversity, deconvolution from wavefront sensing (DWFS) and phase retrieval, with improvements to the latter two published in this thesis. The pyramid wavefront sensor consists of a four-sided glass prism placed in the focal plane of the telescope, which subdivides the focal plane in four, and a relay lens which re-images the four sections of the focal plane to form four images of the aperture at the conjugate aperture plane. The wavefront slope is estimated as a linear combination of the aperture images. The pyramid sensor can be generalised to a class of N-sided glass prism wavefront sensors that subdivide the focal plane into N equal sections, forming N aperture images at the conjugate aperture plane. The minimum number of sides required to estimate the slope in two orthogonal directions is three, and the cone sensor is derived by letting N tend to infinity. 
Simulation results show that in the presence of photon, but not read, noise the cone sensor provides the best wavefront estimate. For the pyramid sensor, the wavefront is typically reconstructed from the estimate of the wavefront slope in two orthogonal directions. Some information is inherently lost when the four measurements (aperture images) are reduced to two slope estimates. A new method is proposed to reconstruct the wavefront directly from the aperture images, removing the intermediate step of forming the slope estimates. Reconstructing the wavefront directly from the images is shown through simulation of atmospheric phase screens to give a better wavefront estimate than reconstructing from the slope estimates. This result is true for all pyramid type sensors tested. The pyramid wavefront sensor can be generalised by placing the lenslet array at the focal plane to subdivide the complex field in the focal plane into more than four sections. Using this framework, the pyramid sensor can be considered as the dual of the Shack Hartmann sensor, which subdivides the aperture plane with a lenslet array, since the two sensors subdivide each one of a Fourier pair. Both sensors estimate the wavefront slope with a centroid operator on the low resolution images. Also, in both sensors there exists a trade-off between the spatial resolution obtainable and the accuracy of the slope estimates. This trade-off is determined by the size of the lenslets in the array for both sensors, and is inverted between the two sensors. Simulation results run in open loop demonstrate that the lenslet array at the aperture (Shack-Hartmann) and focal (pyramid) planes do provide wavefront estimates of equivalent quality. The lenslet array at the focal plane, however, can be modulated so as to increase its linear range and thus provide a better wavefront estimate than the Shack-Hartmann sensor in open loop simulations. 
Phase retrieval is a non-linear iterative technique that is used to recover the phase in the aperture plane from intensity measurements at the focal plane and other constraints. A novel phase retrieval algorithm, which subdivides the focal plane of the telescope with a lenslet array and uses the aperture images formed at the conjugate aperture plane as a magnitude constraint, is proposed. This algorithm is more heavily constrained than conventional phase retrieval or phase retrieval in conjunction with the Shack-Hartmann sensor, with constraints applied at three Fourier planes: the aperture, focal and conjugate aperture planes. The subdivision of the focal plane means that the ambiguity problem that exists in other phase retrieval algorithms between an object A(x,y) and its twin A*(-x,-y) is removed, and this is supported by simulation results. Simulation results also show that the performance of the algorithm is dependent on the starting point, and that starting with the linear estimate from the aperture images gives a better wavefront estimate than starting with zero phase. DWFS is a computer post-processing algorithm that combines the distorted image and wavefront sensing measurements in order to compensate the image for the atmospheric turbulence. An accurate calibration of the reference positions for the centroids of the Shack-Hartmann sensor is essential for an accurate estimate of the wavefront, and hence of the astronomical object, with DWFS. The conventional method for estimating these reference positions is to image a laser beam through the Shack-Hartmann lenslet array but not through the atmosphere. An alternative calibration technique is to observe a single bright star and optimise the Strehl ratio with respect to the reference positions.
Results using DWFS on data captured at the Observatoire de Lyon show that this new technique can provide wavefront estimates of similar quality as the grid calibration technique, but without the need for a separate calibration laser.
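As an illustrative aside (not code from the thesis), the slope estimate that the pyramid sensor abstract describes - a linear combination of the four aperture images - has a standard per-pixel form. The quadrant labelling below is one common convention; conventions vary between instruments.

```python
def pyramid_slopes(i1, i2, i3, i4):
    """Normalised (sx, sy) slope estimates for one pupil position from the
    four pupil-image intensities of a pyramid wavefront sensor."""
    total = i1 + i2 + i3 + i4
    sx = ((i1 + i2) - (i3 + i4)) / total
    sy = ((i1 + i3) - (i2 + i4)) / total
    return sx, sy
```

A flat wavefront illuminates all four pupil images equally, giving zero slope; an imbalance between quadrant pairs yields a signed slope signal. The thesis's point is that this reduction of four measurements to two slopes discards information, which motivates reconstructing the wavefront directly from the aperture images instead.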
4

Dunlop, Colin Nigel. "The imaging properties of large reflecting astronomical telescopes." Thesis, Durham University, 1986. http://etheses.dur.ac.uk/7019/.

Abstract:
This thesis is concerned with some of the limitations on the imaging properties of astronomical telescopes of large apertures. These arise from the atmosphere, the diffracting aperture, the residual errors in the optically worked surfaces and the characteristics of the detection devices. Methods of Fourier optics are used to determine modulation transfer functions and the associated point spread functions. They are applied to three problems. The first of these is a comparison of the diffraction patterns that are expected from multi-mirror telescopes. These are made either of separated individual mirrors or of segmented mirrors shaped to an overall parabolic shape. The effect of the dilution of the aperture in the former and the effect of misalignment in the latter is investigated. In the second study, the factors contributing to the imaging of the UK Schmidt telescope are considered, and design studies of this and two other variants are examined. In particular the limiting effect of the atmosphere and of the detecting photographic emulsion is noted. Thirdly, the overall limitation of the atmospheric seeing is considered experimentally. The Durham Polaris seeing monitor has been designed and built with a shear interferometer. It has been tested at ground level, where local measurements of seeing have been made. In the near future it will be taken to and used at La Palma.
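As an illustrative aside (not code from the thesis), the Fourier-optics relation underlying this work can be sketched in one dimension: under Fraunhofer diffraction the point spread function is the squared modulus of the Fourier transform of the aperture function, and a diluted aperture of the same collecting area (as in a multi-mirror telescope) redistributes flux from the core into sidelobes.

```python
import cmath

def psf_1d(pupil, n_out):
    """|DFT of a 1-D pupil function|^2, evaluated on n_out focal-plane samples."""
    return [abs(sum(p * cmath.exp(-2j * cmath.pi * k * x / n_out)
                    for x, p in enumerate(pupil))) ** 2
            for k in range(n_out)]

# A filled 4-sample aperture and a diluted aperture with the same total area:
# both put intensity 16 in the on-axis sample, but the diluted one spreads
# the remaining flux into different sidelobes.
filled = psf_1d([1, 1, 1, 1], 8)
diluted = psf_1d([1, 1, 0, 0, 1, 1], 8)
```

The modulation transfer function follows from the PSF by a further Fourier transform (equivalently, as the autocorrelation of the pupil), which is the quantity the thesis computes.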
5

Lagadec, Tiphaine. "Advanced photonic solutions for high precision astronomical imaging." Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/22078.

Abstract:
The sharply rising productivity of exoplanet searches over the past two decades has delivered profound statistical insights into the prevalence and diversity of worlds around other stars. The frontier for astronomers has now expanded into the new era of exoplanet characterisation. Major progress here will only be achieved with new instrumental advances. Most highly sought-after is the capability to separate the faint light of a planet from the glare of the host star. The direct detection of planetary photons will enable unique spatial and spectral studies, revealing intrinsic properties of atmospheres and surfaces. In this project, a prototype instrument, GLINT South (Guided Light Interferometric Nulling Technology), was developed. It employs nulling interferometry, in which the light from the host star is actively rejected through destructive interference. Such advanced control and processing of starlight is accomplished by way of photonic technology fabricated into integrated optical chips. A monochromatic null depth was measured in the laboratory consistent with 0 within an uncertainty of 10⁻³. The instrument was tested at the Anglo-Australian Telescope, and a sample of infrared-bright stars was observed, retrieving uniform disk diameters in close agreement with the literature values, despite the stellar diameters being beyond the telescope's formal diffraction limit. Furthermore, an algorithm was created to optimise the design of integrated optics waveguides for pupil remapping chips, leading to the design of a 4-input remapping chip which will significantly expand capabilities and deliver multi-channel nulling as well as complex visibility data. The photonic nulling devices, inscribed within miniature, robust and environmentally stable monolithic chips, are a promising avenue to one of astronomy's grandest challenges: characterising the chemical and physical environments of exoplanets.
6

Tubbs, Robert Nigel. "Lucky exposures : diffraction limited astronomical imaging through the atmosphere." Thesis, University of Cambridge, 2003. https://www.repository.cam.ac.uk/handle/1810/224517.

Abstract:
The resolution of astronomical imaging from large optical telescopes is usually limited by the blurring effects of refractive index fluctuations in the Earth's atmosphere. By taking a large number of short exposure images through the atmosphere, and then selecting, re-centring and co-adding the best images, this resolution limit can be overcome. This approach has significant benefits over other techniques for high-resolution optical imaging from the ground. In particular, the reference stars used for our method (the Lucky Exposures technique) can generally be fainter than those required for the natural guide star adaptive optics approach or those required for other speckle imaging techniques. The low complexity and low instrumentation costs associated with the Lucky Exposures method make it appealing for medium-sized astronomical observatories. The method can provide essentially diffraction-limited I-band imaging from well-figured ground-based telescopes as large as 2.5 m diameter. The faint limiting magnitude and large isoplanatic patch size for the Lucky Exposures technique at the Nordic Optical Telescope mean that 25% of the night sky is within range of a suitable reference star for I-band imaging. Typically the 1%-10% of exposures with the highest Strehl ratios are selected. When these exposures are shifted and added together, field stars in the resulting images have Strehl ratios as high as 0.26 and full width at half maximum flux (FWHM) as small as 90 milliarcseconds. Within the selected exposures the isoplanatic patch is found to be up to 60 arcseconds in diameter at 810 nm wavelength. Images within globular clusters and of multiple stars from the Nordic Optical Telescope using reference stars as faint as I ≈ 16 are presented. A new generation of CCDs (Marconi L3Vision CCDs) was used in these observations, allowing extremely low noise, high frame-rate imaging with both fine pixel sampling and a relatively wide field of view.
The theoretical performance of these CCDs is compared with the experimental results obtained.
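As an illustrative aside (not code from the thesis), the select/re-centre/co-add loop of the Lucky Exposures technique can be sketched on toy 1-D "frames", using the peak brightness of each frame as a crude stand-in for its Strehl ratio:

```python
def lucky_stack(frames, keep_fraction=0.1):
    """Rank short exposures by peak brightness (a simple Strehl proxy),
    keep the best fraction, re-centre each on its peak, and co-add."""
    ranked = sorted(frames, key=max, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    width = len(ranked[0])
    centre = width // 2
    stacked = [0.0] * width
    for frame in ranked[:n_keep]:
        shift = centre - frame.index(max(frame))  # re-centre on the peak
        for i, v in enumerate(frame):
            j = i + shift
            if 0 <= j < width:
                stacked[j] += v
    return stacked

# Four toy exposures: only the two sharpest (peaks 9 and 7) survive a 50%
# selection, and their peaks are aligned before co-adding.
frames = [[0, 5, 1, 0, 0], [0, 0, 9, 0, 0], [0, 0, 1, 7, 0], [1, 0, 0, 0, 0]]
stacked = lucky_stack(frames, keep_fraction=0.5)
```

In the real technique the selection metric is the Strehl ratio measured on a reference star, and the shift is determined from that star's speckle pattern rather than a bare peak search.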
7

Zadnik, Jerome A. "The use of charge coupled devices in astronomical speckle imaging." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/14947.

8

Young, N. G. "The digital processing of astronomical and medical coded aperture images." Thesis, University of Southampton, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.482729.

9

Brockie, Richard. "Extending the limits of direct high angular resolution infrared astronomical imaging." Thesis, University of Edinburgh, 1998. http://hdl.handle.net/1842/30315.

Abstract:
Observing in the infrared (IR) part of the electromagnetic spectrum is now an established tool of astronomy. It allows investigations of, among others, high redshift galaxies, star formation regions and very low mass stars close to the hydrogen burning limit, as well as providing information complementary to that obtained in other regions of the spectrum. The dimensions of infrared arrays have increased over the years from 62 x 58 pixels in IRCAM1, the first infrared imager on the UK Infrared Telescope, to the 256 x 256 array in IRCAM3, the current camera, soon to be superseded by 1024 x 1024 arrays in the next generation of instruments. In this thesis, I describe the first observing programme which uses infrared observations to measure trigonometric parallaxes - made possible through the introduction of larger IR arrays. In this programme, certain difficulties associated with infrared techniques are encountered and described, with results presented for a previously measured star and a brown dwarf candidate. A major benefit of observing in the infrared is that atmospheric distortion has less of an effect on the formation of images - seeing on a good site can be < 0.5" at 2μm. The recent development of Adaptive Optics (AO) systems, which compensate for wavefront aberrations as observations are made, further reduces the effects of atmospheric distortion. AO systems have a servo-loop in which a deformable mirror attempts to remove the distortion present in the measured wavefront. In this thesis, I describe a method of real time characterisation of the most recent behaviour of the atmosphere, as observed by an AO system. Rather than reacting to the last measured distortion, this knowledge can be used in the servo-loop to reduce mirror fitting errors by predicting the next mirror shape. I describe a series of simulations which prove the validity of this novel technique.
Finally, with simulations of the AO system being built for the William Herschel Telescope, I show that the improvement in performance available through prediction allows use of an AO guide star about 0.25 magnitudes fainter when compared with the non-predictive case.
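As an illustrative toy (not the thesis's actual predictor), the benefit of prediction in an AO servo-loop can be shown in one line: a conventional loop that replays the last measured distortion always lags a steadily drifting wavefront by one step, while even a naive linear extrapolation from the two most recent measurements removes that lag entirely on a linear drift.

```python
def replay_last(history):
    """Conventional reactive servo: apply the last measured distortion."""
    return history[-1]

def predict_linear(history):
    """Naive one-step-ahead linear extrapolation (a stand-in for the
    thesis's atmosphere-characterising predictor)."""
    if len(history) < 2:
        return history[-1]
    return 2.0 * history[-1] - history[-2]

# On a linearly drifting distortion, the reactive servo's error equals the
# drift per step, while the linear predictor's error is zero.
drift = [0.1 * t for t in range(10)]
replay_error = abs(drift[5] - replay_last(drift[:5]))
predict_error = abs(drift[5] - predict_linear(drift[:5]))
```

Real atmospheric phase is stochastic rather than a clean ramp, which is why the thesis characterises the recent behaviour of the atmosphere before predicting; the toy only shows why any good prediction beats pure reaction.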
10

Duncan, Stephen Howard. "The application of parallel processing techniques in coded aperture imaging." Thesis, University of Southampton, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239709.

11

Whiteley, Mark Julian. "Developing the soft X-ray performance of CsI-coated microchannel plate detectors." Thesis, University of Leicester, 1987. http://hdl.handle.net/2381/35824.

Abstract:
The initial aim of the work presented in this thesis was to increase the soft X-ray quantum detection efficiency of a tandem-pair microchannel plate detector by the use of a CsI deposition photocathode. This aim was achieved. The coating technique and initial measurements are presented herein. After showing the use of such photocathodes, we investigated their stability and reproducibility. The effects of storage in poor vacuum, high vacuum and desiccated air are presented, as is the stability of CsI photocathodes under prolonged X-ray bombardment. One consequence of the use of CsI is that a degree of energy resolution can be conferred upon a microchannel plate detector. We present further research in this field, including measurements performed on detectors with eight micron diameter channels. An undesirable feature of microchannel plate operation is the phenomenon of gain degradation. We performed a series of lifetests on a number of microchannel plate detectors.
12

Ren, Deqing. "New techniques of multiple integral field spectroscopy." Thesis, Durham University, 2001. http://etheses.dur.ac.uk/3800/.

Abstract:
The work of this thesis is to investigate new techniques for Integral Field Spectroscopy (IFS) to make the most efficient use of modern large telescopes. Most of the work described is aimed at the FMOS for the SUBARU 8m telescope. Although this is primarily a system for Multiple Object Spectroscopy (MOS) employing single fibres, there is an option to include a multiple-IFS (MIFS) system. Much of this thesis is therefore aimed at the design and prototyping of critical systems for both the IFS and MOS modes of this instrument. The basic theory of IFU design is discussed first. Some particular problems are described and their solutions presented. The design of the MIFS system is described together with the construction and testing of a prototype deployable IFU. The assembly of the pickoff/fore-optics, microlens array and fibre bundle and their testing are described in detail. The estimated performance of the complete module is presented together with suggestions for improving the system efficiency, which is currently limited by the performance of the microlens array. The prototyping of the MIFS system is supported by an extensive programme of testing of candidate microlens arrays. Another critical aspect of the instrument is the ability to disconnect the (IFS and MOS) fibre input, which is installed on a removable prime focus top-end ring, from the spectrographs, which are mounted elsewhere on the telescope. This requires high-performance multiple fibre connectors. The designs of connectors for the MOS and IFS modes are described. Results from the testing of a prototype for the MOS mode are presented. This work is supported by a mathematical model of the coupling efficiency which takes into account optical aberrations and alignment errors. The final critical aspect of FMOS which has been investigated is the design of the spectrographs. The baseline system operates in the near-infrared (NIR) but an additional visible channel is an option.
Efficient designs for both the visible and NIR systems are presented. The design of the NIR spectrograph presents challenges in the choice of materials for the doublet and triplet lenses employed. The choice of material and the combinations in which they can be used are described. This thesis shows that all these critical aspects of FMOS have good solutions that will result in good performance of the whole instrument. For the multiple IFU system, the prototype demonstrates acceptable performance which can be made excellent by the use of a better microlens array. The multiple fibre connector prototype already indicates excellent performance. Finally, the spectrograph designs presented should result in high efficiency and good image quality.
13

Denis, Jean Marc. "Characterization of online archives of astronomical imaging vis-a-vis serendipitous asteroids, and their astrometric properties." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5186.

Abstract:
The identification of known asteroids on existing CCD pictures would allow us to obtain accurate astrometric and photometric asteroid properties. Some asteroids might have ambiguous orbital elements, thus their identification along with their exact positions on multiple picture frames could significantly improve their orbital elements. Furthermore, the possibility of identifying known asteroids on older pictures, sometimes preceding their discovery date, might allow the study of non-gravitational effects like the Yarkovsky effect. Identifying a potential Yarkovsky effect on asteroids is challenging because it is extremely weak. However, this effect accumulates with time; therefore, it is necessary to find astronomical pictures that are as old as possible. In addition, we need to collect high quality CCD pictures and use a methodology that would allow obtaining a statistically significant sample of asteroids. To accomplish this, we decided to use the online archive of the Subaru telescope at Mauna Kea, Hawaii, because it has a prime-focus camera with a very high resolution of 80 million pixels, very well suited to capture serendipitous asteroids. In addition, the Subaru online archive has pictures from the last 10 years. The methodology used in this thesis is to build a database that contains the orbital elements of all the known asteroids, allowing us to write a program that calculates the approximate position of all the asteroids at the date and time of each CCD picture we collect. To obtain a more precise position, the program also interfaces with the JPL NASA Horizons on-line computation service. Every time an asteroid is found on a picture, Horizons sends its theoretical location back to the program. A later visual identification of this asteroid at this theoretical location on the picture triggers its input into our sample for further study. This method allowed us to visually confirm 508 distinct asteroids on 692 frames with an average diameter of 3.6 km.
Finally, we use the theory (given in appendix A) to calculate the theoretical drift of these asteroids that we compare with the one we measured on the CCD pictures.
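As an illustrative aside (a hypothetical helper, not code from the thesis), the core geometric test in such a pipeline - deciding whether a predicted asteroid position lands on an archived CCD frame - can be sketched under a small-field approximation, with the RA offset scaled by cos(dec) and no wrap-around at RA = 0 handled:

```python
import math

def on_frame(ra, dec, ra0, dec0, half_width):
    """True if a predicted position (ra, dec), in degrees, falls within a
    square frame footprint centred at (ra0, dec0) with the given half-width
    in degrees. Small-field approximation; ignores projection distortion."""
    dra = (ra - ra0) * math.cos(math.radians(dec0))
    return abs(dra) <= half_width and abs(dec - dec0) <= half_width
```

A full pipeline would apply this coarse cut with positions computed from the orbital-element database, then refine the survivors with precise Horizons ephemerides before visual confirmation.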
Thesis (M.S.)--University of Central Florida, 2012. Includes bibliographical references (p. 192-195).
14

Norris, Barnaby Richard Metford. "Secrets in Stellar Halos: Imaging Against the Glare." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14304.

Abstract:
The imaging of astronomical objects at extremely high angular resolution is an invaluable tool for myriad areas of astronomy, including the study of protoplanetary disks, mass-loss of evolved stars and AGN. But this poses a significant technical challenge as Earth’s turbulent atmosphere massively degrades the resolution achievable by large telescopes. In this thesis the development and implementation of two novel techniques to overcome this seeing limit are presented, building upon the technique of astronomical interferometry. First, differential polarimetry is combined with aperture-masking interferometry to produce diffraction-limited measurements of polarised structure (such as protoplanetary disks and evolved-star mass-loss shells) at precisions well beyond conventional aperture-masking. Observations using this technique are presented, and the development of an entirely new, purpose-built instrument - VAMPIRES - is described (now at the 8 m Subaru telescope). First on-sky tests and science observations are also presented. The second technique described replaces the traditional aperture mask with photonic pupil-remapping technologies. In this instrument - Dragonfly - optical waveguides inscribed in three dimensions within a photonic chip (using laser direct-write) are used to re-map an arbitrary set of telescope sub-apertures into a one-dimensional output, yielding several advantages. The requirement for a non-redundant input is removed, potentially allowing the entire telescope pupil to be utilised, vastly increasing throughput. The waveguides are single-moded, acting as a spatial filter and greatly improving closure phase precision. This output configuration is then ideal for a photonic beam combiner chip or direct fringe production, and spectral dispersion. Various technical challenges and their characterisation and mitigation are also presented. 
Together, these technologies stand to play a key role in high angular resolution, high contrast astronomical imaging.
APA, Harvard, Vancouver, ISO, and other styles
15

Rest, Armin. "Calibration of a CCD Camera and Correction of its Images." PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/5186.

Full text
Abstract:
Charge-Coupled-Device (CCD) cameras have opened a new world in astronomy and other related sciences with their high quantum efficiency, stability, linearity, and easy handling. Nevertheless, there is still noise in raw CCD images, and even more noise is added through the image calibration process. This makes it essential to know exactly how the calibration process impacts the noise level in the image. The properties and characteristics of the calibration frames were explored. This was done for bias frames, dark frames, and flat-field frames at different temperatures and for different exposure times. At first, it seemed advantageous to scale down a dark frame from a high temperature to the temperature at which the image is taken. However, the different pixel populations have different doubling temperatures. Although the main population could be scaled down accurately, the hot pixel populations could not. A global doubling temperature cannot be used to scale down dark frames taken at one temperature to calibrate an image taken at another temperature. It was discovered that the dark count increased if the chip was exposed to light prior to measurements of the dark count. This increase, denoted as dark offset, depends on the duration and intensity of the chip's prior exposure to light. The dark offset decays with a characteristic time constant of 50 seconds. The cause might be storage effects within the chip. It was found that the standard procedures for image calibration were not always the best and fastest way to process an image for a high signal-to-noise ratio. This was shown for both master dark frames and master flat-field frames. As a real-world example, possible night sessions using master-frame calibration are described. Three sessions are discussed in detail concerning the trade-offs in imaging time, memory requirements, calibration time, and noise level. 
An efficient method for obtaining a noise map of an image was developed, i.e., a method for determining how accurate single pixel values are, by approximating the noise in several different cases.
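The dark-frame temperature scaling discussed in this abstract can be illustrated with a short sketch. This is a hypothetical, minimal example (the function name, the 6 °C doubling temperature, and the array values are assumptions of this illustration, not details taken from the thesis): dark current roughly doubles for every doubling temperature of chip warming, which is why a single global doubling temperature only works for one pixel population.

```python
import numpy as np

# Hypothetical sketch of dark-frame temperature scaling: dark current roughly
# doubles for every "doubling temperature" t_double of chip warming, so a
# master dark taken at t_dark can be scaled to the image temperature t_image.
# The thesis shows this fails globally, since hot pixels have a different
# doubling temperature than the main pixel population.
def scale_dark(dark, t_dark, t_image, t_double=6.0):
    """Scale dark counts by 2 ** ((t_image - t_dark) / t_double)."""
    return dark * 2.0 ** ((t_image - t_dark) / t_double)

dark_20c = np.full((4, 4), 100.0)            # master dark measured at 20 degC
dark_14c = scale_dark(dark_20c, 20.0, 14.0)  # one doubling temperature cooler
```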
APA, Harvard, Vancouver, ISO, and other styles
16

Laag, Edward Aric. "Observations of starburst galaxies science and supporting technology /." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957320791&SrchMode=2&sid=2&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268854875&clientId=48051.

Full text
Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Available via ProQuest Digital Dissertations. Title from first page of PDF file (viewed March 16, 2010). Includes bibliographical references. Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
17

Spencer, Locke Dean, and University of Lethbridge Faculty of Arts and Science. "Spectral characterization of the Herschel SPIRE photometer." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2005, 2005. http://hdl.handle.net/10133/291.

Full text
Abstract:
The European Space Agency's Herschel Space Observatory comprises three cryogenically cooled instruments commissioned to explore the far-infrared/submillimetre universe. The Spectral and Photometric Imaging REceiver (SPIRE) is one of Herschel's instruments and consists of a three-band imaging photometer and a two-band imaging spectrometer. Canada is involved in the SPIRE project through the provision of instrument development hardware and software, mission flight software, and support personnel. This thesis discusses Fourier transform spectroscopy (FTS) and FTS data processing. A detailed discussion is included on FTS phase correction, with results presented from the optimization of an enhanced Forman phase correction routine developed for this work. This thesis discusses the design, verification, and use of the hardware and software provided by Dr. Naylor's group as it relates to SPIRE verification testing. Results of the photometer characterization are presented. The current status of SPIRE and its future schedule are also discussed.
xvii, 239 leaves : ill. (some col.) ; 28 cm.
APA, Harvard, Vancouver, ISO, and other styles
18

GUASTAVINO, SABRINA. "Learning and inverse problems: from theory to solar physics applications." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/998315.

Full text
Abstract:
The problem of approximating a function from a set of discrete measurements has been extensively studied since the seventies. Our theoretical analysis proposes a formalization of the function approximation problem which allows dealing with inverse problems and supervised kernel learning as two sides of the same coin. The proposed formalization takes into account arbitrary noisy data (deterministically or statistically defined) and arbitrary loss functions (possibly seen as a log-likelihood), handling both direct and indirect measurements. The core idea of this part relies on the analogy between statistical learning and inverse problems. One of the main pieces of evidence of the connection between these two areas is that regularization methods, usually developed for ill-posed inverse problems, can be used for solving learning problems. Furthermore, the spectral regularization convergence rate analyses provided in these two areas share the same source conditions but are carried out with either an increasing number of samples in learning theory or a decreasing noise level in inverse problems. More generally, regularization via sparsity-enhancing methods is widely used in both areas, and it is possible to apply well-known $\ell_1$-penalized methods for solving both learning and inverse problems. In the first part of the thesis, we analyze this connection at three levels: (1) at an infinite-dimensional level, we define an abstract function approximation problem from which the two problems can be derived; (2) at a discrete level, we provide a unified formulation according to a suitable definition of sampling; and (3) at a convergence-rates level, we provide a comparison between the convergence rates given in the two areas, by quantifying the relation between the noise level and the number of samples. In the second part of the thesis, we focus on a specific class of problems where measurements are distributed according to a Poisson law. 
We provide a data-driven, asymptotically unbiased, and globally quadratic approximation of the Kullback-Leibler divergence, and we propose Lasso-type methods for solving sparse Poisson regression problems: PRiL, a Poisson Reweighted Lasso, and an adaptive version of this method, APRiL, an Adaptive Poisson Reweighted Lasso, proving consistency properties in estimation and variable selection, respectively. Finally, we consider two problems in solar physics: (1) forecasting solar flares (a learning application) and (2) desaturating solar flare images (an inverse problem application). The first application concerns the prediction of solar storms using images of the magnetic field on the Sun, in particular physics-based features extracted from active regions in data provided by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). The second application concerns the reconstruction of Extreme Ultra-Violet (EUV) solar flare images recorded by a second instrument on board SDO, the Atmospheric Imaging Assembly (AIA). We propose a novel sparsity-enhancing method, SE-DESAT, to reconstruct images affected by saturation and diffraction without using any a priori estimate of the background solar activity.
APA, Harvard, Vancouver, ISO, and other styles
19

Jasinghege, Don Prasanna Deshapriya. "Spectrophotometric properties of the nucleus of the comet 67P/Churyumov-Gerasimenko observed by the ROSETTA spacecraft." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC007/document.

Full text
Abstract:
This thesis was carried out within the framework of the Rosetta space mission and concerns the spectrophotometric properties of comet 67P/Churyumov-Gerasimenko, studied with the OSIRIS instrument. This instrument consists of two cameras for observing the nucleus and the coma of the comet. They acquire images with filters operating in the range from the near-UV to the near-IR. First, I analyzed the spectrophotometric curves of the bright spots that appeared on the nucleus of the comet. A comparative study of these spots, using data from the VIRTIS imaging spectrometer, established that they are linked to H2O ice. Second, I undertook a spectrophotometric study of the Khonsu region, which revealed seasonal variations in the spectral slope of different terrains. I then extended my analysis of the spots to the entire nucleus of the comet. I detected more than 50 bright spots due to the presence of H2O ice and produced a map of their locations on the nucleus, in order to study their distribution and their evolution over time in more detail. This allowed me to identify four types of spots, grouped according to their morphology, and to establish that they are due to different sources of cometary activity
This thesis concerns the spectrophotometric properties of comet 67P/Churyumov-Gerasimenko, studied using the OSIRIS instrument of the Rosetta space mission. OSIRIS is composed of two scientific cameras that observe the nucleus and the coma of the comet, acquiring images with multiple filters spanning the near-UV to near-IR wavelength range. These images were used to study the spectrophotometric curves of the exposed bright features that appeared on the surface of the cometary nucleus. A comparative study, carried out with the VIRTIS imaging spectrometer aboard Rosetta, demonstrated that these exposures are related to H2O ice, using its absorption band located at 2 microns. The thesis further details a spectrophotometric study of the Khonsu region in the southern latitudes of the comet, where the seasonal variation of the spectral slope of different types of terrain is explored. Finally, the results of an extended survey of exposed bright features are presented: more than 50 individual features are described under four morphologies, along with an albedo calculation, suggesting that different activity sources are responsible for their appearance on the nucleus
APA, Harvard, Vancouver, ISO, and other styles
20

Lima, Melina Silva de. "Manipulação de imagens astronômicas com o uso Aladin para o ensino de astronomia." Universidade Estadual de Feira de Santana, 2015. http://localhost:8080/tede/handle/tede/297.

Full text
Abstract:
Submitted by Luis Ricardo Andrade da Silva (lrasilva@uefs.br) on 2016-02-04T21:17:06Z No. of bitstreams: 1 Dissertacao Melina Silva de Lima.pdf: 10052548 bytes, checksum: 73b20a87c9bf7f70b05b1e58665c5b7a (MD5)
Made available in DSpace on 2016-02-04T21:17:07Z (GMT). No. of bitstreams: 1 Dissertacao Melina Silva de Lima.pdf: 10052548 bytes, checksum: 73b20a87c9bf7f70b05b1e58665c5b7a (MD5) Previous issue date: 2015-08-13
We used the Aladin software, a sky atlas for visualizing and manipulating astronomical images developed by the CDS of Strasbourg, to develop teaching activities involving astronomical concepts such as distance, brightness, image manipulation, and colors, as well as to explain the nature of different objects by showing their images in different filters, among other topics. In total, we developed four activities that were applied to students of the 6th year of elementary school, high school students, and undergraduates from the Engineering and Pedagogy courses. All activities and the results of their evaluation with students are discussed and analyzed in detail; a teacher guide is also provided. Our results show that in all four activities the students achieved significant learning, supporting the use of this methodology. We also developed two memory games, based on the Java platform, using images of astronomical objects. The activities are in Portuguese, but they can easily be adapted to any other language. This research also included the translation of Aladin into Portuguese.
This work presents the analysis of a study of classroom applications of facilitation strategies and techniques that use the Aladin software (http://aladin.u-strasbg.fr/) in the formation of Astronomy concepts and the possibility of cognitive development through learning, consolidating that learning through the use of computers. Concepts of distance, brightness, color, the existence of different types of astronomical objects, and the manipulation of astronomical images and data, among other aspects, were treated in the classroom and formed part of this work. The research was applied at several stages of the school learning cycle, specifically: the 6th year of elementary school, the 2nd year of high school, undergraduate Engineering students and, finally, a class of Pedagogy students, almost all of them practicing teachers and therefore disseminators of the concepts passed on to them. All activities used the Aladin software, and one used a Virtual Observatory application called VO-Stat. As a product, didactic material was developed with the content of the activities, as well as a guide for teachers to apply the elementary school activity and a virtual memory game about Astronomy. The translation of Aladin into Portuguese was also carried out in this work.
APA, Harvard, Vancouver, ISO, and other styles
21

Caro, Arias Fernando Ignacio. "Analysis and development of multi-frame super-resolution algorithms for astronomical images." Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/138902.

Full text
Abstract:
Master of Science, specialization in Computing
Civil Engineer in Computing
This thesis addresses the problem of analyzing the performance of four multi-frame super-resolution algorithms when they are used to recover high-resolution astronomical images. Multi-frame super-resolution is the name given to processes that use a set of low-resolution images of the same scene to obtain a new image with higher spatial resolution, as well as less blur and noise, than any of the images used as input. These algorithms work by minimizing a cost function, in which a prior is included to regularize the reconstruction process, using a gradient-based optimization procedure. Each of the four algorithms developed corresponds to one of the four possible combinations of two priors (Laplacian and gradient) for the cost function and two mechanisms for computing its gradient (the analytical expression of the gradient and the Zomet approximation). The main objective of this research is to study how the performance of these algorithms behaves as a function of the Signal-to-Noise Ratio (SNR) of the low-resolution images used as input to the reconstruction process. Achieving this goal requires simulations, since sets of low-resolution images characterized by different SNR values are needed to test the four algorithms. The simulated images were obtained using two simulation tools: one based on replicating the process by which an image is acquired by a device, known as the Image Observation Model (IOM), and another based on a Monte Carlo approach, called PhoSim. 
Considering a range of seven SNR values, sampled at regular intervals between 1 and 100 on a logarithmic scale, and using a group of 100 high-resolution templates, 700 sets were generated, each composed of 10 simulated low-resolution images, using the two simulation tools mentioned above. Each of the four algorithms was then used to reconstruct a high-resolution image using each of these sets as input. The experiment was carried out in two instances: first using affine registration to align the low-resolution images in each input set, and then using quadratic registration for the same task. After these experiments, the performance of the algorithms was evaluated using the Peak Signal-to-Noise Ratio (PSNR) and the reduced χ² as metrics. According to the results, for each algorithm the PSNR increases as the SNR grows, while the reduced χ² remains relatively constant regardless of the SNR. The PSNR results suggest that for small SNR values the Zomet approximation and the Laplacian prior are the best option, whereas for high SNR values the analytical expression of the gradient together with the gradient prior are the best option, although in this case by a narrow margin. The performance drop observed when the registration and blur parameters are estimated is larger when PhoSim is used than when the IOM is used. Using different registration procedures did not lead to significant variations in the performance of the four multi-frame super-resolution algorithms.
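As a point of reference for the evaluation metric named in this abstract, a minimal PSNR sketch follows. The choice of peak value and this exact formulation are assumptions of the illustration, not details taken from the thesis.

```python
import numpy as np

# Minimal sketch of the Peak Signal-to-Noise Ratio (PSNR), the quality
# metric used above to score super-resolved images against a reference.
def psnr(reference, estimate, peak=None):
    """PSNR in decibels; higher means the estimate is closer to the reference."""
    if peak is None:
        peak = float(reference.max())
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[0.0, 1.0], [1.0, 0.0]])
est = np.array([[0.0, 0.5], [1.0, 0.0]])
score = psnr(ref, est)  # finite dB value; identical images would diverge
```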
APA, Harvard, Vancouver, ISO, and other styles
22

Cabral, Maria Leonor Fonseca. ""A Imagem no Ensino de Astronomia" : Exploração didáctica e pedagógica de imagens no âmbito da Astronomia." Master's thesis, Universidade do Porto. Reitoria, 2001. http://hdl.handle.net/10216/10047.

Full text
Abstract:
Master's dissertation in Astronomy Teaching presented to the Faculty of Sciences of the University of Porto
Compressing the age of the Universe into a single year, the first records of the sky would have been made at around eleven o'clock at night on 31 December. Humankind has always been impressed and guided by the stars and celestial events, successively recording, on stone, bone, papyrus, paper, photographic emulsion and, finally, in pixels, the celestial phenomena observed during its ephemeral existence. In Antiquity, recording the regularity of the motion of the stars allowed subsequent generations to predict or plan the future. Today, more and more, recording the sky leads to knowledge of the past and, with the advent of instruments that amplify and detect the full spectrum of radiation, we can even glimpse the origin of our Universe and formulate hypotheses for its evolution. In this context, working with images has become a process that allows us to record events that the flow of time prevents us from retaining. Until very recently, seeing the Universe was only possible for professionals at large telescopes. Technological development has made possible the construction of small telescopes, personal computers and CCD cameras, which make it relatively easy to obtain images of celestial objects. This resource has thus become a privileged way of experiencing Astronomy, through the exploration of astronomical images, in the multidisciplinary context of primary and secondary school curricula.
APA, Harvard, Vancouver, ISO, and other styles
23

Cabral, Maria Leonor Fonseca. ""A Imagem no Ensino de Astronomia" : Exploração didáctica e pedagógica de imagens no âmbito da Astronomia." Dissertação, Universidade do Porto. Reitoria, 2001. http://hdl.handle.net/10216/10047.

Full text
Abstract:
Master's dissertation in Astronomy Teaching presented to the Faculty of Sciences of the University of Porto
Compressing the age of the Universe into a single year, the first records of the sky would have been made at around eleven o'clock at night on 31 December. Humankind has always been impressed and guided by the stars and celestial events, successively recording, on stone, bone, papyrus, paper, photographic emulsion and, finally, in pixels, the celestial phenomena observed during its ephemeral existence. In Antiquity, recording the regularity of the motion of the stars allowed subsequent generations to predict or plan the future. Today, more and more, recording the sky leads to knowledge of the past and, with the advent of instruments that amplify and detect the full spectrum of radiation, we can even glimpse the origin of our Universe and formulate hypotheses for its evolution. In this context, working with images has become a process that allows us to record events that the flow of time prevents us from retaining. Until very recently, seeing the Universe was only possible for professionals at large telescopes. Technological development has made possible the construction of small telescopes, personal computers and CCD cameras, which make it relatively easy to obtain images of celestial objects. This resource has thus become a privileged way of experiencing Astronomy, through the exploration of astronomical images, in the multidisciplinary context of primary and secondary school curricula.
APA, Harvard, Vancouver, ISO, and other styles
24

Carrasco, Davis Rodrigo Antonio. "Image sequence simulation and deep learning for astronomical object classification." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/170955.

Full text
Abstract:
Thesis for the degree of Master of Engineering Sciences, specialization in Electrical Engineering
Dissertation for the degree of Civil Electrical Engineer
In this thesis, a new sequential classification model for astronomical objects is proposed, based on the recurrent convolutional neural network (RCNN) model, which uses sequences of images as inputs. This approach avoids the computation of light curves or difference images. This is the first time image sequences have been used directly for the classification of variable objects in astronomy. Another contribution of this work is the image simulation process. Synthetic image sequences were simulated taking into account instrumental and observing conditions, obtaining a series of realistic, irregularly sampled, variable-noise movies for each astronomical object. The simulated dataset is used to train the RCNN classifier. This approach makes it possible to generate datasets to train and test the RCNN model for different astronomical surveys and telescopes. Moreover, using a simulated dataset is faster and more adaptable to different surveys and classification tasks. The goal is to create a simulated dataset whose distribution is close enough to that of the real dataset that fine-tuning the proposed model can match the distributions and solve the domain adaptation problem between the simulated and real datasets. To test the RCNN classifier trained on the synthetic dataset, real data from the High Cadence Transient Survey (HiTS) were used, obtaining an average recall of 85% over 5 classes, improved to 94% after fine-tuning for 1000 iterations with 10 real samples per class. The results of the proposed RCNN model were compared with those of a random forest classifier trained on light curves. 
The proposed RCNN with fine-tuning performs similarly on the HiTS dataset to the light-curve random forest classifier, which was trained on a training set augmented with 100 copies of 10 real samples per class. The RCNN approach has several advantages in an astronomical alert-stream classification scenario, such as reduced data preprocessing, faster evaluation, and easier performance improvement using a few real data samples. These results encourage the use of the proposed method in astronomical alert brokers that will process alert streams generated by new telescopes such as the Large Synoptic Survey Telescope (LSST). Ideas for a multi-band classifier and a better image simulator are proposed based on the difficulties encountered in this work.
APA, Harvard, Vancouver, ISO, and other styles
25

Biancalani, Enrico. "Towards a novel concept of imaging spectrograph." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14056/.

Full text
Abstract:
The aim of this thesis is to introduce a new concept of astrophysical photodetector, capable of immediately analyzing the spectral composition of two-dimensional images. First of all, its innovative technology, based on the superconducting kinetic inductance effect, is compared with the standard of semiconductor charge-coupled devices. The latter rely on the photoelectric effect and can, by themselves, only sense the intensity of light over an exposure period, without dispersive elements or monochromatic filters placed in front of them. In this context, I report the outcome of my collaboration with the University of Oxford, where an integral field spectrograph called KIDSpec is under development, operating in the electromagnetic band from the ultraviolet to the near-infrared. It will be the first instrument of its kind to be built outside the United States of America. My task was to assemble an optical spectrometer intended for a feasibility study of this apparatus. The shortcomings of the optical system are then briefly discussed in view of its implementation, followed by the optical investigations through which I examined the spectrometer's performance, in terms of its diffraction response at various output angles and input wavelengths. In addition, an explanation is given of the circular patterns caused by the optical fiber employed. Finally, some possible astrophysical applications of the new type of detector are examined, with particular attention to the emerging frontier of direct exoplanet characterization.
APA, Harvard, Vancouver, ISO, and other styles
26

Cabrera, Vives Guillermo. "Extraction and classification of objects from astronomical images in the presence of labeling bias." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/133321.

Full text
Abstract:
Doctor of Science, specialization in Computing
Giga-, tera-, and petabytes of astronomical data are starting to flow from the new generation of telescopes. Survey telescopes scan a wide area of the sky in order to map the Galaxy and our universe and to detect variable sources such as exploding stars (supernovae) and asteroids. As in other fields of observational science, all we can do is observe these sources through the light they emit and that we capture with our cameras. Because of the great distance at which these objects lie, even though we can obtain an estimated characterization of these sources, it is impossible to know their true properties. In this thesis, we propose a method for extracting so-called Sérsic profiles from astronomical sources and its application to the morphological classification of objects. The Sérsic profile is a radial parametric model associated with galaxy morphology. The novelty of our approach is that it converts the 2D image into a 1D radial profile using elliptical level curves, so that even though the Sérsic parameter space is the same, the complexity is reduced tenfold compared to the 2D model fits in the literature. We test our method on simulations and obtain an error of between 40% and 50% in the Sérsic parameters, while obtaining a reduced chi-square of 1.01. These results are similar to those obtained by other authors, suggesting that the Sérsic model is degenerate. We also apply our method to SDSS images and show that we are able to extract the smooth component of galaxy profiles but, as expected, fail to recover their finer structure. 
We also show that labels created by humans are biased in terms of observable parameters: when observing small, faint, or distant galaxies, the fine structure of these objects is lost, producing a systematic labeling bias toward smoother objects. We create a metric to assess the level of bias in label catalogs and show that even labels obtained from experts exhibit some bias, while the bias is smaller for labels obtained from supervised learning models. Although this bias has been noted in the literature, to the best of our knowledge this is the first time it has been quantified. We propose two methods for de-biasing labels. The first method is based on selecting an unbiased subsample of the data to train a classification model, and the second simultaneously fits a bias model and a classification model to the data. We show that both methods achieve the lowest bias compared with other datasets and processing procedures.
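The Sérsic model referred to in this abstract has a standard closed form; a brief sketch follows, using the common asymptotic approximation b_n ≈ 2n - 1/3 (an assumption of this illustration; the thesis does not specify its choice here).

```python
import numpy as np

# Sketch of the Sersic radial profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)),
# a parametric model of how a galaxy's surface brightness falls off with radius.
# b_n is chosen so that the effective radius r_e encloses half the light;
# 2n - 1/3 is a widely used approximation for it.
def sersic(r, i_e, r_e, n):
    b_n = 2.0 * n - 1.0 / 3.0
    return i_e * np.exp(-b_n * ((np.asarray(r, dtype=float) / r_e) ** (1.0 / n) - 1.0))

radii = np.linspace(0.1, 5.0, 50)
profile = sersic(radii, i_e=5.0, r_e=1.0, n=4.0)  # de Vaucouleurs-like profile
```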
APA, Harvard, Vancouver, ISO, and other styles
27

Concha, Ramírez Francisca Andrea. "FADRA: A CPU-GPU framework for astronomical data reduction and Analysis." Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/140769.

Full text
Abstract:
Master of Science, specialization in Computing
This thesis lays the foundations of FADRA: a Framework for Astronomical Data Reduction and Analysis. The FADRA framework was designed to be efficient, simple to use, modular, expandable, and open source. Astronomy today is inseparable from computing, yet some of the most widely used software packages were developed three decades ago and are not designed to face current big-data paradigms. The world of astronomical software must evolve not only toward practices that understand and embrace the big-data era, but also toward a focus on collaborative community work. The work carried out consisted of the design and implementation of the basic algorithms for astronomical data analysis, starting the development of the framework. This included the implementation of data structures that are efficient when working with large numbers of images, the implementation of algorithms for the calibration (reduction) of astronomical images, and the design and development of algorithms for computing photometry and obtaining light curves. Both the reduction and the light-curve algorithms were implemented in CPU and GPU versions. For the GPU implementations, algorithms were designed to minimize the amount of data to be processed, in order to reduce data transfer between CPU and GPU, a slow process that often eclipses the execution-time gains obtained through parallelization. Although FADRA was designed with the idea of using its algorithms within scripts, a wrapper module for interacting through graphical interfaces was also implemented. One of the main goals of this thesis was the validation of the results obtained with FADRA. To this end, reduction results and light curves were compared with results from AstroPy, a Python package with various utilities for astronomers. 
Los experimentos se realizaron sobre seis datasets de imágenes astronómicas reales. En el caso de reducción de imágenes astronómicas, el Normalized Root Mean Squared Error (NRMSE) fue utilizado como métrica de similaridad entre las imágenes. Para las curvas de luz, se probó que las formas de las curvas eran iguales a través de la determinación de offsets constantes entre los valores numéricos de cada uno de los puntos pertenecientes a las distintas curvas. En términos de la validez de los resultados, tanto la reducción como la obtención de curvas de luz, en sus implementaciones CPU y GPU, generaron resultados correctos al ser comparados con los de AstroPy, lo que significa que los desarrollos y aproximaciones diseñados para FADRA otorgan resultados que pueden ser utilizados con seguridad para el análisis científico de imágenes astronómicas. En términos de tiempos de ejecución, la naturaleza intensiva en uso de datos propia del proceso de reducción hace que la versión GPU sea incluso más lenta que la versión CPU. Sin embargo, en el caso de la obtención de curvas de luz, el algoritmo GPU presenta una disminución importante en tiempo de ejecución comparado con su contraparte en CPU.
Este trabajo ha sido parcialmente financiado por Proyecto Fondecyt 1120299
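The NRMSE comparison used in the abstract above to validate FADRA against AstroPy can be sketched in a few lines. This is a minimal illustration with synthetic arrays: the normalization by the reference image's dynamic range is one common NRMSE convention (the thesis may use another), and the array names are hypothetical, not FADRA's actual API.

```python
import numpy as np

def nrmse(reference, test):
    """Normalized root-mean-squared error between two images.

    Normalized by the dynamic range of the reference image, one common
    NRMSE convention (assumption: the thesis may normalize differently).
    """
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    dynamic_range = reference.max() - reference.min()
    return rmse / dynamic_range

# Toy stand-ins for a FADRA-reduced and an AstroPy-reduced frame.
rng = np.random.default_rng(0)
astropy_frame = rng.uniform(100.0, 200.0, size=(64, 64))
fadra_frame = astropy_frame + rng.normal(0.0, 1e-6, size=(64, 64))

# Near-identical reductions give an NRMSE close to zero.
print(nrmse(astropy_frame, fadra_frame))
```

A small NRMSE across all test frames is what supports the claim that the two pipelines agree.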
APA, Harvard, Vancouver, ISO, and other styles
28

Mallmith, Décio de Moura. "Pré-seleção de sítios astronômicos por imagens de satélites meteorológicos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2004. http://hdl.handle.net/10183/5097.

Full text
Abstract:
Astronomy, as the origin and perhaps the end of all sciences, has always been devoted to observing celestial bodies and to seeking new techniques and instruments that would extend the limits and quality of these observations. This pursuit resulted in the development of the optical telescope, in 1608, by Galileo Galilei. Over the years, this instrument underwent countless improvements, reaching our days as a precise device of sophisticated technology. Although the observational frontiers have advanced beyond the visible spectrum, the optical telescope remains indispensable to astronomy. Brazil, although lacking ideal meteorological conditions for astronomical observations, has optical observatories of reasonable quality, such as the Laboratório Nacional de Astrofísica (LNA) in Minas Gerais, among others. In its extreme south, however, the country lacks good optical instruments, the only notable one being the UFRGS telescope installed at the Morro Santana Observatory in Porto Alegre. The increase of artificial lighting in the Porto Alegre metropolitan region, and consequently of night-sky brightness, has made the operation of the Observatory practically unfeasible. Thus, the selection of new sites for the future installation of optical telescopes in the state is essential. Added to this is the fact that the climatic cycle of this region differs from that of the other regions of the country, a relevant point given that one of the determining factors in choosing new sites for optical telescopes is the cloud-cover rate. In situ cloud-cover surveys are long and costly. As an alternative, this work carried out a statistical study of the state, based on the assembly of a database of 472 nighttime images from the GOES and MeteoSat satellites.
The combination of the images, through superposition and the calculation of mean count (brightness) values at the pixel scale, provided pre-selection, or indicative, information on locations with high rates of clear nights. The periods 1994-1995 and 1998-1999 were covered, focusing on the areas around Bom Jesus, Vacaria, and Caçapava do Sul. As a control, the area around the LNA in Minas Gerais was also monitored. Beyond the methodological demonstration, the orbital data indicated that, on average over these years, these areas are suitable for the installation of astronomical observatories, combining the factors of cloud cover and altitude.
APA, Harvard, Vancouver, ISO, and other styles
29

Mazzuca, Junior Juarez. "Localização de sítios astronômicos através de imagens dos satélites NOAA." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1999. http://hdl.handle.net/10183/5172.

Full text
Abstract:
This work addresses the problem of locating suitable sites for the operation of astronomical observatories. The constant growth of cities and the urbanization of large neighboring areas is accompanied by a proportional increase in artificial lighting, which results in higher levels of night-sky brightness. This has extreme consequences for optical astronomy, which demands a dark sky, and justifies searches for new sites for the installation of future optical telescopes. One of the most important criteria for astronomical sites, besides a dark sky, is a high proportion of clear, cloudless sky. Astronomical site surveys are studies conducted over periods of years. It is suggested that meteorological satellite images can be useful for the preliminary selection of such sites. The methodology employed is based on geometric corrections of orbital image data from the NOAA12 and NOAA14 spacecraft and on the summation of images obtained on different dates. The images were collected by the receiving station installed at the Centro Estadual de Pesquisas em Sensoriamento Remoto e Meteorologia of UFRGS. By summing, pixel by pixel, images acquired on different dates, after geometric corrections, temporal means of cloud cover are obtained, which is the modern equivalent of building averages from a time series of meteorological data from ground stations. We demonstrate that this methodology is feasible for this type of data, originating from lower-altitude orbits with better resolution, compared with images from high-orbit, geostationary satellites with lower resolution; the latter are much easier to handle, since the observation point tends to be stable in time.
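The pixel-by-pixel summation described above is, in essence, a temporal mean over co-registered frames. A minimal sketch with synthetic arrays standing in for geometrically corrected satellite images (illustrative only, not the thesis code):

```python
import numpy as np

def temporal_mean(frames):
    """Mean brightness per pixel over a stack of co-registered images.

    `frames` is an iterable of 2-D arrays assumed to be already
    geometrically corrected onto a common grid (as in the thesis);
    pixels that stay bright across many nights indicate persistent
    cloud cover, while dark pixels indicate frequently clear sky.
    """
    stack = np.stack(list(frames), axis=0)
    return stack.mean(axis=0)

# Synthetic example: 10 nights over a 4x4 scene where one pixel is
# cloudy every night while the rest of the sky is mostly clear.
rng = np.random.default_rng(1)
nights = []
for _ in range(10):
    frame = rng.uniform(0.0, 0.2, size=(4, 4))  # mostly dark (clear) sky
    frame[1, 2] = 0.9                           # persistently cloudy pixel
    nights.append(frame)

mean_map = temporal_mean(nights)
print(mean_map[1, 2] > mean_map[0, 0])  # the cloudy pixel stands out
```

The real data additionally require the geometric correction step before stacking, so that each pixel indexes the same ground location on every date.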
APA, Harvard, Vancouver, ISO, and other styles
30

Martins, Neto Luzita Erichsen. "Alfabetização visual e científica: aproximação a partir da leitura de imagens de temas da astronomia." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1965.

Full text
Abstract:
Accompanied by: Scientific and visual production through image-reading workshops on astronomy themes (didactic material)
The training of students with the competence to cope with accelerated advances in the technological and scientific context is a condition demanded by today's society. But how can the country's current needs be met if we still encounter knowledge fragmented into disciplines, unconnected to the student's reality? How can we foster transversal competences if, on the other hand, we still face difficulties in teaching science in the Brazilian scenario? The present study aimed to propose an approach to visual and scientific literacy that would allow the reading and analysis of astronomical representations in interdisciplinary spaces favoring the Visual Arts and Physics areas. The research is justified because it presents a proposal for a kind of teaching that not only provides ready-made knowledge, but prioritizes a scientific and visual literacy that allows connections to be made between different areas of knowledge and that, taken together, can clarify the existing relations between Science, Art, technology, and society. To answer these questions, it was hypothesized that an understanding of image reading applied to students' daily lives would foster the competence to solve everyday problems. Reading and analyzing astronomical representations in interdisciplinary spaces offers an innovation in the way of interpreting, a new language that can go beyond common sense. New pedagogical practices are urgently needed today: practices that broaden knowledge and enable our students to face new technologies and to understand science, so that they can respond to the new challenges of the contemporary world.
APA, Harvard, Vancouver, ISO, and other styles
31

Morgan, John <1981>. "Very Long Baseline Interferometry in Italy Wide-field VLBI imaging and astrometry and prospects for an Italian VLBI network including the Sardinia Radio Telescope." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amsdottorato.unibo.it/2830/2/thesis.pdf.

Full text
Abstract:
In this thesis the use of wide-field imaging techniques and VLBI observations with a limited number of antennas is explored. I present techniques to efficiently and accurately image extremely large UV datasets. Very large VLBI datasets must be reduced into multiple, smaller datasets if today's imaging algorithms are to be used to image them. I present a procedure for accurately shifting the phase centre of a visibility dataset. This procedure has been thoroughly tested and found to be almost two orders of magnitude more accurate than existing techniques. Errors have been found at the level of one part in 1.1 million; these are unlikely to be measurable except in the very largest UV datasets. Results of a four-station VLBI observation of a field containing multiple sources are presented. A 13-gigapixel image was constructed to search for sources across the entire primary beam of the array by generating over 700 smaller UV datasets. The source 1320+299A was detected and its astrometric position with respect to the calibrator J1329+3154 is presented. Various techniques for phase calibration and imaging across this field are explored, including using the detected source as an in-beam calibrator and peeling of distant confusing sources from VLBI visibility datasets. A range of issues pertaining to wide-field VLBI has been explored, including: parameterising the wide-field performance of VLBI arrays; estimating the sensitivity across the primary beam for both homogeneous and heterogeneous arrays; applying techniques such as mosaicing and primary beam correction to VLBI observations; quantifying the effects of time-average and bandwidth smearing; and calibration and imaging of wide-field VLBI datasets. The performance of a computer cluster at the Istituto di Radioastronomia in Bologna has been characterised with regard to its ability to correlate using the DiFX software correlator.
Using existing software it was possible to characterise the network speed particularly for MPI applications. The capabilities of the DiFX software correlator, running on this cluster, were measured for a range of observation parameters and were shown to be commensurate with the generic performance parameters measured. The feasibility of an Italian VLBI array has been explored, with discussion of the infrastructure required, the performance of such an array, possible collaborations, and science which could be achieved. Results from a 22 GHz calibrator survey are also presented. 21 out of 33 sources were detected on a single baseline between two Italian antennas (Medicina to Noto). The results and discussions presented in this thesis suggest that wide-field VLBI is a technique whose time has finally come. Prospects for exciting new science are discussed in the final chapter.
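In the narrow-field limit, the phase-centre shift at the heart of the procedure described above reduces to a phase rotation of the visibilities. A minimal sketch of that textbook relation, under one common sign convention (the thesis implements a far more accurate wide-field version, and the function name here is illustrative):

```python
import numpy as np

def shift_phase_centre(vis, u, v, dl, dm):
    """Rotate visibility phases to move the phase centre by (dl, dm).

    vis : complex visibilities; u, v : baseline coordinates in
    wavelengths; dl, dm : direction-cosine offsets of the new centre.
    This is the narrow-field textbook relation (sign conventions vary
    between packages); the thesis procedure is far more accurate.
    """
    return vis * np.exp(-2j * np.pi * (u * dl + v * dm))

# A point source offset from the phase centre produces a phase ramp
# across the UV plane; re-centring on the source flattens the phases.
rng = np.random.default_rng(2)
u = rng.uniform(-1e4, 1e4, 100)   # baseline coordinates, wavelengths
v = rng.uniform(-1e4, 1e4, 100)
dl, dm = 1e-5, -2e-5              # source offset (direction cosines)
vis = np.exp(2j * np.pi * (u * dl + v * dm))  # unit-flux point source

shifted = shift_phase_centre(vis, u, v, dl, dm)
print(np.allclose(shifted, 1.0))  # phases flat after re-centring
```

Over very wide fields and long baselines this first-order relation accumulates exactly the kind of errors the thesis quantifies, which motivates the more accurate procedure it develops.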
APA, Harvard, Vancouver, ISO, and other styles
33

Andrade, Denis Furtado de. "Sistema embarcado para aquisição de imagens astronômicas." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-04042011-123639/.

Full text
Abstract:
This work presents the results obtained in the development of an embedded electronic system dedicated to the acquisition of astronomical images. The system consists primarily of a scientific astronomical camera, which houses an EMCCD imaging detector, and an electronic controller used for operating and reading out the detector. The stages that constitute the embedded system, built for an astronomical instrument installed at the international SOAR telescope in Chile, are detailed. Each of these stages (sensor, camera, electronic controller, image acquisition board, and software) is described in detail, from a review of alternative solutions to the techniques for operating the sensors and handling the electronic controllers. Details of the camera design are analyzed, as well as their influence on the camera's operation. Some scientific results already achieved with equivalent embedded systems are presented. The work also presents the results of the laboratory tests and work done for this project, and the results achieved with the instrument already operating at the telescope.
APA, Harvard, Vancouver, ISO, and other styles
34

UBEIRA, GABELLINI MARIA GIULIA. "THE ROLE OF (SUB-)STELLAR COMPANIONS ON THE DYNAMICAL EVOLUTION OF PROTOPLANETARY DISCS." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/798394.

Full text
Abstract:
The study of planet formation has become progressively more important in the last few years given the great number of diverse exoplanets recently discovered. It is, indeed, only by studying extrasolar planetary systems embedded in their natal (protoplanetary) discs that we can make statistical studies of the range of outcomes of the planet formation process. In particular, the discs that present a cavity (transitional discs) or a gap in the dust radial profile are related to disc clearing mechanisms by young giant planets. In this Thesis, we analyze observations taken with the most advanced telescopes (ALMA and VLT/SPHERE), combining multi-wavelength data to discriminate between different formation processes in systems with disc sub-structures. We provide a general overview of protoplanetary discs and planets/binaries, followed by a description of dust and gas dynamics and thermal disc structure. Moreover, we describe the two most accredited scenarios of planet formation: core accretion and gravitational instability. In the second part of the Thesis, we present a work on the dust and gas cavity of the disc around CQ Tau observed with ALMA, together with thermochemical models and hydro-dynamical simulations, which provide insight into a massive planet responsible for the clearing of such a disc structure. Secondly, we describe an analysis of a survey of 22 Herbig and F/G type stars imaged by SPHERE that confirms that the large near-infrared excess observed in the SEDs of Group I Herbig stars can be explained by the presence of a large gap in their discs. We spatially resolve spirals in HD 100453, HD 100546, CQ Tau; ring-like discs in HD 169142 and HD 141569; and single inclined thin discs in AK Sco and T Cha. We compare the results with ALMA and PDI observations and with simulations. Moreover, we detect and confirm the presence of a novel gravitationally bound companion to the young star MWC 297.
Finally, we describe a novel routine that exploits the known radial variation of stellar artifacts with wavelength together with the spectral slope of the star.
APA, Harvard, Vancouver, ISO, and other styles
35

Vassallo, Daniele. "A virtual coronagraphic test bench for SHARK-NIR, the second-generation high-contrast imager for the Large Binocular Telescope." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3422319.

Full text
Abstract:
SHARK-NIR is the second-generation high-contrast coronagraphic imager for the Large Binocular Telescope (LBT). In my Ph.D. project I was involved in the conceptual and final design phases of the instrument. Specifically, I developed a simulator in IDL that operated as a virtual test bench for a comparative study of several coronagraphic techniques identified as suitable candidates for implementation in the instrument. The simulator is based on physical optics propagation and adopts an end-to-end approach to generate images in the presence of several sources of optical aberration, from atmospheric residuals to telescope vibrations and non-common path aberrations (NCPA). In particular, a large effort was devoted to optimizing the software's efficiency through a dedicated parallelization scheme, to modelling the spatial and temporal properties of NCPA, and to investigating the effects of telescope vibrations and the impact of the forthcoming upgrade of the LBT Adaptive Optics system. I explored the coronagraphic performance in a wide range of observing conditions and characterized the coronagraphs' sensitivity to aberrations, misalignments of optical components, and chromatism. I also helped develop a data reduction pipeline to process the simulated data with several algorithms. The simulation results were used to define a final set of coronagraphic solutions that fulfill the top-level scientific requirements. Finally, I validated with simulations the phase diversity approach as a strategy for on-line sensing of NCPA. The simulations contributed to the final choice of the internal DM for both NCPA and fast tip-tilt correction.
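The physical-optics core of such an end-to-end simulator is a far-field propagation from the pupil to the focal plane. A toy sketch of that single step, not the SHARK-NIR IDL code, assuming a simple circular pupil and a defocus-like aberration:

```python
import numpy as np

def psf_from_pupil(pupil_amplitude, phase):
    """Toy Fraunhofer step: focal-plane PSF from a complex pupil field.

    A far-field (FFT) propagation like this is the basic building block
    of end-to-end coronagraphic simulators; the real instrument model
    adds many more planes and effects, so this is illustration only.
    """
    field = pupil_amplitude * np.exp(1j * phase)
    focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return np.abs(focal) ** 2

# Circular pupil on a 256x256 grid.
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = ((x**2 + y**2) <= (n // 8) ** 2).astype(float)

# Compare an unaberrated PSF with one degraded by a defocus-like phase.
perfect = psf_from_pupil(pupil, np.zeros_like(pupil))
defocus = 1.2 * ((x**2 + y**2) / (n // 8) ** 2) * pupil  # radians
aberrated = psf_from_pupil(pupil, defocus)

strehl = aberrated.max() / perfect.max()  # peak ratio: a Strehl estimate
print(strehl)
```

Chaining such steps through masks and stops, and feeding in residual wavefronts, is what lets a simulator of this kind score coronagraph designs before anything is built.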
APA, Harvard, Vancouver, ISO, and other styles
36

Bonato, Matteo. "Predictions for imaging and spectroscopic surveys of galaxies and Active Galactic Nuclei in the mid-/far-Infrared." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3423979.

Full text
Abstract:
While continuum imaging data at far-infrared to sub-millimeter wavelengths have provided tight constraints on the population properties of dusty star-forming galaxies up to high redshifts, future space missions like the Space Infra-Red Telescope for Cosmology and Astrophysics (SPICA) and ground-based facilities like the Atacama Large Millimeter/submillimeter Array (ALMA) and the Cerro Chajnantor Atacama Telescope (CCAT) will allow detailed investigations of their physical properties via their mid-/far-infrared line emission. The goal of this thesis project was to carry out predictions for these spectroscopic surveys using both a phenomenological approach and physically grounded models. These predictions are useful to optimize the planning of the surveys. In the first part of the work, I present updated predictions for the number counts and the redshift distributions of star-forming galaxies spectroscopically detectable by these future missions. These predictions exploit a recent upgrade of evolutionary models, which includes the effect of strong gravitational lensing, in the light of the most recent Herschel and South Pole Telescope (SPT) data. Moreover, the relations between line and continuum infrared luminosity are re-assessed, considering also differences among source populations, with the support of extensive simulations that take into account dust obscuration. My reference model for the redshift-dependent IR luminosity functions is the one worked out by Cai et al. (2013) based on a comprehensive hybrid approach combining a physical model for the progenitors of early-type galaxies with a phenomenological one for late-type galaxies. The derived line luminosity functions are found to be highly sensitive to the spread of the line-to-continuum luminosity ratios.
Estimates of the expected numbers of detections per spectral line by the SpicA FAR infrared Instrument (SAFARI) and by CCAT surveys for different integration times per field of view at fixed total observing time are presented. Comparing with the earlier estimates by Spinoglio et al. (2012), I find, in the case of SPICA-SAFARI, differences within a factor of two in most cases, but occasionally much larger. More substantial differences are found for CCAT. Thereafter, I present new estimates of redshift-dependent luminosity functions of IR lines detectable by SPICA-SAFARI and excited both by star formation and by AGN activity. The new estimates improve over previous work by dealing in a self-consistent way with the emission of galaxies as a whole, including both the starburst and the AGN component. While the galaxy-AGN co-evolution was already worked out by Cai et al. (2013) in the case of proto-spheroidal galaxies, the evolution of late-type galaxies was dealt with independently of that of the AGNs associated with them. I upgraded the model to enable it to take into account in a coherent way the contributions of both starbursts and AGNs to the IR emission during the cosmic evolution of late-type galaxies as well. New relationships between line and AGN bolometric luminosity have been derived, and those between line and IR luminosities of the starburst component have been updated. These ingredients were used to work out predictions for the source counts in 11 mid/far-IR emission lines partially or entirely excited by AGN activity. I find that the statistics of the emission line detection of galaxies as a whole are mainly determined by the star formation rate, because of the rarity of bright AGNs. I also find that the slope of the line integral number counts is flatter than 2, implying that the number of detections at fixed observing time increases more by extending the survey area than by going deeper.
I thus propose a spectroscopic survey of 1 hour integration per field-of-view over an area of 5 deg^2 to detect (at 5σ) ~760 AGNs in [OIV]25.89 μm - the brightest AGN mid-infrared line - out to z~2. Pointed observations of strongly lensed or hyper-luminous galaxies previously detected by large-area surveys such as those by Herschel and by SPT can provide key information on the galaxy-AGN co-evolution out to higher redshifts. Finally, as the third step of the work, I present predictions for number counts and redshift distributions of galaxies detectable in continuum and in emission lines with the Mid-infrared (MIR) Instrument (SMI) proposed for SPICA. I have considered 24 MIR emission fine-structure lines, four Polycyclic Aromatic Hydrocarbon (PAH) bands (at 6.2, 7.7, 8.6 and 11.3 μm) and two silicate bands (in emission and in absorption) at 9.7 μm and 18.0 μm. Six of these lines are primarily associated with Active Galactic Nuclei (AGNs), the others primarily with star formation. Altogether, they allow us to study the interplay between star formation and super-massive black hole growth. A survey with the SMI spectrometers of 1 hour integration per field-of-view (FoV) over an area of 1 deg^2 will yield 5σ detections of ~140 AGN lines, produced by ~110 AGNs, and of ~5.2x10^4 star-forming galaxies, ~1.6x10^4 of which will be detected in at least two lines. The combination of a shallow (20.0 deg^2, 1.4x10^(-1) h integration per FoV) and a deep survey (6.9x10^(-3) deg^2, 635 h integration time), with the SMI camera, for a total of ~1000 h, will accurately determine the MIR number counts of galaxies and of AGNs over five orders of magnitude in flux density, reaching values more than one order of magnitude fainter than the Spitzer 24 μm surveys. This will allow us to resolve almost completely the extragalactic background and to determine the cosmic star formation rate (SFR) function down to SFRs more than 100 times fainter than reached by the Herschel Observatory.
These spectroscopic observations will allow us to probe all phases of the interstellar medium (ionized, atomic and molecular). Measurements of these lines will provide redshifts and key insight on physical conditions of dust obscured regions and on the energy sources controlling their temperature and pressure. This information is critically important for investigating the complex physics ruling the dust-enshrouded active star-forming phase of galaxy evolution and the relationship with nuclear activity. Observations of strongly gravitationally lensed galaxies will be of special interest, because strong lensing allows us to measure the gas/dust distribution in galaxies up to high-z and to gain information on sources too faint to be detected with current instrument sensitivities, thus testing models for galaxy formation and dark matter.
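The wide-versus-deep conclusion in the abstract above follows from a simple scaling: with integral counts N(>S) ∝ S^(-α) and point-source sensitivity improving as t^(-1/2) per field of view, the total yield at fixed total observing time scales as t^(α/2 - 1), so shallow-and-wide wins whenever α < 2. A toy calculation with illustrative numbers (not the thesis's actual survey parameters):

```python
def survey_yield(total_time, t_per_fov, alpha, k=1.0, s0=1.0):
    """Total detections for a survey at fixed total observing time.

    Integral counts N(>S) = k * (S / s0) ** -alpha per field of view,
    5-sigma sensitivity S_lim = s0 / sqrt(t_per_fov), and the number of
    fields is total_time / t_per_fov. All units are arbitrary (toy model).
    """
    n_fields = total_time / t_per_fov
    s_lim = s0 / t_per_fov ** 0.5
    return n_fields * k * (s_lim / s0) ** -alpha

# With alpha < 2 (as found in the thesis), wide-and-shallow wins:
wide = survey_yield(total_time=1000.0, t_per_fov=1.0, alpha=1.5)
deep = survey_yield(total_time=1000.0, t_per_fov=100.0, alpha=1.5)
print(wide > deep)

# With alpha > 2 the conclusion flips and deeper is better:
print(survey_yield(1000.0, 100.0, alpha=2.5) > survey_yield(1000.0, 1.0, alpha=2.5))
```

Since the yield goes as t^(α/2 - 1), the break-even point sits exactly at α = 2, which is why the measured slope of the integral counts directly drives the survey-design choice.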
While photometric data on the continuum luminosity, at wavelengths from the far-infrared to the sub-millimetre, have provided stringent constraints on the properties of dusty galaxy populations with high star-formation rates out to high redshift, future space missions such as the Space Infra-Red Telescope for Cosmology and Astrophysics (SPICA) and ground-based telescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA) and the Cerro Chajnantor Atacama Telescope (CCAT) will enable detailed investigations of their physical properties through the analysis of their mid-/far-infrared line emission. The goal of this thesis was to produce predictions for these spectroscopic surveys, using both a phenomenological approach and physical models. Such predictions are particularly useful for optimizing survey planning. In the first part of the work, I present updated predictions for the counts and redshift distributions of galaxies with high star-formation rates that will be spectroscopically detectable by these future missions. These predictions exploit recent updates of the evolutionary models, which include the effect of strong gravitational lensing, in the light of the latest Herschel and South Pole Telescope (SPT) data. Moreover, the relations between line luminosity and infrared continuum luminosity have been recomputed, also taking into account differences among source populations, with the support of extensive simulations that account for dust obscuration. My reference model for the redshift-dependent IR luminosity functions was that of Cai et al. (2013), based on a comprehensive hybrid approach that combines a physical model for the progenitors of early-type galaxies with a phenomenological one for late-type galaxies.
The derived line luminosity functions turned out to be very sensitive to the dispersion of the line-to-continuum luminosity ratios. Estimates are presented of the expected number of detections per spectral line for surveys with the SpicA FAR infrared Instrument (SAFARI) and with CCAT, for different integration times per field of view at fixed total observing time. Comparing these estimates with earlier ones by Spinoglio et al. (2012), I found, in the case of SPICA-SAFARI, differences mostly within a factor of two, but occasionally much larger; for CCAT the differences were more substantial. I also present new estimates of redshift-dependent line luminosity functions for IR lines detectable by SPICA-SAFARI and excited both by star formation and by AGN activity. The new estimates are more accurate than the previous ones, since they treat the overall emission of galaxies self-consistently, including both the starburst and the AGN component. While the galaxy-AGN co-evolution had already been worked out by Cai et al. (2013) for proto-spheroidal galaxies, the evolution of late-type galaxies was treated there independently of that of their associated AGN. I updated the model so that it coherently handles the contributions to the IR emission of both starbursts and AGN, throughout their cosmic evolution, for late-type galaxies as well. New relations between line luminosity and AGN bolometric luminosity were derived, and those between line luminosity and the IR luminosity of the starburst component were updated. These ingredients were used to make predictions for the counts in 11 mid-/far-IR emission lines, partially or entirely excited by AGN activity.
I found that the detection statistics for galaxy emission lines (galaxies considered as a whole, i.e. starburst+AGN) are determined primarily by star formation, because of the rarity of luminous AGN. I also found that the slope of the integral line counts is flatter than 2, which implies that the number of detections, at fixed total observing time, increases more by extending the survey area than by going deeper. I therefore proposed a spectroscopic survey with 1 h of integration per field of view, over an area of 5 deg^2, to detect (at 5σ) ~760 AGN in [OIV]25.89 μm (the brightest AGN line in the mid-IR) out to z~2. Pointed observations of strongly lensed or hyper-luminous galaxies previously detected by wide-area surveys, such as those carried out with Herschel and SPT, can provide key information on galaxy-AGN co-evolution out to higher redshifts. Finally, as the third part of the work, I present predictions for the counts and redshift distributions of galaxies detectable in the continuum and in emission lines by the Mid-infrared (MIR) Instrument (SMI) proposed for SPICA. I considered 24 MIR emission lines, four polycyclic aromatic hydrocarbon (PAH) bands (at 6.2, 7.7, 8.6 and 11.3 μm) and two silicate bands (in emission and in absorption) at 9.7 μm and 18.0 μm. Six of these lines are associated mainly with AGN, the others mostly with star formation. Together, they allow us to study the interplay between star formation and supermassive black hole accretion. A survey with the SMI spectrometers with 1 h of integration per field of view, over an area of 1 deg^2, will allow the detection (at 5σ) of ~140 AGN lines, produced by ~110 AGN, and of ~5.2x10^4 galaxies with high star-formation rates, ~1.6x10^4 of which will be detected in at least two lines.
The combination of a shallow survey (20.0 deg^2, 1.4x10^(-1) h of integration per field of view) and a deep survey (6.9x10^(-3) deg^2, 635 h of integration per field of view) with the SMI camera, for a total of ~1000 h, will accurately determine the MIR counts of galaxies and AGN over more than five orders of magnitude in flux density, reaching values more than an order of magnitude fainter than the Spitzer 24 μm surveys. This will allow us to almost completely resolve the extragalactic background and to determine the cosmic star-formation rate function down to star-formation rates more than 100 times lower than is possible with Herschel.
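The wide-versus-deep argument above (integral counts flatter than a slope of 2) can be made concrete with a small scaling sketch. The numbers are purely illustrative and assume a background-limited flux limit per field, S_lim proportional to t^(-1/2):

```python
# Fixed total observing time T split into n fields of t hours each.
# Background-limited flux limit per field: S_lim ~ t**-0.5; the number of
# sources per field above S_lim scales as S_lim**-beta, so
#   detections ~ n * S_lim**-beta = (T / t) * t**(beta / 2).
# For beta < 2 this grows as t shrinks: many shallow fields beat few deep ones.
def detections(t_per_field, total_time=1000.0, beta=1.5):
    n_fields = total_time / t_per_field
    return n_fields * t_per_field ** (beta / 2.0)

wide = detections(0.1)    # many shallow fields
deep = detections(10.0)   # few deep fields
```

With beta above 2 the conclusion reverses, which is why the measured slope of the counts drives the survey strategy.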
37

Nardello, Marco. "Optical subsystems of metis (multi element telescope for imaging and spectroscopy) on board of the solar orbiter mission." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3426761.

Abstract:
The Lyman-α spectral line, at 121.6 nm, is a wavelength of great interest for solar environment exploration and astrophysics. It is an important hydrogen emission line and can give information on the dynamics of heated regions of space such as the solar photosphere and corona. Among others, the instrument METIS (Multi Element Telescope for Imaging and Spectroscopy) will be on board the Solar Orbiter mission, an ESA mission in collaboration with NASA that from the year 2018 will undertake a trip towards the Sun to explore the dynamics of the solar dynamo and its connection with the corona and heliosphere. METIS will take images of the corona in the visible range and at Lyman-α, studying the shape and evolution of the processes expanding from the Sun into the heliosphere. In the CNR-IFN UOS Padova laboratories I employed deposition and characterization facilities to study the characteristics of materials and devices to be used as optical elements at the Lyman-α wavelength. A morphological characterization was carried out with an atomic force microscope (AFM), and an optical characterization revealed the performance of the materials and devices. Variations in performance were related to modifications of the experimental conditions, and the acquired knowledge was used to optimize the performance of the final product. Annealing is an approach never fully explored before to improve the optical quality of magnesium fluoride thin films, and consequently the reflectivity of VUV optical elements. I conducted a study of the phenomena involved in the process and applied the procedure to the realization of improved mirrors for this spectral region. This work presents all the experimental steps that led to the realization of the final devices and describes the characteristics of this novel annealing approach.
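As an editorial aside, the optical response at stake here (a protective MgF2 film on an aluminium mirror at Lyman-α) can be sketched with the standard single-film Fresnel (Airy) reflectance formula. The complex refractive indices and the film thickness below are illustrative placeholders, not values measured in the thesis:

```python
import numpy as np

def film_reflectance(n0, n1, n2, d, lam):
    """Normal-incidence reflectance of a single thin film (index n1,
    thickness d) on a substrate n2, via the Fresnel/Airy formula.
    Indices may be complex (absorbing media)."""
    r01 = (n0 - n1) / (n0 + n1)
    r12 = (n1 - n2) / (n1 + n2)
    beta = 2.0 * np.pi * n1 * d / lam          # phase thickness of the layer
    phase = np.exp(2j * beta)
    r = (r01 + r12 * phase) / (1.0 + r01 * r12 * phase)
    return abs(r) ** 2

lam = 121.6e-9          # Lyman-alpha wavelength [m]
n_vac = 1.0
n_mgf2 = 1.68 + 0.01j   # illustrative MgF2 index near Lyman-alpha (assumed)
n_al = 0.04 + 1.10j     # illustrative aluminium index, strongly absorbing (assumed)

# protective MgF2 layer ~25 nm thick on an Al mirror
R_coated = film_reflectance(n_vac, n_mgf2, n_al, 25e-9, lam)
R_bare = abs((n_vac - n_al) / (n_vac + n_al)) ** 2   # ~0.93 with these indices
```

Scanning the thickness `d` would reproduce the interference oscillations that are used to tune such coatings.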
38

Martins, Alberto Garcez de Oliveira Krone. "Ampliando horizontes da missão espacial Gaia graças à análise de objetos extensos." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/14/14131/tde-30082011-180504/.

Abstract:
The main goal of this work is to verify whether it is possible to do science with the observations of extended objects that will be made by the Gaia space mission. One of the most ambitious projects of modern astronomy, this mission will observe more than a billion objects over the whole sky with unprecedented precision, providing astrometric, photometric and spectroscopic data. Naturally, given its astrometric priority, Gaia has been optimized for the study of point-like objects. Nevertheless, many sources associated with extended emission will be observed. Such emission can be of intrinsic origin, as in galaxies, or extrinsic, as in projections of distinct objects onto the same line of sight, and will probably have poorer astrometric solutions. To study this emission, its two-dimensional images must be analysed; however, since Gaia does not produce such data, we began this work by verifying whether 2D images of objects over the whole sky could be reconstructed from its one-dimensional observations. On the one hand, we estimated the number of cases subject to extrinsic extended emission, presented a method we developed to separate astronomical sources in the reconstructed images, and showed that its use will make it possible to reliably extend the final catalogue by millions of point sources, many of which lie beyond the magnitude limit of the instrument. On the other hand, for intrinsic emission, we first obtained an upper estimate of the number of cases that Gaia will be able to observe, and then verified that, after image reconstruction, the codes developed here will allow the morphological classification of millions of galaxies into early/late and elliptical/spiral/irregular types.
We also presented a method we built to perform the bulge/disc decomposition directly from Gaia's one-dimensional observations in a completely automatic way. Finally, we concluded that it is indeed possible to use much of this data, which might otherwise be ignored, to do science, and that saving it will enable both the detection of millions of objects beyond Gaia's magnitude limit and studies of the morphology of millions of galaxies whose structures can only be revealed from space or by means of adaptive optics, expanding a little further the horizons of this already far-reaching mission.
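The bulge/disc decomposition mentioned in the abstract can be illustrated with a minimal 1D fit: a de Vaucouleurs (Sersic n=4) bulge plus an exponential disc, adjusted by least squares on a synthetic profile. This is a generic sketch, not the automatic Gaia pipeline developed in the thesis; all parameter values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def bulge_disc(r, I_b, r_b, I_d, r_d):
    """1D surface-brightness model: de Vaucouleurs (Sersic n=4) bulge
    plus an exponential disc, both in linear intensity units."""
    bulge = I_b * np.exp(-7.669 * ((r / r_b) ** 0.25 - 1.0))
    disc = I_d * np.exp(-r / r_d)
    return bulge + disc

# synthetic noisy profile with known parameters (illustrative values)
rng = np.random.default_rng(0)
r = np.linspace(0.1, 20.0, 200)           # radius, arbitrary units
true_params = (5.0, 1.0, 1.0, 5.0)        # I_b, r_b, I_d, r_d
data = bulge_disc(r, *true_params) * (1.0 + 0.02 * rng.standard_normal(r.size))

# positive bounds keep the optimizer away from unphysical negative scale lengths
popt, _ = curve_fit(bulge_disc, r, data, p0=(1.0, 2.0, 0.5, 3.0),
                    bounds=([1e-3] * 4, [100.0] * 4))
```

The fit recovers the disc and bulge scale lengths from the blended profile, which is the essence of the decomposition.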
39

Cavazzoni, Vittoria. "Proprietà generali dei pianeti del Sistema Solare e ricerca dei pianeti esterni." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23920/.

Abstract:
The Solar System is not only the cradle of planet Earth, home to humanity; it is much more: a complex equilibrium of planets, dwarf planets and minor bodies orbiting the great central star, the Sun. This work first outlines how the Solar System is thought to have formed. It then highlights the main characteristics of its planets, dividing them into rocky and gaseous planets. For completeness, the minor bodies (comets, asteroids, meteors, meteorites) and the dwarf planets are also briefly analysed, finally intriguing the reader with some characteristics of the hypothetical Planet X. The second part of the work analyses the main techniques used to search for planets outside the Solar System, known as exoplanets. These methods are divided into direct ones, such as direct imaging, and indirect ones, such as radial velocity, transits, microlensing and astrometry. It then describes how the first extrasolar planet was discovered, how the various methods are applied, and which have led to numerous discoveries and which to only a few.
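Two of the indirect methods listed above reduce to simple observables: the transit method measures the fractional flux dip, depth = (R_p/R_s)^2, and the radial-velocity method measures the stellar reflex semi-amplitude K. A minimal sketch with the textbook formulas and Jupiter/Sun parameters, assuming a circular orbit:

```python
import math

# Transit method: the flux dip equals the planet/star projected area ratio.
def transit_depth(r_planet_m, r_star_m):
    return (r_planet_m / r_star_m) ** 2

# Radial-velocity method: reflex semi-amplitude for a circular orbit,
# K = (2*pi*G / P)**(1/3) * m_p * sin(i) / (M_s + m_p)**(2/3).
G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def rv_semi_amplitude(period_s, m_planet_kg, m_star_kg,
                      inclination_rad=math.pi / 2):
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet_kg * math.sin(inclination_rad)
            / (m_star_kg + m_planet_kg) ** (2 / 3))

R_sun, R_jup = 6.957e8, 7.149e7     # radii [m]
M_sun, M_jup = 1.989e30, 1.898e27   # masses [kg]

depth = transit_depth(R_jup, R_sun)                   # ~0.011, a ~1% dip
K = rv_semi_amplitude(11.86 * 3.156e7, M_jup, M_sun)  # ~12.5 m/s
```

These numbers show why hot Jupiters were found first: a Jupiter-mass planet on a 1-year orbit would instead give K of roughly 28 m/s, and shorter periods raise K further.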
40

Zurlo, Alice. "Characterization of exoplanetary systems with the direct imaging technique: towards the first results of SPHERE at the Very Large Telescope." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424178.

Abstract:
In the year of the 20th anniversary of the discovery of the first extrasolar planet, we can count more than 1800 companions found with different techniques. The majority of these are indirect methods, which infer the presence of an orbiting body by observing the parent star (radial velocity, transits, astrometry). In this work we explore the technique that permits direct observation of planets and the retrieval of their spectra, provided that they are bright enough and far enough from their host star. Direct imaging is a new technique made possible by a new generation of extreme adaptive optics instruments mounted on 8 m class telescopes. On the Very Large Telescope, two instruments dedicated to the search for exoplanets with direct imaging are now operative: NACO and SPHERE. This thesis describes the development and results of SPHERE, from its predecessor NACO through its integration in the laboratory to the first on-sky results. Chapter 1 presents exoplanet research, formation mechanisms, and the characterization of planetary atmospheres. Chapter 2 gives a general overview of the two instruments used for the results presented in this thesis: NACO and SPHERE. In Chapter 3 I describe an example of a false positive in the direct imaging technique, found during the NACO Large Program survey; this work has been published in Zurlo et al. 2013. In Chapter 4 I present the performance of SPHERE, in particular of the IRDIS and IFS subsystems, extensively tested in the laboratory before shipment to Paranal; this work has been published in Zurlo et al. 2014. Chapter 5 presents work done to find special targets for the NIRSUR survey: long-period radial-velocity planets that are observable with SPHERE. In Chapter 6 I present one of the first on-sky results, the observations and analysis of the multi-planetary system HR 8799. In Chapter 7 I give conclusions and future prospects.
To date, more than 1800 planets have been discovered orbiting stars outside the Solar System. Many techniques are used in the search for extrasolar planets: some, called indirect methods, rely on observing the perturbation induced by the orbiting planet on the host star, while others rely on direct observation of the planet itself. Most of the planets discovered so far have been revealed by the former; the radial-velocity and transit methods in particular have provided the largest number of discoveries. The drawback of these techniques is that the characterization of the planet cannot be complete unless several techniques are used simultaneously. Moreover, to obtain the planet's spectrum the planet must transit, and even then the signal is difficult to extract. Direct observation of these objects, known as direct imaging, is now possible thanks to advanced adaptive optics systems installed on 8 m class telescopes. Direct imaging permits the direct observation of planets that are sufficiently bright and distant from the host star, thanks to a mask that blocks the star's light. The technique is therefore particularly efficient for young, nearby systems, since the intrinsic luminosity of a planet decreases with age and the apparent separation of the planet depends on the distance of the system itself. On the Very Large Telescope at Paranal (Chile), two instruments are dedicated to this kind of research: NACO and SPHERE. NACO was conceived as the predecessor and prototype of SPHERE, but it is kept in operation thanks to its still competitive performance and to some capabilities that SPHERE does not offer. SPHERE saw first light in May 2014 and is now ready to begin a survey dedicated to the discovery of planets around young, nearby systems, NIRSUR.
The instrument consists of three subsystems: IRDIS, IFS and ZIMPOL. IRDIS is an infrared camera whose detector is split into two equal halves to exploit simultaneous imaging of the target in two adjacent filters. IFS is SPHERE's spectrograph; it can extract the spectrum of a planet with resolutions of 30 and 50, depending on the spectral band used. ZIMPOL is the only subsystem working in the visible and is used to observe the polarization of planetary systems. This work presents the SPHERE instrument and its predecessor NACO, focusing on the results and on the performance in the characterization of planetary systems.
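Instruments like SPHERE are typically operated in pupil-stabilized mode so that angular differential imaging (ADI) can remove the quasi-static stellar halo. Below is a generic classical-ADI sketch on a toy data cube; it is not the SPHERE reduction pipeline, and the companion brightness and geometry are invented:

```python
import numpy as np
from scipy.ndimage import rotate

def classical_adi(cube, parallactic_angles_deg):
    """Classical ADI: subtract the stack median (the quasi-static stellar
    halo) from each frame, derotate each residual by its parallactic
    angle, and median-combine the derotated residuals."""
    ref = np.median(cube, axis=0)
    residuals = cube - ref
    derot = [rotate(res, -ang, reshape=False, order=1)
             for res, ang in zip(residuals, parallactic_angles_deg)]
    return np.median(derot, axis=0)

# toy pupil-stabilized cube: static halo + a companion rotating with the
# field (built with the same rotate() so derotation exactly inverts it)
n, size = 20, 64
yy, xx = np.indices((size, size))
halo = 100.0 * np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 50.0)
companion = 5.0 * np.exp(-((xx - 47.0) ** 2 + (yy - 32.0) ** 2) / 2.0)
angles = np.linspace(0.0, 60.0, n)
cube = np.stack([halo + rotate(companion, ang, reshape=False, order=1)
                 for ang in angles])

final = classical_adi(cube, angles)   # companion survives near (32, 47)
```

The halo, identical in every frame, cancels in the median subtraction, while the derotated companion adds up coherently near its true position.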
41

Del Monte, Ettore. "SuperAGILE: an X-Ray monitor for a gamma mission." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2006. http://hdl.handle.net/2108/206.

Abstract:
My Ph.D. Thesis, carried out at IASF CNR/INAF in Rome under the supervision of Dr. Enrico Costa, contains the study of the scientific performance of the SuperAGILE instrument. SuperAGILE is the X-ray monitor of AGILE, a satellite-borne mission of ASI whose payload is composed of two instruments, sensitive in the 15-40 keV and 30 MeV-50 GeV energy bands respectively, and whose launch was foreseen for late 2005. SuperAGILE is a coded-aperture instrument with a silicon microstrip detector and a tungsten coded mask. The topic of my Thesis is the study of SuperAGILE's scientific performance and main criticalities: measurement of the performance uniformity of the XAA1.2 front-end electronics circuit, of its thermal stability and of its stability against supply-voltage variations; study of cosmic-ray interactions in the front-end circuit, with experimental measurements and an estimate of the expected flux in orbit; measurement of the scientific performance of the SuperAGILE flight model; and, finally, study of the impact of threshold non-uniformity on the images. The measurements of the performance uniformity of the XAA1.2, of its thermal stability (between -20 °C and +40 °C) and of its stability against supply-voltage variations are performed using a dedicated acquisition board, feeding the chip with a pulse generator contained in the board. From these measurements, a variation of the XAA1.2 address signals (used to reconstruct the images of the sources in the sky) on the 10 °C scale is found. The study of the effect of cosmic-ray interactions in the XAA1.2 chip, which was not designed as a radiation-hard component for space applications, concerns latch-up (a sudden increase of the supply currents that can damage the chip through overheating), SEU (a bit flip in the memory registers with loss of chip configuration), and the effect of the absorbed dose on linearity and power consumption.
The measurements were performed by irradiating the chip with ions (from ^16O to ^197Au) at the SIRAD facility of the Tandem accelerator at the INFN Laboratori Nazionali di Legnaro, near Padova. For different values of the LET, a measure of the energy released per unit length by charged particles in silicon, the latch-up and SEU cross-sections are measured. During the irradiation, linearity measurements using the test-pulse generator are performed in order to study the total-dose effect. Evaluating the ion flux in orbit with the CREME96 code and using an approximate model to take proton spallation into account, I found that the expected latch-up and SEU rate in orbit is less than one event over the whole duration of AGILE, and that the total-dose effect is negligible. My Thesis also contains the characterization of the SuperAGILE flight model, performed by measuring the linearity and the noise of the front-end electronics after the XAA1.2 integration, after the burn-in procedure (powering the board in nominal configuration inside an oven at 75 °C for 240 consecutive hours) and after the detector integration. From the measurements I found no performance degradation after the burn-in procedure. After the detector integration, the noise of the front-end electronics is about 7.5 keV FWHM, while the energy threshold is about 19 keV. The noise was also measured using X-ray sources (^241Am, ^57Co, ^109Cd and Ba fluorescence lines), and the measured values are in good agreement with the test-pulse measurements. The Thesis also discusses the most important issues in the development of the data-analysis programs: because of the large number of pixels in the SuperAGILE detector, linearity and noise (with both the test-pulse generator and X-ray sources) need to be estimated automatically, without requiring the user to provide specific parameters.
Finally, the Thesis contains an estimate of the effect of threshold non-uniformity on SuperAGILE images, obtained by generating background detector images with different threshold non-uniformity models and decoding the resulting sky images. I found that, while the nominal threshold uniformity does not allow faint sources to be observed with exposures of order 10^6 s, the uniformity level obtained with the digital fine threshold equalization (a 3-bit DAC) makes 10^6 s exposures possible.
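The coded-aperture imaging principle behind SuperAGILE can be sketched in one dimension: the detector records the sky seen through the mask, each source direction shifting the projected pattern, and cross-correlation with the zero-mean mask recovers the sky image. The mask size, source positions and fluxes below are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D coded-mask toy: each source direction projects the open/closed mask
# pattern shifted in proportion to its off-axis angle.
n = 501
mask = rng.integers(0, 2, n).astype(float)   # 1 = open, 0 = closed
sky = np.zeros(n)
sky[30], sky[70] = 200.0, 150.0              # two point sources (arbitrary)

detector = np.zeros(n)
for pos in range(n):
    if sky[pos] > 0:
        detector += sky[pos] * np.roll(mask, pos)
detector = detector + rng.poisson(5.0, n)    # flat background counts

# decode: correlate the counts with the zero-mean ("balanced") mask,
# so that a uniform background cancels exactly
balanced = mask - mask.mean()
decoded = np.array([np.dot(detector, np.roll(balanced, s)) for s in range(n)])
```

Real decoding also weights pixels by exposure and threshold maps, which is where the threshold non-uniformity studied in the thesis enters.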
42

Miranda, Castillo Nicolás Martín. "Deep learning para identificación de núcleos activos de galaxias por variabilidad." Tesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168059.

Abstract:
Magíster en Ciencias, Mención Computación
In the present era of massive data, astronomy requires automated tools for analyzing information on the behavior of objects over time. The development of synoptic survey projects poses many challenges for obtaining relevant descriptions of the underlying aspects of many time-varying processes. The study of Active Galactic Nuclei (AGN) is of special interest, given their stochastic behavior in time and the singular structure of the temporal variation of their electromagnetic emission. Machine learning algorithms have been very successful at identifying objects by their morphology and spectral analysis; replicating those results in the time domain would be of great value. To this end, different configurations of Deep Learning architectures, in particular Convolutional Neural Networks and Recurrent Neural Networks, were tested on the task of classifying AGN from their light curves. They were evaluated on data simulated with a mathematical model and on 6102 real light curves obtained from observations of the extragalactic fields COSMOS, Stripe82 and XMM-LSS. Results were favorable on simulated data, reaching a maximum ROC AUC score of 0.96, but not on real data, where the maximum score reached was 0.55. This difference can be explained by the small number of real examples available for training the classifiers, whereas the simulation model made it possible to generate a much larger number of training curves, which enabled much better learning from them.
This work provided quantitative information on how important certain characteristics of the light curves are, in particular the regularity of their sampling and the number of observations, to the performance of these kinds of Deep Learning classification models. In addition, a data-handling workflow for light-curve classification is proposed, from collection from standard-format (FITS) files through to model validation, which can be reused in future Deep Learning applications on time series. It is also suggested that future implementations add methods for handling the uncertainty caused by missing measurements, such as graphical, hidden-state or stochastic models.
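The classifiers above are compared by ROC AUC. As a minimal, self-contained sketch (not the thesis code), the metric can be computed directly from its rank interpretation: the probability that a randomly chosen positive example outscores a randomly chosen negative one.

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U formulation: the probability
    that a random positive outscores a random negative (ties = 1/2)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # pairwise comparison is O(n_pos * n_neg): fine for illustration
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0 (perfect ranking)
print(roc_auc([0, 0, 1, 1], [0.9, 0.8, 0.2, 0.1]))  # 0.0 (inverted ranking)
```

On this scale, the 0.96 reported for simulated curves is near-perfect ranking, while the 0.55 on real data means the classifier ranks AGN barely better than chance.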
APA, Harvard, Vancouver, ISO, and other styles
43

Burtovoi, Aleksandr. "Investigation of Gamma-ray Pulsars with the Cherenkov Telescope Array and the ASTRI Mini-array." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424367.

Full text
Abstract:
This Thesis presents the prospects for investigating gamma-ray pulsars with future Cherenkov facilities, the Cherenkov Telescope Array (CTA) and the ASTRI mini-array (a possible precursor of CTA). Gamma-ray pulsars are compact astrophysical objects which emit photons with energies up to ~100 GeV. The nature of the gamma-ray emission from these sources is not fully understood. In addition, the recent detection of the Crab pulsar at very high energies (VHE, >100 GeV) with Cherenkov telescopes such as MAGIC and VERITAS challenged current theoretical models. CTA will be a next-generation ground-based VHE gamma-ray instrument, designed to achieve a sensitivity about an order of magnitude better than that of currently operating Cherenkov installations. It will consist of two arrays, one in the northern and one in the southern hemisphere, each with a large number of different-sized telescopes. Early observations can be carried out with CTA precursors, such as the ASTRI mini-array. I simulated the VHE emission from the 12 most energetic Fermi pulsars. I analyzed the Fermi-LAT data of these pulsars above 10 GeV and extrapolated their gamma-ray spectra up to ~160 TeV, in order to estimate how many of them will be detectable with CTA. In addition, I performed a detailed investigation of the pulsed VHE emission from the Crab pulsar, simulating the light curve detectable with CTA. I calculated the accuracy with which it will be possible to study the timing properties of this pulsar with CTA and the ASTRI mini-array. Finally, I investigated VHE gamma rays from the Vela X region. Assuming different spatial distributions for the emission from the Vela pulsar wind nebula, I calculated more realistic estimates of the significance of the Vela pulsar detection with CTA. Using different software packages (ctools and Astrisim), I also studied the extended Vela X emission and tested the resolving capabilities of CTA and the ASTRI mini-array.
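The extrapolation step can be illustrated with a toy example. The spectral shape and the numbers below are assumptions for illustration only (the thesis fits actual Fermi-LAT spectra); a plain power law is used, for which the integral flux above a threshold has a closed form.

```python
import numpy as np

# hypothetical pulsar spectrum above 10 GeV: dN/dE = N0 * (E/E0)**(-Gamma);
# N0, E0 and Gamma are illustrative values, not fits from the thesis
N0, E0, Gamma = 1e-11, 10.0, 3.2    # ph cm^-2 s^-1 GeV^-1, pivot in GeV

def dnde(E):
    """Differential photon spectrum at energy E (GeV)."""
    return N0 * (E / E0) ** (-Gamma)

def flux_above(E_th):
    """Integral flux above E_th (GeV): closed form, valid for Gamma > 1."""
    return N0 * E0 / (Gamma - 1) * (E_th / E0) ** (1 - Gamma)

# extrapolated integral flux above 100 GeV, the VHE threshold quoted above
print(flux_above(100.0))
```

Comparing such an integral flux against a telescope's sensitivity curve is the usual first-pass detectability estimate.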
APA, Harvard, Vancouver, ISO, and other styles
44

Fogliardi, Michele. "Proprietà generali dei pianeti del sistema solare e ricerca di pianeti esterni." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21236/.

Full text
Abstract:
This work analyzes the main physical, chemical and structural properties of the planets of the Solar System, focusing in particular on the differences between terrestrial and Jovian planets, i.e. between rocky planets and gas giants. The main techniques for searching for extrasolar planets are then described, distinguishing between direct and indirect methods. In particular, after a brief description of the origin of the Solar System according to the nebular theory, currently the most widely accepted one, the planetary atmospheres are characterized, along with the mechanism underlying the differences in chemical composition between the two types of planets, followed by a description of their general properties. Finally, the main methods for searching for and detecting extrasolar planets are highlighted, with particular attention to the indirect methods: the radial velocity method, the transit method, the astrometric method and gravitational microlensing.
APA, Harvard, Vancouver, ISO, and other styles
45

Perianhes, Roberto Vitoriano. "Utilizando algoritmo de cross-entropy para a modelagem de imagens de núcleos ativos de galáxias obtidas com o VLBA." Universidade Presbiteriana Mackenzie, 2017. http://tede.mackenzie.br/jspui/handle/tede/3466.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The images obtained with interferometers such as the VLBA (Very Long Baseline Array) and with VLBI (Very Long Baseline Interferometry) are direct evidence of relativistic jets and outbursts associated with supermassive black holes in active galactic nuclei (AGN). The study of these images is critical for exploiting the information in these observations, since they are one of the main ingredients for synthesis codes of extragalactic objects. This thesis uses both synthetic and observed images. The VLBA images are 2-dimensional observations generated by complex 3-dimensional astrophysical processes. One of the main difficulties for models is therefore defining the parameters of the functions and equations that reproduce, macroscopically and dynamically, the physical formation events of these objects, so that the images can be studied reliably and on a large scale. One goal of this thesis is to elaborate a generic description of the observations, assuming that these objects originate from similar astrophysical processes, given certain parameters of the formation events. The definition of parameters that reproduce the observations is key to generalizing the formation of extragalactic sources and jets. Most observational papers focus on few or even single objects. The purpose of this project is to implement an innovative, more robust and efficient method for modeling and reproducing various objects, for example the sources of the MOJAVE Project, which monitors several quasars simultaneously and offers a diverse library for building models (quasars and blazars: OVVs and BL Lacertae objects). In this thesis a dynamic way to study these objects was implemented. This thesis presents the adaptation of the Cross-Entropy algorithm for calibrating the parameters of astrophysical events so as to synthesize the real events seen in the VLBA observations.
The development of the adaptation structure of the code includes the possibility of extension to any image, provided the images are given as intensities (Jy/beam) distributed in Right Ascension (RA) and Declination (DEC) maps. The code is validated by searching for self-convergence on synthetic models with the same structure, i.e. realistic simulations of component ejection, on milliarcsecond scales, similar to the 15.3 GHz observations of the MOJAVE project. Using the parameters semi-major axis, position angle, eccentricity and intensity, applied individually to each observed component, it was possible to compute the structure of the sources and the velocities of the jets, as well as the conversion into flux density to obtain light curves. From the light curves, the brightness temperature, the Doppler factor, the Lorentz factor and the viewing angle of the extragalactic objects can be estimated with precision. The objects OJ 287, 4C +15.05, 3C 279 and 4C +29.45 are studied in this thesis because their different and complex morphologies allow a more complete study.
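The cross-entropy method at the heart of the fitting is generic: sample candidate parameter vectors, keep the best-scoring elite fraction, refit the sampling distribution to the elite, and repeat. Below is a minimal sketch with a Gaussian sampling distribution and a toy quadratic objective; the thesis applies the same idea to an image-residual objective over per-component parameters, so the objective and settings here are illustrative assumptions.

```python
import numpy as np

def cross_entropy_min(f, mu, sigma, n_samples=200, n_elite=20, n_iter=50, seed=None):
    """Minimize f over R^d with the cross-entropy method: sample candidates
    from a Gaussian, keep the n_elite best, refit mean/std, repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        elite = samples[np.argsort([f(s) for s in samples])[:n_elite]]
        # refit the sampling distribution to the elite set
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

# toy objective: recover the hidden parameters (3, -2)
best = cross_entropy_min(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 2.0) ** 2,
                         mu=[0.0, 0.0], sigma=[5.0, 5.0], seed=0)
print(best)  # close to [3, -2]
```

The method needs no gradients, which is why it suits objectives defined through image synthesis rather than closed-form equations.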
APA, Harvard, Vancouver, ISO, and other styles
46

Silvestre, João Maria Felner Rino Alves. "Development of sparse coding and reconstruction subsystems for astronomical imaging." Master's thesis, 2019. http://hdl.handle.net/10451/37722.

Full text
Abstract:
Integrated master's thesis in Engineering Physics, Universidade de Lisboa, Faculdade de Ciências, 2019
Compressed sensing (CS) is a revolutionary signal processing technique that allows us to fully recover all the information in a signal while circumventing the sampling limits imposed on conventional techniques, provided there exists a representation basis in which the signal is sparse, i.e. compressible. From very soon after its formulation in 2006, through its development and spread in the following years, CS enabled new approaches in photography, holography and medical instrumentation, among others. Only recently, however, has there been interest in taking this technique out of the laboratory. Continuing the work previously developed by Bandarra and Pires in their respective Masters' dissertations, this work describes the progress made towards a portable instrument, with practical application outside the laboratory, that uses the CS approach: a compressed sensing astronomy camera (COSAC). The instrument was designed around five subsystems: optomechanics, signal coding, acquisition electronics, signal reconstruction and the mechanical structure (casing and supports for the other subsystems). The work focused on the development and implementation of the signal coding and reconstruction subsystems; a rudimentary prototype of the mechanical structure is also proposed so that all subsystems can be integrated and the instrument tested outside the laboratory, which led to a redesign of the optomechanical supports.
In addition, some changes were made to the acquisition electronics, both to improve its behavior and to ease its integration with the signal coding subsystem; as a result, two working circuits are proposed, one using a 10-bit ADC, the other a 24-bit ADC. As a by-product of the development of these circuits, a shield with a 24-bit converter for the Arduino Uno is also presented. The central component of this instrument, linking the optomechanical, signal coding and acquisition subsystems, is a digital micromirror device (DMD), an array of independent micromirrors that can each assume one of two opposite tilts. Such a device can be, and is, used to structure light, and is currently present in much projection equipment. For this project a DLP LightCrafter, a projector/development kit produced by Texas Instruments that includes a DMD, was used, the DMD being responsible for encoding the signals. Testing the acquisition electronics showed that its dynamic range was not fully exploited; that some connections between one of the components and the Arduino micro-controller, responsible for managing the procedures required to perform measurements, prevented synchronization between the two; and that the printed circuit board (PCB) layout was not optimized for the topology of the Arduino and the connections required between the two. Corrections to these problems are proposed here: connections and one circuit module were changed, the PCB was redesigned so that it plugs into the Arduino headers, and a terminal was added to allow TTL communication between the Arduino and the LightCrafter.
The signal coding subsystem consists of the LightCrafter and two programs: one, written in C/C++ and run on a PC (DMD-CS.cpp), whose purposes are to control the DMD, communicate with the LightCrafter's processor and communicate with the aforementioned Arduino controller; the second (shared with the acquisition subsystem), written in the Arduino language and run on the micro-controller (pIDDO.ino), which activates and deactivates the logic gates of the integrated circuits (ICs) as required for a measurement to be performed successfully, and which communicates with the PC. The interaction between the two programs is essential to guarantee synchronism between the coding and acquisition subsystems. Square Hadamard matrices, which can be built with simple algorithms, were chosen as the representation basis for encoding the measured signal. To save computational resources, an algorithm was developed that, given the rank of the matrix and the index of one of its rows, constructs only that row. The rows thus generated are then manipulated into tilt-configuration patterns for the micromirrors, and the set of these rows/patterns defines the sampling matrix. The method used to configure the DMD requires transferring these patterns encoded as bitmap (BMP) images, so the functions needed to manipulate the pattern information so that the LightCrafter's processor interprets it as intended were created here. Each program generates an output file: DMD-CS.cpp produces a file with information about the sampling matrix, while the file generated by pIDDO.ino contains the results of the measurements performed by the acquisition electronics.
The functions and instructions written to manage the communication between the PC and the LightCrafter are a simplified implementation of the code of its graphical interface. The program lets the user set the rank of the Hadamard matrix to use, the number of its rows to generate and the duration of each acquisition. The signal reconstruction subsystem is another program that, from the files generated by DMD-CS.cpp and pIDDO.ino, reconstructs the real signal by implementing an optimization algorithm originally developed by Romberg; it outputs a grayscale BMP image file with the result. The parts for the structural subsystem prototype and the optomechanical supports were designed with computer-aided design (CAD) software, in which finite element simulations were also performed to guarantee that both the parts and the structure keep their integrity under real operating conditions. Some parts were bought; the rest were produced in the laboratory, printed in photopolymer resin on a stereolithographic 3D printer, or in workshops, machined in aluminium using numerically controlled and computer numerically controlled (CNC) milling machines, among other tools. The prototype was designed so that it can be attached to a common equatorial telescope mount. All subsystems were first tested individually and then in pairs: the coding and acquisition subsystems, to guarantee that the chain of processes between the two was synchronized; and the optomechanical and acquisition subsystems, to focus the input signals first on the region of interest of the DMD and, after reflection, on the detector. The subsystems were then mounted on the structure to form COSAC.
The instrument was calibrated, analysed and validated, using both versions of the acquisition circuit, in the laboratory under controlled lighting conditions. Comparative results of COSAC's performance are also presented for three distinct acquisition modes (raster scanning, Hadamard transform optics and CS using Hadamard matrices as a basis). COSAC was shown to be able to produce images in the visible spectrum from measurements in CS mode with a resolution of at least 64×64 pixels.
Compressed sensing (CS) is a revolutionary signal processing technique that allows us, under a specific set of conditions, to fully reconstruct an under-sampled signal. Very early, from its inception in 2006 through its subsequent development and spread in the following years, compressed sensing enabled advancements in photography, holography and medical instrumentation, among others. Its application in astronomy, however, even though some calls to action have recently been made, has failed to leave the test bed. Continuing from the work developed by Bandarra and Pires in their respective Masters' dissertations, advancements are here described on the development of a physical, out-of-the-lab instrument: a compressed sensing astronomy camera (COSAC). The instrument was designed around five subsystems: optomechanics, signal coding, acquisition electronics, signal reconstruction and the mechanical structure (casing and inner supports for the aforementioned subsystems). The present work focused on the development and implementation of the signal coding and reconstruction subsystems, while a simple prototype for the mechanical structure is also proposed to enable testing the instrument in a real-world setting; this required a redesign of the optomechanical supports. Additionally, some changes were made to the acquisition electronics in order not only to improve its behavior but also to facilitate its integration with the signal coding subsystem; as a result two working circuits are proposed, one using a 10-bit ADC, the other a 24-bit ADC. A central component of this instrument, which bridges the optomechanics, signal coding and acquisition subsystems, is a digital micromirror device (DMD), an array of independently controlled micromirrors which can be tilted in two opposed directions. Such a device can be, and is, used to manipulate light.
For this project a DLP LightCrafter, a projector development kit by Texas Instruments which includes a DMD, was used to encode light signals. The signal coding subsystem is constituted by the LightCrafter and two programs: one written in C/C++ to run on a PC (DMD-CS.cpp), whose main purpose is to control the DMD and communicate with the LightCrafter's processor, and which also communicates with an Arduino micro-controller that manages the acquisition electronics; the second (which is also part of the acquisition subsystem) in the Arduino programming language, to run on the micro-controller (pIDDO.ino), which manages the processes required to perform measurements with the electronics and communicates with the C/C++ program. The interactions between both programs are crucial to ensure synchronism between the signal coding and acquisition subsystems. The chosen encoding bases are square Hadamard matrices, which can be obtained with simple algorithms; rows of such matrices are then manipulated into tilt configurations for the micromirror grid, and the set of rows used constitutes a sampling matrix. Each program outputs a file, one holding information about the sampling matrix used, the other holding the measurements. The signal reconstruction subsystem is another program that takes the files generated by DMD-CS.cpp and pIDDO.ino and reconstructs the original signal by implementing a Matlab script written by Romberg. The program then outputs a BMP image file of that reconstruction. The components of the prototype structural subsystem and the optomechanical supports were designed using computer-aided design (CAD) software, with which finite element simulations were also performed to ensure those components would endure real-world conditions. Some of these components were bought; most were fabricated in the laboratory. All subsystems were individually tested, as well as in pairs (when relevant).
After passing those tests, the subsystems were assembled to form COSAC. The instrument was calibrated, analysed and validated, using both versions of the acquisition circuit, in a laboratory setting with controlled lighting conditions. Comparative results of COSAC's performance for three modes of acquisition (raster, Hadamard transform optics and CS with a Hadamard basis) are also presented. COSAC was shown to be able to produce images from CS measurements, performed in the visible spectrum, with at least 64×64 pixels.
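The abstract describes generating a single Hadamard row on demand, from the matrix rank and a row index, without materializing the whole matrix. The thesis algorithm is not given; the sketch below uses the standard Sylvester construction as an assumed, illustrative implementation for power-of-two orders.

```python
def hadamard_row(order, i):
    """Row i of the Sylvester-Hadamard matrix of the given order
    (a power of two), via H[i, j] = (-1)**popcount(i & j)."""
    assert order > 0 and order & (order - 1) == 0, "order must be a power of two"
    return [1 - 2 * (bin(i & j).count("1") % 2) for j in range(order)]

# first two rows of the order-4 matrix; distinct rows are orthogonal
r0, r1 = hadamard_row(4, 0), hadamard_row(4, 1)
print(r0)                                   # [1, 1, 1, 1]
print(r1)                                   # [1, -1, 1, -1]
print(sum(a * b for a, b in zip(r0, r1)))   # 0
```

In COSAC, the ±1 entries of one such row map to the two opposite micromirror tilts, and the set of rows used forms the sampling matrix.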
APA, Harvard, Vancouver, ISO, and other styles
47

Steinbring, Eric. "Techniques in high resolution observations from the ground and space, and imaging of the merging environments of radio galaxies at redshift 1 to 4." Thesis, 2000. https://dspace.library.uvic.ca//handle/1828/9861.

Full text
Abstract:
High resolution imaging and spectroscopy are invaluable tools for extragalactic astronomy. Galaxies with redshifts of 1 or more subtend a very small angle on the sky—typically, only about an arcsecond. Unfortunately, this is also approximately the angular resolution achieved with a ground-based telescope regardless of its aperture. Atmospheric turbulence ruins the image before it reaches the telescope but the emerging technology of adaptive optics (AO) gives the observer the possibility, within limitations, of correcting for these effects. This is the case for instruments such as the Canada-France-Hawaii Telescope (CFHT) Adaptive Optics Bonnette (AOB) and the Gemini North Telescope (Gemini) Altitude-Conjugate Adaptive Optics for the Infrared (Altair) systems. The alternative is to rise above the limitations of the atmosphere entirely and put the telescope in space, for example, the Hubble Space Telescope (HST) and its successor, the Next-Generation Space Telescope (NGST). I discuss several techniques that help overcome the limitations of AO observations with existing instruments in order to make them more comparable to imaging from space. For example, effective dithering and flat-fielding techniques as well as methods to determine the effect of the instrument on the image of, say, a galaxy. The implementation of these techniques as a software package called AOTOOLS is discussed. I also discuss computer simulations of AO systems, notably the Gemini Altair instrument, in order to understand and improve them. I apply my AO image processing techniques to observations of high-redshift radio galaxies (HzRGs) with the CFHT AOB and report on deep imaging in near-infrared (NIR) bands of 6 HzRGs in the redshift range 1.1 ≤ z ≤ 3.8. The NIR is probing the restframe visible light—mature stellar populations—at these redshifts.
The radio galaxy is resolved in all of these observations and its ‘clumpier’ appearance at higher redshift leads to the main result—although the sample is very small—that these galaxy environments are undergoing mergers at high redshift. Finally, I look to the future of high resolution observations and discuss simulations of imaging and spectroscopy with the NGST. The computer software NGST VI/MOS is a ‘virtual reality’ simulator of the NGST observatory providing the user with the opportunity to test real observing campaigns.
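The flat-fielding step mentioned above follows the standard CCD calibration recipe: subtract the dark frame, then divide by the flat field. A minimal sketch on synthetic frames (illustrative only, not the AOTOOLS code):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.full((8, 8), 100.0)           # true sky: 100 counts everywhere
dark = np.full((8, 8), 5.0)              # additive detector signal
flat = rng.uniform(0.8, 1.2, (8, 8))     # pixel-to-pixel sensitivity map

raw = truth * flat + dark                # what the detector records
calibrated = (raw - dark) / flat         # subtract dark, divide by flat

print(np.allclose(calibrated, truth))    # True: the sky is recovered
```

Dithering then shifts the sky across different pixels between exposures, so residual flat-field errors and bad pixels average out when the shifted frames are registered and stacked.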
Graduate
APA, Harvard, Vancouver, ISO, and other styles
48

"The Carnegie Image Tube Committee and the Development of Electronic Imaging Devices in Astronomy, 1953-1976." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.53749.

Full text
Abstract:
This dissertation examines the efforts of the Carnegie Image Tube Committee (CITC), a group created by Vannevar Bush and composed of astronomers and physicists, who sought to develop a photoelectric imaging device, generally called an image tube, to aid astronomical observations. The Carnegie Institution of Washington’s Department of Terrestrial Magnetism coordinated the CITC, but the committee included members from observatories and laboratories across the United States. The CITC, which operated from 1954 to 1976, sought to replace direct photography as the primary means of astronomical imaging. Physicists, who gained training in electronics during World War II, led the early push for the development of image tubes in astronomy. Vannevar Bush’s concern for scientific prestige led him to form a committee to investigate image tube technology, and postwar federal funding for the sciences helped the CITC sustain development efforts for a decade. During those development years, the CITC acted as a mediator between the astronomical community and the image tube producers but failed to engage astronomers concerning various development paths, resulting in a user group without real buy-in on the final product. After a decade of development efforts, the CITC designed an image tube, which the Radio Corporation of America manufactured, and, with additional funding from the National Science Foundation, the committee distributed to observatories around the world. While excited about the potential of electronic imaging, few astronomers used the Carnegie-developed device regularly. Although the CITC’s efforts did not result in an overwhelming adoption of image tubes by the astronomical community, examining the design, funding, production, and marketing of the Carnegie image tube shows the many and varied processes through which astronomers have acquired new tools.
Astronomers’ use of the Carnegie image tube to acquire useful scientific data illustrates factors that contribute to astronomers’ adoption or non-adoption of those new tools.
Dissertation/Thesis
Doctoral Dissertation History and Philosophy of Science 2019
APA, Harvard, Vancouver, ISO, and other styles
49

Bakhshizadeh, Milad. "Phase retrieval in the high-dimensional regime." Thesis, 2021. https://doi.org/10.7916/d8-cpgb-7d85.

Full text
Abstract:
The main focus of this thesis is on the phase retrieval problem. This problem has a broad range of applications in advanced imaging systems, such as X-ray crystallography, coherent diffraction imaging, and astrophotography. Thanks to its broad applications and its mathematical elegance and sophistication, phase retrieval has attracted researchers with diverse backgrounds. Formally, phase retrieval is the problem of recovering a signal 𝔁 ∈ ℂⁿ from its phaseless linear measurements of the form |𝛼ᵢ∗𝔁| + 𝜖ᵢ where sensing vectors 𝛼ᵢ, 𝑖 = 1, 2, ..., 𝓶, are in the same vector space as 𝔁 and 𝜖ᵢ denotes the measurement noise. Finding an effective recovery method in a practical setup, analyzing the required sample complexity and convergence rate of a solution, and discussing the optimality of a proposed solution are some of the major mathematical challenges that researchers have tried to address in the last few years. In this thesis, our aim is to shed some light on some of these challenges and propose new ways to improve the imaging systems that have this problem at their core. Toward this goal, we focus on the high-dimensional setting where the ratio of the number of measurements to the ambient dimension of the signal remains bounded. This regime differs from the classical asymptotic regime in which the signal's dimension is fixed and the number of measurements is increasing. We obtain sharp results regarding the performance of the existing algorithms and the algorithms that are introduced in this thesis. To achieve this goal, we first develop a few sharp concentration inequalities. These inequalities enable us to obtain sharp bounds on the performance of our algorithms. We believe such results can be useful for researchers who work in other research areas as well. 
Second, we study the spectrum of some of the random matrices that play important roles in the phase retrieval problem, and use these tools to analyze the performance of some popular phase retrieval recovery schemes. Finally, we revisit the problem of structured signal recovery from phaseless measurements. We propose an iterative recovery method that efficiently solves the problem by taking advantage of any prior knowledge about the signal that is available in the form of a compression code. We rigorously analyze the performance of the proposed method and provide extensive simulations demonstrating its state-of-the-art performance.
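As a concrete illustration of the phaseless measurement model y = |Ax| described in this abstract, a minimal alternating-minimization ("error reduction") recovery sketch might look as follows. The algorithm shown is a classical baseline, not necessarily the method proposed in the thesis, and the dimensions and parameter choices are illustrative assumptions:

```python
# Sketch of phase retrieval by alternating minimization ("error reduction").
# All names and parameters here are illustrative, not taken from the thesis.
import numpy as np

def retrieve_phase(A, y, iters=200, seed=0):
    """Estimate x (up to a global phase) from phaseless measurements y = |A x|."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # random start
    residuals = []
    for _ in range(iters):
        z = A @ x
        phase = z / np.maximum(np.abs(z), 1e-12)            # current phase estimate
        x = np.linalg.lstsq(A, y * phase, rcond=None)[0]    # least-squares update
        residuals.append(np.linalg.norm(np.abs(A @ x) - y))
    return x, residuals

# Noiseless toy problem; m/n stays bounded, echoing the high-dimensional regime.
rng = np.random.default_rng(1)
n, m = 8, 64
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = np.abs(A @ x_true)
x_hat, res = retrieve_phase(A, y)
```

A useful property of this iteration is that the measurement residual ‖|Ax| − y‖ is non-increasing from one step to the next, which is what makes it a convenient baseline against which sharper high-dimensional analyses are compared.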
APA, Harvard, Vancouver, ISO, and other styles
50

Lavigne, Jean-Francois. "Imagerie à haut contraste et caractérisation d'exoplanètes par la spectroscopie intégrale de champ." Thèse, 2009. http://hdl.handle.net/1866/3456.

Full text
Abstract:
This thesis focuses on improving high-contrast imaging techniques that enable the direct detection of faint companions at small separations from their host star. More precisely, it contributes to the development of the Gemini Planet Imager (GPI), a second-generation instrument for the Gemini telescopes. This instrument will use an integral field spectrograph (IFS) to characterize detected companions and to attenuate the speckle noise that limits their detection, and it will correct atmospheric turbulence to an unprecedented level by combining two deformable mirrors in its adaptive optics (AO) system: the woofer and the tweeter. The woofer corrects low-spatial-frequency, high-amplitude aberrations, while the tweeter compensates for higher-frequency, lower-amplitude aberrations. First, the high-contrast imaging performance achievable with IFSs currently operating on 8-10 m telescopes is investigated through observations of the faint companion to the star GQ Lup, using the NIFS IFS and the ALTAIR AO system on the Gemini North telescope. The angular differential imaging (ADI) technique is used to attenuate the speckle noise by a factor of 2 to 6. The JHK-band spectra obtained were compared with the predictions of atmospheric and evolutionary models to constrain the companion's mass to 8−60 MJup, where MJup denotes the mass of Jupiter, making it more likely a brown dwarf than a planet. Because current IFSs are versatile instruments designed to serve many fields of astrophysics, their design was not optimized for high-contrast imaging. The second part of this thesis therefore consisted of designing and testing in the laboratory an IFS prototype optimized for this task.
Four speckle-suppression algorithms were tested on the resulting data: the simple difference, the double difference, spectral deconvolution, and a new algorithm developed in this thesis, dubbed the spectral twin algorithm. The spectral twin algorithm proved the most effective for both types of companions tested, methanated and non-methanated: the detection signal-to-noise ratio was improved by a factor of up to 14 for a methanated companion and up to 2 for a non-methanated one. Finally, problems related to splitting the wavefront correction between the two deformable mirrors of GPI's AO system are investigated. A method combining analytical calculations and Monte Carlo simulations is first presented to determine the woofer's key parameters, such as its diameter, number of actuators, and required stroke, which in turn influenced the overall instrument design. Second, since GPI will use a Fourier reconstructor, we propose to split the command between the two mirrors in the Fourier domain and to limit the modes sent to the woofer to those it can accurately reproduce. In the context of GPI, this replaces the two 1600×69 matrices required by a classical command-split scheme with a single 45×69 matrix, allowing an off-the-shelf processor to be used instead of a more complex computing architecture.
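The Fourier-domain command split between woofer and tweeter described in this abstract can be illustrated with a short sketch: transform the correction command, send the low-spatial-frequency modes to the woofer, and leave the rest to the tweeter. The grid size, cutoff frequency, and function names below are illustrative assumptions, not GPI's actual reconstructor:

```python
# Illustrative sketch (not GPI's implementation) of splitting a wavefront
# correction command between two deformable mirrors in the Fourier domain:
# low spatial frequencies go to the woofer, the remainder to the tweeter.
import numpy as np

def split_command(phase, cutoff):
    """Split a 2-D phase command into woofer (low-frequency) and
    tweeter (high-frequency) parts that sum back to the original."""
    F = np.fft.fft2(phase)
    fx = np.fft.fftfreq(phase.shape[0])
    fy = np.fft.fftfreq(phase.shape[1])
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    low = np.hypot(FX, FY) <= cutoff           # modes the woofer can reproduce
    woofer = np.fft.ifft2(np.where(low, F, 0)).real
    tweeter = np.fft.ifft2(np.where(low, 0, F)).real
    return woofer, tweeter

rng = np.random.default_rng(0)
phase = rng.standard_normal((32, 32))          # toy correction command
woofer, tweeter = split_command(phase, cutoff=0.1)
```

Because the two masks partition the Fourier modes, the two mirror commands sum back exactly to the original command, which is the property that makes this decomposition attractive compared with splitting in actuator space.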
APA, Harvard, Vancouver, ISO, and other styles