Academic literature on the topic 'Lossless reconstruction'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Lossless reconstruction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Lossless reconstruction"

1. Bassiouni, Mostafa A. "High-fidelity integrated lossless/lossy compression and reconstruction of images." Optical Engineering 32, no. 8 (1993): 1848. http://dx.doi.org/10.1117/12.145390.
2. Wang Hongjuan (王红娟), Wang Zhipeng (王志鹏), Hai Tao (海涛), Liu Xuyan (刘旭焱), and Qin Yi (秦怡). "Lossless Binary Image Reconstruction in Diffractive Encryption System with Redundant Data." Chinese Journal of Lasers 42, no. 7 (2015): 0709002. http://dx.doi.org/10.3788/cjl201542.0709002.
3. Devaki, P. "Lossless Reconstruction of Secret Image Using Threshold Secret Sharing and Transformation." International Journal of Network Security & Its Applications 4, no. 3 (2012): 111–19. http://dx.doi.org/10.5121/ijnsa.2012.4307.
4. Priya, C., T. Kesavamurthy, and M. Uma Priya. "An Efficient Lossless Medical Image Compression Using Hybrid Algorithm." Advanced Materials Research 984-985 (July 2014): 1276–81. http://dx.doi.org/10.4028/www.scientific.net/amr.984-985.1276.

Abstract:
Recently, many new algorithms for image compression based on wavelets have been developed. This paper gives a detailed explanation of the SPIHT algorithm combined with the Lempel-Ziv-Welch compression technique for image compression, implemented in MATLAB. Set Partitioning in Hierarchical Trees (SPIHT) is one of the most efficient algorithms known today. The SPIHT algorithm creates pyramid structures based on a wavelet decomposition of an image. Lempel-Ziv-Welch is a universal lossless data compression algorithm that guarantees the original information can be exactly reproduced from the compressed data. The proposed methods achieve a better compression ratio, higher computational speed, and good reconstruction quality of the image. To evaluate the proposed lossless methods, the performance metrics compression ratio, mean square error, and peak signal-to-noise ratio are calculated. Keywords: Lempel-Ziv-Welch (LZW), SPIHT, wavelet.
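The abstract above hinges on LZW being lossless: the decoder rebuilds the same phrase dictionary and recovers the input exactly. A minimal sketch of that idea in Python (this is the textbook algorithm, not the paper's MATLAB implementation; function names are ours):

```python
def lzw_compress(data: bytes) -> list:
    """Encode bytes as dictionary codes, growing the phrase table as we go."""
    table = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # extend the current phrase
        else:
            out.append(table[w])      # emit code for longest known phrase
            table[wc] = len(table)    # register the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list) -> bytes:
    """Rebuild the original bytes; the decoder regrows the identical table."""
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        # The code may refer to a phrase created on this very step (KwKwK case).
        entry = table[code] if code in table else w + w[:1]
        out += entry
        table[len(table)] = w + entry[:1]
        w = entry
    return bytes(out)
```

The round trip is exact for any input, which is what makes LZW a suitable lossless back end behind a lossy wavelet stage.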
5. Aaker, Ole Edvard, Espen Birger Raknes, Ørjan Pedersen, and Børge Arntsen. "Wavefield reconstruction for velocity–stress elastodynamic full-waveform inversion." Geophysical Journal International 222, no. 1 (2020): 595–609. http://dx.doi.org/10.1093/gji/ggaa147.

Abstract:
Gradient computations in full-waveform inversion (FWI) require calculating zero-lag cross-correlations of two wavefields propagating in opposite temporal directions. Lossless media permit accurate and efficient reconstruction of the incident field from recordings along a closed boundary, such that both wavefields propagate backwards in time. Reconstruction avoids storing wavefield states of the incident field to secondary storage, which is not feasible for many realistic inversion problems. We give particular attention to velocity–stress modelling schemes and propose a novel modification of a conventional reconstruction method derived from the elastodynamic Kirchhoff–Helmholtz integral. In contrast to the original formulation in previous related work, the proposed approach is well suited to velocity–stress schemes. Numerical examples demonstrate accurate wavefield reconstruction in heterogeneous, elastic media. A practical example using 3-D elastic FWI demonstrates agreement with the reference solution.
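The zero-lag cross-correlation mentioned in the abstract is simply a time-summed pointwise product of two wavefields evaluated at every grid point. A toy illustration (array layout and names are our own assumptions, not from the paper):

```python
def zero_lag_crosscorrelation(forward, adjoint):
    """Accumulate sum over t of u(x, t) * v(x, t) at every grid point.

    `forward` and `adjoint` are lists of time snapshots; each snapshot is a
    list of field values over the spatial grid. Both fields are iterated in
    the same temporal direction (e.g. backwards in time, after the incident
    field has been reconstructed from boundary recordings).
    """
    nx = len(forward[0])
    image = [0.0] * nx
    for u_t, v_t in zip(forward, adjoint):
        for i in range(nx):
            image[i] += u_t[i] * v_t[i]
    return image
```

Reconstructing the forward field on the fly is what lets this sum be accumulated without writing every snapshot to disk.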
6. Hejrati, Behzad, Abdolhossein Fathi, and Fardin Abdali-Mohammadi. "A new near-lossless EEG compression method using ANN-based reconstruction technique." Computers in Biology and Medicine 87 (August 2017): 87–94. http://dx.doi.org/10.1016/j.compbiomed.2017.05.024.
7. Setyaningsih, Emy, and Agus Harjoko. "Survey of Hybrid Image Compression Techniques." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 4 (2017): 2206. http://dx.doi.org/10.11591/ijece.v7i4.pp2206-2214.

Abstract:
Compression reduces the size of data while maintaining the quality of the information contained therein. This paper presents a survey of research papers discussing improvements to various hybrid compression techniques during the last decade. A hybrid compression technique combines the best properties of each group of methods, as is done in the JPEG compression method: it combines lossy and lossless compression to obtain a high compression ratio while maintaining the quality of the reconstructed image. Lossy compression produces a relatively high compression ratio, whereas lossless compression yields high-quality data reconstruction, as the data can later be decompressed with the same result as before compression. Discussion of the knowledge of and issues around ongoing hybrid compression development indicates the possibility of further research to improve the performance of image compression methods.
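The lossy-then-lossless pipeline the survey describes can be illustrated with a deliberately simple scheme: uniform quantization as the lossy stage, followed by DEFLATE via Python's standard `zlib` module as the lossless stage. This is a hedged sketch of the general structure, not any specific method from the surveyed papers:

```python
import json
import zlib

def hybrid_compress(samples, step=8):
    """Lossy stage: uniform quantization; lossless stage: zlib (DEFLATE)."""
    quantized = [round(s / step) for s in samples]
    payload = json.dumps(quantized).encode()
    return zlib.compress(payload), step

def hybrid_decompress(blob, step):
    """The lossless decode is exact; the only error is the quantization step."""
    quantized = json.loads(zlib.decompress(blob))
    return [q * step for q in quantized]
```

All the information loss is confined to the quantizer (error at most step/2 per sample); the entropy-coding stage is perfectly reversible, which is exactly the division of labour hybrid schemes rely on.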
8. Hennenfent, Gilles, Lloyd Fenelon, and Felix J. Herrmann. "Nonequispaced curvelet transform for seismic data reconstruction: A sparsity-promoting approach." GEOPHYSICS 75, no. 6 (2010): WB203–WB210. http://dx.doi.org/10.1190/1.3494032.

Abstract:
We extend our earlier work on the nonequispaced fast discrete curvelet transform (NFDCT) and introduce a second generation of the transform. This new generation differs from the previous one by the approach taken to compute accurate curvelet coefficients from irregularly sampled data. The first generation relies on accurate Fourier coefficients obtained by an [Formula: see text]-regularized inversion of the nonequispaced fast Fourier transform (FFT), whereas the second is based on a direct [Formula: see text]-regularized inversion of the operator that links curvelet coefficients to irregular data. Also, by construction the second generation NFDCT is lossless, unlike the first generation NFDCT. This property is particularly attractive for processing irregularly sampled seismic data in the curvelet domain and bringing them back to their irregular recording locations with high fidelity. Secondly, we combine the second generation NFDCT with the standard fast discrete curvelet transform (FDCT) to form a new curvelet-based method, coined nonequispaced curvelet reconstruction with sparsity-promoting inversion (NCRSI), for the regularization and interpolation of irregularly sampled data. We demonstrate that for a pure regularization problem the reconstruction is very accurate. The signal-to-reconstruction error ratio in our example is above [Formula: see text]. We also conduct combined interpolation and regularization experiments. The reconstructions for synthetic data are accurate, particularly when the recording locations are optimally jittered. The reconstruction in our real data example shows amplitudes along the main wavefronts smoothly varying with limited acquisition imprint.
9. Schlaeger, S. "A fast TDR-inversion technique for the reconstruction of spatial soil moisture content." Hydrology and Earth System Sciences 9, no. 5 (2005): 481–92. http://dx.doi.org/10.5194/hess-9-481-2005.

Abstract:
Spatial moisture distribution in natural soil or other material is valuable information for many applications. Standard measurement techniques give only averaged or point results. Therefore, a new inversion algorithm has been developed to derive moisture profiles along single TDR sensor probes. The algorithm uses the full information content of TDR reflection data measured from one or both sides of an embedded probe. The system consisting of sensor probe and surrounding soil can be interpreted as a nonuniform transmission line. The algorithm is based on the telegraph equations for nonuniform transmission lines and an optimization approach to reconstruct the distribution of the capacitance and effective conductance along the transmission line with high spatial resolution. The capacitance distribution can be converted into permittivity and water content by means of a capacitance model and dielectric mixing rules. Numerical investigations have been carried out to verify the accuracy of the inversion algorithm. Single- and double-sided time-domain reflection data were used to determine the capacitance and effective conductance profiles of lossless and lossy materials. The results show that single-sided reflection data are sufficient for lossless (or low-loss) cases. In the case of lossy material, two independent reflection measurements are required to reconstruct a reliable capacitance profile. The inclusion of an additional effective conductivity profile leads to an improved capacitance profile. The algorithm converges very quickly and yields a capacitance profile within a sufficiently short time. The additional transformation to water content requires no significant calculation time.
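For reference, the telegraph equations the abstract invokes take the following standard form for a nonuniform line with per-unit-length inductance L'(x), capacitance C'(x), and effective conductance G'(x); the notation here is the conventional one, not taken from the paper:

```latex
\frac{\partial V(x,t)}{\partial x} = -L'(x)\,\frac{\partial I(x,t)}{\partial t},
\qquad
\frac{\partial I(x,t)}{\partial x} = -C'(x)\,\frac{\partial V(x,t)}{\partial t} - G'(x)\,V(x,t)
```

In the lossless case G'(x) = 0, which is why a single-sided reflection measurement suffices there; the inversion described above reconstructs C'(x), and for lossy material additionally G'(x), from the measured reflections.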
10. Schlaeger, S. "A fast TDR-inversion technique for the reconstruction of spatial soil moisture content." Hydrology and Earth System Sciences Discussions 2, no. 3 (2005): 971–1009. http://dx.doi.org/10.5194/hessd-2-971-2005.

Abstract:
Spatial moisture distribution in natural soil or other material is valuable information for many applications. Standard measurement techniques give only averaged or pointwise results. Therefore, a new inversion algorithm has been developed to derive moisture profiles along single TDR sensor probes. The algorithm uses the full information content of TDR reflection data measured from one or both sides of an embedded probe. The system consisting of sensor probe and surrounding soil can be interpreted as a nonuniform transmission line. The algorithm is based on the telegraph equations for nonuniform transmission lines and an optimization approach to reconstruct the distribution of the capacitance and effective conductance along the transmission line with high spatial resolution. The capacitance distribution can be converted into permittivity and water content by means of a capacitance model and dielectric mixing rules. Numerical investigations have been carried out to verify the accuracy of the inversion algorithm. Single- and double-sided time-domain reflection data were used to determine the capacitance and effective conductance profiles of lossless and lossy soils. The results show that single-sided reflection data are sufficient for lossless (or low-loss) cases. In the case of lossy material, two independent reflection measurements are required to reconstruct a reliable soil moisture profile. The inclusion of an additional effective conductivity profile leads to an improved capacitance profile. The algorithm converges very quickly and yields a capacitance profile within a sufficiently short time. The additional transformation to water content requires no significant calculation time.
Dissertations / Theses on the topic "Lossless reconstruction"

1. Ramírez Jávega, Francisco. "Graph-based techniques for compression and reconstruction of sparse sources." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/385349.

Abstract:
The main goal of this thesis is to develop lossless compression schemes for analog and binary sources. All the considered compression schemes share the feature that the encoder can be represented by a graph, so they can be studied using tools from modern coding theory. In particular, this thesis focuses on two compression problems: the group testing problem and the noiseless compressed sensing problem. Although the two problems may seem unrelated, the thesis shows that they are very close. Furthermore, group testing has the same mathematical formulation as non-linear binary source compression schemes that use the OR operator, and the thesis exploits the similarities between these problems. The group testing problem aims at identifying the defective subjects of a population with as few tests as possible. Group testing schemes can be divided into two groups: adaptive and non-adaptive. The former generate tests sequentially and exploit partial decoding results to reduce the overall number of tests required to label all members of the population, whereas non-adaptive schemes perform all the tests in parallel and attempt to label as many subjects as possible. Our contributions to the group testing problem are both theoretical and practical. We propose a novel adaptive scheme aimed at performing the testing process efficiently. Furthermore, we develop tools to predict the performance of both adaptive and non-adaptive schemes when the number of subjects to be tested is large; these tools allow the performance of adaptive and non-adaptive group testing schemes to be characterized without simulating them. The goal of the noiseless compressed sensing problem is to retrieve a signal from its linear projection into a lower-dimensional space, which is possible only when the number of null components of the original signal is large enough. Compressed sensing deals with the design of sampling schemes and reconstruction algorithms that reconstruct the original signal vector from as few samples as possible. In this thesis we pose the compressed sensing problem within a probabilistic framework, as opposed to the classical compressed sensing formulation; recent results in the state of the art show that this approach is more efficient than the classical one. Our contributions to noiseless compressed sensing are both theoretical and practical. We deduce a necessary and sufficient matrix design condition to guarantee that the reconstruction is lossless. Regarding the design of practical schemes, we propose two novel reconstruction algorithms based on message passing over the sparse representation of the matrix, one of them with very low computational complexity.
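The non-adaptive group testing setting described above (OR measurements over pools of subjects) admits a very simple baseline decoder known as COMP: any subject that appears in a negative test is certainly non-defective, and everything that is never cleared is declared defective. A self-contained sketch of that baseline (the thesis proposes its own, more sophisticated schemes; this is only the textbook decoder, with names of our choosing):

```python
def run_tests(tests, defectives):
    """OR measurement: a pool is positive iff it contains a defective subject."""
    return [int(any(i in defectives for i in pool)) for pool in tests]

def comp_decode(tests, outcomes, n_items):
    """COMP decoder for non-adaptive group testing.

    Every subject appearing in a negative (outcome 0) pool is cleared;
    the remaining subjects are declared defective. With a well-designed
    test matrix this recovers the defective set exactly.
    """
    cleared = set()
    for pool, positive in zip(tests, outcomes):
        if not positive:
            cleared.update(pool)
    return sorted(set(range(n_items)) - cleared)
```

With too few or badly chosen pools, COMP can only over-report (false positives, never false negatives), which mirrors the lossless-reconstruction condition the thesis derives: the test matrix must separate every plausible defective set.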

Conference papers on the topic "Lossless reconstruction"

1. Grover, Pulkit. "Fundamental limits on power consumption for lossless signal reconstruction." In 2012 IEEE Information Theory Workshop (ITW 2012). IEEE, 2012. http://dx.doi.org/10.1109/itw.2012.6404730.
2. Prasetyo, Heri, Jing-Ming Guo, and Chih-Hsien Hsia. "Friendly and Progressive Visual Secret Sharing with Lossless Reconstruction." In 2019 5th International Conference on Science in Information Technology (ICSITech). IEEE, 2019. http://dx.doi.org/10.1109/icsitech46713.2019.8987504.
3. Benammar, Meryem, and Abdellatif Zaidi. "Lossless source coding for a Heegard-Berger problem with two sources and degraded reconstruction sets." In 2015 International Symposium on Wireless Communication Systems. IEEE, 2015. http://dx.doi.org/10.1109/iswcs.2015.7454324.
4. Enlow, M. A., Tao Ju, I. A. Kakadiaris, and J. P. Carson. "Lossless 3-D reconstruction and registration of semi-quantitative gene expression data in the mouse brain." In 2011 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2011. http://dx.doi.org/10.1109/iembs.2011.6091994.