
Journal articles on the topic "Python and libraries of python (SciPy)"

Create an accurate citation in APA, MLA, Chicago, Harvard and other styles


Consult the top 50 journal articles for your research on the topic "Python and libraries of python (SciPy)".

Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Boulle, A. y J. Kieffer. "High-performance Python for crystallographic computing". Journal of Applied Crystallography 52, n.º 4 (24 de julio de 2019): 882–97. http://dx.doi.org/10.1107/s1600576719008471.

Full text
Abstract
The Python programming language, combined with the numerical computing library NumPy and the scientific computing library SciPy, has become the de facto standard for scientific computing in a variety of fields. This popularity is mainly due to the ease with which a Python program can be written and executed (easy syntax, dynamical typing, no compilation etc.), coupled with the existence of a large number of specialized third-party libraries that aim to lift all the limitations of the raw Python language. NumPy introduces vector programming, improving execution speeds, whereas SciPy brings a wealth of highly optimized and reliable scientific functions. There are cases, however, where vector programming alone is not sufficient to reach optimal performance. This issue is addressed with dedicated compilers that aim to translate Python code into native and statically typed code with support for the multi-core architectures of modern processors. In the present article it is shown how these approaches can be efficiently used to tackle different problems, with increasing complexity, that are relevant to crystallography: the 2D Laue function, scattering from a strained 2D crystal, scattering from 3D nanocrystals and, finally, diffraction from films and multilayers. For each case, detailed implementations and explanations of the functioning of the algorithms are provided. Different Python compilers (namely NumExpr, Numba, Pythran and Cython) are used to improve performance and are benchmarked against state-of-the-art NumPy implementations. All examples are also provided as commented and didactic Python (Jupyter) notebooks that can be used as starting points for crystallographers curious to enter the Python ecosystem or wishing to accelerate their existing codes.
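For readers outside crystallography, the contrast the authors benchmark can be illustrated with a toy version of the first problem they treat. The sketch below is not taken from the paper's notebooks: it evaluates a one-dimensional Laue interference function with vectorized NumPy and with a Numba-compiled parallel loop; the cell count and sampling grid are arbitrary.

# Hedged sketch: NumPy versus Numba evaluation of the (one-dimensional, for
# brevity) Laue interference function I(h) = sin^2(pi*N*h) / sin^2(pi*h).
import numpy as np
from numba import njit, prange

N_CELLS = 100  # number of unit cells (illustrative)

def laue_numpy(h):
    # vectorized evaluation; errstate guards the removable singularity at integer h
    with np.errstate(divide="ignore", invalid="ignore"):
        num = np.sin(np.pi * N_CELLS * h) ** 2
        den = np.sin(np.pi * h) ** 2
        return np.where(den > 0, num / den, float(N_CELLS) ** 2)

@njit(parallel=True)
def laue_numba(h):
    # same formula as an explicit loop, compiled to multi-threaded machine code
    out = np.empty_like(h)
    for i in prange(h.size):
        s = np.sin(np.pi * h[i])
        if s == 0.0:
            out[i] = N_CELLS ** 2
        else:
            out[i] = np.sin(np.pi * N_CELLS * h[i]) ** 2 / s ** 2
    return out

h = np.linspace(-2.0, 2.0, 2_000_001)
print(np.allclose(laue_numpy(h), laue_numba(h)))  # same result, compiled path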
2

Kumar, Rakesh. "FUTURE FOR SCIENTIFIC COMPUTING USING PYTHON". International Journal of Engineering Technologies and Management Research 2, n.º 1 (29 de enero de 2020): 30–41. http://dx.doi.org/10.29121/ijetmr.v2.i1.2015.28.

Full text
Abstract
Computational science (scientific computing or scientific computation) is concerned with constructing mathematical models as well as quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is basically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to problems in different scientific disciplines. The scientific computing approach is to gain understanding, basically through the analysis of mathematical models implemented on computers. Python is frequently used for high-performance scientific applications and is widely used in academia as well as in scientific projects because it is easy to write and performs well. To reach high performance, scientific computing in Python often utilizes external libraries like NumPy, SciPy and Matplotlib.
3

Nunez-Iglesias, Juan, Adam J. Blanch, Oliver Looker, Matthew W. Dixon y Leann Tilley. "A new Python library to analyse skeleton images confirms malaria parasite remodelling of the red blood cell membrane skeleton". PeerJ 6 (15 de febrero de 2018): e4312. http://dx.doi.org/10.7717/peerj.4312.

Full text
Abstract
We present Skan (Skeleton analysis), a Python library for the analysis of the skeleton structures of objects. It was inspired by the “analyse skeletons” plugin for the Fiji image analysis software, but its extensive Application Programming Interface (API) allows users to examine and manipulate any intermediate data structures produced during the analysis. Further, its use of common Python data structures such as SciPy sparse matrices and pandas data frames opens the results to analysis within the extensive ecosystem of scientific libraries available in Python. We demonstrate the validity of Skan’s measurements by comparing its output to the established Analyze Skeletons Fiji plugin, and, with a new scanning electron microscopy (SEM)-based method, we confirm that the malaria parasite Plasmodium falciparum remodels the host red blood cell cytoskeleton, increasing the average distance between spectrin-actin junctions.
4

Lemenkova, Polina. "R Libraries {dendextend} and {magrittr} and Clustering Package scipy.cluster of Python For Modelling Diagrams of Dendrogram Trees". Carpathian Journal of Electronic and Computer Engineering 13, n.º 3 (1 de septiembre de 2020): 5–12. http://dx.doi.org/10.2478/cjece-2020-0002.

Full text
Abstract
The paper presents a comparison of the two languages Python and R with respect to classification tools and demonstrates the differences in their syntax and graphical output. It indicates the functionality of the R and Python packages {dendextend} and scipy.cluster as effective tools for dendrogram modelling by algorithms for sorting and ranking datasets. The R and Python programming languages have been tested on a sample dataset of marine geological measurements. The work aims to detect how bathymetric data change along the 25 bathymetric profiles digitized across the Mariana Trench. The methodology includes hierarchical cluster analysis with dendrograms and a plotted clustermap with marginal dendrograms. The statistical libraries include Matplotlib, SciPy, NumPy and Pandas for Python and {dendextend}, {pvclust} and {magrittr} for R. The dendrograms were compared by the model-simulated clusters of the bathymetric ranges. The results show three distinct groups of profiles sorted by elevation range, with the maximal depths detected in the group of profiles 19-21. The dendrogram visualization in a cluster analysis demonstrates the effective representation of data sorting, grouping and classifying by machine learning algorithms. The programming code presented in this study makes it possible to sort a dataset in similar research aimed at grouping data based on the similarity of attributes. Effective visualization by dendrograms is a useful modelling tool for geospatial management where data ranking is required. Plotting dendrograms in R, compared to Python, offered more functional and sophisticated algorithms, refined design control and finer graphical output. The interdisciplinary nature of this work consists in the application of the coding algorithms to spatial data analysis.
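As a minimal illustration of the Python side of such a workflow, scipy.cluster can build and draw a dendrogram in a few lines. The file name, column layout and linkage method below are assumptions, not the paper's code.

# Hedged sketch: hierarchical clustering of bathymetric profiles with
# scipy.cluster and a dendrogram plot.
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster import hierarchy

profiles = pd.read_csv("mariana_profiles.csv", index_col=0)  # rows: profiles 1-25 (assumed file)
Z = hierarchy.linkage(profiles.values, method="ward")        # agglomerative clustering
plt.figure(figsize=(10, 4))
hierarchy.dendrogram(Z, labels=profiles.index.tolist())
plt.ylabel("linkage distance")
plt.tight_layout()
plt.savefig("dendrogram.png")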
5

Grose, Lachlan, Laurent Ailleres, Gautier Laurent y Mark Jessell. "LoopStructural 1.0: time-aware geological modelling". Geoscientific Model Development 14, n.º 6 (29 de junio de 2021): 3915–37. http://dx.doi.org/10.5194/gmd-14-3915-2021.

Full text
Abstract
In this contribution we introduce LoopStructural, a new open-source 3D geological modelling Python package (http://www.github.com/Loop3d/LoopStructural, last access: 15 June 2021). LoopStructural provides a generic API for 3D geological modelling applications harnessing the core Python scientific libraries pandas, numpy and scipy. Six different interpolation algorithms, including three discrete interpolators and three polynomial trend interpolators, can be used from the same model design. This means that different interpolation algorithms can be mixed and matched within a geological model, allowing different geological objects, e.g. different conformable foliations, fault surfaces and unconformities, to be modelled using different algorithms. Geological features are incorporated into the model using a time-aware approach, where the most recent features are modelled first and used to constrain the geometries of the older features. For example, we use a fault frame for characterising the geometry of the fault surface and apply each fault sequentially to the faulted surfaces. In this contribution we use LoopStructural to produce synthetic proof-of-concept models and an 86 km × 52 km model of the Flinders Ranges in South Australia using map2loop.
6

Sánchez-Jiménez, David, Fernando Buchón-Moragues, Begoña Escutia-Muñoz y Rafael Botella-Estrada. "Development of Computer Vision Applications to Automate the Measurement of the Dimensions of Skin Wounds". Proceedings 19, n.º 1 (16 de julio de 2019): 18. http://dx.doi.org/10.3390/proceedings2019019018.

Full text
Abstract
This paper shows the progress in the development of two computer vision applications for measuring skin wounds. Both applications have been written in Python programming language and make use of OpenCV and Scipy open source libraries. Their objective is to be part of a software that calculates the dimensions of skin wounds in an objective and reliable way. This could be useful in the clinical follow-up, assessing the evolution of skin wounds, as well as in research, comparing the efficacy of different treatments. Merging these two applications into a single one would allow to generate two-dimensional results in real time, and three-dimensional results after a few hours of processing.
7

Rubint, Jakub. "Effects of meshing density of 1D structural members with non-uniform cross-section along the length on the calculation of eigenfrequencies". MATEC Web of Conferences 313 (2020): 00004. http://dx.doi.org/10.1051/matecconf/202031300004.

Full text
Abstract
The density of division in the finite element method affects not only the accuracy of the calculation but also the necessary calculation time. This is directly influenced by the power of the hardware used, by the efficiency of the algorithm used to assemble the global stiffness and mass matrices and, finally, by the method used to find the eigenvalues of the matrices for the determination of eigenfrequencies. In engineering practice, when commercially available software is used, it is necessary to look for the optimum between the accuracy of the calculation and the length of the calculation. This paper deals with the solution of eigenfrequencies of 1D elements with a non-uniform cross-section using Python 3.7.4 with the libraries "numpy 1.18.1" for finding eigenvalues of matrices and "scipy 1.4.1" for finding the solution of a system of nonlinear equations.
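A minimal sketch of the underlying computation, not the author's model: once global stiffness and mass matrices K and M have been assembled for a chosen mesh density, the eigenfrequencies follow from the generalized eigenvalue problem K v = omega^2 M v, which SciPy solves directly. The toy 3-DOF matrices below are placeholders.

# Hedged sketch: eigenfrequencies from assembled K and M matrices via scipy.linalg.
import numpy as np
from scipy import linalg

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1.0e6   # stiffness, N/m (toy values)
M = np.diag([2.0, 1.5, 1.0])                 # mass, kg (toy values)

omega_sq = linalg.eigvalsh(K, M)             # generalized eigenvalues, ascending
frequencies_hz = np.sqrt(omega_sq) / (2.0 * np.pi)
print(frequencies_hz)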
8

Siebert, Julien, Janek Groß y Christof Schroth. "A Systematic Review of Packages for Time Series Analysis". Engineering Proceedings 5, n.º 1 (28 de junio de 2021): 22. http://dx.doi.org/10.3390/engproc2021005022.

Full text
Abstract
This paper presents a systematic review of Python packages with a focus on time series analysis. The objective is to provide (1) an overview of the different time series analysis tasks and preprocessing methods implemented, and (2) an overview of the development characteristics of the packages (e.g., documentation, dependencies, and community size). This review is based on a search of literature databases as well as GitHub repositories. Following the filtering process, 40 packages were analyzed. We classified the packages according to the analysis tasks implemented, the methods related to data preparation, and the means for evaluating the results produced (methods and access to evaluation data). We also reviewed documentation aspects, the licenses, the size of the packages’ community, and the dependencies used. Among other things, our results show that forecasting is by far the most frequently implemented task, that half of the packages provide access to real datasets or allow generating synthetic data, and that many packages depend on a few libraries (the most used ones being numpy, scipy and pandas). We hope that this review can help practitioners and researchers navigate the space of Python packages dedicated to time series analysis. We also provide an updated list of the reviewed packages online.
9

Stančić, Adam, Ivan Grgurević y Zvonko Kavran. "Integration of Transport-relevant Data within Image Record of the Surveillance System". PROMET - Traffic&Transportation 28, n.º 5 (27 de octubre de 2016): 517–27. http://dx.doi.org/10.7307/ptt.v28i5.2114.

Full text
Abstract
Integration of the information collected on the road within the image recorded by the surveillance system forms a unified source of transport-relevant data about the supervised situation. The basic assumption is that the procedure of integration changes the image to an extent that is invisible to the human eye, while the integrated data keep identical content. This assumption has been proven by studying the statistical properties of the image and the integrated data using a mathematical model built in the programming language Python with combinations of functions from additional libraries (OpenCV, NumPy, SciPy and Matplotlib). The model has been used to compare input methods for meta-data and methods of steganographic integration based on correcting the coefficients of the Discrete Cosine Transform of a JPEG-compressed image. For the steganographic data processing, the steganographic algorithm F5 was used. The review paper analyses the advantages and drawbacks of the integration methods and presents examples of situations in traffic in which the formed unified sources of transport-relevant information could be used.
10

Sanyal, Parikshit y Sanghita Barui. "The watershed transform in pathological image analysis: application in reticulocyte count from supravital stained smears". International Journal of Research in Medical Sciences 7, n.º 3 (27 de febrero de 2019): 871. http://dx.doi.org/10.18203/2320-6012.ijrms20190939.

Full text
Abstract
Background: Morphometric studies based on image analysis are a useful adjunct for quantitative analysis of microscopic images. However, effective separation of overlapping objects is often the bottleneck in image analysis techniques. We employ the watershed transform for counting reticulocytes from images of supravitally stained smears. Methods: The algorithm was developed on the Python programming platform, using the Numpy, Scipy and OpenCV libraries. The initial development and testing of the software were carried out with images from the American Society of Hematology Image Library. Then a pilot study with 30 samples was taken up. The samples were incubated with supravital stain immediately after collection, and smears prepared. The smears were microphotographed at the 100X objective, with no more than 150 RBCs per field. Reticulocyte count was carried out manually as well as by image analysis. Results: 600 out of 663 reticulocytes (90.49%) were correctly identified, with a specificity of 98%. The major difficulty faced was the slight bluish tinge seen in polychromatic RBCs, which were inconsistently detected by the software. Conclusions: The watershed transform can be used successfully to separate the overlapping objects usually encountered in pathological smears. The algorithm has the potential to develop into a generalized cell classifier for cytopathology and hematology.
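A hedged sketch of how a watershed-based separation step can look with the libraries named above. This is not the published algorithm: the file name and the 0.6 threshold on the distance transform are arbitrary choices.

# Hedged sketch: separating touching cells with OpenCV, NumPy and SciPy.
import cv2
import numpy as np
from scipy import ndimage

img = cv2.imread("smear_field.jpg")                       # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# the distance transform gives one peak per cell even when cells overlap
dist = ndimage.distance_transform_edt(mask)
_, sure_fg = cv2.threshold(dist.astype(np.float32), 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
markers, n_cells = ndimage.label(sure_fg.astype(np.uint8))

# watershed floods from the markers and draws boundaries between touching cells
markers = cv2.watershed(img, markers.astype(np.int32))
print("objects separated:", n_cells)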
11

Ibanez, C. A. G., B. G. Carcellar III, E. C. Paringit, R. J. L. Argamosa, R. A. G. Faelga, M. A. V. Posilero, G. P. Zaragosa y N. A. Dimayacyac. "ESTIMATING DBH OF TREES EMPLOYING MULTIPLE LINEAR REGRESSION OF THE BEST LIDAR-DERIVED PARAMETER COMBINATION AUTOMATED IN PYTHON IN A NATURAL BROADLEAF FOREST IN THE PHILIPPINES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (23 de junio de 2016): 657–62. http://dx.doi.org/10.5194/isprsarchives-xli-b8-657-2016.

Full text
Abstract
Diameter-at-Breast-Height Estimation is a prerequisite in various allometric equations estimating important forestry indices like stem volume, basal area, biomass and carbon stock. LiDAR technology provides a means of directly obtaining different forest parameters, except DBH, from the behavior and characteristics of the point cloud, which is unique to different forest classes. An extensive tree inventory was done on a two-hectare established sample plot in Mt. Makiling, Laguna for a natural growth forest. Coordinates, height, and canopy cover were measured and the types of species were identified for comparison with LiDAR derivatives. Multiple linear regression was used to get LiDAR-derived DBH by integrating field-derived DBH and 27 LiDAR-derived parameters at 20 m, 10 m, and 5 m grid resolutions. To find the best combination of parameters for DBH estimation, all possible combinations of parameters were generated and evaluated automatically using Python scripts, and additional regression-related libraries such as Numpy, Scipy, and Scikit-learn were used. The combination that yields the highest r-squared or coefficient of determination and the lowest AIC (Akaike's Information Criterion) and BIC (Bayesian Information Criterion) was determined to be the best equation. The equation performs best using 11 parameters at the 10 m grid size, with an r-squared of 0.604, an AIC of 154.04 and a BIC of 175.08. The combination of parameters may differ among forest classes in further studies. Additional statistical tests can be supplemented to help determine the correlation among parameters, such as the Kaiser-Meyer-Olkin (KMO) Coefficient and Bartlett's Test of Sphericity (BTS).
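The brute-force search described can be sketched as follows. This is an illustration only: the CSV name and column names are invented, the fit uses plain least squares, and the subset size is capped for tractability (the full search over 27 predictors is far larger).

# Hedged sketch: exhaustive subset regression ranked by r-squared, AIC and BIC.
import itertools
import numpy as np
import pandas as pd

df = pd.read_csv("lidar_metrics_10m.csv")        # field DBH plus LiDAR parameters (assumed)
y = df["dbh_field"].to_numpy()
predictors = [c for c in df.columns if c != "dbh_field"]

def fit_ols(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    n, k = len(y), X1.shape[1]
    r2 = 1.0 - rss / np.sum((y - y.mean()) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return r2, aic, bic

results = []
for size in range(1, min(len(predictors), 5) + 1):
    for combo in itertools.combinations(predictors, size):
        results.append((combo, *fit_ols(df[list(combo)].to_numpy(), y)))

best = min(results, key=lambda r: r[2])          # lowest AIC
print("best combination:", best[0])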
12

Ibanez, C. A. G., B. G. Carcellar III, E. C. Paringit, R. J. L. Argamosa, R. A. G. Faelga, M. A. V. Posilero, G. P. Zaragosa y N. A. Dimayacyac. "ESTIMATING DBH OF TREES EMPLOYING MULTIPLE LINEAR REGRESSION OF THE BEST LIDAR-DERIVED PARAMETER COMBINATION AUTOMATED IN PYTHON IN A NATURAL BROADLEAF FOREST IN THE PHILIPPINES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (23 de junio de 2016): 657–62. http://dx.doi.org/10.5194/isprs-archives-xli-b8-657-2016.

Full text
Abstract
Diameter-at-Breast-Height Estimation is a prerequisite in various allometric equations estimating important forestry indices like stem volume, basal area, biomass and carbon stock. LiDAR technology provides a means of directly obtaining different forest parameters, except DBH, from the behavior and characteristics of the point cloud, which is unique to different forest classes. An extensive tree inventory was done on a two-hectare established sample plot in Mt. Makiling, Laguna for a natural growth forest. Coordinates, height, and canopy cover were measured and the types of species were identified for comparison with LiDAR derivatives. Multiple linear regression was used to get LiDAR-derived DBH by integrating field-derived DBH and 27 LiDAR-derived parameters at 20 m, 10 m, and 5 m grid resolutions. To find the best combination of parameters for DBH estimation, all possible combinations of parameters were generated and evaluated automatically using Python scripts, and additional regression-related libraries such as Numpy, Scipy, and Scikit-learn were used. The combination that yields the highest r-squared or coefficient of determination and the lowest AIC (Akaike's Information Criterion) and BIC (Bayesian Information Criterion) was determined to be the best equation. The equation performs best using 11 parameters at the 10 m grid size, with an r-squared of 0.604, an AIC of 154.04 and a BIC of 175.08. The combination of parameters may differ among forest classes in further studies. Additional statistical tests can be supplemented to help determine the correlation among parameters, such as the Kaiser-Meyer-Olkin (KMO) Coefficient and Bartlett's Test of Sphericity (BTS).
13

Poletaev, Anatoliy Y. y Elena M. Spiridonova. "Hierarchical Clustering as a Dimension Reduction Technique for Markowitz Portfolio Optimization". Modeling and Analysis of Information Systems 27, n.º 1 (23 de marzo de 2020): 62–71. http://dx.doi.org/10.18255/1818-1015-2020-1-62-71.

Full text
Abstract
Optimal portfolio selection is a common and important application of an optimization problem. Practical application of existing optimal portfolio selection methods is often difficult due to high data dimensionality (as a consequence of the large number of securities available for investment). In this paper, a method of dimension reduction based on hierarchical clustering is proposed. Clustering is widely used in computer science, and many algorithms and computational methods have been developed for it. As the measure of securities' proximity for hierarchical clustering, the Pearson pair correlation coefficient is used. Further, the proposed method's influence on the quality of the optimal solution is investigated on several examples of optimal portfolio selection according to the Markowitz model. The influence of the hierarchical clustering parameters (the inter-cluster distance metric and the clustering threshold) on the quality of the obtained optimal solution is also investigated, as is the dependence between the target return of the portfolio and the possibility of reducing the dimension using the proposed method. For each example considered, the paper gives graphs and tables with the main results of applying the proposed method - the decrease in dimension and the drop in yield (the decrease in the quality of the optimal solution) - for a portfolio constructed using the proposed method compared to a portfolio constructed without it. For the experiments, the Python programming language and its libraries are used: scipy for clustering and cvxpy for solving the optimization problem (building an optimal portfolio).
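A compact sketch of the two stages described, not the authors' code: (1) hierarchical clustering of securities on 1 minus the Pearson correlation with scipy, (2) a minimum-variance Markowitz problem on one representative per cluster with cvxpy. The data source, clustering threshold and target return are placeholders.

# Hedged sketch: clustering-based dimension reduction before Markowitz optimization.
import numpy as np
import cvxpy as cp
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

returns = np.loadtxt("daily_returns.csv", delimiter=",")   # shape (days, assets), assumed file
corr = np.corrcoef(returns, rowvar=False)
dist = squareform(1.0 - corr, checks=False)                # condensed distance matrix
clusters = fcluster(linkage(dist, method="average"), t=0.5, criterion="distance")

# keep one representative asset per cluster to reduce the dimension
reps = [int(np.flatnonzero(clusters == c)[0]) for c in np.unique(clusters)]
mu = returns[:, reps].mean(axis=0)
sigma = np.cov(returns[:, reps], rowvar=False)

w = cp.Variable(len(reps))
problem = cp.Problem(cp.Minimize(cp.quad_form(w, sigma)),
                     [cp.sum(w) == 1, w >= 0, mu @ w >= 0.0005])
problem.solve()
print(dict(zip(reps, np.round(w.value, 3))))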
14

Virtanen, Pauli, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski et al. "SciPy 1.0: fundamental algorithms for scientific computing in Python". Nature Methods 17, n.º 3 (3 de febrero de 2020): 261–72. http://dx.doi.org/10.1038/s41592-019-0686-2.

Full text
15

Parial, Prithwish. "Python the game changer in the field of Machine Learning, Data Science and IoT: A Review". International Journal for Research in Applied Science and Engineering Technology 9, n.º 8 (31 de agosto de 2021): 1827–37. http://dx.doi.org/10.22214/ijraset.2021.37668.

Full text
Abstract
Python is the finest, easily adoptable object-oriented programming language, developed by Guido van Rossum and first released on February 20, 1991. It is a powerful high-level language in the recent software world. In this paper, our discussion is an introduction to the various Python tools applicable to machine learning techniques, data science and IoT. We then describe the packages that are in demand in the data science and machine learning communities, for example Pandas, SciPy, TensorFlow, Theano, Matplotlib, etc. After that, we show the significance of Python for building IoT applications, and we share different code samples through examples. To assist the learning experience, the examples contained in this paper can be executed interactively using Jupyter notebooks. Keywords: Machine learning, Real world programming, Data Science, IOT, Tools, Different packages, Languages- Python.
16

Gyulai, Márk. "Numerikus számítások hatékonyságának vizsgálata Python, Matlab és Octave programcsomagokkal". Multidiszciplináris tudományok 10, n.º 2 (2020): 338–49. http://dx.doi.org/10.35925/j.multi.2020.2.38.

Full text
Abstract
Numerical analysis deals with the approximate solution of mathematical problems in a form that can also be carried out efficiently by computer. The aim of the research presented in this article is to compare the speed of the basic numerical algorithms of the Python NumPy and SciPy, Matlab and GNU Octave program packages. The speed tests presented concentrate on fundamental numerical operations such as matrix operations, interpolation, solving systems of linear equations, and numerical integration and differentiation. For test problems of various sizes, the times required to perform each operation are compared, and the differences between the individual implementations are also taken into account in the measurements.
17

Santos, Thiago Teixeira. "SciPy and OpenCV as an interactive computing environment for computer vision". Revista de Informática Teórica e Aplicada 22, n.º 1 (18 de mayo de 2015): 154. http://dx.doi.org/10.22456/2175-2745.49491.

Full text
Abstract
In research and development (R&D), interactive computing environments are a frequently employed alternative for data exploration, algorithm development and prototyping. In the last twelve years, a popular scientific computing environment flourished around the Python programming language. Most of this environment is part of (or built over) a software stack named SciPy Stack. Combined with OpenCV’s Python interface, this environment becomes an alternative for current computer vision R&D. This tutorial introduces such an environment and shows how it can address different steps of computer vision research, from initial data exploration to parallel computing implementations. Several code examples are presented. They deal with problems from simple image processing to inference by machine learning. All examples are also available as IPython notebooks.
18

Galli, Massimiliano, Enric Tejedor y Stefan Wunsch. "A New PyROOT: Modern, Interoperable and More Pythonic". EPJ Web of Conferences 245 (2020): 06004. http://dx.doi.org/10.1051/epjconf/202024506004.

Full text
Abstract
Python is nowadays one of the most widely-used languages for data science. Its rich ecosystem of libraries together with its simplicity and readability are behind its popularity. HEP is also embracing that trend, often using Python as an interface language to access C++ libraries for the sake of performance. PyROOT, the Python bindings of the ROOT software toolkit, plays a key role here, since it allows to automatically and dynamically invoke C++ code from Python without the generation of any static wrappers beforehand. In that sense, this paper presents the efforts to create a new PyROOT with three main qualities: modern, able to exploit the latest C++ features from Python; pythonic, providing Python syntax to use C++ classes; interoperable, able to interact with the most important libraries of the Python data science toolset.
19

Дмитренко, Т. "Методика обробки аудіо-сигналів за допомогою алгоритмів на базі мови програмування Python." COMPUTER-INTEGRATED TECHNOLOGIES: EDUCATION, SCIENCE, PRODUCTION, n.º 41 (23 de diciembre de 2020): 152–58. http://dx.doi.org/10.36910/6775-2524-0560-2020-41-24.

Full text
Abstract
The possibility of applying an interpreted object-oriented programming language to the processing of audio data arrays, as a digital way of representing sound signals, is analysed. The principle of using Python, a high-level programming language with strong dynamic typing, for this purpose is demonstrated. The specifics of applying such modules (Python libraries) as NumPy, SciPy and Matplotlib in this field are determined. Methods for processing and modifying audio data arrays for their further use in multimedia computer networks are presented. A mathematical model of audio data processing is built, and its effectiveness is verified with the corresponding software algorithms. It is shown that current problems can be solved, and theoretical aspects of the audio processing field investigated, by using an interpreted object-oriented programming language and specialized open-source libraries.
20

Arias Hernández, Jesús Daniel, Andrés Fernando Jiménez López y Hernán Oswaldo Porras Castro. "Desarrollo de aplicaciones en python para el aprendizaje de física computacional (Development of Python applications for learning computational physics)". Ingeniería Investigación y Desarrollo 16, n.º 1 (11 de enero de 2016): 72. http://dx.doi.org/10.19053/1900771x.5122.

Full text
Abstract
This article describes an application developed for learning simulation algorithms based on concepts of classical mechanics. Electronic Engineering and Computer Science students at the Universidad de los Llanos study computational physics using particle kinematics (CP) as one of the activities of the Sistemas Dinámicos research group. Python, the programming language selected, provides portability and access to the libraries needed to represent particles. The main Python libraries used in this exercise are matplotlib, numpy, PyQt4, scipy, Tkinter and VPython. These libraries allow the simulation of uniform motion, uniformly accelerated linear motion, free fall and projectile motion. They are also useful for generating graphical user interfaces that display the data in tables and plots. The GUIs were implemented using the Tkinter and PyQt4 libraries, the latter of which eases development with the help of the Qt Designer software tools.
21

A. Aziz, Zina, Diler Naseradeen Abdulqader, Amira Bibo Sallow y Herman Khalid Omer. "Python Parallel Processing and Multiprocessing: A Review". Academic Journal of Nawroz University 10, n.º 3 (30 de agosto de 2021): 345–54. http://dx.doi.org/10.25007/ajnu.v10n3a1145.

Full text
Abstract
Parallel and multiprocessing algorithms break down significant numerical problems into smaller subtasks, reducing the total computing time on multiprocessor and multicore computers. Parallel programming is well supported in proven programming languages such as C and Python, which are well suited to “heavy-duty” computational tasks. Historically, Python has been regarded as a strong supporter of parallel programming due to the global interpreter lock (GIL). However, times have changed. Parallel programming in Python is supported by the creation of a diverse set of libraries and packages. This review focused on Python libraries that support parallel processing and multiprocessing, intending to accelerate computation in various fields, including multimedia, attack detection, supercomputers, and genetic algorithms. Furthermore, we discussed some Python libraries that can be used for this purpose.
22

Pilnenskiy, Nikita y Ivan Smetannikov. "Feature Selection Algorithms as One of the Python Data Analytical Tools". Future Internet 12, n.º 3 (16 de marzo de 2020): 54. http://dx.doi.org/10.3390/fi12030054.

Full text
Abstract
With the current trend of rapidly growing popularity of the Python programming language for machine learning applications, the gap between machine learning engineer needs and existing Python tools increases. Especially, it is noticeable for more classical machine learning fields, namely, feature selection, as the community attention in the last decade has mainly shifted to neural networks. This paper has two main purposes. First, we perform an overview of existing open-source Python and Python-compatible feature selection libraries, show their problems, if any, and demonstrate the gap between these libraries and the modern state of feature selection field. Then, we present new open-source scikit-learn compatible ITMO FS (Information Technologies, Mechanics and Optics University feature selection) library that is currently under development, explain how its architecture covers modern views on feature selection, and provide some code examples on how to use it with Python and its performance compared with other Python feature selection libraries.
23

Karki, Sonali y Dr Kiran V. "Performance Comparison of SSH Libraries". Journal of University of Shanghai for Science and Technology 23, n.º 06 (18 de junio de 2021): 868–73. http://dx.doi.org/10.51201/jusst/21/05357.

Full text
Abstract
The business industry is evolving. Enterprises have begun a digital transformation path, adopting innovative technologies that enable them to move quickly and change how they cooperate, lowering costs and improving productivity. However, as a result of these technologies, the conventional perimeter has evaporated, and identification has become the new line of defense. New security concerns necessitate modern security measures. Passwords are no longer appropriate for authenticating privileged access to mission-critical assets. Passwords are notorious for being insecure, causing weariness, and giving the user a false sense of security. Enterprises must use password-less solutions, which is where SSH key-based authentication comes in. The Python language’s numerous applications are the consequence of a mixture of traits that offer this language advantage over others. Some of the advantages of programming with Python are as follows: To enable easy communication between Python and other systems, Python Package Index (PyPI) is used. The package consists of a variety of modules developed by third-party developers. It also has the benefit of being an Open Source and Community Development language, as well as having substantial Support Libraries. There are multiple SSH libraries in python and this paper focuses on each of their pros and cons as well as the time it has taken for each of them to perform.
24

Wazarkar, Aniket M. "Python: A Quintessential approach towards Data Science". International Journal for Research in Applied Science and Engineering Technology 9, n.º VI (30 de junio de 2021): 3018–24. http://dx.doi.org/10.22214/ijraset.2021.35683.

Full text
Abstract
Python is an interpreted object-oriented programming language that is sustainably procuring vogue in the field of data science and analytics by fabricating complex software applications. Establishing a righteous nexus between developers and data scientists. Python has undoubtedly become paramount for data scientists mindful of cosmic and robust standard libraries which are used for analyzing and visualizing the data. Data scientists have to deal with the exceedingly large amount of data alias as big data. With elementary usage and a vast set of python libraries, Python has doubtlessly become an admired option to handle big data. Python has developed and evolved analytical tools which can help data scientist in developing machine learning models, web services, data mining, data classification, exploratory data analysis, etc. In this paper, we will scrutinize various tools which are used by python programmers for efficient data analytics, their scope with contrast to other programming languages.
25

Badenhorst, Melinda, Christopher J. Barry, Christiaan J. Swanepoel, Charles Theo van Staden, Julian Wissing y Johann M. Rohwer. "Workflow for Data Analysis in Experimental and Computational Systems Biology: Using Python as ‘Glue’". Processes 7, n.º 7 (18 de julio de 2019): 460. http://dx.doi.org/10.3390/pr7070460.

Full text
Abstract
Bottom-up systems biology entails the construction of kinetic models of cellular pathways by collecting kinetic information on the pathway components (e.g., enzymes) and collating this into a kinetic model, based for example on ordinary differential equations. This requires integration and data transfer between a variety of tools, ranging from data acquisition in kinetics experiments, to fitting and parameter estimation, to model construction, evaluation and validation. Here, we present a workflow that uses the Python programming language, specifically the modules from the SciPy stack, to facilitate this task. Starting from raw kinetics data, acquired either from spectrophotometric assays with microtitre plates or from Nuclear Magnetic Resonance (NMR) spectroscopy time-courses, we demonstrate the fitting and construction of a kinetic model using scientific Python tools. The analysis takes place in a Jupyter notebook, which keeps all information related to a particular experiment together in one place and thus serves as an e-labbook, enhancing reproducibility and traceability. The Python programming language serves as an ideal foundation for this framework because it is powerful yet relatively easy to learn for the non-programmer, has a large library of scientific routines and active user community, is open-source and extensible, and many computational systems biology software tools are written in Python or have a Python Application Programming Interface (API). Our workflow thus enables investigators to focus on the scientific problem at hand rather than worrying about data integration between disparate platforms.
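In the same spirit, a single fitting step of such a workflow can be written in a few lines of scientific Python. The Michaelis-Menten example below uses invented data and is only an illustration of the curve-fitting building block, not the authors' notebook.

# Hedged sketch: fitting a Michaelis-Menten rate law with scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

s = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])      # substrate, mM (invented)
v = np.array([0.9, 1.6, 2.9, 4.0, 5.1, 5.9, 6.5])        # initial rate, a.u. (invented)

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

popt, pcov = curve_fit(michaelis_menten, s, v, p0=[6.0, 0.5])
perr = np.sqrt(np.diag(pcov))                             # 1-sigma parameter errors
print(f"Vmax = {popt[0]:.2f} +/- {perr[0]:.2f}, Km = {popt[1]:.2f} +/- {perr[1]:.2f} mM")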
26

Koldunov, Nikolay V. y Luisa Cristini. "Programming as a soft skill for project managers: How to have a computer take over some of your work". Advances in Geosciences 45 (18 de octubre de 2018): 295–303. http://dx.doi.org/10.5194/adgeo-45-295-2018.

Full text
Abstract
A large part of the project manager's work can be described in terms of retrieving, processing, analysing and synthesizing various types of data from different sources. The types of information become more and more diverse (including participants, task and financial details, and dates) and data volumes continue to increase, especially for large international collaborations. In this paper we explore the possibility of using the Python programming language as a tool for retrieving and processing data for some project management tasks. Python is a general-purpose programming language with a very rich set of libraries. In recent years Python experienced explosive growth, leading to the development of several libraries that help to efficiently solve many data-related tasks without very deep knowledge of programming in general and Python in particular. In this paper we present some of the core Python libraries that can be used to solve typical project management tasks and demonstrate several real-world applications using a HORIZON 2020-type European project as an example.
27

Dysarz, Tomasz. "Application of Python Scripting Techniques for Control and Automation of HEC-RAS Simulations". Water 10, n.º 10 (2 de octubre de 2018): 1382. http://dx.doi.org/10.3390/w10101382.

Full text
Abstract
The purpose of the paper was to present selected techniques for the control of river flow and sediment transport computations with the programming language Python. The base software for modeling of river processes was the well-known and widely used HEC-RAS. The concepts were tested on two models created for a single reach of the Warta river located in the central part of Poland. The ideas described were illustrated with three examples. The first was a basic simulation of a steady flow run from the Python script. The second example presented automatic calibration of model roughness coefficients with Nelder-Mead simplex from the SciPy module. In the third example, the sediment transport was controlled by Python script. Sediment samples were accessed and changed in the sediment data file stored in XML format. The results of the sediment simulation were read from HDF5 files. The presented techniques showed good effectiveness of this approach. The paper compared the developed techniques with other, earlier approaches to control of HEC-RAS computations. Possible further developments were also discussed.
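The calibration loop described can be sketched as follows. run_hydraulic_model() is a stand-in for the calls into HEC-RAS made elsewhere in such a workflow, not a real API, and the initial roughness values and toy model response are illustrative.

# Hedged sketch: Nelder-Mead calibration of Manning roughness with scipy.optimize.
import numpy as np
from scipy.optimize import minimize

observed_stages = np.loadtxt("observed_stages.txt")       # gauge data (assumed file)

def run_hydraulic_model(roughness):
    # Placeholder for the HEC-RAS run (update Manning's n in the geometry file,
    # run the plan, read simulated water-surface elevations back). A toy linear
    # response is used here only so the sketch is self-contained.
    n_channel, n_floodplain = roughness
    return 1.5 + 20.0 * n_channel + 5.0 * n_floodplain + 0.0 * observed_stages

def objective(roughness):
    residual = run_hydraulic_model(roughness) - observed_stages
    return float(np.sum(residual ** 2))

result = minimize(objective, x0=[0.030, 0.045], method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-4, "maxiter": 200})
print("calibrated roughness:", result.x)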
28

Anandaram, Mandyam N. "On the Adaptive Quadrature of Fermi-Dirac Functions and their Derivatives". Mapana - Journal of Sciences 18, n.º 1 (1 de enero de 2019): 1–20. http://dx.doi.org/10.12723/mjs.48.1.

Full text
Abstract
In this paper, using the Python SciPy module “quad”, a fast auto-adaptive quadrature solver based on the pre-compiled QUADPACK Fortran package, computational research is undertaken to accurately integrate the generalised Fermi-Dirac function and all its partial derivatives up to the third order. The numerical results obtained with quad method when combined with optimised break points achieve an excellent accuracy comparable to that obtained by other publications using fixed-order quadratures.
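For orientation, the basic building block looks like this. It is a minimal sketch, not the paper's optimised break points: the non-relativistic Fermi-Dirac integral F_k(eta) = integral from 0 to infinity of x^k / (exp(x - eta) + 1) dx, evaluated with the adaptive scipy.integrate.quad routine and a split near the degeneracy point.

# Hedged sketch: adaptive quadrature of the Fermi-Dirac integral with scipy.
import numpy as np
from scipy.integrate import quad

def fermi_dirac(k, eta):
    # rewritten form avoids overflow for large x: exp(eta-x)/(1+exp(eta-x)) = 1/(exp(x-eta)+1)
    integrand = lambda x: x**k * np.exp(eta - x) / (1.0 + np.exp(eta - x))
    split = max(eta, 0.0) + 30.0
    head, _ = quad(integrand, 0.0, split,
                   points=[max(eta, 0.0)] if eta > 0 else None)
    tail, _ = quad(integrand, split, np.inf)
    return head + tail

print(fermi_dirac(0.5, 10.0))   # compare against tabulated values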
29

De Pra, Yuri y Federico Fontana. "Programming Real-Time Sound in Python". Applied Sciences 10, n.º 12 (19 de junio de 2020): 4214. http://dx.doi.org/10.3390/app10124214.

Full text
Abstract
For its versatility, Python has become one of the most popular programming languages. In spite of its possibility to straightforwardly link native code with powerful libraries for scientific computing, the use of Python for real-time sound applications development is often neglected in favor of alternative programming languages, which are tailored to the digital music domain. This article introduces Python as a real-time software programming tool to interested readers, including Python developers who are new to the real time or, conversely, sound programmers who have not yet taken this language into consideration. Cython and Numba are proposed as libraries supporting agile development of efficient software running at machine level. Moreover, it is shown that refactoring few critical parts of the program under these libraries can dramatically improve the performances of a sound algorithm. Such improvements can be directly benchmarked within Python, thanks to the existence of appropriate code parsing resources. After introducing a simple sound processing example, two algorithms that are known from the literature are coded to show how Python can be effectively employed to program sound software. Finally, issues of efficiency are mainly discussed in terms of latency of the resulting applications. Overall, such issues suggest that the use of real-time Python should be limited to the prototyping phase, where the benefits of language flexibility prevail on low latency requirements, for instance, needed during computer music live performances.
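A minimal illustration of the refactoring strategy discussed, not code from the article: the same one-pole low-pass filter written as an interpreted Python loop and as a Numba-compiled function suitable for block-based processing. The block length and coefficient are arbitrary.

# Hedged sketch: interpreted versus Numba-compiled sample loop.
import numpy as np
from numba import njit

def lowpass_py(x, a):
    y = np.empty_like(x)
    prev = 0.0
    for n in range(x.size):               # interpreted loop: slow
        prev = a * prev + (1.0 - a) * x[n]
        y[n] = prev
    return y

@njit(cache=True)
def lowpass_numba(x, a):                  # same loop compiled to machine code
    y = np.empty_like(x)
    prev = 0.0
    for n in range(x.size):
        prev = a * prev + (1.0 - a) * x[n]
        y[n] = prev
    return y

block = np.random.randn(44100)
np.testing.assert_allclose(lowpass_py(block, 0.99), lowpass_numba(block, 0.99))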
30

Toby, Brian H. y Robert B. Von Dreele. "GSAS-II: the genesis of a modern open-source all purpose crystallography software package". Journal of Applied Crystallography 46, n.º 2 (14 de marzo de 2013): 544–49. http://dx.doi.org/10.1107/s0021889813003531.

Full text
Abstract
The newly developed GSAS-II software is a general purpose package for data reduction, structure solution and structure refinement that can be used with both single-crystal and powder diffraction data from both neutron and X-ray sources, including laboratory and synchrotron sources, collected on both two- and one-dimensional detectors. It is intended that GSAS-II will eventually replace both the GSAS and the EXPGUI packages, as well as many other utilities. GSAS-II is open source and is written largely in object-oriented Python but offers speeds comparable to compiled code because of its reliance on the Python NumPy and SciPy packages for computation. It runs on all common computer platforms and offers highly integrated graphics, both for a user interface and for interpretation of parameters. The package can be applied to all stages of crystallographic analysis for constant-wavelength X-ray and neutron data. Plans for considerable additional development are discussed.
31

Geldiev, Ertan Mustafa, Nayden Valkov Nenkov y Mariana Mateeva Petrova. "EXERCISE OF MACHINE LEARNING USING SOME PYTHON TOOLS AND TECHNIQUES". CBU International Conference Proceedings 6 (25 de septiembre de 2018): 1062–70. http://dx.doi.org/10.12955/cbup.v6.1295.

Full text
Abstract
One of the goals of predictive analytics training using Python tools is to create a "Model" from classified examples that classifies new examples from a dataset. The purpose of the different strategies and experiments is to create a more accurate prediction model. The goals we set out in the study are to follow successive steps to find an accurate model for a dataset and to preserve it for subsequent use with the Python instruments. Once we have found the right model, we save it and load it later to classify whether we have "phishing", in our case. Given the path that leads to the discovery of the sought model, we can ask ourselves how much of it can be automated and whether a computer program can be written to automatically go through the unified steps and find the right model. Because the steps for finding the exact model are often unified and repetitive for different types of data, we have offered a hypothetical algorithm from which a computer program searching for a model could be written, for example when we have a classification task. This algorithm is rather directional and does not claim to be all-encompassing. The research explores some features of scientific Python packages like Numpy, Pandas, Matplotlib, Scipy and scikit-learn to create a more accurate model. The dataset used for the research was downloaded free from the UCI Machine Learning Repository (UCI Machine Learning Repository, 2017).
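A hedged sketch of those unified steps with scikit-learn and joblib. The file name, target column and candidate models are assumptions, not the authors' exact procedure.

# Hedged sketch: try several classifiers, keep the best one, save and reload it.
import joblib
import pandas as pd
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("phishing.csv")                         # assumed file and layout
X, y = data.drop(columns=["Result"]), data["Result"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = [LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(),
              RandomForestClassifier(n_estimators=200)]
best = max(candidates, key=lambda m: cross_val_score(m, X_train, y_train, cv=5).mean())

best.fit(X_train, y_train)
joblib.dump(best, "phishing_model.joblib")                 # persist the chosen model

reloaded = joblib.load("phishing_model.joblib")            # later: load and classify
print("held-out accuracy:", reloaded.score(X_test, y_test))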
32

Fernique, Pierre y Christophe Pradal. "AutoWIG: automatic generation of python bindings for C++ libraries". PeerJ Computer Science 4 (2 de abril de 2018): e149. http://dx.doi.org/10.7717/peerj-cs.149.

Full text
Abstract
Most of Python and R scientific packages incorporate compiled scientific libraries to speed up the code and reuse legacy libraries. While several semi-automatic solutions exist to wrap these compiled libraries, the process of wrapping a large library is cumbersome and time consuming. In this paper, we introduce AutoWIG, a Python package that wraps automatically compiled libraries into high-level languages using LLVM/Clang technologies and the Mako templating engine. Our approach is automatic, extensible, and applies to complex C++ libraries, composed of thousands of classes or incorporating modern meta-programming constructs.
33

Zahidi, Youssra, Yacine El Younoussi y Yassine Al-Amrani. "A powerful comparison of deep learning frameworks for Arabic sentiment analysis". International Journal of Electrical and Computer Engineering (IJECE) 11, n.º 1 (1 de febrero de 2021): 745. http://dx.doi.org/10.11591/ijece.v11i1.pp745-752.

Full text
Abstract
Deep learning (DL) is a machine learning (ML) subdomain that involves algorithms inspired by brain function, named artificial neural networks (ANNs). Recently, DL approaches have achieved major accomplishments across various Arabic natural language processing (ANLP) tasks, especially in the domain of Arabic sentiment analysis (ASA). When working on Arabic SA, researchers can use various DL libraries in their projects, but often without justifying their choice, or they choose a group of libraries based on their familiarity with a particular programming language. In this work we focus on the Java and Python programming languages because they have a large set of deep learning libraries that are very useful in the ASA domain. This paper presents a comparative analysis of different valuable Python and Java libraries in order to conclude the most relevant and robust DL libraries for ASA. Through this comparative analysis, we find that the TensorFlow, Theano, and Keras Python frameworks are very popular and widely used in this research domain.
34

Spotz, William F. "PyTrilinos: Recent Advances in the Python Interface to Trilinos". Scientific Programming 20, n.º 3 (2012): 311–25. http://dx.doi.org/10.1155/2012/965812.

Full text
Abstract
PyTrilinos is a set of Python interfaces to compiled Trilinos packages. This collection supports serial and parallel dense linear algebra, serial and parallel sparse linear algebra, direct and iterative linear solution techniques, algebraic and multilevel preconditioners, nonlinear solvers and continuation algorithms, eigensolvers and partitioning algorithms. Also included are a variety of related utility functions and classes, including distributed I/O, coloring algorithms and matrix generation. PyTrilinos vector objects are compatible with the popular NumPy Python package. As a Python front end to compiled libraries, PyTrilinos takes advantage of the flexibility and ease of use of Python, and the efficiency of the underlying C++, C and Fortran numerical kernels. This paper covers recent, previously unpublished advances in the PyTrilinos package.
35

Huang, Jiacong, Junfeng Gao, Georg Hörmann y Wolf M. Mooij. "Integrating three lake models into a Phytoplankton Prediction System for Lake Taihu (Taihu PPS) with Python". Journal of Hydroinformatics 14, n.º 2 (7 de noviembre de 2011): 523–34. http://dx.doi.org/10.2166/hydro.2011.020.

Full text
Abstract
In the past decade, much work has been done on integrating different lake models using general frameworks to overcome model incompatibilities. However, a framework may not be flexible enough to support applications in different fields. To overcome this problem, we used Python to integrate three lake models into a Phytoplankton Prediction System for Lake Taihu (Taihu PPS). The system predicts the short-term (1–4 days) distribution of phytoplankton biomass in this large eutrophic lake in China. The object-oriented scripting language Python is used as the so-called ‘glue language’ (a programming language used for connecting software components). The distinguishing features of Python include rich extension libraries for spatial and temporal modelling, modular software architecture, free licensing and a high performance resulting in short execution time. These features facilitate efficient integration of the three models into Taihu PPS. Advanced tools (e.g. tools for statistics, 3D visualization and model calibration) could be developed in the future with the aid of the continuously updated Python libraries. Taihu PPS simulated phytoplankton biomass well and has already been applied to support decision making.
36

Lisboa, Antônio Chicharo Prata, Claudia Mazza Dias y Julio Henrique Lopes De Almeida. "Análise Energética e Exergética para o Ciclo Dual em Motores de Alta Rotação". Pesquisa e Ensino em Ciências Exatas e da Natureza 2, n.º 1.1 (26 de noviembre de 2018): 15. http://dx.doi.org/10.29215/pecen.v2i2.1037.

Full text
Abstract
This work aims to analyse the Dual Cycle, suited to high-speed engines, with the objective of obtaining its energy and exergy efficiencies, as well as quantifying the exergy destruction in the processes. The results are obtained through parametric studies that allow the behaviour of the parameters of interest to be evaluated. All calculations are developed in open-source code written in Python® with the support of the numpy and scipy libraries. Keywords: Dual Cycle, Computational Modelling, Exergy Efficiency.
37

Dembinski, Hans Peter, Jim Pivarski y Henry Schreiner. "Recent developments in histogram libraries". EPJ Web of Conferences 245 (2020): 05014. http://dx.doi.org/10.1051/epjconf/202024505014.

Full text
Abstract
Boost.Histogram, a header-only C++14 library that provides multidimensional histograms and profiles, became available in Boost 1.70. It is extensible, fast, and uses modern C++ features. Using template metaprogramming, the most efficient code path for any given configuration is automatically selected. The library includes key features designed for the particle physics community, such as optional under- and overflow bins, weighted increments, reductions, growing axes, thread-safe filling, and memory-efficient counters with high-dynamic range. Python bindings for Boost.Histogram are being developed in the Scikit-HEP project to provide a fast, easy-to-install package as a backend for other Python libraries and for advanced users to manipulate histograms. Versatile and efficient histogram filling, effective manipulation, multithreading support, and other features make this a powerful tool. This library has also driven package distribution efforts in Scikit-HEP, allowing binary packages hosted on PyPI to be available for a very wide variety of platforms. Two other libraries fill out the remainder of the Scikit-HEP Python histogramming effort. Aghast is a library designed to provide conversions between different forms of histograms, enabling interaction between histogram libraries, often without an extra copy in memory. This enables a user to make a histogram in one library and then save it in another form, such as saving a Boost.Histogram in ROOT. And Hist is a library providing friendly, analyst-targeted syntax and shortcuts for quick manipulations and fast plotting using these two libraries.
38

Zahidi, Youssra, Yacine El Younoussi y Yassine Al-Amrani. "Different valuable tools for Arabic sentiment analysis: a comparative evaluation". International Journal of Electrical and Computer Engineering (IJECE) 11, n.º 1 (1 de febrero de 2021): 753. http://dx.doi.org/10.11591/ijece.v11i1.pp753-762.

Full text
Abstract
Arabic Natural language processing (ANLP) is a subfield of artificial intelligence (AI) that tries to build various applications in the Arabic language like Arabic sentiment analysis (ASA) that is the operation of classifying the feelings and emotions expressed for defining the attitude of the writer (neutral, negative or positive). In order to work on ASA, researchers can use various tools in their research projects without explaining the cause behind this use, or they choose a set of libraries according to their knowledge about a specific programming language. Because of their libraries' abundance in the ANLP field, especially in ASA, we are relying on JAVA and Python programming languages in our research work. This paper relies on making an in-depth comparative evaluation of different valuable Python and Java libraries to deduce the most useful ones in Arabic sentiment analysis (ASA). According to a large variety of great and influential works in the domain of ASA, we deduce that the NLTK, Gensim and TextBlob libraries are the most useful for Python ASA task. In connection with Java ASA libraries, we conclude that Weka and CoreNLP tools are the most used, and they have great results in this research domain.
39

Amela, Ramon, Cristian Ramon-Cortes, Jorge Ejarque, Javier Conejero y Rosa M. Badia. "Executing linear algebra kernels in heterogeneous distributed infrastructures with PyCOMPSs". Oil & Gas Science and Technology – Revue d’IFP Energies nouvelles 73 (2018): 47. http://dx.doi.org/10.2516/ogst/2018047.

Full text
Abstract
Python is a popular programming language due to the simplicity of its syntax, while still achieving a good performance even being an interpreted language. The adoption from multiple scientific communities has evolved in the emergence of a large number of libraries and modules, which has helped to put Python on the top of the list of the programming languages [1]. Task-based programming has been proposed in the recent years as an alternative parallel programming model. PyCOMPSs follows such approach for Python, and this paper presents its extensions to combine task-based parallelism and thread-level parallelism. Also, we present how PyCOMPSs has been adapted to support heterogeneous architectures, including Xeon Phi and GPUs. Results obtained with linear algebra benchmarks demonstrate that significant performance can be obtained with a few lines of Python.
40

Sánchez Villanueva, Javier. "Introducción a OpenKet". Revista de la Escuela de Física 2, n.º 1 (2 de septiembre de 2019): 3–10. http://dx.doi.org/10.5377/ref.v2i1.8290.

Full text
Abstract
This is an introductory manual for the Python library OpenKet. The library is a tool for manipulating objects in quantum mechanics, such as vectors in Dirac notation, operators, etc. It is free software released under the GNU GPL v3 license and uses other free-software libraries such as SymPy, Pylab and SciPy. The project began at the end of 2009 with Vicente Rodríguez, under the supervision of Pablo Barberis-Bolstein, as part of his graduation requirements. Dr. Barberis-Bolstein is a professor at the Instituto de Matemática Aplicada of UNAM and visited our country for CURCCAF in 2011. Using OpenKet also requires some knowledge of Python. OpenKet can manipulate expressions containing objects such as kets (vectors), bras (dual vectors), operators, adjoint operators, and raising and lowering operators, and it can perform various operations: addition, subtraction, multiplication by scalars, taking the Hermitian conjugate, applying operators, inner products, outer products, tensor products, and obtaining the matrix representation of an operator, among other functions. OpenKet is still in its early stages and therefore has many gaps; consequently, it is under active development.
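OpenKet's own functions are not shown in the abstract, so the sketch below instead uses SymPy's sympy.physics.quantum module (SymPy being one of the free libraries the abstract mentions) to illustrate the same kind of Dirac-notation manipulation: kets, bras, Hermitian conjugation, outer products, and tensor products. This is not OpenKet's API.

```python
from sympy import sqrt
from sympy.physics.quantum import Ket, Bra, Dagger, TensorProduct

# A simple superposition state and its dual (Hermitian conjugate gives a bra).
psi = (Ket('0') + Ket('1')) / sqrt(2)
print(Dagger(psi))

# Symbolic expression for the inner product <0|psi> and an outer product |0><1|.
inner = Bra('0') * psi
outer = Ket('0') * Bra('1')
print(inner)
print(outer)

# Tensor product of two kets.
print(TensorProduct(Ket('0'), Ket('1')))
```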
41

Zollweg, J. A. "EXTRACTING SPATIOTEMPORAL OBJECTS FROM RASTER DATA TO REPRESENT PHYSICAL FEATURES AND ANALYZE RELATED PROCESSES". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-4/W2 (19 de octubre de 2017): 87–92. http://dx.doi.org/10.5194/isprs-annals-iv-4-w2-87-2017.

Full text
Abstract
Numerous ground-based, airborne, and orbiting platforms provide remotely sensed data of remarkable spatial resolution at short time intervals. However, this spatiotemporal data is most valuable if it can be processed into information, thereby creating meaning. We live in a world of objects: cars, buildings, farms, etc. On a stormy day, we don’t see millions of cubes of atmosphere; we see a thunderstorm ‘object’. Temporally, we don’t see the properties of those individual cubes changing; we see the thunderstorm as a whole evolving and moving. There is a need to represent the bulky, raw spatiotemporal data from remote sensors as a small number of relevant spatiotemporal objects, thereby matching the human brain’s perception of the world. This presentation describes an efficient algorithm and system for extracting objects/features from raster-formatted remotely sensed data. The system makes use of the Python object-oriented programming language, SciPy/NumPy for matrix manipulation and scientific computation, and export/import via the GeoJSON standard geographic object data format. The example presented will show how thunderstorms can be identified and characterized in a spatiotemporal continuum using a Python program to process raster data from NOAA’s High-Resolution Rapid Refresh v2 (HRRRv2) data stream.
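A minimal sketch of the raster-to-object step described above, using NumPy and SciPy's connected-component labelling; the synthetic field, the threshold, and the GeoJSON point output are illustrative assumptions rather than the author's actual algorithm or data.

```python
import json
import numpy as np
from scipy import ndimage

# Illustrative smooth raster field; real input would come from the HRRRv2 stream.
rng = np.random.default_rng(1)
field = ndimage.gaussian_filter(rng.random((200, 200)), sigma=5)

# Threshold the raster and label connected regions as discrete "objects".
mask = field > field.mean() + 2 * field.std()
labels, n_objects = ndimage.label(mask)

# Represent each object by its centroid and export simple GeoJSON features.
centroids = ndimage.center_of_mass(mask, labels, list(range(1, n_objects + 1)))
features = [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [float(col), float(row)]},
     "properties": {"object_id": i + 1}}
    for i, (row, col) in enumerate(centroids)
]
print(json.dumps({"type": "FeatureCollection", "features": features})[:200])
```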
42

Herlawati, Herlawati y Rahmadya Trias Handayanto. "Penggunaan Matlab dan Python dalam Klasterisasi Data". Jurnal Kajian Ilmiah 20, n.º 1 (12 de mayo de 2020): 103–18. http://dx.doi.org/10.31599/jki.v20i1.85.

Full text
Abstract
Abstract: Organizations need to mine their data through a data clustering process, covering both historical data and data from the internet. Sometimes the data must be re-clustered to match actual conditions, so suitable clustering tools need to be prepared. In this study the K-Means method was chosen to compare two technical computing languages, Matlab and Python, which are currently in great demand among researchers and can be used by organizations that need a clustering process. The study shows that both Matlab and Python have sufficient libraries and toolboxes to help users cluster data and present the results graphically. The test results show that both programming languages are able to carry out the clustering process, producing two clusters: cluster 1 with its centre at coordinates (1.24, 1.34) and cluster 2 with its centre at coordinates (3.1, 3.07), together with a plot of the cluster distribution. Keywords: clustering, K-Means, Matlab, Python.
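The paper does not say which Python library was used, but a typical two-cluster K-Means run on the Python side can be sketched with scikit-learn as below; the sample data are illustrative, not the study's data set.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two illustrative point clouds roughly around (1.2, 1.3) and (3.1, 3.1).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=(1.2, 1.3), scale=0.3, size=(50, 2)),
    rng.normal(loc=(3.1, 3.1), scale=0.3, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)   # two centroids, one per cluster
print(kmeans.labels_[:10])       # cluster assignment of the first points
```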
43

Tejedor, Enric, Yolanda Becerra, Guillem Alomar, Anna Queralt, Rosa M. Badia, Jordi Torres, Toni Cortes y Jesús Labarta. "PyCOMPSs: Parallel computational workflows in Python". International Journal of High Performance Computing Applications 31, n.º 1 (27 de julio de 2016): 66–82. http://dx.doi.org/10.1177/1094342015594678.

Full text
Abstract
The use of the Python programming language for scientific computing has gained momentum in recent years. Its compact, readable syntax and its comprehensive set of scientific libraries are two important characteristics that favour its adoption. Nevertheless, Python still lacks a solution for easily parallelizing generic scripts on distributed infrastructures, since the current alternatives mostly require the use of message-passing APIs or are restricted to embarrassingly parallel computations. In that sense, this paper presents PyCOMPSs, a framework that facilitates the development of parallel computational workflows in Python. In this approach, the user writes the script in a sequential fashion and decorates the functions to be run as asynchronous parallel tasks. A runtime system is in charge of exploiting the inherent concurrency of the script, detecting the data dependencies between tasks and dispatching them to the available resources. Furthermore, we show how this programming model can be built on top of a Big Data storage architecture, where the data stored in the backend is abstracted and accessed from the application in the form of persistent objects.
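A minimal sketch of the programming model just described, assuming the documented pycompss decorators; the increment function is purely illustrative, and the script is meant to be launched through the COMPSs runtime, which detects the chained dependencies and executes the tasks asynchronously.

```python
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def increment(value):
    return value + 1

# The script reads sequentially; each call returns a future-like object and the
# runtime builds the dependency graph between tasks behind the scenes.
result = 0
for _ in range(5):
    result = increment(result)      # chained data dependencies

result = compss_wait_on(result)     # synchronization point retrieves the value
print(result)                       # 5
```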
44

Zhegallo, A. V. "Tracking moving objects on video in psychological studies". Experimental Psychology (Russia) 12, n.º 4 (2019): 5–11. http://dx.doi.org/10.17759/exppsy.2019120401.

Full text
Abstract
The article discusses the possibilities of using the OpenCV and dlib libraries to track the position of specified objects in video footage and to track a set of landmark points on the human face. It is shown that these problems can be solved effectively with compact programs in Python. The Python language is recommended to research psychologists as a tool for analyzing video and constructing complex experiments.
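A minimal sketch of the facial-landmark part of such a workflow with OpenCV and dlib; the video file name and the pre-trained shape_predictor_68_face_landmarks.dat model (which must be downloaded separately) are assumptions for illustration, not artifacts of the cited study.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local path

cap = cv2.VideoCapture("experiment_recording.mp4")  # illustrative file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 1):              # detect faces in the frame
        shape = predictor(gray, face)           # 68 facial landmark points
        points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
        # points can now be logged per frame for later analysis
cap.release()
```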
45

Akishin, Boris Alekseevich. "Using the Python Possibilities in Studying of the Mathematical Subjects in Technical Higher Education". Russian Digital Libraries Journal 23, n.º 1-2 (3 de marzo de 2020): 6–13. http://dx.doi.org/10.26907/1562-5419-2020-23-1-2-6-13.

Full text
46

Javidi, G., E. Sheybani y D. Mason. "Visualization of Real-Time Radar Data by Integration of X-Band Software". International Journal of Interdisciplinary Telecommunications and Networking 6, n.º 4 (octubre de 2014): 73–82. http://dx.doi.org/10.4018/ijitn.2014100108.

Full text
Abstract
To obtain a fully functional FMCW X-band radar for the SMARTLabs ACHIEVE trailer, code is needed to retrieve data from an FPGA board linked to the radar, calculate Fourier transforms, and display the power spectrum in near-real time, using freely available scientific development tools. To make communication between the FPGA board and the computer reliable and accurate, an initial step was to develop a specific data format in C. This was followed by the development of a method to visualize the data efficiently; here Python, along with its matplotlib, SciPy, and NumPy modules, was used. Both programs were then integrated within a graphical user interface.
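The Fourier-transform-and-display step can be sketched with NumPy and matplotlib as below; the sampling rate and the synthetic tone-plus-noise signal stand in for the real FPGA samples.

```python
import numpy as np
import matplotlib.pyplot as plt

fs = 1.0e6                          # illustrative sampling rate (Hz)
t = np.arange(4096) / fs
signal = np.sin(2 * np.pi * 75e3 * t) + 0.1 * np.random.randn(t.size)

# Windowed FFT and power spectrum in dB.
spectrum = np.fft.rfft(signal * np.hanning(t.size))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

plt.plot(freqs / 1e3, power_db)
plt.xlabel("Frequency (kHz)")
plt.ylabel("Power (dB)")
plt.show()
```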
47

Winkel, Benjamin y Axel Jessner. "Spectrum management and compatibility studies with Python". Advances in Radio Science 16 (4 de septiembre de 2018): 177–94. http://dx.doi.org/10.5194/ars-16-177-2018.

Full text
Abstract
We developed the pycraf Python package, which provides functions and procedures for various tasks related to spectrum-management compatibility studies. This includes an implementation of ITU-R Rec. P.452 (ITU-R, 2015), which makes it possible to calculate the path attenuation arising from the distance and terrain properties between an interferer and the victim service. A typical example would be the calculation of interference levels at a radio telescope produced by a radio broadcasting tower. Furthermore, pycraf provides functionality to calculate atmospheric attenuation as proposed in ITU-R Rec. P.676 (ITU-R, 2013). Using the rich ecosystem of scientific Python libraries and our pycraf package, we performed a large number of compatibility studies. Here, we highlight a recent case study in which we analysed the potential harm that the next-generation cell-phone standard 5G could cause to observations at a radio observatory. For this we implemented a Monte-Carlo simulation to deal with the quasi-statistical spatial distribution of base stations and user devices around the radio astronomy station.
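The Monte-Carlo idea can be sketched in plain NumPy as below. This is not pycraf's API (which implements the full ITU-R P.452 terrain-aware propagation model); it is only a schematic free-space stand-in, with assumed base-station counts, transmit powers, and distance ranges.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_stations = 1000, 200
freq_hz = 3.5e9                     # illustrative 5G carrier frequency
tx_power_dbm = 30.0                 # assumed base-station transmit power

# Random base-station distances from the telescope (1-30 km), per trial.
d = rng.uniform(1e3, 30e3, size=(n_trials, n_stations))

# Free-space path loss in dB (a placeholder for the P.452 path attenuation).
fspl_db = 20 * np.log10(d) + 20 * np.log10(freq_hz) - 147.55

# Aggregate received power per trial, summed in linear units.
rx_dbm = tx_power_dbm - fspl_db
aggregate_dbm = 10 * np.log10((10 ** (rx_dbm / 10)).sum(axis=1))
print(np.percentile(aggregate_dbm, [50, 95]))   # median and 95th percentile
```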
48

Raschka, Sebastian, Joshua Patterson y Corey Nolet. "Machine Learning in Python: Main Developments and Technology Trends in Data Science, Machine Learning, and Artificial Intelligence". Information 11, n.º 4 (4 de abril de 2020): 193. http://dx.doi.org/10.3390/info11040193.

Full text
Abstract
Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that are driving it, from processing the massive piles of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
49

Eschle, Jonas, Albert Navarro Puig, Rafael Silva Coutinho y Nicola Serra. "zfit: scalable pythonic fitting". EPJ Web of Conferences 245 (2020): 06025. http://dx.doi.org/10.1051/epjconf/202024506025.

Full text
Abstract
Statistical modeling and fitting is a key element in most HEP analyses. This task is usually performed in the C++-based framework ROOT/RooFit. Recently the HEP community has been shifting more towards the Python language, into which the tools above are only loosely integrated, and the lack of a stable, native Python-based toolkit became clear. We present zfit, a project that aims to build a fitting ecosystem by providing a carefully designed, stable API and a workflow for libraries to communicate, together with an implementation fully integrated into the Python ecosystem. It is built on top of one of the state-of-the-art industry tools, TensorFlow, which is used as the main computational backend. zfit provides data loading, extensive model-building capabilities, loss creation, minimization, and certain forms of error estimation. Each part also comes with convenient base classes built for customizability and extendability.
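A minimal unbinned Gaussian fit following zfit's documented quickstart pattern is sketched below; the observable range, parameter bounds, and toy data are illustrative, and details of the API may differ between zfit versions.

```python
import numpy as np
import zfit

obs = zfit.Space("x", limits=(-5, 5))

mu = zfit.Parameter("mu", 0.5, -5, 5)
sigma = zfit.Parameter("sigma", 1.2, 0.1, 10)
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)

# Toy data for illustration; a real analysis would load measured events instead.
data = zfit.Data.from_numpy(obs=obs, array=np.random.normal(0.8, 1.0, 10_000))

nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)   # loss creation
minimizer = zfit.minimize.Minuit()                    # minimization backend
result = minimizer.minimize(nll)
result.hesse()                                        # parameter uncertainty estimation
print(result.params)
```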
50

McFerren, G. y T. van Zyl. "GEOSPATIAL DATA STREAM PROCESSING IN PYTHON USING FOSS4G COMPONENTS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (22 de junio de 2016): 931–37. http://dx.doi.org/10.5194/isprs-archives-xli-b7-931-2016.

Full text
Abstract
One viewpoint of current and future IT systems holds that there is an increase in the scale and velocity at which data are acquired and analysed from heterogeneous, dynamic sources. In the earth observation and geoinformatics domains, this process is driven by the increase in number and types of devices that report location and the proliferation of assorted sensors, from satellite constellations to oceanic buoy arrays. Much of this data will be encountered as self-contained messages on data streams: continuous, infinite flows of data. Spatial analytics over data streams concerns the search for spatial and spatio-temporal relationships within and amongst data “on the move”. In spatial databases, queries can assess a store of data to unpack spatial relationships; this is not the case on streams, where spatial relationships need to be established with the incomplete data available. Methods for spatially-based indexing, filtering, joining and transforming of streaming data need to be established and implemented in software components. This article describes the usage patterns and performance metrics of a number of well-known FOSS4G Python software libraries within the data stream processing paradigm. In particular, we consider the RTree library for spatial indexing, the Shapely library for geometric processing and transformation, and the PyProj library for projection and geodesic calculations over streams of geospatial data. We introduce a message-oriented Python-based geospatial data streaming framework called Swordfish, which provides data stream processing primitives, functions, transports and a common data model for describing messages, based on the Open Geospatial Consortium Observations and Measurements (O&M) and Unidata Common Data Model (CDM) standards. We illustrate how the geospatial software components are integrated with the Swordfish framework. Furthermore, we describe the tight temporal constraints under which geospatial functionality can be invoked when processing high-velocity, potentially infinite geospatial data streams. The article discusses the performance of these libraries under simulated streaming loads (size, complexity and volume of messages) and how they can be deployed and utilised with Swordfish under real load scenarios, illustrated by a set of Vessel Automatic Identification System (AIS) use cases. We conclude that the described software libraries are able to perform adequately under geospatial data stream processing scenarios; many real application use cases will be handled sufficiently by the software.
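A schematic of how these three libraries combine on a stream of position messages is sketched below; the Swordfish framework itself is not shown, and the area-of-interest polygon and the AIS-like message tuples are illustrative assumptions.

```python
from pyproj import Transformer
from rtree import index
from shapely.geometry import Point, Polygon

# Area of interest (lon/lat) and a spatial index over its bounding box.
aoi = Polygon([(18.0, -34.5), (19.0, -34.5), (19.0, -33.5), (18.0, -33.5)])
idx = index.Index()
idx.insert(0, aoi.bounds)

# Project filtered positions to a metric CRS for distance-based analytics.
to_metric = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

def process(stream):
    """Filter, test, and transform each (id, lon, lat) message as it arrives."""
    for vessel_id, lon, lat in stream:
        if not list(idx.intersection((lon, lat, lon, lat))):
            continue                        # cheap bounding-box rejection
        if aoi.contains(Point(lon, lat)):   # exact point-in-polygon test
            x, y = to_metric.transform(lon, lat)
            yield vessel_id, x, y

for msg in process([("vessel-1", 18.4, -34.0), ("vessel-2", 25.0, -30.0)]):
    print(msg)
```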
