
Journal articles on the topic 'Scientific Array Data'



Consult the top 50 journal articles for your research on the topic 'Scientific Array Data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Harris, Charles R., K. Jarrod Millman, Stéfan J. van der Walt, et al. "Array programming with NumPy." Nature 585, no. 7825 (2020): 357–62. http://dx.doi.org/10.1038/s41586-020-2649-2.

Abstract:
Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves [1] and in the first imaging of a black hole [2]. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis.
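To make the array-programming paradigm concrete, here is a minimal NumPy sketch (our illustration, not code from the paper): per-channel statistics are computed with whole-array operations and broadcasting rather than explicit loops.

```python
import numpy as np

# Vectorized operations act on whole arrays at once, with no explicit loops.
rng = np.random.default_rng(42)
signal = rng.normal(size=(1000, 3))        # 1000 samples, 3 channels

centered = signal - signal.mean(axis=0)    # broadcasting subtracts per-channel means
cov = centered.T @ centered / len(signal)  # 3x3 covariance via matrix multiplication
eigvals = np.linalg.eigvalsh(cov)          # principal variances of the channels
print(eigvals)
```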
2

Lee, Jun-Yeong, Moon-Hyun Kim, Syed Asif Raza Shah, Sang-Un Ahn, Heejun Yoon, and Seo-Young Noh. "Performance Evaluations of Distributed File Systems for Scientific Big Data in FUSE Environment." Electronics 10, no. 12 (2021): 1471. http://dx.doi.org/10.3390/electronics10121471.

Abstract:
Data are important and ever growing in data-intensive scientific environments. Such research data growth requires data storage systems that play pivotal roles in data management and analysis for scientific discoveries. Redundant Array of Independent Disks (RAID), a well-known storage technology combining multiple disks into a single large logical volume, has been widely used for the purpose of data redundancy and performance improvement. However, this requires RAID-capable hardware or software to build up a RAID-enabled disk array. In addition, it is difficult to scale up the RAID-based storage. In order to mitigate such a problem, many distributed file systems have been developed and are being actively used in various environments, especially in data-intensive computing facilities, where a tremendous amount of data have to be handled. In this study, we investigated and benchmarked various distributed file systems, such as Ceph, GlusterFS, Lustre and EOS for data-intensive environments. In our experiment, we configured the distributed file systems under a Reliable Array of Independent Nodes (RAIN) structure and a Filesystem in Userspace (FUSE) environment. Our results identify the characteristics of each file system that affect the read and write performance depending on the features of data, which have to be considered in data-intensive computing environments.
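As a rough illustration of how such a comparison can be scripted, here is a minimal Python sketch (ours, not the authors' benchmark; the mount points are hypothetical, and the paper's actual setup was far more elaborate):

```python
import os
import time

def write_read_throughput(path, size_mb=256, block_kb=1024):
    """Rough sequential write/read throughput (MB/s) on one mounted file system."""
    block = os.urandom(block_kb * 1024)
    n_blocks = size_mb * 1024 // block_kb
    fname = os.path.join(path, "bench.tmp")

    t0 = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())              # force data to the file system, not the page cache
    write_mbps = size_mb / (time.perf_counter() - t0)

    t0 = time.perf_counter()
    with open(fname, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = size_mb / (time.perf_counter() - t0)

    os.remove(fname)
    return write_mbps, read_mbps

# Hypothetical FUSE mount points for the file systems under test.
for mount in ["/mnt/cephfs", "/mnt/glusterfs", "/mnt/lustre", "/mnt/eos"]:
    if os.path.ismount(mount):
        print(mount, write_read_throughput(mount))
```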
3

Glushanovskiy, A. V. "Bibliometric analysis of Russian publications’ quality in physical area, included to the Web of Science Core Collection Data Base." Bibliosphere, no. 2 (July 21, 2020): 49–60. http://dx.doi.org/10.20913/1815-3186-2020-2-49-60.

Abstract:
The article analyzes the changes in the bibliometric characteristics of the array of Russian publications in the field of physics reflected in the Web of Science Core Collection (WoS CC) in 2018, compared to the same characteristics in 2010. The main parameter used to assess the quality (research level) of the arrays from a bibliometric point of view was the “Comprehensive index of quality” (CIQ) of the publication array, calculated on the basis of one of the parameters in the “Method for calculating the qualitative indicator of the state task ‘the Comprehensive performance score publication’” used by the Ministry of Science and Higher Education of the Russian Federation. It was found that, with an almost twofold increase in the volume of the array, there was a slight decrease in its quality in terms of CIQ in 2018 in comparison with 2010. The author also compared the characteristics of the array of Russian publications in 2018 with those of the arrays of physics publications of Germany, India and Great Britain, which are located close to Russia in the ranking by number of publications included in the WoS array (in this ranking, Russia was in fourth place in 2018). In the ranking based on the CIQ indicator, the arrays of these countries are significantly ahead of the Russian one, and Russia is only in sixth place. The main reasons for this lag in the Russian publication array are identified: a lower percentage of Russian publications in high-quartile journals and a greater number of publications from conference proceedings. The conclusion is made about the applicability of bibliometric analysis for identifying trends in publishing activities in the scientific field.
4

NARAYANAN, P. J., and LARRY S. DAVIS. "REPLICATED IMAGE ALGORITHMS AND THEIR ANALYSES ON SIMD MACHINES." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 02n03 (1992): 335–52. http://dx.doi.org/10.1142/s0218001492000217.

Abstract:
Data parallel processing on processor array architectures has gained popularity in data intensive applications, such as image processing and scientific computing, as massively parallel processor array machines became commercially feasible. The data parallel paradigm of assigning one processing element to each data element results in inefficient utilization of a large processor array when a relatively small data structure is processed on it. The large degree of parallelism of a massively parallel processor array machine does not yield a faster solution to a problem involving relatively small data structures than the modest degree of parallelism of a machine that is just as large as the data structure. We present a data replication technique to speed up the processing of small data structures on large processor arrays. In this paper, we present replicated data algorithms for digital image convolutions and median filtering, and compare their performance with that of conventional data parallel algorithms on three popular array interconnection networks, namely, the 2-D mesh, the 3-D mesh, and the hypercube.
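The replicated-data idea can be sketched in a few lines (a conceptual Python model of ours, not the authors' SIMD implementation): each replica of the small image handles a subset of the kernel offsets, and the partial results are combined at the end.

```python
import numpy as np

def replicated_convolution(image, kernel, n_replicas=4):
    """Each image 'replica' computes partial sums for a subset of kernel
    offsets; the replicas' results are then combined (the combine step)."""
    kh, kw = kernel.shape
    h, w = image.shape
    offsets = [(i, j) for i in range(kh) for j in range(kw)]
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    partials = np.zeros((n_replicas, h, w))
    for r in range(n_replicas):
        for i, j in offsets[r::n_replicas]:      # offsets assigned to replica r
            partials[r] += kernel[i, j] * padded[i:i + h, j:j + w]
    return partials.sum(axis=0)                  # combine partial results

img = np.arange(36.0).reshape(6, 6)
k = np.ones((3, 3)) / 9.0                        # 3x3 mean filter
print(replicated_convolution(img, k))
```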
5

Silver, Jeremy D., and Charles S. Zender. "The compression–error trade-off for large gridded data sets." Geoscientific Model Development 10, no. 1 (2017): 413–23. http://dx.doi.org/10.5194/gmd-10-413-2017.

Abstract:
The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
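The core of layer-packing is easy to sketch (a simplified Python model of ours; the actual implementation works on netCDF-4 files): each vertical layer gets its own scale and offset before quantization to 16-bit integers.

```python
import numpy as np

def layer_pack(data):
    """Pack each vertical layer with its own (scale, offset) pair into int16."""
    nz = data.shape[0]
    packed = np.empty(data.shape, dtype=np.int16)
    scales, offsets = np.empty(nz), np.empty(nz)
    for k in range(nz):
        lo, hi = data[k].min(), data[k].max()
        scales[k] = (hi - lo) / 65534 or 1.0     # guard against constant layers
        offsets[k] = (hi + lo) / 2
        packed[k] = np.round((data[k] - offsets[k]) / scales[k]).astype(np.int16)
    return packed, scales, offsets

def layer_unpack(packed, scales, offsets):
    return packed * scales[:, None, None] + offsets[:, None, None]

# A field whose magnitude varies strongly with the vertical level.
field = np.random.rand(10, 4, 4) * np.logspace(0, 5, 10)[:, None, None]
packed, s, o = layer_pack(field)
print("max abs error:", np.abs(layer_unpack(packed, s, o) - field).max())
```

Because the scale is chosen per layer, the quantization error stays proportional to each layer's own value range, which is the advantage over a single scalar scale for the whole array.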
6

Mokhnacheva, Yu V., and V. A. Tsvetkova. "Russia in the world array of scientific publications." Вестник Российской академии наук 89, no. 8 (2019): 820–30. http://dx.doi.org/10.31857/s0869-5873898820-830.

Abstract:
This article presents the results of a study on the representation of Russian publications in the global array of publications in narrow thematic areas over the entire post-Soviet period, using ranking distributions from the Web of Science Core Collection (WoS CC). For extraction and analysis, the WoS subject categories were used as classifiers. Of the 252 subject categories in the WoS CC, the share of Russian publications for 2010–2017 was not less than 0.4% of the global flow in 132 scientific areas. In the period 1993–2000, a gradual recovery of Russia's lost position in the world ranking of countries by number of publications in the WoS CC was found. Currently, positive changes have been observed both for the entire array of Russian publications and for narrow scientific topics. The highest ranking positions for Russian publications occurred in 1993–1999, and the greatest decline, when the share of Russian publications fell to its minimum values, was in 2011–2014. Data are presented on the scientific areas in which Russia managed to stay among the top 10 leading countries during its recent history, according to share of publications in the global array. This list expanded slightly from 2010 to 2017; today it includes 39 areas in which Russia is in the top 10 countries, and Russia is among the five leading countries in eight areas of knowledge.
7

Bonaldi, A., T. An, M. Brüggen, et al. "Square Kilometre Array Science Data Challenge 1: analysis and results." Monthly Notices of the Royal Astronomical Society 500, no. 3 (2020): 3821–37. http://dx.doi.org/10.1093/mnras/staa3023.

Abstract:
As the largest radio telescope in the world, the Square Kilometre Array (SKA) will lead the next generation of radio astronomy. The feats of engineering required to construct the telescope array will be matched only by the techniques developed to exploit the rich scientific value of the data. To drive forward the development of efficient and accurate analysis methods, we are designing a series of data challenges that will provide the scientific community with high-quality data sets for testing and evaluating new techniques. In this paper, we present a description and results from the first such Science Data Challenge 1 (SDC1). Based on SKA MID continuum simulated observations and covering three frequencies (560, 1400, and 9200 MHz) at three depths (8, 100, and 1000 h), SDC1 asked participants to apply source detection, characterization, and classification methods to simulated data. The challenge opened in 2018 November, with nine teams submitting results by the deadline of 2019 April. In this work, we analyse the results for eight of those teams, showcasing the variety of approaches that can be successfully used to find, characterize, and classify sources in a deep, crowded field. The results also demonstrate the importance of building domain knowledge and expertise on this kind of analysis to obtain the best performance. As high-resolution observations begin revealing the true complexity of the sky, one of the outstanding challenges emerging from this analysis is the ability to deal with highly resolved and complex sources as effectively as the unresolved source population.
8

Edwards, H. Carter, Daniel Sunderland, Vicki Porter, Chris Amsler, and Sam Mish. "Manycore Performance-Portability: Kokkos Multidimensional Array Library." Scientific Programming 20, no. 2 (2012): 89–114. http://dx.doi.org/10.1155/2012/917630.

Abstract:
Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implementing computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data parallel kernels, and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
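Kokkos itself is a C++ library, but the underlying point, that performance depends on whether the traversal order matches the memory layout, can be demonstrated in a few lines of Python (our illustration, not Kokkos code):

```python
import time
import numpy as np

n = 2000
a_c = np.zeros((n, n), order="C")     # row-major: rows contiguous in memory
a_f = np.asfortranarray(a_c)          # column-major: columns contiguous in memory

def sum_by_columns(a):
    # Column-wise traversal: strided for C layout, contiguous for Fortran layout.
    return sum(a[:, j].sum() for j in range(a.shape[1]))

for name, arr in [("C layout", a_c), ("Fortran layout", a_f)]:
    t0 = time.perf_counter()
    sum_by_columns(arr)
    print(name, f"{time.perf_counter() - t0:.3f} s")
```

Kokkos makes this layout choice a compile-time property of the array type, so the same kernel source can get the access pattern each device prefers.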
9

Federici, Memmo, Bruno Luigi Martino, and Pietro Ubertini. "AVES: A high performance computer cluster array for the INTEGRAL satellite scientific data analysis." Experimental Astronomy 34, no. 1 (2012): 105–21. http://dx.doi.org/10.1007/s10686-012-9301-6.

10

Labarta, J., E. Ayguadé, J. Oliver, and D. S. Henty. "New OpenMP Directives for Irregular Data Access Loops." Scientific Programming 9, no. 2-3 (2001): 175–83. http://dx.doi.org/10.1155/2001/798505.

Abstract:
Many scientific applications involve array operations that are sparse in nature, i.e., array elements depend on the values of relatively few elements of the same or another array. When parallelised in the shared-memory model, there are often inter-thread dependencies which require that the individual array updates are protected in some way. Possible strategies include protecting all the updates, or having each thread compute local temporary results which are then combined globally across threads. However, for the extremely common situation of sparse array access, neither of these approaches is particularly efficient. The key point is that data access patterns usually remain constant for a long time, so it is possible to use an inspector/executor approach. When the sparse operation is first encountered, the access pattern is inspected to identify those updates which have potential inter-thread dependencies. Whenever the code is actually executed, only these selected updates are protected. We propose a new OpenMP clause, indirect, for parallel loops that have irregular data access patterns. This is trivial to implement in a conforming way by protecting every array update, but it also allows for an inspector/executor compiler implementation which will be more efficient in sparse cases. We describe efficient compiler implementation strategies for the new directive. We also present timings from the kernels of a Discrete Element Modelling application and a Finite Element code where the inspector/executor approach is used. The results demonstrate that the method can be extremely efficient in practice.
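The inspector stage can be sketched in Python (our conceptual model; the proposal itself is an OpenMP clause implemented in the compiler and runtime): the access pattern is scanned once, and only elements touched by more than one thread are flagged for protection.

```python
import numpy as np
from collections import Counter

def inspect(indices, n_threads):
    """Inspector: scan the (constant) access pattern once and flag only the
    array elements that more than one thread would update."""
    counts = Counter()
    for chunk in np.array_split(indices, n_threads):   # static loop partition
        counts.update(set(chunk.tolist()))             # one vote per thread
    return {i for i, c in counts.items() if c > 1}

# Executor (sketch): only flagged updates need a lock or atomic; the rest
# can proceed unprotected on every subsequent execution of the loop.
indices = np.random.randint(0, 1000, size=100_000)     # irregular index array
shared = inspect(indices, n_threads=8)
protected = sum(int(i) in shared for i in indices)
print(f"{protected} of {indices.size} updates need protection")
```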
11

McKee, James W. "Pulsar science with data from the Large European Array for Pulsars." Proceedings of the International Astronomical Union 13, S337 (2017): 374–75. http://dx.doi.org/10.1017/s1743921317009462.

Abstract:
The Large European Array for Pulsars (LEAP) is a European Pulsar Timing Array project that combines the Lovell, Effelsberg, Nançay, Sardinia, and Westerbork radio telescopes into a single tied-array, and makes monthly observations of a set of millisecond pulsars (MSPs). The overview of our experiment is presented in Bassa et al. (2016). Baseband data are recorded at a central frequency of 1396 MHz and a bandwidth of 128 MHz at each telescope, and are correlated offline on a cluster at Jodrell Bank Observatory using a purpose-built correlator, detailed in Smits et al. (2017). LEAP offers a substantial increase in sensitivity over that of the individual telescopes, and can operate in timing and imaging modes (notably in observations of the galactic centre radio magnetar; Wucknitz 2015). To date, 4 years of observations have been reduced. Here, we report on the scientific projects which have made use of LEAP data.
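The principle of a tied array, delay-compensating each telescope's voltage stream and adding them coherently, can be shown with a toy Python model (ours; LEAP's correlator is a purpose-built system):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tel, n_samp = 5, 4096
delays = rng.integers(0, 50, size=n_tel)     # toy integer-sample geometric delays

pulse = np.zeros(n_samp)
pulse[2000:2010] = 1.0                       # common astrophysical signal
# Each telescope sees the same signal, delayed, plus its own unit-variance noise.
streams = [np.roll(pulse, d) + rng.normal(0, 1, n_samp) for d in delays]

# Tied array: remove each known delay, then add the voltages coherently.
tied = sum(np.roll(s, -d) for s, d in zip(streams, delays))

# Signal adds as N, noise as sqrt(N): S/N improves by ~sqrt(n_tel) over one dish.
print("tied-array peak:", tied[2000:2010].mean().round(2))
```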
12

Taylor, A. R. "Data Intensive Radio Astronomy en route to the SKA: The Rise of Big Radio Data." Proceedings of the International Astronomical Union 10, H16 (2012): 677–78. http://dx.doi.org/10.1017/s1743921314012861.

Abstract:
Advances in both digital processing devices and technologies to sample the focal and aperture planes of radio antennas are enabling observations of the radio sky with high spectral and spatial resolution combined with large bandwidth and field of view. As a consequence, survey mode radio astronomy generating vast amounts of data and involving globally distributed collaborations is fast becoming a primary tool for scientific advance. The Square Kilometre Array (SKA) will open up a new frontier in data intensive astronomy. Within the next few years SKA precursor telescopes will demonstrate new technologies and take the first major steps toward the SKA. Projects that pathfind the scientific journey to the SKA with these and other telescopes are currently underway and being planned. The associated exponential growth in data requires us to explore new methodologies for collaborative end-to-end execution of data intensive observing programs.
13

Pintore, F., A. Giuliani, A. Belfiore, et al. "Scientific prospects for a mini-array of ASTRI telescopes: A γ-ray TeV data challenge." Journal of High Energy Astrophysics 26 (June 2020): 83–94. http://dx.doi.org/10.1016/j.jheap.2020.03.002.

14

Srinivasulu, P., P. Yasodha, P. Kamaraj, et al. "1280-MHz Active Array Radar Wind Profiler for Lower Atmosphere: System Description and Data Validation." Journal of Atmospheric and Oceanic Technology 29, no. 10 (2012): 1455–70. http://dx.doi.org/10.1175/jtech-d-12-00030.1.

Abstract:
An L-band radar wind profiler was established at National Atmospheric Research Laboratory, Gadanki, India (13.5°N, 79.2°E), to provide continuous high-resolution wind measurements in the lower atmosphere. This system utilizes a fully active array and passive beam-forming network. It operates at 1280 MHz with peak output power of 1.2 kW. The active array comprises a 16 × 16 array of microstrip patch antenna elements fed by dedicated solid-state transceiver modules. A 2D modified Butler beam-forming network is employed to feed the active array. The combination of active array and passive beam-forming network results in enhanced signal-to-noise ratio and simple beam steering. This system also comprises a direct intermediate frequency (IF) digital receiver and pulse compression scheme, which result in more flexibility and enhanced height coverage. The scientific objectives of this profiler are to study the atmospheric boundary layer dynamics and precipitation. Observations made by this profiler have been validated using a collocated GPS sonde. This paper presents the detailed system description, including sample observations for clear-air and precipitation cases.
15

Wang, Wei, and Shi Qun Yin. "A Method of Solving Data Consistency of Disk Array Cache." Advanced Materials Research 532-533 (June 2012): 1172–76. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1172.

Abstract:
To increase the speed of reading data from disk array storage, engineers have introduced cache technology into disk arrays. Although caching solves the problem of read efficiency, after countless write operations in the disk cache, data consistency becomes a prominent problem. Under power failures or abnormal machine faults in particular, the consistency of the data is difficult to guarantee. In this paper, we adopt Non-Volatile RAM (NVRAM) devices so that data in the disk array cache are not lost after a power failure. We design a new cache organizational structure, introducing two tables (a real-time mapping table and a backup mapping table) and a cache backup block. At the macroscopic level, data can be recovered by copying between the two tables; at the microscopic level, the cache backup block can back up cache data from failed writes. In the event of power failure or system breakdown, this technology ensures that data are not easily lost and that the original data can be recovered after a system crash, thus ensuring the consistency of the data cache.
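The described recovery scheme can be modelled in a few lines of Python (our toy sketch of the two-table idea, not the authors' implementation):

```python
import copy

class NVCache:
    """Toy model: a real-time mapping table, a backup mapping table,
    and a backup block that saves the old value of an in-flight write."""
    def __init__(self):
        self.realtime = {}        # block -> data (real-time mapping table)
        self.backup = {}          # block -> data (backup mapping table)
        self.backup_block = None

    def write(self, block, data, fail=False):
        self.backup_block = (block, self.realtime.get(block))  # save old value first
        if fail:
            raise IOError("power failure mid-write")            # simulated fault
        self.realtime[block] = data
        self.backup = copy.deepcopy(self.realtime)              # tables consistent again
        self.backup_block = None

    def recover(self):
        self.realtime = copy.deepcopy(self.backup)              # macroscopic: table copy
        if self.backup_block is not None:                       # microscopic: undo block
            block, old = self.backup_block
            if old is not None:
                self.realtime[block] = old
            self.backup_block = None

cache = NVCache()
cache.write("b1", "v1")
try:
    cache.write("b1", "v2", fail=True)   # crash before the tables are synced
except IOError:
    cache.recover()
print(cache.realtime)                    # {'b1': 'v1'}: consistent old state
```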
16

Sultana, Camille M., Gavin C. Cornwell, Paul Rodriguez, and Kimberly A. Prather. "FATES: a flexible analysis toolkit for the exploration of single-particle mass spectrometer data." Atmospheric Measurement Techniques 10, no. 4 (2017): 1323–34. http://dx.doi.org/10.5194/amt-10-1323-2017.

Abstract:
Single-particle mass spectrometer (SPMS) analysis of aerosols has become increasingly popular since its invention in the 1990s. Today many iterations of commercial and lab-built SPMSs are in use worldwide. However, supporting analysis toolkits for these powerful instruments are outdated, have limited functionality, or are versions that are not available to the scientific community at large. In an effort to advance this field and allow better communication and collaboration between scientists, we have developed FATES (Flexible Analysis Toolkit for the Exploration of SPMS data), a MATLAB toolkit easily extensible to an array of SPMS designs and data formats. FATES was developed to minimize the computational demands of working with large data sets while still allowing easy maintenance, modification, and utilization by novice programmers. FATES permits scientists to explore, without constraint, complex SPMS data with simple scripts in a language popular for scientific numerical analysis. In addition FATES contains an array of data visualization graphic user interfaces (GUIs) which can aid both novice and expert users in calibration of raw data; exploration of the dependence of mass spectral characteristics on size, time, and peak intensity; and investigations of clustered data sets.
17

Farnes, Jamie, Ben Mort, Fred Dulwich, Stef Salvini, and Wes Armour. "Science Pipelines for the Square Kilometre Array." Galaxies 6, no. 4 (2018): 120. http://dx.doi.org/10.3390/galaxies6040120.

Abstract:
The Square Kilometre Array (SKA) will be both the largest radio telescope ever constructed and the largest Big Data project in the known Universe. The first phase of the project will generate on the order of five zettabytes of data per year. A critical task for the SKA will be its ability to process data for science, which will need to be conducted by science pipelines. Together with polarization data from the LOFAR Multifrequency Snapshot Sky Survey (MSSS), we have been developing a realistic SKA-like science pipeline that can handle the large data volumes generated by LOFAR at 150 MHz. The pipeline uses task-based parallelism to image, detect sources and perform Faraday tomography across the entire LOFAR sky. The project thereby provides a unique opportunity to contribute to the technological development of the SKA telescope, while simultaneously enabling cutting-edge scientific results. In this paper, we provide an update on current efforts to develop a science pipeline that can enable tight constraints on the magnetised large-scale structure of the Universe.
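Task-based parallelism of this kind is straightforward to sketch (our Python illustration with stand-in processing steps; the actual pipeline operates on LOFAR visibilities):

```python
from concurrent.futures import ProcessPoolExecutor

# Each sky field is an independent task: image it, detect sources, then run
# Faraday tomography on the detections. Task-based parallelism lets the
# scheduler keep all cores busy without hand-written data partitioning.
def image_field(field_id):
    return f"image_{field_id}"           # stand-in for the real imaging step

def detect_sources(image):
    return [f"{image}_src{i}" for i in range(3)]

def faraday_tomography(source):
    return f"RM({source})"

def process_field(field_id):
    image = image_field(field_id)
    return [faraday_tomography(s) for s in detect_sources(image)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for result in pool.map(process_field, range(8)):   # 8 fields in parallel
            print(result)
```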
18

Murray, S. G., C. Power, and A. S. G. Robotham. "Modelling Galaxy Populations in the Era of Big Data." Proceedings of the International Astronomical Union 10, S306 (2014): 304–6. http://dx.doi.org/10.1017/s1743921314010710.

Abstract:
The coming decade will witness a deluge of data from next generation galaxy surveys such as the Square Kilometre Array and Euclid. How can we optimally and robustly analyse these data to maximise scientific returns from these surveys? Here we discuss recent work in developing both the conceptual and software frameworks for carrying out such analyses and their application to the dark matter halo mass function. We summarise what we have learned about the HMF from the last 10 years of precision CMB data using the open-source HMFcalc framework, before discussing how this framework is being extended to the full Halo Model.
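The open-source hmf package is the modern engine behind HMFcalc; a minimal usage sketch follows (attribute and parameter names reflect our reading of the package's documentation and may differ between versions, so check the current docs):

```python
# Assumed interface of the `hmf` package (the HMFcalc engine).
from hmf import MassFunction

mf = MassFunction(z=0.0, Mmin=10, Mmax=15)   # log10 halo mass range in Msun/h
print(mf.m[:3], mf.dndm[:3])                 # mass grid and differential mass function

mf.update(z=1.0)                             # recompute the HMF at higher redshift
print(mf.dndm[:3])
```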
19

Ruiz-Rosero, Juan, Gustavo Ramirez-Gonzalez, and Rahul Khanna. "Field Programmable Gate Array Applications—A Scientometric Review." Computation 7, no. 4 (2019): 63. http://dx.doi.org/10.3390/computation7040063.

Abstract:
Field Programmable Gate Array (FPGA) is a general purpose programmable logic device that can be configured by a customer after manufacturing to perform anything from simple logic gate operations to complex systems on chip or even artificial intelligence systems. Scientific publications related to FPGAs started in 1992 and, up to now, we found more than 70,000 documents in the two leading scientific databases (Scopus and Clarivate Web of Science). These publications show the vast range of applications based on FPGAs, from the new mechanism that enables the magnetic suspension system for the kilogram redefinition, to the Mars rovers' navigation systems. This paper reviews the top FPGA applications by a scientometric analysis in ScientoPy, covering publications related to FPGAs from 1992 to 2018. Here we found the top 150 applications, which we divided into the following categories: digital control, communication interfaces, networking, computer security, cryptography techniques, machine learning, digital signal processing, image and video processing, big data, computer algorithms and other applications. Also, we present an evolution and trend analysis of the related applications.
20

Kassaras, I., Z. Roumelioti, O. J. Ktenidou, K. Pitilakis, N. Voulgaris, and K. Makropoulos. "ACCELEROMETRIC DATA AND WEB PORTAL FOR THE VERTICAL CORINTH GULF SOFT SOIL ARRAY (CORSSA)." Bulletin of the Geological Society of Greece 50, no. 2 (2017): 1081. http://dx.doi.org/10.12681/bgsg.11813.

Abstract:
Strong motion data recorded during the 15-year operation of the CORinth Gulf Soft Soil Array (CORSSA) in the highly seismic region of Aegion have been homogenized and organized in a MySQL database. In the present work we describe the contents of the database and the web portal through which these data are publicly accessible. CORSSA comprises one surface and four downhole 3-D broadband accelerometric stations. It was installed in 2002, in the framework of European project CORSEIS, aiming at gathering data for studying site effects, liquefaction, and non-linear behaviour of soils, as well as earthquake source properties. To date, the array has recorded 549 local and regional events with magnitudes ranging from 1.1 to 6.5. Although the vast majority of the recorded events caused weak ground motion at the CORSSA site, the scientific value of the data set pertains to the sparsity of this kind of infrastructure in most parts of the world.
21

Heald, George, Sui Mao, Valentina Vacca, et al. "Magnetism Science with the Square Kilometre Array." Galaxies 8, no. 3 (2020): 53. http://dx.doi.org/10.3390/galaxies8030053.

Abstract:
The Square Kilometre Array (SKA) will answer fundamental questions about the origin, evolution, properties, and influence of magnetic fields throughout the Universe. Magnetic fields can illuminate and influence phenomena as diverse as star formation, galactic dynamics, fast radio bursts, active galactic nuclei, large-scale structure, and dark matter annihilation. Preparations for the SKA are swiftly continuing worldwide, and the community is making tremendous observational progress in the field of cosmic magnetism using data from a powerful international suite of SKA pathfinder and precursor telescopes. In this contribution, we revisit community plans for magnetism research using the SKA, in light of these recent rapid developments. We focus in particular on the impact that new radio telescope instrumentation is generating, thus advancing our understanding of key SKA magnetism science areas, as well as the new techniques that are required for processing and interpreting the data. We discuss these recent developments in the context of the ultimate scientific goals for the SKA era.
22

Moreira, Belmiro, Spyridon Trigazis, and Theodoros Tsioutsias. "Optimizing OpenStack Nova for Scientific Workloads." EPJ Web of Conferences 214 (2019): 07031. http://dx.doi.org/10.1051/epjconf/201921407031.

Abstract:
The CERN OpenStack cloud provides over 300,000 CPU cores to run data processing analyses for the Large Hadron Collider (LHC) experiments. Delivering these services with high performance and reliable service levels, while at the same time ensuring continuously high resource utilization, has been one of the major challenges for the CERN cloud engineering team. Several optimizations, like NUMA-aware scheduling and huge pages, have been deployed to improve scientific workload performance, but the CERN Cloud team continues to explore new possibilities, like preemptible instances and containers on bare metal. In this paper we dive into the concept and implementation challenges of preemptible instances and containers on bare metal for scientific workloads. We also explore how they can improve scientific workload throughput and infrastructure resource utilization. We present the ongoing collaboration with the Square Kilometre Array (SKA) community to develop the necessary upstream enhancements to further improve OpenStack Nova to support large-scale scientific workloads.
23

Busby, Robert W., and Kasey Aderhold. "The Alaska Transportable Array: As Built." Seismological Research Letters 91, no. 6 (2020): 3017–27. http://dx.doi.org/10.1785/0220200154.

Abstract:
Alaska is the last frontier and final destination for the National Science Foundation-supported EarthScope USArray Transportable Array (TA) project. The goal of this project is to record earthquakes and image the structure of the North American continent. The Alaska TA consists of 283 broadband seismic stations evenly spaced about 85 km apart to cover the state of Alaska and into western Canada. The sensor emplacement technique and station design were developed specifically for superior performance—both in terms of seismic noise levels and station durability. This technique and design were used for the 194 new stations installed as well as the 32 existing broadband stations that were upgraded. Trial stations were installed in 2011–2013 as part of a process to test and refine the installation design. The main deployment began in 2014 using the final station design and was completed in 2017. From 2018 through 2020, Incorporated Research Institutions for Seismology (IRIS) operated the Alaska TA by performing servicing, station improvements, and data quality monitoring. High data return was maintained throughout, though some stations had lower real-time data delivery in winter. 110 TA stations are expected to transition to other operators in 2019 and 2020, and the data from these are openly available under new network codes. The last 84 stations are expected to be removed during the 2021 field season to close out the TA project. The Alaska TA was installed safely despite a challenging environment and has been operated to maximize the continuity and quality of data collected across a vast geographic region, enabling exciting scientific research for years to come.
24

Nazarovets, Maryna. "Innovative Instruments in Ukrainian Scientific Communication." Current Issues of Mass Communication, no. 21 (2017): 8–23. http://dx.doi.org/10.17721/2312-5160.2017.21.08-23.

Abstract:
The development of information technologies has led to changes in the processes of scientific communication and in the role of library support of research processes. Librarians Jeroen Bosman and Bianca Kramer from Utrecht University (the Netherlands) conducted a global online survey, "Innovations in Scholarly Communication", to study the situation. The main objective of our study is to identify the main trends in the usage of modern communication tools by Ukrainian scientists through an analysis of the responses of the 117 Ukrainian respondents who took part in this online survey. The source of the study is the open data set "Innovations in scholarly communication – data of the global 2015-2016 survey", available at the Zenodo scientific repository. The responses of Ukrainian respondents were selected for study from the main array of the open data. Percentages of the received answers were calculated, and the obtained results were compared with the global ones. The study found that changes in research workflows in Ukraine related to the development of digital technologies are broadly similar to the global ones, with some exceptions. The data obtained as a result of the survey can be used as empirical material for further research on library web support of Ukrainian scientists' research activities. But given the rapid changes in the scholarly communication landscape associated with the development of information technology, the growing number of its users and their digital literacy, the current state of usage of these technologies in the scientific process requires additional research.
25

Hales, Riley Chad, Everett James Nelson, Gustavious P. Williams, Norman Jones, Daniel P. Ames, and J. Enoch Jones. "The Grids Python Tool for Querying Spatiotemporal Multidimensional Water Data." Water 13, no. 15 (2021): 2066. http://dx.doi.org/10.3390/w13152066.

Abstract:
Scientific datasets from global-scale earth science models and remote sensing instruments are becoming available at greater spatial and temporal resolutions with shorter lag times. Water data are frequently stored as multidimensional arrays, also called gridded or raster data, and span two or three spatial dimensions, the time dimension, and other dimensions which vary by the specific dataset. Water engineers and scientists need these data as inputs for models and generate data in these formats as results. A myriad of file formats and organizational conventions exist for storing these array datasets. The variety does not make the data unusable but does add considerable difficulty in using them because the structure can vary. These storage formats are largely incompatible with common geographic information system (GIS) software. This introduces additional complexity in extracting values, analyzing results, and otherwise working with multidimensional data since they are often spatial data. We present a Python package which provides a central interface for efficient access to multidimensional water data regardless of the file format. This research builds on and unifies existing file formats and software rather than suggesting entirely new alternatives. We present a summary of the code design and validate the results using common water-related datasets and software.
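The paper's own package is called grids; as a neutral illustration of the kind of query it streamlines, here is the equivalent point extraction with xarray (the file name and variable name below are placeholders, not from the paper):

```python
import xarray as xr

# Placeholder file and variable names; any CF-style NetCDF data set works.
ds = xr.open_dataset("example_gridded_data.nc")
series = ds["air_temperature"].sel(lat=40.0, lon=-111.9, method="nearest")
print(series.to_series().head())          # time series at the nearest grid point
ds.close()
```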
26

Huang, Wanrong, Xiaodong Yi, Yichun Sun, Yingwen Liu, Shuai Ye, and Hengzhu Liu. "Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data." Scientific Programming 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/1496104.

Abstract:
Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental to many graph processing applications and metrics as graphs grow in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by its inherently irregular memory access patterns. However, new parallel hardware can provide further improvement for such methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. Each core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary-search-based address mapping algorithm to unify all processor addresses. Within the limits permitted by the Graph500 benchmark, after testing a 1D parallel hybrid BFS algorithm, our 8-core, 8-thread-per-core system achieved superior performance and efficiency compared with prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
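The level-synchronous formulation of BFS that such hardware exploits can be written compactly (our plain-Python sketch; the paper's implementation runs on FPGA cores):

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: every vertex in the current frontier can be
    expanded independently, which is the parallelism hardware BFS exploits."""
    dist = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:               # in parallel hardware, one core per vertex
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(bfs_levels(adj, 0))                # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```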
27

Arrabito, Luisa, Konrad Bernlöhr, Johan Bregeon, et al. "The Cherenkov Telescope Array production system for data-processing and Monte Carlo simulation." EPJ Web of Conferences 214 (2019): 03052. http://dx.doi.org/10.1051/epjconf/201921403052.

Abstract:
The Cherenkov Telescope Array (CTA) is the next-generation instrument in the field of very high energy gamma-ray astronomy. It will be composed of two arrays of Imaging Atmospheric Cherenkov Telescopes, located at La Palma (Spain) and Paranal (Chile). The construction of CTA has just started, with the installation of the first telescope on site at La Palma and the first data expected by the end of 2018. The scientific operations should begin in 2022 for a duration of about 30 years. The overall amount of data produced during these operations is around 27 PB per year. The associated computing power for data processing and Monte Carlo (MC) simulations is of the order of hundreds of millions of CPU HS06 hours per year. In order to cope with these high computing requirements, we have developed a production system prototype based on the DIRAC framework, which we have intensively exploited during the past 6 years to handle massive MC simulations on the grid for the CTA design and prototyping phases. CTA workflows are composed of several inter-dependent steps, which we used to handle separately within our production system. In order to fully automate workflow execution, we have partially revised the production system by further enhancing the data-driven behavior and by extending the use of metadata to link together the different steps of a workflow. In this contribution we present the application of the production system to the MC campaigns of recent years, as well as the recent production system evolution, intended to obtain a fully data-driven and automated workflow execution for efficient processing of real telescope data.
28

Haveraaen, Magne. "Case Study on Algebraic Software Methodologies for Scientific Computing." Scientific Programming 8, no. 4 (2000): 261–73. http://dx.doi.org/10.1155/2000/482042.

Abstract:
The use of domain specific languages and appropriate software architectures are currently seen as the way to enhance reusability and improve software productivity. Here we outline a use of algebraic software methodologies and advanced program constructors to improve the abstraction level of software for scientific computing. This leads us to the language of coordinate free numerics as an alternative to the traditional coordinate dependent array notation. This provides the backdrop for the three accompanying papers: Coordinate Free Programming of Computational Fluid Dynamics Problems, centered around an example of using coordinate free numerics; Machine and Collection Abstractions for User-Implemented Data-Parallel Programming, exploiting the higher abstraction level when parallelising code; and An Algebraic Programming Style for Numerical Software and its Optimization, looking at high-level transformations enabled by the domain specific programming style.
29

Tuccari, G., M. Wunderlich, S. Dornbusch, and G. G. Tuccari. "AntArr – Etna Low Frequency Antenna Array." Latvian Journal of Physics and Technical Sciences 57, no. 1-2 (2020): 6–12. http://dx.doi.org/10.2478/lpts-2020-0001.

Abstract:
A project called AntArr as a new application of the DBBC3 (Digital Base Band Converter, 3rd generation) is under development. A group of antennas operating at low frequency, in the range from 10 MHz up to 1500 MHz, are phased up for VLBI, pulsar and more recently for FRB observations. Part of the scientific programme is also dedicated to SETI activities in piggy-back mode. Dedicated elements can even be added to reach still lower frequencies to observe the range down to kHz frequencies. The DBBC3 manages the array operations in a selected portion of the band and the main characteristic is to synthesize a beam with an innovative approach. The final product of the array is a single station standard VLBI data stream for correlation with other antennas, or a synthesized beam for single dish observations. A number of antennas and array prototypes are under test at a location on the Etna volcano slope, with the aim to form a complete radio telescope of up to 1024 elements in 2020 and beyond. This project completes the lower part of the frequency spectrum covered in VLBI by the BRAND EVN project. The project AntArr is hosted and financed by HAT-Lab Ltd., which is the manufacturer of the DBBC family backends.
30

Witwer, Kenneth W. "Data Submission and Quality in Microarray-Based MicroRNA Profiling." Clinical Chemistry 59, no. 2 (2013): 392–400. http://dx.doi.org/10.1373/clinchem.2012.193813.

Abstract:
BACKGROUND: Public sharing of scientific data has assumed greater importance in the omics era. Transparency is necessary for confirmation and validation, and multiple examiners aid in extracting maximal value from large data sets. Accordingly, database submission and provision of the Minimum Information About a Microarray Experiment (MIAME) are required by most journals as a prerequisite for review or acceptance. METHODS: In this study, the level of data submission and MIAME compliance was reviewed for 127 articles that included microarray-based microRNA (miRNA) profiling and were published from July 2011 through April 2012 in the journals that published the largest number of such articles—PLOS ONE, the Journal of Biological Chemistry, Blood, and Oncogene—along with articles from 9 other journals, including Clinical Chemistry, that published smaller numbers of array-based articles. RESULTS: Overall, data submission was reported at publication for <40% of all articles, and almost 75% of articles were MIAME noncompliant. On average, articles that included full data submission scored significantly higher on a quality metric than articles with limited or no data submission, and studies with adequate description of methods disproportionately included larger numbers of experimental repeats. Finally, for several articles that were not MIAME compliant, data reanalysis revealed less than complete support for the published conclusions, in 1 case leading to retraction. CONCLUSIONS: These findings buttress the hypothesis that reluctance to share data is associated with low study quality and suggest that most miRNA array investigations are underpowered and/or potentially compromised by a lack of appropriate reporting and data submission.
31

Battye, Richard A., Michael L. Brown, Caitlin M. Casey, et al. "SuperCLASS – I. The super cluster assisted shear survey: Project overview and data release 1." Monthly Notices of the Royal Astronomical Society 495, no. 2 (2020): 1706–23. http://dx.doi.org/10.1093/mnras/staa709.

Abstract:
The SuperCLuster Assisted Shear Survey (SuperCLASS) is a legacy programme using the e-MERLIN interferometric array. The aim is to observe the sky at L-band (1.4 GHz) to an r.m.s. of 7 μJy beam⁻¹ over an area of ~1 deg² centred on the Abell 981 supercluster. The main scientific objectives of the project are: (i) to detect the effects of weak lensing in the radio in preparation for similar measurements with the Square Kilometre Array (SKA); (ii) an extinction-free census of star formation and AGN activity out to z ~ 1. In this paper we give an overview of the project including the science goals and multiwavelength coverage before presenting the first data release. We have analysed around 400 h of e-MERLIN data, allowing us to create a Data Release 1 (DR1) mosaic of ~0.26 deg² to the full depth. These observations have been supplemented with complementary radio observations from the Karl G. Jansky Very Large Array (VLA) and optical/near infrared observations taken with the Subaru, Canada-France-Hawaii, and Spitzer Telescopes. The main data product is a catalogue of 887 sources detected by the VLA, of which 395 are detected by e-MERLIN and 197 of these are resolved. We have investigated the size, flux, and spectral index properties of these sources, finding them compatible with previous studies. Preliminary photometric redshifts, and an assessment of galaxy shapes measured in the radio data, combined with a radio-optical cross-correlation technique probing cosmic shear in a supercluster environment, are presented in companion papers.
32

Espadinha-Cruz, Pedro, Radu Godina, and Eduardo M. G. Rodrigues. "A Review of Data Mining Applications in Semiconductor Manufacturing." Processes 9, no. 2 (2021): 305. http://dx.doi.org/10.3390/pr9020305.

Abstract:
For decades, industrial companies have been collecting and storing large amounts of data with the aim of better controlling and managing their processes. However, this vast amount of information, and the hidden knowledge implicit in it, could be utilized more efficiently. With the help of data mining techniques, unknown relationships can be systematically discovered. The production of semiconductors is a highly complex process, which entails several subprocesses that employ a diverse array of equipment. The small size of semiconductors means a high number of units can be produced, and huge amounts of data are required in order to control and improve the semiconductor manufacturing process. Therefore, in this paper a structured review is made, through a sample of 137 papers, of the articles published in the scientific community regarding data mining applications in semiconductor manufacturing. A detailed bibliometric analysis is also made. All data mining applications are classified by application area. The results are then analyzed and conclusions are drawn.
33

Qin, Li-Xuan, Huei-Chung Huang, and Colin B. Begg. "Cautionary Note on Using Cross-Validation for Molecular Classification." Journal of Clinical Oncology 34, no. 32 (2016): 3931–38. http://dx.doi.org/10.1200/jco.2016.68.1031.

Abstract:
Purpose: Reproducibility of scientific experimentation has become a major concern because of the perception that many published biomedical studies cannot be replicated. In this article, we draw attention to the connection between inflated overoptimistic findings and the use of cross-validation for error estimation in molecular classification studies. We show that, in the absence of careful design to prevent artifacts caused by systematic differences in the processing of specimens, established tools such as cross-validation can lead to a spurious estimate of the error rate in the overoptimistic direction, regardless of the use of data normalization as an effort to remove these artifacts. Methods: We demonstrated this important yet overlooked complication of cross-validation using a unique pair of data sets on the same set of tumor samples. One data set was collected with uniform handling to prevent handling effects; the other was collected without uniform handling and exhibited handling effects. The paired data sets were used to estimate the biologic effects of the samples and the handling effects of the arrays in the latter data set, which were then used to simulate data using virtual rehybridization following various array-to-sample assignment schemes. Results: Our study showed that (1) cross-validation tended to underestimate the error rate when the data possessed confounding handling effects; (2) depending on the relative amount of handling effects, normalization may further worsen the underestimation of the error rate; and (3) balanced assignment of arrays to comparison groups allowed cross-validation to provide an unbiased error estimate. Conclusion: Our study demonstrates the benefits of balanced array assignment for reproducible molecular classification and calls for caution on the routine use of data normalization and cross-validation in such analysis.
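The confounding mechanism is easy to reproduce in a small simulation (our sketch, not the authors' data): with pure-noise features, cross-validation reports near-perfect accuracy when batch is confounded with class, and chance accuracy under balanced array assignment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 100, 200
y = np.repeat([0, 1], n // 2)                    # two comparison groups, no real signal

def simulate(confounded):
    X = rng.normal(size=(n, p))                  # pure-noise "expression" data
    if confounded:
        batch = y                                # arrays processed by group: confounded
    else:
        batch = np.tile([0, 1], n // 2)          # balanced array-to-group assignment
    return X + np.outer(batch, rng.normal(1.0, 0.2, p))   # additive handling effect

for design, flag in [("confounded", True), ("balanced", False)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          simulate(flag), y, cv=5).mean()
    print(design, f"cross-validated accuracy ~ {acc:.2f}")   # ~1.0 vs ~0.5
```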
34

Guseynova, Ksenia. "Critical discourse-analysis as an effective method of the qualitative analysis of scientific and educational texts." nauka.me, no. 3 (2017): 0. http://dx.doi.org/10.18254/s241328880000051-0.

Abstract:
The article provides the results of applying the critical discourse analysis methods developed by Norman Fairclough. The data array analyzed consists of abstracts of candidate of sociological sciences theses in discipline 22.00.08, «sociology of management». It is claimed that by means of qualitative methods it is possible not only to explore the problem field of the designated discipline, but also to reveal and typify the main methodological errors of scientific texts. The article is of interest to humanities students.
35

Smit, Pieter Bart, Tim Janssen, Wheeler Gans, and Cameron Dunning. "REAL-TIME ASSIMILATION USING A DENSE ARRAY OF DIRECTIONAL WAVE OBSERVATIONS." Coastal Engineering Proceedings, no. 36 (December 30, 2018): 6. http://dx.doi.org/10.9753/icce.v36.waves.6.

Abstract:
Wave conditions along our coastlines are monitored using networks of wave buoys. Augmented with regional wave now- and hind-casts from operational wave models, these data networks provide detailed regional information on wave conditions, supplying vital updates for maritime, engineering, recreational and scientific purposes. Currently, the observational networks are mostly used to initialize models and assess model performance, but are usually not directly integrated into the modeling system. Recent work by Crosby et al. (2017) explores the integration of buoy data into models and shows that data assimilation of buoy observations can improve predictions and wave hindcasts. The results suggest that assimilation of dense observational networks yields significant and important improvements in model performance. In the current work we leverage these modeling advances together with the recent development of low-cost directional wave buoys (such as the Spoondrift Spotter, www.spoondrift.co). The use of low-cost, solar-powered instruments allows for much denser long-term arrays of instruments than was previously possible. The availability of large numbers of independent observations, in turn, can provide excellent constraints on models and model boundary conditions.
36

Grachev, A. A., G. I. Nikitin, D. G. Plotnikov, A. V. Banite, D. E. Bortiakov, and A. S. Gabriel. "AUTOMATION OF LOCAL VOLTAGE CALCULATION IN BOX-SECTIONAL ELEMENTS OF SPAN STRUCTURES WITH RUNNING BEAMS ACCORDING TO CONTINUOUS MONITORING DATA." Automation on Transport 7, no. 2 (2021): 216–30. http://dx.doi.org/10.20295/2412-9186-2021-7-2-216-230.

Abstract:
The article presents a technique for automating the calculation of local stresses from the application of a concentrated load on additional structural elements of box-section spans with riding beams, based on continuous monitoring data, to assess the current technical condition and then predict the state of the structure. The necessity of applying an analytical approach to assessing the stress-strain state of a structure at the initial stages of design and technical operation is substantiated, and the drawbacks of the existing analytical approach to stress assessment are shown. Taking into account the available resources, an acceptable research programme was developed and implemented. In accordance with the programme, a series of numerical experiments was carried out using software that makes it possible to simulate complex structures and solve them using the finite element method, aimed at obtaining an array of data whose use would improve the accuracy of the existing analytical solution. By using software aimed at processing scientific data arrays, mathematical dependencies of the required parameters on the initial structural data were obtained. Formulas and initial data were obtained for calculating correction factors, which improve the accuracy of calculating local compressive and bending stresses. These formulas were combined with the existing analytical dependencies in order to obtain a simple and easily applicable method for determining the actual local stresses from the application of an indirect concentrated load. As a result, a new method for the analytical determination of local bending and compression stresses in riding beams was proposed, on the basis of an analysis of the existing methodology and a series of numerical experiments.
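The fitting of correction factors to an array of numerical-experiment results can be illustrated with a least-squares sketch (ours, with made-up parameters and ranges; the paper's actual formulas and variables differ):

```python
import numpy as np

# Hypothetical illustration: fit a correction factor k as a function of two
# structural parameters (plate thickness t and web spacing s, both invented)
# from an array of finite-element results, as the methodology above describes.
rng = np.random.default_rng(1)
t = rng.uniform(8, 40, 200)                  # mm, made-up design range
s = rng.uniform(200, 600, 200)               # mm, made-up design range
k_fem = 1.0 + 0.02 * t - 0.0005 * s + rng.normal(0, 0.01, 200)  # "FEM" data

A = np.column_stack([np.ones_like(t), t, s]) # linear model k = a0 + a1*t + a2*s
coeffs, *_ = np.linalg.lstsq(A, k_fem, rcond=None)
print("fitted coefficients:", coeffs)

k_hat = A @ coeffs                           # corrected analytical stresses use k_hat
print("max fit residual:", np.abs(k_hat - k_fem).max())
```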
37

Sloan, Luke, Curtis Jessop, Tarek Al Baghal, and Matthew Williams. "Linking Survey and Twitter Data: Informed Consent, Disclosure, Security, and Archiving." Journal of Empirical Research on Human Research Ethics 15, no. 1-2 (2019): 63–76. http://dx.doi.org/10.1177/1556264619853447.

Full text
Abstract:
Linked survey and Twitter data present an unprecedented opportunity for social scientific analysis, but the ethical implications for such work are complex—requiring a deeper understanding of the nature and composition of Twitter data to fully appreciate the risks of disclosure and harm to participants. In this article, we draw on our experience of three recent linked data studies, briefly discussing the background research on data linkage and the complications around ensuring informed consent. Particular attention is paid to the vast array of data available from Twitter and in what manner it might be disclosive. In light of this, the issues of maintaining security, minimizing risk, archiving, and reuse are applied to linked Twitter and survey data. In conclusion, we reflect on how our ability to collect and work with Twitter data has outpaced our technical understandings of how the data are constituted and observe that understanding one’s data is an essential prerequisite for ensuring best ethical practice.
APA, Harvard, Vancouver, ISO, and other styles
38

Shchyptsov, О. A. "Digital vector of Ukraine development: formation of national industry of oceanographic geospatial data." Geofizicheskiy Zhurnal 43, no. 1 (2021): 266–75. http://dx.doi.org/10.24028/gzh.0203-3100.v43i1.2021.225553.

Full text
Abstract:
The step-by-step deployment of work on the creation of a national geospatial data infrastructure within the digital development of Ukraine is considered. The goals of this infrastructure are as follows: provision of open access to data, in particular the data and knowledge of scientific research and observations; involvement of the national oceanographic scientific community in the global "big data" network; and further commercialization of research results, creation of innovations, digital products and services. The digital industry of oceanographic geospatial data is expected to become one of the components of the national digital geospatial data infrastructure. It will primarily cover oceanography, considering the scale and complexity of hydrophysical processes in the oceans and the multifaceted impact and use of data for sustainable development and economic activity, as well as conceptually modernize data production and disposal. The article presents an analysis of the Law on the National Infrastructure of Geospatial Data regarding the provisions that have to be fulfilled by holders of oceanographic data. Considering that, under the Law, the national geospatial data infrastructure does not cover the entire array of collected oceanographic data, additional preparatory measures are proposed for the formation of a modern infrastructure and digital industry of oceanographic data. Creating such an industry requires concerted efforts and consolidated actions of the state, the scientific community, business entities and interested civil society; it will ensure an appropriate level of participation and form a positive image of Ukraine within the framework of the United Nations Decade of Ocean Science for Sustainable Development (2021—2030).
APA, Harvard, Vancouver, ISO, and other styles
39

Contreras, Jorge L., and Bartha M. Knoppers. "The Genomic Commons." Annual Review of Genomics and Human Genetics 19, no. 1 (2018): 429–53. http://dx.doi.org/10.1146/annurev-genom-083117-021552.

Full text
Abstract:
Over its 30 or so years of existence, the genomic commons—the worldwide collection of publicly accessible repositories of human and nonhuman genomic data—has enjoyed remarkable, perhaps unprecedented, success. Thanks to the rapid public data release policies initiated by the Human Genome Project, free access to a vast array of scientific data is now the norm, not only in genomics, but in scientific disciplines of all descriptions. And far from being a monolithic creation of bureaucratic fiat, the genomic commons is an exemplar of polycentric, multistakeholder governance. But like all dynamic and rapidly evolving systems, the genomic commons is not without its challenges. Issues involving scientific priority, intellectual property, individual privacy, and informed consent, in an environment of data sets of exponentially expanding size and complexity, must be addressed in the near term. In this review, we describe the characteristics and unique history of the genomic commons, then address some of the trends, challenges, and opportunities that we envision for this valuable public resource in the years to come.
APA, Harvard, Vancouver, ISO, and other styles
40

Lopes, Pedro, Luis Bastião Silva, and José Luis Oliveira. "Challenges and Opportunities for Exploring Patient-Level Data." BioMed Research International 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/150435.

Full text
Abstract:
The proper exploration of patient-level data will pave the way towards personalised medicine. To better assess the state of the art in this field, we identify the challenges and uncover the opportunities for the exploration of patient-level data through a review of well-known initiatives and projects in this area. These cover a broad array of topics, from genomics to patient registries and rare-disease research, among others. For each, we identified basic goals, involved partners, defined strategies and key technological and scientific outcomes, establishing the foundation for our analysis framework with four pillars: control, sustainability, technology, and science. Substantial research outcomes have been produced towards the exploration of patient-level data, and the potential behind these data will be essential to realise the personalised-medicine premise in upcoming years. Hence, relevant stakeholders continually push forward new developments in this domain, bringing novel opportunities that are ripe for exploration. Despite the last decade's translational research advances, personalised medicine is still far from being a reality. The potential underlying patients' data goes beyond daily clinical practice, and miscellaneous challenges and opportunities remain open for the exploration of these data by academia and business stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
41

Brotzer, Andreas, Felix Bernauer, Karl Ulrich Schreiber, Joachim Wassermann, and Heiner Igel. "Automated Quality Assessment of Interferometric Ring Laser Data." Sensors 21, no. 10 (2021): 3425. http://dx.doi.org/10.3390/s21103425.

Full text
Abstract:
In seismology, recent decades have witnessed an increased effort to observe all 12 degrees of freedom of seismic ground motion by complementing translational ground-motion observations with measurements of strain and rotational motions, aiming at enhanced probing and understanding of Earth and other planetary bodies. The evolution of optical instrumentation, in particular large-scale ring laser installations such as G-ring and ROMY (ROtational Motion in seismologY), and their geoscientific application have contributed significantly to the emergence of this scientific field. The currently most advanced large-scale ring laser array is ROMY, which is unprecedented in scale and design. Because ROMY is a heterolithic structure, its ring laser components are subject to optical frequency drifts. Such Sagnac interferometers therefore require new considerations and approaches concerning data acquisition, processing and quality assessment, compared to conventional, mechanical instrumentation. We present an automated approach to assess the data quality and performance of a ring laser, based on characteristics of the interferometric Sagnac signal. The developed scheme is applied to ROMY data to detect compromised operation states and assign quality flags. When ROMY's database becomes publicly accessible, this assessment will be employed to provide a quality-control feature for data requests.
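The abstract does not spell out the quality criteria; one plausible building block is to track the dominant Sagnac beat frequency per time window and flag windows that drift from a nominal value. A minimal sketch, with the nominal frequency, window length and tolerance as placeholder parameters:

```python
import numpy as np

def quality_flags(signal, fs, f_nominal, win_s=10.0, tol_hz=1.0):
    """Flag time windows whose dominant Sagnac beat frequency deviates
    from the nominal value by more than tol_hz (True = nominal)."""
    n = int(win_s * fs)
    flags = []
    for start in range(0, len(signal) - n, n):
        win = signal[start:start + n]
        win = (win - win.mean()) * np.hanning(n)   # remove DC, taper
        spec = np.abs(np.fft.rfft(win))
        f_peak = np.fft.rfftfreq(n, 1.0 / fs)[np.argmax(spec)]
        flags.append(abs(f_peak - f_nominal) <= tol_hz)
    return np.array(flags)
```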
APA, Harvard, Vancouver, ISO, and other styles
42

Attwood, Teresa K., Douglas B. Kell, Philip McDermott, James Marsh, Steve R. Pettifer, and David Thorne. "Calling International Rescue: knowledge lost in literature and data landslide!" Biochemical Journal 424, no. 3 (2009): 317–33. http://dx.doi.org/10.1042/bj20091474.

Full text
Abstract:
We live in interesting times. Portents of impending catastrophe pervade the literature, calling us to action in the face of unmanageable volumes of scientific data. But it isn't so much data generation per se, but the systematic burial of the knowledge embodied in those data that poses the problem: there is so much information available that we simply no longer know what we know, and finding what we want is hard – too hard. The knowledge we seek is often fragmentary and disconnected, spread thinly across thousands of databases and millions of articles in thousands of journals. The intellectual energy required to search this array of data-archives, and the time and money this wastes, has led several researchers to challenge the methods by which we traditionally commit newly acquired facts and knowledge to the scientific record. We present some of these initiatives here – a whirlwind tour of recent projects to transform scholarly publishing paradigms, culminating in Utopia and the Semantic Biochemical Journal experiment. With their promises to provide new ways of interacting with the literature, and new and more powerful tools to access and extract the knowledge sequestered within it, we ask what advances they make and what obstacles to progress still exist? We explore these questions, and, as you read on, we invite you to engage in an experiment with us, a real-time test of a new technology to rescue data from the dormant pages of published documents. We ask you, please, to read the instructions carefully. The time has come: you may turn over your papers…
APA, Harvard, Vancouver, ISO, and other styles
43

Lee, Hyo Seong, Hae Goo Song, and Hee Sang Lee. "Classification of Photovoltaic Research Papers by Using Text-Mining Techniques." Applied Mechanics and Materials 284-287 (January 2013): 3362–69. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.3362.

Full text
Abstract:
The research described in this article focuses on one important aspect of monitoring scientific and technological trends and examines topics of research and trends in the photovoltaic field. The data used were scientific and technological literature published during the last five years, exhaustively collected from the two SCI journals that specialize in photovoltaic and solar energy research. In order to analyze the 2,031 academic papers collected, text mining was applied. As a result, research topics were identified through document clustering and classified through text categorization into four major subjects: ‘Cell’, ‘Module/Array’, ‘System’ and ‘Relative/Advanced’.
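The exact clustering and categorization pipeline is not given in the abstract; a generic text-mining sketch in the same spirit, using TF-IDF features and k-means with four clusters to mirror the four subject classes (the toy abstracts are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented mini-corpus standing in for the 2,031 collected papers.
abstracts = [
    "thin-film cell efficiency improvement via passivation",
    "module array layout and shading loss estimation",
    "grid-connected system inverter control strategies",
    "advanced tandem cell concepts and new materials",
]

# TF-IDF representation followed by k-means clustering into four topics.
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster index per document
```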
APA, Harvard, Vancouver, ISO, and other styles
44

Miller, Alexander, Boris Miller, and Gregory Miller. "Navigation of Underwater Drones and Integration of Acoustic Sensing with Onboard Inertial Navigation System." Drones 5, no. 3 (2021): 83. http://dx.doi.org/10.3390/drones5030083.

Full text
Abstract:
The navigation of autonomous underwater vehicles (AUVs) is a major scientific and technological challenge. The principal difficulty is the opacity of the water medium to the usual types of radiation, with the exception of acoustic waves. Thus, an acoustic sonar built around an acoustic transducer array is the only tool for external measurements of the AUV attitude and position. Another difficulty is the inconstancy of the propagation speed of acoustic waves, which depends on temperature, salinity, and pressure. For this reason, only the fusion of acoustic measurements with data from other onboard inertial navigation system sensors can provide the necessary estimation quality and robustness. This review presents common approaches to underwater navigation as well as one novel method of velocity measurement, an analog of the well-known Optical Flow method but based on a sequence of sonar array measurements.
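The sonar-based analog of Optical Flow described here can be sketched as a displacement estimate between consecutive sonar frames; phase correlation is one standard way to do this, shown below with the frame scale and ping interval left as placeholders (the authors' actual method may differ):

```python
import numpy as np

def frame_shift(f1, f2):
    """Estimate the integer pixel shift between two consecutive sonar
    frames via phase correlation."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts beyond half the frame size to negative displacements.
    if dy > f1.shape[0] // 2: dy -= f1.shape[0]
    if dx > f1.shape[1] // 2: dx -= f1.shape[1]
    return dy, dx

# Velocity estimate = pixel shift * (meters per pixel) / ping interval.
```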
APA, Harvard, Vancouver, ISO, and other styles
45

Tinbergen, Jaap. "Transformations and Modern Technology." International Astronomical Union Colloquium 136 (1993): 264–70. http://dx.doi.org/10.1017/s025292110000765x.

Full text
Abstract:
Transformations are a central issue in making a global network do more than simple monitoring of low-amplitude variability. I explore the approach of observing at a much narrower instrumental bandwidth than is required for the scientific problem; such an approach would have the following advantages:
• transformations can be handled in standard fashion at the instrumental level; at the scientific level, they can be avoided entirely,
• users have almost complete freedom in specifying the shape of the scientific passbands, hence comparison of observational data with stellar atmosphere models can be maximally effective,
• standard star observations can be used repeatedly, for programmes running concurrently in different scientific photometric systems,
• an observer can use existing standard-star observations to create his own specially-tailored photometric system from scratch,
• all-sky homogeneity of the instrumental system can be tested against space photometry such as that provided by HIPPARCOS; this will benefit other scientific systems synthesized from the same instrumental system.
Key components in the hardware will be array detectors with low readout noise and a calibration lamp system designed specifically for this application. A data base provides the link between the observations (in the instrumental system) and the results (with scientific passbands defined by the end user).
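The synthesis of user-defined scientific passbands from many narrow instrumental bands amounts to a response-weighted sum of narrow-band fluxes; a minimal sketch, with band centers, widths and responses purely illustrative:

```python
import numpy as np

# Hypothetical narrow instrumental bands: 5 nm channels across the
# visible, with measured fluxes for one star (invented values).
centers = np.arange(400.0, 700.0, 5.0)            # nm
fluxes = np.random.default_rng(0).uniform(0.8, 1.2, centers.size)

# A user-defined scientific passband: a Gaussian response centered at
# 550 nm with 40 nm FWHM (purely illustrative shape).
sigma = 40.0 / 2.3548
response = np.exp(-0.5 * ((centers - 550.0) / sigma) ** 2)

# Synthetic flux in the scientific band: response-weighted mean of the
# narrow-band fluxes; convert to an instrumental magnitude.
flux_sci = np.sum(response * fluxes) / np.sum(response)
mag_sci = -2.5 * np.log10(flux_sci)
print(f"synthetic flux {flux_sci:.3f}, magnitude {mag_sci:.3f}")
```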
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, M. L., G. X. Zhang, G. L. Zhang та ін. "Identification of different reaction channels in 6Li + 89Y experiment by the particles-γ coincidence measurement". EPJ Web of Conferences 223 (2019): 01068. http://dx.doi.org/10.1051/epjconf/201922301068.

Full text
Abstract:
This short paper presents the investigation of the reaction mechanism induced by 6Li through a particle-γ coincidence measurement. The data have been taken from a 6Li+89Y experiment which was performed at INFN-LNL, Italy. In this experiment, the light charged particles are detected by a Si-ball, named EUCLIDES, and the γ rays are collected by a HPGe detector array, called GALILEO. In this contribution, scientific motivations, experimental details and some results, such as the α-γ analysis, are presented.
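Particle-γ coincidence selection of the kind named here reduces to gating γ-ray timestamps on nearby charged-particle triggers; a schematic sketch with a placeholder gate width (the experiment's real timing and event-building logic is more involved):

```python
import numpy as np

def coincident_gammas(t_particle, t_gamma, e_gamma, gate_ns=50.0):
    """Select gamma energies whose timestamps fall within +/- gate_ns
    of the nearest particle timestamp (t_particle must be sorted)."""
    idx = np.searchsorted(t_particle, t_gamma)
    idx = np.clip(idx, 1, len(t_particle) - 1)
    nearest = np.minimum(np.abs(t_gamma - t_particle[idx - 1]),
                         np.abs(t_gamma - t_particle[idx]))
    return e_gamma[nearest <= gate_ns]

# The selected energies could then be histogrammed to build an
# alpha-gated gamma spectrum of the kind analyzed in the paper.
```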
APA, Harvard, Vancouver, ISO, and other styles
47

Attwood, Teresa K., Douglas B. Kell, Philip McDermott, James Marsh, Steve R. Pettifer, and David Thorne. "Calling International Rescue: Knowledge lost in literature and data landslide!" Biochemist 31, no. 6 (2009): 23–38. http://dx.doi.org/10.1042/bio03106023.

Full text
Abstract:
We live in interesting times. Portents of impending catastrophe pervade the literature, calling us to action in the face of unmanageable volumes of scientific data. But it isn't so much data generation per se, but the systematic burial of the knowledge embodied in those data that poses the problem: there is so much information available that we simply no longer know what we know, and finding what we want is hard – too hard. The knowledge we seek is often fragmentary and disconnected, spread thinly across thousands of databases and millions of articles in thousands of journals. The intellectual energy required to search this array of data archives, and the time and money this wastes, has led several researchers to challenge the methods by which we traditionally commit newly acquired facts and knowledge to the scientific record. This has spawned a number of initiatives aiming to uncover this buried knowledge and to transform scholarly publishing paradigms. This article, which has been adapted from our review published in the Biochemical Journal [volume 424 (part 3), pages 317–333], provides an overview of these projects. It culminates with a description of the Semantic Biochemical Journal experiment, an exciting and innovative collaboration with Portland Press Ltd to create Utopia Documents, a new PDF document reader designed to rescue data from the dormant pages of published documents. The article you are about to read is, in part, intended as a taster; to get the full, interactive Utopia experience, we encourage you to investigate the full review in volume 424 (part 3) of the Biochemical Journal (www.BiochemJ.org) and the other articles in that issue.
APA, Harvard, Vancouver, ISO, and other styles
48

Kwon, Taek M., Nirish Dhruv, Siddharth A. Patwardhan, and Eil Kwon. "Common Data Format Archiving of Large-Scale Intelligent Transportation Systems Data for Efficient Storage, Retrieval, and Portability." Transportation Research Record: Journal of the Transportation Research Board 1836, no. 1 (2003): 111–17. http://dx.doi.org/10.3141/1836-14.

Full text
Abstract:
Intelligent transportation system (ITS) sensor networks, such as road weather information and traffic sensor networks, typically generate enormous amounts of data. As a result, archiving, retrieval, and exchange of ITS sensor data for planning and performance analysis are becoming increasingly difficult. An efficient ITS archiving system that is compact and exchangeable and allows efficient and fast retrieval of large amounts of data is essential. A proposal is made for a system that can meet the present and future archiving needs of large-scale ITS data. This system is referred to as common data format (CDF) and was developed by the National Space Science Data Center for archiving, exchange, and management of large-scale scientific array data. CDF is an open system that is free and portable and includes self-describing data abstraction. Archiving traffic data by using CDF is demonstrated, and its archival and retrieval performance is presented for the Minnesota Department of Transportation's 30-s traffic data collected from about 4,000 loop detectors around Twin Cities freeways. For comparison of the archiving performance, the same data were archived by using a commercially available relational database, which was evaluated for its archival and retrieval performance. This result is presented, along with reasons that CDF is a good fit for large-scale ITS data archiving, retrieval, and exchange of data.
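Writing such an archive with NASA's CDF can be illustrated via the spacepy.pycdf bindings (which require the CDF C library to be installed); file, variable and attribute names below are illustrative, not those used in the study:

```python
import datetime
import numpy as np
from spacepy import pycdf  # needs the NASA CDF C library

# Hypothetical 30-s traffic volumes: 2880 intervals/day x 4000 detectors.
volumes = np.random.default_rng(0).integers(
    0, 40, size=(2880, 4000)).astype(np.int32)
epochs = [datetime.datetime(2003, 1, 1) + datetime.timedelta(seconds=30 * i)
          for i in range(2880)]

cdf = pycdf.CDF('traffic_20030101.cdf', '')  # '' creates a new file
cdf['Epoch'] = epochs                        # stored as CDF epoch times
cdf['volume'] = volumes                      # self-describing 2-D array
cdf['volume'].attrs['UNITS'] = 'vehicles per 30 s'
cdf.attrs['Source'] = 'loop detector archive (illustrative)'
cdf.close()
```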
APA, Harvard, Vancouver, ISO, and other styles
49

Celli, Fabrizio, Johannes Keizer, Yves Jaques, Stasinos Konstantopoulos, and Dušan Vudragović. "Discovering, Indexing and Interlinking Information Resources." F1000Research 4 (July 30, 2015): 432. http://dx.doi.org/10.12688/f1000research.6848.1.

Full text
Abstract:
The social media revolution is having a dramatic effect on the world of scientific publication. Scientists now publish their research interests, theories and outcomes across numerous channels, including personal blogs and other thematic web spaces where ideas, activities and partial results are discussed. Accordingly, information systems that facilitate access to scientific literature must learn to cope with this valuable and varied data, evolving to make this research easily discoverable and available to end users. In this paper we describe the incremental process of discovering web resources in the domain of agricultural science and technology. Making use of Linked Open Data methodologies, we interlink a wide array of custom-crawled resources with the AGRIS bibliographic database in order to enrich the user experience of the AGRIS website. We also discuss the SemaGrow Stack, a query federation and data integration infrastructure used to estimate the semantic distance between crawled web resources and AGRIS.
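Querying an interlinked bibliographic store of this kind is typically done over a SPARQL endpoint; a minimal sketch with SPARQLWrapper, where the endpoint URL and vocabulary are assumptions rather than the actual AGRIS or SemaGrow configuration:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint; the paper federates queries over AGRIS and
# custom-crawled Linked Open Data resources.
sparql = SPARQLWrapper("https://agris.example.org/sparql")
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?record ?title WHERE {
        ?record dct:title ?title .
        FILTER CONTAINS(LCASE(?title), "soil")
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["record"]["value"], "-", row["title"]["value"])
```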
APA, Harvard, Vancouver, ISO, and other styles
50

Celli, Fabrizio, Johannes Keizer, Yves Jaques, Stasinos Konstantopoulos, and Dušan Vudragović. "Discovering, Indexing and Interlinking Information Resources." F1000Research 4 (November 17, 2015): 432. http://dx.doi.org/10.12688/f1000research.6848.2.

Full text
Abstract:
The social media revolution is having a dramatic effect on the world of scientific publication. Scientists now publish their research interests, theories and outcomes across numerous channels, including personal blogs and other thematic web spaces where ideas, activities and partial results are discussed. Accordingly, information systems that facilitate access to scientific literature must learn to cope with this valuable and varied data, evolving to make this research easily discoverable and available to end users. In this paper we describe the incremental process of discovering web resources in the domain of agricultural science and technology. Making use of Linked Open Data methodologies, we interlink a wide array of custom-crawled resources with the AGRIS bibliographic database in order to enrich the user experience of the AGRIS website. We also discuss the SemaGrow Stack, a query federation and data integration infrastructure used to estimate the semantic distance between crawled web resources and AGRIS.
APA, Harvard, Vancouver, ISO, and other styles