Journal articles on the topic 'IBM Power systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'IBM Power systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Cahill, J. J., T. Nguyen, M. Vega, D. Baska, D. Szerdi, H. Pross, R. X. Arroyo, et al. "IBM Power Systems built with the POWER8 architecture and processors." IBM Journal of Research and Development 59, no. 1 (January 2015): 4:1–4:10. http://dx.doi.org/10.1147/jrd.2014.2376132.

2

Marquart, R. G. "μPDSM: Mainframe Search/Match on an IBM PC." Powder Diffraction 1, no. 1 (March 1986): 34–39. http://dx.doi.org/10.1017/s088571560001126x.

Abstract:
μPDSM, a powder diffraction search/match system, is derived from the original interactive time-sharing system written for the NIH/EPA Chemical Information System (CIS). Transformed and essentially rewritten, it still searches the entire JCPDS database, but on an inexpensive IBM PC microcomputer. In the transition from mainframe to micro, μPDSM has lost none of its speed or performance. Indeed, the basic discriminating power of the search/match algorithms had to be improved, since the Powder Diffraction File is some 50% larger than it was when the CIS version was in operation. While μPDSM will solve typical problems in a “push-button” mode, it provides an environment that promotes optimization of search parameters by the diffractionist, and provides all of the subfile and chemistry functions associated with larger systems.
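
The search/match idea behind μPDSM can be illustrated with a toy figure of merit: score a reference pattern by how much of its peak intensity is matched, within a tolerance, by peaks of the unknown pattern. The peak lists, tolerance, and scoring below are invented for illustration and are far cruder than μPDSM's actual algorithms.

```python
# Toy search/match score: compare an unknown pattern's peak positions
# (2-theta) and intensities against a reference entry within a tolerance
# window. Real systems like μPDSM use far richer figures of merit.
def match_score(unknown, reference, tol=0.15):
    """unknown/reference: lists of (two_theta, relative_intensity)."""
    hits = 0.0
    for pos, inten in reference:
        if any(abs(pos - p) <= tol for p, _ in unknown):
            hits += inten
    return hits / sum(i for _, i in reference)   # 1.0 = every peak matched

unknown = [(26.6, 100), (20.9, 21), (50.1, 14)]       # a quartz-like pattern
reference = [(26.64, 100), (20.86, 22), (50.14, 13)]
print(f"match score: {match_score(unknown, reference):.2f}")
```
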
3

Berry, Christopher, David Wolpert, Christos Vezrytzis, Richard Rizzolo, Sean Carey, Yaniv Maroz, Hunter Shi, et al. "IBM z14: Processor Characterization and Power Management for High-Reliability Mainframe Systems." IEEE Journal of Solid-State Circuits 54, no. 1 (January 2019): 121–32. http://dx.doi.org/10.1109/jssc.2018.2873582.

4

Fritz, Karl E., Barbara A. Randall, Gregg J. Fokken, Michael J. Degerstrom, Michael J. Lorsung, Jason F. Prairie, Eric L. H. Amundsen, et al. "High-Speed, Low-Power Digital and Analog Circuits Implemented in IBM SiGe BiCMOS Technology." International Journal of High Speed Electronics and Systems 13, no. 01 (March 2003): 221–37. http://dx.doi.org/10.1142/s0129156403001582.

Abstract:
Under the auspices of the Defense Advanced Research Projects Agency's Microsystems Technology Office (DARPA/MTO) Low Power Electronics Program, the Mayo Foundation Special Purpose Processor Development Group is exploring ways to reduce circuit power consumption, while maintaining or increasing functionality, for existing military systems. Applications presently being studied include all-digital radar receivers, electronic warfare receivers, and other types of digital signal processors. One of the integrated circuit technologies currently under investigation to support such military systems is the IBM Corporation silicon germanium (SiGe) BiCMOS process. In this paper, design methodology, simulations, and test results from demonstration circuits developed for these applications and implemented in the IBM SiGe BiCMOS 5HP (50 GHz fT HBTs with 0.5 μm CMOS) and 7HP (120 GHz fT HBTs with 0.18 μm CMOS) technologies are presented.
5

Pousa, Adrian. "Optimization of throughput, fairness and energy efficiency on asymmetric multicore systems via OS scheduling." Journal of Computer Science and Technology 18, no. 01 (June 7, 2018): e09. http://dx.doi.org/10.24215/16666038.18.e09.

Abstract:
Most chip multiprocessors (CMPs) are symmetric, i.e., they are composed of identical cores. These CMPs may consist of complex cores (e.g., Intel Haswell or IBM Power8) or simple, lower-power cores (e.g., ARM Cortex A9 or Intel Xeon Phi). Cores in the former approach have advanced microarchitectural features, such as out-of-order superscalar pipelines, and are suitable for running sequential applications that use them efficiently. Cores in the latter approach have a simple microarchitecture and are good for running applications with high thread-level parallelism (TLP).
6

Balakumar, Arvind, Harish Nandakumar, and Mrs V. Bhuvaneswari. "Implementation of Quantum Key Distribution Algorithm in Real Time IBM Quantum Computers." International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (April 30, 2023): 3123–27. http://dx.doi.org/10.22214/ijraset.2023.50574.

Abstract:
We are currently in the NISQ (noisy intermediate-scale quantum) era, and current quantum computers have a high noise error rate. With the number of qubits increasing every year, we can develop fault-tolerant quantum systems with immense power to change our current computing. Current quantum systems have shown the potential to break the classical RSA algorithm using Shor's algorithm; with further development and an increasing number of available qubits, they could easily break today's security systems. In the future, we could transmit data safely using the same quantum principles in the post-quantum cryptography era. Quantum key distribution could become the protocol used to transfer secure information without being compromised, even by quantum computers running algorithms such as Shor's, which can break current encryption techniques like RSA. We need to develop quantum network infrastructure so that the quantum key distribution algorithm can be implemented in the future to enable secure transmission of data.
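
As a rough illustration of the quantum key distribution idea discussed here, the following sketch simulates the sifting stage of a BB84-style protocol classically with NumPy. BB84 is named as a representative QKD scheme (the abstract does not specify one), and all variable names and parameters are invented for this sketch.

```python
# Illustrative BB84-style key sifting simulation (a classical stand-in for
# the quantum channel); 'n_bits' and the helper names are invented here.
import numpy as np

rng = np.random.default_rng(seed=1)
n_bits = 1024

alice_bits = rng.integers(0, 2, n_bits)    # raw key material
alice_bases = rng.integers(0, 2, n_bits)   # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n_bits)     # Bob guesses bases independently

# When bases match, Bob measures Alice's bit faithfully; otherwise his
# outcome is random (this models the quantum measurement statistics).
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n_bits))

# Sifting: keep only positions where the bases agreed (~50% of them).
sifted_key = alice_bits[match]
print(f"sifted key length: {sifted_key.size} of {n_bits} raw bits")

# Comparing a random sample of the sifted key would reveal an eavesdropper
# as an elevated error rate; here the channel is noiseless, so errors are 0.
assert np.array_equal(sifted_key, bob_bits[match])
```
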
7

Bertran, R., Y. Sugawara, H. M. Jacobson, A. Buyuktosunoglu, and P. Bose. "Application-level power and performance characterization and optimization on IBM Blue Gene/Q systems." IBM Journal of Research and Development 57, no. 1/2 (January 2013): 4:1–4:17. http://dx.doi.org/10.1147/jrd.2012.2227580.

8

McLaughlin, G. T., L. Y. Liu, D. J. DeGroff, and K. W. Fleck. "IBM Power Systems platform: Advancements in the state of the art in IT availability." IBM Systems Journal 47, no. 4 (2008): 519–33. http://dx.doi.org/10.1147/sj.2008.5386517.

9

Dillard, Jesse F. "Professional Services, IBM, and the Holocaust." Journal of Information Systems 17, no. 2 (September 1, 2003): 1–16. http://dx.doi.org/10.2308/jis.2003.17.2.1.

Abstract:
IBM and the Holocaust both represent power, ideology, and rational administration. We view one as logical and commendable, the other as pathological and deplorable, and both as a manifestation of instrumental rationality. IBM and the Holocaust (Black 2001) explores the connection between IBM and its dynamic leader, Thomas J. Watson, and the program of genocide carried out against European Jewry over the 12-year reign of Germany's Third Reich. Those who controlled, applied, and supported IBM's information processing technology are implicated in operationalizing the lethal ideology of the National Socialist German Workers (Nazi) Party. Considering relationships between the fascist and the capitalist extremes provides a starting point in a dialogue that challenges the privileged position of instrumental rationality in evaluating choices related to the development, implementation, and application of information technology. The investigation of IBM and the Holocaust illustrates the potential for technology to reinforce, and be reinforced by, a prevailing ideology through the tangible manifestations of instrumental rationality: machines, professionals, and administrative structures. The ends to which the technology and its manifestations are applied by those implementing and supporting it become lost in striving to efficiently accomplish the immediate, intermediate tasks. The technological manifestations and their complicity in the Holocaust illustrate the inability of instrumental rationality to adequately incorporate the requisite ethical and moral dimensions, a lacuna no less present, though not so obvious, in actions undertaken within the current economic and political spheres by those employing the same tangible manifestations of instrumental rationality. The inability of those most directly implicated to reflexively consider the alliance of technology and ideology assures the continuing propensity of both good and evil. Unfortunately, the social systems that spawned the impressive technological developments do not provide adequate means for discerning and ethically evaluating the destructive and the creative potential.
10

Siau, Keng, Fiona Fui-Hoon Nah, Brian E. Mennecke, and Shu Z. Schiller. "Co-creation and Collaboration in a Virtual World." Journal of Database Management 21, no. 4 (October 2010): 1–13. http://dx.doi.org/10.4018/jdm.2010100101.

Abstract:
One of the most successful and useful implementations of 3D virtual worlds is in the area of education and training. This paper discusses the use of virtual worlds in education and describes an innovative 3D visualization design project using one of the most popular virtual worlds, Second Life. This ongoing project is a partnership between IBM and three universities in the United States: the University of Nebraska-Lincoln, Iowa State University, and Wright State University. More than 400 MBA students have participated in this project by completing a creative design project that involves co-creation and collaboration in Second Life. The MBA students from the three universities worked in pairs to create designs to represent concepts related to IBM Power Systems, a family of IBM servers. The paper discusses observations and reflections on the 3D visualization design project. The paper concludes with a discussion of future research directions in applying virtual worlds in education.
11

K C, Santosh. "IBM Watson Studio for Building an Automated Essay Grading System." International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (July 31, 2022): 3471–79. http://dx.doi.org/10.22214/ijraset.2022.45755.

Abstract:
Essays are one of the most important methods for assessing a student's learning and intelligence. Manual essay grading is a time-consuming process for the evaluator; a solution to this problem is to carry out the evaluation by computer. The aim of this paper is to understand and analyze current essay grading systems and to compare them, focusing primarily on the technique used, performance, and the attributes considered.
12

Chornous, Galyna O., and Viktoriya L. Gura. "Integration of Information Systems for Predictive Workforce Analytics: Models, Synergy, Security of Entrepreneurship." European Journal of Sustainable Development 9, no. 1 (February 1, 2020): 83. http://dx.doi.org/10.14207/ejsd.2020.v9n1p83.

Abstract:
The era of the information economy leads not only to redesigning business models of organizations but also to rethinking the human resources paradigm to harness the power of state-of-the-art technology for Human Capital Management (HCM) optimization. Predictive analytics and computational intelligence will bring transformative change to HCM. This paper deals with issues of HCM optimization based on the models of predictive workforce analytics (WFA) and Business Intelligence (BI). The main trends in the implementation of predictive WFA in the world and in Ukraine, the need to protect business data for security of entrepreneurship, and the tasks of predictive analysis in the context of proactive HCM were examined. Some models of effective integration of information systems for predictive WFA were proposed, and their advantages and disadvantages were analyzed. These models combine ERP, HCM, BI, Predictive Analytics, and security systems. As an example, the integration of an HCM system, an analytics platform (IBM SPSS Modeler), a BI system (IBM Planning Analytics), and a security platform (IBM QRadar Security Intelligence Platform) for predicting employee attrition was shown. This integration provides a ‘prediction – planning – performance review – causal analysis’ cycle to support protected data-driven decision making in proactive HCM. The results of the research support ensuring the effective management of the full spectrum of risks associated with the collection, storage and use of data. Keywords: Workforce Analytics (WFA), Human Capital Management (HCM), Predictive Analytics, Proactive Management, BI, Information Systems (IS), Integration, Security of Entrepreneurship
13

Estévez Ruiz, Eduardo Patricio, Giovanny Eduardo Caluña Chicaiza, Fabian Rodolfo Jiménez Patiño, Joaquín Cayetano López Lago, and Saravana Prakash Thirumuruganandham. "Dense Matrix Multiplication Algorithms and Performance Evaluation of HPCC in 81 Nodes IBM Power 8 Architecture." Computation 9, no. 8 (July 30, 2021): 86. http://dx.doi.org/10.3390/computation9080086.

Abstract:
Optimizing HPC systems based on performance factors and bottlenecks is essential for designing an HPC infrastructure with the best characteristics at a reasonable cost. Such insight can only be achieved through a detailed analysis of existing HPC systems and the execution of their workloads. The “Quinde I” is the only and most powerful supercomputer in Ecuador and is currently ranked third in South America. It was built with IBM Power 8 servers. In this work, we measured its performance using different parameters from High-Performance Computing (HPC) to compare it with theoretical values and values obtained from tests on similar models. To measure its performance, we compiled and ran different benchmarks with the specific optimization flags for Power 8 to get the maximum performance from the current configuration of the hardware installed by the vendor. The inputs of the benchmarks were varied to analyze their impact on system performance. In addition, we compile and compare the performance of two algorithms for dense matrix multiplication, SRUMMA and DGEMM.
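
As context for the DGEMM measurements discussed above, here is a minimal sketch of how dense matrix multiply throughput is commonly measured. The matrix size and the NumPy/BLAS route are illustrative assumptions, not the paper's actual benchmark code.

```python
# Minimal DGEMM throughput measurement in the spirit of the paper's
# benchmarks; matrix size and the single-run timing are arbitrary choices.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                      # delegates to the BLAS dgemm routine
elapsed = time.perf_counter() - t0

# A dense n x n multiply costs ~2*n^3 floating-point operations.
gflops = 2 * n**3 / elapsed / 1e9
print(f"{n}x{n} DGEMM: {elapsed:.3f} s, {gflops:.1f} GFLOP/s")
```
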
14

Uzaman, Sardar Khaliq, Atta ur Rehman Khan, Junaid Shuja, Tahir Maqsood, Faisal Rehman, and Saad Mustafa. "A Systems Overview of Commercial Data Centers." International Journal of Information Technology and Web Engineering 14, no. 1 (January 2019): 42–65. http://dx.doi.org/10.4018/ijitwe.2019010103.

Abstract:
Data center facilities play a vital role in present and forthcoming information and communication technologies. Internet giants such as IBM, Microsoft, Google, Yahoo, and Amazon hold large data centers to provide cloud computing services and web hosting applications. Due to rapid growth in data center size and complexity, it is essential to highlight important design aspects and challenges of data centers. This article presents market segmentation of the leading data center operators and discusses the infrastructural considerations, namely energy consumption, power usage effectiveness, cost structure, and system reliability constraints. Moreover, it presents data center network design, classification of the data center servers, recent developments, and future trends of the data center industry. Furthermore, the emerging paradigm of mobile cloud computing is debated with respect to the research issues. Preliminary results for the energy consumption of task scheduling techniques are also provided.
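
One of the infrastructural metrics listed above, power usage effectiveness (PUE), is simply the ratio of total facility power to the power delivered to IT equipment; a minimal sketch with invented figures:

```python
# Power usage effectiveness (PUE) is total facility power divided by the
# power delivered to IT equipment; the figures below are made up.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(1500.0, 1000.0))  # 1.5: 0.5 W of overhead per watt of IT load
```
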
15

Mahmood, Amal Ibrahim, Sadik Kamel Gharghan, Mohamed A. A. Eldosoky, and Ahmed M. Soliman. "Powering Implanted Devices Wirelessly Using Spider-Web Coil." Journal of Techniques 5, no. 4 (November 10, 2023): 28–34. http://dx.doi.org/10.51173/jt.v5i4.1650.

Abstract:
Implantable biomedical (IBM) systems and biomedical sensors can improve quality of life, identify sickness, monitor biological signs, and replace the function of malfunctioning organs. However, these devices require continuous battery power, which is limited by the battery's capacity and lifetime, reducing the device's effectiveness. The wireless power transfer (WPT) technique, specifically magnetic resonator coupling (MRC), was utilized to address the limited battery capacity of IBMs. By using WPT–MRC, the device can obtain power wirelessly, thereby reducing the need for frequent battery replacements and increasing the device's potential. In this research, a spider-web coil (S-WC)-based MRC–WPT system was designed and experimentally evaluated to extend the usage time of a low-power IBM's rechargeable battery. The presented S-WC–MRC–WPT design uses a series–parallel (S–P) configuration to power the IBM. Both transmitter and receiver coils exhibit an operating oscillation frequency of 6.78 MHz. The paper reports on laboratory experiments assessing the performance of the proposed design in terms of output DC at three different resistive loads and various transmission distances, with the receiver and transmitter coils kept in alignment. Transfer distances ranging from 10 to 100 mm were investigated to analyze the DC output current (Idc). Specifically, under a 30 V voltage source (VS) and a transfer distance of 20 mm, the DC output current was observed to be 330, 321, and 313 mA at resistive loads of 50, 100, and 150 Ω, respectively.
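
Magnetic resonant coupling works by tuning both coils to the same resonant frequency, f = 1 / (2π√(LC)). A small worked calculation follows; the coil inductance is an assumed value (the abstract does not give one), used only to show how the tuning capacitance for the reported 6.78 MHz would follow.

```python
# Solve f = 1 / (2*pi*sqrt(L*C)) for the capacitance C that resonates an
# assumed coil inductance at the paper's reported 6.78 MHz operating point.
import math

f = 6.78e6          # operating frequency from the abstract, Hz
L = 2.0e-6          # assumed coil inductance, henries (not from the paper)

C = 1.0 / ((2 * math.pi * f) ** 2 * L)
print(f"required tuning capacitance: {C * 1e12:.1f} pF")  # ~275 pF
```
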
16

Raplee, Jack. "Blackout Punch." Mechanical Engineering 123, no. 08 (August 1, 2001): 68–69. http://dx.doi.org/10.1115/1.2001-aug-6.

Abstract:
A California landscaper discovers an unorthodox solution to rolling blackouts, using remote power generation. The AuraGen remote power unit, under the hood of Sepulveda Building Materials' service truck, could provide enough power to keep the company operating during a rolling blackout. With the onset of the summer months, rolling blackouts are likely to continue, possibly with increasing regularity as California's energy crunch extends. The result is that many companies will still suffer more from the lack of power supply and the increasing heat. The mainframe computer, an IBM PowerPC, is networked and communicates with the other stores using Strata model telephone tie lines from Toshiba Phone Systems. Al Fergades, director of vehicle maintenance for Sepulveda, suggested that the company run cables to his service truck and use the power from the onboard generator. The company has been able to evaluate exactly which office machines need to be operating when a power outage occurs again, because now it has had the chance to think about it.
17

Petersen, Emily January, and Ryan M. Moeller. "Using Antenarrative to Uncover Systems of Power in Mid-20th Century Policies on Marriage and Maternity at IBM." Journal of Technical Writing and Communication 46, no. 3 (March 23, 2016): 362–86. http://dx.doi.org/10.1177/0047281616639473.

18

Wolski, Rich, and David Cann. "Compiler-Enforced Cache Coherence Using a Functional Language." Scientific Programming 5, no. 2 (1996): 161–71. http://dx.doi.org/10.1155/1996/537293.

Abstract:
The cost of hardware cache coherence, both in terms of execution delay and operational cost, is substantial for scalable systems. Fortunately, compiler-generated cache management can reduce program serialization due to cache contention; increase execution performance; and reduce the cost of parallel systems by eliminating the need for more expensive hardware support. In this article, we use the Sisal functional language system as a vehicle to implement and investigate automatic, compiler-based cache management. We describe our implementation of Sisal for the IBM Power/4. The Power/4, briefly available as a product, represents an early attempt to build a shared memory machine that relies strictly on the language system for cache coherence. We discuss the issues associated with deterministic execution and program correctness on a system without hardware coherence, and demonstrate how Sisal (as a functional language) is able to address those issues.
19

Mustafa, Dheya. "Performance Evaluation of Massively Parallel Systems Using SPEC OMP Suite." Computers 11, no. 5 (May 5, 2022): 75. http://dx.doi.org/10.3390/computers11050075.

Abstract:
Performance analysis plays an essential role in achieving a scalable performance of applications on massively parallel supercomputers equipped with thousands of processors. This paper is an empirical investigation to study, in depth, the performance of two of the most common High-Performance Computing architectures in the world. IBM has developed three generations of Blue Gene supercomputers (Blue Gene/L, P, and Q) that use, at a large scale, low-power processors to achieve high performance. Better CPU core efficiency has been empowered by a higher level of integration to gain more parallelism per processing element. On the other hand, the Intel Xeon Phi coprocessor, armed with 61 on-chip x86 cores, provides high theoretical peak performance, as well as software development flexibility with existing high-level programming tools. We present an extensive evaluation study of the performance peaks and scalability of these two modern architectures using SPEC OMP benchmarks.
20

Noor, Ahmed K. "Game Changers." Mechanical Engineering 136, no. 09 (September 1, 2014): 30–35. http://dx.doi.org/10.1115/1.2014-sep-1.

Abstract:
This article discusses the recent development in “cognitive computing” technology. Unlike expert systems of the past, which required inflexible hard-coded expert rules, cognitive computers interpret unstructured data (sensory information, images, voices, and numbers), navigate through vast amounts of information, learn by experience, and participate in dialogues with humans using natural language to solve extremely complex problems. The U.S. Defense Advanced Research Projects Agency is funding a program called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) to develop machine technology that will function like biological neural systems. IBM, Hughes Research Labs, and several universities are working on this program. The aim is to build an electronic system that matches a mammalian brain in function, size, and power consumption. It would recreate 10 billion neurons and 100 trillion synapses, consume one kilowatt (same as a small electric heater), and measure less than 2,000 cubic centimeters. Several other projects are also under way to apply cognitive technology to robotics, cars, and production systems.
21

Bao, Kai, Mi Yan, Rebecca Allen, Amgad Salama, Ligang Lu, Kirk E. Jordan, Shuyu Sun, and David Keyes. "High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics." SPE Journal 21, no. 03 (June 15, 2016): 0853–63. http://dx.doi.org/10.2118/163621-pa.

Abstract:
The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of subsurface systems, fluid thermodynamic properties over a broad range of temperature, pressure, and composition under different geological conditions are required, although the experimental results are limited. Although equations of state can reproduce the existing experimental data within certain ranges of conditions, their extrapolation out of the experimental data range is still limited. The present framework will definitely provide better flexibility and predictability compared with conventional methods.
22

Hsu, Li Fu. "Key Technologies of Cloud Computing Model." Advanced Materials Research 765-767 (September 2013): 972–74. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.972.

Abstract:
Cloud computing is a new type of computing mode: it distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computation power, storage space and software services according to demand. The understanding of cloud computing is still developing and changing, and cloud computing as yet has no unanimous definition. This paper describes the three main service forms of cloud computing, SaaS, PaaS and IaaS; compares the definitions of cloud computing given by Google, Amazon, IBM and other companies; summarizes the basic characteristics of cloud computing; and emphasizes key technologies such as data storage, data management, virtualization and the programming model.
23

Tseng, Chien Hsun. "Analysis of Parallel Multidimensional Wave Digital Filtering Network on IBM Cell Broadband Engine." Journal of Computational Engineering 2014 (February 17, 2014): 1–13. http://dx.doi.org/10.1155/2014/793635.

Abstract:
As an alternative approach for the numerical integration of physical systems, the MDWDF technique has become important in the field of numerical analysis due to its attractive features, for example massive parallelism and high accuracy, both inherent in its nature. In this study, speed-up efficiencies of a MDWDF network are studied for the linearized shallow water system, which plays an important role in fluid dynamics. To achieve this goal, full parallelism of the MDWDF network is established in the first place based on the chained MD retiming technique. Following implementation on the IBM Cell Broadband Engine (Cell/BE), excellent performance of the fully parallel architecture is revealed. The IBM Cell/BE, containing 1 power processor element (PPE) and 8 synergistic processor elements (SPEs), perfectly fits the architecture of the retimed MDWDF model. Empirical results demonstrate that the fully parallelized model with 8 processors (1 PPE + 7 SPEs) outperforms the other three models, the partial right/left-loop retimed models and the fully sequential model, with 4× improvements for scheduled grids of 51×51. In addition, for scheduled fine grids of 201×201, the fully parallel model is shown to outperform these models by up to 7×.
24

Gholamhosseini, Leila, Ali Behmanesh, Somayeh Nasiri, Seyed Jafar Ehsanzadeh, and Farahnaz Sadoughi. "Cloud-Based Internet of Things in Healthcare Applications: A Systematic Literature Review." Frontiers in Health Informatics 12 (July 1, 2023): 145. http://dx.doi.org/10.30699/fhi.v12i0.451.

Abstract:
Introduction: The Internet of Things (IoT) and Cloud computing are two recent technological advances whose potential has not yet been realized in a range of industries. These technologies have been used in healthcare systems to improve their performance. In this regard, the IoT generates a large amount of data, and cloud computing is a viable option for data storage and complex computing. The purpose of this study is to identify and categorize various aspects of CIoT-based healthcare in terms of main application domains, sensors, wireless communication technologies, messaging protocols, cloud platforms, and artificial intelligence (AI) algorithms. Material and Methods: We conducted a systematic literature review, reported according to the PRISMA guideline. PubMed, IEEE, Scopus, and Web of Science were searched using the related keywords and their synonyms. Two independent authors reviewed the papers' eligibility according to the defined inclusion and exclusion criteria. Results: Of the 2,118 papers retrieved, 61 were eventually selected for the study. The results revealed that the majority of CIoT research works addressed patient monitoring systems, particularly cardiovascular patient monitoring, with Amazon and IBM the dominant Cloud platforms. In addition, the most widely used communication technologies for CIoT in healthcare are cellular networks (3G and 4G), Wi-Fi, and Bluetooth. Cardiovascular, environmental, and position sensors are also the most common types of sensors used in healthcare CIoT applications. Among the Cloud platform providers, Amazon and IBM have the highest utility in healthcare systems. The majority of the included studies used Cloud-based AI algorithms to diagnose, classify, and predict diseases. Conclusion: The integration of the Cloud into the IoT can support healthcare systems in terms of processing power, storage capacity, security, privacy, performance, reliability, and scalability. We suggest researchers conduct experimental studies to evaluate the effectiveness of the CIoT approach in healthcare applications.
25

Popa, Stefan, Mihai Ivanovici, and Radu-Mihai Coliban. "Optimal Implementations of 8b/10b Encoders and Decoders for AMD FPGAs." Electronics 13, no. 6 (March 13, 2024): 1062. http://dx.doi.org/10.3390/electronics13061062.

Abstract:
The 8b/10b IBM encoding scheme is used in a plethora of communication technologies, including USB, Gigabit Ethernet, and Serial ATA. We propose two primitive-based structural designs of an 8b/10b encoder and two of an 8b/10b decoder, all targeted at modern AMD FPGA architectures. Our aim is to reduce the amount of resources used for the implementations. We compare our designs with implementations resulting from behavioral models as well as with state-of-the-art solutions from the literature. The implementation results show that our solutions provide the lowest resource utilization with comparable maximum operating frequency and power consumption. The proposed structural designs are suitable for resource-constrained data communication protocol implementations that employ the IBM 8b/10b encoding scheme. This paper is an extended version of our paper published at the 2022 International Symposium on Electronics and Telecommunications (ISETC), Timisoara, Romania, 10–11 November 2022.
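
The running-disparity bookkeeping that 8b/10b encoders perform can be sketched as follows. The two-entry code table is a made-up subset for illustration, not the real IBM 8b/10b tables, which map all 256 byte values (plus control symbols).

```python
# Toy illustration of the running-disparity rule in 8b/10b encoding; the
# tiny code table below is an invented subset, not the real IBM tables.
TABLE = {
    # byte: (code word used at negative disparity, code word at positive)
    0x00: ("1001110100", "0110001011"),
    0x01: ("0111010100", "1000101011"),
}

def encode(byte: int, disparity: int) -> tuple[str, int]:
    word = TABLE[byte][0 if disparity < 0 else 1]
    # Update disparity: +1 for each one bit, -1 for each zero bit.
    disparity += word.count("1") - word.count("0")
    return word, disparity

rd = -1  # running disparity conventionally starts negative
for b in (0x00, 0x01, 0x00):
    code, rd = encode(b, rd)
    print(f"{b:#04x} -> {code} (RD now {rd:+d})")
```

Alternating between the two columns is what keeps the long-run ratio of ones to zeros balanced on the wire, which is the property the encoders and decoders in the paper must preserve.
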
26

Guo, Feng, Jian Li, Chi Zhang, Yizhi Zhu, Caiyang Yu, Qingsong Wang, and Giuseppe Buja. "Optimized Power and Capacity Configuration Strategy of a Grid-Side Energy Storage System for Peak Regulation." Energies 16, no. 15 (July 27, 2023): 5644. http://dx.doi.org/10.3390/en16155644.

Abstract:
The optimal configuration of the rated capacity, rated power and daily output power is an important prerequisite for energy storage systems to participate in peak regulation on the grid side. Economic benefits are the main reason driving investment in energy storage systems. In this paper, the relationship between the economic indicators of an energy storage system and its configuration is first analyzed, and the optimization objective function is formulated. Then, according to the objective limitations of the energy storage system configuration and operation, the constraints are formulated. A set of typical parameters is selected, and the CPLEX (IBM ILOG CPLEX Optimization Studio) solver is used in MATLAB to solve the optimal configuration results. Several sets of optimization results are obtained by taking different subjective coefficient β values, and the economic and social benefits of the optimization results are analyzed. When the economic benefits of the energy storage system are more important, the value of β needs to be smaller, such as a value of 1000. Conversely, when the peak-regulation effect is more important, the value of β should be larger. Finally, configuration results of the control groups are given, and the effect of optimizing the calculation for improving economic benefits is verified.
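
A minimal sketch of the kind of storage sizing and dispatch problem described here, written against IBM's docplex Python API to CPLEX rather than the MATLAB toolchain the paper uses; it requires a local CPLEX installation, and every number (prices, bounds, the capacity cost coefficient) is an invented placeholder.

```python
# Sketch of a joint capacity/dispatch optimization in IBM's docplex API;
# all data below are illustrative placeholders, not the paper's inputs.
from docplex.mp.model import Model

T = range(24)
price = [30 if 8 <= t <= 20 else 10 for t in T]   # peak vs. off-peak, $/MWh

m = Model(name="storage_peak_regulation")
cap = m.continuous_var(lb=0, ub=100, name="rated_capacity_mwh")
p = m.continuous_var_list(T, lb=-20, ub=20, name="power_mw")  # +discharge

# State of charge must stay within the rated capacity at every hour.
soc = 0.5 * cap
for t in T:
    soc = soc - p[t]
    m.add_constraint(soc >= 0)
    m.add_constraint(soc <= cap)

# Revenue from energy arbitrage minus a simple per-MWh capacity cost.
m.maximize(m.sum(price[t] * p[t] for t in T) - 15 * cap)

sol = m.solve()
if sol:
    print(f"optimal capacity: {sol.get_value(cap):.1f} MWh")
```
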
27

Ally Chingumbe, Anifa, and Victoria Mahabi. "Assessment of the Factors Affecting the National ICT Broadband Backbone (NICTBB) Systems Restoration Time." Tanzania Journal of Engineering and Technology 43, no. 2 (July 15, 2024): 23–45. http://dx.doi.org/10.52339/tjet.v43i2.1027.

Abstract:
This study investigates the factors behind prolonged system restoration times when national ICT broadband backbone (NICTBB) services are affected by breakdown incidents. NICTBB is the government-owned backbone infrastructure constructed nationally by the Republic of Tanzania to increase the usage of ICT for equitable and sustainable socio-economic development and accelerate poverty reduction. This study utilised a mixture of exploratory and descriptive approaches. The sample was determined using purposive and simple random sampling, involving 289 respondents. A data reliability test was undertaken, followed by factor analysis, analysis of variance (ANOVA) and linear regression analysis, and confirmed with confirmatory factor analysis (CFA) using IBM SPSS Amos 28 to test model fitness. The study revealed that the maintenance centres are few and sparsely located, and the distance from maintenance centres increases travelling time, resulting in increased time to restore services. Other factors affecting restoration time include an initial infrastructure design without trenching to locate points of cut, the absence of dedicated standby vehicles for NICTBB restoration, the retirement of experienced and well-trained staff leaving behind untrained maintenance personnel, and insufficient funds for maintenance. Lastly, power equipment is neglected, as services sometimes go down due to commercial power cuts, tripped circuit breakers, and uncharged batteries. Recommendations include having proper maintenance processes, keeping enough restoration resources at all maintenance centres, and maintaining proper communication.
28

Kudryashov, V. E., and A. N. Fionov. "Problem of stability of modern cryptosystems against the background of the emergence of quantum computers." Interexpo GEO-Siberia 6 (May 18, 2022): 109–15. http://dx.doi.org/10.33764/2618-981x-2022-6-109-115.

Abstract:
Every year, quantum computers are becoming more capable and cheaper to produce. For example, the “Zuchongzhi” computer uses 56 qubits and is able to solve, within a few hours, problems designed to demonstrate quantum acceleration that would take classical supercomputers tens of thousands of years. Currently, IBM allows any Internet user to connect remotely and work on a real quantum computer, albeit one with only a few qubits. Modern cryptography is based on the fact that integer factorization and the discrete logarithm are hard for classical algorithms, but with the use of Shor's algorithm on a quantum computer these difficulties are easily bypassed. Some of the most popular cryptographic systems, RSA (integer factorization), DH (discrete logarithm), and ECDSA (elliptic curves over finite fields), will no longer be reliable tools for data encryption with the advent of productive quantum computers. In this article, post-quantum cryptographic systems are studied and compared with the classical RSA system.
29

Guo, Chenxi. "Grover’s Algorithm – Implementations and Implications." Highlights in Science, Engineering and Technology 38 (March 16, 2023): 1071–78. http://dx.doi.org/10.54097/hset.v38i.5997.

Abstract:
Ever since the realisation of the potential computational power of quantum-natured circuits, multiple quantum algorithms have been proposed, exploiting quantum superposition or quantum entanglement to simulate quantum systems that classical computers cannot efficiently probe. The most notable among these is Shor's algorithm, which takes advantage of the quantum Fourier transform and has been shown capable of solving integer factorisation problems within polynomial time. This article's focus is on one type of quantum algorithm based on amplitude amplification, namely Grover's algorithm. The algorithm's working principle is explained, and a discussion of its recent developments and possible directions of future research is provided. In addition, implementations of Grover's algorithm have been performed using the IBM Quantum Lab, followed by a reference to its connections to the life sciences.
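
Grover's amplitude amplification can be simulated directly on a statevector without any quantum SDK. A minimal sketch for a 3-qubit search space follows; the marked index is an arbitrary choice.

```python
# Plain statevector simulation of Grover's algorithm for a 3-qubit search
# space; with N = 8, two iterations boost the marked item to ~94.5%.
import numpy as np

n = 3
N = 2 ** n
marked = 5

state = np.full(N, 1 / np.sqrt(N))        # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1               # phase-flip the marked item

mean_flip = 2 * np.full((N, N), 1 / N) - np.eye(N)  # inversion about mean

# ~ (pi/4) * sqrt(N) iterations maximize the marked amplitude.
for _ in range(int(np.pi / 4 * np.sqrt(N))):
    state = mean_flip @ (oracle @ state)

probs = state ** 2
print(f"P(measure {marked}) = {probs[marked]:.3f}")
```
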
30

Tami, Y., K. Sebaa, M. Lahdeb, O. Usta, and H. Nouri. "Extended mixed integer quadratic programming for simultaneous distributed generation location and network reconfiguration." Electrical Engineering & Electromechanics, no. 2 (March 5, 2023): 93–100. http://dx.doi.org/10.20998/2074-272x.2023.2.14.

Abstract:
Introduction. To minimise power loss, maintain the voltage within the acceptable range, and improve power quality in power distribution networks, reconfiguration and optimal distributed generation placement are presented. Power flow analysis and advanced optimization techniques that can handle significant combinatorial problems must be used in distribution network reconfiguration investigations. The optimization approach to be used depends on the size of the distribution network. Our methodology simultaneously addresses two nonlinear discrete optimization problems with an intelligent algorithm that identifies the best solution. The proposed work is novel in that it uses the Extended Mixed-Integer Quadratic Programming (EMIQP) technique, a deterministic approach for determining the topology that will effectively minimize power losses in the distribution system by strategically sizing and positioning Distributed Generation (DG) while taking network reconfiguration into account. The resulting optimization problem has a quadratic form and is solved with an efficient Quadratic Mixed Integer Programming (QMIP) solver (IBM®). Extensive numerical validation, carried out on typical IEEE 33- and 69-bus systems at three different load factors to ascertain the range and impact of the various variables, shows that our methodology outperforms cutting-edge algorithms described in the literature in terms of the obtained power loss reduction. Practical value. Test cases examine the effectiveness of concurrent reconfiguration and DG allocation versus reconfiguration alone. According to the findings, network reconfiguration combined with the installation of a distributed generator in the proper location and at the proper size is effective, yielding lower losses and a better voltage profile.
31

Jung, Eui S. "Development of An Expert Systems for Ergonomic Workplace Design and Evaluation." Proceedings of the Human Factors Society Annual Meeting 32, no. 11 (October 1988): 617–21. http://dx.doi.org/10.1518/107118188786762423.

Abstract:
Industrial workplace design and evaluation is the outcome of a multi-factored process which requires diverse disciplines and interrelated techniques to consider alternative factors and to achieve an optimal design solution. However, it is still evident that industrial workplace design fails to incorporate ergonomic principles throughout all stages of design and evaluation. One approach to solving the problem is to introduce an expert system that integrates existing analytic models and expertise into a framework which guides the designer along the necessary steps to reach a solution, with explanations of its reasoning process. This paper discusses the framework of a prototype expert system implemented using VM/PROLOG on an IBM VM/CMS mainframe. A rule-based production system was selected as the representation scheme due to its versatility and expressive power. It consists of two main parts. First, modularized knowledge bases incorporate multidisciplinary ergonomic factors such as biomechanics, work physiology, and psychophysics. Each module stores knowledge either in the fact base or the rule base; however, massive experimental findings and table-lookups are stored separately in an external database, accessed through its interface and retrieved without burdening the main inference mechanism. Second, the inference mechanism was built as a control mechanism with a front-end user interface. It has a pattern-directed architecture coupled with a normal forward/backward chaining mechanism. The prototype expert system also incorporates analytical models (usually written in FORTRAN) into the reasoning process, so it is highly flexible to problem specificity.
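
The rule-based production system architecture described above can be sketched with a tiny forward-chaining loop. The ergonomics rules below are simplified stand-ins invented for illustration (the 23 kg cutoff echoes the NIOSH lifting limit), not the system's actual knowledge base or VM/PROLOG code.

```python
# Tiny forward-chaining production system: rules fire repeatedly against a
# fact base until no rule can add a new fact. Rules here are illustrative.
facts = {"lift_weight_kg": 30, "lifts_per_minute": 5}

# Each rule: (name, condition over the fact base, consequent fact to assert)
rules = [
    ("high_load", lambda f: f["lift_weight_kg"] > 23,
     ("risk_biomechanical", True)),
    ("high_rate", lambda f: f["lifts_per_minute"] > 4,
     ("risk_physiological", True)),
    ("redesign", lambda f: f.get("risk_biomechanical") and f.get("risk_physiological"),
     ("recommend_redesign", True)),
]

changed = True
while changed:
    changed = False
    for name, cond, (key, value) in rules:
        if cond(facts) and facts.get(key) != value:
            facts[key] = value
            changed = True
            print(f"rule '{name}' fired -> {key} = {value}")
```
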
32

Vahbeh, Mutasem, Emre Özer, and Fırat Kaçar. "Design of Lossless Negative Capacitance Multiplier Employing a Single Active Element." Electronics 13, no. 6 (March 21, 2024): 1163. http://dx.doi.org/10.3390/electronics13061163.

Abstract:
In this paper, a new negative lossless grounded capacitance multiplier (GCM) circuit based on a Current Feedback Operational Amplifier (CFOA) is presented. The proposed circuit includes a single CFOA, four resistors, and a grounded capacitor. In order to reduce the power consumption, the internal structure of the CFOA is realized with dynamic threshold-voltage MOSFET (DTMOS) transistors. The effects of parasitic components on the operating frequency range of the proposed circuit are investigated. The simulation results were obtained with the SPICE program using 0.13 µm IBM CMOS technology parameters. The total power consumption of the circuit was 1.6 mW. The functionality of the circuit is provided by the capacitance cancellation circuit. PVT (Process, Voltage, Temperature) analyses were performed to verify the robustness of the proposed circuit. An experimental study is provided to verify the operability of the proposed negative lossless GCM using commercially available integrated circuits (ICs).
33

Pawar, Sahil, Girik Tripathi, Harsh Patel, and Anant Singh. "Machine Learning based Movie Recommendation System." International Journal for Research in Applied Science and Engineering Technology 11, no. 10 (October 31, 2023): 637–42. http://dx.doi.org/10.22214/ijraset.2023.56077.

Abstract:
Machine Learning (ML) is a popular engineering technique for making machines think or use their intelligence like humans, by mimicking human traits and learning to make appropriate decisions and perform assigned tasks properly. Some of the companies that have done outstanding work in the field of ML and AI are Facebook, Google, Microsoft, IBM, etc., which are investing millions and billions in this very field of ML development and research. Currently there is a huge market and need for building intelligent recommendation systems. One of the simplest and most desirable of these is the Recommendation System (RS). Recommendation systems have proved to play a crucial role in e-commerce websites, online shopping, dating apps, social networking, digital marketing, online advertisements, etc., by providing personalized recommendations and feedback to users according to their preferences and choices. The subject of this report is an ML-based movie recommendation system. As the topic of this paper suggests, we discuss various ways and approaches in ML (AI) to build a movie Recommendation System (RS) application. There are many approaches to building a recommendation system according to one's need, such as collaborative filtering, content-based recommendation systems, and the K-means algorithm, which are briefly covered in this report; we primarily discuss and work with the K-means algorithm approach.
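
A minimal sketch of the K-means approach the report favours, using scikit-learn on invented stand-in data rather than a real ratings or feature set; the cluster count and feature dimensions are arbitrary choices.

```python
# Sketch of K-means movie grouping: cluster movies by feature vectors and
# recommend titles from the same cluster. The data here are random stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Rows are movies, columns are per-genre scores (invented placeholder data).
movie_features = rng.random((500, 8))

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(movie_features)

def recommend(movie_idx: int, k: int = 5) -> np.ndarray:
    """Return up to k other movies from the same cluster as the given movie."""
    cluster = km.labels_[movie_idx]
    peers = np.flatnonzero(km.labels_ == cluster)
    return peers[peers != movie_idx][:k]

print(recommend(42))
```
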
34

Salikhov, R. B., V. Kh Abdrakhmanov, and I. N. Safargalin. "Internet of Things (IoT) Security Alarms on ESP32-CAM." Journal of Physics: Conference Series 2096, no. 1 (November 1, 2021): 012109. http://dx.doi.org/10.1088/1742-6596/2096/1/012109.

Abstract:
The article presents the basic requirements for systems operating on industrial "Internet of Things" (Industrial Internet of Things, IIoT) technology and presents the main technologies with which it is recommended to develop IIoT devices. These are low-level programming of microcontrollers using the example of the STM32, working with real-time operating systems (using Mbed OS as an example), and using low-power wireless technologies such as LoRa, 6LoWPAN, NB-IoT, ZigBee, and Bluetooth Low Energy (BLE). It is also necessary to use special protocols, for example the MQTT application-layer protocol, and special cloud services, for example Artik Cloud, IBM Cloud, and Intel Cloud. The article also covers the main considerations in choosing hardware (a development board for a microcontroller and wireless communication modules) as well as software, to accelerate initial debugging and prototype development. It also provides a brief overview of existing security alarm solutions based on Internet of Things (IoT) and Smart Home technologies. The idea of creating a budget solution based on Arduino and the ESP32-CAM is presented. A prototype was assembled and the device was tested in operation.
35

Arroyo, R. X., R. J. Harrington, S. P. Hartman, and T. Nguyen. "IBM POWER7 systems." IBM Journal of Research and Development 55, no. 3 (May 2011): 2:1–2:13. http://dx.doi.org/10.1147/jrd.2011.2131610.

36

Katochkov, V. M., G. V. Savin, and E. V. Toporkova. "TODAY's SMART CITY TRENDS IN THE WORLD." Bulletin of Udmurt University. Series Economics and Law 30, no. 3 (June 26, 2020): 340–45. http://dx.doi.org/10.35634/2412-9593-2020-30-3-340-345.

Abstract:
Digitalization today is a modern trend focused on streamlining processes and improving efficiency. The introduction of information and communication technologies has affected not only enterprises but also complex socio-economic systems such as cities, and this has driven the development of smart cities. Today, IBM, McKinsey and Price Waterhouse are considered the think tanks for the development of this concept, while Siemens, BMW, Mercedes-Benz, IBM, Phillips, General Electric, etc. have discovered the "smart" city as a future-oriented concept and chosen the particular niche in which they can offer their products and services. The development of smart cities is also influenced by intergovernmental organizations and their specialized departments, as well as research centers, institutes and laboratories. There are more than 100 cities in the world that claim the title of "smart city". Their current ratings reflect the implementation of modern information and communication technologies in people's lives with the aim of improving quality of life through the development of the urban environment. The IESE Cities in Motion Index, Global Power City Index, The Global Cities Index, The Global Cities Outlook, Juniper Research, and EasyPark Smart City Index ratings provide sufficient and comprehensive indicators for awarding this title. The standards ISO 37120 and ISO 37122 define the main indicators for smart cities, while the criteria for achieving city smartness are still only loosely formalized today; clarifying the meanings carried by this definition means prioritizing technologies for people, society and quality of life, and highlighting the differences from other similar definitions.
37

Abegaz, Brook W., Satish M. Mahajan, and Ebisa O. Negeri. "Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization." International Journal of Emerging Electric Power Systems 17, no. 3 (June 1, 2016): 251–66. http://dx.doi.org/10.1515/ijeeps-2015-0154.

Abstract:
Heterogeneous energy prosumers are aggregated to form a smart grid based energy community managed by a central controller which can maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance based pricing decomposition in IBM ILOG CPLEX Studio. A real-world, per-minute dataset from an energy community of forty prosumers in Amsterdam, Netherlands is used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities in the world to attain their energy resource utilization targets.
38

Idrees, Zeba, Zhuo Zou, and Lirong Zheng. "Edge Computing Based IoT Architecture for Low Cost Air Pollution Monitoring Systems: A Comprehensive System Analysis, Design Considerations & Development." Sensors 18, no. 9 (September 10, 2018): 3021. http://dx.doi.org/10.3390/s18093021.

Abstract:
With the swift growth in commerce and transportation in modern civilization, much attention has been paid to air quality monitoring, yet existing monitoring systems are unable to provide sufficient spatial and temporal resolution of the data with cost-efficient, real-time solutions. In this paper we investigate the issues, infrastructure, computational complexity, and procedures of designing and implementing real-time air quality monitoring systems. To overcome the defects of existing monitoring systems and to decrease the overall cost, this paper devises a novel approach to implementing an air quality monitoring system, employing edge-computing-based Internet-of-Things (IoT). In the proposed method, sensors gather air quality data in real time and transmit it to an edge computing device that performs the necessary processing and analysis. The complete infrastructure and evaluation prototype are developed on an Arduino board and the IBM Watson IoT platform. Our model is structured in such a way that it reduces the computational burden on the battery-powered sensing nodes (reduced to 70%) and balances it with the edge computing device, which has its own local database and can be mains-powered, as it is deployed indoors. Algorithms were employed to avoid temporary errors in the low-cost sensors and to manage cross-sensitivity problems. Automatic calibration is set up to ensure the accuracy of the sensors' reporting, achieving data accuracy of around 75–80% under different circumstances. In addition, a data transmission strategy is applied to minimize redundant network traffic and power consumption. Our model achieves a power consumption reduction of up to 23% at a significantly low cost. Experimental evaluations were performed under different scenarios to validate the system's effectiveness.
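
Two of the ideas above, filtering transient sensor errors and suppressing redundant transmissions, can be sketched as follows; the median window size, change threshold, and sample values are illustrative assumptions, not the paper's parameters.

```python
# Sketch: smooth raw sensor readings with a median filter to suppress
# transient spikes, and transmit only when the value changes enough to
# matter, reducing redundant network traffic. All parameters are invented.
from collections import deque

WINDOW, THRESHOLD = 5, 2.0
window = deque(maxlen=WINDOW)
last_sent = None

def on_sample(raw: float):
    """Smooth one raw reading; return it only if it should be transmitted."""
    global last_sent
    window.append(raw)
    smoothed = sorted(window)[len(window) // 2]    # median filter
    if last_sent is None or abs(smoothed - last_sent) >= THRESHOLD:
        last_sent = smoothed
        return smoothed                            # transmit to the edge
    return None                                    # suppress redundant send

for value in [40, 41, 90, 41, 42, 47, 52]:         # 90 is a transient spike
    sent = on_sample(value)
    print(f"raw={value:>3} -> {'sent %.1f' % sent if sent else 'suppressed'}")
```
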
39

Luo, Daiyi, Weifeng Pan, Yifan Li, Kaicheng Feng, and Guanzheng Liu. "The Interaction Analysis between the Sympathetic and Parasympathetic Systems in CHF by Using Transfer Entropy Method." Entropy 20, no. 10 (October 16, 2018): 795. http://dx.doi.org/10.3390/e20100795.

Abstract:
Congestive heart failure (CHF) is a cardiovascular disease associated with autonomic dysfunction, where sympathovagal imbalance has been reported in many studies using heart rate variability (HRV). To learn more about the dynamic interaction in the autonomic nervous system (ANS), we explored the directed interaction between the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS) with the help of transfer entropy (TE). This article included 24-h RR interval signals of 54 healthy subjects (31 males and 23 females, 61.38 ± 11.63 years old) and 44 CHF subjects (8 males and 2 females, 19 subjects' gender unknown, 55.51 ± 11.44 years old; 4 in class I, 8 in class II and 32 in class III~IV, according to the New York Heart Association Functional Classification), obtained from the PhysioNet database and then segmented into 5-min non-overlapping epochs using cubic spline interpolation. For each segment in the normal group and CHF group, frequency-domain features, including low-frequency (LF) power, high-frequency (HF) power and the LF/HF ratio, were extracted as classical estimators of autonomic activity. In the nonlinear domain, TE between LF and HF was calculated to quantify the information exchanged between the SNS and PNS. Compared with the normal group, an extreme decrease in the LF/HF ratio (p = 0.000) and extreme increases in both TE(LF→HF) (p = 0.000) and TE(HF→LF) (p = 0.000) were observed in the CHF group. Moreover, in both the normal and CHF groups, TE(LF→HF) was much greater than TE(HF→LF) (p = 0.000), revealing that TE was able to distinguish differences in the amount of directed information transfer within the ANS. The extracted features were further applied in discriminating CHF using IBM SPSS Statistics discriminant analysis. The combination of the LF/HF ratio, TE(LF→HF) and TE(HF→LF) reached the highest screening accuracy (83.7%). Our results suggest that TE can serve as a complement to the traditional LF/HF index in CHF screening.
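
Transfer entropy itself can be estimated with a simple histogram approach. The sketch below uses synthetic series rather than HRV data, with y partly driven by the previous sample of x so that TE(x→y) should clearly exceed TE(y→x); the one-sample history, equal-frequency binning, and bin count are simplified choices, not the paper's estimator.

```python
# Minimal histogram-based transfer entropy estimator, in the spirit of the
# paper's TE(LF->HF) analysis; the data below are synthetic, not HRV.
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Estimate TE(x -> y) with one sample of history, in bits."""
    # Discretize each series into (roughly) equal-frequency bins.
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

    def H(*cols):   # joint Shannon entropy of the stacked columns
        _, counts = np.unique(np.column_stack(cols), axis=0,
                              return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # TE(x->y) = I(y_next; x_now | y_now), expanded into joint entropies.
    return H(y_next, y_now) - H(y_now) + H(y_now, x_now) - H(y_next, y_now, x_now)

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
y = 0.8 * np.roll(x, 1) + 0.2 * rng.standard_normal(5000)

print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")   # large
print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")   # near zero
```
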
APA, Harvard, Vancouver, ISO, and other styles
40

Colinge, Jean-Pierre. "Silicon-on-lnsulator Technology: Past Achievements and Future Prospects." MRS Bulletin 23, no. 12 (December 1998): 16–19. http://dx.doi.org/10.1557/s0883769400029778.

Full text
Abstract:
In silicon-on-insulator (SOI) technology, devices are dielectrically insulated from one another—usually by silicon dioxide. Unlike in conventional silicon devices, there is no direct contact between a transistor and the silicon substrate. The advantages of this type of isolation are many: reduced parasitic capacitances and reduced crosstalk between devices, improved current drive, subthreshold characteristics, and current gain. Silicon-on-insulator devices have been and are being used in several niche-market applications such as high-temperature and radiation-hard integrated circuits. Most importantly, however, SOI technology seems perfectly adapted to the needs of low-voltage, low-power (LVLP) electronic circuits. Because of the growing market for portable systems, LVLP technology is bound to soon become one of the drivers of the microelectronics industry, and SOI is likely to be part of it. Moreover, major companies such as IBM, Sharp, Motorola, and Peregrine have announced upcoming low-power and high-frequency lines of SOI products. The goal of this article is to introduce the reader to the basics of SOI device physics and the integrated-circuit applications of SOI.
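The LVLP argument rests on first-order CMOS dynamic-power scaling; the relation below (with illustrative numbers that are not from the article) makes the reasoning explicit:

```latex
% Dynamic switching power of a CMOS circuit (standard first-order model):
% SOI lowers the load capacitance C_L (smaller junction and parasitic
% capacitance), and the quadratic V_DD term rewards low-voltage operation.
% Illustrative numbers (assumed, not from the article): halving C_L and
% dropping V_DD from 2.5 V to 1.2 V cuts P_dyn by a factor of about 8.7
% at fixed activity alpha and frequency f.
\[
  P_{\mathrm{dyn}} = \alpha\, C_L\, V_{DD}^{2}\, f,
  \qquad
  \frac{P'}{P} = \frac{C_L'\,{V_{DD}'}^{2}}{C_L\,V_{DD}^{2}}
               = \frac{0.5 \times 1.2^{2}}{1 \times 2.5^{2}} \approx 0.12
\]
```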
APA, Harvard, Vancouver, ISO, and other styles
41

Munn, Luke. "Machine Readable Race: Constructing Racial Information in the Third Reich." Open Information Science 4, no. 1 (August 12, 2020): 143–55. http://dx.doi.org/10.1515/opis-2020-0011.

Full text
Abstract:
This paper examines how informational processing drove new structures of racial classification in the Third Reich. The Deutsche Hollerith-Maschinen Gesellschaft mbH (Dehomag) worked closely with the government in designing and integrating punch-card informational systems. As a German subsidiary of IBM, Dehomag's technology was deployed initially for a census in order to provide a more detailed racial analysis of the population. However, the racial data was not detailed enough. The Nuremberg Race Laws provided a more precise and procedural definition of Jewishness that could be rendered machine-readable. As the volume and velocity of information in the Reich increased, Dehomag's technology was adopted by other agencies like the Race and Settlement Office, culminating in the vision of a single machinic number for each citizen. Through the lens of these proto-technologies, the paper demonstrates the historical interplay between race and information. Yet if the indexing and sorting of race anticipates big-data analytics, contemporary power is more sophisticated and subtle: the complexity of modern algorithmic regimes diffuses obvious racial markers, engendering a racism without race.
APA, Harvard, Vancouver, ISO, and other styles
42

Munyoka, Willard. "Electronic government adoption in voluntary environments – a case study of Zimbabwe." Information Development 36, no. 3 (July 28, 2019): 414–37. http://dx.doi.org/10.1177/0266666919864713.

Full text
Abstract:
Many governmental organisations across the world are progressively implementing electronic government systems to enhance their back-office operations and offer better, more efficient services to citizens. Zimbabwe is no exception to this e-government wave. Previous studies note that the acceptance and utilisation of e-government systems by citizens in Zimbabwe remain suboptimal, sluggish and problematic due to several factors. This study sought to establish the effect of seven predictor variables on citizens' behavioural intentions to use e-government systems in Zimbabwe. Drawing from the extended Technology Acceptance Model (TAM2), the extended Unified Theory of Acceptance and Use of Technology (UTAUT2), the Framework for National and Donor Action, and the e-Government Trust model as theoretical underpinnings, this study proposed a conceptual framework to predict citizens' behavioural intentions towards e-government. Survey data for testing the conceptual framework were collected from 247 respondents in Zimbabwe using structured questionnaires. Confirmatory factor analysis using the IBM AMOS structural equation modelling method was conducted to establish the structural model fit of the proposed model. The findings establish that eight of the hypothesised constructs explain 89% of the variance in behavioural intention, demonstrating the good predictive power of the proposed model in voluntary environments. Thus, level of education, facilitating conditions, e-government awareness, price value; privacy, security and trust; and political self-efficacy and influence were all confirmed as salient predictors of e-government adoption. These findings provide invaluable insights and pointers to practitioners and policy-makers on e-government implementation and may guide further research on e-government adoption in voluntary environments.
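Full AMOS-style structural equation modelling cannot be reproduced in a few lines, but the "variance explained" figure can be illustrated with a simplified ordinary-least-squares stand-in on synthetic data (column names and coefficients below are hypothetical):

```python
"""Simplified stand-in for the paper's AMOS structural model: an ordinary
least-squares fit estimating how much variance in behavioural intention the
predictors explain. The study's 89% figure came from SEM, not OLS, and the
survey data is not public; column names here are assumptions."""
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 247  # matches the study's sample size
predictors = ["education", "facilitating_conditions", "awareness",
              "price_value", "privacy_security_trust",
              "political_self_efficacy", "social_influence"]
X = pd.DataFrame(rng.normal(size=(n, len(predictors))), columns=predictors)
# Synthetic outcome so the example runs end to end.
intention = (X.to_numpy() @ rng.uniform(0.3, 1.0, len(predictors))
             + rng.normal(scale=0.5, size=n))

model = LinearRegression().fit(X, intention)
print(f"R^2 (share of variance explained): {model.score(X, intention):.2f}")
```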
APA, Harvard, Vancouver, ISO, and other styles
43

Agrawal, Animesh, Hemant Kumar Diwakar, and Suraj Kumar Mukti. "Employee Productivity and Service Quality Enhancement Using ERP With Knowledge Management." International Journal of Knowledge Management 16, no. 3 (July 2020): 89–111. http://dx.doi.org/10.4018/ijkm.2020070106.

Full text
Abstract:
Many researchers have identified that the implementation of knowledge management (KM) improves the success rate of enterprise resource planning (ERP) in different sectors, but the support of KM in post-ERP implementation in the service sector is yet to be analyzed. The aim of the study is to confirm the support of KM and to identify the post-implementation effect of ERP on employees and service quality. For this research, data were gathered from the service sector's employees and consumers. The research proceeded in three stages: the first stage involved identifying variables from previously published literature and reducing them to smaller groups with the help of IBM SPSS software; the second stage covered the preparation of the conceptual model; and the third stage involved the identification and substantiation of KM support in post-ERP implementation, accomplished through a short brainstorming session with employees and a senior manager of a power distribution company. This research concludes that KM provides positive support in post-ERP implementation.
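The variable-reduction step attributed to IBM SPSS is typically a factor analysis; a rough equivalent can be sketched with scikit-learn on synthetic data (item counts, loadings, and the factor count below are assumptions, not taken from the study):

```python
"""Illustrative sketch of reducing survey variables to a few factors, the kind
of step the abstract attributes to IBM SPSS. Uses scikit-learn's
FactorAnalysis on synthetic responses; all sizes are assumptions."""
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(7)
n_respondents, n_items, n_factors = 200, 12, 3
# Synthetic responses driven by 3 latent factors plus noise.
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
responses = latent @ loadings + rng.normal(scale=0.4, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(responses)
# Items loading strongly on the same factor form one "smaller group".
for i, row in enumerate(fa.components_):
    group = np.where(np.abs(row) > 0.5)[0]
    print(f"factor {i}: items {group.tolist()}")
```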
APA, Harvard, Vancouver, ISO, and other styles
44

Colinge, Jean-Pierre, and Robert W. Bower. "Silicon-on-Insulator Technology." MRS Bulletin 23, no. 12 (December 1998): 13–15. http://dx.doi.org/10.1557/s0883769400029766.

Full text
Abstract:
Silicon-on-insulator (SOI) technology has been around since the 1960s, when so-called silicon on sapphire (SOS) was first introduced. Silicon on sapphire has been used for many years for the fabrication of spaceborne and high-speed integrated circuits, and it is still used in the fabrication of radio-frequency circuits. More recent SOI materials involve only silicon and silicon dioxide—the two most common materials used in the fabrication of integrated circuits—as opposed to SOS, which requires the use of an alumina substrate. Silicon-on-insulator technology has long been used in niche applications such as spacecraft electronics and devices operating in high-temperature or radiative environments. Recently, however, much attention has been paid to SOI technology because it is extremely suitable for the fabrication of low-voltage integrated circuits. Such circuits are in high demand for all kinds of portable systems, ranging from cellular phones to laptop computers. In August of 1998, IBM, Sharp, and other semiconductor manufacturers announced the development of SOI chips for high-speed computing and telecommunication consumer electronics. Most major semiconductor companies are putting considerable effort into SOI-circuit development for mainstream low-power applications.
APA, Harvard, Vancouver, ISO, and other styles
45

Luchs, Inga, Clemens Apprich, and Marcel Broersma. "Learning machine learning: On the political economy of big tech's online AI courses." Big Data & Society 10, no. 1 (January 2023): 205395172311538. http://dx.doi.org/10.1177/20539517231153806.

Full text
Abstract:
Machine learning (ML) algorithms are still a novel research object in the field of media studies. While existing research focuses on concrete software on the one hand and the socio-economic context of the development and use of these systems on the other, this paper studies online ML courses, a research object that has received little attention so far. By pursuing a walkthrough and critical discourse analysis of Google's Machine Learning Crash Course and IBM's introductory course to Machine Learning with Python, we shed light not only on the technical knowledge, assumptions, and dominant infrastructures of ML as a field of practice, but also on the economic interests of the companies providing the courses. We demonstrate how the online courses further help Google and IBM consolidate and even expand their position of power by recruiting new AI talent and by establishing their infrastructures and models as the dominant ones. Further, we show how the companies not only greatly influence how ML is represented, but also how these representations in turn influence and direct current ML research and development, as well as the societal effects of their products. Here, they project an image of fair and democratic artificial intelligence, which stands in stark contrast to the ubiquity of their corporate products and the advertised directives of efficiency and performativity the companies strive for. This underlines the need for alternative infrastructures and perspectives.
APA, Harvard, Vancouver, ISO, and other styles
46

HYUN, Jae-Won, Do-Yeon KIM, Ji-Su PARK, Yon-Su CHOI, Yun-Young CHOI, and Sanghee Kim. "Factors that Influence Clinical Nurses’ Moral Courage*." Korean Journal of Medical Ethics 24, no. 1 (March 2021): 45–58. http://dx.doi.org/10.35301/ksme.2021.24.1.45.

Full text
Abstract:
Although moral courage can play an important role in resolving some of the ethical dilemmas faced by clinical nurses, the concept of moral courage is in need of greater clarification. This study investigates some of the factors that influence moral courage, including moral sensitivity, ethical decision-making confidence, and ethical environment. A total of 148 nurses agreed to participate in an online survey for this study. The collected data were analyzed with Pearson’s correlation coefficient and multiple regression using the IBM SPSS 24 program. The analysis indicates that moral courage has a significant correlation with moral sensitivity (r=.55, p<.001), ethical decision-making confidence (r=.65, p<.001), and ethical environment (r=.66, p<.001). Ethical decision-making confidence and ethical environment were derived as factors that affect moral courage, and the explanatory power of these variables is 52.0% (F=20.94, p<.001). These findings justify the creation of ethical clinical environments in which nurses’ ethical decision-making confidence can be expressed as actual behavior. In addition, it is argued that ethical regulations and guidelines within clinical settings should be clarified in order to establish support systems that protect the well-being of patients and reflect the true value of professional nurses.
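The statistical pipeline described — Pearson correlations followed by multiple regression — can be sketched with SciPy and statsmodels; the data below is synthetic and the variable names are assumptions, since the survey data is not public:

```python
"""Sketch of the abstract's analysis pipeline (Pearson correlations, then
multiple regression for explained variance) on synthetic stand-in data."""
import numpy as np
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 148  # matches the study's sample size
moral_sensitivity = rng.normal(size=n)
decision_confidence = 0.5 * moral_sensitivity + rng.normal(scale=0.8, size=n)
ethical_environment = rng.normal(size=n)
moral_courage = (0.4 * decision_confidence + 0.45 * ethical_environment
                 + rng.normal(scale=0.7, size=n))

# Step 1: bivariate Pearson correlations with moral courage.
for name, x in [("sensitivity", moral_sensitivity),
                ("confidence", decision_confidence),
                ("environment", ethical_environment)]:
    r, p = pearsonr(x, moral_courage)
    print(f"{name}: r={r:.2f}, p={p:.3g}")

# Step 2: multiple regression on the retained predictors.
X = sm.add_constant(np.column_stack([decision_confidence, ethical_environment]))
fit = sm.OLS(moral_courage, X).fit()
print(f"R^2 = {fit.rsquared:.3f}, F = {fit.fvalue:.2f}")
```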
APA, Harvard, Vancouver, ISO, and other styles
47

Sorokin, Aleksei, Sergey Malkovsky, Georgiy Tsoy, Alexander Zatsarinnyy, and Konstantin Volovich. "Comparative Performance Evaluation of Modern Heterogeneous High-Performance Computing Systems CPUs." Electronics 9, no. 6 (June 23, 2020): 1035. http://dx.doi.org/10.3390/electronics9061035.

Full text
Abstract:
The study presents a comparison of computing systems based on IBM POWER8, IBM POWER9, and Intel Xeon Platinum 8160 processors running parallel applications. Memory subsystem bandwidth was studied, parallel programming technologies were compared, and the operating modes and capabilities of simultaneous multithreading technology were analyzed. Performance analysis for the studied computing systems running parallel applications based on the OpenMP and MPI technologies was carried out by using the NAS Parallel Benchmarks. An assessment of the results obtained during experimental calculations led to the conclusion that IBM POWER8 and Intel Xeon Platinum 8160 systems have almost the same maximum memory bandwidth, but require a different number of threads for efficient utilization. The IBM POWER9 system has the highest maximum bandwidth, which can be attributed to the large number of memory channels per socket. Based on the results of numerical experiments, recommendations are given on how the hardware of a similar grade can be utilized to solve various scientific problems, including recommendations on optimal processor architecture choice for leveraging the operation of high-performance hybrid computing platforms.
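A flavour of the memory-bandwidth measurements can be given with a toy NumPy probe. This illustrates the methodology only: the study used NAS Parallel Benchmarks and tuned OpenMP/MPI codes, which a Python sketch cannot match.

```python
"""Toy memory-bandwidth probe in the spirit of STREAM-style measurements.
Treat the printed number as an illustration of the method, not as a basis
for comparing POWER8/POWER9/Xeon systems."""
import time
import numpy as np

N = 50_000_000  # ~400 MB per float64 array; shrink if memory is tight
a = np.ones(N)
b = np.full(N, 2.0)
c = np.empty(N)

best = float("inf")
for _ in range(5):                      # take the best of several trials
    t0 = time.perf_counter()
    np.add(a, b, out=c)                 # STREAM-style 'add': read a, b; write c
    best = min(best, time.perf_counter() - t0)

bytes_moved = 3 * N * 8                 # two reads + one write, 8 B per element
print(f"approx. bandwidth: {bytes_moved / best / 1e9:.1f} GB/s")
```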
APA, Harvard, Vancouver, ISO, and other styles
48

Hammond, Angus, Zongyuan Liu, Thibaut Pérami, Peter Sewell, Lars Birkedal, and Jean Pichon-Pharabod. "An Axiomatic Basis for Computer Programming on the Relaxed Arm-A Architecture: The AxSL Logic." Proceedings of the ACM on Programming Languages 8, POPL (January 5, 2024): 604–37. http://dx.doi.org/10.1145/3632863.

Full text
Abstract:
Very relaxed concurrency memory models, like those of the Arm-A, RISC-V, and IBM Power hardware architectures, underpin much of computing but break a fundamental intuition about programs, namely that syntactic program order and the reads-from relation always both induce order in the execution. Instead, out-of-order execution is allowed except where prevented by certain pairwise dependencies, barriers, or other synchronisation. This means that there is no notion of the 'current' state of the program, making it challenging to design (and prove sound) syntax-directed, modular reasoning methods like Hoare logics, as usable resources cannot implicitly flow from one program point to the next. We present AxSL, a separation logic for the relaxed memory model of Arm-A, that captures the fine-grained reasoning underpinning the low-overhead synchronisation mechanisms used by high-performance systems code. In particular, AxSL allows transferring arbitrary resources using relaxed reads and writes when they induce inter-thread ordering. We mechanise AxSL in the Iris separation logic framework, illustrate it on key examples, and prove it sound with respect to the axiomatic memory model of Arm-A. Our approach is largely generic in the axiomatic model and in the instruction-set semantics, offering a potential way forward for compositional reasoning for other similar models, and for the combination of production concurrency models and full-scale ISAs.
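The out-of-order behaviour that motivates AxSL is usually demonstrated with the message-passing litmus test. The sketch below shows its shape in Python, with the caveat that CPython threads will not actually exhibit the relaxed outcome; it only illustrates the structure of the test.

```python
"""Shape of the classic message-passing (MP) litmus test behind logics like
AxSL. On Arm-A or POWER, without barriers or acquire/release accesses, the
outcome r1 == 1 and r2 == 0 is architecturally allowed. CPython threads will
not exhibit it (the GIL serialises these operations); this is structure only."""
import threading

data, flag = 0, 0
r1 = r2 = None

def writer():
    global data, flag
    data = 1      # plain store; on Arm-A a DMB or STLR here would order
    flag = 1      # the two stores and forbid the relaxed outcome

def reader():
    global r1, r2
    r1 = flag     # plain load; on Arm-A an LDAR or DMB here would order
    r2 = data     # the two loads, making r1 == 1 imply r2 == 1

t1, t2 = threading.Thread(target=writer), threading.Thread(target=reader)
t1.start(); t2.start(); t1.join(); t2.join()
print(f"r1={r1}, r2={r2}  (relaxed hardware may allow r1=1, r2=0)")
```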
APA, Harvard, Vancouver, ISO, and other styles
49

Sinharoy, B., R. Swanberg, N. Nayar, B. Mealey, J. Stuecheli, B. Schiefer, J. Leenstra, et al. "Advanced features in IBM POWER8 systems." IBM Journal of Research and Development 59, no. 1 (January 2015): 1:1–1:18. http://dx.doi.org/10.1147/jrd.2014.2374252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Starke, W. J., J. S. Dodson, J. Stuecheli, E. Retter, B. W. Michael, S. J. Powell, and J. A. Marcella. "IBM POWER9 memory architectures for optimized systems." IBM Journal of Research and Development 62, no. 4/5 (July 1, 2018): 3:1–3:13. http://dx.doi.org/10.1147/jrd.2018.2846159.

Full text
APA, Harvard, Vancouver, ISO, and other styles