Academic literature on the topic 'Neural network: performance'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neural network: performance.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neural network: performance"

1. Panda, Subodh, Bikash Swain, and Sandeep Mishra. "Boiler Performance Optimization Using Process Neural Network." Indian Journal of Applied Research 3, no. 7 (July 1, 2013): 298–300. http://dx.doi.org/10.15373/2249555x/july2013/93.

2. CR, Dhivyaa, Sudhakar R, Nithya K, and Prabhakar E. "Performance Analysis of Convolutional Neural Network for Retinal Image Classification." International Journal of Psychosocial Rehabilitation 23, no. 4 (December 20, 2019): 1149–59. http://dx.doi.org/10.37200/ijpr/v23i4/pr190441.

3. Li, Xiao Hu, Feng Xu, Jin Hua Zhang, and Su Nan Wang. "A New Small-World Neural Network with its Performance on Fault Tolerance." Advanced Materials Research 629 (December 2012): 719–24. http://dx.doi.org/10.4028/www.scientific.net/amr.629.719.
Abstract: Many artificial neural networks are simple simulations of the brain neural network's architecture and function. However, how to build new artificial neural networks whose architecture is similar to that of biological neural networks is worth studying. In this study, a new multilayer feedforward small-world neural network is presented using results from research on complex networks. First, a new multilayer feedforward small-world neural network that relies heavily on the rewiring probability is built on the basis of the construction ideology of the Watts–Strogatz network model and community structure. Second, fault tolerance is employed in investigating the performance of the new small-world neural network. When a network with connection faults or neuron damage is used to test fault tolerance under different rewiring probabilities, simulation results show that the fault tolerance of the small-world neural network outmatches that of a regular network of the same scale when the fault probability is more than 40%, while the random network has the best fault tolerance.

4. Yen, Gary G., and Haiming Lu. "Hierarchical Rank Density Genetic Algorithm for Radial-Basis Function Neural Network Design." International Journal of Computational Intelligence and Applications 03, no. 03 (September 2003): 213–32. http://dx.doi.org/10.1142/s1469026803000975.
Abstract: In this paper, we propose a genetic algorithm based design procedure for a radial-basis function neural network. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is used to evolve the neural network's topology and parameters simultaneously. Compared with traditional genetic algorithm based designs for neural networks, the hierarchical approach addresses several deficiencies highlighted in the literature. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network to deal with the conflict between training performance and network complexity. Instead of producing a single optimal solution, HRDGA provides a set of near-optimal neural networks to the designers so that they have more flexibility in the final decision-making based on certain preferences. In terms of searching for a near-complete set of candidate networks with high performance, the networks designed by the proposed algorithm prove to be competitive with, or even superior to, three other traditional radial-basis function networks for predicting the Mackey–Glass chaotic time series.

5. Jeong, Yeongsang, and Sungshin Kim. "A Study of Arrow Performance using Artificial Neural Network." Journal of Korean Institute of Intelligent Systems 24, no. 5 (October 25, 2014): 548–53. http://dx.doi.org/10.5391/jkiis.2014.24.5.548.

6. Tun, Myat Thida. "Implementation and Performance Evaluation of Neural Network for English Alphabet Recognition System." International Journal of Trend in Scientific Research and Development 2, no. 5 (August 31, 2018): 474–78. http://dx.doi.org/10.31142/ijtsrd15863.

7. Suhailayani Suhaimi, Nur, Zalinda Othman, and Mohd Ridzwan Yaakub. "Analyzing Prediction Performance between Wavelet Neural Network and Product-Unit Neural Network." Journal of Physics: Conference Series 1432 (January 2020): 012081. http://dx.doi.org/10.1088/1742-6596/1432/1/012081.

8. Stevens, R., J. Ikeda, A. Casillas, J. Palacio-Cayetano, and S. Clyman. "Artificial neural network-based performance assessments." Computers in Human Behavior 15, no. 3-4 (May 1999): 295–313. http://dx.doi.org/10.1016/s0747-5632(99)00025-4.

9. Jasic, Teo, and Douglas Wood. "Neural network protocols and model performance." Neurocomputing 55, no. 3-4 (October 2003): 747–53. http://dx.doi.org/10.1016/s0925-2312(03)00437-5.

10. Wilson, Charles L., James L. Blue, and Omid M. Omidvar. "Training Dynamics and Neural Network Performance." Neural Networks 10, no. 5 (July 1997): 907–23. http://dx.doi.org/10.1016/s0893-6080(96)00119-0.

Dissertations / Theses on the topic "Neural network: performance"

1. Tupas, Ronald-Ray Tiñana. "Artificial neural network modelling of filtration performance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0011/MQ59890.pdf.

2. Bataineh, Mohammad Hindi. "Artificial neural network for studying human performance." Thesis, University of Iowa, 2012. https://ir.uiowa.edu/etd/3259.
Abstract: The vast majority of products and processes in industry and academia require human interaction. Thus, digital human models (DHMs) are becoming critical for improved designs, injury prevention, and a better understanding of human behavior. Although many capabilities in the DHM field continue to mature, there are still many opportunities for improvement, especially with respect to posture and motion prediction. Thus, this thesis investigates the use of artificial neural networks (ANNs) for improving predictive capabilities and for better understanding how and why humans behave the way they do. With respect to motion prediction, one of the most challenging opportunities for improvement concerns computation speed. Especially when considering dynamic motion prediction, the underlying optimization problems can be large and computationally complex. Even though the current optimization-based tools for predicting human posture are relatively fast and accurate and thus do not require as much improvement, posture prediction in general is a more tractable problem than motion prediction and can provide a test bed that sheds light on potential issues with motion prediction. Thus, we investigate the use of ANNs with posture prediction in order to discover potential issues. In addition, directly using ANNs with posture prediction provides a preliminary step towards using ANNs to predict the most appropriate combination of performance measures (PMs), which is what drives human behavior. The PMs, which are the cost functions that are minimized in the posture prediction problem, are typically selected manually depending on the task. This is perhaps the most significant impediment when using posture prediction: how does the user know which PMs should be used? Neural networks provide tools for solving this problem. This thesis hypothesizes that an ANN can be trained to predict human motion quickly and accurately, to predict human posture (while considering external forces), and to determine the most appropriate combination of PMs for posture prediction. Such capabilities will in turn provide a new tool for studying human behavior. Based on initial experimentation, the general regression neural network (GRNN) was found to be the most effective type of ANN for DHM applications. A semi-automated methodology was developed to ease the network construction, training, and testing processes and the selection of network parameters. This in turn facilitates use with DHM applications. With regard to motion prediction, the use of ANNs was successful. The results showed that the calculation time was reduced from 1–40 minutes to a fraction of a second without reducing accuracy. With regard to posture prediction, ANNs were again found to be effective. However, potential issues with certain motion-prediction tasks were discovered and shed light on necessary future development with ANNs. Finally, a decision engine was developed using a GRNN for automatically selecting four human PMs, and was shown to be very effective. In order to train this new approach, a novel optimization formulation was used to extract PM weights from pre-existing motion-capture data. Eventually, this work will lead to automatically and realistically driving predictive DHMs in a general virtual environment.

3. Alrumah, Muhammad K. "Neural networks predict well inflow performance." Texas A&M University, 2003. http://hdl.handle.net/1969.1/349.
Abstract: Predicting the well inflow performance relationship (IPR) accurately is very important for production engineers. From these predictions, future plans for handling and improving well performance can be established. One method of predicting well inflow performance is to use artificial neural networks. Vogel's reference curve, which is produced from a series of simulation runs for a reservoir model proposed by Weller, is typically used to predict the inflow performance relationship for solution-gas-drive reservoirs. In this study, I reproduced Vogel's work, but instead of producing one curve by conventional regression, I built three neural network models. Two of the models predict the IPR efficiently, with higher overall accuracy than Vogel's reference curve.

4. Chen, Dong. "Neural network model for predicting performance of projects." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0021/MQ48059.pdf.

5. Schilling, Glenn D. "Modeling Aircraft Fuel Consumption with a Neural Network." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36533.
Abstract: This research involves the development of an aircraft fuel consumption model that simplifies Bela Collins's (MITRE Corporation) aircraft fuelburn model in terms of level of computation and level of capability. MATLAB and its accompanying Neural Network Toolbox have been applied to data from the base model to predict fuel consumption. The approach to the base model and neural network is detailed in this paper. It derives from the basic concepts of energy balance. Multivariate curve-fitting techniques used in conjunction with aircraft performance data derive the aircraft-specific constants. Aircraft performance limits are represented by empirical relationships that also utilize aircraft-specific constants. It is based on generally known assumptions and approximations for commercial jet operations. It simulates fuel consumption by adaptation to a specific aircraft using constants that represent the relationship of lift-to-drag and thrust-to-fuel flow. The neural network model invokes the output from MITRE's algorithm and provides: (1) a comparison to the polynomial fuelburn function in the fuelburn post-processor of the FAA Airport and Airspace Simulation Model (SIMMOD); (2) an established sensitivity of system performance for a range of variables that affect fuel consumption; (3) a comparison of existing fuel consumption algorithms to new techniques; and (4) the development of a trained demo neural network. With the powerful features of optimization, graphics, and hierarchical modeling, the MATLAB toolboxes proved to be effective in this modeling process.

6. Rosenfeld, Jonathan S. (Jonathan Shmuel). "On the relation between neural network size and performance." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122703.
Abstract: Artificial Neural Networks (NNs) are notorious for their size requirements and for the effort involved in developing well-performing network models. This thesis uncovers a fundamental relationship that ties model size and performance in a predictable manner. This relationship enables a well-founded development of networks at small scale while producing insight into their large-scale behavior.

7. Mitchell, David. "Classification by Neural Network and Statistical Models in Tandem: Does Integration Enhance Performance?" Thesis, University of North Texas, 1998. https://digital.library.unt.edu/ark:/67531/metadc278874/.
Abstract: The major purposes of the current research are twofold. The first purpose is to present a composite approach to the general classification problem by using outputs from various parametric statistical procedures and neural networks. The second purpose is to compare several parametric and neural network models on a transportation planning related classification problem and five simulated classification problems.

8. Mamidanna, Pranav. "Optimizing Neural Source Extraction Algorithms: A Performance Measure Based on Neuronal Network Properties." Thesis, KTH, Numerisk analys, NA, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210052.
Abstract: Extracting neural activity from electrophysiological and calcium imaging measurements is an important problem in neuroscience. All existing automated algorithms for this purpose, however, rely heavily on manual intervention and parameter tuning. In this thesis, we introduce a novel performance measure based on well-founded notions of neuronal network organization. This enables us to systematically tune parameters using techniques from statistical design of experiments and response surface methods. We implement this framework on an algorithm used to extract neural activity from microendoscopic calcium imaging datasets, and demonstrate that this greatly reduces manual intervention.

9. Nichols, Roger Alan. "A performance baseline for machinery condition classification by neural network." Master's thesis, Virginia Polytechnic Institute and State University, 1993. http://scholar.lib.vt.edu/theses/available/etd-03172010-020117/.

10. Lin, Yu Chu. "E-government website performance evaluation based on BP neural network." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691489.

Books on the topic "Neural network: performance"

1. Lloyd, J. A. Performance and scalability of neural network implementations on parallel computers. Manchester: UMIST, 1995.

2. Kobayashi, Takahisa. A hybrid neural network-genetic algorithm technique for aircraft engine performance diagnostics. [Cleveland, Ohio]: National Aeronautics and Space Administration, Glenn Research Center, 2001.

3. McGrath, M. Neural networks for financial performance prediction. Dublin: University College Dublin, 1995.

7

1941-, Venetsanopoulos A. N., ed. Artificial neural networks: Learning algorithms, performance evaluation, and applications. Boston: Kluwer Academic, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5. Simon, Donald L. Adaptive optimization of aircraft engine performance using neural networks. [Washington, D.C.]: National Aeronautics and Space Administration, 1995.

6. Rutkowski, Leszek. Flexible neuro-fuzzy systems: Structures, learning, and performance evaluation. Boston: Kluwer Academic Publishers, 2004.

7. Piramuthu, Selwyn. Using feature construction to improve the performance of neural networks. [Urbana, Ill.]: College of Commerce and Business Administration, University of Illinois at Urbana-Champaign, 1993.

Book chapters on the topic "Neural network: performance"

1. Feng, Ruibin, Chi-Sing Leung, Kai-Tat Ng, and John Sum. "The Performance of the Stochastic DNN-kWTA Network." In Neural Information Processing, 279–86. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12637-1_35.

2. Suddarth, S. C., and Y. L. Kergosien. "Rule-injection hints as a means of improving network performance and learning time." In Neural Networks, 120–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-52255-7_33.

3. Raeth, Peter G. "An Experiment with 3-D Surface Maps to Illustrate Neural Network Performance." In International Neural Network Conference, 733–37. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_63.

4. Pomerleau, Dean A. "Driving Results and Performance." In Neural Network Perception for Mobile Robot Guidance, 71–83. Boston, MA: Springer US, 1993. http://dx.doi.org/10.1007/978-1-4615-3192-0_5.

5. Kamimura, Ryotaro. "Experimental Analysis of Performance of Temporal Supervised Learning Algorithm, Applied to a Long and Complex Sequence." In International Neural Network Conference, 753–56. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_67.

6. Ueguchi, Taisei, Nobuyuki Matsui, and Teijiro Isokawa. "Performance of Qubit Neural Network in Chaotic Time Series Forecasting." In Neural Information Processing, 253–60. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46675-0_28.

7. Yao, Kai, Kaizhu Huang, Rui Zhang, and Amir Hussain. "Improving Deep Neural Network Performance with Kernelized Min-Max Objective." In Neural Information Processing, 182–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04167-0_17.

8. Fernández, Benito R. "Performance analysis of artificial neural network methods." In Artificial Neural Networks for Intelligent Manufacturing, 299–368. Dordrecht: Springer Netherlands, 1994. http://dx.doi.org/10.1007/978-94-011-0713-6_12.

9. Zhao, Jieyu, and John Shawe-Taylor. "Neural Network Optimization for Good Generalization Performance." In ICANN ’94, 561–64. London: Springer London, 1994. http://dx.doi.org/10.1007/978-1-4471-2097-1_131.

10. Kechadi, M.-Tahar. "Recurrent neural network approach for partitioning irregular graphs." In High-Performance Computing and Networking, 450–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/bfb0100606.

Conference papers on the topic "Neural network: performance"

1. Ghorbanian, Kaveh, and Mohammad Gholamrezaei. "Axial Compressor Performance Map Prediction Using Artificial Neural Network." In ASME Turbo Expo 2007: Power for Land, Sea, and Air. ASMEDC, 2007. http://dx.doi.org/10.1115/gt2007-27165.
Abstract: The application of artificial neural networks to compressor performance map prediction is investigated. Different types of artificial neural networks, such as the multilayer perceptron network, radial basis function network, general regression neural network, and a rotated general regression neural network proposed by the authors, are considered. Two different models are utilized in simulating the performance map. The results indicate that while the rotated general regression neural network has the least mean error and best agreement with the experimental data, it is limited to curve-fitting applications. On the other hand, if one considers a tool for curve fitting as well as for interpolation and extrapolation, the multilayer perceptron network technique is the most powerful candidate. Further, the compressor efficiency based on the multilayer perceptron network technique is determined. Excellent agreement between the predictions and the experimental data is obtained.

2. Chou, Hung, Andrew A. Kostrzewski, Shudong Wu, Freddie S. Lin, and Thomas T. Lu. "Performance evaluation of a holographic optical neural network system." In Photonic Neural Networks. SPIE, 1993. http://dx.doi.org/10.1117/12.983187.

3. Kepner, Jeremy, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Albert Reuther, Ryan Robinett, and Sid Samsi. "GraphChallenge.org Sparse Deep Neural Network Performance." In 2020 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2020. http://dx.doi.org/10.1109/hpec43674.2020.9286253.

4. Mehdizadeh, Nasser S., Payam Sinaei, and Ali L. Nichkoohi. "Modeling Jones’ Reduced Chemical Mechanism of Methane Combustion With Artificial Neural Network." In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-31186.
Abstract: The present work reports a way of using artificial neural networks for modeling and integrating the governing chemical kinetics differential equations of Jones’ reduced chemical mechanism for methane combustion. The chemical mechanism is applicable to both diffusion and premixed laminar flames. A feed-forward multilayer neural network is used as the network architecture. In order to find sets of input-output data for adapting the neural network’s synaptic weights in the training phase, a thermochemical analysis is embedded to find the chemical species mole fractions. An analysis of computational performance, along with a comparison between the neural network approach and other conventional methods used to represent the chemistry, is presented, and the ability of neural networks to represent a non-linear chemical system is illustrated.

5. Joung, Junegak, and Harrison M. Kim. "Importance-Performance Analysis of Product Attributes Using Explainable Deep Neural Network From Online Reviews." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22382.
Abstract: Importance-performance analysis (IPA) is a technique used to understand customer satisfaction and improve the quality of product attributes. This study proposes an explainable deep-neural-network-based method to carry out IPA of product attributes from online reviews for product design. Previous works used shallow neural network (SNN)-based methods to estimate importance values, but it was unclear whether the SNN is an optimal neural network architecture, and the importance estimated by a single neural network from a randomly selected training set has high variability. The proposed method provides importance values with lower variance by improving the importance estimation of each product attribute in the IPA. It first identifies the product attributes and estimates their performance, then infers the importance values by combining explanations of the input features from multiple optimal neural networks. A case study on smartphones is used to demonstrate the proposed method.

6. Biswas, M. A. Rafe, and Melvin D. Robinson. "Performance Estimation of Direct Methanol Fuel Cell Using Artificial Neural Network." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-51723.
Abstract: A direct methanol fuel cell can convert chemical energy in the form of a liquid fuel into electrical energy to power devices, while simultaneously operating at low temperatures and producing virtually no greenhouse gases. Since direct methanol fuel cell performance characteristics are inherently nonlinear and complex, it can be postulated that artificial neural networks represent a marked improvement in performance prediction capabilities. Artificial neural networks have long been used as a tool in predictive modeling. In this work, an artificial neural network is employed to predict the performance of a direct methanol fuel cell under various operating conditions. This work on the experimental analysis of a uniquely designed fuel cell and the computational modeling of a unique algorithm has not been found in prior literature outside of the authors and their affiliations. The fuel cell input variables for the performance analysis consist not only of the methanol concentration, fuel cell temperature, and current density, but also the number of cells and anode flow rate. The addition of the two typically unconventional variables allows for a more distinctive model when compared to prior neural network models. The key performance indicator of the neural network model is the cell voltage, which is an average voltage across the stack and ranges from 0 to 0.8 V. Experimental studies were carried out using custom-fabricated DMFC stacks, with a membrane electrode assembly consisting of an additional unique liquid barrier layer to minimize water loss through the cathode side to the atmosphere. To determine the best fit of the model to the experimental cell voltage data, the model is trained using two different second-order training algorithms: OWO-Newton and Levenberg-Marquardt (LM). The OWO-Newton algorithm has a topology that differs slightly from that of the LM algorithm through the employment of bypass weights. It can be concluded that the application of artificial neural networks can rapidly construct a predictive model of the cell voltage for a wide range of operating conditions, with an accuracy of 10⁻³ to 10⁻⁴. The results were comparable with existing literature. The added dimensionality of the number of cells provided insight into scalability, where the coefficient of determination of the results for the two multi-cell stacks using the LM algorithm was up to 0.9998. The model was also evaluated with empirical data from a single-cell stack.

7. Shiflett, Kyle, Dylan Wright, Avinash Karanth, and Ahmed Louri. "PIXEL: Photonic Neural Network Accelerator." In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020. http://dx.doi.org/10.1109/hpca47549.2020.00046.

8. Kepner, Jeremy, Vijay Gadepally, Hayden Jananthan, Lauren Milechin, and Sid Samsi. "Sparse Deep Neural Network Exact Solutions." In 2018 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2018. http://dx.doi.org/10.1109/hpec.2018.8547742.

9. Kepner, Jeremy, Simon Alford, Vijay Gadepally, Michael Jones, Lauren Milechin, Ryan Robinett, and Sid Samsi. "Sparse Deep Neural Network Graph Challenge." In 2019 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2019. http://dx.doi.org/10.1109/hpec.2019.8916336.

10. Samsi, Siddharth, Andrew Prout, Michael Jones, Andrew Kirby, Bill Arcand, Bill Bergeron, David Bestor, et al. "Benchmarking network fabrics for data distributed training of deep neural networks." In 2020 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2020. http://dx.doi.org/10.1109/hpec43674.2020.9286232.

Reports on the topic "Neural network: performance"

1. Fine, Terrence L., and Thomas W. Parks. Statistical Benchmarks for Neural Network Performance. Fort Belvoir, VA: Defense Technical Information Center, October 1992. http://dx.doi.org/10.21236/ada294937.

2. Fix, Edward L. Neural Network Based Human Performance Modeling. Fort Belvoir, VA: Defense Technical Information Center, August 1990. http://dx.doi.org/10.21236/ada229822.

3. Wilson, Charles L., James L. Blue, and Omid M. Omidvar. The effect of training dynamics on neural network performance. Gaithersburg, MD: National Institute of Standards and Technology, 1995. http://dx.doi.org/10.6028/nist.ir.5696.

4. Wilson, Charles L., James L. Blue, and Omid M. Omidvar. Improving neural network performance for character and fingerprint classification by altering network dynamics. Gaithersburg, MD: National Institute of Standards and Technology, 1995. http://dx.doi.org/10.6028/nist.ir.5695.

5. Yu, Haichao, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-Precision Deep Neural Networks. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ejai.v1i1.82.
Abstract: We present Any-Precision Deep Neural Networks (Any-Precision DNNs), which are trained with a new method that empowers learned DNNs to be flexible in any numerical precision during inference. The same model at runtime can be flexibly and directly set to different bit-widths, by truncating the least significant bits, to support a dynamic speed and accuracy trade-off. When all layers are set to low bits, we show that the model achieves accuracy comparable to dedicated models trained at the same precision. This nice property facilitates flexible deployment of deep learning models in real-world applications, where in practice trade-offs between model accuracy and runtime efficiency are often sought. Previous literature presents solutions to train models at each individual fixed efficiency/accuracy trade-off point. But how to produce a model flexible in runtime precision is largely unexplored. When the demand for an efficiency/accuracy trade-off varies from time to time or even dynamically changes at runtime, it is infeasible to re-train models accordingly, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures. We experimentally validated our method with different deep network backbones (AlexNet-small, ResNet-20, ResNet-50) on different datasets (SVHN, CIFAR-10, ImageNet) and observed consistent results.

6. Rose-Pehrsson, Susan, Sean J. Hart, Mark H. Hammond, Daniel T. Gottuk, and Mark T. Wright. Real-Time Probabilistic Neural Network Performance and Optimization for Fire Detection and Nuisance Alarm Rejection: Test Series 2 Results. Fort Belvoir, VA: Defense Technical Information Center, October 2000. http://dx.doi.org/10.21236/ada383627.

7. Field, R. L., E. J. Yoerger, and P. K. Simpson. Performance of Neural Networks in Classifying Environmentally Distorted Transient Signals. Fort Belvoir, VA: Defense Technical Information Center, January 1990. http://dx.doi.org/10.21236/ada230739.

8. Puttanapong, Nattapong, Arturo M. Martinez Jr, Mildred Addawe, Joseph Bulan, Ron Lester Durante, and Marymell Martillan. Predicting Poverty Using Geospatial Data in Thailand. Asian Development Bank, December 2020. http://dx.doi.org/10.22617/wps200434-2.
Abstract: This study examines an alternative approach in estimating poverty by investigating whether readily available geospatial data can accurately predict the spatial distribution of poverty in Thailand. It also compares the predictive performance of various econometric and machine learning methods such as generalized least squares, neural network, random forest, and support vector regression. Results suggest that intensity of night lights and other variables that approximate population density are highly associated with the proportion of population living in poverty. The random forest technique yielded the highest level of prediction accuracy among the methods considered, perhaps due to its capability to fit complex association structures even with small and medium-sized datasets.

9. Kozma, Robert, and Walter J. Freeman. The Effect of External and Internal Noise on the Performance of Chaotic Neural Networks. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada413501.