Academic literature on the topic 'Network processors Computer architecture. Computer networks'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Network processors Computer architecture. Computer networks.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Network processors Computer architecture. Computer networks"

1

OMONDI, AMOS R. "Letter to the Editor: NEUROCOMPUTERS: A DEAD END?" International Journal of Neural Systems 10, no. 06 (December 2000): 475–81. http://dx.doi.org/10.1142/s0129065700000375.

Full text
Abstract:
The last decade saw a proliferation of research into the design of neurocomputers. Although such work still continues, much of it never gets beyond the prototype-machine stage. In this paper, we argue that, on the whole, neurocomputers are no longer viable; like, say, database computers before them, their time has passed before they became a common reality. We consider the implementation of hardware neural networks, from the level of arithmetic to complete individual processors and parallel processors, and show that current trends in computer architecture and implementation do not support a case for custom neurocomputers. We argue that in the future, neural-network processing ought to be restricted mostly to general-purpose processors or to processors designed for other widely used applications. There are just one or two, rather narrow, exceptions to this.
APA, Harvard, Vancouver, ISO, and other styles
2

GONZALEZ, TEOFILO F. "Improved Communication Schedules with Buffers." Parallel Processing Letters 19, no. 01 (March 2009): 129–39. http://dx.doi.org/10.1142/s0129626409000110.

Full text
Abstract:
We consider the multimessage multicasting over the n processor complete (or fully connected) static network when there are l incoming (message) buffers on every processor. We present an efficient algorithm to route the messages for every degree d problem instance in d²/l + l - 1 total communication rounds, where d is the maximum number of messages that each processor may send (or receive). Our algorithm takes linear time with respect to the input length, i.e. O(n + q) where q is the total number of messages that all processors must receive. For l = d we present a lower bound for the total communication time. The lower bound matches the upper bound for the schedules generated by our algorithm. For convenience we assume that the network is completely connected. However, it is important to note that each communication round can be automatically translated into one communication round for processors interconnected via a replication network followed by a permutation network (e.g., two adjacent Benes networks), because in these networks all possible one-to-many communications can be performed in a single communication round.
APA, Harvard, Vancouver, ISO, and other styles
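The round bound in the abstract above can be illustrated numerically. The following Python sketch is not from the paper; it simply evaluates the d²/l + l - 1 bound (taking a ceiling when l does not divide d², an assumption for the non-divisible case) and checks that l = d buffers yields 2d - 1 rounds, the point where the two terms of the bound balance:

```python
import math

def comm_rounds(d: int, l: int) -> int:
    """Total communication rounds for a degree-d instance with l buffers,
    per the d^2/l + l - 1 bound (ceiling when l does not divide d^2)."""
    if d < 1 or l < 1:
        raise ValueError("d and l must be positive")
    return math.ceil(d * d / l) + l - 1

if __name__ == "__main__":
    d = 4
    # Tabulate the bound for a range of buffer counts.
    for l in (1, 2, 4, 8, 16):
        print(f"d={d}, l={l:2d}: {comm_rounds(d, l)} rounds")
    # With l = d buffers the bound collapses to 2d - 1 rounds.
    assert comm_rounds(d, d) == 2 * d - 1
```

Sweeping l for a fixed d shows the bound is minimized at l = d, consistent with the paper presenting its lower bound for the l = d case.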
3

PETIT, FRANCK, and VINCENT VILLAIN. "OPTIMALITY AND SELF-STABILIZATION IN ROOTED TREE NETWORKS." Parallel Processing Letters 10, no. 01 (March 2000): 3–14. http://dx.doi.org/10.1142/s0129626400000032.

Full text
Abstract:
In this paper, we consider arbitrary tree networks where every processor, except one, called the root, executes the same program. We show that, to design a depth-first token circulation protocol in such networks, it is necessary to have at least [Formula: see text] configurations, where n is the number of processors in the network and Δi is the degree of processor pi. We then propose a depth-first token circulation algorithm which matches the above minimal number of configurations. We show that the proposed algorithm is self-stabilizing, i.e., the system eventually recovers itself to a legitimate state after any perturbation modifying the state of the processors. Hence, the proposed algorithm is optimal in terms of the number of configurations and no extra cost is involved in making it stabilizing.
APA, Harvard, Vancouver, ISO, and other styles
4

PETIT, FRANCK, and VINCENT VILLAIN. "OPTIMALITY AND SELF-STABILIZATION IN ROOTED TREE NETWORKS." Parallel Processing Letters 09, no. 03 (September 1999): 313–23. http://dx.doi.org/10.1142/s0129626499000293.

Full text
Abstract:
In this paper, we consider arbitrary tree networks where every processor, except one, called the root, executes the same program. We show that, to design a depth-first token circulation protocol in such networks, it is necessary to have at least [Formula: see text] configurations, where n is the number of processors in the network and Δi is the degree of processor pi. We then propose a depth-first token circulation algorithm which matches the above minimal number of configurations. We show that the proposed algorithm is self-stabilizing, i.e., the system eventually recovers itself to a legitimate state after any perturbation modifying the state of the processors. Hence, the proposed algorithm is optimal in terms of the number of configurations and no extra cost is involved in making it stabilizing.
APA, Harvard, Vancouver, ISO, and other styles
5

Summers, Kenneth L., Thomas Preston Caudell, Kathryn Berkbigler, Brian Bush, Kei Davis, and Steve Smith. "Graph Visualization for the Analysis of the Structure and Dynamics of Extreme-Scale Supercomputers." Information Visualization 3, no. 3 (July 8, 2004): 209–22. http://dx.doi.org/10.1057/palgrave.ivs.9500079.

Full text
Abstract:
We are exploring the development and application of information visualization techniques for the analysis of new massively parallel supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often non-standard networks. The scale, complexity, and inherent non-locality of the structure and dynamics of this hardware, and the operating systems and applications distributed over them, challenge traditional analysis methods. As part of the á la carte (A Los Alamos Computer Architecture Toolkit for Extreme-Scale Architecture Simulation) team at Los Alamos National Laboratory, who are simulating these new architectures, we are exploring advanced visualization techniques and creating tools to enhance analysis of these simulations with intuitive three-dimensional representations and interfaces. This work complements existing and emerging algorithmic analysis tools. In this paper, we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree communications network), and a presentation of three classes of visualizations that clearly display the switching fabric and the flow of information in the interconnecting network.
APA, Harvard, Vancouver, ISO, and other styles
6

FERREIRA, A., A. GOLDMAN, and S. W. SONG. "BROADCASTING IN BUS INTERCONNECTION NETWORKS." Journal of Interconnection Networks 01, no. 02 (June 2000): 73–94. http://dx.doi.org/10.1142/s0219265900000068.

Full text
Abstract:
In most distributed memory MIMD multiprocessors, processors are connected by a point-to-point interconnection network, usually modeled by a graph where processors are nodes and communication links are edges. Since interprocessor communication frequently constitutes a serious bottleneck, several architectures have been proposed that enhance point-to-point topologies with multiple bus systems so as to improve communication efficiency. In this paper we study parallel architectures whose communication means are constituted solely by buses. These architectures can use the power of bus technologies, providing a way to interconnect many more processors in a simple and efficient manner. We present the hyperpath, hypergrid, hyperring, and hypertorus architectures, which are the bus-based versions of widely used point-to-point interconnection networks. Using (hyper)graph-theoretic concepts to model inter-processor communication in such networks, we give optimal algorithms for broadcasting a message from one processor to all the others. For deriving high-performance communication patterns we developed a new tool called simplification: the idea is to construct a graph, called a representative graph, from the original hyper-topology, in such a way that communication schemes become easy to describe on the representative graph and to perform on the original one; the simplification concept also allows us to partially reuse some already-known communication algorithms for usual networks.
APA, Harvard, Vancouver, ISO, and other styles
7

Sánchez Couso, José Ramón, José Angel Sanchez Martín, Victor Mitrana, and Mihaela Păun. "Simulations between Three Types of Networks of Splicing Processors." Mathematics 9, no. 13 (June 28, 2021): 1511. http://dx.doi.org/10.3390/math9131511.

Full text
Abstract:
Networks of splicing processors (NSP for short) embody a subcategory among the new computational models inspired by natural phenomena with theoretical potential to handle unsolvable problems efficiently. Current literature considers three variants in the context of networks managed by random-context filters. Despite the divergences on system complexity and control degree over the filters, the three variants were proved to hold the same computational power through the simulations of two computationally complete systems: Turing machines and 2-tag systems. However, the conversion between the three models by means of a Turing machine is unattainable because of the huge computational costs incurred. This research paper addresses this issue with the proposal of direct and efficient simulations between the aforementioned paradigms. The information about the nodes and edges (i.e., splicing rules, random-context filters, and connections between nodes) composing any network of splicing processors belonging to one of the three categories is used to design equivalent networks working under the other two models. We demonstrate that these new networks are able to replicate any computational step performed by the original network in a constant number of computational steps and, consequently, we prove that any outcome achieved by the original architecture can be accomplished by the constructed architectures without worsening the time complexity.
APA, Harvard, Vancouver, ISO, and other styles
8

Ferreira de Lima, Thomas, Alexander N. Tait, Armin Mehrabian, Mitchell A. Nahmias, Chaoran Huang, Hsuan-Tung Peng, Bicky A. Marquez, et al. "Primer on silicon neuromorphic photonic processors: architecture and compiler." Nanophotonics 9, no. 13 (August 10, 2020): 4055–73. http://dx.doi.org/10.1515/nanoph-2020-0172.

Full text
Abstract:
Microelectronic computers have encountered challenges in meeting all of today’s demands for information processing. Meeting these demands will require the development of unconventional computers employing alternative processing models and new device physics. Neural network models have come to dominate modern machine learning algorithms, and specialized electronic hardware has been developed to implement them more efficiently. A silicon photonic integration industry promises to bring manufacturing ecosystems normally reserved for microelectronics to photonics. Photonic devices have already found simple analog signal processing niches where electronics cannot provide sufficient bandwidth and reconfigurability. In order to solve more complex information processing problems, they will have to adopt a processing model that generalizes and scales. Neuromorphic photonics aims to map physical models of optoelectronic systems to abstract models of neural networks. It represents a new opportunity for machine information processing on sub-nanosecond timescales, with application to mathematical programming, intelligent radio frequency signal processing, and real-time control. The strategy of neuromorphic engineering is to externalize the risk of developing computational theory alongside hardware. The strategy of remaining compatible with silicon photonics externalizes the risk of platform development. In this perspective article, we provide a rationale for a neuromorphic photonics processor, envisioning its architecture and a compiler. We also discuss how it can be interfaced with a general purpose computer, i.e. a CPU, as a coprocessor to target specific applications. This paper is intended for a wide audience and provides a roadmap for expanding research in the direction of transforming neuromorphic photonics into a viable and useful candidate for accelerating neuromorphic computing.
APA, Harvard, Vancouver, ISO, and other styles
9

Wohl, Peter. "EFFICIENCY THROUGH REDUCED COMMUNICATION IN MESSAGE PASSING SIMULATION OF NEURAL NETWORKS." International Journal on Artificial Intelligence Tools 02, no. 01 (March 1993): 133–62. http://dx.doi.org/10.1142/s0218213093000096.

Full text
Abstract:
Neural algorithms require massive computation and very high communication bandwidth and are naturally expressed at a level of granularity finer than parallel systems can exploit efficiently. Mapping neural networks onto parallel computers has traditionally implied a form of clustering neurons and weights to increase the granularity. SIMD simulations may exceed a million connections per second using thousands of processors, but are often tailored to particular networks and learning algorithms. MIMD simulations require an even larger granularity to run efficiently and often trade flexibility for speed. An alternative technique based on pipelining fewer but larger messages through parallel "broadcast/accumulate trees" is explored. "Lazy" allocation of messages reduces communication and memory requirements, curbing excess parallelism at run time. The mapping is flexible to changes in network architecture and learning algorithm and is suited for a variety of computer configurations. The method pushes the limits of parallelizing backpropagation and feed-forward type algorithms. Results exceed a million connections per second already on 30 processors and are up to ten times superior to previous results on similar hardware. The implementation techniques can also be applied in conjunction with others, including systolic and VLSI.
APA, Harvard, Vancouver, ISO, and other styles
10

Amodu, Oluwatosin Ahmed, Mohamed Othman, Nur Arzilawati Md Yunus, and Zurina Mohd Hanapi. "A Primer on Design Aspects and Recent Advances in Shuffle Exchange Multistage Interconnection Networks." Symmetry 13, no. 3 (February 26, 2021): 378. http://dx.doi.org/10.3390/sym13030378.

Full text
Abstract:
Interconnection networks provide an effective means by which components of a system such as processors and memory modules communicate to provide reliable connectivity. This facilitates the realization of a highly efficient network design suitable for computation-intensive applications. In particular, the use of multistage interconnection networks has unique advantages, as the addition of extra stages helps to improve network performance. However, this comes with challenges and trade-offs, which motivates researchers to explore various design options and architectural models to improve on its performance. A particular class of these networks is the shuffle exchange network (SEN), which involves a symmetric N-input, N-output architecture built in stages of N/2 switching elements each. This paper presents recent advances in multistage interconnection networks with emphasis on SENs, discussing pertinent issues related to their design aspects and taking lessons from the past and current literature. To achieve this objective, applications, motivating factors, architectures, shuffle exchange networks, and some of the performance evaluation techniques as well as their merits and demerits are discussed. Then, to capture the latest research trends in this area not covered in contemporary literature, this paper reviews very recent advancements in shuffle exchange multistage interconnection networks within the last few years and provides design guidelines as well as recommendations for future consideration.
APA, Harvard, Vancouver, ISO, and other styles
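To make the stage structure described above concrete, here is a small illustrative Python sketch (an assumption of this listing, not taken from the paper) of destination-tag routing through a shuffle exchange network with N = 2^k inputs. Each of the k stages applies a perfect shuffle (a left rotation of the k address bits) followed by an optional exchange (flipping the low bit); setting the low bit from the destination address at each stage delivers the packet regardless of source:

```python
def perfect_shuffle(pos: int, k: int) -> int:
    """Left-rotate the k-bit address: the classic perfect-shuffle wiring."""
    msb = (pos >> (k - 1)) & 1
    return ((pos << 1) | msb) & ((1 << k) - 1)

def route(src: int, dst: int, k: int) -> list[int]:
    """Destination-tag routing through k shuffle/exchange stages.
    Returns the sequence of positions visited, ending at dst."""
    pos, path = src, [src]
    for stage in range(k):
        pos = perfect_shuffle(pos, k)          # shuffle wiring
        bit = (dst >> (k - 1 - stage)) & 1     # next destination bit, MSB first
        pos = (pos & ~1) | bit                 # exchange switch sets the low bit
        path.append(pos)
    return path

if __name__ == "__main__":
    k = 3  # N = 8 inputs, 3 stages of N/2 = 4 switching elements each
    for s in range(8):
        for d in range(8):
            assert route(s, d, k)[-1] == d  # every (src, dst) pair is routable
    print(route(0b110, 0b001, k))
```

The exhaustive check at the bottom confirms the full-access property of the network: any source reaches any destination in exactly k = log2 N stages.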
More sources

Dissertations / Theses on the topic "Network processors Computer architecture. Computer networks"

1

Crowley, Patrick. "Design and analysis of architectures for programmable network processing systems /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/6991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Batra, Shalini. "An efficient algorithm and architecture for network processors." Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-07052007-194448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Diler, Timur. "Network processors and utilizing their features in a multicast design." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FDiler.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science and M.S. in Electrical Engineering)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Su Wen, Jon Butler. Includes bibliographical references (p. 53-54). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
4

Boivie, Victor. "Network Processor specific Multithreading tradeoffs." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2940.

Full text
Abstract:

Multithreading is a processor technique that can effectively hide the long latencies caused by memory accesses, coprocessor operations, and the like. While this looks promising, there is an additional hardware cost that varies with, for example, the number of contexts to switch between and the switching technique used, and this cost might limit the possible gain of multithreading.

Network processors are, traditionally, multiprocessor systems that share many common resources, such as memories and coprocessors, so the potential gain of multithreading could be high for these applications. On the other hand, the added hardware is relatively expensive, since the rest of the processor is fairly small; higher performance might instead be achieved simply by using more processors.

As a solution, a simulator was built in which such a system can be modelled effectively and whose results can give hints about the optimal configuration during the early design phase of a network processor system. A theoretical background to multithreading, network processors, and related topics is also provided in the thesis.

APA, Harvard, Vancouver, ISO, and other styles
5

Omundsen, Daniel (Daniel Simon). "A pipelined, multi-processor architecture for a connectionless server for broadband ISDN." Electrical Engineering dissertation, Carleton University, Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Winig, Robert J. "Conceptual design of a network architecture for a typical manufacturing information system using open systems integration." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-07292009-090413/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Musasa, Mutombo Mike. "Evaluation of embedded processors for next generation asic : Evaluation of open source Risc-V processors and tools ability to perform packet processing operations compared to Arm Cortex M7 processors." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299656.

Full text
Abstract:
Nowadays, network processors are an integral part of information technology. With the deployment of 5G networks ramping up around the world, numerous new devices are going to take advantage of their processing power and programming flexibility. Contemporary information technology providers such as Ericsson spend a great amount of financial resources on licensing deals to use processors with proprietary instruction set architecture designs from companies like Arm Holdings. There is a new non-proprietary instruction set architecture technology being developed, known as Risc-V. There are many open-source processors based on the Risc-V architecture, but it is still unclear how well an open-source Risc-V processor performs network packet processing tasks compared to an Arm-based processor. The main purpose of this thesis is to design a test model simulating and evaluating how well an open-source Risc-V processor performs packet processing compared to an Arm Cortex M7 processor. This was done by designing C code simulating key packet processing functions over 50 randomly generated 72-byte data packets. The following functions were tested: framing, parsing, pattern matching, and classification. The code was ported to and executed on both an Arm Cortex M7 processor and emulated open-source Risc-V processors. A working packet processing test code was built and evaluated on an Arm Cortex M7 processor, and three different open-source Risc-V processors were tested: Ariane, SweRV core, and Rocket-chip. The execution times of the two cases were analyzed and compared; the execution time of the test code on Arm was 67.5 ns. Based on the results, it can be argued that open-source Risc-V processor tools are not yet fully reliable and ready to be used for packet processing applications. Further evaluation should be performed on this topic, with a more in-depth look at the SweRV core processor and at physical open-source Risc-V hardware instead of emulators.
Network processors are an important building block of information technology today. As 5G networks are rolled out around the world, many more devices will be able to benefit from their powerful performance and programming flexibility. Information technology companies such as Ericsson spend large financial resources on licences to use processors based on proprietary instruction set architecture technology from Arm Holdings. Continuing to buy licences is very costly, as these architectures are a building block in the design of many processors and other components. Today there is a promising new, unlicensed processor instruction set architecture technology called Risc-V. Thanks to Risc-V, many proprietary and open-source processors have been developed, but very little is known today about how well they perform in network applications. Can an open-source Risc-V processor perform network packet processing functions as well as a proprietary Arm Cortex M7 processor? The main purpose of this work is to build a test model that examines how well an open-source Risc-V-based processor performs packet processing operations on network data packets compared with an Arm Cortex M7 processor. This was carried out by developing C code that simulates the reception and processing of 72-byte data packets. The following functions were tested: framing, parsing, pattern matching, and classification. The code was compiled and tested on both an Arm Cortex M7 processor and three different emulated open-source Risc-V processors: Ariane, SweRV core, and Rocket-chip. After testing some open-source Risc-V processors and running the test code on an Arm Cortex M7 processor, it can be argued that the open-source Risc-V processor tools are not yet sufficiently reliable. This report indicates that the open-source Risc-V emulators and tools need further development before being used in network applications. There is a need for further investigation within this topic in the future, for example a deeper study of the SweRV core processor, or open-source Risc-V built in physical hardware.
APA, Harvard, Vancouver, ISO, and other styles
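The four benchmark functions named in the abstract above (framing, parsing, pattern matching, classification) can be sketched in miniature. The following Python sketch is purely illustrative: the field layout, signature bytes, and class labels are hypothetical assumptions of this listing, not taken from the thesis (which used C code on 72-byte packets):

```python
import struct

PATTERN = b"\xde\xad"  # hypothetical payload signature to match

def frame(raw: bytes) -> bytes:
    """Framing: accept a fixed-size 72-byte packet, else drop it."""
    if len(raw) != 72:
        raise ValueError("malformed frame")
    return raw

def parse(pkt: bytes) -> dict:
    """Parsing: split off an Ethernet-like header (dst, src, ethertype)."""
    dst, src, ethertype = struct.unpack_from("!6s6sH", pkt, 0)
    return {"dst": dst, "src": src, "ethertype": ethertype, "payload": pkt[14:]}

def match(fields: dict) -> bool:
    """Pattern matching: scan the payload for a byte signature."""
    return PATTERN in fields["payload"]

def classify(fields: dict) -> str:
    """Classification: bucket the packet by ethertype and match result."""
    if fields["ethertype"] == 0x0800:
        return "ipv4-flagged" if match(fields) else "ipv4"
    return "other"

if __name__ == "__main__":
    pkt = frame(b"\xaa" * 6 + b"\xbb" * 6 + struct.pack("!H", 0x0800)
                + PATTERN + b"\x00" * 56)
    print(classify(parse(pkt)))  # -> ipv4-flagged
```

In a benchmark along the lines of the thesis, each stage would be timed separately over a batch of randomly generated packets on each target processor.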
8

Nguyen, Van Minh. "Wireless Link Quality Modelling and Mobility Management for Cellular Networks." Phd thesis, Telecom ParisTech, 2011. http://tel.archives-ouvertes.fr/tel-00702798.

Full text
Abstract:
Communication quality in a wireless network is determined by the signal quality, and more precisely by the signal-to-interference-plus-noise ratio. This drives each receiver to connect to the transmitter that offers it the best signal quality. We use stochastic geometry and extreme value theory to obtain the distribution of the best signal quality, as well as those of the interference and of the maximum received power. We highlight how the singularity of the path-loss function modifies their behaviour. We then turn to the temporal behaviour of radio signals by studying the threshold crossings of a stationary process X(t). We prove that the length of time X(t) spends above a threshold γ → −∞ follows an exponential distribution, and we also obtain results characterizing the crossings by X(t) of several adjacent thresholds. These results are then applied to mobility management in cellular networks. Our work focuses on the handover-measurement function, which identifies the best neighbouring cell during a handover. This function plays a central role in the experience perceived by the user, but it requires cooperation between various control mechanisms and remains a difficult question. We address this problem by proposing analytical approaches for emerging macro- and pico-cellular networks, as well as a self-optimization approach for the neighbour lists used in current cellular networks.
APA, Harvard, Vancouver, ISO, and other styles
9

Cashman, Neil. "SMART : an innovative multimedia computer architecture for processing ATM cells in real-time." Thesis, University of Sussex, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Fink, Glenn Allen. "Visual Correlation of Network Traffic and Host Processes for Computer Security." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28770.

Full text
Abstract:
Much computer communications activity is invisible to the user, happening without explicit permission. When system administrators investigate network communications activity, they have difficulty tracing it back to the processes that cause it. The strictly layered TCP/IP networking model that underlies all widely used, general-purpose operating systems makes it impossible to trace a packet seen on the network back to the processes responsible for generating and receiving it: the model separates the concerns of network routing and process ownership, so the layers cannot share the information needed to correlate packets to processes. But knowing which processes are responsible for communications activity can be a great help in determining whether that activity is benign or malicious. My solution combines a visualization tool, a kernel-level correlation engine, and middleware that ties the two together. My research enables security personnel to visually correlate packets to the processes they belong to, helping users determine whether communications are benign or malicious. I present my discoveries about the system administrator community and relate how I created a new correlation technology. I conducted a series of initial interviews with system administrators to clarify the problem, researched available solutions in the literature, identified what was missing, and worked with users to build it. The users were my co-designers as I built a series of prototypes of increasing fidelity and conducted usability evaluations on them. I hope that my work will demonstrate how well the participatory design approach works. My work has implications for the kernel structure of all operating system kernels with a TCP/IP protocol stack and network model. In light of my research, I hope security personnel will come to see sets of communicating processes on a network, rather than individual host computers, as the basic computational units. If kernel designers incorporate my findings into their work, it will enable much better security monitoring than is possible today, making the Internet safer for all.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Network processors Computer architecture. Computer networks"

1

Giladi, Ran. Network processors: Architecture, programming, and implementation. Amsterdam: Morgan Kaufmann, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Franklin, Manoj. Multiscalar Processors. Boston, MA: Springer US, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Comer, Douglas. Network systems design: Using network processors : Agere version. Upper Saddle River, N.J: Pearson/Prentice Hall, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Comer, Douglas. Network systems design: Using network processors : Intel IXP version. Upper Saddle River, N.J: Pearson/Prentice Hall, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kunze, Aaron, and Intel Corporation, eds. IXP1200 programming: The microengine coding guide for the Intel IXP1200 network processor family. Hillsboro, OR: Intel Press, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Johnson, Erik. IXP1200 programming: The microengine coding guide for the Intel IXP1200 network processor family. Hillsboro, OR: Intel Press, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ogras, Umit Y. Modeling, Analysis and Optimization of Network-on-Chip Communication Architectures. Dordrecht: Springer Netherlands, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lekkas, Panos C. Network Processors. New York: McGraw-Hill, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Network processors Computer architecture. Computer networks"

1

Murti, KCS. "Embedded Processor Architectures." In Transactions on Computer Systems and Networks, 341–89. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3293-8_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Nen-Fu, Ying-Tsuen Chen, Yi-Chung Chen, Chia-Nan Kao, and Joe Chiou. "A Network Processor-Based Fault-Tolerance Architecture for Critical Network Equipments." In Lecture Notes in Computer Science, 763–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-25978-7_76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Jing, and Michel Savoie. "Peer-to-Peer Network Architecture." In Handbook of Computer Networks, 131–51. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118256107.ch9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fojcik, Marcin, and Joar Sande. "Some Problems of Integrating Industrial Network Control Systems Using Service Oriented Architecture." In Computer Networks, 210–21. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38865-1_22.

5

Kupriyanov, Alexey, Frank Hannig, Dmitrij Kissler, Jürgen Teich, Julien Lallet, Olivier Sentieys, and Sébastien Pillement. "Modeling of Interconnection Networks in Massively Parallel Processor Architectures." In Lecture Notes in Computer Science, 268–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-71270-1_20.

6

Srinivasa Rao, T., S. K. Bose, K. R. Srivathsan, and Kalyanmoy Deb. "A New Approach for Network Topology Optimization." In Computer Networks, Architecture and Applications, 358–71. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-0-387-34887-2_20.

7

Talati, Vijay, and S. L. Mehndiratta. "Vartalaap: A Network Based Multimedia Presentation System." In Computer Networks, Architecture and Applications, 107–23. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-0-387-34887-2_7.

8

Grochla, Krzysztof, and Piotr Stolarz. "Extending the TLS Protocol by EAP Handshake to Build a Security Architecture for Heterogenous Wireless Network." In Computer Networks, 258–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38865-1_27.

9

Venkatesulu, D., and Timothy A. Gonsalves. "A Queueing Network Model of Distributed Shared Memory." In Computer Networks, Architecture and Applications, 265–79. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-0-387-34887-2_15.

10

Drossu, R., T. V. Lakshman, Z. Obradovic, and C. Raghavendra. "Single and Multiple Frame Video Traffic Prediction Using Neural Network Models." In Computer Networks, Architecture and Applications, 146–58. Boston, MA: Springer US, 1995. http://dx.doi.org/10.1007/978-0-387-34887-2_9.


Conference papers on the topic "Network processors Computer architecture. Computer networks"

1

Hidalgo-Espinoza, Sergio, Kevin Chamorro-Cupuerán, and Oscar Chang-Tortolero. "Intrusion Detection in Computer Systems by using Artificial Neural Networks with Deep Learning Approaches." In 10th International Conference on Advances in Computing and Information Technology (ACITY 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101501.

Abstract:
Intrusion detection in computer networks has become one of the most important issues in cybersecurity. Attackers keep researching and coding to discover new vulnerabilities with which to penetrate information security systems. In consequence, computer systems must be upgraded daily, using up-to-date techniques to keep hackers at bay. This paper focuses on the design and implementation of an intrusion detection system based on Deep Learning architectures. As a first step, a shallow network is trained with labelled log-in [into a computer network] data taken from the CICIDS2017 dataset. The internal behaviour of this network is carefully tracked and tuned, using plotting and exploring codes, until it reaches a functional peak in intrusion prediction accuracy. As a second step, an autoencoder, trained with big unlabelled data, is used as a middle processor which feeds compressed information and an abstract representation to the original shallow network. It is shown that the resultant deep architecture performs better than any version of the shallow network alone. The resultant functional code scripts, written in MATLAB, represent a re-trainable system which has been validated using real data, producing good precision and fast response.
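The two-step idea in the abstract (train a shallow classifier, then feed it a compressed representation learned without labels) can be sketched in a few lines. The sketch below is illustrative only: it uses PCA as a stand-in for the paper's autoencoder, logistic regression as the shallow model, and synthetic data in place of CICIDS2017, none of which are the authors' MATLAB code.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled log-in records (CICIDS2017 is not bundled here).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: a shallow classifier trained on the raw features.
shallow = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Step 2: an unsupervised compressor (PCA here, an autoencoder in the paper)
# feeds a compressed, abstract representation to the same kind of shallow model.
pca = PCA(n_components=10, random_state=0).fit(X_tr)
deep = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)

acc_shallow = shallow.score(X_te, y_te)
acc_deep = deep.score(pca.transform(X_te), y_te)
```

Comparing `acc_shallow` and `acc_deep` on held-out data mirrors the paper's evaluation of the shallow network against the deep architecture.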
2

Derutin, J. P., L. Damez, A. Desportes, and J. L. Lazaro Galilea. "Design of a Scalable Network of Communicating Soft Processors on FPGA." In CAMPS 2006. International Workshop on Computer Architecture for Machine Perception and Sensing. IEEE, 2006. http://dx.doi.org/10.1109/camp.2007.4350378.

3

Mughaz, Dror, Michael Cohen, Sagit Mejahez, Tal Ades, and Dan Bouhnik. "From an Artificial Neural Network to Teaching [Abstract]." In InSITE 2020: Informing Science + IT Education Conferences: Online. Informing Science Institute, 2020. http://dx.doi.org/10.28945/4557.

Abstract:
[This Proceedings paper was revised and published in the "Interdisciplinary Journal of e-Skills and Lifelong Learning," 16, 1-17.] Aim/Purpose: Using Artificial Intelligence with Deep Learning (DL) techniques, which mimic the action of the brain, to improve a student’s grammar learning process. Finding the subject of a sentence using DL and, through this computational task, analyzing human learning processes and mistakes. In addition, showing Artificial Intelligence learning processes with and without a general overview of the problem under examination. Applying the idea of the general perspective that the network gets on the sentences, and deriving recommendations from this for teaching processes. Background: We looked for common patterns of computer errors and human grammar mistakes, deducing the neural network’s learning process, deriving conclusions, and applying concepts from this process to the process of human learning. Methodology: We used DL technologies and research methods. After analysis, we built models from three types of complex neural networks – LSTM, Bi-LSTM, and GRU – with sequence-to-sequence architecture. We then combined the sequence-to-sequence architecture model with the attention mechanism, which gives a general overview of the input that the network receives. Contribution: The cost of computer applications is lower than that of manual human effort, and the availability of a computer program is much greater than that of humans performing the same task. Thus, using computer applications, we can obtain many desired examples of mistakes without having to pay humans to perform the same task. Understanding the mistakes of the machine can help us to understand human mistakes, because the human brain is the model for the artificial neural network. In this way, we can facilitate the student learning process by teaching students not to make the mistakes we have seen made by the artificial neural network.
We hope that with the method we have developed, it will be easier for teachers to discover common mistakes in students’ work before starting to teach them. In addition, we show that a “general explanation” of the issue under study can help the teaching and learning process. Findings: We performed the test case on the Hebrew language. From the mistakes produced by the computerized neural network model we built, we were able to classify common human errors; that is, we were able to find a correspondence between machine mistakes and student mistakes. Recommendations for Practitioners: Use an artificial neural network to discover mistakes, and teach students not to make those mistakes. We recommend that before a teacher begins teaching a new topic, he or she give a general explanation of the problems the topic deals with and how to solve them. Recommendations for Researchers: Use machines that simulate the learning processes of the human brain, and study whether we can thus learn about human learning processes. Impact on Society: When the computer makes the same mistakes as a human would, it is very easy to learn from those mistakes and improve the study process. The fact that machines and humans make similar mistakes is a valuable insight, especially in the field of education, since we can generate and analyze computer system errors instead of surveying humans (who make mistakes similar to those of the machine); the teaching process thus becomes cheaper and more efficient. Future Research: We plan to create an automatic grammar-mistakes maker (for instance, by giving the artificial neural network only a tiny data-set to learn from) and ask students to correct the errors made. In this way, the students will practice the material in a focused manner. We plan to apply these techniques to other education subfields and also to non-educational fields.
As far as we know, this is the first study to go in this direction ‒ instead of looking at organisms and building machines, to look at machines and learn about organisms.
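Of the three recurrent cell types the authors compare, the GRU is the simplest to sketch. Below is a minimal NumPy implementation of a single GRU step in the standard formulation (update gate, reset gate, candidate state); the dimensions and random weights are arbitrary illustrations, not the authors' trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(Wz @ x + Uz @ h)            # how much of the new candidate to take
    r = sigmoid(Wr @ x + Ur @ h)            # how much of the old state to expose
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde        # blend old state and candidate

rng = np.random.default_rng(0)
d_in, d_h = 8, 16
# Input-to-hidden matrices at even indices, hidden-to-hidden at odd indices.
params = [rng.normal(scale=0.1, size=(d_h, d_in)) if i % 2 == 0
          else rng.normal(scale=0.1, size=(d_h, d_h)) for i in range(6)]

h = np.zeros(d_h)
for t in range(5):                          # run the cell over a short sequence
    h = gru_step(rng.normal(size=d_in), h, *params)
```

In a sequence-to-sequence setup like the paper's, an encoder built from such cells consumes the input sentence and a decoder (optionally with attention over the encoder states) emits the output sequence.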
4

De Oliveira, Lucas, Guilherme Mota, and Vitor Vidal. "A Thorough Evaluation of Kernel Order in CNN Based Traffic Signs Recognition." In Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/wvc.2020.13485.

Abstract:
The Convolutional Neural Network is an important deep learning architecture for computer vision. Along with its variations, it has brought image analysis applications to a new performance level. However, despite its undoubted quality, the evaluation of performance presented in the literature is mostly restricted to accuracy measurements. So, considering the stochastic nature of neural network training and the impact of architecture configuration, research is still needed to confirm whether such architectures have reached the optimal configuration for their target problems. Statistical significance is a powerful tool for a more accurate experimental evaluation of stochastic processes. This paper is dedicated to a thorough evaluation of the influence of kernel order on convolutional neural networks in the context of traffic sign recognition. Experiments for distinct kernel sizes were performed using the most widely accepted database, the so-called German Traffic Sign Recognition Benchmark.
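The quantity under study, kernel order, trades parameters per filter against feature-map size. A naive "valid" 2-D convolution is enough to show the effect; the sketch below is illustrative only (the 32x32 input stands in for a traffic-sign crop) and makes no claim about the paper's code.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution: enough to show how kernel order
    changes the feature-map size."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.random.default_rng(0).normal(size=(32, 32))  # stand-in for a sign crop
# Feature-map size and parameter count per filter for kernel orders 3, 5, 7.
shapes = {k: conv2d_valid(img, np.ones((k, k)) / (k * k)).shape for k in (3, 5, 7)}
params = {k: k * k for k in (3, 5, 7)}
```

A 7x7 kernel carries over five times the parameters of a 3x3 one and shrinks the map further, which is why kernel order is worth evaluating with statistical rigor rather than a single accuracy run.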
5

Jung, Yung J., Laila Jaber-Ansari, Xugang Xiong, Sinan Mu¨ftu¨, Ahmed Busnaina, Swastik Kar, Caterina Soldano, and Pulickel M. Ajayan. "Highly Organized Carbon Nanotube-PDMS Hybrid System for Multifunctional Flexible Devices." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-35442.

Abstract:
We will present a method to fabricate a new class of hybrid composite structures based on highly organized multiwalled carbon nanotube (MWNT) and singlewalled carbon nanotube (SWNT) network architectures and a polydimethylsiloxane (PDMS) matrix for prototype high-performance flexible systems that could be used in many daily-use applications. To build one- to three-dimensional, highly organized network architectures with carbon nanotubes (both MWNT and SWNT) at the macro-, micro-, and nanoscale, we used various nanotube assembly processes, such as selective growth of carbon nanotubes using chemical vapor deposition (CVD) and self-assembly of nanotubes on patterned trenches through solution evaporation with dip coating. These vertically or horizontally aligned and assembled nanotube architectures and networks are then transferred into a PDMS matrix using a casting process, thereby creating highly organized carbon-nanotube-based flexible composite structures. The PDMS matrix undergoes excellent conformal filling within the dense nanotube network, giving rise to extremely flexible conducting structures with unique electromechanical properties. We will demonstrate the composite's robustness under large stress conditions, under which it is found to retain its conducting nature. We will also demonstrate that these structures can be directly utilized as flexible field-emission devices. Our devices show some of the best field enhancement factors and turn-on electric fields reported so far.
6

Tibbals, Thomas F., Theodore A. Bapty, and Ben A. Abbott. "CADDMAS: A Real-Time Parallel System for Dynamic Data Analysis." In ASME 1994 International Gas Turbine and Aeroengine Congress and Exposition. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/94-gt-194.

Abstract:
Arnold Engineering Development Center (AEDC) has designed and built a high-speed data acquisition and processing system for real-time online dynamic data monitoring and analysis. The Computer Assisted Dynamic Data Monitoring and Analysis System (CADDMAS) provides 24 channels at high frequency and another 24 channels at low frequency for online real-time aeromechanical, vibration, and performance analysis of advanced turbo-engines and other systems. The system is primarily built around two different parallel processors and several PCs to demonstrate hardware independence and architecture scalability. These processors provide the computational power to display online and in real-time what can take from days to weeks using existing offline techniques. The CADDMAS provides online test direction and immediate hardcopy plots for critical parameters, all the while providing continuous health monitoring through parameter limit checking. Special in-house developed Front End Processors (FEP) sample the dynamic signals, perform anti-aliasing, signal transfer function correction, and bandlimit filtering to improve the accuracy of the time domain signal. A second in-house developed Numeric Processing Element (NPE) performs the FFT, threshold monitoring, and packetizes the data for rapid asynchronous access by the parallel network. Finally, the data are then formatted for display, hardcopy plotting, and cross-channel processing within the parallel network utilizing off-the-shelf hardware. The parallel network is a heterogeneous message-passing parallel pipeline configuration which permits easy scaling of the system. Advanced parallel processing scheduler/controller software has been adapted specifically for CADDMAS to allow quasi-dynamic instantiation of a variety of simultaneous data processing tasks concurrent with display and alarm monitoring functions without gapping the data. 
Although many applications of CADDMAS exist, this paper describes the features of CADDMAS, the development approach, and the application of CADDMAS for turbine engine aeromechanical testing.
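The Numeric Processing Element's pipeline (FFT a frame, then check spectral lines against limits and alert) can be sketched in NumPy. The sample rate and per-frequency amplitude limits below are hypothetical, chosen only to illustrate the threshold-monitoring idea, not CADDMAS's actual configuration.

```python
import numpy as np

FS = 1024                      # sample rate in Hz (hypothetical)
LIMITS = {64: 0.5, 200: 0.8}   # frequency (Hz) -> amplitude limit (hypothetical)

def monitor(frame):
    """FFT one frame and flag any monitored line whose amplitude exceeds its limit."""
    spectrum = np.abs(np.fft.rfft(frame)) * 2 / len(frame)   # single-sided amplitude
    freqs = np.fft.rfftfreq(len(frame), d=1 / FS)
    alerts = []
    for f, limit in LIMITS.items():
        amp = spectrum[np.argmin(np.abs(freqs - f))]         # nearest bin
        if amp > limit:
            alerts.append((f, amp))
    return alerts

t = np.arange(FS) / FS
quiet = 0.1 * np.sin(2 * np.pi * 64 * t)                 # within limits
loud = quiet + 1.0 * np.sin(2 * np.pi * 200 * t)         # 200 Hz line over its limit
```

In a real pipeline this per-frame check would run continuously, with anti-aliasing and transfer-function correction applied upstream as the abstract describes.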
7

Thames, J. Lane, Andrew Hyder, Robert Wellman, and Dirk Schaefer. "An Information Technology Infrastructure for Internet-Enabled Remote and Portable Laboratories." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87112.

Abstract:
With the proliferation of distributed and distance learning in higher education, there is a growing need for remote and portable laboratory design and deployment in the engineering, science, and technology education sectors. Among the current threads of research in this area, very little work has focused on solutions to the challenges, imposed by modern-day information technology infrastructure, enterprise networks, and enterprise network security change-management processes, that will be faced by large-scale deployments of remote and portable labs. In this paper, the authors discuss some of these challenges and propose the use of a command and control communications architecture coupled with Web 2.0 as a solution to many of the deployment challenges.
8

Sundararajan, V., Andrew Redfern, William Watts, and Paul Wright. "Distributed Monitoring of Steady-State System Performance Using Wireless Sensor Networks." In ASME 2004 International Mechanical Engineering Congress and Exposition. ASMEDC, 2004. http://dx.doi.org/10.1115/imece2004-59884.

Abstract:
Wireless sensor networks provide a cost-effective alternative to monitoring system performance in real-time. In addition to the ability to communicate data without wires, the sensor nodes possess computing and memory capabilities that can be harnessed to execute signal processing and state-tracking algorithms. This paper describes the architecture and application layer protocols for the distributed monitoring of the steady-state performance of systems that have a finite number of states. Protocols are defined for two phases — the learning phase and the monitoring phase. In the learning phase, an expert user trains the wireless network to define the acceptable states of the system. The nodes are programmed with a set of algorithms for processing their readings. The nodes use these algorithms to compute invariant metrics on the sensor readings, which are then used to define the internal state of the node. In the monitoring phase, the nodes track their individual states by computing their state based on the sensor readings and then comparing them with the pre-determined values. If the system properties change, the nodes communicate with each other to determine the new state. If the new state is not one of the acceptable states determined in the learning phase, an alert is raised. This approach de-centralizes the monitoring and detection process by distributing both the state information and the computing throughout the network. The paper presents algorithms for the various processes of the system and also the results of testing the sensor network architecture on real-time models. The sensor network can be used in automotive engine test rigs to carry out long term performance analysis.
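The two-phase protocol the abstract describes (learn acceptable states as invariant metrics in a training phase, then classify readings against them and alert on a mismatch) can be sketched for a single node as follows. The state names, tolerance, and mean/std invariants are illustrative assumptions, not the paper's exact algorithms.

```python
import numpy as np

class StateMonitor:
    """Learn acceptable states as (mean, std) invariants per channel, then flag
    readings that match none of them: a single-node sketch of the two phases."""

    def __init__(self, tolerance=3.0):
        self.states = {}           # state name -> (mean, std) per sensor channel
        self.tolerance = tolerance

    def learn(self, name, readings):
        """Learning phase: an expert labels a batch of readings as one state."""
        r = np.asarray(readings)
        self.states[name] = (r.mean(axis=0), r.std(axis=0) + 1e-9)

    def classify(self, reading):
        """Monitoring phase: return the matching state, or None to raise an alert."""
        for name, (mu, sigma) in self.states.items():
            if np.all(np.abs(reading - mu) / sigma < self.tolerance):
                return name
        return None

rng = np.random.default_rng(0)
mon = StateMonitor()
mon.learn("idle", rng.normal(0.0, 0.1, size=(100, 3)))   # 100 samples, 3 channels
mon.learn("load", rng.normal(5.0, 0.2, size=(100, 3)))
```

In the distributed version described by the paper, each node tracks its own state this way and nodes communicate only when a change or an unrecognized state is detected.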
9

Horváth, Imre, Zoltán Rusák, Eliab Z. Opiyo, and Adrie Kooijman. "Towards Ubiquitous Design Support." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87573.

Abstract:
Efficient computer support of product innovation processes has become an important issue of industrial competitiveness over the last forty years. As a consequence, there has been a growing demand for new computer-based tools and systems. Various hardware, software, and knowledge technologies have been used over the years as the basis of design support systems. With the appearance of network technologies, the conventional standalone-workstation paradigm has been replaced by the paradigm of web-interconnected collaborative environments. Currently, the emerging and rapidly proliferating mobile and ubiquitous computing technologies create a technological push again. These technologies force us to reconsider not only the digital information processing devices and their interconnection, but also the way of obtaining, processing, and communicating product design information. Many researchers and laboratories are engaged in the development of novel concepts, architectures, tools, and methods for next-generation design support environments. These will integrate many resources of the current collaborative design environments with pervasive computing functionality and large-scale mobility in a volatile manner. Some of the design support tools will have a fixed location but will be remotely accessible through wireless networks. Other tools will move with the designers as portable, embedded, wearable, and transferable devices, and will feature ad hoc connectivity. These not only offer new ways of aggregating, processing, and presenting design information, but also enable alternative ways of completing design activities.
Our current research concentrates on three interrelated main issues: (i) studying workflow scenarios for future design support environments, (ii) investigating and integrating multiple technologies into an ad hoc interconnected heterogeneous infrastructure, and (iii) exploring efficient methods for utilizing new affordances in supporting product innovation. In this paper we report on the results of our recent technology study, which analyzed the current results and trends of ubiquitous technology development and tried to form a vision of the possible manifestation of future ubiquitous design support environments. Essentially, these have been conceptualized as ad hoc, volatile networks of fixed and mobile information collection, processing, and communication units. This network functions as a complex service-provider system, with special attention to on-demand information management in the fuzzy front end of design projects.
10

Ma, Xiaohan, Chang Si, Ying Wang, Cheng Liu, and Lei Zhang. "NASA: Accelerating Neural Network Design with a NAS Processor." In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2021. http://dx.doi.org/10.1109/isca52012.2021.00067.


Reports on the topic "Network processors Computer architecture. Computer networks"

1

Farhi, Edward, and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. Web of Open Science, December 2020. http://dx.doi.org/10.37686/qrl.v1i2.80.

Abstract:
We introduce a quantum neural network, QNN, that can represent labeled data, classical or quantum, and be trained by supervised learning. The quantum circuit consists of a sequence of parameter-dependent unitary transformations which acts on an input quantum state. For binary classification a single Pauli operator is measured on a designated readout qubit. The measured output is the quantum neural network’s predictor of the binary label of the input state. We show through classical simulation that parameters can be found that allow the QNN to learn to correctly distinguish the two data sets. We then discuss presenting the data as quantum superpositions of computational basis states corresponding to different label values. Here we show through simulation that learning is possible. We consider using our QNN to learn the label of a general quantum state. By example we show that this can be done. Our work is exploratory and relies on the classical simulation of small quantum systems. The QNN proposed here was designed with near-term quantum processors in mind. Therefore it will be possible to run this QNN on a near-term gate-model quantum computer where its power can be explored beyond what can be explored with simulation.
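For a single qubit, the QNN's predictor (a parameterized unitary applied to the input state, followed by a Pauli-Z measurement on the readout qubit whose expectation's sign gives the label) is easy to simulate classically. The choice of a Y-rotation as the parameterized unitary below is an illustrative assumption, not the circuit family from the paper.

```python
import numpy as np

# Pauli-Z observable measured on the readout qubit.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    """Parameterized single-qubit unitary: rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def predict(state, theta):
    """Apply the parameterized unitary, then return <Z>; its sign is the
    predicted binary label of the input state."""
    psi = ry(theta) @ state
    return np.real(psi.conj() @ (Z @ psi))

zero = np.array([1, 0], dtype=complex)   # |0>, nominal label +1
one = np.array([0, 1], dtype=complex)    # |1>, nominal label -1
```

Training then amounts to adjusting `theta` so that the sign of `predict` matches the labels over the training set, which for small systems can be done entirely in classical simulation, as the abstract notes.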