Academic literature on the topic 'Sparse Matrix Vector Multiplications'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sparse Matrix Vector Multiplications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Sparse Matrix Vector Multiplications"

1. Tao, Yuan, Yangdong Deng, Shuai Mu, et al. "GPU accelerated sparse matrix-vector multiplication and sparse matrix-transpose vector multiplication." Concurrency and Computation: Practice and Experience 27, no. 14 (2014): 3771–89. http://dx.doi.org/10.1002/cpe.3415.

2. Chen, Donglin, Jianbin Fang, Chuanfu Xu, Shizhao Chen, and Zheng Wang. "Characterizing Scalability of Sparse Matrix–Vector Multiplications on Phytium FT-2000+." International Journal of Parallel Programming 48, no. 1 (2019): 80–97. http://dx.doi.org/10.1007/s10766-019-00646-x.

3. Burkhardt, Paul. "Optimal Algebraic Breadth-First Search for Sparse Graphs." ACM Transactions on Knowledge Discovery from Data 15, no. 5 (2021): 1–19. http://dx.doi.org/10.1145/3446216.

Abstract:
There has been a rise in the popularity of algebraic methods for graph algorithms given the development of the GraphBLAS library and other sparse matrix methods. An exemplar for these approaches is Breadth-First Search (BFS). The algebraic BFS algorithm is simply a recurrence of matrix-vector multiplications with the n × n adjacency matrix, but the many redundant operations over nonzeros ultimately lead to suboptimal performance. Therefore an optimal algebraic BFS should be of keen interest, especially if it is easily integrated with existing matrix methods. Current methods, notably in the GraphBLAS, use a sparse matrix masked-sparse vector multiplication in which the input vector is kept in a sparse representation in each step of the BFS, and nonzeros in the vector are masked in subsequent steps. This has been an area of recent research in GraphBLAS and other libraries. While in theory these masking methods are asymptotically optimal on sparse graphs, many add work that leads to suboptimal runtime. We give a new optimal, algebraic BFS for sparse graphs, thus closing a gap in the literature. Our method multiplies progressively smaller submatrices of the adjacency matrix at each step. Let n and m refer to the number of vertices and edges, respectively. On a sparse graph, our method takes O(n) algebraic operations as opposed to the O(m) operations needed by theoretically optimal sparse matrix approaches. Thus, for sparse graphs, it matches the bounds of the best-known sequential algorithm, and on a Parallel Random Access Machine, it is work-optimal. Our result holds for both directed and undirected graphs. Compared to a leading GraphBLAS library, our method achieves up to 24x faster sequential time, and for parallel computation, it can be 17x faster on large graphs and 12x faster on large-diameter graphs.
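To make the algebraic formulation concrete, the sketch below shows one BFS step as a Boolean sparse matrix-vector product in C, using the standard masked formulation the abstract describes rather than the paper's optimal submatrix method; the CSR layout and all identifiers are illustrative assumptions, not code from the paper.

```c
/* One algebraic BFS step: next = A^T frontier over the Boolean semiring,
   masked so that already-visited vertices never re-enter the frontier.
   A is stored in CSR; rowptr has length n+1. Illustrative sketch only. */
static int bfs_step(int n, const int *rowptr, const int *colidx,
                    const char *frontier, char *visited, char *next)
{
    int found = 0;
    for (int v = 0; v < n; ++v)
        next[v] = 0;
    for (int u = 0; u < n; ++u) {
        if (!frontier[u])                  /* exploit sparsity of the vector */
            continue;
        for (int k = rowptr[u]; k < rowptr[u + 1]; ++k) {
            int v = colidx[k];             /* edge u -> v                    */
            if (!visited[v]) {             /* the mask from prior steps      */
                visited[v] = next[v] = 1;
                found = 1;
            }
        }
    }
    return found;   /* 0 means the frontier died out: the BFS is complete */
}
```

A full traversal seeds frontier and visited with the source vertex and calls bfs_step in a loop, swapping frontier and next, until it returns 0; Burkhardt's contribution is to shrink the matrix itself between iterations so the total work is O(n) rather than O(m).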
4. Erhel, Jocelyne. "Sparse Matrix Multiplication on Vector Computers." International Journal of High Speed Computing 2, no. 2 (1990): 101–16. http://dx.doi.org/10.1142/s012905339000008x.

5. Bienz, Amanda, William D. Gropp, and Luke N. Olson. "Node aware sparse matrix–vector multiplication." Journal of Parallel and Distributed Computing 130 (August 2019): 166–78. http://dx.doi.org/10.1016/j.jpdc.2019.03.016.

6. Filippone, Salvatore, Valeria Cardellini, Davide Barbieri, and Alessandro Fanfarillo. "Sparse Matrix-Vector Multiplication on GPGPUs." ACM Transactions on Mathematical Software 43, no. 4 (2017): 1–49. http://dx.doi.org/10.1145/3017994.

7. Haque, Sardar Anisul, Shahadat Hossain, and M. Moreno Maza. "Cache friendly sparse matrix-vector multiplication." ACM Communications in Computer Algebra 44, no. 3/4 (2011): 111–12. http://dx.doi.org/10.1145/1940475.1940490.

8. Heath, L. S., C. J. Ribbens, and S. V. Pemmaraju. "Processor-efficient sparse matrix-vector multiplication." Computers & Mathematics with Applications 48, no. 3–4 (2004): 589–608. http://dx.doi.org/10.1016/j.camwa.2003.06.009.

9. Chen, Donglin, Jianbin Fang, Shizhao Chen, Chuanfu Xu, and Zheng Wang. "Optimizing Sparse Matrix–Vector Multiplications on an ARMv8-based Many-Core Architecture." International Journal of Parallel Programming 47, no. 3 (2019): 418–32. http://dx.doi.org/10.1007/s10766-018-00625-8.

10. Yang, Xintian, Srinivasan Parthasarathy, and P. Sadayappan. "Fast sparse matrix-vector multiplication on GPUs." Proceedings of the VLDB Endowment 4, no. 4 (2011): 231–42. http://dx.doi.org/10.14778/1938545.1938548.


Dissertations / Theses on the topic "Sparse Matrix Vector Multiplications"

1. Ashari, Arash. "Sparse Matrix-Vector Multiplication on GPU." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1417770100.

2. Ramachandran, Shridhar. "Incremental PageRank acceleration using Sparse Matrix-Sparse Vector Multiplication." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462894358.

3. Balasubramanian, Deepan Karthik. "Efficient Sparse Matrix Vector Multiplication for Structured Grid Representation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1339730490.

4. Mansour, Ahmad. "Sparse Matrix-Vector Multiplication Based on Network-on-Chip." Munich: Verlag Dr. Hut, 2015. http://d-nb.info/1075409470/34.

5. Singh, Kunal. "High-Performance Sparse Matrix-Multi Vector Multiplication on Multi-Core Architecture." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524089757826551.

6. El-Kurdi, Yousef M. "Sparse Matrix-Vector floating-point multiplication with FPGAs for finite element electromagnetics." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98958.

Abstract:
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. Field Programmable Gate Arrays (FPGAs) have been shown to have higher peak floating-point performance than general purpose CPUs, and the trends are moving in favor of FPGAs. We present an architecture and implementation of an FPGA-based Sparse Matrix-Vector Multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. Our architecture exploits the FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements. The architecture is based on a pipelined linear array of Processing Elements (PEs). A hardware-oriented matrix "striping" scheme is developed which reduces the number of required processing elements. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz obtaining a peak performance of 1.76 GFLOPS. For 8 GB/s of memory bandwidth typical of recent FPGA reconfigurable systems, this architecture can achieve 1.5 GFLOPS sustained performance. A single pipeline uses 30% of the logic resources and 40% of the memory resources of a Stratix S80 FPGA. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solvers such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
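The headline figures in this abstract follow from a simple peak-rate calculation, assuming each processing element sustains one multiply-add (two floating-point operations) per clock cycle, which is consistent with the numbers quoted:

```latex
\text{peak} = 8\ \text{PEs} \times 110\ \text{MHz} \times 2\ \tfrac{\text{flops}}{\text{PE}\cdot\text{cycle}} = 1.76\ \text{GFLOPS}
```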
7. Godwin, Jeswin Samuel. "High-Performance Sparse Matrix-Vector Multiplication on GPUs for Structured Grid Computations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357280824.

8. deLorimier, Michael. "Floating-point sparse matrix-vector multiply for FPGAs." Diss., Pasadena, Calif.: California Institute of Technology, 2005. http://resolver.caltech.edu/CaltechETD:etd-05132005-144347.

9. Belgin, Mehmet. "Structure-based Optimizations for Sparse Matrix-Vector Multiply." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/30260.

Abstract:
This dissertation introduces two novel techniques, OSF and PBR, to improve the performance of Sparse Matrix-vector Multiply (SMVM) kernels, which dominate the runtime of iterative solvers for systems of linear equations. SMVM computations that use sparse formats typically achieve only a small fraction of peak CPU speeds because they are memory bound due to their low flops:byte ratio, they access memory irregularly, and exhibit poor ILP due to inefficient pipelining. We particularly focus on improving the flops:byte ratio, which is the main limiter on performance, by exploiting recurring structures or sub-structures in matrices. Our techniques also support micro-architecture level optimizations to further improve performance. Operation Stacking Framework (OSF) stacks problems in large ensemble computations, which run the same sparse kernel using an identical matrix structure, such that they share a single copy of the indexing information to significantly reduce memory bandwidth usage. OSF provides performance improvements of up to 1.94x on an AMD Opteron compared to the CSR method. We validate performance results using hardware event counters, which demonstrate significantly improved cache and pipeline utilization. Pattern-based Representation (PBR) exploits recurring block nonzero patterns by generating custom code for each recurring block pattern. In this way, no indexing data for individual nonzero elements are read from memory, reducing the overall size of the indices by up to 98%. Our code generator emits highly tuned codes that utilize SSE vectorization and software prefetching. PBR accurately identifies a block size that achieves optimal or near-optimal performance using a linear multiple regression performance model. On recent multicore machines, PBR provides performance improvements of up to 3.4x sequentially and 5x in parallel, compared to the CSR method. The PBR library we provide converts matrices at runtime, allowing our method to be used as a drop-in replacement for existing methods. We compare PBR's overhead relative to its benefits and show that PBR is beneficial for many applications that repetitively call the SMVM kernel for the same matrix structure.
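For context, the CSR baseline that both OSF and PBR are compared against is the textbook kernel sketched below in C; the low flops:byte ratio the abstract identifies is visible directly, since each nonzero costs two flops but moves at least twelve bytes of matrix data (an 8-byte value plus a 4-byte column index). This is a generic illustration, not code from the dissertation.

```c
/* Textbook CSR sparse matrix-vector multiply: y = A*x.
   Per nonzero: 2 flops vs. >= 12 bytes streamed from memory
   (8-byte value + 4-byte column index), so the kernel is memory
   bound; PBR attacks exactly this per-nonzero indexing overhead. */
void spmv_csr(int nrows, const int *rowptr, const int *colidx,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < nrows; ++i) {
        double sum = 0.0;
        for (int k = rowptr[i]; k < rowptr[i + 1]; ++k)
            sum += val[k] * x[colidx[k]];  /* irregular gather from x */
        y[i] = sum;
    }
}
```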
10. Flegar, Goran. "Sparse Linear System Solvers on GPUs: Parallel Preconditioning, Workload Balancing, and Communication Reduction." Doctoral thesis, Universitat Jaume I, 2019. http://hdl.handle.net/10803/667096.

Abstract:
With the breakdown of Dennard scaling in the mid-2000s and the end of Moore's law on the horizon, the high performance computing community is turning its attention towards unconventional accelerator hardware to ensure the continued growth of computational capacity. This dissertation presents several contributions related to the iterative solution of sparse linear systems on the most widely used general purpose accelerator - the Graphics Processing Unit (GPU). Specifically, it accelerates the major building blocks of Krylov solvers, and describes their realization as part of a software library of reusable building blocks. The first part of the dissertation focuses on the sparse matrix-vector product and effective load balancing in the presence of irregular sparsity patterns. The second part describes the design of high-performance preconditioners. Finally, the third part demonstrates the potential of adaptive precision techniques for constructing preconditioners with lower memory footprint, and accuracy comparable to their full precision equivalents.
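As a rough illustration of the workload-balancing problem the first part addresses, the sketch below partitions rows by nonzero count rather than by row count, so that a few very dense rows cannot overload a single worker; it is a simplified CPU-side analogue of the GPU techniques in the thesis, and every identifier is our own.

```c
/* Split rows [0, nrows) into nparts contiguous ranges holding roughly
   equal numbers of nonzeros (CSR rowptr is the nnz prefix sum).
   row_start must have room for nparts + 1 entries. */
void partition_by_nnz(int nrows, const int *rowptr, int nparts,
                      int *row_start)
{
    long nnz = rowptr[nrows];
    row_start[0] = 0;
    for (int t = 1; t < nparts; ++t) {
        long target = nnz * t / nparts;
        /* binary search: first row boundary whose nnz prefix >= target */
        int lo = row_start[t - 1], hi = nrows;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (rowptr[mid] < target) lo = mid + 1;
            else hi = mid;
        }
        row_start[t] = lo;
    }
    row_start[nparts] = nrows;
}
```

Each worker t then runs the usual CSR kernel over rows [row_start[t], row_start[t+1]).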

Books on the topic "Sparse Matrix Vector Multiplications"

1. Andersen, J. The scheduling of sparse matrix-vector multiplication on a massively parallel DAP computer. Brunel University, Department of Mathematics and Statistics, 1991.

2. Bisseling, Rob H. Parallel Scientific Computation. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198788348.001.0001.

Abstract:
This book explains how to use the bulk synchronous parallel (BSP) model to design and implement parallel algorithms in the areas of scientific computing and big data. Furthermore, it presents a hybrid BSP approach towards new hardware developments such as hierarchical architectures with both shared and distributed memory. The book provides a full treatment of core problems in scientific computing and big data, starting from a high-level problem description, via a sequential solution algorithm to a parallel solution algorithm and an actual parallel program written in the communication library BSPlib. Numerical experiments are presented for parallel programs on modern parallel computers ranging from desktop computers to massively parallel supercomputers. The introductory chapter of the book gives a complete overview of BSPlib, so that the reader already at an early stage is able to write his/her own parallel programs. Furthermore, it treats BSP benchmarking and parallel sorting by regular sampling. The next three chapters treat basic numerical linear algebra problems such as linear system solving by LU decomposition, sparse matrix-vector multiplication (SpMV), and the fast Fourier transform (FFT). The final chapter explores parallel algorithms for big data problems such as graph matching. The book is accompanied by a software package BSPedupack, freely available online from the author’s homepage, which contains all programs of the book and a set of test programs.
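For readers new to the model, the BSP cost function underlying the book's analyses charges every superstep for computation, communication, and synchronization; in the usual notation (w for local work, h for the maximum number of words any processor sends or receives, g for per-word communication cost, l for synchronization latency):

```latex
T_{\text{superstep}} = w + h\,g + l
```

For parallel SpMV this makes the design goal explicit: with the roughly 2·nnz/p flops per processor fixed, a good matrix distribution is one that minimizes the communication volume h.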

Book chapters on the topic "Sparse Matrix Vector Multiplications"

1. Vassiliadis, Stamatis, Sorin Cotofana, and Pyrrhos Stathis. "Vector ISA Extension for Sparse Matrix-Vector Multiplication." In Euro-Par’99 Parallel Processing. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48311-x_100.

2. Maeda, Hiroshi, and Daisuke Takahashi. "Parallel Sparse Matrix-Vector Multiplication Using Accelerators." In Computational Science and Its Applications – ICCSA 2016. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-42108-7_1.

3. Schubert, Gerald, Georg Hager, and Holger Fehske. "Performance Limitations for Sparse Matrix-Vector Multiplications on Current Multi-Core Environments." In High Performance Computing in Science and Engineering, Garching/Munich 2009. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-13872-0_2.

4. Hishinuma, Toshiaki, Hidehiko Hasegawa, and Teruo Tanaka. "SIMD Parallel Sparse Matrix-Vector and Transposed-Matrix-Vector Multiplication in DD Precision." In High Performance Computing for Computational Science – VECPAR 2016. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61982-8_4.

5. Katagiri, Takahiro, Takao Sakurai, Mitsuyoshi Igai, et al. "Control Formats for Unsymmetric and Symmetric Sparse Matrix–Vector Multiplications on OpenMP Implementations." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38718-0_24.

6. Çatalyürek, Ümit V., and Cevdet Aykanat. "Decomposing irregularly sparse matrices for parallel matrix-vector multiplication." In Parallel Algorithms for Irregularly Structured Problems. Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/bfb0030098.

7. Wellein, Gerhard, Georg Hager, Achim Basermann, and Holger Fehske. "Fast Sparse Matrix-Vector Multiplication for TeraFlop/s Computers." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36569-9_18.

8. Monakov, Alexander, Anton Lokhmotov, and Arutyun Avetisyan. "Automatically Tuning Sparse Matrix-Vector Multiplication for GPU Architectures." In High Performance Embedded Architectures and Compilers. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11515-8_10.

9. AlAhmadi, Sarah, Thaha Muhammed, Rashid Mehmood, and Aiiad Albeshri. "Performance Characteristics for Sparse Matrix-Vector Multiplication on GPUs." In Smart Infrastructure and Applications. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-13705-2_17.

10. Monakov, Alexander, and Arutyun Avetisyan. "Implementing Blocked Sparse Matrix-Vector Multiplication on NVIDIA GPUs." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-03138-0_32.


Conference papers on the topic "Sparse Matrix Vector Multiplications"

1. Ichimura, Shuntaro, Takahiro Katagiri, Katsuhisa Ozaki, Takeshi Ogita, and Toru Nagai. "Threaded Accurate Matrix-Matrix Multiplications with Sparse Matrix-Vector Multiplications." In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE, 2018. http://dx.doi.org/10.1109/ipdpsw.2018.00168.

2. Keklikian, Thalie, J. M. Pierre Langlois, and Yvon Savaria. "A memory transaction model for Sparse Matrix-Vector multiplications on GPUs." In 2014 IEEE 12th International New Circuits and Systems Conference (NEWCAS). IEEE, 2014. http://dx.doi.org/10.1109/newcas.2014.6934044.

3. Buluç, Aydin, Jeremy T. Fineman, Matteo Frigo, John R. Gilbert, and Charles E. Leiserson. "Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks." In Proceedings of the Twenty-First Annual ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '09). ACM Press, 2009. http://dx.doi.org/10.1145/1583991.1584053.

4. Shah, Monika. "Sparse Matrix Sparse Vector Multiplication - A Novel Approach." In 2015 44th International Conference on Parallel Processing Workshops (ICPPW). IEEE, 2015. http://dx.doi.org/10.1109/icppw.2015.18.

5. Haque, Sardar Anisul, Shahadat Hossain, and Marc Moreno Maza. "Cache friendly sparse matrix-vector multiplication." In Proceedings of the 4th International Workshop on Parallel and Symbolic Computation (PASCO '10). ACM Press, 2010. http://dx.doi.org/10.1145/1837210.1837238.

6. Zhuo, Ling, and Viktor K. Prasanna. "Sparse Matrix-Vector multiplication on FPGAs." In Proceedings of the 2005 ACM/SIGDA 13th International Symposium on Field-Programmable Gate Arrays (FPGA '05). ACM Press, 2005. http://dx.doi.org/10.1145/1046192.1046202.

7. Jamroz, Ben, and Paul Mullowney. "Performance of Parallel Sparse Matrix-Vector Multiplications in Linear Solves on Multiple GPUs." In 2012 Symposium on Application Accelerators in High Performance Computing (SAAHPC). IEEE, 2012. http://dx.doi.org/10.1109/saahpc.2012.27.

8. Wang, Zhuowei, Xianbin Xu, Wuqing Zhao, Yuping Zhang, and Shuibing He. "Optimizing sparse matrix-vector multiplication on CUDA." In 2010 2nd International Conference on Education Technology and Computer (ICETC 2010). IEEE, 2010. http://dx.doi.org/10.1109/icetc.2010.5529724.

9. Sun, Junqing, Gregory Peterson, and Olaf Storaasli. "Sparse Matrix-Vector Multiplication Design on FPGAs." In 15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007). IEEE, 2007. http://dx.doi.org/10.1109/fccm.2007.56.

10. Pinar, Ali, and Michael T. Heath. "Improving performance of sparse matrix-vector multiplication." In Proceedings of the 1999 ACM/IEEE Conference on Supercomputing (SC '99). ACM Press, 1999. http://dx.doi.org/10.1145/331532.331562.


Reports on the topic "Sparse Matrix Vector Multiplications"

1. Vuduc, R., and H. Moon. Fast sparse matrix-vector multiplication by exploiting variable block structure. Office of Scientific and Technical Information (OSTI), 2005. http://dx.doi.org/10.2172/891708.

2. Hammond, Simon David, and Christian Robert Trott. Optimizing the Performance of Sparse-Matrix Vector Products on Next-Generation Processors. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1528773.

3. Ruiz, Pablo, Craig Perry, Alejando Garcia, et al. The Everglades National Park and Big Cypress National Preserve vegetation mapping project: Interim report—Northwest Coastal Everglades (Region 4), Everglades National Park (revised with costs). National Park Service, 2020. http://dx.doi.org/10.36967/nrr-2279586.

Abstract:
The Everglades National Park and Big Cypress National Preserve vegetation mapping project is part of the Comprehensive Everglades Restoration Plan (CERP). It is a cooperative effort between the South Florida Water Management District (SFWMD), the United States Army Corps of Engineers (USACE), and the National Park Service’s (NPS) Vegetation Mapping Inventory Program (VMI). The goal of this project is to produce a spatially and thematically accurate vegetation map of Everglades National Park and Big Cypress National Preserve prior to the completion of restoration efforts associated with CERP. This spatial product will serve as a record of baseline vegetation conditions for the purpose of: (1) documenting changes to the spatial extent, pattern, and proportion of plant communities within these two federally-managed units as they respond to hydrologic modifications resulting from the implementation of the CERP; and (2) providing vegetation and land-cover information to NPS park managers and scientists for use in park management, resource management, research, and monitoring. This mapping project covers an area of approximately 7,400 square kilometers (1.84 million acres [ac]) and consists of seven mapping regions: four regions in Everglades National Park, Regions 1–4, and three in Big Cypress National Preserve, Regions 5–7. The report focuses on the mapping effort associated with the Northwest Coastal Everglades (NWCE), Region 4 , in Everglades National Park. The NWCE encompasses a total area of 1,278 square kilometers (493.7 square miles [sq mi], or 315,955 ac) and is geographically located to the south of Big Cypress National Preserve, west of Shark River Slough (Region 1), and north of the Southwest Coastal Everglades (Region 3). Photo-interpretation was performed by superimposing a 50 × 50-meter (164 × 164-feet [ft] or 0.25 hectare [0.61 ac]) grid cell vector matrix over stereoscopic, 30 centimeters (11.8 inches) spatial resolution, color-infrared aerial imagery on a digital photogrammetric workstation. Photo-interpreters identified the dominant community in each cell by applying majority-rule algorithms, recognizing community-specific spectral signatures, and referencing an extensive ground-truth database. The dominant vegetation community within each grid cell was classified using a hierarchical classification system developed specifically for this project. Additionally, photo-interpreters categorized the absolute cover of cattail (Typha sp.) and any invasive species detected as either: Sparse (10–49%), Dominant (50–89%), or Monotypic (90–100%). A total of 178 thematic classes were used to map the NWCE. The most common vegetation classes are Mixed Mangrove Forest-Mixed and Transitional Bayhead Shrubland. These two communities accounted for about 10%, each, of the mapping area. Other notable classes include Short Sawgrass Marsh-Dense (8.1% of the map area), Mixed Graminoid Freshwater Marsh (4.7% of the map area), and Black Mangrove Forest (4.5% of the map area). The NWCE vegetation map has a thematic class accuracy of 88.4% with a lower 90th Percentile Confidence Interval of 84.5%.