Academic literature on the topic 'And Web Scrapper'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'And Web Scrapper.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "And Web Scrapper"

1

Muthee, Mutwiri George, Mutua Makau, and Omamo Amos. "SwaRegex: a lexical transducer for the morphological segmentation of swahili verbs." African Journal of Science, Technology and Social Sciences 1, no. 2 (2022): 77–84. http://dx.doi.org/10.58506/ajstss.v1i2.119.

Full text
Abstract:
The morphological syntax of the Swahili verb comprises 10 slots. In this work, we present SwaRegex, a novel rule-based model for the morphological segmentation of Swahili verbs. This model is designed as a lexical transducer, which accepts a verb as an input string and outputs the morphological slot occupied by each morpheme in the input string. SwaRegex is based on regular expressions developed using the C# programming language. To test the model, we designed a web scraper that obtained verbs from an online Swahili dictionary. The scraper separated the corpus into two datasets: dataset A, comprising 163 verbs of Bantu origin; and dataset B, containing the entire set of 715 non-Arabic verb entries obtained by the web scraper. The performance of the model was tested against a similar model designed using the Xerox Finite State Tools (XFST); the regular expressions used in both models were the same. SwaRegex outperformed the XFST model on both datasets, achieving 98.77% accuracy on dataset A, better than the XFST model by 41.1%, and 68.67% accuracy on dataset B, better than the XFST model by 38.46%. This work benefits prospective learners of Swahili by helping them understand the syntax of Swahili verbs, and serves as an integral teaching aid for the language. Search engines can leverage the lexical transducer's finite state network when lemmatizing search terms. This work also opens opportunities for further research on Swahili.
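
The paper's actual slot inventory and rules are not reproduced in the abstract; the sketch below only illustrates the general idea of a regex-based lexical transducer, using a hypothetical, drastically simplified slot set. The morpheme lists and the example verb are assumptions, not SwaRegex's grammar, and Python stands in for the paper's C#.

```python
import re

# Illustrative only: a drastically simplified slot grammar. The real SwaRegex
# model covers 10 morphological slots; the markers below are placeholders.
VERB_SLOTS = re.compile(
    r"^(?P<subject>ni|u|a|tu|m|wa)"  # subject marker
    r"(?P<tense>na|li|ta|me)"        # tense/aspect marker
    r"(?P<root>[a-z]+?)"             # verb root (lazy: leave the final vowel)
    r"(?P<final>a)$"                 # final vowel
)

def segment(verb):
    """Return the slot occupied by each morpheme, or None if no parse."""
    m = VERB_SLOTS.match(verb)
    return m.groupdict() if m else None

print(segment("ninasoma"))  # {'subject': 'ni', 'tense': 'na', 'root': 'som', 'final': 'a'}
```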
2

Onoma, Paul Avweresuo, Joy Agboi, Victor Ochuko Geteloma, et al. "Investigating an Anomaly-based Intrusion Detection via Tree-based Adaptive Boosting Ensemble." Journal of Fuzzy Systems and Control 3, no. 1 (2025): 90–97. https://doi.org/10.59247/jfsc.v3i1.279.

Full text
Abstract:
The accessibility, mobility, and portability of smartphones have driven a rise in users' vulnerability to a variety of phishing attacks, and some users are more vulnerable than others owing to personality and behavioral traits, media presence, and other factors. Our study seeks to reveal the cues exploited by successful attacks by classifying web content as genuine or malicious. We explore a sentiment-based extreme gradient boost learner with data collected over social platforms, scraped using the Python Google Scrapper. Our results show AdaBoost yields a prediction accuracy of 0.9989, correctly classifying 2,148 cases and misclassifying 25. The tree-based AdaBoost ensemble can thus effectively identify phishing cues and efficiently classify phishing lures, shielding unsuspecting users from malicious content.
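
The paper's code is not given; a minimal sketch of a tree-based AdaBoost text classifier in this spirit, with a toy corpus and labels (all assumptions), could look like this in scikit-learn (the estimator keyword assumes scikit-learn 1.2 or later).

```python
# Sketch: TF-IDF features feeding a tree-based AdaBoost ensemble (scikit-learn).
# The corpus, labels, and query are placeholders, not the paper's data.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

texts = ["win a free prize now", "claim your reward here",
         "quarterly board meeting agenda", "lecture notes for week three"]
labels = [1, 1, 0, 0]  # 1 = phishing lure, 0 = genuine content

model = make_pipeline(
    TfidfVectorizer(),
    AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                       n_estimators=200),
)
model.fit(texts, labels)
print(model.predict(["free reward waiting, claim now"]))  # likely [1]
```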
3

Okpor, Margaret Dumebi, Fidelis Obukohwo Aghware, Maureen Ifeanyi Akazue, et al. "Pilot Study on Enhanced Detection of Cues over Malicious Sites Using Data Balancing on the Random Forest Ensemble." Journal of Future Artificial Intelligence and Technologies 1, no. 2 (2024): 109–23. http://dx.doi.org/10.62411/faith.2024-14.

Full text
Abstract:
The digital revolution has rippled across society, with all manner of web content shared online to promote monetization and asset exchange, and with clients constantly seeking better alternatives at lower cost. From item upgrades to replacements, businesses deploy retention strategies to curb customer attrition. Smartphones have brought mobility, ease of accessibility, and portability, which have eased their rise in adoption while leaving user devices quite susceptible to phishing. With some users more susceptible than others due to online presence and personality traits, studies have sought to reveal the lures and cues exploited by adversaries and to classify web content as genuine or malicious. Our study explores the tree-based Random Forest to identify phishing cues via sentiment analysis on phishing-website datasets scraped from user accounts on social network sites. The dataset is scraped via the Python Google Scrapper and divided into train/test subsets, with data balancing and feature selection techniques applied to classify content as genuine or malicious. With Random Forest as the machine learning method of choice, the ensemble yields a prediction accuracy of 97 percent with an F1-score of 98.19%, correctly classifying 2,089 instances and misclassifying 85 on the test dataset.
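
The abstract names data balancing and feature selection without fixing the exact techniques; the sketch below is one plausible arrangement in scikit-learn (class weighting for balance, chi-squared selection), with train_texts and train_labels as hypothetical variables.

```python
# Sketch: TF-IDF features, chi-squared feature selection, and a class-weighted
# Random Forest. The balancing/selection choices are assumptions, not the paper's.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline

pipeline = make_pipeline(
    TfidfVectorizer(),
    SelectKBest(chi2, k=500),  # keep the 500 most class-informative terms
    RandomForestClassifier(n_estimators=300, class_weight="balanced"),
)
# With scraped, labeled text (hypothetical names):
# pipeline.fit(train_texts, train_labels)
# predictions = pipeline.predict(test_texts)
```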
4

Ramadandi, Rizki, Novi Yusliani, Osvari Arsalan, Rizki Kurniati, and Rahmat Fadli Isnanto. "Pemodelan Topik Menggunakan Metode Latent Dirichlet Allocation dan Gibbs Sampling." Generic 14, no. 2 (2022): 74–79. http://dx.doi.org/10.18495/generic.v14i2.142.

Full text
Abstract:
Topic modeling is a tool used to discover latent topics in a collection of documents. In this study, topic modeling was carried out using the Latent Dirichlet Allocation method with Gibbs sampling. Six Indonesian-language news articles were collected from the detiknews news portal using a web scraper. The articles fall into two main categories: narcotics and COVID-19. The LDA model was evaluated using the UCI topic-coherence score, and the study found five optimal topics in both test configurations.
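
As a rough illustration of the LDA-plus-coherence workflow, here is a sketch using gensim with toy token lists (assumptions throughout; note that gensim's LdaModel uses variational inference, whereas the paper implements Gibbs sampling).

```python
# Sketch: fit LDA on tokenized documents and score it with UCI coherence.
# The token lists are toy placeholders, not the detiknews corpus.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

docs = [["polisi", "narkoba", "tersangka", "narkoba"],
        ["vaksin", "covid", "pasien", "covid"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0)
coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                           coherence="c_uci")
print(coherence.get_coherence())
```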
5

Gultom, Edra Arkananta, Nurafni Eltivia, and Nur Indah Riwajanti. "Shares Price Forecasting Using Simple Moving Average Method and Web Scrapping." Journal of Applied Business, Taxation and Economics Research 2, no. 3 (2023): 288–97. http://dx.doi.org/10.54408/jabter.v2i3.164.

Full text
Abstract:
The fluctuation of share prices in a secondary market allows investors/traders to gain profits through the difference in share prices (capital gain). In order to obtain these benefits, it is necessary to analyze shares before buying them, through fundamental and technical analysis. One such technical-analysis method is the Simple Moving Average, which can predict (forecast) share prices by calculating the moving average of the share price history. Historical share prices can be obtained in real time using web scraping, so results are obtained more quickly and accurately. Using the MAPE (Mean Absolute Percent Error) method, the level of forecasting accuracy can be calculated. As a result, the program ran successfully and displayed the forecast values and accuracy levels for all data tested in LQ45. Forecasting with N = 5 had the highest accuracy, reaching 97.6%, while the lowest used N = 30, at 95.0%.
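
The method comes down to two formulas: the forecast for a day is the mean of the previous N closing prices, and MAPE averages |actual - forecast| / actual. A worked sketch with placeholder prices:

```python
# Sketch: N-day simple moving average as the forecast, scored with MAPE.
# The price series is a placeholder, not LQ45 data.
import pandas as pd

prices = pd.Series([100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 108.0])
N = 5
forecast = prices.rolling(window=N).mean().shift(1)  # SMA of the previous N days

mape = ((prices - forecast).abs() / prices).dropna().mean() * 100
print(f"MAPE: {mape:.2f}%  accuracy: {100 - mape:.2f}%")
```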
6

Anggraeni, Dessy Tri. "FORECASTING HARGA SAHAM MENGGUNAKAN METODE SIMPLE MOVING AVERAGE DAN WEB SCRAPPING." Jurnal Ilmiah Matrik 21, no. 3 (2019): 234–41. http://dx.doi.org/10.33557/jurnalmatrik.v21i3.726.

Full text
Abstract:
The fluctuation of stock prices in a secondary market gives investors/traders the possibility of gaining profits through the difference in stock prices (capital gain). In order to obtain these benefits, it is necessary to analyze shares before buying them, through fundamental and technical analysis. One such technical-analysis method is the Simple Moving Average, which can be used to predict (forecast) stock prices by calculating the moving average of the stock price history. Historical stock prices can be obtained in real time using web scraping, so results are obtained more quickly and accurately. Using the MAPE (Mean Absolute Percent Error) method, the level of forecasting accuracy can be calculated. As a result, the program ran successfully and displayed the forecast values and accuracy levels for all data tested in LQ45. Forecasting with N = 5 had the highest accuracy, reaching 97.6%, while the lowest used N = 30, at 95.0%.
7

Anggraeni, Dessy Tri. "Peramalan Harga Saham Menggunakan Metode Autoregressive Dan Web Scrapping Pada Indeks Saham Lq45 Dengan Python." Rabit : Jurnal Teknologi dan Sistem Informasi Univrab 5, no. 2 (2020): 137–44. http://dx.doi.org/10.36341/rabit.v5i2.1401.

Full text
Abstract:
The stock exchange offers investors the possibility of earning profits (capital gains) or suffering losses (capital losses) because share prices fluctuate. This uncertainty can be managed by applying forecasting methods to predict future share prices. One applicable forecasting method is the autoregressive model, which uses past stock data to derive a formula for predicting future values. Historical share prices can be viewed in real time on several stock-data websites, and these data can be pulled automatically using web scraping, so forecasts are obtained more quickly, easily, and accurately. Forecast accuracy was measured using MAPE (Mean Absolute Percent Error), chosen because it is easy for lay users to understand. The resulting forecasting application displays predicted share prices together with their accuracy. All LQ45 stock data were tested; the average accuracy obtained was 94.62%, with the highest accuracy for issuer BKSL (99.92%) and the lowest for issuer ASRI (90.13%).
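
A minimal sketch of the autoregressive forecast scored by MAPE, using statsmodels as an assumed tool and placeholder prices:

```python
# Sketch: fit an AR model on a price history, forecast the held-out tail,
# and score with MAPE. The series is a placeholder, not LQ45 data.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

prices = np.array([100.0, 102.0, 101.0, 105.0, 107.0, 106.0, 108.0, 110.0])
train, test = prices[:-2], prices[-2:]

model = AutoReg(train, lags=2).fit()
pred = model.predict(start=len(train), end=len(prices) - 1)  # out-of-sample

mape = np.mean(np.abs((test - pred) / test)) * 100
print(f"MAPE: {mape:.2f}%  accuracy: {100 - mape:.2f}%")
```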
8

Prestianta, Albertus Magnus. "Mapping the ASEAN YouTube Uploaders." Jurnal ASPIKOM 6, no. 1 (2021): 1. http://dx.doi.org/10.24329/aspikom.v6i1.761.

Full text
Abstract:
YouTube can now be categorized as mainstream media. It can be seen as a disruptive force in business and society, particularly concerning young people. Several recent studies about YouTube provide essential insights on YouTube videos, viewers, social behavior, video traffic, and recommendation systems. However, little research has examined YouTube uploaders, especially uploaders from ASEAN countries. Using a combination of web content mining and content analysis, this paper reviews 600 YouTube uploaders using data on the top 100 favorite YouTube uploaders in six ASEAN countries (Indonesia, Singapore, Malaysia, Thailand, Vietnam, and the Philippines), retrieved from NoxInfluencer. The study aims to provide a wider picture of the characteristics of YouTube uploaders in the six countries. It also provides useful information about how to retrieve web documents automatically using Google Web Scrapper. The study found that the entertainment category dominated the top 100 positions of the NoxInfluencer ranking; in almost every country analyzed, channels related to news and politics are less attractive to YouTube users. For uploaders, YouTube can be a potential revenue source through advertising or collaboration with specific brands. Through the analysis, we discovered that engagement, in the form of likes, dislikes, and comments, is the critical factor in generating income.
9

Divyam, Pithawa, Nahar Sarthak, Sharma Shivam, and Nikhil Chaturvedi Er. "Data Set of AI Jobs." Advancement of Computer Technology and its Applications 5, no. 3 (2022): 1–7. https://doi.org/10.5281/zenodo.7330062.

Full text
Abstract:
The automated, targeted extraction of information from websites is known as web scraping; the similar technology used by search engines is known as web crawling. Although manual data collection is possible, automation is frequently faster, more efficient, and less prone to mistakes. Online job portals collect a substantial amount of data in the form of resumes and job openings, which can be a useful source of knowledge on the features of market demand. Web scraping can be broken into three steps: the web scraper finds the needed links on the internet; the data is then scraped from the source links; and finally, the data is written to a CSV file. The scraping is done in Python. As part of the job series of datasets, this dataset can be helpful for finding a job as an AI engineer!
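
Those three steps map onto a short requests/BeautifulSoup script; the portal URL and CSS selectors below are hypothetical stand-ins, not the dataset's actual source.

```python
# Sketch of the three steps: collect links, scrape each page, write a CSV.
# example.com and the selectors are placeholders; absolute links are assumed.
import csv

import requests
from bs4 import BeautifulSoup

BASE = "https://example.com/ai-jobs"

listing = BeautifulSoup(requests.get(BASE, timeout=10).text, "html.parser")
links = [a["href"] for a in listing.select("a.job-link")]      # step 1

rows = []
for url in links:                                              # step 2
    page = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    rows.append({"title": page.select_one("h1").get_text(strip=True), "url": url})

with open("ai_jobs.csv", "w", newline="") as f:                # step 3
    writer = csv.DictWriter(f, fieldnames=["title", "url"])
    writer.writeheader()
    writer.writerows(rows)
```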
10

Wijaya, Arie, and Prihandoko. "ANALISIS SENTIMEN REVIEW PENGGUNA APLIKASI DEPOK SINGLE WINDOW DI GOOGLE PLAY MENGGUNAKAN ALGORITMA SUPPORT VECTOR MACHINE." Jurnal Ilmiah Informatika Komputer 28, no. 1 (2023): 77–87. http://dx.doi.org/10.35760/ik.2023.v28i1.7902.

Full text
Abstract:
Technology is developing rapidly, including in government. District governments build web- or mobile-based applications to help residents obtain the services they are entitled to; the Depok government created a mobile-based public service application called Depok Single Window (DSW). Because user reviews matter for the continuity of the DSW application, this study analyzes the sentiment of its reviews on the Google Play Store using a Support Vector Machine. The data were 733 reviews obtained by scraping with the Python library google-play-scraper. The research attains an accuracy of 89.23% for the sentiment analysis of the Depok Single Window application, which means the Support Vector Machine is well suited to classifying the review data as positive, negative, or neutral.
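
The scraping step with that library is compact; in the sketch below the package id is a hypothetical placeholder, and the classifier line stays commented because the sentiment labels would come from manual annotation.

```python
# Sketch: pull recent Play Store reviews, then (given labels) train an SVM.
# The app id "com.example.dsw" is a placeholder, not the real DSW package name.
from google_play_scraper import Sort, reviews
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

result, _ = reviews("com.example.dsw", lang="id", country="id",
                    sort=Sort.NEWEST, count=733)
texts = [r["content"] for r in result]

# With sentiment labels (positive/negative/neutral) from manual annotation:
# model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear")).fit(texts, labels)
```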

Dissertations / Theses on the topic "And Web Scrapper"

1

Andersson, Pontus. "Developing a Python based web scraper : A study on the development of a web scraper for TimeEdit." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-43140.

Full text
Abstract:
The concept of scraping the web is not new; however, with modern programming languages it is possible to build web scrapers that can collect unstructured data and save it in a structured way. TimeEdit, a scheduling platform used by Mid Sweden University, has no feasible way to count how many hours have been scheduled in any given week for a specific course, student, or professor. The goal of this thesis is to build a Python-based web scraper that collects data from TimeEdit and saves it in a structured manner. Users can then upload this text file to a dynamic website, where it is extracted and saved into a predetermined database unique to that user. The user can then have this data presented in a fast, efficient, and user-friendly way. The platform is developed and evaluated, and the result is a good, fast way to scan a TimeEdit schedule and evaluate the extracted data. With the platform built, future work is recommended to make it a finished product ready for live use by all types of users.
2

Lloyd, Oskar, and Christoffer Nilsson. "How to Build a Web Scraper for Social Media." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20594.

Full text
Abstract:
In recent years, the act of scraping websites for information has become increasingly relevant. However, along with this increase in interest, the internet has also grown substantially and advances and improvements to websites over the years have in fact made it more difficult to scrape. One key reason for this is that scrapers simply account for a significant portion of the traffic to many websites, and so developers often implement anti-scraping measures along with the Robots Exclusion Protocol (robots.txt) to try to stymie this traffic. The popular use of dynamically loaded content – content which loads after user interaction – poses another problem for scrapers. In this paper, we have researched what kinds of issues commonly occur when scraping and crawling websites – more specifically when scraping social media – and how to solve them. In order to understand these issues better and to test solutions, a literature review was performed and design and creation methods were used to develop a prototype scraper using the frameworks Scrapy and Selenium. We found that automating interaction with dynamic elements worked best to solve the problem of dynamically loaded content. We also theorize that having an artificial random delay when scraping and randomizing intervals between each visit to a website would counteract some of the anti-scraping measures. Another, smaller aspect of our research was the legality and ethicality of scraping. Further thoughts and comments on potential solutions to other issues have also been included.
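
Two of the countermeasures discussed, honoring robots.txt and adding an artificial random delay between visits, can be sketched in a few lines of Python (the target site is a placeholder). In Scrapy itself, the ROBOTSTXT_OBEY, DOWNLOAD_DELAY, and RANDOMIZE_DOWNLOAD_DELAY settings cover the same ground.

```python
# Sketch: respect the Robots Exclusion Protocol and randomize request timing.
# example.com is a placeholder target, not a site from the thesis.
import random
import time
from urllib import robotparser

import requests

rp = robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()

def polite_get(url, agent="*"):
    """Fetch a page only if robots.txt allows it, after a random delay."""
    if not rp.can_fetch(agent, url):
        return None
    time.sleep(random.uniform(2.0, 6.0))  # artificial random delay between visits
    return requests.get(url, timeout=10)
```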
3

Palma, Michael, and Shidi Zhou. "A Web Scraper For Forums : Navigation and text extraction methods." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-219903.

Full text
Abstract:
Web forums are a popular way of exchanging information and discussing various topics. These websites usually have a special structure, divided into boards, threads, and posts. Although the structure might be consistent across forums, the layout of each forum is different, and the way a web forum presents user posts is very different from how a news website presents a single piece of information. All of this makes navigation and text extraction a hard task for web scrapers. The focus of this thesis is the development of a web scraper specialized in forums. Three different methods for text extraction are implemented and tested before choosing the most appropriate method for the task: Word Count, the Text-Detection Framework, and the Text-to-Tag Ratio. The handling of duplicate links is also considered and solved by implementing a multi-layer bloom filter. The thesis applies a qualitative methodology. The results indicate that the Text-to-Tag Ratio has the best overall performance and gives the most desirable result on web forums; thus, it was the method selected for the final version of the web scraper.
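
A simplified version of the Text-to-Tag Ratio heuristic: for each line of HTML, divide the length of the visible text by the number of tags, and treat high-ratio lines as likely post content. The thesis's exact formulation may differ; this is only an illustrative sketch.

```python
# Simplified Text-to-Tag Ratio: visible-text length divided by tag count.
# High-ratio lines are treated as likely content, low-ratio lines as markup.
import re

def text_to_tag_ratio(html_line):
    tags = re.findall(r"<[^>]+>", html_line)
    text = re.sub(r"<[^>]+>", "", html_line).strip()
    return len(text) / max(len(tags), 1)

for line in ['<div><ul><li><a href="#">Home</a></li></ul></div>',
             "<p>This long paragraph is almost certainly a forum post body.</p>"]:
    print(round(text_to_tag_ratio(line), 1), line[:45])
```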
4

Wheeler, Ryan. "BlindCanSeeQL: Improved Blind SQL Injection For DB Schema Discovery Using A Predictive Dictionary From Web Scraped Word Based Lists." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/6050.

Full text
Abstract:
SQL injections are still a prominent threat on the web. Using a custom-built tool, BlindCanSeeQL (BCSQL), we explore how to automate blind SQL attacks to discover database schema using fewer requests than standard methods, thus helping to avoid the detection triggered by overloading a server with hits. This tool uses a web crawler to discover keywords that assist with autocompleting schema object names, along with improvements in ASCII bisection to lower the number of requests sent to the server. Along with this tool, we discuss ways to prevent and protect against such attacks.
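
BCSQL's predictive dictionary sits on top of a standard ASCII-bisection routine; the routine itself is just a binary search against a boolean oracle, sketched below with a local stand-in oracle rather than any network request or SQL payload.

```python
# The bisection idea in isolation: binary search over ASCII codes recovers a
# character in at most 7 boolean queries instead of up to 127 linear probes.
SECRET = "users"  # hypothetical schema name to recover

def oracle(position, threshold):
    """Stand-in for one blind boolean query: is the charcode > threshold?"""
    return ord(SECRET[position]) > threshold

def recover_char(position):
    lo, hi = 0, 127
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(position, mid):
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)

print("".join(recover_char(i) for i in range(len(SECRET))))  # -> users
```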
5

Fiordarancio, Matteo. "Monitorare attacchi al Brand." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Today, intellectual property and the image conveyed by a brand are among the most important assets a company has. Among the various attack vectors, one of the most important is the internet itself: it is where the most common phishing schemes against users take place and where malware is distributed in disguise. It is precisely in these cases that companies prove most vulnerable: for inexperienced users, it becomes difficult to distinguish the boundary between what the brand does and what these criminals do. It therefore becomes the company's own responsibility to protect itself from such attacks, to prevent indirect damage to its reputation. Starting from this observation, in collaboration with Bending Spoons, the company I worked with, I created a tool capable of monitoring the web for brand attacks and intellectual property violations. The need arose from direct experience: during Bending Spoons' development of Immuni (the app chosen by the Italian government to help fight the Coronavirus), several domains and Instagram profiles were registered with names similar to the company's. They led to different sites with inaccurate information and had to be checked manually to have them removed. For these reasons, the tool aims to automatically report suspicious results and to monitor them, sending a notification when anything changes.
6

Wara, Ummul. "A Framework for Fashion Data Gathering, Hierarchical-Annotation and Analysis for Social Media and Online Shop : TOOLKIT FOR DETAILED STYLE ANNOTATIONS FOR ENHANCED FASHION RECOMMENDATION." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-234285.

Full text
Abstract:
Due to the transformation of recommendation systems from content-based to hybrid cross-domain-based, there is a need for a social-network dataset that provides sufficient data as well as detail-level annotation from a predefined hierarchical clothing category and attribute-based vocabulary, taking user interactions into account. However, existing fashion datasets lack either a hierarchical category-based representation or the user interactions of a social network. This thesis presents two datasets: one from the photo-sharing platform Instagram, which gathers fashionistas' images with all possible user interactions, and another from the online shop Zalando with every clothing detail. We present the design of a customized crawler that enables the user to crawl data based on category or attributes. Moreover, an efficient and collaborative web solution is designed and implemented to facilitate large-scale, hierarchical, category-based, detail-level annotation of Instagram data. By considering all user interactions, the developed solution provides a detail-level annotation facility that reflects the user's preference. The web solution is evaluated by the team as well as through the Amazon Mechanical Turk service, and the annotated output from different users proves its usability in terms of availability and clarity. In addition to data crawling and the development of the annotation web solution, this project analyzes the Instagram and Zalando data distributions in terms of clothing category, subcategory, and pattern to provide meaningful insight into the data. The research community will benefit from these datasets when working on a richly annotated dataset that represents a social network and contains detailed clothing information, specifically in domains that require detailed clothing data together with user interactions.
7

Dias, João Tiago Pereira. "Prometheus: a generic e-commerce crawler for the study of business markets and other e-commerce problems." Master's thesis, 2019. http://hdl.handle.net/1822/66581.

Full text
Abstract:
Master's dissertation in Computer Science. Continuous social and economic development has led over time to an increase in consumption, as well as greater demand from consumers for better and cheaper products. Hence, the selling price of a product plays a fundamental role in the consumer's purchase decision. In this context, online stores must carefully analyse and define the best price for each product, based on several factors such as production/acquisition cost, positioning of the product (e.g. anchor product), and the strategies of competing companies. The work done by market analysts has changed drastically over the last years: as the number of web sites increases exponentially, the number of e-commerce web sites has also prospered, and web page classification becomes ever more important in fields like web mining and information retrieval. Traditional classifiers are usually hand-crafted and non-adaptive, which makes them inappropriate for use in a broader context. We introduce an ensemble of methods, and a posterior study of its results, to create a more generic and modular crawler and scraper for detection and information extraction on e-commerce web pages. The collected information may then be processed and used in pricing decisions. This framework goes by the name Prometheus and has the goal of extracting knowledge from e-commerce web sites. The process requires crawling an online store and gathering product pages, which implies that, given a web page, the framework must be able to determine whether it is a product page. To achieve this we classify pages into three categories: catalogue, product, and "spam". The page-classification stage was addressed based on the HTML text as well as the visual layout, featuring both traditional methods and Deep Learning approaches. Once a set of product pages has been identified, we proceed to the extraction of pricing information. This is not a trivial task due to the disparity of approaches used to create web pages. Furthermore, most product pages are dynamic, in the sense that they are truly a page for a family of related products: when visiting a shoe store, a particular model probably comes in a number of sizes and colours, and such a model may be displayed in a single dynamic web page, making it necessary for our framework to explore all the relevant combinations. This process is called scraping and is the last stage of the Prometheus framework.
8

Hsing, Shih Chia, and 石佳興. "The Factors Influencing the Intention of Property Management Staff to Use Scrapped Property Auction Systems – An Example of the Kaohsiung City Government Nostalgia Auction Web System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/31520620849250977580.

Full text
Abstract:
Master's thesis, National Chung Cheng University, Institute of Accounting and Information Technology. In order to effectively manage public property and promote resource recycling, the Kaohsiung City Government implemented the "Kaohsiung City Government Nostalgia Auction Web System" to provide a web-based platform for the mutual exchange and auction of scrapped but still valuable public property. The introduction of this system promoted the e-government concept; however, it also changed the work style and procedures of property management staff, so supporting measures may be needed to encourage users to adopt it. Based on the Extended Technology Acceptance Model (TAM2) proposed by Venkatesh & Davis (2000) and incorporating concepts from the Decomposed Theory of Planned Behavior introduced by Taylor & Todd (1995), this study investigated the factors influencing the intention of Kaohsiung City property management staff to use the system. Questionnaires were distributed to the property management staff of the Kaohsiung City Government, and 171 were returned. Statistical analyses of the collected data supported the following research hypotheses: 1. Social norm, output quality, and result demonstrability have positive impacts on the perceived usefulness of the system. 2. Computer self-efficacy and government promotion policies have positive impacts on perceived behavioral control. 3. Perceived usefulness, perceived ease of use, and perceived behavioral control have positive impacts on behavioral intention.

Books on the topic "And Web Scrapper"

1

Kidder, Jonathan. Web Scraping Basics for Recruiters: Learn How to Extract and Scrape Data from the Web. Independently Published, 2022.

Find full text
2

Schrenk, Michael. Webbots, Spiders, and Screen Scrapers: A Guide to Developing Internet Agents with PHP/CURL. No Starch Press, 2007.

Find full text
3

Smith, Vincent. Go Web Scraping Quick Start Guide: Implement the Power of Go to Scrape and Crawl Data from the Web. Packt Publishing, Limited, 2019.

Find full text
4

Patel, Jay M. Getting Structured Data from the Internet: Running Web Crawlers/Scrapers on a Big Data Production Scale. Apress L. P., 2020.

Find full text
5

Web Scraping with Python: Successfully scrape data from any website with the power of Python. Packt Publishing, 2015.

Find full text
6

R Web Scraping Quick Start Guide: Techniques and Tools to Crawl and Scrape Data from Websites. Packt Publishing, Limited, 2018.

Find full text
7

Kvanvig, Jonathan L. Lessons from Gettier. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198724551.003.0009.

Full text
Abstract:
This chapter argues that the literature surrounding the Gettier Problem arises from a kind of methodological false consciousness in the epistemology of the middle part of the twentieth century. The underlying methodology is contrasted with two paradigms within the history of epistemology: one prompted by the conversational context of scrapes with the skeptic and the other on the scientific project of trying to understand the universe and our place in it. These competing paradigms call for two quite different epistemological projects and we can separate the two projects in a way that sees them as complementary, unlike the picture that emerges from within the presuppositions of the Gettier literature. The resulting picture does not make the Gettier Problem go away, but implies a weaker claim, that it should not now be and never should have been a primary focus of epistemology.

Book chapters on the topic "And Web Scrapper"

1

Raj, Jyoti, Amirul Hoque, and Ashim Saha. "Integrated Micro-Video Recommender Based on Hadoop and Web-Scrapper." In Machine Learning and Big Data Analytics (Proceedings of International Conference on Machine Learning and Big Data Analytics (ICMLBDA) 2021). Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82469-3_12.

Full text
2

Kumbhkar, Makhan, Shraddha Masih, and Savita Kolhe. "Design and Implementation of Web Scrapper for Fact-Checking Website." In Advances in Intelligent Systems Research. Atlantis Press International BV, 2025. https://doi.org/10.2991/978-94-6463-716-8_4.

Full text
3

Pujari, Niharika, Abhishek Ray, and Jagannath Singh. "Slicing Based on Web Scrapped Concurrent Component." In Advances in Intelligent Systems and Computing. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5400-1_29.

Full text
4

Hirapar, Prince, Raj Davande, Mittal Desai, Bhargav Vyas, and Dip Patel. "Intelligent Classification of Documents Based on Critique Points from Relevant Web Scrapped Content." In IOT with Smart Systems. Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3575-6_55.

Full text
5

Benedetti, Ilaria, Tiziana Laureti, Niccolò Salvini, and Luigi Palumbo. "Food Prices and Household Vulnerability in Italy: Insights from Web-Scraped Data." In Italian Statistical Society Series on Advances in Statistics. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-96736-8_76.

Full text
6

Munteán, László. "5. Asbestos: The Fallout of Shipbreaking in the Global South." In Edition Kulturwissenschaft. transcript Verlag, 2023. http://dx.doi.org/10.14361/9783839466971-008.

Full text
Abstract:
László Munteán focuses in his chapter on the asbestos content of ships slated to be scrapped in the shipbreaking yards of India, Bangladesh, and Pakistan. He demonstrates how the operation of the shipbreaking industry is engrained in imperial power dynamics that continue to ravage human lives, the environment, and the economies of the Global South. While major shipyards in the West are now opening up about their illegal use of asbestos after its ban from shipbuilding and are facing lawsuits by affected workers and their families, the lax environmental, safety, and health regulations in countries of the Global South allow for the recycling and reselling of asbestos (alongside other toxic materials) retrieved from discarded ships. Working under lethal labor conditions, millions of migrant workers earn their livelihood and provide for their families from shipbreaking. Drawing on, among others, Ann Stoler's notion of 'duress', Karen Barad's 'intra-action', and Michael Rothberg's 'implicated subject', this chapter follows the toxic trail of asbestos, and probes the web of responsibility for the sustenance of exploitation in the shipbreaking industry.
7

Bechini, Alessio, Beatrice Lazzerini, Francesco Marcelloni, and Alessandro Renda. "Integration of Web-Scraped Data in CPM Tools: The Case of Project Sibilla." In Proceedings of Fifth International Congress on Information and Communication Technology. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5859-7_27.

Full text
8

Alfken, Christoph, Charlotte Articus, Hanna Brenzel, Jana Emmenegger, Ralf Münnich, and Johannes Rohde. "Estimating Regional Rental Prices on LAU 2 Municipalities in North Rhine-Westphalia." In Studies in Theoretical and Applied Statistics. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-63630-1_2.

Full text
Abstract:
Recently, rental prices have come increasingly into policy focus, with highly divergent developments between cities and rural regions. The Ukraine and energy crises put additional pressure on rental prices, owing to an imbalance of supply and demand and to scarce affordable living space, especially in big cities. Hence, reliable estimates are of major importance for evidence-based policy. However, available data sources are either subject to selection biases (e.g. web-scraped data) or hardly allow accurate estimates at a small regional scale. The present chapter derives small area estimates at LAU 2 level using classical and spatial Fay–Herriot methods, contrasted with rental prices from Internet platforms. The data in use are official data from the federal state of North Rhine-Westphalia in Germany.
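
For orientation, the (non-spatial) Fay–Herriot area-level model links the direct survey estimate for municipality i to auxiliary covariates; the spatial variant additionally allows correlation among the area effects. A standard statement of the model:

```latex
\hat{\theta}_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_i + e_i,
\qquad v_i \sim \mathcal{N}(0, \sigma_v^2),
\qquad e_i \sim \mathcal{N}(0, \psi_i),
```

where \(\hat{\theta}_i\) is the direct estimate for area \(i\), \(v_i\) the area-level random effect, and \(e_i\) the sampling error with known variance \(\psi_i\).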
9

Patil, Ankur, Nishtha Jain, Rahul Agrahari, Murhaf Hossari, Fabrizio Orlandi, and Soumyabrata Dev. "A Data-Driven Analysis of Formula 1 Car Races Outcome." In Communications in Computer and Information Science. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_11.

Full text
Abstract:
There is a range of factors that affect the outcome of Formula 1 (F1) car races; today, it is reasonable to say that F1 races are first won at the factory, and then on the track. F1 teams accumulate enormous amounts of data during races. In this paper, we propose a data-driven approach to identify the most important factors that contribute to the overall points scored by each driver in an F1 season. We perform a correlation analysis along with a principal components analysis (PCA) to identify the factors that are closely related. Furthermore, using PCA, we efficiently reduce our 21 input variables to a lower-dimensional subspace that can explain most of the variance in our data and is easier to comprehend. We obtain five years (2015–2019) of data describing F1 car characteristics from a publicly available website, https://www.racefans.net/, and use this web-scraped F1 race data to understand the impact of different car features on the total points scored by a driver in the season. To the best of our knowledge, our work is the first of its kind in the area of F1 car races.
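
A minimal sketch of the correlation-plus-PCA reduction the abstract describes, in scikit-learn; the CSV file name and the assumption that all 21 columns are numeric are placeholders, not the paper's pipeline.

```python
# Sketch: correlate the car variables, then project them onto the principal
# components that explain 95% of the variance. File and columns are assumed.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("f1_car_features.csv")  # hypothetical 21 numeric car features

corr = df.corr()                         # pairwise relations between factors
X = StandardScaler().fit_transform(df)   # PCA is scale-sensitive, so standardize

pca = PCA(n_components=0.95)             # keep enough components for 95% variance
scores = pca.fit_transform(X)
print(pca.explained_variance_ratio_)
```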
10

Wilson, Andrew S., Vincent Gaffney, Chris Gaffney, et al. "Curious Travellers: Using Web-Scraped and Crowd-Sourced Imagery in Support of Heritage Under Threat." In Visual Heritage: Digital Approaches in Heritage Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-77028-0_4.

Full text

Conference papers on the topic "And Web Scrapper"

1

Huang, Wenhao, Zhouhong Gu, Chenghao Peng, et al. "AutoScraper: A Progressive Understanding Web Agent for Web Scraper Generation." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-main.141.

Full text
2

G, Sriram, Rahul S, Adaline Suji R, Priyanka Nallusamy, Nivitha K, and Arumuga Arun R. "Scrapetalk: Chatbot Conversation with Web Scraped Insights." In 2024 International Conference on Integration of Emerging Technologies for the Digital World (ICIETDW). IEEE, 2024. https://doi.org/10.1109/icietdw61607.2024.10940054.

Full text
3

Bojnal, Hrishik Sai, Dharmisht SVK, J. Krishna Kaarthik, Dhyaan Kotian, and Shashidhar Virupaksha. "Privacy Preservation of Cluster Integrity on Web-Scraped Hospital Data." In 2025 Fifth International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT). IEEE, 2025. https://doi.org/10.1109/icaect63952.2025.10958977.

Full text
4

Kanataria, Nikunj, Kunj Pareshbhai Patel, Hetul Niteshbhai Patel, Parth Goel, Krishna Patel, and Dweepna Garg. "RAG-Enhanced Large Language Model for Intelligent Assistance from Web-Scraped Data." In 2024 9th International Conference on Communication and Electronics Systems (ICCES). IEEE, 2024. https://doi.org/10.1109/icces63552.2024.10859894.

Full text
5

Brach, William, Matej Petrik, Kristián Košt’ál, and Michal Ries. "Ghosts in the Markup: Techniques to Fight Large Language Model-Powered Web Scrapers." In 2025 37th Conference of Open Innovations Association (FRUCT). IEEE, 2025. https://doi.org/10.23919/fruct65909.2025.11008269.

Full text
6

Supleo, Richmond Bryan A., Robert G. De Luna, and Aivor C. Padilla. "Predicting Used Car Prices in Metro Manila Using Artificial Neural Networks on Web-Scraped Data." In 2024 7th International Conference on Informatics and Computational Sciences (ICICoS). IEEE, 2024. http://dx.doi.org/10.1109/icicos62600.2024.10636891.

Full text
7

Al-Abbas, Faisal M., Qasem Salem, and Ahmed Harb. "Top of Line Corrosion Probabilistic Risk Analysis for Wet Sour Subsea Pipeline." In CORROSION 2019. NACE International, 2019. https://doi.org/10.5006/c2019-13116.

Full text
Abstract:
Top of Line Corrosion (TLC) occurs in multiphase flow when water vapor condenses at the top and sides of the pipeline, leading to severe corrosion attack. This study investigated the probabilistic risk of TLC for a wet sour gas subsea pipeline using flow modeling and corrosion predictions. The flow-assurance hydraulic study showed that most of the water drops out over the first few kilometers as the gas is cooled, with much less condensation through the rest of the offshore section, until the gas reaches the onshore area, where its temperature drops further due to the Joule-Thomson effect. Corrosion activity was anticipated to be higher at the high-condensation locations, and the corrosion prediction modeling revealed high corrosion severity driven by TLC. To maintain system integrity, internal coating supplemented by V-jet batch inhibitor injection was selected to protect against TLC. The study also recognized the challenge of applying the batch treatment, as it requires process interruption to meet scraper speed limitations. The industry path forward should therefore consider developing novel TLC treatments that can be applied with no impact on operations.
8

Mishra, Debahuti, and Niharika Pujari. "Cross-domain query answering: Using Web scrapper and data integration." In 2011 2nd International Conference on Computer and Communication Technology (ICCCT). IEEE, 2011. http://dx.doi.org/10.1109/iccct.2011.6075193.

Full text
9

Koras, S., Mannem Venkatarao Rao, and V. Bhuvaneswari. "Bi-directional Methodology for Literature Extraction from PubMed Abstracts using Web Scrapper and Web Crawler." In 2019 International Conference on Communication and Electronics Systems (ICCES). IEEE, 2019. http://dx.doi.org/10.1109/icces45898.2019.9002140.

Full text
10

Sun, Peiyuan, and Yu Sun. "Web Scraper Utilizes Google Street view Images to Power a University Tour." In 10th International Conference on Information Technology Convergence and Services (ITCSE 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110916.

Full text
Abstract:
Due to the outbreak of the Covid-19 pandemic, college tours are no longer available, so many students have lost the opportunity to see their dream school's campus. To solve this problem, we developed a product called "Virtourgo," a university virtual tour website that uses Google Street View images gathered by a web scraper, allowing students to see what college campuses are like even when tours are unavailable during the pandemic. The project consists of four parts: the web scraper script, the GitHub server, the Google Domains DNS server, and the HTML files. Some challenges we met include scraping repeated pictures and making the HTML dropdown menu jump to the correct location; we solved these by implementing Python and JavaScript functions that specifically target such challenges. Finally, after experimenting with all the functions of the web scraper and website, we confirmed that it works as expected and can scrape and deliver tours of any university campus or public building we want.

Reports on the topic "And Web Scrapper"

1

Kaltenberg, Mary, Adam Jaffe, and Margie Lachman. The Age of Invention: Matching Inventor Ages to Patents Based on Web-scraped Sources. National Bureau of Economic Research, 2021. http://dx.doi.org/10.3386/w28768.

Full text
2

Forteza, Nicolás, Elvira Prades, and Marc Roca. Analysing the VAT cut pass-through in Spain using web-scraped supermarket data and machine learning. Banco de España, 2024. http://dx.doi.org/10.53479/36652.

Full text
Abstract:
On 28 December 2022, the Spanish government announced a temporary Value Added Tax (VAT) rate reduction for selected products. VAT rates were cut on 1 January 2023 and are expected to go back to their previous level by mid-2024. Using a web-scraped dataset, we leverage machine learning techniques to classify each product. Then we study the price effects of the temporary VAT rate reduction, covering the daily prices of roughly 10,000 food products sold online by a Spanish supermarket. To identify the causal price effects, we compare the evolution of prices for treated items (that is, subject to the tax policy) against a control group (food items outside the policy’s scope). Our findings indicate that, at the supermarket level, the pass-through was almost complete. We observe differences in the speed of pass-through across different product types.
3

Clegg, Alex, and Adam Corlett. Limited ambition? An assessment of the rumoured options for easing the two-child limit. The Resolution Foundation, 2025. https://doi.org/10.63492/zcoc73.

Full text
Abstract:
Abolishing the two-child limit would be the most cost-effective way to reduce child poverty; if it is not scrapped, we project that 4.8 million children (34 per cent) will be in poverty by 2029-30, including half of all children in large families. There has been speculation in recent weeks that the Government is considering measures that would reduce the impact of the two-child limit, but fall short of fully scrapping it, either to reduce the cost to the Government (which we estimate to be £3.5 billion in 2029-30, or £4.5 billion if the benefit cap is also scrapped) or so that the Government can say that there is still a limit on how much Universal Credit is paid to families with children. Of the suggested compromise options, either moving to a three-child limit or paying lower amounts for third and subsequent children would both be preferable to other options that introduce problematic incentives, cliff-edges or distortions.
4

Anilkumar, Rahul, Benjamin Melone, Michael Patsula, et al. Canadian jobs amid a pandemic : examining the relationship between professional industry and salary to regional key performance indicators. Department of Systems and Computer Engineering, Carleton University, 2022. http://dx.doi.org/10.22215/dsce/220608.

Full text
Abstract:
The COVID-19 pandemic has contributed to massive rates of unemployment and greater uncertainty in the job market, and there is a growing need for data-driven tools and analyses to better inform the public on trends within the job market. In particular, obtaining a "snapshot" of available employment opportunities mid-pandemic promises insights to inform policy and support retraining programs. In this work, we combine data scraped from the Canadian Job Bank with the Numbeo globally crowd-sourced repository to explore the relationship between job postings during a global pandemic and key performance indicators (e.g. quality-of-life index, cost of living) for major cities across Canada. This analysis aims to help Canadians make informed career decisions, collect a "snapshot" of Canadian employment opportunities amid a pandemic, and help job seekers identify the fit between the desired lifestyle of a city and their career. We collected a new high-quality dataset of job postings from jobbank.gc.ca obtained through ethical web scraping and performed exploratory data analysis on this dataset to identify job-opportunity trends. When optimizing the average salary of job openings together with quality-of-life, affordability, cost-of-living, and traffic indices, Edmonton, AB consistently scores higher than the mean and is therefore an attractive place to move. Furthermore, we identified optimal places to relocate to with respect to individual skill levels: Ajax, Marathon, and Chapleau, ON are attractive cities for IT professionals, construction workers, and healthcare workers respectively when maximizing average salary. Finally, we publicly release our scraped dataset as a mid-pandemic snapshot of Canadian employment opportunities and present a public web application that provides an interactive visual interface summarizing our findings for the general public and the broader research community.
5

Fourqurean, James, Johannes Krause, Juan González-Corredor, Tom Frankovich, and Justin Campbell. Caricas Partner's Practical Field and Laboratory Guide. Florida International University, 2024. http://dx.doi.org/10.25148/merc_fac.2024.32.

Full text
Abstract:
This field and laboratory guide describes the field and laboratory methods used to characterize blue carbon in seagrass meadows. It was developed for the Caribbean Carbon Accounting in Seagrass project and describes the protocols and methods used by the network. In brief, at each project site, seagrass abundance, species composition, canopy height, and sediment type were assessed at sixteen 0.25 m2 quadrats placed at random locations within the site. Eight 20 cm diameter cores were taken to assess seagrass biomass, shoot density, and to provide the material for assessing seagrass carbon and nutrient content. All seagrasses within each of the eight cores were separated by species and tissue type, washed and scraped to remove epiphytes, then dried and weighed. A piston core of uncompressed soils was retrieved, to a depth of 1 m or until refusal. Cores were subsampled at 5 cm depth intervals using small subcorers. All subcores were weighed wet to permit the calculation of porosity and soil dry bulk density. Seagrass tissue and sediment samples were oven-dried at 60°C, and dry weight recorded. Finally, samples were analyzed in the laboratory for determination of Loss on Ignition, total carbon content, inorganic carbon content, organic carbon content, and carbon and nitrogen content as well as stable isotope ratios. The resulting data allow for the estimation of seagrass organic carbon stocks as well as nutrient and carbonate stocks in biomass and sediment, their relationship with environmental covariates, and the contribution of seagrass material to carbon stocks.
