To see the other types of publications on this topic, follow the link: Webpage ranking.

Journal articles on the topic 'Webpage ranking'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 43 journal articles for your research on the topic 'Webpage ranking.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Sankpal, Lata Jaywant, and Suhas H. Patil. "Rider-Rank Algorithm-Based Feature Extraction for Re-ranking the Webpages in the Search Engine." Computer Journal 63, no. 10 (June 12, 2020): 1479–89. http://dx.doi.org/10.1093/comjnl/bxaa032.

Abstract:
Webpage re-ranking is a challenging task when retrieving webpages based on a user's query. Even though search engines order webpages by the importance of their content, retrieving the necessary documents for an input query is quite difficult. Hence, the webpages available on websites need to be re-ranked based on page features in search engines such as Google and Bing. To this end, an effective Rider-Rank algorithm, based on the Rider Optimization Algorithm (ROA), is proposed to re-rank webpages. Input queries are forwarded to different search engines, and the webpages each engine returns for the query are gathered. Initially, keywords are generated for the webpages. Then, the top keyword is selected, and features are extracted from it using factor-based, text-based and rank-based features of the webpage. Finally, the webpages are re-ranked using the Rider-Rank algorithm. The performance of the proposed approach is analysed with metrics such as F-measure, recall and precision. The analysis shows that the proposed algorithm obtains an F-measure, recall and precision of 0.90, 0.98 and 0.84, respectively.
2

Satish Babu, J., T. Ravi Kumar, and Dr Shahana Bano. "Optimizing webpage relevancy using page ranking and content based ranking." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 1025. http://dx.doi.org/10.14419/ijet.v7i2.7.12220.

Abstract:
Web information mining systems can be divided into several categories according to the kind of data mined and the goals that a specific category sets: web structure mining, web usage mining and web content mining. This paper proposes a new web content mining method for page significance ranking based on analysis of page content. The method, which we call Page Content Rank (PCR), combines several heuristics that appear to be critical for analysing the content of web pages. Page significance is determined from the significance of the terms the page contains; the significance of a term is calculated with respect to a given query q and depends on its statistical and linguistic features. As the source set of pages for mining, we use the set of pages returned by a search engine for the query q. PCR uses a neural network as its internal classification structure. We describe an implementation of the proposed method and a comparison of its results with the existing PageRank algorithm.
3

Zhang, Shao Xuan, and Tian Liu. "A Webpage Ranking Algorithm Based on Collaborative Recommendation." Advanced Materials Research 765-767 (September 2013): 998–1002. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.998.

Abstract:
In view of the difficulty of constructing user-interest models for personalized ranking of search results, and the imprecision of the associated relevance calculations, this paper proposes a personalized ranking method that combines a user-interest model with a collaborative recommendation algorithm. The method trains a user-interest model from the user's search history, including submitted queries and clicked webpages; it then uses a collaborative recommendation algorithm to find neighbour users with common interests, and sorts the search results according to these neighbours' webpage preferences and their association with the user. Experimental results show that the algorithm improves average minimum precision by about 0.1 over a general sorting algorithm, and that minimum precision increases with the number of neighbour users. Compared with other ranking algorithms, the collaborative recommendation approach improves the relevance of the ranked webpages to the user's interests, thereby improving ranking effectiveness and the user's search experience.
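The neighbour-based re-ranking the abstract describes can be sketched as follows; this is a minimal illustration with hypothetical click-history data, not the paper's implementation:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse click-count dicts."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rerank(results, target, histories, k=2):
    """Re-order `results` by how often the target user's k most
    similar neighbours (by click-history cosine) visited each page."""
    neighbours = sorted(
        (u for u in histories if u != target),
        key=lambda u: cosine(histories[target], histories[u]),
        reverse=True,
    )[:k]
    def score(page):
        return sum(histories[u].get(page, 0) for u in neighbours)
    return sorted(results, key=score, reverse=True)
```

For example, a user whose click history closely matches a neighbour who visited page `p3` heavily would see `p3` promoted above a less-visited page.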
4

Hong, Ying, and Zeng Min Geng. "Research and Realization of a Search Engine System for Professional Field." Advanced Materials Research 850-851 (December 2013): 745–50. http://dx.doi.org/10.4028/www.scientific.net/amr.850-851.745.

Abstract:
In light of the deficiencies of general search engine technology in professional retrieval, this paper researches and designs a search engine system for professional fields (SESPF for short). The system automatically crawls web pages with a spider program, introduces a professional dictionary, and filters webpage information according to certain rules. At the same time, the system improves the PageRank algorithm and the Lucene webpage ranking algorithm. Experimental results show that the system achieves higher precision in professional-field retrieval than a general search engine.
5

K.G., Srinivasa, Anil Kumar Muppalla, Bharghava Varun A., and Amulya M. "MapReduce Based Information Retrieval Algorithms for Efficient Ranking of Webpages." International Journal of Information Retrieval Research 1, no. 4 (October 2011): 23–37. http://dx.doi.org/10.4018/ijirr.2011100102.

Abstract:
In this paper, the authors discuss the MapReduce implementation of the crawler, indexer and ranking algorithms in search engines. The proposed algorithms are used in search engines to retrieve results from the World Wide Web. A crawler and an indexer in a MapReduce environment are used to improve the speed of crawling and indexing. The proposed ranking algorithm is an iterative method that makes use of the link structure of the Web and is developed using the MapReduce framework to improve the speed of convergence when ranking the webpages. Categorization is used to retrieve and order the results according to the user's choice, personalizing the search. A new score is introduced that is associated with each webpage and is calculated from the user's query and the number of occurrences of the query terms in the document corpus. The experiments are conducted on Web graph datasets and the results are compared with the serial versions of the crawler, indexer and ranking algorithms.
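The iterative link-structure ranking the authors parallelise with MapReduce is, at its core, a PageRank-style power iteration. A simplified serial sketch (not the paper's MapReduce code):

```python
def pagerank(links, d=0.85, tol=1e-9, max_iters=100):
    """Power iteration over a {page: [outlinks]} graph, with the
    usual damping factor d and dangling-page mass redistribution."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iters):
        dangling = sum(rank[p] for p in pages if not links.get(p))
        new = {p: (1 - d) / n + d * dangling / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank, prev = new, rank
        if sum(abs(rank[p] - prev[p]) for p in pages) < tol:
            break
    return rank
```

In a MapReduce setting, the inner loop becomes the map step (each page emits `d * rank / len(outs)` to its outlinks) and the per-page summation becomes the reduce step.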
6

Zhao, Hong, Chen Sheng Bai, and Song Zhu. "Automatic Keyword Extraction Algorithm and Implementation." Applied Mechanics and Materials 44-47 (December 2010): 4041–49. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.4041.

Abstract:
Search engines can bring a lot of benefit to a website, and each page's search-engine ranking is very important. Search engine optimization (SEO) affects where a web page appears in the rankings, and a page must set suitable terms in its "keywords" field to benefit from SEO. This paper focuses on the content of a given page and extracts the keywords of each page by calculating word frequency. The algorithm is implemented in the C# language. Setting webpage keywords well is of great importance for promoting information and products.
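The frequency-based extraction the abstract describes is implemented there in C#; a minimal Python sketch of the same idea (the stopword list is an illustrative assumption):

```python
import re
from collections import Counter

# Illustrative stopword list; a real deployment would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def extract_keywords(text, n=5):
    """Return the n most frequent non-stopword terms of a page's text,
    suitable for populating a keywords meta tag."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(n)]
```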
7

Rahman, Md Mahbubur, Samsuddin Ahmed, Md Syful Islam, and Md Moshiur Rahman. "An Effective Ranking Method of Webpage Through TFIDF and Hyperlink Classified Pagerank." International Journal of Data Mining & Knowledge Management Process 3, no. 4 (July 31, 2013): 149–56. http://dx.doi.org/10.5121/ijdkp.2013.3411.

8

Sangamuang, Sumalee, Pruet Boonma, Juggapong Natwichai, and Wanpracha Art Chaovalitwongse. "Impact of minimum-cut density-balanced partitioning solutions in distributed webpage ranking." Optimization Letters 14, no. 3 (February 13, 2019): 521–33. http://dx.doi.org/10.1007/s11590-019-01399-9.

9

Makkar, Aaisha, and Neeraj Kumar. "User behavior analysis-based smart energy management for webpage ranking: Learning automata-based solution." Sustainable Computing: Informatics and Systems 20 (December 2018): 174–91. http://dx.doi.org/10.1016/j.suscom.2018.02.003.

10

Poulos, Marios, Sozon Papavlasopoulos, V. S. Belesiotis, and Nikolaos Korfiatis. "A semantic self-organising webpage-ranking algorithm using computational geometry across different knowledge domains." International Journal of Knowledge and Web Intelligence 1, no. 1/2 (2009): 24. http://dx.doi.org/10.1504/ijkwi.2009.027924.

11

Priya, R. Vishnu, V. Vijayakumar, and Longzhi Yang. "Semantics Based Web Ranking Using a Robust Weight Scheme." International Journal of Web Portals 11, no. 1 (January 2019): 56–72. http://dx.doi.org/10.4018/ijwp.2019010104.

Abstract:
In this paper, HTML tags and attributes are used to determine the different structural positions of text in a web page. Tag- and attribute-based models are used to assign a weight to text that appears in different structural positions of a web page. Genetic algorithms (GA), harmony search (HS) and particle swarm optimization (PSO) are used to select informative terms using a novel tag-attribute and term-frequency weighting scheme. These informative terms, with heuristic weights, emphasise important terms, qualifying how well they semantically describe a webpage and distinguish it from others. The proposed approach is developed by customizing Terrier and tested on the ClueWeb09B, WT10g, .GOV2 and uncontrolled data collections. Its performance is encouraging against five baseline ranking models, with gains of 75-90%, 70-83% and 43-60% in P@5, P@10 and MAP, respectively.
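The tag-weighted term scoring idea can be sketched as follows; the numeric weights here are illustrative placeholders, whereas the paper learns its weighting with GA/HS/PSO:

```python
import re

# Illustrative structural weights (assumed, not the paper's learned values):
# text in a title counts more than text in a plain paragraph.
TAG_WEIGHTS = {"title": 5.0, "h1": 3.0, "b": 1.5, "p": 1.0}

def term_scores(tagged_text):
    """tagged_text: list of (tag, text) pairs for one page.
    Returns {term: tag-weighted frequency}."""
    scores = {}
    for tag, text in tagged_text:
        w = TAG_WEIGHTS.get(tag, 1.0)
        for term in re.findall(r"[a-z]+", text.lower()):
            scores[term] = scores.get(term, 0.0) + w
    return scores
```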
12

Lakshmanaprabu, S. K., K. Shankar, Deepak Gupta, Ashish Khanna, Joel J. P. C. Rodrigues, Plácido R. Pinheiro, and Victor Hugo C. de Albuquerque. "Ranking Analysis for Online Customer Reviews of Products Using Opinion Mining with Clustering." Complexity 2018 (September 6, 2018): 1–9. http://dx.doi.org/10.1155/2018/3569351.

Abstract:
Online shopping sites are becoming increasingly popular, and organisations are eager to understand their customers' buying behaviour in order to increase product sales. Online shopping enables an efficient exchange of money and goods by end users without a large investment of time. The goal of this paper is to analyse highly recommended e-commerce websites with the help of a clustering strategy and a swarm-based optimization technique. First, customer reviews of products, with several features, are gathered from e-commerce sites; a fuzzy c-means (FCM) clustering method is then used to group the features for easier processing. The novelty of this work, the Dragonfly Algorithm (DA), identifies the optimal features of products on the sites, and an optimal feature-based ranking procedure then determines which e-commerce site is the best and most user-friendly. The results demonstrate a maximum accuracy rate of 94.56% compared with existing methods.
13

Littman, Dalia, and Fumiko Chino. "Availability, reading level, quality, and accessibility of online cancer center smoking cessation materials." Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): e18662-e18662. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.e18662.

Abstract:
e18662 Background: Smoking cessation after a cancer diagnosis improves cancer outcomes. Therefore, it is important for cancer centers to provide educational resources to encourage patients to quit smoking. The NIH recommends that patient reading materials be written at a grade 6-7 reading level to maximize comprehension. As smokers on average have lower educational attainment than the general population, they may have particular difficulty comprehending smoking cessation materials written at advanced grade levels. Methods: This study evaluated the reading level of online resources via textual analysis of smoking cessation webpages associated with 63 NCI-Designated Cancer Centers or their affiliated medical centers or universities. Reading level was assessed using the WebFx Readability Test Tool. Differences in grade level were calculated by Mood’s Median Test. Content was evaluated for the quality of information, including textual analysis of print-out pamphlets. Non-English content and ease of navigation to webpages was documented. Results: Availability: Of 63 cancer centers, 42 (67%) had smoking cessation webpages. Among centers that did not have their own webpages, 14 had smoking cessation webpages hosted by affiliated medical centers and the remaining 7 had webpages hosted by affiliated universities. Reading Level: The median grade level for online smoking cessation materials was 9 (interquartile range IQR 8-10). There was no significant difference in grade level based on cancer center region, ranking, or whether the webpage was hosted by the cancer center, medical center or university. 17 webpages (27%) had print out pamphlets available, which had a median reading level of 8.5 (IQR 7-10). 
Information Quality: 27 webpages (43%) explicitly stated that smoking cessation improves cancer outcomes, 15 (24%) included details about smoking cessation medications, 16 (25%) provided information on behavioral counseling, and 14 (22%) described the risks/benefits of e-cigarette use. Only 4 (6%) had information on all four topics, while 21 (33%) did not have information on any of these four topics. Accessibility: Only 3 webpages (5%) were available in multiple languages. 12 webpages (19%) were inaccessible by search from the homepage with common terms (i.e. smoking, quit smoking, tobacco, etc). 38 webpages (60%) required 3 or more clicks to reach from the center homepage. Conclusions: Online smoking cessation materials at leading cancer centers exceed recommended reading levels, which can inhibit comprehension for patients trying to quit smoking. These webpages do not routinely include information on cancer outcomes or on evidence-based medications and behavioral change interventions to assist patients in quitting. Given the survival benefit found in cancer patients who quit smoking, it is imperative that educational materials from cancer centers maximize comprehension and accessibility.
14

Serrano-Cinca, Carlos, and Jose Felix Muñoz-Soro. "What municipal websites supply and citizens demand: a search engine optimisation approach." Online Information Review 43, no. 1 (February 11, 2019): 7–28. http://dx.doi.org/10.1108/oir-02-2018-0042.

Abstract:
Purpose – The purpose of this paper is to analyse whether citizens' searches on the internet coincide with the services that municipal websites offer. In addition, the authors examine municipal webpage rankings in search engines and the factors explaining them.
Design/methodology/approach – The empirical study, conducted on a sample of Spanish city councils, tested whether the information found on a municipal website fits citizens' demands, by comparing the most-searched keywords with the contents of municipal websites.
Findings – A positive relationship between the supply and demand of municipal information on the internet has been found, but much can still be improved. The administrations analysed rank at the top of search engines for the basic data of the organisation, as well as some of its fundamental competences, but the results are not entirely effective for some keywords still highly demanded by citizens, such as those related to employment or tourism. Factors explaining internet ranking include the number of pages of the municipal website, its presence on social networks and an indicator designed to measure the difficulty of ranking for the municipal place-name.
Originality/value – The results obtained from this study provide valuable information for municipal managers. Municipal websites should not only include information that interests citizens, but also achieve accessibility standards, have a responsive web design and follow the rules of web usability. Additionally, they should be findable, which requires designing the municipal website with search engines in mind, particularly certain technical characteristics that improve findability. A municipal website that wants a good position should increase its contents and attain the maximum possible degree of visibility on social networks.
15

Hasan, Fares, Koo Kwong Ze, Rozilawati Razali, Abudhahir Buhari, and Elisha Tadiwa. "An IMPROVED PAGERANK ALGORITHM BASED ON A HYBRID APPROACH." Science Proceedings Series 2, no. 1 (April 9, 2020): 17–21. http://dx.doi.org/10.31580/sps.v2i1.1215.

Abstract:
PageRank is an algorithm that brings order to the Internet by returning the best results for a user's search query. The algorithm calculates the links pointing to a webpage, reflecting whether the webpage is relevant or not. However, problems remain concerning the time needed to calculate the PageRank of all webpages: the turnaround time is long because the number of webpages on the Internet is large and keeps increasing. Secondly, the results returned by the algorithm are biased towards old webpages, so newly created webpages receive lower rankings than old ones even when the new pages have comparatively more relevant information. To overcome these setbacks, this research proposes an alternative hybrid algorithm based on an optimized normalization technique and a content-based approach. The proposed algorithm reduces the number of iterations required to calculate the PageRank, and hence improves efficiency, by calculating the mean of all PageRank values and normalising each PageRank value by that mean. This is complemented by counting the valid links of web pages based on the validity of the links rather than conventional popularity.
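One reading of the abstract's normalisation step, sketched minimally (the exact update rule is not spelled out in the abstract, so this is an assumption): after each iteration, every score is divided by the mean score, keeping the vector on a fixed scale.

```python
def normalise_by_mean(ranks):
    """Divide every page's rank by the mean rank, so the scores
    average to 1.0 between iterations."""
    mean = sum(ranks.values()) / len(ranks)
    return {p: r / mean for p, r in ranks.items()}
```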
16

Anuforo, Peter U., Hazeline Ayoup, Umar Aliyu Mustapha, and Ahmad Haruna Abubakar. "The Implementation of Balance Scorecard and Its Impact on Performance: Case of Universiti Utara Malaysia." International Journal of Accounting & Finance Review 4, no. 1 (January 9, 2019): 1–16. http://dx.doi.org/10.46281/ijafr.v4i1.226.

Abstract:
The intensity of competition among contemporary Higher Education Institutions (HEIs) has led many such institutions to focus on providing high-quality education so as to attain a suitable position in world university rankings by adopting suitable performance management. This study aims to demonstrate how UUM implements and uses the BSC to enhance and improve its strategic plans by addressing the issues facing its strategic management process. The study employed a qualitative case-study approach. Data were collected through interviews and by reviewing the university's quarterly and annual reports, organizational structure, webpage and news bulletins, and were analysed qualitatively using thematic analysis. Consistent with Kaplan and Norton's BSC model for the public sector, the findings indicate that the case institution implements the BSC ideology by adapting the concept to reflect the unique contextual needs of UUM. The study found that, in implementing the BSC project, university staff buy-in, top-management commitment, organizational culture and communication strategy have significant effects on the case institution's performance. Findings also revealed that the implementation of the BSC ideology at UUM has had a significant impact on its performance, helping to improve the institution's overall university rankings. The implication is that BSC implementation helps the university management monitor its performance against its 2016-2020 phase II strategic plans. Future studies should consider more institutions that implement the BSC so as to obtain more detailed results that may be generalized.
17

Rodriguez, Jorge Alberto, Roger B. Davis, Ryan David Nipp, Beverly Moy, and Sanja Percac-Lima. "Are NCI-designated cancer centers websites linguistically accessible?" Journal of Clinical Oncology 35, no. 15_suppl (May 20, 2017): e18078-e18078. http://dx.doi.org/10.1200/jco.2017.35.15_suppl.e18078.

Abstract:
e18078 Background: The digital divide has shifted from disparities in internet access to disparities in content and design. Technology overlooks limited English proficient (LEP) patients, resulting in a lack of translated online content, possibly increasing disparities in cancer prevention, treatment and clinical trial participation. We sought to determine the language accessibility for the websites of the NCI-Designated CC. Methods: In January 2017, we performed a cross-sectional review of the language accessibility of NCI-Designated CC homepages using manual review and informatics methods (i.e., web scraping). Web scraping automates data extraction from online content. The primary outcomes were presence of translated content, number of languages available and method of translation defined as no translation, Google Translate (GT) or manual translation. Manual translation was categorized as limited (few phrases), moderate (1 webpage) or full ( > 1 webpage or entire site). We used logistic regression to assess the relationship between translated website content and CC county demographics: percent LEP, median income and percent of households with an internet subscription. We performed Spearman Rank Correlation by ranking translation effort: no translation, GT, limited, moderate and full translation. Results: Of 69 NCI-Designated CC websites, 54 (78.3%) were without translation, 12 (17.4%) were manually translated and 3 (4.3%) used GT. Of 12 manually translated websites, 6 had fully, 4 had moderate and 2 had limited translations. Of 16 languages offered, Spanish was the most common (100%), followed by Chinese (50%) and Arabic (33%). We found no significant increase in the odds of having a translated website as related to LEP population, median income or percent of households with internet subscription by CC county. There was no correlation between the translation effort and LEP population, internet subscription or median income by CC county. 
Conclusions: We found that most NCI-Designated CC offered no translations of their website content. Despite cancer health disparities and the increasing role of health technology, the NCI-Designated CC websites currently remain inaccessible to LEP patients.
18

Mylsami, T., and B. L. Shivakumar. "Improved Weighted Page Ranking Algorithm Based on Principal Component Analysis and Map Reduce Frame work for Web Access." Asian Journal of Computer Science and Technology 8, no. 2 (May 5, 2019): 32–39. http://dx.doi.org/10.51983/ajcst-2019.8.2.2144.

Abstract:
In general, the World Wide Web has become the most useful resource for information retrieval and knowledge discovery, but the information on the Web keeps expanding in size and density, and retrieving the required information efficiently and effectively is a challenge. This tremendous growth of the Web has created challenges for search engine technology. Web mining is an area that applies data mining techniques to address these requirements. Popular web mining algorithms such as PageRank (PR), Weighted PageRank (WPR) and Hyperlink-Induced Topic Search (HITS) are commonly used to sort and rank search results. These page ranking algorithms use web structure mining and web content mining to estimate the relevance of a website, but they do not address the scalability problem or account for visits to the inlinks and outlinks of pages. Fast and efficient page ranking for webpage retrieval therefore remains a challenge. This paper proposes a new improved WPR algorithm that uses a Principal Component Analysis technique (PWPR) based on the mean value of page ranks. The proposed PWPR algorithm takes into account the number of visits to the inlinks and outlinks of pages and distributes rank scores based on the popularity of the pages; the weight values of the pages are computed from their inlinks and outlinks using mean values. However, because new data and updates constantly arrive, the results of data mining applications become stale and obsolete over time. To solve this problem, a MapReduce (MR) framework is a promising approach for refreshing mining results on big data. The proposed MR algorithm reduces the time complexity of the PWPR algorithm by reducing the number of iterations needed to reach a convergence point.
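The inlink/outlink weighting that WPR variants build on is the classical Weighted PageRank scheme of Xing and Ghorbani; a sketch of those edge weights, assuming that scheme (the abstract does not spell its formulas out):

```python
def wpr_weights(links, v, u):
    """Classical Weighted PageRank weights for the edge v -> u:
    W_in is u's share of the inlinks held by all pages v references;
    W_out is u's share of their outlinks."""
    pages = set(links) | {q for outs in links.values() for q in outs}
    inlinks = {p: sum(1 for outs in links.values() if p in outs) for p in pages}
    refs = links[v]  # the reference set: pages v points to
    w_in = inlinks[u] / max(sum(inlinks[p] for p in refs), 1)
    w_out = len(links.get(u, ())) / max(sum(len(links.get(p, ())) for p in refs), 1)
    return w_in, w_out
```

In WPR, the rank a page v passes to u is then proportional to `rank(v) * w_in * w_out` rather than being split evenly among v's outlinks.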
19

Du, YaJun, ChenXing Li, Qiang Hu, XiaoLei Li, and XiaoLiang Chen. "Ranking webpages using a path trust knowledge graph." Neurocomputing 269 (December 2017): 58–72. http://dx.doi.org/10.1016/j.neucom.2016.08.142.

20

Hush, Stefanie, and Joseph K. Williams. "A Review of Craniofacial Training Programs in North America." FACE 1, no. 1 (July 2020): 11–20. http://dx.doi.org/10.1177/2732501620949187.

Abstract:
Introduction: The specialty of craniofacial surgery has expanded rapidly since the landmark surgeries of Dr. Paul Tessier. The expansion of fellowship programs over the last 50 years has been seen in both numbers and structure. This growth has been complemented by the continued expansion of skill sets that fellows are experiencing. However, the exposure to these skill sets are varied. The study had 2 objectives: (1) Create a clearer picture of the skill sets that fellows are exposed to during training and (2) provide some threshold of case numbers shared by programs that may be used to establish shared expectations for the fellow’s experience. Method: A comprehensive database was created and placed on the webpage for the American Society of Craniofacial Surgery (ASCFS). Fellows in the year 2017 to 2018 were asked to input their case logs. The cumulative data base was categorized into 9 groupings, capturing surgeries of the facial skeleton, cleft surgeries and specialty surgeries in the area of microsurgery, facial reanimation, and ear reconstruction. These 9 groupings were used to establish 3 tiers that provided an opportunity to discover thresholds of experience that captured consistent skill sets for the majority of the programs. Results: A total of 6018 cases were entered into the cumulative database of which 3469.5 cases were placed into 9 specified groups. Group 1 (craniosynostosis) had 578 cases (mean = 30.4, SD = 22.3). Sixteen of the 19 programs participating (84.2%) were found to be at or above the 20th percentile ranking for this procedure (20th percentile = 10 cases). Group 2 consisted of Mandibular distraction (144 cases), Group 3 midface skeletal surgeries (87), Group 4 facial trauma (641.5), Group 5 orthognathic surgery (506), Group 6 cleft surgeries (1303.5), Group 7 microsurgery (67), Group 8 facial reanimation (40.5), and Group 9 ear reconstruction (113). Percentile rankings were found for each group. 
Three tiers were created for comparison, Tier 1 (group 1), Tier 2 (groups 2-6), Tier 3 (groups 7-9). When a 20th percentile threshold for case numbers was created for groups 1 to 5, 77.9% of all programs met this criterion (95% CI: 63.7%-92.1%). When group 6 was included, 78.9% of programs met the 20th percentile (95% CI: 67.9%-90.0%). Conclusion: Fellows are receiving consistent exposure to areas of training related to manipulation of the facial skeleton with the exception of midface surgeries. The study also demonstrates a significant volume of both cleft surgery and facial trauma. The majority of the participating programs meet a threshold of 20% for skill sets associated with our subspecialty. These thresholds could be used as guides by fellowship programs and the ASCFS to better monitor our training goals.
21

Limongi, Marco, John C. Lattanzio, Corinne Charbonnel, Inma Dominguez, Jordi Isern, Amanda Karakas, Claus Leitherer, Marcella Marconi, Giora Shaviv, and Jacco van Loon. "DIVISION G COMMISSION 35: STELLAR CONSTITUTION." Proceedings of the International Astronomical Union 11, T29A (August 2015): 436–52. http://dx.doi.org/10.1017/s1743921316000909.

Abstract:
Commission 35 (C35), “Stellar Constitution”, consists of members of the International Astronomical Union whose research spans many aspects of theoretical and observational stellar physics and it is mainly focused on the comprehension of the properties of stars, stellar populations and galaxies. The number of members of C35 increased progressively over the last ten years and currently C35 comprises about 400 members. C35 was part of Division IV (Stars) until 2014 and then became part of Division G (Stars and Stellar Physics), after the main IAU reorganisation in 2015. Four Working Groups have been created over the years under Division IV, initially, and Division G later: WG on Active B Stars, WG on Massive Stars, WG on Abundances in Red Giant and WG on Chemically Peculiar and Related Stars. In the last decade the Commission had 4 presidents, Wojciech Dziembowski (2003-2006), Francesca D'Antona (2006-2009), Corinne Charbonnel (2009-2012) and Marco Limongi (2012-2015), who were assisted by an Organizing Committee (OC), usually composed of about 10 members, all of them elected by the C35 members and holding their positions for three years. The C35 webpage (http://iau-c35.stsci.edu) has been designed and continuously maintained by Claus Leitherer from the Space Telescope Institute, who deserves our special thanks. In addition to the various general information on the Commission structure and activities, it contains links to various resources, of interest for the members, such as stellar models, evolutionary tracks and isochrones, synthetic stellar populations, stellar yields and input physics (equation of state, nuclear cross sections, opacity tables), provided by various groups. The main activity of the C35 OC is that of evaluating, ranking and eventually supporting the proposals for IAU sponsored meetings. In the last decade the Commission has supported several meetings focused on topics more or less relevant to C35. 
Since the primary aim of this document is to present the main activity of C35 over the last ten years, in the following we present some scientific highlights that emerged from the most relevant IAU Symposia and meetings supported and organized by C35 in the last decade.
APA, Harvard, Vancouver, ISO, and other styles
22

Sharma, Prahlad Kumar, and Sanjay Tiwari. "An Enhanced Page Ranking Algorithm Based on Weights and Third level Ranking of the Webpages." International Journal of Computer Trends and Technology 34, no. 1 (April 25, 2016): 9–14. http://dx.doi.org/10.14445/22312803/ijctt-v34p102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Ali, Fayyaz, and Shah Khusro. "Content and link-structure perspective of ranking webpages: A review." Computer Science Review 40 (May 2021): 100397. http://dx.doi.org/10.1016/j.cosrev.2021.100397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Xie, Yue, and Ting-Zhu Huang. "A Model Based on Cocitation for Web Information Retrieval." Mathematical Problems in Engineering 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/418605.

Full text
Abstract:
According to the relationship between authority and cocitation in HITS, we propose a new hyperlink weighting scheme to describe the strength of the relevancy between any two webpages. Then we combine hyperlink weight normalization and random surfing schemes as used in PageRank to justify the new model. In the new model based on cocitation (MBCC), the pages with stronger relevancy are assigned higher values, not just depending on the outlinks. This model combines both features of HITS and PageRank. Finally, we present the results of some numerical experiments, showing that the MBCC ranking agrees with the HITS ranking, especially in top 10. Meanwhile, MBCC keeps the superiority of PageRank, that is, existence and uniqueness of ranking vectors.
APA, Harvard, Vancouver, ISO, and other styles
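The MBCC model above combines hyperlink weight normalization with the random surfing (teleportation) scheme of PageRank. As a point of reference, here is a minimal power-iteration sketch of plain PageRank with uniform hyperlink weights; the cocitation-based weighting of the paper is not reproduced, and the damping value and example graph are illustrative only.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Power iteration on a column-normalized link matrix with teleportation.
    adj[i, j] = 1 means page j links to page i."""
    n = adj.shape[0]
    # Normalize columns so each page distributes its weight over its outlinks
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # dangling pages: avoid divide-by-zero
    M = adj / col_sums
    r = np.full(n, 1.0 / n)
    while True:
        r_next = damping * M @ r + (1 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Tiny 3-page web: page 0 -> 1, page 1 -> 2, page 2 -> 0 and 1
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
ranks = pagerank(A)
```

Because of the teleportation term, the iteration is a contraction, which is what gives PageRank the existence and uniqueness of ranking vectors that the abstract says MBCC retains.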
25

Dai, Lu, Wei Wang, and Wanneng Shu. "An Efficient Web Usage Mining Approach Using Chaos Optimization and Particle Swarm Optimization Algorithm Based on Optimal Feedback Model." Mathematical Problems in Engineering 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/340480.

Full text
Abstract:
The dynamic nature of information resources as well as the continuous changes in the information demands of the users has made it very difficult to provide effective methods for data mining and document ranking. This paper proposes an efficient particle swarm chaos optimization mining algorithm, based on chaos optimization and particle swarm optimization, that uses a user feedback model to provide a listing of best-matching webpages for the user. The proposed algorithm starts with an initial population of many particles moving around in a D-dimensional search space, where each particle vector corresponds to a potential solution of the underlying problem, formed by subsets of webpages. Experimental results show that our approach significantly outperforms other algorithms in the aspects of response time, execution time, precision, and recall.
APA, Harvard, Vancouver, ISO, and other styles
26

Mehta, Shaily, Daria Ghezzi, Alessia Catalani, Tania Vanzolini, and Pietro Ghezzi. "Online information on face masks: analysis of websites in Italian and English returned by different search engines." BMJ Open 11, no. 7 (July 2021): e046364. http://dx.doi.org/10.1136/bmjopen-2020-046364.

Full text
Abstract:
Objective: Countries have major differences in the acceptance of face mask use for the prevention of COVID-19. This work aims at studying the information online in different countries in terms of information quality and content.
Design: Content analysis.
Method: We analysed 450 webpages returned by searching the string ‘are face masks dangerous’ in Italy, the UK and the USA using three search engines (Bing, Duckduckgo and Google) in August 2020. The type of website and the stance about masks were assessed by two raters for each language and inter-rater agreement reported as Cohen’s kappa. The text of the webpages was collected from the web using WebBootCaT and analysed using corpus analysis software to identify issues mentioned.
Results: Most pages were news outlets, and few (2%–6%) from public health agencies. Webpages with a negative stance on masks were more frequent in Italian (28%) than English (19%). Google returned the highest number of mask-positive pages and Duckduckgo the lowest. Google also returned the lowest number of pages mentioning conspiracy theories and Duckduckgo the highest. Webpages in Italian scored lower than those in English in transparency (reporting authors, their credentials and backing the information with references). When issues about the use of face masks were analysed, mask effectiveness was the most discussed followed by hypercapnia (accumulation of carbon dioxide), contraindication in respiratory disease and hypoxia, with issues related to their contraindications in mental health conditions and disability mentioned by very few pages.
Conclusions: This study suggests that: (1) public health agencies should increase their web presence in providing correct information on face masks; (2) search engines should improve the information quality criteria in their ranking; (3) the public should be more informed on issues related to the use of masks and disabilities, mental health and stigma arising for those people who cannot wear masks.
APA, Harvard, Vancouver, ISO, and other styles
27

Zilincan, Jakub. "SEARCH ENGINE OPTIMIZATION." CBU International Conference Proceedings 3 (September 19, 2015): 506–10. http://dx.doi.org/10.12955/cbup.v3.645.

Full text
Abstract:
Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolves dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ranking because search engines have sophisticated algorithms, which measure the quality of webpages and derive their position in search results from it. Next, we introduced and examined the object of the optimization, which is a particular website. This web site was created for the sole purpose of implementing and testing all the main SEO techniques. The main objective of this article was to determine whether search engine optimization increases ranking of website in search results and subsequently leads to higher traffic. This research question is supported by testing and verification of results. The last part of our article concludes the research results and proposes further recommendations.
APA, Harvard, Vancouver, ISO, and other styles
28

Luh, Cheng-Jye, Sheng-An Yang, and Ting-Li Dean Huang. "Estimating Google’s search engine ranking function from a search engine optimization perspective." Online Information Review 40, no. 2 (April 11, 2016): 239–55. http://dx.doi.org/10.1108/oir-04-2015-0112.

Full text
Abstract:
Purpose – The purpose of this paper is to estimate Google search engine’s ranking function from a search engine optimization (SEO) perspective.
Design/methodology/approach – The paper proposed an estimation function that defines the query match score of a search result as the weighted sum of scores from a limited set of factors. The search results for a query are re-ranked according to the query match scores. The effectiveness was measured by comparing the new ranks with the original ranks of search results.
Findings – The proposed method achieved the best SEO effectiveness when using the top 20 search results for a query. The empirical results reveal that PageRank (PR) is the dominant factor in Google ranking function. The title follows as the second most important, and the snippet and the URL have roughly equal importance with variations among queries.
Research limitations/implications – This study considered a limited set of ranking factors. The empirical results reveal that SEO effectiveness can be assessed by a simple estimation of ranking function even when the ranks of the new and original result sets are quite dissimilar.
Practical implications – The findings indicate that web marketers should pay particular attention to a webpage’s PR, and then place the keyword in URL, the page title, and snippet.
Originality/value – There have been ongoing concerns about how to formulate a simple strategy that can help a website get ranked higher in search engines. This study provides web marketers much needed empirical evidence about a simple way to foresee the ranking success of an SEO effort.
APA, Harvard, Vancouver, ISO, and other styles
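The estimation function described above scores each result as a weighted sum over a small set of factors and re-ranks by that score. A hypothetical sketch follows; the factor names mirror the paper's (PR dominant, then title, with snippet and URL roughly equal), but the weight values and per-factor scores are illustrative, not the paper's fitted estimates.

```python
# Hypothetical weights reflecting the reported ordering of factor importance
WEIGHTS = {"pagerank": 0.5, "title": 0.25, "snippet": 0.125, "url": 0.125}

def query_match_score(factors):
    """Weighted sum of per-factor scores, each assumed normalized to [0, 1]."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

def rerank(results):
    """Re-rank search results by descending query match score."""
    return sorted(results, key=lambda r: query_match_score(r["factors"]), reverse=True)

results = [
    {"url": "a.example", "factors": {"pagerank": 0.2, "title": 1.0, "snippet": 0.5, "url": 0.0}},
    {"url": "b.example", "factors": {"pagerank": 0.9, "title": 0.3, "snippet": 0.2, "url": 0.5}},
]
ordered = rerank(results)   # b.example wins on the strength of its PageRank factor
```

Comparing this re-ranked order against the engine's original order (e.g. with a rank-correlation measure) is how the paper assesses how well the weighted sum approximates the real ranking function.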
29

GORDON, GEOFF. "Indicators, Rankings and the Political Economy of Academic Production in International Law." Leiden Journal of International Law 30, no. 2 (March 7, 2017): 295–304. http://dx.doi.org/10.1017/s0922156517000188.

Full text
Abstract:
Rankings and indicators have been with us for a while now, and have increasingly been objects of attention in international law. Likewise, they have been with the LJIL as a journal for a while now. Our so-called ‘impact factor’ is, to my untrained eye, the most prominent feature on the LJIL webpage hosted by our publisher. So this editorial is something of a rearguard action. The indicators and rankings, however, keep piling up. Proliferation may afford a perverse sort of optimism, about which below but, as will be clear, I do not share it. The increasing number and command of indicators and rankings reflects a consistent trend and a bleak mode of knowledge production. Knowledge production has been a topic in these pages recently, for instance Sara Kendall's excellent editorial on academic production and the politics of inclusion. I mean to continue in that vein, with respect to other aspects of the political economy of the academic production of international law, especially at a nexus of publishing, scholarship and market practices. There is an undeniable element of nostalgia in what will follow, but I do not really mean to celebrate the publishing industry, status quo ante, that has put me in this privileged position to wax nostalgic. The academic publishing business is flawed. What we are preparing the way for is worse. When I say we, I mean to flag my complicity, both as an individual researcher and as an editorial board member. I use the word complicity to convey a personal anxiety, also in my role as editor, so let me be clear: the LJIL board has no policy concerning rankings, and rankings have never influenced review at the journal. Moreover, while I cannot claim to speak for the LJIL as a whole concerning the topic of rankings or any other matter, nor is mine exactly a dissenting voice on the board. The tone of this polemic is mine alone; the concern is not.
APA, Harvard, Vancouver, ISO, and other styles
30

Hicks, John M., Ashley A. Cain, and Jeremiah D. Still. "Visual Saliency Predicts Fixations in Low Clutter Web Pages." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 1114–18. http://dx.doi.org/10.1177/1541931213601883.

Full text
Abstract:
Previous research has shown a computational model of visual saliency can predict where people fixate in cluttered web pages (Masciocchi & Still, 2013). Over time, web site designers are moving towards simpler, less cluttered webpages to improve aesthetics and to make searches more efficient. Even with simpler interfaces, determining a saliency ranking among interface elements is a difficult task. Also, it is unclear whether the traditionally employed saliency model (Itti, Koch, & Niebur, 1998) can be applied to simpler interfaces. To examine the model’s ability to predict fixations in simple web pages we compared a distribution of observed fixations to a conservative measure of chance performance (a shuffled distribution). Simplicity was determined by using two visual clutter models (Rosenholz, Li, & Nakano, 2007). We found under free-viewing conditions that the saliency model was able to predict fixations within less cluttered web pages.
APA, Harvard, Vancouver, ISO, and other styles
31

Bello, Rotimi-Williams, and Firstman Noah Otobo. "Conversion of Website Users to Customers-The Black Hat SEO Technique." International Journal of Advanced Research in Computer Science and Software Engineering 8, no. 6 (June 29, 2018): 29. http://dx.doi.org/10.23956/ijarcsse.v8i6.714.

Full text
Abstract:
Search Engine Optimization (SEO) is a technique which helps search engines find and rank one site over another in response to a search query. SEO thus helps site owners get traffic from search engines. Although the basic principle of operation of all search engines is the same, the minor differences between them lead to major changes in result relevancy. Choosing the right keywords to optimize for is thus the first and most crucial step in a successful SEO campaign. In the context of SEO, keyword density can be used as a factor in determining whether a webpage is relevant to a specified keyword or keyword phrase. SEO is known as a process that affects the online visibility of a website or a webpage in a web search engine's results. In general, the earlier (or higher ranked on the search results page) and more frequently a website appears in the search results list, the more visitors it will receive from the search engine's users; these visitors can then be converted into customers. The objective of this paper is to re-present the black hat SEO technique as an unprofessional but profitable method of converting website users to customers. Having studied white hat, black hat and gray hat SEO, as well as the crawling, indexing, processing and retrieving methods search engines use to search documents and files for keywords over the internet and return the list of results containing those keywords, we observe that the proper application of SEO gives a website a better user experience, helps build brand awareness through high rankings, helps circumvent competition, and allows a high return on investment.
APA, Harvard, Vancouver, ISO, and other styles
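Keyword density, mentioned above as a relevance factor, is simply the share of a page's words taken up by matches of the target keyword phrase. A minimal sketch (the exact normalization varies between SEO tools; this one counts every word of every phrase match against the total word count):

```python
import re

def keyword_density(text, phrase):
    """Fraction of the page's words covered by matches of the keyword phrase."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    kw = phrase.lower().split()
    if not words or not kw:
        return 0.0
    # Count phrase matches over a sliding window of the word sequence
    hits = sum(1 for i in range(len(words) - len(kw) + 1)
               if words[i:i + len(kw)] == kw)
    return hits * len(kw) / len(words)

page = "Cheap shoes online. Buy cheap shoes today, the best cheap shoes."
density = keyword_density(page, "cheap shoes")   # 3 matches over 11 words
```

An unnaturally high value like this one is exactly the keyword-stuffing signal that separates black hat from white hat practice.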
32

Sun, Zhong, Giacomo Pedretti, Elia Ambrosi, Alessandro Bricalli, Wei Wang, and Daniele Ielmini. "Solving matrix equations in one step with cross-point resistive arrays." Proceedings of the National Academy of Sciences 116, no. 10 (February 19, 2019): 4123–28. http://dx.doi.org/10.1073/pnas.1815682116.

Full text
Abstract:
Conventional digital computers can execute advanced operations by a sequence of elementary Boolean functions of 2 or more bits. As a result, complicated tasks such as solving a linear system or solving a differential equation require a large number of computing steps and an extensive use of memory units to store individual bits. To accelerate the execution of such advanced tasks, in-memory computing with resistive memories provides a promising avenue, thanks to analog data storage and physical computation in the memory. Here, we show that a cross-point array of resistive memory devices can directly solve a system of linear equations, or find the matrix eigenvectors. These operations are completed in just one single step, thanks to the physical computing with Ohm’s and Kirchhoff’s laws, and thanks to the negative feedback connection in the cross-point circuit. Algebraic problems are demonstrated in hardware and applied to classical computing tasks, such as ranking webpages and solving the Schrödinger equation in one step.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Wenjia, Jiancheng Zhu, and Pu Zhao. "Comparing World City Networks by Language: A Complex-Network Approach." ISPRS International Journal of Geo-Information 10, no. 4 (April 1, 2021): 219. http://dx.doi.org/10.3390/ijgi10040219.

Full text
Abstract:
City networks are multiplex and diverse rather than being regarded as part of a single universal model that is valid worldwide. This study contributes to the debate on multiple globalizations by distinguishing multiscale structures of world city networks (WCNs) reflected in the Internet webpage content in English, German, and French. Using big data sets from web crawling, we adopted a complex-network approach with both macroscale and mesoscale analyses to compare global and grouping properties in varying WCNs, by using novel methods such as the weighted stochastic block model (WSBM). The results suggest that at the macro scale, the rankings of city centralities vary across languages due to the uneven geographic distribution of languages and the variant levels of globalization of cities perceived in different languages. At the meso scale, the WSBMs infer different grouping patterns in the WCNs by language, and the specific roles of many world cities vary with language. The probability-based comparative analyses reveal that the English WCN looks more globalized, while the French and German worlds appear more territorial. Using the mesoscale structure detected in the English WCN to comprehend the city networks in other languages may be biased. These findings demonstrate the importance of scrutinizing multiplex WCNs in different cultures and languages as well as discussing mesoscale structures in comparative WCN studies.
APA, Harvard, Vancouver, ISO, and other styles
34

"A Multimodal Learning to Rank model for Web Pages." Regular 9, no. 6 (August 30, 2020): 308–13. http://dx.doi.org/10.35940/ijeat.f1442.089620.

Full text
Abstract:
“Learning-to-rank” or LTR utilizes machine learning technologies to optimally combine many features to solve the problem of ranking. Web search is one of the prominent applications of LTR. To improve the ranking of webpages, a multimodality-based Learning to Rank model is proposed and implemented. Multimodality is the fusion, or the process of integrating, multiple unimodal representations into one compact representation. The main problem with web search is that the links that appear at the top of the search list may be either irrelevant or less relevant to the user than ones appearing at a lower rank. Research has shown that a multimodality-based search improves the populated rank list. The multiple modalities considered here are the text on a webpage as well as the images on a webpage. The textual features of the webpages are extracted from the LETOR dataset and the image features of the webpages are extracted from the images inside the webpages using the concept of transfer learning. The VGG-16 model, pre-trained on ImageNet, is used as the image feature extractor. The baseline model, which is trained only on textual features, is compared against the multimodal LTR. The multimodal LTR, which integrates the visual and textual features, shows an improvement of 10-15% in web search accuracy.
APA, Harvard, Vancouver, ISO, and other styles
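The fusion step described above concatenates per-page text and image feature vectors into one compact representation before ranking. A minimal sketch of early fusion with a pointwise linear scorer; the feature dimensions, random data, and linear model are illustrative stand-ins for the paper's LETOR text features, VGG-16 image features, and trained LTR model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(text_feats, image_feats):
    """Early fusion: concatenate per-page text and image feature vectors."""
    return np.concatenate([text_feats, image_feats], axis=1)

# Toy corpus: 4 pages, 5 text features + 3 image features per page
text = rng.random((4, 5))
image = rng.random((4, 3))
X = fuse(text, image)                 # shape (4, 8)

# Pointwise ranking: score each page with a linear model, sort descending
w = rng.random(X.shape[1])
scores = X @ w
ranking = np.argsort(-scores)         # page indices, best first
```

In the actual model the weights would be learned from relevance labels rather than drawn at random, but the fuse-then-score pipeline is the same.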
35

Jamshidi, Mojtaba, Mastoreh Haji, Mohamad Reza Kamankesh, Mahya Daghineh, and Abdusalam Abdulla Shaltooki. "A Multi-Criteria Ranking Algorithm Based on the VIKOR Method for Meta-Search Engines." JOIV : International Journal on Informatics Visualization 3, no. 3 (August 10, 2019). http://dx.doi.org/10.30630/joiv.3.3.269.

Full text
Abstract:
Ranking of web pages is one of the most important parts of search engines and is, in fact, a process through which the quality of a page is estimated by the search engine. In this study, a ranking algorithm based on the VIKOR multi-criteria decision-making method for Meta-Search Engines (MSEs) is proposed. The considered MSE first receives the pages suggested for the search term by eight search engines: Teoma, Google, Yahoo!, AlltheWeb, AltaVista, Wisenut, ODP and MSN. At most the first 10 pages are selected from each search engine, creating an initial dataset of 80 web pages. The proposed parser is then executed on these pages, and eight criteria are extracted from them: the rank of the web page in the related search engine, access time, number of repetitions of the search term, positions of the search term in the webpage, number of media elements in the webpage, number of imports in the webpage, number of incoming links, and number of outgoing links. Finally, using the VIKOR method and these extracted criteria, the web pages are ranked and the 10 top results are provided to the user. The proposed method is implemented in JAVA and MATLAB. In the experiments, the proposed method is run for a query and its ranking results are compared in terms of accuracy with three well-known search engines: Google, Yahoo, and MSN. The comparisons show that the proposed method offers higher accuracy.
APA, Harvard, Vancouver, ISO, and other styles
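VIKOR ranks alternatives by combining each one's total weighted regret (S) and worst single-criterion regret (R) into a compromise index Q, with lower Q being better. A compact sketch under illustrative data (three webpages, three criteria); the paper's eight extracted criteria and its acceptance-condition checks are omitted for brevity.

```python
import numpy as np

def vikor_rank(X, weights, benefit, v=0.5):
    """Rank alternatives (rows of X) by VIKOR's Q index (lower = better).
    benefit[j] is True if criterion j is better when larger."""
    X = np.asarray(X, dtype=float)
    best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    span = np.where(best == worst, 1.0, best - worst)   # avoid 0/0
    D = weights * (best - X) / span        # normalized, weighted regret per criterion
    S, R = D.sum(axis=1), D.max(axis=1)    # group utility and individual regret
    def scale(u):
        return (u - u.min()) / (u.max() - u.min() or 1.0)
    Q = v * scale(S) + (1 - v) * scale(R)  # v trades off S against R
    return np.argsort(Q)                   # indices, best alternative first

# 3 webpages x 3 criteria: engine rank (cost), keyword hits (benefit), inlinks (benefit)
X = [[1, 12, 40],
     [5,  3, 10],
     [2,  8, 25]]
weights = np.array([0.5, 0.3, 0.2])
order = vikor_rank(X, weights, benefit=np.array([False, True, True]))
```

Here page 0 dominates on every criterion, so it comes out first, followed by page 2 and then page 1.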
36

Muzammil, Shaik, and Sai Kiran Yerramaneni. "Web Site Ranking Feedback System." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, February 1, 2019, 443–47. http://dx.doi.org/10.32628/cseit195135.

Full text
Abstract:
We all search on Google for something and get results in the form of different websites with short descriptions. We generally click on the first or second website link; if results are not found, we move down the Google page. Website ranking is assigned by search engines according to different criteria. The last website is seen by almost no one in most cases, while the first website has a great market advantage compared to the last. So we need to help the last website move up in the results and generate revenue by giving it feedback. This system identifies the differences between the first and last websites in the Google results and gives the last website feedback on the content, links and images used by the first website, which the last website can apply in its own webpage. The system is user friendly, built with HTML as the front end and Python Flask as the back end, and uses the Python package Beautiful Soup to parse HTML data and to automate browser behaviour with Python. The system is based on web mining, which has three categories: first, web content mining, in which we scan web pages to learn the links, text and images used; second, web usage mining, in which reports containing details of text, images and links are generated after analysis; and finally, web structure mining, which produces a structural summary of the website.
APA, Harvard, Vancouver, ISO, and other styles
37

"A Comparative Analysis of Search Engine Ranking Algorithms." International Journal of Advanced Trends in Computer Science and Engineering 10, no. 2 (April 5, 2021): 1247–52. http://dx.doi.org/10.30534/ijatcse/2021/1081022021.

Full text
Abstract:
A ranking algorithm is a way of positioning items on a scale. As the information and knowledge on the internet increase every day, the search engine's ability to deliver the most appropriate material to the customer, without any assistance in filtering through all of it, becomes more and more challenging, and searching for what the user requires is extremely difficult. In this research, an effort has been made to compare and analyze the most popular and effective search engines. The keywords were examined in the uniform resource locator, title tag and header, as well as the keyword's resemblance to the actual text. The PageRank algorithm computes a judgment of how relevant a webpage is by analyzing the quality and number of links connected to it. In this study, keyword relevancy and response time were measured for each search engine. It is observed that the Google search engine is faster than Bing and YouTube, and that Bing is the best search engine after Google. Moreover, YouTube is the fastest search engine in terms of video content search. The Google results were found to be the most accurate among all the search engines studied.
APA, Harvard, Vancouver, ISO, and other styles
38

Kameshwar, Ayyappa Kumar Sista, Luiz Pereira Ramos, and Wensheng Qin. "CAZymes-based ranking of fungi (CBRF): an interactive web database for identifying fungi with extrinsic plant biomass degrading abilities." Bioresources and Bioprocessing 6, no. 1 (December 2019). http://dx.doi.org/10.1186/s40643-019-0286-0.

Full text
Abstract:
Carbohydrate-active enzymes (CAZymes) are industrially important enzymes, which are involved in synthesis and breakdown of carbohydrates. CAZymes secreted by microorganisms, especially fungi, are widely used in industries. However, identifying an ideal fungal candidate is a costly and time-consuming process. In this regard, we have developed a web-database “CAZymes Based Ranking of Fungi (CBRF)” for sorting and selecting an ideal fungal candidate based on their genome-wide distribution of CAZymes. We have retrieved the complete annotated proteomic data of 443 published fungal genomes from the JGI-MycoCosm web-repository for the CBRF web-database construction. The CBRF web-database was developed using open source computing programming languages such as MySQL, HTML, CSS, bootstrap, jQuery, JavaScript and Ajax frameworks. The CBRF web-database sorts the complete annotated list of fungi based on three selection functionalities: (a) to sort either by ascending (or) descending orders; (b) to sort the fungi based on a selected CAZy group and class; (c) to sort fungi based on their individual lignocellulolytic abilities. We have also developed a simple and basic webpage “S-CAZymes” using HTML, CSS and JavaScript languages. The global search functionality of S-CAZymes enables the users to understand and retrieve information about a specific carbohydrate-active enzyme and its current classification in the corresponding CAZy family. S-CAZymes is a supporting web page which can be used in combination with the CBRF web-database (identify the classification of a specific CAZyme in S-CAZymes and use this information to sort fungi using the CBRF web-database). The CBRF web-database and S-CAZymes webpage are hosted through Amazon® Web Services (AWS) available at http://13.58.192.177/RankEnzymes/about. We strongly believe that the CBRF web-database simplifies the process of identifying a suitable fungus both in academics and industries.
In future, we intend to update the CBRF web-database with the public release of new annotated fungal genomes.
APA, Harvard, Vancouver, ISO, and other styles
39

Michel, Franck, Gargominy Olivier, Benjamin Ledentec, and The Bioschemas Community. "Unleash the Potential of your Website! 180,000 webpages from the French Natural History Museum marked up with Bioschemas/Schema.org biodiversity types." Biodiversity Information Science and Standards 4 (September 29, 2020). http://dx.doi.org/10.3897/biss.4.59046.

Full text
Abstract:
The challenge of finding, retrieving and making sense of biodiversity data is being tackled by many different approaches. Projects like the Global Biodiversity Information Facility (GBIF) or Encyclopedia of Life (EoL) adopt an integrative approach where they republish, in a uniform manner, records aggregated from multiple data sources. With this centralized, siloed approach, such projects stand as powerful one-stop shops, but tend to reduce the visibility of other data sources that are not (yet) aggregated. At the other end of the spectrum, the Web of Data promotes the building of a global, distributed knowledge graph consisting of datasets published by independent institutions according to the Linked Open Data principles (Heath and Bizer 2011), such as Wikidata or DBpedia. Beyond these "sophisticated" infrastructures, websites remain the most common way of publishing and sharing scientific data at low cost. Thanks to web search engines, everyone can discover webpages. Yet, the summaries provided in results lists are often insufficiently informative to decide whether a web page is relevant with respect to some research interests, such that integrating data published by a wealth of websites is hardly possible. A strategy around this issue lies in annotating websites with structured, semantic metadata such as the Schema.org vocabulary (Guha et al. 2015). Webpages typically embed Schema.org annotations in the form of markup data (written in the RDFa or JSON-LD formats), which search engines harvest and exploit to improve ranking and provide more informative summarization. Bioschemas is a community effort working to extend Schema.org to support markup for Life Sciences websites (Michel and The Bioschemas Community 2018, Garcia et al. 2017). Bioschemas primarily re-uses existing terms from Schema.org, occasionally re-uses terms from third-party vocabularies, and when necessary proposes new terms to be endorsed by Schema.org. 
As of today, Bioschemas's biodiversity group has proposed the Taxon type*1 to support the annotation of any webpage denoting taxa, TaxonName to support more specifically the annotation of taxonomic names registries, and guidelines describing how to leverage existing vocabularies such as Darwin Core terms. To proceed further, the biodiversity community must now demonstrate its interest in having these terms endorsed by Schema.org: (1) through a critical mass of live markup deployments, and (2) by the development of applications capable of exploiting this markup data. Therefore, as a first step, the French National Museum of Natural History has marked up its natural heritage inventory website: over 180,000 webpages describing the species inventoried in French territories have been annotated with the Taxon and TaxonName types in the form of JSON-LD scripts (see example scripts). As an example, one can check the source of the Delphinus delphis page. In this presentation, by demonstrating that marking up existing webpages can be very inexpensive, we wish to encourage the biodiversity community to adopt this practice, engage in the discussion about biodiversity-related markup, and possibly propose new terms related e.g. to traits or collections. We believe that generalizing the use of such markup by the many websites reporting checklists, museum collections, occurrences, life traits etc. shall be a major step towards the generalized adoption of FAIR*2 principles (Wilkinson 2016), shall dramatically improve information discovery using search engines, and shall be a key accelerator for the development of novel, web-scale, biodiversity data integration scenarios.
APA, Harvard, Vancouver, ISO, and other styles
40

Vijaya, P., and Satish Chander. "LionRank: lion algorithm-based metasearch engines for re-ranking of webpages." Science China Information Sciences 61, no. 12 (November 20, 2018). http://dx.doi.org/10.1007/s11432-017-9343-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Wołoszyn, Paweł. "Assessment of ranking algorithms in complex networks." Computer Science and Mathematical Modelling, October 18, 2017, 45–51. http://dx.doi.org/10.5604/01.3001.0010.5521.

Full text
Abstract:
A particularly helpful search of a network such as the Internet or a citation network not only finds nodes that satisfy some criteria but also ranks those nodes for importance to create what amounts to a “reading list”. In the recent past, there has been a large interest across a number of research communities in the analysis of complex networks. A selected set of pages from the World Wide Web can be modeled as a directed graph, where nodes represent individual pages and links the connections between them. As the number of webpages to be ranked is in the billions, the computation is time-consuming and can take several days or more. Algorithms like PageRank, HITS, SALSA and their modifications face the challenge of dealing with the size of the processed data. The need for accelerated algorithms is clear. This article presents the characteristics of the three best-known ranking algorithms and the assumptions for new algorithm development, with first test runs.
42

"Extended Model for Privacy Enhanced Personalized Web Search Ranking System." International Journal of Innovative Technology and Exploring Engineering 8, no. 10 (August 10, 2019): 3271–75. http://dx.doi.org/10.35940/ijitee.j1194.0881019.

Abstract:
Our society depends heavily on web search to fulfill daily information needs, and millions of web pages are accessed every day. To meet user demand, new websites and webpages are continually added, and the growing size of web data makes it difficult to obtain useful information in a minimum of clicks. This has given personalization a major place in Web search. However, personalization can breach the user's privacy during search, and reconciling personalization with privacy is a leading issue in the current web environment. This paper aims at user satisfaction by applying a user-identification-based personalization approach in a web search engine. Besides personalizing results, the proposed model preserves privacy during personalization. The proposed system is intended to be user-friendly, requiring little effort while addressing privacy concerns.
43

Horn, Paul, and Lauren M. Nelsen. "Gradient and Harnack-type estimates for PageRank." Network Science, September 3, 2020, 1–19. http://dx.doi.org/10.1017/nws.2020.34.

Abstract:
Personalized PageRank has found many uses not only in the ranking of webpages but also in algorithmic design, due to its ability to capture certain geometric properties of networks. In this paper, we study the diffusion of PageRank: how varying the jumping (or teleportation) constant affects PageRank values. To this end, we prove a gradient estimate for PageRank, akin to the Li–Yau inequality for positive solutions to the heat equation (originally for manifolds, with later versions adapted to graphs).
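To make the role of the jumping constant concrete, the sketch below computes personalized PageRank on a tiny graph with the common recurrence pr = α·seed + (1 − α)·pr·W, where W is the random-walk matrix, and shows how increasing α concentrates mass at the seed node. The graph and iteration scheme are illustrative choices, not the setting analyzed in the paper.

```python
# Personalized PageRank on a tiny graph, as a function of the jumping
# (teleportation) constant alpha. Adjacency lists: node -> out-neighbors.
adjacency = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}

def personalized_pagerank(adjacency, seed, alpha, iterations=200):
    n = len(adjacency)
    pr = [0.0] * n
    pr[seed] = 1.0  # all mass starts at the seed
    for _ in range(iterations):
        nxt = [0.0] * n
        nxt[seed] = alpha  # jump back to the seed with probability alpha
        for v, nbrs in adjacency.items():
            for w in nbrs:
                # Otherwise take one random-walk step from v.
                nxt[w] += (1.0 - alpha) * pr[v] / len(nbrs)
        pr = nxt
    return pr

# A larger alpha keeps more probability mass at the seed node,
# i.e. the diffusion stays more local.
for alpha in (0.1, 0.5, 0.9):
    pr = personalized_pagerank(adjacency, seed=0, alpha=alpha)
    print(alpha, round(pr[0], 3))
```

Varying α in the loop above is exactly the kind of perturbation the paper studies: small α lets the walk diffuse far from the seed, while large α pins the distribution near it.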