Academic literature on the topic 'Microsoft Art Collection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Microsoft Art Collection.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Microsoft Art Collection"

1

Krenz, Joanna. "Ice Cream in the Cathedral: The Literary Failures and Social Success of Chinese Robot Poet Xiao Bing." Asiatische Studien - Études Asiatiques 74, no. 3 (2020): 547–81. http://dx.doi.org/10.1515/asia-2019-0024.

Full text
Abstract:
In May 2017, Xiao Bing, a popular Chinese chatbot built by Microsoft Research Asia, made her debut as a poet with Sunlight Has Lost Its Glass Windows, a collection marketed as having been created entirely by artificial intelligence. She learnt the art of poetry by “reading” the works of 519 modern Chinese poets, and her “inspiration” comes from pictures provided first by her programmers and later by netizens, who upload photographs through her website. Xiao Bing’s emergence made a splash in Chinese society and raised grave concerns among poets, who polemicized with her engineers. This essay traces Xiao Bing’s literary and media career, which includes both notable literary failures and notable commercial success, exploring her complex connections to technologies of power/knowledge as well as cultural phenomena that range from traditional Chinese poetry and poetry education to postmodern camp aesthetics. From within the renegotiation of the nature of poetry at the threshold of the posthuman era, I propose the critical notion of reading-as-playing to help poetry take advantage of its various entanglements and strictures in order to survive and co-shape the brave new world.
APA, Harvard, Vancouver, ISO, and other styles
2

Koenecke, Allison, Andrew Nam, Emily Lake, et al. "Racial disparities in automated speech recognition." Proceedings of the National Academy of Sciences 117, no. 14 (2020): 7684–89. http://dx.doi.org/10.1073/pnas.1915768117.

Full text
Abstract:
Automated speech recognition (ASR) systems, which use sophisticated machine-learning algorithms to convert spoken language to text, have become increasingly widespread, powering popular virtual assistants, facilitating automated closed captioning, and enabling digital dictation platforms for health care. Over the last several years, the quality of these systems has dramatically improved, due both to advances in deep learning and to the collection of large-scale datasets used to train the systems. There is concern, however, that these tools do not work equally well for all subgroups of the population. Here, we examine the ability of five state-of-the-art ASR systems—developed by Amazon, Apple, Google, IBM, and Microsoft—to transcribe structured interviews conducted with 42 white speakers and 73 black speakers. In total, this corpus spans five US cities and consists of 19.8 h of audio matched on the age and gender of the speaker. We found that all five ASR systems exhibited substantial racial disparities, with an average word error rate (WER) of 0.35 for black speakers compared with 0.19 for white speakers. We trace these disparities to the underlying acoustic models used by the ASR systems as the race gap was equally large on a subset of identical phrases spoken by black and white individuals in our corpus. We conclude by proposing strategies—such as using more diverse training datasets that include African American Vernacular English—to reduce these performance differences and ensure speech recognition technology is inclusive.
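The headline metric in this study is word error rate (WER). As a minimal illustration (not the authors' code), WER is conventionally computed as the word-level Levenshtein distance between a reference transcript and the ASR hypothesis, divided by the number of reference words:

```python
# Minimal sketch of word error rate (WER): word-level edit distance / reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("she had your dark suit", "she had a dark suit"))  # 0.2
```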
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Kai, Xiao Feng Shang, and Wei Jun Liu. "Realtime Measurement of Temperature Field during Direct Laser Deposition Shaping." Advanced Materials Research 143-144 (October 2010): 521–26. http://dx.doi.org/10.4028/www.scientific.net/amr.143-144.521.

Full text
Abstract:
Direct laser deposition shaping is a state-of-the-art rapid prototyping technology. It can directly fabricate metal parts layer by layer without any die, mold, fixture, or intermediate step, driven only by the laminated CAD model. Accordingly, how to improve the quality of the as-formed parts has become an urgent issue in this research field. It is well known that, as in other hot-working processes, the thermal history strongly influences the microstructure and mechanical properties of the parts. Because of the large quantity of heat introduced by the laser fabrication process, it is necessary to build a temperature-measurement platform to monitor and control the temperature field in real time during fabrication. Such a platform was therefore created: it communicates with the computer through a temperature data collection module and an interface standard conversion module, and acquires temperatures over serial communication using Microsoft programming software. The experimental results prove the validity of the platform, which can provide effective boundary conditions and experimental verification for numerical simulation. In addition, a desirable temperature distribution can be obtained through real-time process monitoring and effective parameter adjustment.
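The abstract describes acquiring temperatures from a data-collection module over a serial link. As a rough, hypothetical sketch of that acquisition step (not the authors' implementation, which used Microsoft programming tools; the port name, baud rate, and message format are all assumptions), the same idea in Python with pyserial looks like:

```python
# Sketch: log one temperature reading per line arriving over a serial port.
import time
import serial  # pip install pyserial

ser = serial.Serial("COM3", baudrate=9600, timeout=1)  # hypothetical port settings
with open("temperature_log.csv", "w") as log:
    log.write("time_s,temperature_C\n")
    start = time.time()
    while time.time() - start < 60:                    # sample for one minute
        raw = ser.readline().decode("ascii", errors="ignore").strip()
        if raw:                                        # module assumed to send one reading per line
            log.write(f"{time.time() - start:.2f},{raw}\n")
ser.close()
```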
APA, Harvard, Vancouver, ISO, and other styles
4

Petrache, Tatiana-Andreea, Traian Rebedea, and Stefan Trausan-Matu. "Interactive language learning - How to explore complex environments using natural language?" International Journal of User-System Interaction 13, no. 1 (2020): 18–32. http://dx.doi.org/10.37789/ijusi.2020.13.1.2.

Full text
Abstract:
Implicit knowledge about the physical world we live in is gained almost effortlessly through interaction with the environment. This knowledge cannot simply be inferred from language, as humans normally avoid stating what is trivially implied or observed in the world. This paper offers a novel perspective on progressing artificial intelligence toward understanding true language meaning through interaction with complex environments. The emerging field of text-based games seems to hold the key to such an endeavour. Text-based games framed as reinforcement learning problems have the potential of being a strategic path toward advancing real-world natural language applications - the human world itself is one of partial understanding through communication and acting on the world using language. We present a comparative study highlighting the importance of having a unified approach to learning agents that play families of text-based games, with the aim of establishing a benchmark that will enable the community to advance the state of the art. To this end, we look at the corpora and the first two winning solutions from the competition launched by Microsoft Research - First TextWorld Problems. The games from the proposed corpora share the same objective, cooking a meal after collecting ingredients from a modern house environment, with the layout and the recipes changing from one game to another.
APA, Harvard, Vancouver, ISO, and other styles
5

Umeh, Chinedu, and Simona Ionita. "Audit of high dose antipsychotic prescribing in the Havering community recovery team." BJPsych Open 7, S1 (2021): S354–S355. http://dx.doi.org/10.1192/bjo.2021.939.

Full text
Abstract:
Aims: The main aim of this audit was to determine the prevalence of high dose antipsychotic prescribing (HDAP) in the Havering Community Recovery Team (CRT). The secondary aim was to determine how well HDAP has been monitored and documented - specifically, whether the reasons for continuing and the risks and benefits have been discussed. Background: There is a focus on reducing HDAP because of the lack of evidence that it is efficacious and because smaller doses have an equivalent effect and are better tolerated. Similarly, the consensus of the Royal College of Psychiatrists is that any prescribing of high dose antipsychotics should be an 'explicit, time-limited individual trial with a distinct treatment target. There should be a clear plan for regular clinical review including safety monitoring. The high-dose regimen should only be continued if the trial shows evidence of benefit that is not outweighed by tolerability or safety problems.' Following a CQC inspection of NELFT in 2014, which found that the trust was failing to comply with the relevant requirements of the Health and Social Care Act 2008 with regard to safe use of medicines, yearly audits of inpatient HDAP have been undertaken. Although improvements have been made in the inpatient setting, no such audits have been performed in the community setting, and consequently there are no data in NELFT on community services' compliance with the above regulations. Method: The clinical records of all 349 patients in Havering CRT were screened using either RIO or GP letters from recent CPA reviews. A data collection and analysis tool was created using Microsoft Excel. Data collection and analysis were carried out by the project lead and checked by a fellow project member. Result: Of the 349 patients included for analysis, 16 (4.58%) were prescribed a high-dose antipsychotic. Of the 16 prescribed high dose antipsychotics, 0 of 16 had the high dose antipsychotic monitoring form available; 12 (75%) had well documented evidence of review of HDAP; 4 (25%) had no documented evidence of review of HDAP. Conclusion: There is a small group of patients receiving high dose antipsychotic therapy for whom better monitoring is needed. This should include education of staff regarding HDAP, better documentation in their care plans, and working with pharmacy to make HDAP monitoring forms widely available in the community.
APA, Harvard, Vancouver, ISO, and other styles
6

Foka-Nkwenti Christopher, Nguendo Yongsi H. Blaise, Noela Ambe Mpeh, and Nganou-Mouafo Madelle. "COVID-19 and food insecurity in Cameroon." GSC Advanced Research and Reviews 5, no. 2 (2020): 111–17. http://dx.doi.org/10.30574/gscarr.2020.5.2.0104.

Full text
Abstract:
Background: More than half of the world's population is currently facing a health crisis. As a result, millions of businesses have had to shut down either temporarily or permanently. With COVID-19 and its economic fallout now spreading in the poorest regions of the world, many more people will become poor and food-insecure. Increased food insecurity may act as a multiplier for the epidemic because of its negative health effects and the resulting increase in national starvation. The impacts of COVID-19 are particularly strong for people in the lower tail of the food insecurity distribution. In the current context, the effects of food insecurity could be made worse by the general rise in foodstuff prices. Objective: In this paper, we investigate the interaction between COVID-19 and the drop in food prices leading to food insecurity in Cameroon. Data collection: A rapid phone survey across the national territory (Cameroon) confirms the widespread impact of COVID-19 on households and food insecurity. Data collected in urban markets show that the main cities are highly affected by the COVID-19 crisis. The data retrieved were linked and processed in data editing software (Microsoft Office) to produce results in text and tabular format. Result: As the coronavirus crisis unfolds, disruptions in domestic food supply chains and loss of incomes and remittances are creating strong tensions and food insecurity in Cameroon. Despite stable prices for certain goods, most cities are experiencing varying levels of food price inflation at the retail level, reflecting supply disruptions due to COVID-19. Rising food prices have a greater impact on low- and middle-income consumers, since a larger share of their income is spent on food.
APA, Harvard, Vancouver, ISO, and other styles
7

Tolosa-Kline, Ayla, Elad Yom-Tov, Caitlin Hoffman, Cherie Walker-Baban, and Felicia M. T. Lewis. "Trojan Horse: An Analysis of Targeted Advertising to Reduce Sexually Transmitted Diseases Among YMSM." Health Education & Behavior 48, no. 5 (2021): 637–50. http://dx.doi.org/10.1177/10901981211000312.

Full text
Abstract:
Background: Men who have sex with men (MSM) increasingly use internet-based websites and geospatial apps to seek sex. Though these platforms may be useful for public health intervention, evaluations of such interventions are rare. We sought to evaluate the online behavior of young MSM of color in Philadelphia and the effectiveness of using ads to link them to DoYouPhilly.org, where users can order free condoms, lubricant, and sexually transmitted infection test kits delivered via the U.S. postal service. Method: Data collection and analyses were conducted in two phases. First, we performed keyword research and analyzed web browser logs using a proprietary data set owned by Microsoft. Subsequently, we ran a Google Ads campaign using the keywords identified in the preliminary phase, and directed targeted users to the DoYouPhilly.org condom or test kit ordering pages. Results were analyzed using MATLAB 2018. Results: Test kit advertisements received 5,628 impressions, 157 clicks, and 18 unique conversions. The condom advertisements received 128,007 impressions, 2,583 clicks, and 303 unique conversions. Correlation between the click-through rate and the conversion rate per keyword was ρ = −.35 (P = .0096) and per advertisement was ρ = .40 (P = .14). Keywords that directly related to condoms were most effective for condom ordering (42% conversion rate vs. ≤2% for other classes), while keywords emphasizing the adverse effects of unprotected sex were most effective in test kit ordering (91% conversion rate vs. 13% and 12% for other classes). Conclusions: Online advertisements seemed to affect real-world sexual health behavior, as measured by orders of condoms and test kits, among a group of young MSM living in the same community.
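The reported ρ values are Spearman rank correlations between per-keyword click-through rate and conversion rate. A minimal sketch of that calculation (not the study's code; the numbers below are invented for illustration):

```python
# Sketch: Spearman correlation between click-through rate and conversion rate per keyword.
from scipy.stats import spearmanr

# One (clicks/impressions, conversions/clicks) pair per keyword -- hypothetical values.
click_through_rate = [0.021, 0.045, 0.012, 0.033, 0.027]
conversion_rate    = [0.12,  0.04,  0.18,  0.06,  0.09]

rho, p_value = spearmanr(click_through_rate, conversion_rate)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```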
APA, Harvard, Vancouver, ISO, and other styles
8

Adel Moussa, Rasha, Fawziya Saleh Alhor, and Ben Min-Woo Illigens. "IS MINDFULNESS-BASED STRESS REDUCTION EFFECTIVE IN REDUCING STRESS DURING COVID-19 PANDEMIC AND INCREASING LEVEL OF SATISFACTION AMONG HEALTH CARE PROFESSIONALS? A META-ANALYSIS OF RCTs." International Journal of Integrative Medical Sciences 7, no. 9 (2020): 948–53. http://dx.doi.org/10.16965/ijims.2020.117.

Full text
Abstract:
Introduction: During the COVID-19 pandemic, healthcare workers are on the front lines, putting themselves and their families at risk, which could result in mental health problems. Stress is a major obstacle for healthcare personnel; it makes them less satisfied, less capable of making the best choices, and liable to difficulties when confronted with their patients, which affects patient care. Mindfulness-Based Stress Reduction (MBSR) is a program aimed at improving awareness of one's mental processes, becoming flexible, and acting with the principle of compassion (2). Many studies have proposed MBSR to help practitioners become less vulnerable to stressors; however, results have been inconclusive. Objective: To evaluate the effectiveness of the MBSR intervention in reducing stress and enhancing the level of satisfaction among healthcare professionals. Method and design: Meta-analysis of randomized controlled trials (RCTs). Data source: Medline, PsycINFO, PubMed, Web of Science, and the Cochrane Library Database, from 2009 to 2019, for related RCTs. Selection criteria: Published RCTs comparing Mindfulness-Based Stress Reduction with other modalities for stress reduction and improving the level of satisfaction among healthcare workers and stressed personnel were eligible for inclusion. Data collection and analysis: Data were entered and organized in Microsoft Excel 2010 and then exported to Comprehensive Meta-Analysis software version 3. Outcomes from multiple studies were pooled, adjusted cumulative outcomes were calculated, and the Z score method was used to test differences in means. Heterogeneity was tested with Cochran's Q test and I2. Results: The 6 included studies comprised 2,896 subjects. Perceived stress scores improved significantly more in the intervention (MBSR) group, with a pooled significant difference (mean change -3.47, SE 1.01, Z score 8.11) and no significant heterogeneity among studies. Job satisfaction improved significantly more in the MBSR group than in the comparison group, with a pooled significant difference (mean change 5.18, SE 1.23, Z score 13.2) and no significant heterogeneity. Conclusion: These findings support that the MBSR program is effective in reducing stress and increasing job satisfaction among healthcare professionals. KEYWORDS: Mindfulness, Health workers, Stress, COVID-19 Pandemic, Systematic review, RCTs.
APA, Harvard, Vancouver, ISO, and other styles
9

Shahbaz, Shumaila, and Richard Ward. "QI project: Improvement in quality of Seclusion Medical Review." BJPsych Open 7, S1 (2021): S218–S219. http://dx.doi.org/10.1192/bjo.2021.584.

Full text
Abstract:
Aims: To establish the improvements in the quality of seclusion medical review after introducing a template for completing the review. Background: The Mental Health Act Code of Practice outlines the standards of patient care while in seclusion. It also emphasises that supportive engagement/observation schedules should be reviewed in person and continued from the point an episode of seclusion is initiated. Furthermore, NICE has set standards for monitoring the side effect profile when prescribing psychotropics for such patients and for regular management review, and stresses staff training to ensure these standards. To improve the quality of the seclusion medical review, we completed an audit in July 2019 to ascertain whether medics were following Trust policy. We identified good results (above 90%) in the following areas: time of seclusion review; record keeping; management plan; and documentation of risk, mental state examination, and physical health. We also noticed that the following areas could be improved: prescribed medications (60%), medication side effects (40%), and physical observations (40%). After that audit, a template was designed and, following discussion with medics, incorporated into the existing documentation template, covering the audit standards used: time of review; reason for and duration of seclusion; psychiatric diagnosis; mental state examination/behaviour; physical health (including physical observations)/environment; medication (prescribed, rapid tranquilisation, side effects, or adverse effects); risk to self (DSH or accidental) and risks to others; and plan (frequency of physical observations/medical review, management, restrictions, exit plan for terminating seclusion, and the patient's capacity to understand it). Method: Retrospective data collection from 01.03.2020 to 30.08.2020; sample selection by random selection of a mixture of clinicians at different times and on different days of the week. Data analysis was carried out using Microsoft Excel. Result: We noticed a marked improvement in the quality of seclusion medical review (between 95% and 100%) after introducing the template. No major concerns were identified during the re-audit. Conclusion: We will continue to use the template for seclusion medical review, which has shown significant improvement in the quality of the reviews and will improve patient care. It also helped us to deliver person-centred care and safe practice, and we will continue teaching and training of doctors. This QIP motivated nurses to audit the nursing seclusion review and make the necessary changes.
APA, Harvard, Vancouver, ISO, and other styles
10

Shahbaz, Shumaila, and Richard Ward. "Quality of seclusion medical review according to trust guidelines." BJPsych Open 7, S1 (2021): S219. http://dx.doi.org/10.1192/bjo.2021.585.

Full text
Abstract:
Aims: We assessed whether medics were following Trust policy when conducting seclusion medical reviews, identified the strengths in the quality of seclusion medical review, and identified the areas needing improvement in order to improve the quality and standards of patient care and safety and to reduce risks. Background: The Mental Health Act Code of Practice sets an expectation that mental health services follow good standards for restrictive interventions (use of restraint, seclusion, and rapid tranquilisation). Medical reviews provide an opportunity to evaluate and amend the seclusion management plan. This clinical audit examined the quality of record keeping for seclusion reviews by junior doctors, staff grades, and consultants at different times (day, night, and weekend). Method: Data analysis was carried out using Microsoft Excel. The audit had Humber Teaching NHSFT approval. We assessed electronic healthcare records. Data collection was carried out retrospectively in 2019 (n = 40) using the following parameters: 1) a review of the patient's physical and psychiatric health; 2) an assessment of medication prescribed and adverse effects of medication; 3) a review of observations required; 4) an assessment of the risk posed by the patient to others; 5) an assessment of any risk to the patient from deliberate or accidental self-harm; 6) an assessment of the need for continuing seclusion, and whether it is possible for seclusion measures to be applied more flexibly or in a less restrictive manner; 7) time of seclusion review (within the first hour after seclusion and then every 4 hours until the internal MDT; after the MDT, twice a day); 8) record keeping. Result: Key successes (above 80%): time of seclusion review (within the first hour or when required); record keeping (accurate time and place for clinical notes); plan for continuing need for seclusion; good documentation of risk to self and risk to others; good documentation of mental state examination; and comments on physical health, although this can be improved. Key concerns (less than 60%): prescribed medications, medication side effects, and physical observations. Conclusion: Medics are missing some important parts of the seclusion medical review. We developed a template for the seclusion medical review according to Trust guidelines, which are based on the Code of Practice, to incorporate into the already existing seclusion review form. We also delivered teaching and training to doctors and showed junior doctors an example of documentation. We will re-audit in 1 year's time to assess improvement.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Microsoft Art Collection"

1

Collection, Microsoft Art. Microsoft Art Collection: 25 years of celebrating creativity and inspiring innovation. Microsoft, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Clark-Langager, Sarah A. Photographs from America: Selections from the collections of Seafirst Bank, Microsoft Corporation, the Washington Art Consortium. Edited by Seafirst Corporation, Microsoft Corporation, Washington Art Consortium, and Western Gallery (Western Washington University). The Western Gallery, Western Washington University, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Smith, Katherine, Michael I. Duke, L. Murphy Smith, and Lawrence C. Smith. Microsoft Excel for Macroeconomics. Prentice Hall, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Microsoft Art Collection"

1

Freudenberg, Nicholas. "Social Connections." In At What Cost. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190078621.003.0007.

Full text
Abstract:
How individuals connect to others, buy wanted products, and work to achieve shared goals determine their opportunities for health and life success. In this century, companies like Google, Amazon, Facebook, Apple, and Microsoft now decide how people can connect with others. By collecting data on purchases, behavior, and beliefs from their customers’ hardware, digital and cellphone use, Big Tech companies have created surveillance capitalism where personal data is a commodity to buy and sell. By targeting users with digital ads for unhealthy products; giving bullies access to a global audience; using likes and dislikes to polarize people into opposing factions; or selling personal information to advertisers and special interests, these companies have compromised health, democracy, and privacy. In response, tech workers, social media users, privacy groups, and anti-monopoly reformers have challenged the domination of Big Tech companies and forged ways to use technology for human well-being instead of corporate profit.
APA, Harvard, Vancouver, ISO, and other styles
2

Hai-Jew, Shalin. "Sampling Public Sentiment Using Related Tags (and User-Created Content) Networks from Social Media Platforms." In Enhancing Qualitative and Mixed Methods Research with Technology. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-6493-7.ch014.

Full text
Abstract:
The broad popularity of social content-sharing sites like Flickr and YouTube has enabled the public to access a variety of photographs and videos on a wide range of topics. In addition to these resources, some new capabilities in multiple software programs enable the extraction of related tags networks from these collections. Related tags networks are relational contents built on the descriptive metadata created by the creators of the digital contents. This chapter offers some insights on how to understand public sentiment (inferentially and analytically) from related tags and content networks from social media platforms. This indirect approach contributes to Open-Source Intelligence (OSINT) with nuanced information (and some pretty tight limits about assertions and generalizability). The software tools explored for related tags data extractions include Network Overview, Discovery, and Exploration for Excel (NodeXL) (an open-source graph visualization tool which is an add-in to Microsoft Excel), NCapture in NVivo 10 (a commercial qualitative data analysis tool), and Maltego Tungsten (a commercial penetration-testing Internet-network-extraction tool formerly known as Maltego Radium).
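The chapter's related-tags networks are built from the descriptive tags that content creators attach to their media. A minimal sketch of the underlying idea, using networkx rather than the NodeXL, NCapture, or Maltego tools the chapter actually covers (the tag sets below are invented for illustration):

```python
# Sketch: build a related-tags (tag co-occurrence) network from tagged media items.
from itertools import combinations
import networkx as nx

tagged_items = [
    {"sunset", "beach", "travel"},
    {"beach", "travel", "family"},
    {"sunset", "city", "travel"},
]

graph = nx.Graph()
for tags in tagged_items:
    for a, b in combinations(sorted(tags), 2):
        # Increment the edge weight each time two tags appear on the same item.
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

# The strongest co-occurrences hint at the themes (and, indirectly, sentiment) of the collection.
print(sorted(graph.edges(data="weight"), key=lambda e: -e[2])[:5])
```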
APA, Harvard, Vancouver, ISO, and other styles
3

Bulazel, Alexei, Dominic DiFranzo, John S. Erickson, and James A. Hendler. "The Importance of Authoritative URI Design Schemes for Open Government Data." In Information Retrieval and Management. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5191-1.ch097.

Full text
Abstract:
A major challenge when working with open government data is managing, connecting, and understanding the links between references to entities found across multiple datasets when these datasets use different vocabularies to refer to identical entities (i.e.: one dataset may refer to Microsoft as “Microsoft”, another may refer to the company by its SEC filing number as “0000789019”, and a third may use its stock ticker “MSFT”.) In this paper the authors propose a naming scheme based on Web URLs that enables unambiguous naming and linking of datasets and, more importantly, data elements, across the Web. They further describe their ongoing work to demonstrate the implementation and authoritative management of such schemes through a class of web service they refer to as the “instance hub”. When working with linked government data, provided either directly from governments via open government programs or through other sources, the issue of resolving inconsistencies in naming schemes is particularly important, as various agencies have disparate conventions for referring to the same concepts and entities. Using linked data technologies the authors have created instance hubs to assist in the management and linking of entity references for collections of categorically and hierarchically related entities. Instance hubs are of particular interest to governments engaged in the publication of linked open government data, as they can help data consumers make better sense of published data and can provide a starting point for development of linked data applications. In this paper the authors present their findings from the ongoing development of a prototype instance hub at the Tetherless World Constellation at Rensselaer Polytechnic Institute (TWC RPI). The TWC RPI Instance Hub enables experimentation and verification of proposed URI design schemes for open government data, especially those developed at TWC in collaboration with the United States Data.gov program. They discuss core principles of the TWC RPI Instance Hub design and implementation, and summarize how they have used their instance hub to demonstrate the possibilities for authoritative entity references across a number of heterogeneous categories commonly found in open government data, including countries, federal agencies, states, counties, crops, and toxic chemicals.
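The core problem the chapter illustrates with "Microsoft", "0000789019", and "MSFT" is alias resolution to a single authoritative URI. A minimal sketch of that lookup (not the TWC RPI instance hub implementation; the URI and alias table are invented for illustration):

```python
# Sketch: resolve dataset-specific identifiers for the same entity to one authoritative URI.
CANONICAL_URI = {
    # alias as it appears in a source dataset -> authoritative entity URI (hypothetical)
    "Microsoft":  "http://example.org/id/company/microsoft",
    "0000789019": "http://example.org/id/company/microsoft",  # SEC filing number
    "MSFT":       "http://example.org/id/company/microsoft",  # stock ticker
}

def resolve(alias: str):
    """Return the authoritative URI for a dataset-specific alias, or None if unknown."""
    return CANONICAL_URI.get(alias.strip())

for alias in ("MSFT", "0000789019", "Contoso"):
    print(alias, "->", resolve(alias))
```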
APA, Harvard, Vancouver, ISO, and other styles
4

Subraya, B. M. "Introduction to Performance Monitoring and Tuning." In Integrated Approach to Web Performance Testing. IGI Global, 2006. http://dx.doi.org/10.4018/978-1-59140-785-0.ch010.

Full text
Abstract:
For any applications to be performance conscious, its performance must be monitored continuously. Monitoring performance is a necessary part of the preventive maintenance of the application. By monitoring, we obtain performance data which are useful in diagnosing performance problems under operational conditions. Based on data collected through monitoring, one can define a baseline — a range of measurements that represent acceptable performance under typical operating conditions. This baseline provides a reference point that makes it easier to spot problems when they occur. In addition, during troubleshooting system problems, performance data give information about the behavior of system resources at the time the problem occurs, which is useful in pinpointing the cause. In order to monitor the system, the operational environment provides various parameters implemented through counters for collection of performance data. Applications developed must ultimately be installed and run on a specific operating system. Hence, applications performance also depends on factors that govern the operating system. Each operating system has its own set of performance parameters to monitor and tune for better performance. Performance of applications also depends on the architectural level monitoring and tuning. However, architectural design depends on specific technology. Hence, technology level monitoring and tuning must be addressed for better results. To achieve all these, proper guidelines must be enforced at various stages for monitoring and tuning. All the previous chapters, together, described the performance testing from concept to reality whereas this chapter highlights aspects of monitoring and tuning to specific technologies. This chapter provides an overview of monitoring and tuning applications with frameworks in Java and Microsoft .NET technologies. Before addressing the technology specific performance issues, we need to know the overall bottlenecks that arise in Web applications.
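The chapter's notion of a baseline - a range of measurements representing acceptable performance under typical conditions - can be sketched in a few lines. This example uses Python's psutil to sample generic OS counters; it illustrates the idea only and is not the Windows, .NET, or Java counter tooling the chapter goes on to discuss:

```python
# Sketch: sample system counters and derive a baseline range for later comparison.
import statistics
import psutil  # pip install psutil

cpu_samples, mem_samples = [], []
for _ in range(30):                                   # roughly 30 seconds of samples
    cpu_samples.append(psutil.cpu_percent(interval=1.0))
    mem_samples.append(psutil.virtual_memory().percent)

baseline = {
    "cpu_percent": (min(cpu_samples), statistics.mean(cpu_samples), max(cpu_samples)),
    "mem_percent": (min(mem_samples), statistics.mean(mem_samples), max(mem_samples)),
}
print(baseline)  # reference point for spotting abnormal behaviour later
```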
APA, Harvard, Vancouver, ISO, and other styles
5

Udoh, Emmanuel. "Open Source Database Technologies." In Encyclopedia of Multimedia Technology and Networking, Second Edition. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch150.

Full text
Abstract:
The free or open source software (OSS) movement, pioneered by Richard Stallman in 1983, is gaining mainstream acceptance and challenging the established order of the commercial software world. The movement is taking root in various aspects of software development, namely operating systems (Linux), Web servers (Apache), databases (MySQL), and scripting languages (PHP) to mention but a few. The basic tenet of the movement is that the underlying code of any open source software should be freely viewable, modifiable, or redistributable by any interested party, as enunciated under the copyleft concept (Stallman, 2002). This is in sharp contrast to the proprietary software (closed source), in which the code is controlled under the copyright laws. In the contemporary software landscape, the open source movement can no longer be overlooked by any major players in the industry, as the movement portends a paradigm shift and is forcing a major rethinking of strategy in the software business. For instance, companies like Oracle, Microsoft, and IBM now offer the lightweight versions of their proprietary flagship products to small-to-medium businesses at no cost for product trial (Samuelson, 2006). These developments are signs of the success of the OSS movement. Reasons abound for the success of the OSS, viz. the collective effort of many volunteer programmers, flexible and quick release rate, code availability, and security. On the other hand, one of the main disadvantages of OSS is the limited technical support, as it may be difficult to find an expert to help an organization with system setup or maintenance. Due to the extensive nature of OSS, this article will only focus on the database aspects. A database is one of the critical components of the application stack for an organization or a business. Increasingly, open-source databases (OSDBs) such as MySQL, PostgreSQL, MaxDB, Firebird, and Ingres are coming up against the big three commercial proprietary databases: Oracle, SQL Server, and IBM DB2 (McKendrick, 2006; Paulson, 2004; Shankland, 2004). Big companies like Yahoo and Dell are now embracing OSDBs for enterprise-wide applications. According to the Independent Oracle Users Group (IOUG) survey, 37% of enterprise database sites are running at least one of the major brands of open source databases (McKendrick, 2006). The survey further finds that the OSDBs are mostly used for single function systems, followed by custom home-grown applications and Web sites. But critics maintain that these OSDBs are used for nonmission critical purposes, because IT organizations still have concerns about support, security, and management tools (Harris, 2004; Zhao & Elbaum, 2003).
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Qing, Yi Zhuang, Jun Yang, and Yueting Zhuang. "Multimedia Information Retrieval at a Crossroad." In Encyclopedia of Multimedia Technology and Networking, Second Edition. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch134.

Full text
Abstract:
From late 1990s to early 2000s, the availability of powerful computing capability, large storage devices, high-speed networking, and especially the advent of the Internet, led to a phenomenal growth of digital multimedia content in terms of size, diversity, and impact. As suggested by its name, “multimedia” is a name given to a collection of data of multiple types, which include not only “traditional multimedia” such as images and videos, but also emerging media such as 3D graphics (like VRML objects) and Web animations (like Flash animations). Furthermore, relevant techniques have been developed for a growing number of applications, ranging from document editing software to digital libraries and many Web applications. For example, most people who have used Microsoft Word have tried to insert pictures and diagrams into their documents, and they have the experience of watching online video clips such as movie trailers from Web sites such as YouTube.com. Multimedia data have been available in every corner of the digital world. With the huge volume of multimedia data, finding and accessing the multimedia documents that satisfy people’s needs in an accurate and efficient manner becomes a nontrivial problem. This problem is referred to as multimedia information retrieval. The core of multimedia information retrieval is to compute the degree of relevance between users’ information needs and multimedia data. A user’s information need is expressed as a query, which can be in various forms such as a line of free text like “Find me the photos of George Washington,” a few keywords like “George Washington photo,” a media object like a sample picture of George Washington, or their combinations. On the other hand, multimedia data are represented using a certain form of summarization, typically called index, which is directly matched against queries. Similar to a query, the index can take a variety of forms, including keywords, visual features such as color histogram and motion vector, depending on the data and task characteristics. For textual documents, mature information retrieval (IR) technologies have been developed and successfully applied in commercial systems such as Web search engines. In comparison, the research on multimedia retrieval is still in its early stage. Unlike textual data, which can be well represented by term vectors that are descriptive of data semantics, multimedia data lack an effective, semantic-level representation that can be computed automatically, which makes multimedia retrieval a much harder research problem. On the other hand, the diversity and complexity of multimedia data offer new opportunities for the retrieval task to be leveraged by the techniques in other research areas. In fact, research on multimedia retrieval has been initiated and investigated by researchers from areas of multimedia database, computer vision, natural language processing, human-computer interaction, and so forth. Overall, it is currently a very active research area that has many interactions with other areas. In the coming sections, we will overview the techniques for multimedia information retrieval, followed by a review on the applications and challenges in this area. Then, the future trends will be discussed, and some important terms in this area are defined at the end of this chapter.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Microsoft Art Collection"

1

Gunaratna, Kalpa, Amir Hossein Yazdavar, Krishnaprasad Thirunarayan, Amit Sheth, and Gong Cheng. "Relatedness-based Multi-Entity Summarization." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/147.

Full text
Abstract:
Representing world knowledge in a machine processable format is important as entities and their descriptions have fueled tremendous growth in knowledge-rich information processing platforms, services, and systems. Prominent applications of knowledge graphs include search engines (e.g., Google Search and Microsoft Bing), email clients (e.g., Gmail), and intelligent personal assistants (e.g., Google Now, Amazon Echo, and Apple's Siri). In this paper, we present an approach that can summarize facts about a collection of entities by analyzing their relatedness in preference to summarizing each entity in isolation. Specifically, we generate informative entity summaries by selecting: (i) inter-entity facts that are similar and (ii) intra-entity facts that are important and diverse. We employ a constrained knapsack problem solving approach to efficiently compute entity summaries. We perform both qualitative and quantitative experiments and demonstrate that our approach yields promising results compared to two other stand-alone state-of-the-art entity summarization approaches.
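The paper frames fact selection for entity summaries as a constrained knapsack problem. A minimal sketch of that framing, not the authors' algorithm (their objective also rewards inter-entity similarity and intra-entity importance and diversity; the facts, scores, and costs below are invented for illustration):

```python
# Sketch: pick a best-scoring subset of candidate facts under a total-length budget (0/1 knapsack).
def knapsack_summary(facts, budget):
    """facts: list of (text, score, cost); returns a best-scoring subset within the budget."""
    best = {0: (0.0, [])}                              # cost used -> (total score, chosen facts)
    for text, score, cost in facts:
        for used, (total, chosen) in list(best.items()):
            new_used = used + cost
            if new_used <= budget:
                candidate = (total + score, chosen + [text])
                if candidate[0] > best.get(new_used, (-1.0, []))[0]:
                    best[new_used] = candidate
    return max(best.values())[1]

facts = [
    ("foundedBy: Bill Gates", 0.9, 20),
    ("headquarters: Redmond, Washington", 0.6, 30),
    ("industry: software", 0.7, 18),
    ("employees: ~221,000", 0.4, 19),
]
print(knapsack_summary(facts, budget=40))  # the highest-scoring facts that fit the budget
```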
APA, Harvard, Vancouver, ISO, and other styles
2

Lynn, Roby, Wafa Louhichi, Mahmoud Parto, Ethan Wescoat, and Thomas Kurfess. "Rapidly Deployable MTConnect-Based Machine Tool Monitoring Systems." In ASME 2017 12th International Manufacturing Science and Engineering Conference collocated with the JSME/ASME 2017 6th International Conference on Materials and Processing. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/msec2017-3012.

Full text
Abstract:
The amount of data that can be gathered from a machining process is often misunderstood, and even if these data are collected, they are frequently underutilized. Intelligent uses of data collected from a manufacturing operation can lead to increased productivity and lower costs. While some large-scale manufacturers have developed custom solutions for data collection from their machine tools, small- and medium-size enterprises need efficient and easily deployable methods for data collection and analysis. This paper presents three broad solutions to data collection from machine tools, all of which rely on the open-source and royalty-free MTConnect protocol: the first is a machine monitoring dashboard based on Microsoft Excel; the second is an open source solution using Python and MTConnect; and the third is a cloud-based system using Google Sheets. Time studies are performed on these systems to determine their capability to gather near real-time data from a machining process.
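The paper's second solution pairs Python with MTConnect. MTConnect agents expose machine state as XML over plain HTTP, so a very small poller is enough to get started. The sketch below is an illustration only, not the authors' dashboard code; the agent URL and the handful of data item tags are assumptions:

```python
# Sketch: poll an MTConnect agent's /current endpoint and print a few observation values.
import urllib.request
import xml.etree.ElementTree as ET

AGENT_URL = "http://localhost:5000/current"   # placeholder; point at a real MTConnect agent

with urllib.request.urlopen(AGENT_URL, timeout=5) as response:
    root = ET.fromstring(response.read())

# MTConnect documents are namespaced; strip namespaces so tags are easy to match.
for element in root.iter():
    tag = element.tag.split("}")[-1]
    if tag in ("Execution", "ControllerMode", "SpindleSpeed", "PathFeedrate"):
        print(f"{tag} ({element.get('timestamp')}): {element.text}")
```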
APA, Harvard, Vancouver, ISO, and other styles
3

Cruickshank, Peter, Hazel Hall, and Bruce Ryan. "Information literacy as a joint competence shaped by everyday life and workplace roles amongst Scottish community councillors." In ISIC: the Information Behaviour Conference. University of Borås, Borås, Sweden, 2020. http://dx.doi.org/10.47989/irisic2008.

Full text
Abstract:
Introduction: This paper addresses the information practices of hyperlocal democratic representatives, and their acquisition and application of information literacy skills. Method: 1034 Scottish community councillors completed an online questionnaire on the information-related activities they undertake as part of their voluntary roles, and the development of supporting competencies. The questions related to: information needs for community council work; preparation and onward dissemination of information gathered; factors that influence community councillors’ abilities to conduct their information-related duties. Analysis: Data were summarised for quantitative analysis using Microsoft Excel. Free text responses were analysed in respect of the themes from the quantitative analysis and literature. Results: Everyday life and workplace roles are perceived as the primary shapers of information literacy as a predominantly joint competence. Conclusion: The focus of information literacy development has traditionally been the contribution of formal education, yet this study reveals that prior employment, community and family roles are perceived as more important to the acquisition of relevant skills amongst this group. This widens the debate as to the extent to which information literacy is specific to particular contexts. This adds to arguments that information literacy may be viewed as a collective accomplishment dependent on a socially constructed set of practices.
APA, Harvard, Vancouver, ISO, and other styles
4

Lambiase, Nicole E., Douglas J. Nelson, Frank J. Falcone, Michael A. Wahlstrom, and Kristen G. De La Rosa. "Using Online Resources for an Advanced Vehicle Technology Engineering Competition." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-37934.

Full text
Abstract:
Advanced Vehicle Technology Competitions have adopted an online collaboration system to coordinate information sharing and dissemination among hundreds of people from numerous organizations and across multiple countries, including universities, competition organizers, and sponsors involved in the competitions. Microsoft SharePoint is a collection of software elements that includes web browser based collaboration functions, process management modules, search modules and a document-management platform that serves as the foundation for this online collaboration system. SharePoint is used to host a secure web site that accesses shared workspaces, information stores and documents, as well as threaded discussion forums. Users can manipulate controls called “web parts” or interact with pieces of content such as lists and document libraries. The overall team-based engineering education strategy is facilitated throughout the three year EcoCAR program by a two way flow of information between the teams and organizers. Safety and design rules are updated and posted for teams to access. Each team has their own secure document library area for posting required progress reports, design reports, safety documentation, and technical report deliverables that are scored as part of the competition. Scoring results with comments are returned to each team under the team specific site. Proprietary vehicle and component data are also made available, and can be restricted to only those teams that have approved non disclosure agreements with the sponsor. Specific subject and component-based forums are used for asynchronous, threaded exchange of information and questions to subject matter experts. Issues and solutions discovered by students are shared among all teams. The SharePoint Online Collaboration system has significantly improved the information-sharing, evaluation and communications capabilities of the Advanced Vehicle Technology Competitions across a vast audience. This has enabled us to significantly enhance the technical scope of the program and improve the educational value to the university participants.
APA, Harvard, Vancouver, ISO, and other styles
5

Clarke, Cody J., Simeon R. Eberz, and Ephraim F. Zegeye. "An Affordable and Portable Palpable System for Sensing Breast Tissue Abnormalities." In ASME 2020 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/smasis2020-2273.

Full text
Abstract:
Due to the high cost of equipment and lack of trained personnel, manual palpation is a preferred alternative breast examination technique over mammography. The process involves a thorough search pattern using trained fingers and applying adequate pressure, with the objective of identifying solid masses from the surrounding breast tissue. However, palpation requires skills that must be obtained through adequate training in order to ensure proper diagnosis. Consequently, palpation performance and reporting techniques have been inconsistent. Automating the palpation technique would optimize the performance of self-breast examination, optimize clinical breast examinations (CBE), and enable the visualization of breast abnormalities as well as assessing their mechanical properties. Various methods of reconstructing the internal mechanical properties of breast tissue abnormalities have been explored. However, all systems that have been reported are bulky and rely on complex electronic systems. Hence, they are both expensive and require trained medical professionals. The methods also do not involve palpation, a key element in CBE. This research aims to develop a portable and inexpensive automated palpable system that mimics CBE to quantitatively image breast lumps. The method uses a piezoresistive sensor equipped probe consisting of an electronic circuit for collecting deformation-induced electrical signals. The piezoresistive sensor is made by spraying microwave exfoliated graphite/latex blend on a latex sheet. Lumps can be detected by monitoring a change in electrical resistance caused by the deformation of the sensor which is induced by abnormalities in the breast tissue. The electrical signals are collected using a microcontroller and a pixelated image of the breast can be reconstructed. The research is still in progress, and this report serves as proof of concept testing by pressing the probe with hand pressure and reconstructing the electrical signals using Microsoft Excel. Four maps were created for qualitatively analyzing the result. The pressure maps clearly display areas where pressure was applied, indicating the potential of the probe in detecting breast tissue abnormalities. The pressure maps show the feasibility of using such a sensor for the application in CBE. Furthermore, a sensor such as this may also be capable of detecting the depth and size of masses within breast tissue, which may lead to a more accurate diagnosis. Better manufacturing, accuracy, precision, and real-time data feeds are areas of future consideration for this project. This project involves knowledge and applications from mechanical, electrical, computational, and materials engineering.
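In the described system, lumps appear as localized changes in sensor resistance, and the proof of concept reconstructed the readings into pressure maps in Microsoft Excel. A minimal sketch of that reconstruction step with simulated data (not the authors' workbook; the grid size, values, and the idea that readings arrive from the microcontroller are assumptions):

```python
# Sketch: turn a grid of piezoresistive readings into a pixelated pressure map.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
baseline = rng.normal(1000.0, 5.0, size=(8, 8))   # unloaded resistance, ohms (simulated)
pressed = baseline.copy()
pressed[3:5, 4:6] += 80.0                          # local deformation raises resistance

delta = pressed - baseline                         # change relative to the unloaded baseline
plt.imshow(delta, cmap="hot", interpolation="nearest")
plt.colorbar(label="Resistance change (ohms)")
plt.title("Simulated palpation pressure map")
plt.savefig("pressure_map.png")
```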
APA, Harvard, Vancouver, ISO, and other styles
6

Buzzetto-Hollywood, Nicole A. "Findings From an Examination of a Class Purposed to Teach the Scientific Method Applied to the Business Discipline." In InSITE 2021: Informing Science + IT Education Conferences. Informing Science Institute, 2021. http://dx.doi.org/10.28945/4774.

Full text
Abstract:
Aim/Purpose: This brief paper will provide preliminary insight into an institution's effort to help students understand the application of the scientific method as it applies to the business discipline through the creation of a dedicated, required course added to the curriculum of a mid-Atlantic minority-serving institution. In order to determine whether the under-consideration course satisfies designated student learning outcomes, an assessment regime was initiated that included examination of rubric data as well as the administration of a student perception survey. This paper summarizes the results of the early examination of the efficacy of the course under consideration. Background: A small, minority-serving university located in the United States conducted an assessment and determined that students entering a department of business following completion of their general education science requirements had difficulties transferring their understanding of the scientific method to the business discipline. Accordingly, the department decided to create a unique course offered to sophomore-standing students titled Principles of Scientific Methods in Business. The course was created by a group of faculty with input from a twenty-person department. Methodology: Rubrics used to assess a course term project were collected and analyzed in Microsoft Excel to measure student satisfaction of learning goals, and a student satisfaction survey was developed and administered to students enrolled in the course under consideration to measure perceived course value. Contribution: While the scientific method applies across the business and information disciplines, students often struggle to envision this application. This paper explores the implications of a course specifically purposed to engender the development and usage of logical and scientific reasoning skills in the business discipline by students in the lower level of a bachelor's degree program. The information conveyed in this paper hopefully makes a contribution in an area where there is still an insufficient body of research and where additional exploration is needed. Findings: For two semesters, rubrics were collected and analyzed representing the inclusion of 53 students. The target mean for the rubric was a 2.8 and the overall achieved mean was a 2.97, indicating that student performance met minimal expectations. Nevertheless, student deficiencies in three crucial areas were identified. According to the survey findings, as a result of the class students had a better understanding of the scientific method as it applies to the business discipline, are now better able to critically assess a problem, feel they can formulate a procedure to solve a problem, can test a problem-solving process, have a better understanding of how to formulate potential business solutions, understand how potential solutions are evaluated, and understand how business decisions are evaluated. Conclusion: Following careful consideration and discussion of the preliminary findings, the course under consideration was significantly enhanced. The changes were implemented in the fall of 2020 and initial data collected in the spring of 2021 is indicating measured improvement in student success as exhibited by higher rubric scores. 
Recommendations for Practitioners: These initial findings are promising and while considering student success, especially as we increasingly face a greater and greater portion of under-prepared students entering higher education, initiatives to build the higher order thinking skills of students via transdisciplinary courses may play an important role in the future of higher education. Recommendations for Researchers: Additional studies of transdisciplinary efforts to improve student outcomes need to be explored through collection and evaluation of rubrics used to assess student learning as well as by measuring student perception of the efficacy of these efforts. Impact on Society: Society needs more graduates who leave universities ready to solve problems critically, strategically, and with scientific reasoning. Future Research: This study was disrupted by the COVID-19 pandemic; however, it is resuming in late 2021 and it is the hope that a robust and detailed paper, with more expansive findings will eventually be generated. *** NOTE: This Proceedings paper was revised and published in the journal Issues in Informing Science and Information Technology, 18, 161-172. Click DOWNLOAD PDF to download the published paper. ***
APA, Harvard, Vancouver, ISO, and other styles
7

"Changing Paradigms of Technical Skills for Data Engineers." In InSITE 2018: Informing Science + IT Education Conferences: La Verne California. Informing Science Institute, 2018. http://dx.doi.org/10.28945/4001.

Full text
Abstract:
Aim/Purpose: [This Proceedings paper was revised and published in the 2018 issue of the journal Issues in Informing Science and Information Technology, Volume 15] This paper investigates the new technical skills that are needed for Data Engineering. Past research is compared to new research which creates a list of the top 20 technical skills required by a Data Engineer. The growing availability of Data Engineering jobs is discussed. The research methodology describes the gathering of sample data and then the use of Pig and MapReduce on AWS (Amazon Web Services) to count occurrences of Data Engineering technical skills from 100 Indeed.com job advertisements in July, 2017.
Background: A decade ago, Data Engineering relied heavily on the technology of Relational Database Management Systems (RDBMS). For example, Grisham, P., Krasner, H., and Perry, D. (2006) described an Empirical Software Engineering Lab (ESEL) that introduced Relational Database concepts to students with hands-on learning that they called “Data Engineering Education with Real-World Projects.” However, as seismic improvements occurred for the processing of large distributed datasets, big data analytics has moved into the forefront of the IT industry. As a result, the definition of Data Engineering has broadened and evolved to include newer technology that supports the distributed processing of very large amounts of data (e.g. the Hadoop Ecosystem and NoSQL Databases). This paper examines the technical skills that are needed to work as a Data Engineer in today’s rapidly changing technical environment. Research is presented that reviews 100 job postings for Data Engineers from Indeed (2017) during the month of July, 2017 and then ranks the technical skills in order of importance. The results are compared to earlier research by Stitch (2016) that ranked the top technical skills for Data Engineers in 2016 using LinkedIn to survey 6,500 people who identified themselves as Data Engineers.
Methodology: A sample of 100 Data Engineering job postings was collected and analyzed from Indeed during July, 2017. The job postings were pasted into a text file, and related words were then grouped together to make phrases. For example, the word “data” was put into context with other related words to form phrases such as “Big Data”, “Data Architecture” and “Data Engineering”. A text editor was used for this task, and its find/replace functionality proved very useful for the project. After making phrases, the large text file was uploaded to the Amazon cloud (AWS), and a Pig batch job using MapReduce was leveraged to count the occurrences of phrases and words within the text file. The resulting phrases/words with occurrence counts were downloaded to a personal computer (PC) and loaded into an Excel spreadsheet. Using a spreadsheet enabled the phrases/words to be sorted by occurrence count and facilitated the filtering out of irrelevant words. Another data-preparation task involved combining phrases or words that were synonymous. For example, the occurrence count for the acronym ELT and the occurrence count for the acronym ETL were added together to make an overall ELT/ETL occurrence count. ETL is a Data Warehousing acronym for Extracting, Transforming and Loading data. This task required knowledge of the subject area. Also, some words were counted in lower case and the same word was also counted in mixed or upper case, producing two or three occurrence counts for the same word. These different counts were added together to make an overall occurrence count for the word (e.g. word occurrence counts for Python and python were added together). Finally, the Indeed occurrence counts were sorted to allow for the identification of the top 20 technical skills needed by a Data Engineer.
Contribution: Provides new information about the technical skills needed by Data Engineers.
Findings: Twelve of the 20 Stitch (2016) report phrases/words that are highlighted in bold above matched the technical skills mentioned in the Indeed research. I considered C, C++ and Java a match to the broader category of Programming in the Indeed data. Although the ranked order of the two lists did not match, the top five ranked technical skills for both lists are similar. The reader of this paper might consider SQL, Python, and Hadoop/HDFS to be very important technical skills for a Data Engineer. Although the programming language R is very popular with Data Scientists, it did not make the top 20 skills for Data Engineering; it was in the overall list from Indeed. The R programming language is oriented towards analytical processing (e.g. used by Data Scientists), whereas the Python language is a scripting and object-oriented language that facilitates the creation of Data Pipelines (e.g. used by Data Engineers). Because the data was collected one year apart and from very different data sources, the timing of the data collection and the different data sources could account for some of the differences in the ranked lists. It is worth noting that the Indeed research ranked list introduced the technical skills of Design Skills, Spark, AWS (Amazon Web Services), Data Modeling, Kafka, Scala, Cloud Computing, Data Pipelines, APIs and AWS Redshift Data Warehousing to the top 20 ranked technical skills list. The Stitch (2016) report skills that did not have matches in the Indeed (2017) sample data were Linux, Databases, MySQL, Business Intelligence, Oracle, Microsoft SQL Server, Data Analysis and Unix. Although many of these Stitch top 20 technical skills were on the Indeed list, they did not make the top 20 ranked technical skills.
Recommendations for Practitioners: Some of the skills needed for Database Technologies are transferable to Data Engineering.
Recommendation for Researchers: None.
Impact on Society: There is not much peer-reviewed literature on the subject of Data Engineering; this paper will add new information to the subject area.
Future Research: I am developing a Specialization in Data Engineering for the MS in Data Science degree at our university.
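For illustration only, the phrase-counting step described in the Methodology above can be approximated locally with a short Python sketch. The study itself used a Pig batch job with MapReduce on AWS; the file name, phrase list, and synonym map below are assumptions made for this sketch, not the authors' actual artifacts.

# Hypothetical local approximation of the phrase-counting step; the paper
# itself ran a Pig/MapReduce batch job on AWS rather than this script.
from collections import Counter
import re

# Assumed inputs: a text file of pasted job postings and an illustrative
# list of skill phrases (the real study derived phrases with find/replace).
SKILL_PHRASES = ["big data", "data engineering", "python", "sql", "hadoop",
                 "spark", "etl", "elt", "kafka", "scala", "aws redshift"]
SYNONYMS = {"etl": "etl/elt", "elt": "etl/elt"}  # merge synonymous acronyms

with open("indeed_postings.txt", encoding="utf-8") as f:
    text = f.read().lower()  # fold case so "Python" and "python" count together

counts = Counter()
for phrase in SKILL_PHRASES:
    hits = len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    counts[SYNONYMS.get(phrase, phrase)] += hits  # ETL and ELT add together here

for skill, n in counts.most_common(20):  # ranked list of the top 20 skills
    print(f"{skill}\t{n}")

The sorting by occurrence count and the filtering of irrelevant words were done afterwards in Excel in the study; the most_common call above simply stands in for that ranking step.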
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Microsoft Art Collection"

1

Dempsey, Terri L. Handling the Qualitative Side of Mixed Methods Research: A Multisite, Team-Based High School Education Evaluation Study. RTI Press, 2018. http://dx.doi.org/10.3768/rtipress.2018.mr.0039.1809.

Full text
Abstract:
Attention to mixed methods research has increased in recent years, particularly among funding agencies that increasingly require a mixed methods approach for program evaluation. At the same time, researchers operating within large-scale, rapid-turnaround research projects are faced with the reality that collection and analysis of large amounts of qualitative data typically require an intense amount of project resources and time. However, practical examples of efficiently collecting and handling high-quality qualitative data within these studies are limited. More examples are also needed of procedures for integrating the qualitative and quantitative strands of a study from design to interpretation in ways that can facilitate efficiencies. This paper provides a detailed description of the strategies used to collect and analyze qualitative data in what the research team believed to be an efficient, high-quality way within a team-based mixed methods evaluation study of science, technology, engineering, and math (STEM) high school education. The research team employed an iterative approach to qualitative data analysis that combined matrix analyses with Microsoft Excel and the qualitative data analysis software program ATLAS.ti. This approach yielded a number of practical benefits. Selected preliminary results illustrate how this approach can simplify analysis and facilitate data integration.
APA, Harvard, Vancouver, ISO, and other styles
2

Evans, Julie, Kendra Sikes, and Jamie Ratchford. Vegetation classification at Lake Mead National Recreation Area, Mojave National Preserve, Castle Mountains National Monument, and Death Valley National Park: Final report (Revised with Cost Estimate). National Park Service, 2020. http://dx.doi.org/10.36967/nrr-2279201.

Full text
Abstract:
Vegetation inventory and mapping is a process to document the composition, distribution and abundance of vegetation types across the landscape. The National Park Service’s (NPS) Inventory and Monitoring (I&M) program has determined vegetation inventory and mapping to be an important resource for parks; it is one of 12 baseline inventories of natural resources to be completed for all 270 national parks within the NPS I&M program. The Mojave Desert Network Inventory & Monitoring (MOJN I&M) began its process of vegetation inventory in 2009 for four park units as follows: Lake Mead National Recreation Area (LAKE), Mojave National Preserve (MOJA), Castle Mountains National Monument (CAMO), and Death Valley National Park (DEVA). Mapping is a multi-step and multi-year process involving skills and interactions of several parties, including NPS, with a field ecology team, a classification team, and a mapping team. This process allows for compiling existing vegetation data, collecting new data to fill in gaps, and analyzing the data to develop a classification that then informs the mapping. The final products of this process include a vegetation classification, ecological descriptions and field keys of the vegetation types, and geospatial vegetation maps based on the classification. In this report, we present the narrative and results of the sampling and classification effort. In three other associated reports (Evens et al. 2020a, 2020b, 2020c) are the ecological descriptions and field keys. The resulting products of the vegetation mapping efforts are, or will be, presented in separate reports: mapping at LAKE was completed in 2016, mapping at MOJA and CAMO will be completed in 2020, and mapping at DEVA will occur in 2021. The California Native Plant Society (CNPS) and NatureServe, the classification team, have completed the vegetation classification for these four park units, with field keys and descriptions of the vegetation types developed at the alliance level per the U.S. National Vegetation Classification (USNVC). We have compiled approximately 9,000 existing and new vegetation data records into digital databases in Microsoft Access. The resulting classification and descriptions include approximately 105 alliances and landform types, and over 240 associations. CNPS also has assisted the mapping teams during map reconnaissance visits, follow-up on interpreting vegetation patterns, and general support for the geospatial vegetation maps being produced. A variety of alliances and associations occur in the four park units. Per park, the classification represents approximately 50 alliances at LAKE, 65 at MOJA and CAMO, and 85 at DEVA. Several riparian alliances or associations that are somewhat rare (ranked globally as G3) include shrublands of Pluchea sericea, meadow associations with Distichlis spicata and Juncus cooperi, and woodland associations of Salix laevigata and Prosopis pubescens along playas, streams, and springs. Other rare to somewhat rare types (G2 to G3) include shrubland stands with Eriogonum heermannii, Buddleja utahensis, Mortonia utahensis, and Salvia funerea on rocky calcareous slopes that occur sporadically in LAKE to MOJA and DEVA. Types that are globally rare (G1) include the associations of Swallenia alexandrae on sand dunes and Hecastocleis shockleyi on rocky calcareous slopes in DEVA. 
Two USNVC vegetation groups hold the highest number of alliances: 1) Warm Semi-Desert Shrub & Herb Dry Wash & Colluvial Slope Group (G541) has nine alliances, and 2) Mojave Mid-Elevation Mixed Desert Scrub Group (G296) has thirteen alliances. These two groups contribute significantly to the diversity of vegetation along alluvial washes and mid-elevation transition zones.
APA, Harvard, Vancouver, ISO, and other styles
