Academic literature on the topic 'Data Analytics in Litigation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data Analytics in Litigation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data Analytics in Litigation"

1

Kappiarathel, Shahsoor Muhammad. "Legal Tech and the Future of Litigation: Transforming Justice through Innovation." Indiana Journal of Arts & Literature 6, no. 2 (2025): 01–04. https://doi.org/10.5281/zenodo.14970194.

Full text
Abstract:
The legal profession is experiencing a fundamental transformation with the application of technology in the litigation process. Technological advancements like blockchain, cloud computing, and data analytics are revolutionizing the manner in which legal professionals handle case management, research, and the production of evidence. These innovations have improved efficiency, accessibility, and transparency to a significant degree, rendering litigation more efficient. This paper addresses the growing role of technology in litigation and the value it adds in document management, submission to court, and live collaborative work. Blockchain technology enables the protection of data and the authentication of legal documents, while cloud platforms allow remote access to case documents. Additionally, data-driven analytics are enabling legal professionals to make informed decisions, monitor trends in case law, and automate litigation tactics. However, alongside these advantages, the incorporation of technology into litigation raises a plethora of legal and ethical concerns. This paper analyses urgent concerns such as data privacy, cybersecurity threats, and the risks of overreliance on automated systems. It also addresses wider concerns of procedural fairness and the effect of technology on equal access to justice, especially for those with limited digital means. For the responsible and equitable incorporation of technology into litigation, this paper calls for legal reforms and regulatory guidelines that ensure due process and fairness. It stresses the importance of judicial oversight, ethical codes, and clear guidelines to avert potential abuse and maximize the advantages of technological innovation. By striking an equitable balance between innovation and legal protection, the justice system can utilize technology to enhance efficiency without undermining the integrity of the legal process.
APA, Harvard, Vancouver, ISO, and other styles
2

Molitor, Dominik, Wullianallur Raghupathi, Aditya Saharia, and Viju Raghupathi. "Exploring Key Issues in Cybersecurity Data Breaches: Analyzing Data Breach Litigation with ML-Based Text Analytics." Information 14, no. 11 (2023): 600. http://dx.doi.org/10.3390/info14110600.

Full text
Abstract:
While data breaches are a frequent and universal phenomenon, the characteristics and dimensions of data breaches are unexplored. In this novel exploratory research, we apply machine learning (ML) and text analytics to a comprehensive collection of data breach litigation cases to extract insights from the narratives contained within these cases. Our analysis shows stakeholders (e.g., litigants) are concerned about major topics related to identity theft, hacker, negligence, FCRA (Fair Credit Reporting Act), cybersecurity, insurance, phone device, TCPA (Telephone Consumer Protection Act), credit card, merchant, privacy, and others. The topics fall into four major clusters: “phone scams”, “cybersecurity”, “identity theft”, and “business data breach”. By utilizing ML, text analytics, and descriptive data visualizations, our study serves as a foundational piece for comprehensively analyzing large textual datasets. The findings hold significant implications for both researchers and practitioners in cybersecurity, especially those grappling with the challenges of data breaches.
3

Raghupathi, Viju, Jie Ren, and Wullianallur Raghupathi. "Understanding the nature and dimensions of litigation crowdfunding: A visual analytics approach." PLOS ONE 16, no. 4 (2021): e0250522. http://dx.doi.org/10.1371/journal.pone.0250522.

Full text
Abstract:
The escalating cost of civil litigation is leaving many defendants and plaintiffs unable to meet legal expenses such as attorney fees, court charges and others. This significantly impacts their ability to sue or defend themselves effectively. Related to this phenomenon is the ethics discussion around access to justice and crowdfunding. This article explores the dimensions that explain the phenomenon of litigation crowdfunding. Using data from CrowdJustice, a popular Internet fundraising platform used to assist in turning legal cases into publicly funded social cases, we study litigation crowdfunding through the lenses of the number of pledges, goal achievement, target amount, length of description, country, case category, and others. Overall, we see a higher number of cases seeking funding in the categories of human rights, environment, and judicial review. Meanwhile, the platform offers access to funding for other less prominent categories, such as voting rights, personal injury, intellectual property, and data & privacy. At the same time, donors are willing to donate more to cases related to health, politics, and public services. Also noteworthy is that while donors are willing to donate to education, animal welfare, data & privacy, and inquest-related cases, they are not willing to donate large sums to these causes. In terms of lawyer/law firm status, donors are more willing to donate to cases assisted by experienced lawyers. Furthermore, we also note that the higher the number of successful cases an attorney presents, the greater the amount raised. We analyzed valence, arousal, and dominance in case description and found they have a positive relationship with funds raised. Also, when a case description is updated on a crowdsourcing site, it ends up being more successful in funding—at least in the categories of health, immigration, and judicial review. This is not the case, however, for categories such as public service, human rights, and environment.
Our research addresses whether litigation crowdfunding, in particular, levels the playing field in terms of opening up financing opportunities for those individuals who cannot afford the costs of litigation. While it may support social justice, ethical concerns with regards to the kinds of campaigns must also be addressed. Most of the ethical concerns center around issues relating to both the fundraisers and donors. Our findings have ethical and social justice implications for crowdfunding platform design.
4

Nabirye H, Kato, and Asiimwe Kyomugisha T. "Interpreting Contracts: The Importance of Language Precision." Research Invention Journal of Current Issues in Arts and Management 4, no. 1 (2025): 1–4. https://doi.org/10.59298/rijciam/2025/411400.

Full text
Abstract:
Contracts are integral to modern economic and social systems, providing a structured framework for agreements. However, disputes often arise from ambiguous or imprecise language, necessitating robust principles of interpretation in contract law. This paper examines the critical role of language precision in drafting and interpreting contracts, addressing key concepts such as clarity, ambiguity, and contextual meaning. Through analysis of legal theories, case studies, and practical tools, the paper underscores the implications of precise drafting on reducing litigation risks and fostering equitable enforcement. It also examines emerging trends, including the integration of AI and data analytics to enhance contract clarity, positioning these advancements as transformative for future legal practice. Keywords: Contract interpretation, language precision, ambiguity, legal drafting, litigation risk.
5

Bhanu Pratap Singh. "ML and Legal Analytics: A Computational Approach to Case Outcome Prediction in Legal Management." Communications on Applied Nonlinear Analysis 32, no. 5s (2024): 43–50. https://doi.org/10.52783/cana.v32.2945.

Full text
Abstract:
The legal industry has undergone a transformation through the combination of machine learning and artificial intelligence techniques. This work focuses on the application of such approaches in legal management and explores how these techniques are useful in various aspects of legal services. The analysis draws on case studies from leading organisations such as Lex Machina, JP Morgan, Deloitte, IBM Watson, Allstate Insurance, and others. We show that machine learning is useful in improving efficiency and decision making in critical legal domains such as contract intelligence, IPR analysis, litigation risk assessment, and others. Combining ML with traditional legal practices offers many advantages in data analytics and pattern recognition.
6

Pujiyono and Sufmi Dasco Ahmad. "Legal Protection Carried Out by the Financial Service Authority in a Dispute between Consumers and Insurance Companies in Indonesia." International Journal of Social and Administrative Sciences 3, no. 1 (2018): 55–61. http://dx.doi.org/10.18488/journal.136.2018.31.55.61.

Full text
Abstract:
This study aims to determine the form of legal protection provided by the Financial Services Authority to consumers who experience disputes with insurance companies in Indonesia. This is normative legal research with a prescriptive approach. The data are secondary, consisting of primary and secondary legal materials. Data were collected through library study and analysed deductively using the syllogism method. The results show that the Financial Services Authority provides a form of repressive protection after a dispute arises between consumers and insurance services, a legal defence that contains many weaknesses. Disputes between consumers and insurance companies can be settled through litigation (in court) or non-litigation (out-of-court) mechanisms. The litigation route proceeds through the Commercial Court. The non-litigation route is carried out through an internal dispute resolution step, limited mediation facilitated by the Financial Services Authority, and finally external dispute resolution through an arbitration institution.
7

Hardian, Randy, Gustati Gustati, and Armel Yentifa. "Pengaruh Asimetri Informasi, Insentif Pajak, Risiko Litigasi, Ukuran Perusahaan Dan Financial Distress Terhadap Prudence Akuntansi (Studi Pada Perusahaan Sektor Property and Real Estate Yang Terdaftar di Bursa Efek Indonesia Periode 2021-2023)." Jurnal Ilmiah Raflesia Akuntansi 11, no. 1 (2025): 242–53. https://doi.org/10.53494/jira.v11i1.849.

Full text
Abstract:
This study aims to examine the influence of information asymmetry, tax incentives, litigation risk, company size, and financial distress on accounting prudence. This research employs a quantitative approach. The sample was obtained using purposive sampling, selecting samples based on predetermined criteria. The purposive sampling resulted in 130 observation data from property and real estate companies listed on the Indonesia Stock Exchange (IDX) during the 2021–2023 period. The analytical method used is multiple linear regression with SPSS version 25. The results of this study indicate that, partially, information asymmetry, tax incentives, litigation risk, company size, and financial distress have an effect on accounting prudence. Simultaneously, information asymmetry, tax incentives, litigation risk, company size, and financial distress affect accounting prudence.
8

Zambrano, Guillaume. "Case Law as Data: Prompt Engineering Strategies for Case Outcome Extraction with Large Language Models in a Zero-Shot Setting." Law, Technology and Humans 6, no. 3 (2024): 80–101. http://dx.doi.org/10.5204/lthj.3623.

Full text
Abstract:
This study explores the effectiveness of prompt optimization techniques for legal case outcome extraction using Large Language Models (LLMs). Two state-of-the-art LLMs, LLaMA3 70b and Mixtral 8x7b, are used in a zero-shot data extraction task on a diverse dataset of 400 French appellate court decisions. The results show that LLMs exhibit remarkable efficiency in extraction tasks. Our findings indicate that baseline prompts achieve high performance metrics, with a best F1 score of 0.980 and a worst F1 score of 0.853. Optimized prompts yield varying degrees of improvement, with a best F1 score of 0.994 and a worst F1 score of 0.912. While some optimized prompts demonstrate significant improvements, others exhibit minor or even negative changes. Our results suggest that the optimization process has a non-uniform impact on performance metrics, and the effectiveness of optimized prompts depends on the specific model and dataset being used. These results underscore the significance of prompt engineering in optimizing LLM performance for Legal Information Extraction and Litigation Analytics reliability.
9

Syaiful Anam. "PENDEKATAN DALAM PENYELESAIAN SENGKETA PERUSAHAAN ASURANSI." Ar-Ribhu : Jurnal Manajemen dan Keuangan Syariah 2, no. 1 (2021): 47–64. http://dx.doi.org/10.55210/arribhu.v2i1.562.

Full text
Abstract:
Introduction: Insurance is an agreement whereby the insurer binds himself to the insured by accepting a premium to compensate him for the loss, damage, or loss of expected profit that he may suffer as a result of an uncertain event.
Methods: The method used in this research is a normative juridical approach with analytical descriptive specifications; data collection techniques use primary and secondary data.
Results: Today, the Indonesian people have realized the important role of the insurance industry in providing security guarantees against risks, so they gradually bind themselves to insurance companies in Indonesia. However, like companies in general, insurance companies have also experienced disputes, such as cases of insurance claims that were not disbursed or were rejected by the insurance company. Such cases require resolution as regulated in law. Based on Financial Services Authority Circular Letter Number 2/SEOJK.07/2014, there are two alternative dispute resolution options between financial services business actors and consumers: litigation and non-litigation.
Conclusion and suggestion: Insurance disputes often occur between insurance companies and the consumers bound to them. The law provides two alternatives for resolving insurance disputes: litigation and non-litigation. For non-litigation, the Financial Services Authority has established an Alternative Institution for Financial Services Sector Dispute Resolution, as regulated in POJK Number 61/POJK.07/2020.
Keywords: dispute, resolution, insurance
10

Kristina, Dian, and Gede Adi Yuniarta. "Pengaruh Intensitas Modal, Financial Distress, Insentif Pajak dan Risiko Litigasi terhadap Konservatisme Akuntansi Pada Perusahaan Manufaktur Sektor Industri Barang Konsumsi yang Terdaftar di Bursa Efek Indonesia Tahun 2016-2020." Jurnal Akuntansi Profesi 12, no. 2 (2021): 460. http://dx.doi.org/10.23887/jap.v12i2.36433.

Full text
Abstract:
This study was aimed at finding out the effect of (1) capital intensity, (2) financial distress, (3) tax incentives, and (4) litigation risk on accounting conservatism. The type of research used is quantitative. The population comprises all manufacturing companies in the consumer goods industry sector listed on the Indonesia Stock Exchange in 2016-2020, known to be 54 companies, and the sampling technique uses the purposive sampling method, yielding a sample of 23 companies × 5 years = 115 financial statement observations. The data used are secondary, and the analytical techniques are the classical assumption tests, multiple linear regression analysis, hypothesis testing, and the coefficient of determination. The results of the multiple linear regression analysis conclude that capital intensity, financial distress, tax incentives, and litigation risk each have a significant partial effect on accounting conservatism in manufacturing companies in the consumer goods industry sector listed on the Indonesia Stock Exchange in 2016-2020. Keywords: Capital Intensity, Financial Distress, Tax Incentives, Litigation Risk, and Accounting Conservatism.
More sources

Dissertations / Theses on the topic "Data Analytics in Litigation"

1

Tata, Maitreyi. "Data analytics on Yelp data set." Kansas State University, 2017. http://hdl.handle.net/2097/38237.

Full text
Abstract:
Master of Science, Department of Computing and Information Sciences, William H. Hsu. In this report, I describe a query-driven system which helps in deciding which restaurant to invest in, or which area of a specific place is a good location for opening a new restaurant. Analysis is performed on existing businesses in every state, based on factors such as the average star rating, the total number of reviews associated with a specific restaurant, the price range of the restaurant, etc. The results give an idea of the successful restaurants in a city, which helps in deciding where to invest and what to keep in mind when starting a new business. The main scope of the project is to concentrate on analytics and data visualization.
2

Le, Quoc Do. "Approximate Data Analytics Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234219.

Full text
Abstract:
Today, most modern online services make use of big data analytics systems to extract useful information from the raw digital data. The data normally arrives as a continuous data stream at a high speed and in huge volumes. The cost of handling this massive data can be significant. Providing interactive latency in processing the data is often impractical due to the fact that the data is growing exponentially and even faster than Moore’s law predictions. To overcome this problem, approximate computing has recently emerged as a promising solution. Approximate computing is based on the observation that many modern applications are amenable to an approximate, rather than the exact output. Unlike traditional computing, approximate computing tolerates lower accuracy to achieve lower latency by computing over a partial subset instead of the entire input data. Unfortunately, the advancements in approximate computing are primarily geared towards batch analytics and cannot provide low-latency guarantees in the context of stream processing, where new data continuously arrives as an unbounded stream. In this thesis, we design and implement approximate computing techniques for processing and interacting with high-speed and large-scale stream data to achieve low latency and efficient utilization of resources. To achieve these goals, we have designed and built the following approximate data analytics systems:
• StreamApprox—a data stream analytics system for approximate computing. This system supports approximate computing for low-latency stream analytics in a transparent way and has an ability to adapt to rapid fluctuations of input data streams. In this system, we designed an online adaptive stratified reservoir sampling algorithm to produce approximate output with bounded error.
• IncApprox—a data analytics system for incremental approximate computing. This system adopts approximate and incremental computing in stream processing to achieve high-throughput and low-latency with efficient resource utilization. In this system, we designed an online stratified sampling algorithm that uses self-adjusting computation to produce an incrementally updated approximate output with bounded error.
• PrivApprox—a data stream analytics system for privacy-preserving and approximate computing. This system supports high utility and low-latency data analytics and preserves user’s privacy at the same time. The system is based on the combination of privacy-preserving data analytics and approximate computing.
• ApproxJoin—an approximate distributed joins system. This system improves the performance of joins — critical but expensive operations in big data systems. In this system, we employed a sketching technique (Bloom filter) to avoid shuffling non-joinable data items through the network as well as proposed a novel sampling mechanism that executes during the join to obtain an unbiased representative sample of the join output.
Our evaluation based on micro-benchmarks and real world case studies shows that these systems can achieve significant performance speedup compared to state-of-the-art systems by tolerating negligible accuracy loss of the analytics output. In addition, our systems allow users to systematically make a trade-off between accuracy and throughput/latency and require no/minor modifications to the existing applications.
3

Canon, Moreno Javier Mauricio 1977. "Leading data analytics transformations." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111472.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 77-79).
The phenomenal success of big technology companies founded with a strong emphasis on data has epitomized the rise of the new "digital economy." Large traditional organizations that were not long ago "on top of the world" are now at a crossroads. Their business models seem threatened by newcomers as they face pressure to "transform" and "modernize." Publicity has reinforced the perception that data can now be exploited and turned into a source of competitive advantage. In this context, data analytics presumably offers a vehicle to hasten this transformation. Who are the individuals leading these transformation efforts? Where do they come from? What are their challenges and perspectives? This thesis attempted to answer these questions and, by doing so, uncover the "faces behind the leadership titles." Interviews of 33 individuals leading data analytics in large traditional organizations and under different capacities (i.e., at the C-Suite, at the senior leadership level, and in middle management) had a few elements in common: they articulated the difficulty of change and the significant challenges in balancing strategic design with political savviness and cultural awareness. At their core, these are true leadership stories. Change management processes and the "Three Perspectives on Organizations" framework offer mechanisms to better understand the root causes for inhibitors of transformation and provide a path to guide data analytics initiatives. Whether data analytics proves to be a "passing fad" or not, by now it has served as a catalyst for large traditional organizations to embark on transformation initiatives and reexamine ways to remain relevant. Leadership stories will most certainly abound as these organizations attempt to find ways to survive and prosper in what is now the "digital age."
by Javier Mauricio Canon Moreno. M.B.A.
4

Ahsan, Ramoza. "Time Series Data Analytics." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/529.

Full text
Abstract:
Given the ubiquity of time series data, and the exponential growth of databases, there has recently been an explosion of interest in time series data mining. Finding similar trends and patterns among time series data is critical for many applications ranging from financial planning, weather forecasting, and stock analysis to policy making. With time series being high-dimensional objects, detection of similar trends, especially at the granularity of subsequences or among time series of different lengths and temporal misalignments, incurs prohibitively high computation costs. Finding trends using non-metric correlation measures further compounds the complexity, as traditional pruning techniques cannot be directly applied. My dissertation addresses these challenges while meeting the need to achieve near real-time responsiveness. First, for retrieving exact similarity results using Lp-norm distances, we design a two-layered time series index for subsequence matching. Time series relationships are compactly organized in a directed acyclic graph embedded with similarity vectors capturing subsequence similarities. Powerful pruning strategies leveraging the graph structure greatly reduce the number of time series as well as subsequence comparisons, resulting in a several-orders-of-magnitude speed-up. Second, to support a rich diversity of correlation analytics operations, we compress time series into Euclidean-based clusters augmented by a compact overlay graph encoding correlation relationships. Such a framework supports a rich variety of operations including retrieving positive or negative correlations, self correlations and finding groups of correlated sequences. Third, to support flexible similarity specification using computationally expensive warped distances like Dynamic Time Warping, we design data reduction strategies leveraging the inexpensive Euclidean distance with subsequent time warped matching on the reduced data.
This facilitates the comparison of sequences of different lengths and with flexible alignment still within a few seconds of response time. Comprehensive experimental studies using real-world and synthetic datasets demonstrate the efficiency, effectiveness and quality of the results achieved by our proposed techniques as compared to the state-of-the-art methods.
5

Carle, William R. II. "Active Analytics: Adapting Web Pages Automatically Based on Analytics Data." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/629.

Full text
Abstract:
Web designers are expected to perform the difficult task of adapting a site’s design to fit changing usage trends. Web analytics tools give designers a window into website usage patterns, but they must be analyzed and applied to a website's user interface design manually. A framework for marrying live analytics data with user interface design could allow for interfaces that adapt dynamically to usage patterns, with little or no action from the designers. The goal of this research is to create a framework that utilizes web analytics data to automatically update and enhance web user interfaces. In this research, we present a solution for extracting analytics data via web services from Google Analytics and transforming them into reporting data that will inform user interface improvements. Once data are extracted and summarized, we expose the summarized reports via our own web services in a form that can be used by our client side User Interface (UI) framework. This client side framework will dynamically update the content and navigation on the page to reflect the data mined from the web usage reports. The resulting system will react to changing usage patterns of a website and update the user interface accordingly. We evaluated our framework by assigning navigation tasks to users on the UNF website and measuring the time it took them to complete those tasks, one group with our framework enabled, and one group using the original website. We found that the group that used the modified version of the site with our framework enabled was able to navigate the site more quickly and effectively.
6

Erlandsson, Niklas. "Game Analytics och Big Data." Thesis, Mittuniversitetet, Avdelningen för arkiv- och datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-29185.

Full text
Abstract:
Game Analytics is a research field that has appeared in recent years. Game developers have the ability to analyze how customers use their products, down to every button pressed. This can result in large amounts of data, and the challenge is to make sense of it all. The challenges of game data are often described with the same characteristics used to define Big Data: volume, velocity, and variability. This should mean that there is potential for a fruitful collaboration. The purpose of this study is to analyze and evaluate what possibilities Big Data offers to develop the Game Analytics field. To fulfill this purpose, a literature review and semi-structured interviews with people active in the gaming industry were conducted. The results show that the sources agree that valuable information can be found within the data that can be stored, especially in the monetary, general, and core values of the specific game. More advanced analysis may reveal other interesting patterns as well, but the predominant approach is to stick to the simple variables and stay away from digging deeper. This is not because data handling or storage would be tedious or too difficult, but because the analysis would be too risky an investment. Even with someone ready to take on all the challenges game data presents, there is not enough trust in the answers or in how useful they might be. Visions of the future within the field are modest: the near future seems to hold mostly efficiency improvements and a widening of the field to reach more people, which does not pose any new demands on data handling.
APA, Harvard, Vancouver, ISO, and other styles
7

Doucet, Rachel A., Deyan M. Dontchev, Javon S. Burden, and Thomas L. Skoff. "Big data analytics test bed." Thesis, Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/37615.

Full text
Abstract:
Approved for public release; distribution is unlimited.

The proliferation of big data has significantly expanded the quantity and breadth of information throughout the DoD. The task of processing and analyzing this data has become difficult, if not infeasible, using traditional relational databases. The Navy has a growing priority for information processing, exploitation, and dissemination, which makes use of the vast network of sensors that produce large amounts of big data. This capstone report explores the feasibility of a scalable Tactical Cloud architecture that will harness the underlying open-source tools for big data analytics. A virtualized cloud environment was built and analyzed at the Naval Postgraduate School, offering a test bed suitable for studying novel variations of these architectures. Further, the technologies used to implement the test bed demonstrate a sustainable methodology for rapidly configuring and deploying virtualized machines and provide an environment for performance benchmarking and testing. The capstone findings indicate strategies and best practices to automate the deployment, provisioning, and management of big data clusters. The functionality we seek to support reflects a far more general goal: finding open-source tools that help deploy and configure large clusters for on-demand big data analytics.
APA, Harvard, Vancouver, ISO, and other styles
8

Naumanen, Hampus, Torsten Malmgård, and Eystein Waade. "Analytics tool for radar data." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353857.

Full text
Abstract:
Analytics tool for radar data was a project that started when radar specialists at Saab needed to modernize the tools they use to analyze binary-encoded radar data. Today, the analysis is accomplished with inadequate and ineffective applications not designed for the purpose, which makes it tedious and more difficult than it would be with an appropriate interface. The existing applications also had limitations regarding different radar systems, which restricted their usage significantly. The solution was to design new software that imports, translates, and visualizes the data independently of the radar system. The software was developed as several parts that communicate with each other to translate a binary file. A binary file consists of a series of bytes containing the information on the targets, with markers separating the revolutions of the radar. The byte stream is split according to the ASTERIX protocol, which defines the length of each Data Item, and the extracted positional values are stored in arrays. The code then converts the positional values to Cartesian coordinates and plots them on the screen. The software implements features such as play, pause, and reverse, and a plotting history that allows the user to analyze the data in a simple and user-friendly manner. There are also numerous ways the software could be extended: the code is constructed so that new features providing additional analytical abilities can be implemented without affecting the components already designed.
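The pipeline this abstract describes ends with converting decoded polar target values to Cartesian coordinates for plotting. A minimal sketch of that conversion step (an illustration only, not code from the thesis; the function name and the north-referenced, clockwise azimuth convention are assumptions):

```python
import math

def polar_to_cartesian(range_m: float, azimuth_deg: float) -> tuple[float, float]:
    """Convert a radar target report from polar (range, azimuth) to
    Cartesian (x, y), with azimuth measured clockwise from north.
    (Hypothetical helper; the thesis's actual conventions may differ.)"""
    theta = math.radians(azimuth_deg)
    x = range_m * math.sin(theta)  # east component
    y = range_m * math.cos(theta)  # north component
    return x, y

# Example: a target 1000 m from the radar, due east (azimuth 90 degrees)
x, y = polar_to_cartesian(1000.0, 90.0)
```

Points converted this way can then be drawn directly on a 2D canvas, which is what makes features like a plotting history straightforward to layer on top.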
APA, Harvard, Vancouver, ISO, and other styles
9

Komolafe, Tomilayo A. "Data Analytics for Statistical Learning." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/87468.

Full text
Abstract:
The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. Big data is a widely used term without a clear definition. The difference between big data and traditional data can be characterized by four Vs: velocity (the speed at which data is generated), volume (the amount of data generated), variety (the data can take on different forms), and veracity (the data may be of poor or unknown quality). As many industries begin to recognize the value of big data, organizations try to capture it through means such as side-channel data in a manufacturing operation, unstructured text data reported by healthcare personnel, various demographic information of households from census surveys, and the range of communication data that defines communities and social networks. Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert it into a usable format; and the pre-processed data is then analyzed using statistical tools. In this stage, called statistical learning of the data, analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies in the process. However, several open challenges still exist in this framework for big data analytics. Recently, data types such as free-text data are also being captured. Although many established processing techniques exist for other data types, free-text data comes from a wide range of individuals and is subject to syntax, grammar, language, and colloquialisms that require substantially different processing approaches. Once the data is processed, open challenges still exist in the statistical learning step of understanding the data.
Statistical learning aims to satisfy two objectives: (1) develop a model that highlights general patterns in the data, and (2) create a signaling mechanism to identify whether outliers are present in the data. Statistical modeling is widely utilized, as researchers have created a variety of statistical models to explain everyday phenomena such as energy usage behavior, traffic patterns, and stock market behaviors, among others. However, new applications of big data with increasingly varied designs present interesting challenges. Consider the example of free-text analysis posed above. There is renewed interest in modeling free-text narratives from sources such as online reviews, customer complaints, or patient safety event reports into intuitive themes or topics. As previously mentioned, documents describing the same phenomena can vary widely in their word usage and structure. Another recent area of interest in statistical learning is using the environmental conditions in which people live, work, and grow to infer their quality of life. It is well established that social factors play a role in overall health outcomes; however, the clinical application of these social determinants of health is a recent and open problem. These examples are just a few of many wherein new applications of big data pose complex challenges requiring thoughtful and inventive approaches to processing, analyzing, and modeling data. Although a large body of research exists in the area of anomaly detection, increasingly complicated data sources (such as side-channel data or network-based data) present equally convoluted challenges.
For effective anomaly detection, analysts define parameters and rules so that, when large collections of raw data are aggregated, pieces of data that do not conform are easily noticed and flagged. In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This paper focuses on the healthcare, manufacturing, and social-networking industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions can be summarized as follows:
• In the big data analytics framework, raw data initially goes through a pre-processing step. Although many pre-processing techniques exist, there are several challenges in pre-processing text data, and I develop a pre-processing tool for text data.
• In the next step of the data analytics framework, there are challenges in both statistical modeling and anomaly detection.
• I address statistical modeling in two ways. First, there are open challenges in defining models to characterize text data; I introduce a community extraction model that autonomously aggregates text documents into intuitive communities/groups. Second, in health care it is well established that social factors play a role in overall health outcomes, but developing a statistical model that characterizes these relationships is an open research area; I developed statistical models for generalizing relationships between the social determinants of health of a cohort and general medical risk factors.
• I address anomaly detection in two ways. First, a variety of anomaly detection techniques already exist, but some of these methods lack a rigorous statistical investigation, making them ineffective for a practitioner; I identify critical shortcomings of a proposed network-based anomaly detection technique and introduce methodological improvements. Second, manufacturing enterprises, which are now more connected than ever, are vulnerable to anomalies in the form of cyber-physical attacks; I developed a sensor-based side-channel technique for anomaly detection in a manufacturing process.

PhD

The prevalence of big data has rapidly changed the usage and mechanisms of data analytics within organizations. The fields of manufacturing and healthcare are two examples of industries currently undergoing significant transformations due to the rise of big data. The addition of large sensory systems is changing how parts are manufactured and inspected, and the prevalence of Health Information Technology (HIT) systems is changing the way healthcare services are delivered. These industries are turning to big data analytics in the hope of acquiring many of the benefits other sectors are experiencing, including reduced cost, improved safety, and boosted productivity. However, many challenges exist along the framework of big data analytics, from pre-processing raw data, to statistical modeling of the data, to identifying anomalies present in the data or process. This work offers significant contributions in each of the aforementioned areas and includes practical real-world applications. Big data analytics generally follows this framework: first, a digitized process generates a stream of data; this raw data stream is pre-processed to convert it into a usable format; and the pre-processed data is then analyzed using statistical tools. In this stage, called 'statistical learning of the data', analysts have two main objectives: (1) develop a statistical model that captures the behavior of the process from a sample of the data, and (2) identify anomalies or outliers in the process. In this work, I investigate the different steps of the data analytics framework and propose improvements for each step, paired with practical applications, to demonstrate the efficacy of my methods. This work focuses on the healthcare and manufacturing industries, but the materials are broad enough to have wide applications across data analytics generally. My main contributions mirror those listed above: a pre-processing tool for text data; a community extraction model that aggregates text documents into intuitive communities/groups; statistical models relating the social determinants of health of a cohort to general medical risk factors; methodological improvements to a network-based anomaly detection technique; and a sensor-based side-channel technique for anomaly detection in a manufacturing process.
APA, Harvard, Vancouver, ISO, and other styles
10

Miloš, Marek. "Nástroje pro Big Data Analytics." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199274.

Full text
Abstract:
The thesis covers the specific approach to data analysis known as Big Data. It first defines the term Big Data and the need that gave rise to it: the growing demand for deeper data processing and for new analysis tools and methods. The thesis also covers some technical aspects of Big Data tools, focusing on Apache Hadoop in detail. The later chapters contain a Big Data market analysis and describe the biggest Big Data competitors and tools. The practical part of the thesis presents a way of using Apache Hadoop to perform data analysis on data from Twitter, with the results visualized in Tableau.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Data Analytics in Litigation"

1

Badiru, Adedeji B. Data Analytics. CRC Press, 2020. http://dx.doi.org/10.1201/9781003083146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Shuai, and Houtao Deng. Data Analytics. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003102656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Calì, Andrea, Peter Wood, Nigel Martin, and Alexandra Poulovassilis, eds. Data Analytics. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-60795-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Runkler, Thomas A. Data Analytics. Springer Fachmedien Wiesbaden, 2016. http://dx.doi.org/10.1007/978-3-658-14075-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Runkler, Thomas A. Data Analytics. Springer Fachmedien Wiesbaden, 2020. http://dx.doi.org/10.1007/978-3-658-29779-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Runkler, Thomas A. Data Analytics. Vieweg+Teubner Verlag, 2012. http://dx.doi.org/10.1007/978-3-8348-2589-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cuadrado-Gallego, Juan J., and Yuri Demchenko. Data Analytics. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-39129-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Runkler, Thomas A. Data Analytics. Springer Fachmedien Wiesbaden, 2025. http://dx.doi.org/10.1007/978-3-658-45951-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tyagi, Amit Kumar. Data Science and Data Analytics. Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003111290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sedkaoui, Soraya. Data Analytics and Big Data. John Wiley & Sons, Inc., 2018. http://dx.doi.org/10.1002/9781119528043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Data Analytics in Litigation"

1

Steward, Dwight, and Roberto Cavazos. "Data Analytics and Litigation." In Palgrave Advances in the Economics of Innovation and Technology. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31780-5_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Steward, Dwight, and Roberto Cavazos. "Examples of Litigation Involving Big Data Analytics." In Palgrave Advances in the Economics of Innovation and Technology. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31780-5_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Andrews, Joshua Kelly, Karen M. Cheek, Matthew P. Jennings, David W. Rogers, and Vincent M. Walden. "Data Management." In Litigation Services Handbook. John Wiley & Sons, Inc., 2015. http://dx.doi.org/10.1002/9781119204794.ch14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cheek, Karen M., Erik W. Gibson, Cathy Hasenzahl, Matthew P. Jennings, Russell L. Miller, and Vincent M. Walden. "Data Management." In Litigation Services Handbook. John Wiley & Sons, Inc., 2017. http://dx.doi.org/10.1002/9781119363194.ch15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cuadrado-Gallego, Juan J., and Yuri Demchenko. "Data." In Data Analytics. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-39129-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pincetl, Stephanie, Hannah Gustafson, Felicia Federico, Eric Daniel Fournier, Robert Cudd, and Erik Porse. "Data Analytics." In Energy Use in Cities. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55601-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Balali, Farhad, Jessie Nouri, Adel Nasiri, and Tian Zhao. "Data Analytics." In Data Intensive Industrial Asset Management. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-35930-0_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fleckenstein, Mike, and Lorraine Fellows. "Data Analytics." In Modern Data Strategy. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-68993-7_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Akhtar, Saaema. "Data Analytics." In Integration of AI-Based Manufacturing and Industrial Engineering Systems with the Internet of Things. CRC Press, 2023. http://dx.doi.org/10.1201/9781003383505-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Treder, Martin. "Data Analytics." In Das Management-Handbuch für Chief Data Officer. Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9346-1_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data Analytics in Litigation"

1

Bala, Jerzy, Michael Kellar, and Fred Ramberg. "Predictive analytics for litigation case management." In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8258384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vacek, Thomas, Ronald Teo, Dezhao Song, Timothy Nugent, Conner Cowling, and Frank Schilder. "Litigation Analytics: Case Outcomes Extracted from." In Proceedings of the Natural Legal Language Processing Workshop 2019. Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-2206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gadalla, Mohamed, and Ahmed Azab. "Decision Support for Locating Manufacturing Plants in Emerging Economies Using a Reliability Approach." In ASME 2022 17th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/msec2022-83098.

Full text
Abstract:
In today’s distributed manufacturing reality, investors worldwide face the dilemma of deciding on the optimal geographic spot for their manufacturing plants. On the one hand, emerging economies can be appealing because of their cheap labor as well as, in some cases, their reduced regulations, litigation, and paperwork. On the other hand, these very same emerging economies can be quite risky because of the instability of their political systems and, hence, the associated economic volatility. Such economies can collapse in a relatively short period of time due to factors such as political instability, corruption, lack of democracy and the rule of law, social and racial injustices, and religious extremism, to name a few. In this paper, we propose a modeling approach in which an economy is represented as an engineering system whose lifespan is subject to potential conditions, events, and failure modes. Such conditions and factors in these fragile economies are modeled as pushers and deflators contributing to their instability. Hence, the laws of Reliability Engineering can be used to decide on the probability of success of such a system and its lifetime in the face of the uncertainty and risks of today’s global climate. The health of the economic climate is a critical element in solving the facility location and allocation problem, which entails deciding on large manufacturing investments in the form of new manufacturing plants and the accompanying supply chains. Enablers that allow for packageable manufacturing systems, easier to relocate in the wake of this uncertain economic turmoil, are also discussed. System Dynamics will be used as future work to account for the forces (deflators and pushers) when quantifying the proposed metrics. AI and Data Analytics techniques are also recommended to quantify the reliability parameters.
APA, Harvard, Vancouver, ISO, and other styles
4

Vacek, Thomas, Dezhao Song, Hugo Molina-Salgado, Ronald Teo, Conner Cowling, and Frank Schilder. "Litigation Analytics: Extracting and querying motions and orders from." In Proceedings of the 2019 Conference of the North. Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/n19-4020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gallese, Chiara, Elena Falletti, Marco S. Nobile, Lucrezia Ferrario, Fabrizio Schettini, and Emanuela Foglia. "Preventing litigation with a predictive model of COVID-19 ICUs occupancy." In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Franceschina, Luciano. "Data Analytics." In CCS'16: 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016. http://dx.doi.org/10.1145/2996429.2996441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Triantafillou, Peter. "Data-Less Big Data Analytics (Towards Intelligent Data Analytics Systems)." In 2018 IEEE 34th International Conference on Data Engineering (ICDE). IEEE, 2018. http://dx.doi.org/10.1109/icde.2018.00205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Baron, Jason R., and Paul Thompson. "The search problem posed by large heterogeneous data sets in litigation." In the 11th international conference. ACM Press, 2007. http://dx.doi.org/10.1145/1276318.1276344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Chi, and Hossain Shahriar. "Health Data Analytics." In SIGITE '18: The 19th Annual Conference on Information Technology Education. ACM, 2018. http://dx.doi.org/10.1145/3241815.3241887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Madden, Samuel. "Interactive data analytics." In SoCC '15: ACM Symposium on Cloud Computing. ACM, 2015. http://dx.doi.org/10.1145/2806777.2809956.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Data Analytics in Litigation"

1

Murphy, David Patrick, and Matthew Thomas Calef. Data Analytics for SAR. Office of Scientific and Technical Information (OSTI), 2017. http://dx.doi.org/10.2172/1396159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Jovan Yang, Hari Viswanathan, Jeffery Hyman, and Richard Middleton. Data Analytics of Hydraulic Fracturing Data. Office of Scientific and Technical Information (OSTI), 2016. http://dx.doi.org/10.2172/1304742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hollis, Andrew Nathan. Data Visualization for Threat Analytics. Office of Scientific and Technical Information (OSTI), 2015. http://dx.doi.org/10.2172/1212630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Krishnamurthy, Narayanan, Siddharth Maddali, Amit Verma, et al. Data Analytics for Alloy Qualification. Office of Scientific and Technical Information (OSTI), 2018. http://dx.doi.org/10.2172/1456238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ruch, Marc Lavi. Data Analytics for Nonproliferation Applications. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1529508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Doucet, Rachel A., Deyan M. Dontchev, Javon S. Burden, and Thomas L. Skoff. Big Data Analytics Test Bed. Defense Technical Information Center, 2013. http://dx.doi.org/10.21236/ada589903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Calderon, Marta, and Luis Jugo. Leveraging Data Analytics Beyond Assurance. Inter-American Development Bank, 2014. http://dx.doi.org/10.18235/0006992.

Full text
Abstract:
This presentation gives an overview of the importance of creating a data analytics strategy in an internal audit department. It describes reasons for having a data analytics strategy, where to start (steps) when formulating a strategy, typical challenges to be addressed, what should be the focus of the strategy, and how to analyze the data analytics capability and usage maturity. Finally, it presents the IDB experience implementing a data analytics strategy for its internal audit activities.
APA, Harvard, Vancouver, ISO, and other styles
8

Gungor, Osman, Imad Al-Qadi, and Navneet Garg. Pavement Data Analytics for Collected Sensor Data. Illinois Center for Transportation, 2021. http://dx.doi.org/10.36501/0197-9191/21-034.

Full text
Abstract:
The Federal Aviation Administration instrumented four concrete slabs of a taxiway at the John F. Kennedy International Airport to collect pavement responses under aircraft and environmental loading. The study started with developing preprocessing scripts to organize, structure, and clean the collected data. As a result of the preprocessing step, the data became easier and more intuitive for pavement engineers and researchers to transform and process. After the data were cleaned and organized, they were used to develop two prediction models. The first prediction model employs a Bayesian calibration framework to estimate the unknown material parameters of the concrete pavement. Additionally, the posterior distributions resulting from the calibration process served as a sensitivity analysis by reporting the significance of each parameter for temperature distribution. The second prediction model utilized a machine-learning (ML) algorithm to predict pavement responses under aircraft and environmental loadings. The results demonstrated that ML can predict the responses with high accuracy at a low computational cost. This project highlighted the potential of using ML for future pavement design guidelines as more instrumentation data from future projects are collected to incorporate various material properties and pavement structures.
APA, Harvard, Vancouver, ISO, and other styles
9

Phillips, Thurman B., and Raymond J. Lanclos III. Data Analytics in Procurement Fraud Prevention. Defense Technical Information Center, 2014. http://dx.doi.org/10.21236/ada626749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Allen, Courtney, Samuel Faluade, Cameron Payseno, Zachary Thomander, Cody Kasten, and Alex Darwin. Data Analytics: Project-level Cost Estimates. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1762650.

Full text
APA, Harvard, Vancouver, ISO, and other styles