Dissertations / Theses on the topic 'API (Application Programming Interface)'

Consult the top 50 dissertations / theses for your research on the topic 'API (Application Programming Interface).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Yao, Kuan. "Implementing an Application Programming Interface for Distributed Adaptive Computing Systems." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/33329.

Full text
Abstract:
Developing applications for distributed adaptive computing systems (ACS) requires developers to have knowledge of both parallel computing and configurable computing. Furthermore, portability and scalability are required for developers to use innovative ACS research directly in deployed systems. This thesis presents an Application Programming Interface (API) implementation developed in a scalable parallel ACS system. The API gives the developer the ability to easily control both single board and multi-board systems in a network cluster environment. The API implementation is highly portable and scalable, allowing ACS researchers to easily move from a research system to a deployed system. The thesis details the design and implementation of the API, as well as analyzes its performance.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Chivi, Daniel, and Gran Joakim Östling. "Administration av API-drivna enheter och tjänster för slutanvändare : En fallstudie av API-tjänster." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223347.

Full text
Abstract:
Nowadays people tend to use services and appliances that are managed separately. This thesis examines the possibility of connecting different services in one main web application in order to facilitate communication between the services and to improve usability. The company A Great Thing has embraced the need to connect applications to a single device and therefore wants to create a web application built around agents. These agents are designed to handle the events a user requests, for example "Play a song at a specific time". The methodology applied has partly been a case study and partly user-based methods in the form of a survey and a user test. Further research was conducted on the communication between service APIs and the parameters needed to exchange data. Finally, the developed prototype was evaluated against a set of usability guidelines. The thesis's results are presented in the form of a web prototype focused on usability, an implementation of the APIs, user tests with actual users, and statistics on the demand for services. In addition, a market study was conducted to highlight the economic benefits of API distribution. The conclusion is that it is possible to link APIs and their services to achieve a user-friendly interface, and that the necessary parameters can be handled in an efficient way. Furthermore, the hope is that external readers will gain a structured and informative understanding of how the interconnection between APIs works, as well as of how different usability techniques can be applied when constructing prototypes.
APA, Harvard, Vancouver, ISO, and other styles
3

Feng, Xuan. "Evaluation of Capuchin Application Programming Interface : Implementing a Mobile TV Client." Thesis, KTH, Kommunikationssystem, CoS, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-91497.

Full text
Abstract:
The purpose of this research was to evaluate the Capuchin API launched by Sony Ericsson in Lund, Sweden in 2008. The Capuchin API bridges Adobe's Flash graphics and effects with the JSR support of Java ME. We evaluated the Capuchin API with regard to its suitability for a Mobile TV application. We tested the API in Ericsson's TV lab, where we had access to live TV streams and online multimedia resources, by implementing a Mobile TV client. This test application was named "Min TV", in English "My TV". Using Capuchin in the Ericsson TV lab environment has shown that it has some benefits, but also many drawbacks. Flash developers can be used to create an animated user interface while Java developers handle the complex programming. At this early stage the Capuchin technology is not mature enough, nor is it suitable for Mobile TV client development. Only after Sony Ericsson adds features such as soft keys, easier debugging of standalone Flash Lite applications, test emulator support in the software development kit, and more data communication types than string and number will it become a suitable technology for Mobile TV applications. Ericsson's current Mobile TV client was built using a framework called ECAF, which provides a graphics frontend with Java ME as the backend. We compared ECAF and Min TV with respect to parameters such as flexibility, performance, memory footprint, code size, and cost of skinning (all of these parameters are explained in detail in the methodology chapter). As possible future technologies for Mobile TV, we evaluated a number of different presentation/graphics technologies including HECL, SVG Tiny, MIDP 3.0, and the .NET Compact Framework. Moreover, we examined whether a pure Flash Lite client application is a viable solution for Mobile TV. The comparison of the different presentation technologies showed that Java ME is a comprehensive platform for mobile development, offering all the necessary support from third-party graphical user interface makers. The .NET Compact Framework also looks like a good option, with scaled-down support for several programming languages through the CLR.
APA, Harvard, Vancouver, ISO, and other styles
4

Veldsman, Werner Pieter. "SNP based literature and data retrieval." Thesis, University of the Western Cape, 2016. http://hdl.handle.net/11394/5345.

Full text
Abstract:
Magister Scientiae - MSc
Reference single nucleotide polymorphism (refSNP) identifiers are used to earmark SNPs in the human genome. These identifiers are often found in variant call format (VCF) files. RefSNPs can be useful to include as terms submitted to search engines when sourcing biomedical literature. In this thesis, the development of a bioinformatics software package is motivated, planned and implemented as a web application (http://sniphunter.sanbi.ac.za) with an application programming interface (API). The purpose is to allow scientists searching for relevant literature to query a database using refSNP identifiers and potential keywords assigned to scientific literature by the authors. Multiple queries can be simultaneously launched using either the web interface or the API. In addition, a VCF file parser was developed and packaged with the application to allow users to upload, extract and write information from VCF files to a file format that can be interpreted by the novel search engine created during this project. The parsing feature is seamlessly integrated with the web application's user interface, meaning there is no expectation on the user to learn a scripting language. This multi-faceted software system, called SNiPhunter, envisions saving researchers time during life sciences literature procurement, by suggesting articles based on the number of times a reference SNP identifier has been mentioned in an article. This will allow the user to make a quantitative estimate as to the relevance of an article. A second novel feature is the inclusion of the email address of a corresponding author in the results returned to the user, which promotes communication between scientists. Moreover, links to external functional information are provided to allow researchers to examine annotations associated with their reference SNP identifier of interest. Standard information such as digital object identifiers and publishing dates, that are typically provided by other search engines, are also included in the results returned to the user.
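The abstract mentions a VCF parser that extracts information from uploaded files before they are submitted to the search engine. The sketch below is not SNiPhunter's actual code (which is not reproduced here); it only illustrates, assuming the standard tab-separated VCF column layout, how reference SNP identifiers could be collected from the ID column of a VCF file in Java.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashSet;
import java.util.Set;

/** Minimal sketch: collect refSNP (rs) identifiers from the ID column of a VCF file. */
public class VcfRsIdExtractor {
    public static Set<String> extractRsIds(Path vcfFile) throws IOException {
        Set<String> rsIds = new LinkedHashSet<>();
        for (String line : Files.readAllLines(vcfFile)) {
            if (line.isEmpty() || line.startsWith("#")) continue; // skip meta and header lines
            String[] fields = line.split("\t");
            if (fields.length > 2 && fields[2].startsWith("rs")) {
                rsIds.add(fields[2]); // third VCF column holds the variant ID, e.g. rs334
            }
        }
        return rsIds;
    }

    public static void main(String[] args) throws IOException {
        // The extracted identifiers could then be submitted, together with keywords,
        // as search terms to a literature query service such as the one described above.
        System.out.println(extractRsIds(Path.of(args[0])));
    }
}
```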
National Research Foundation (NRF) /The South African Research Chairs Initiative (SARChI)
APA, Harvard, Vancouver, ISO, and other styles
5

Collini, Alex. "Realizzazione di un'applicazione mobile Android basata su interazioni con interfacce API di diversi servizi." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9256/.

Full text
Abstract:
The rapid growth and enormous spread that modern mobile devices (smartphones, tablets, wearables, etc.) have experienced in recent years has triggered the massive development of mobile applications of every kind, from health care to AR (Augmented Reality), and from social applications to applications that offer services to the user.
APA, Harvard, Vancouver, ISO, and other styles
6

Aguilar, David Pedro. "A framework for evaluating the computational aspects of mobile phones." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guerra, Eduardo Leal. "InGriDE: um ambiente integrado e extensível de desenvolvimento para computação em grade." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-04052010-193112/.

Full text
Abstract:
Computational grids have evolved considerably over the past few years. These systems have been deployed in production environments in the academic research community and have attracted increasing interest from industry. However, developing applications over such heterogeneous and distributed infrastructures is still a complex and error-prone process. Initiatives to facilitate this task have, in most cases, resulted in isolated, middleware-specific tools. This work aims to minimize the difficulty of developing grid applications through the construction of an integrated and extensible development environment (IDE) for grid computing, called InGriDE. InGriDE provides a single set of tools, compatible with different middleware systems, built on the Grid Application Toolkit (GAT) programming interface. We developed the InGriDE set of features on top of the Eclipse platform, which both provides a framework for building IDEs and makes it easy to extend the initial set of features. To validate our solution we used the InteGrade middleware, developed in our research group, as our case study. The results obtained from our work show the viability of providing middleware independence to IDEs through the use of a generic application programming interface such as GAT. Moreover, the benefits obtained from using Eclipse as our framework for building IDEs indicate that this kind of framework satisfies the requirements inherent to the grid application development process in an efficient way.
APA, Harvard, Vancouver, ISO, and other styles
8

Fiume, Claudio. "La tecnologia API come fattore di cambiamento per le aziende e per i clienti." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
A growing interest in the study of business processes has led companies to introduce organizational and software changes, obtaining considerable advantages in coordinating activities and managing resources effectively. Indeed, the approach to business processes in the new millennium highlights how the IT component is fundamental for obtaining lean and efficient processes. This thesis grew out of an internship at Crif S.p.A., where I had the opportunity to work with API (Application Programming Interface) technology and to learn how it is used; its impact is twofold, affecting both business processes and the customer. APIs offer a common model that helps people collaborate, making data and information available through software applications. APIs are an extension of good business relationships: they enable better communication and resource sharing, guarantee real-time access to information and, crucially, open access to new markets. They also represent a change from past approaches, allowing greater agility, technological innovation, integration and new sources of monetization. The thesis also addresses project management, which has become much more than a set of techniques and methodologies and can be defined as a system oriented towards predetermined objectives. Particular attention is paid to software projects, since API technology is tied to recent developments with a significant technological component. Finally, APIs are classified as private, partner and public, each with its own advantages and benefits; it is then up to each API provider to choose the best strategy for successful commercialization and, consequently, higher revenues.
APA, Harvard, Vancouver, ISO, and other styles
9

MAIA, Mikaela Anuska Oliveira. "Uma abordagem para adaptação de clientes do Java Collections framework baseada em técnicas de migração de APIs." Universidade Federal de Campina Grande, 2014. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/392.

Full text
Abstract:
Despite the API diversity that the Java Collections Framework (JCF) provides, with diverse implementations of several data structures, developers may choose inappropriate interfaces or classes in terms of efficiency or purpose. This may happen because the API documentation is insufficient or because the developer does not analyse the context requirements carefully. A possible solution is manual replacement, in parallel with an analysis of the program context. However, this is tiresome and error-prone, which discourages the modification. In this work, we define a semi-automatic approach for (i) the selection of interfaces and implementations within the JCF and (ii) the modification of JCF clients, based on API migration techniques. The approach helps the user choose the most appropriate collection, based on requirements collected by means of simple yes/no questions. The selection is resolved with a decision tree that, from the answers given by the developer, decides which interface (and implementation) from the JCF is the most adequate. After this decision, the actual program modification is performed by means of adapters, minimizing the source code modification. We evaluate the approach, as implemented in a supporting tool, with an experimental study comprising computer science students randomly distributed into groups, whose task was to modify JCF clients by different methods (manually, using Eclipse's Java Search, and with our approach); the results were evaluated with respect to quality, effort and time spent. We found that most students had a hard time choosing the right interface or implementation for the given requirements. Our approach seemed to reduce the effort of selecting the best collection for the requirement, saving some time in the process. Regarding the quality of the selected collection, we observed the same behaviour with both tools.
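As a rough illustration of the kind of yes/no-driven selection the abstract describes (the thesis's actual decision tree, questions and adapter layer are not reproduced here), the following hypothetical Java sketch maps three simple requirement questions onto standard JCF implementations.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.TreeSet;

/** Hypothetical sketch of a yes/no-driven choice between JCF implementations. */
public class CollectionChooser {
    /** Each boolean stands for the answer to one simple requirement question. */
    public static Collection<String> choose(boolean duplicatesAllowed,
                                            boolean sortedIterationNeeded,
                                            boolean insertionOrderNeeded) {
        if (!duplicatesAllowed) {
            if (sortedIterationNeeded) return new TreeSet<>();      // unique elements, sorted order
            if (insertionOrderNeeded) return new LinkedHashSet<>(); // unique elements, insertion order
            return new HashSet<>();                                 // unique elements, order irrelevant
        }
        return new ArrayList<>();                                   // duplicates allowed: array-backed list
    }

    public static void main(String[] args) {
        Collection<String> c = choose(false, true, false);
        System.out.println(c.getClass().getSimpleName()); // prints "TreeSet"
    }
}
```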
APA, Harvard, Vancouver, ISO, and other styles
10

Silva, Lincoln David Nery e. "Uma proposta de API para desenvolvimento de aplicações multiusuário e multidispositivo para TV Digital utilizando o Middleware Ginga." Universidade Federal da Paraíba, 2008. http://tede.biblioteca.ufpb.br:8080/handle/tede/6118.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The progress of interactive digital TV (iDTV) applications does not occur at the same speed found in web or desktop applications. This is due to constraints both in the hardware and middleware on which the applications run and in the limited way users interact with the TV, traditionally through the remote control. In the Brazilian scenario, the Ginga middleware specification allows new functionality to be incorporated through the Device Integration API, which is the target of this dissertation. The API allows iDTV applications to use mobile devices both as a means of interaction and as a way to share their multimedia resources. As a result, iDTV applications gain possibilities not available in other existing digital TV middlewares, such as using more than one device simultaneously, multiuser support, and access to continuous-media capture resources available in devices such as mobile phones, which can be integrated with the TV set. The API was implemented and used to develop iDTV applications designed to explore the new advanced features available.
APA, Harvard, Vancouver, ISO, and other styles
11

Hamberg, Christer. "GUI driven End to End Regression testing with Selenium." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-68529.

Full text
Abstract:
Digitalization has changed our world and how we interact with different systems. Desktop applications have increasingly been integrated with the internet, and the web browser has become the graphical user interface (GUI) of today's system solutions. This is a change that needs to be considered in the automated regression testing process. Using the actual GUI has over time proven to be a complicated task and is therefore often broken out as its own standalone test object. This study looked into the time and quality constraints of using the GUI as the driver of regression testing of business requirements in a web-based solution. The constraints were analyzed by evaluating the differences in execution times of test cases between Application Programming Interface (API) calls and GUI-driven testing, the flakiness of test results, and the modifications required over time for a specific test suite, and by looking into how reliable test results could be achieved. With a GUI-driven, full end-to-end scope, the quality of software solutions could be improved through a reduction in the number of interface issues and errors detected in deployed systems. It would also reduce the volume of test cases that need to be executed and maintained, as there are no longer standalone parts to verify separately with partially overlapping test cases. The implementation used Selenium WebDriver to drive the GUI, and the results showed that with Selenium the test execution times increased from approximately 2 seconds (API) to 20-75 seconds (Selenium). The flaky test results could be eliminated by applying an appropriate pattern to detect, locate, and scroll elements into visibility prior to interacting with them. At the end of the study the test execution results were 100% reliable. The navigation required 15 modifications over time to keep the tests running. By applying the appropriate pattern, reliable results can be achieved in end-to-end regression testing driven from the GUI, however with an increase in execution time.
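The "detect, locate, and scroll into visibility" pattern mentioned above can be realized in several ways; the following sketch shows one common formulation with the Selenium 4 Java bindings, using explicit waits and a JavaScript scroll. It is an illustrative assumption rather than the thesis's actual implementation, and the locator passed in is hypothetical.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

/** Sketch of the detect-locate-scroll pattern for stable GUI-driven tests. */
public class StableClick {
    /** Waits until the element is present, scrolls it into view, then clicks it when clickable. */
    public static void clickWhenReady(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(20));
        WebElement element = wait.until(ExpectedConditions.presenceOfElementLocated(locator));
        ((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", element);
        wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
    }
}
```

Wrapping every interaction in such a helper is one way to trade the extra execution time reported above for the 100% reliable results the study aimed at.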
APA, Harvard, Vancouver, ISO, and other styles
12

Pizetta, Daniel Cosmo. "Biblioteca, API e IDE para o desenvolvimento de projetos de metodologias de Ressonância Magnética." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-28042014-160738/.

Full text
Abstract:
In this study we discuss new tools for building a fully digital Magnetic Resonance (MR) spectrometer. The research was motivated by the difficulties researchers experience when programming MR equipment, including the lack of methodology-development tools, which current software does not offer. In particular, we treat the development of a library, PyMR (Python Magnetic Resonance), an API (Application Program Interface) and an IDE (Integrated Development Environment). In this structure, the PyMR library is the front-end for programming and setting up MR equipment, while the API constitutes the back-end. The IDE, in turn, is a user-friendly, specialized tool that helps the developer create and manage MR methodologies and protocols. Development based on state-of-the-art computing and magnetic resonance technologies ensures quality, robustness and adaptability while keeping the system simple enough for less experienced users. To validate the system, in addition to software metrics, a pulse sequence known as CPMG (Carr-Purcell-Meiboom-Gill) was assembled and executed on the local spectrometer using a CuSO4 solution as the sample, yielding T2 values consistent with the expected ones. The results show that the new system meets the main requirements of both users and developers of MR methodologies and offers a broad set of tools. In short, this project provides the basic, functional structure of a new way to program and use MR equipment, resulting in a powerful instrument for research in the area.
APA, Harvard, Vancouver, ISO, and other styles
13

Almkvist, Magnus, and Marcus Wahren. "Preserving Integrity in Telecommunication Networks Opened by the Parlay Service Interface." Thesis, KTH, Mikroelektronik och Informationsteknik, IMIT, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93210.

Full text
Abstract:
This Master’s Thesis in Electrical Engineering concerns the introduction of a Parlay gateway in Skanova’s public circuit switched telephone network, what network integrity problems this brings, and how to preserve the integrity of the network. There is a rising demand from the market on Skanova to be able to offer integrated and useful services via their network. Examples of such services are Web Controlled Call Forwarding and Virtual Call Centres. Until now, these services have been implemented with the Intelligent Network concept which is a technology for concentrating the service logic in the telephone network to centralised service platforms within the network operator’s domain. Developing new services in this environment is expensive and therefore, Skanova wants to open the network for third party service providers. The opening of the network is enabled by the introduction of a gateway implementing the open service interface Parlay. The crucial point when opening the network for third party service providers is to maintain the integrity of the network. Parlay is an object oriented Application Programming Interface that enables a third party service access to core network resources in a controlled manner. The authors’ definition of network integrity is: “the ability of a network to steadily remain in a safe state, while performing according to the expectations and specifications of its owner, i.e. delivering the expected functionality and providing means to charge for utilised network resources”. The thesis describes a few services implemented via the Parlay interface and points out examples of activities in these services that may jeopardise the integrity of the network. The described activities belong to one of the two categories: Call Control Functionality or Lack of Charging Instruments. The thesis also describes two important methods for addressing encountered integrity problems. The methods are: Parlay Service Level Agreement and Policy Management. Finally, the solutions are compared and the conclusion is that Policy Management is a conformable and flexible method for addressing lots of integrity problems and that these are important qualities, since new integrity problems will arise all the time.
APA, Harvard, Vancouver, ISO, and other styles
14

Abdulrahman, Ruqayya. "Multi agent system for web database processing, on data extraction from online social networks." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5502.

Full text
Abstract:
In recent years, there has been a flood of continuously changing information from a variety of web resources such as web databases, web sites, web services and programs. Online Social Networks (OSNs) represent such a field where huge amounts of information are being posted online over time. Due to the nature of OSNs, which offer a productive source for qualitative and quantitative personal information, researchers from various disciplines contribute to developing methods for extracting data from OSNs. However, there is limited research which addresses extracting data automatically. To the best of the author's knowledge, there is no research which focuses on tracking the real time changes of information retrieved from OSN profiles over time and this motivated the present work. This thesis presents different approaches for automated Data Extraction (DE) from OSN: crawler, parser, Multi Agent System (MAS) and Application Programming Interface (API). Initially, a parser was implemented as a centralized system to traverse the OSN graph and extract the profile's attributes and list of friends from Myspace, the top OSN at that time, by parsing the Myspace profiles and extracting the relevant tokens from the parsed HTML source files. A Breadth First Search (BFS) algorithm was used to travel across the generated OSN friendship graph in order to select the next profile for parsing. The approach was implemented and tested on two types of friends: top friends and all friends. In case of top friends, 500 seed profiles have been visited; 298 public profiles were parsed to get 2197 top friends' profiles and 2747 friendship edges, while in case of all friends, 250 public profiles have been parsed to extract 10,196 friends' profiles and 17,223 friendship edges. This approach has two main limitations. The system is designed as a centralized system that controlled and retrieved information of each user's profile just once. This means that the extraction process will stop if the system fails to process one of the profiles; either the seed profile (first profile to be crawled) or its friends. To overcome this problem, an Online Social Network Retrieval System (OSNRS) is proposed to decentralize the DE process from OSN through using MAS. The novelty of OSNRS is its ability to monitor profiles continuously over time. The second challenge is that the parser had to be modified to cope with changes in the profiles' structure. To overcome this problem, the proposed OSNRS is improved through use of an API tool to enable OSNRS agents to obtain the required fields of an OSN profile despite modifications in the representation of the profile's source web pages. The experimental work shows that using API and MAS simplifies and speeds up the process of tracking a profile's history. It also helps security personnel, parents, guardians, social workers and marketers in understanding the dynamic behaviour of OSN users. This thesis proposes solutions for web database processing on data extraction from OSNs by the use of parser and MAS and discusses the limitations and improvements.
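For illustration, a breadth-first traversal of a friendship graph of the kind the abstract describes could be sketched as below. The `fetchFriends` function is a hypothetical stand-in for the Myspace parser or API call, and the `maxProfiles` budget mirrors the idea of visiting a limited number of seed and friend profiles; none of this is the thesis's actual code.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;
import java.util.function.Function;

/** Sketch of breadth-first traversal over a social-network friendship graph. */
public class BfsProfileCrawler {
    /**
     * Visits profiles level by level starting from a seed, up to a fixed budget.
     * fetchFriends stands in for the parser or API call that returns a profile's friends.
     */
    public static Set<String> crawl(String seedProfileId,
                                    Function<String, List<String>> fetchFriends,
                                    int maxProfiles) {
        Set<String> visited = new HashSet<>();
        Queue<String> frontier = new ArrayDeque<>();
        visited.add(seedProfileId);
        frontier.add(seedProfileId);
        while (!frontier.isEmpty() && visited.size() < maxProfiles) {
            String current = frontier.poll();
            for (String friend : fetchFriends.apply(current)) {
                if (visited.size() >= maxProfiles) break;
                if (visited.add(friend)) {   // add() returns false if the profile was already seen
                    frontier.add(friend);    // schedule this friend's profile for later parsing
                }
            }
        }
        return visited;
    }
}
```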
APA, Harvard, Vancouver, ISO, and other styles
15

Goyet, Samuel. "De briques et de blocs. La fonction éditoriale des interfaces de programmation (api) web : entre science combinatoire et industrie du texte." Thesis, Paris 4, 2017. http://www.theses.fr/2017PA040188.

Full text
Abstract:
"Like" buttons, embedded tweets... All of these forms are produced by Application Programming Interfaces (APIs), digital writing tools that have become part of the production chain of contemporary online texts. This thesis examines the "publishing function" of APIs: their role in the production, standardization and circulation of the "little forms" of online texts. Focusing on Facebook's and Twitter's APIs and on the small forms they produce, our techno-semiotic analysis is organized into five chapters. The first is a genealogy of APIs from the standpoint of combinatorial writing, a conception of writing that we show to be a salient feature of programming and of computer science. The second chapter examines the imaginaries of computational writing, between numbers, combinatorics and the search for a universal scientific method. The third chapter analyses the semiotic consequences of this combinatorial universalism, showing that APIs treat text as an abstract set of combinable blocks. This abstraction of text serves an "economy of passages", the subject of the fourth chapter, in which APIs appear as sites where a "literate practice" is industrialized: they establish criteria for the readability and reproducibility of text. Among these criteria, we note that the role of computation, although fundamental, is rendered invisible. In the fifth chapter we therefore propose directions for a semiotics that takes computation into account as a mode of expression specific to digital media.
APA, Harvard, Vancouver, ISO, and other styles
16

Samuel, John. "Feeding a data warehouse with data coming from web services. A mediation approach for the DaWeS prototype." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22493/document.

Full text
Abstract:
This thesis concerns the design and implementation of a software platform, DaWeS, that allows the online deployment and management of data warehouses fed with data coming from web services and customized for small and medium enterprises; the work revolves around the development and experimental evaluation of DaWeS. The role of the data warehouse in business analytics cannot be understated for any enterprise, irrespective of its size, but the growing dependence on web services means that enterprise data is now managed by multiple autonomous and heterogeneous service providers. We present our approach and its associated prototype DaWeS [Samuel, 2014; Samuel and Rey, 2014; Samuel et al., 2014], a data warehouse fed with data coming from web services, designed to extract, transform and store enterprise data from those services and to build performance indicators from the stored data, while hiding the heterogeneity of the numerous underlying web services from end users. Its ETL process is grounded in the mediation approach usually used in data integration; to this end, a classical query-rewriting algorithm (the inverse-rules algorithm) was adapted and tested. This makes DaWeS (i) fully configurable in a purely declarative manner (XML, XSLT, SQL, datalog) and (ii) able to keep part of the warehouse schema dynamic so it can be easily updated. Together, (i) and (ii) allow DaWeS administrators to shift from development to administration when they want to connect new web services or update the APIs (application programming interfaces) of already connected ones. The aim is to make DaWeS scalable and able to adapt smoothly to the ever-changing and growing offer of web services. This also enables DaWeS to work with the vast majority of current web service interfaces, which are defined with basic technologies only (HTTP, REST, XML and JSON) rather than with more advanced standards (WSDL, WADL, hRESTS or SAWSDL), since these standards are not yet widely used to describe real web services. In terms of applications, the goal is to allow a DaWeS administrator to offer small and medium companies a service for storing and querying their business data coming from their use of third-party services, without those companies having to manage their own warehouse; in particular, DaWeS enables the easy design (as SQL queries) of personalized performance indicators. We present this mediation approach for ETL and the architecture of DaWeS in detail. Besides its industrial purpose, building DaWeS raised further scientific challenges, such as optimizing the number of web service API operation calls and handling incomplete information. A theoretical study of the semantics of conjunctive and datalog queries over relations with access limitations (corresponding to web services) yields an upper bound on the number of web service calls required to evaluate such queries; this bound is a tool for comparing future optimization techniques. We also present a heuristic for handling incomplete information. Experiments were carried out on real web services in three domains (online marketing, project management and customer support services), together with a first series of random tests to assess scalability.
APA, Harvard, Vancouver, ISO, and other styles
17

Engberg, Patricia. "Explorativ studie av faktorer som påverkar framgångsrik utveckling och användning av Internet of Things-enheter : En kvalitativ intervjustudie fokuserad på informationssäkerhet och personlig integritet." Thesis, Karlstads universitet, Handelshögskolan, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-62824.

Full text
Abstract:
The year is 2017 and the use of devices connected to the internet has exploded in an uncontrolled way. Devices are being modified at a rapid pace so that they can be interconnected, with the aim of making everyday life more efficient, of offering what are colloquially seen as cooler devices and, above all, of generating increased sales of these products. Communication tools are linked together, and as a result large amounts of data accumulate on individual devices, which can become vulnerable to surveillance, intrusion and takeover for purposes the owner may be completely unaware of. If an individual's device takes part in a scenario where the server hosting a public service is shut down during a serious physical attack on Sweden, who is then to blame? This is called a denial-of-service (overload) attack and is one of many potential vulnerabilities in today's society. The Internet of Things is a new phenomenon that is relatively unexplored, with many open and unanswered questions. Research is undeniably lagging behind. What the research articles have in common is the conclusion that further research is needed in this area, which is alarming since the devices are already present in everyday life. Information security and the individual's personal integrity are what is at stake, and the question is what individuals are willing to sacrifice in order to have the latest products. The method of this bachelor's thesis has been exploratory, consisting of a literature study of research articles and personal interviews with relevant individuals in the field. This thesis will not provide a definitive answer to what should be done next; the goal is to shed light on various problem areas concerning the Internet of Things phenomenon. The focus is on describing how information security and personal integrity can affect the creation and use of devices connected to the Internet of Things. The purpose of this exploratory study is to identify and describe factors that contribute to the successful creation and use of Internet of Things devices, with a focus on information security and personal integrity. The conclusions are that the influencing factors are identity verification, standards, restricted access and user control.
APA, Harvard, Vancouver, ISO, and other styles
18

Cavalazzi, Marco Carlo. "Enterprise Social Networks: The Case of CERN." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11560/.

Full text
Abstract:
Social networks are commonly seen as a global trend that allows users to search for and contact others with similar interests, write a post, reply, like or share content, create groups and organize events. That said, much more can be done to exploit the full potential of social media. In order to improve the business, providing employees, customers and partners with the best tools to cooperate and gain value from the whole community, many organizations are taking the matter into their own hands by using Enterprise Social Networks. Close analysis of case studies and comprehensive statistics shows why it is important to pursue this path. At CERN, the European Organization for Nuclear Research, where the number of employees, students and volunteers who work in partnership every day, both on site and through the network, reaches the thousands, a new kind of platform has been deployed that is able to exploit the social knowledge of the personnel. The thesis describes the case study of CERN in order to understand not only why it is essential to become a social organization but also how a social environment can be developed. The last chapters focus on my work on the platform: a mobile-responsive design, implemented to make the environment adapt to any screen size; an integrated resource-planning tool, which gives scientists the means to easily manage the everyday work on the particle accelerators; and the platform's Application Programming Interface, which allows anyone with the right credentials to include content from the enterprise social network in a personal or departmental webpage, giving everyone an even easier way to participate.
APA, Harvard, Vancouver, ISO, and other styles
19

Jahanbakhsh & Farahmand Mokarremi, Ashkan & Hanif. "An Application Programming Interface for High Performance Distributed Computing." Thesis, KTH, Fysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Brown, Christopher A. "Usability analysis of the channel application programming interface." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FBrown.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lachheb, Tawfik. "A secure client/server java application programming interface." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2561.

Full text
Abstract:
The purpose of this project is to develop a generic Java Application Programming Interface (API) that would be used to provide security and user privacy to functions such as data transfer, key management, digital signature, etc.
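The abstract names key management and digital signatures among the functions the API covers. As a hedged sketch (not the project's API, whose interface is not given here), the following example shows the standard java.security classes such a wrapper would typically build on.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

/** Sketch: signing and verifying data with the standard java.security classes. */
public class SignatureSketch {
    public static void main(String[] args) throws Exception {
        byte[] data = "message to protect".getBytes(StandardCharsets.UTF_8);

        // Key management: generate an RSA key pair (a real API would load keys from a keystore).
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keys = generator.generateKeyPair();

        // Digital signature: sign with the private key ...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(data);
        byte[] signature = signer.sign();

        // ... and verify with the matching public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(data);
        System.out.println("signature valid: " + verifier.verify(signature));
    }
}
```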
APA, Harvard, Vancouver, ISO, and other styles
22

Ratchford, Tristan. "Creating application programming interface code templates from usage patterns." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106349.

Full text
Abstract:
Application programming interfaces promote reuse by facilitating interaction between software components and/or software libraries. API code templates are parameterized API scenarios that can be quickly instantiated by copy-and-pasting or with support from integrated development environments. They provide the skeletal structure of an API coding scenario and let developers simply "fill in the blanks" with the details of their coding task. Unfortunately, creating relevant API code templates requires time and experience with the API. To address these problems we present a technique that mines API usage patterns and transforms them into API code templates. Our intuition is that API usage patterns are a solid basis for code templates because they are grounded in actual API usage. We evaluate our approach by performing a retroactive study on the Mammoth, ArgoUML, and Eclipse projects to see whether API code templates created from earlier versions could have helped developers in later versions. Our results show that, on average, each API code template mined by our technique could have helped developers create six, nine, and twelve new methods in Mammoth, ArgoUML, and Eclipse, respectively. In our evaluation, we mined many API code templates from the three test projects, which provides evidence that our technique could have helped developers learn and use an API faster on many occasions.
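To make the notion of an API code template concrete, the following Java snippet is an invented example in the spirit described above, not one of the templates mined in the thesis: the skeletal structure of a recurring java.io usage scenario, shown already instantiated, with comments marking the "blanks" a developer would fill in when using the template.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

/** Illustration of an instantiated API code template for a common java.io reading scenario. */
public class ReadFileTemplateInstance {
    public static void main(String[] args) {
        try (BufferedReader reader = new BufferedReader(new FileReader(args[0]))) { // blank: file name
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // blank: task-specific handling of each line
            }
        } catch (IOException e) {
            e.printStackTrace(); // blank: error handling
        }
    }
}
```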
APA, Harvard, Vancouver, ISO, and other styles
23

BLOMKVIST, YLVA, and LOENBOM LEO ULLEMAR. "Improving supply chain visibility within logistics by implementing a Digital Twin : A case study at Scania Logistics." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279054.

Full text
Abstract:
As organisations adapt to the rigorous demands set by global markets, the supply chains that constitute their logistics networks become increasingly complex. This often has a detrimental effect on the supply chain visibility within the organisation, which may in turn have a negative impact on the core business of the organisation. This paper aims to determine how organisations can benefit in terms of improving their logistical supply chain visibility by implementing a Digital Twin — an all-encompassing virtual representation of the physical assets that constitute the logistics system. Furthermore, challenges related to implementation and the necessary steps to overcome these challenges were examined. The results of the study are that Digital Twins may prove beneficial to organisations in terms of improving metrics of analytics, diagnostics, predictions and descriptions of physical assets. However, these benefits come with notable challenges — managing implementation and maintenance costs, ensuring proper information modelling, adopting new technology and leading the organisation through the changes that an implementation would entail. In conclusion, a Digital Twin is a powerful tool suitable for organisations where the benefits outweigh the challenges of the initial implementation. Therefore, careful consideration must be taken to ensure that the investment is worthwhile. Further research is required to determine the most efficient way of introducing a Digital Twin to a logistical supply chain.
APA, Harvard, Vancouver, ISO, and other styles
24

Maymí, Fernando J. "A sockets application programming interface for the Petite Amateur Naval Satellite." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA381016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Selldin, Håkan. "Design and Implementation of an Application Programming Interface for Volume Rendering." Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1209.

Full text
Abstract:

To efficiently examine volumetric data sets from CT or MRI scans, good volume rendering applications are needed. This thesis describes the design and implementation of an application programming interface (API) to be used when developing volume-rendering applications. A complete application programming interface has been designed. The interface is designed to make writing application programs containing volume rendering fast and easy, and it also makes the resulting application programs hardware independent. Volume rendering using 3D textures is implemented on Windows and Unix platforms, and rendering performance has been compared across different graphics hardware.

APA, Harvard, Vancouver, ISO, and other styles
26

Maymi, Fernando J. "A sockets application programming interface for the Petite Amateur Naval Satellite." Thesis, Monterey, California. Naval Postgraduate School, 2000. http://hdl.handle.net/10945/9295.

Full text
Abstract:
The Petite Amateur Naval Satellite (PANSAT) is an operational communications microsatellite designed at the Naval Postgraduate School (NPS). PANSAT's communications software was intended to be developed after orbital insertion and transmitted to the satellite. The Sockets Application Programming Interface (API) developed at the University of California, Berkeley is the de facto standard API for network applications. It provides a strong and flexible platform on which to develop a wide variety of programs, and it accelerates the development of new applications by providing a standard set of features and isolating the program from the underlying networking mechanisms. This thesis studied the viability of implementing a Sockets API for PANSAT based on the Berkeley Sockets. PANSAT's Sockets API was built on BekTek's spacecraft Operating System (SCOS). Because SCOS source code was not available, the network protocols had to be implemented in user mode. SCOS is optimized for multiple small tasks, not the complex processes required for Internet programming. Because of SCOS' limitations in memory management, the development of this protocol stack and API was not successful: SCOS does not have the features required for an implementation like this.
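For readers unfamiliar with the Berkeley Sockets model referred to above, the minimal Python sketch below shows the standard call sequence (socket, connect, send, receive) that such an API exposes while hiding the underlying networking mechanisms; it is an ordinary TCP client with a placeholder endpoint, not code for PANSAT or SCOS.

```python
import socket

# Minimal Berkeley-style sockets usage: a small, standard set of calls
# isolates the program from the underlying network mechanisms.
HOST, PORT = "example.org", 80   # placeholder endpoint, not a PANSAT address

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))                                     # open the connection
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.org\r\n\r\n")  # write a request
    reply = s.recv(4096)                                        # read the response
    print(reply.decode(errors="replace"))
```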
APA, Harvard, Vancouver, ISO, and other styles
27

Bilderback, Mark Leslie. "Graphics.c, a simplified graphics application programming interface for the X Window environment." Virtual Press, 1995. http://liblink.bsu.edu/uhtbin/catkey/935938.

Full text
Abstract:
An often overlooked area of graphics is the ability of application programs to create graphical images. Many programs exist which allow images to be created interactively, but few offer the same ability to noninteractive application programs. By allowing an application program to create graphical images, programmers can create more user-friendly programs.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
28

Metwly, Nevin. "Common application programming interface architecture bridge for connecting heterogeneous home computing middleware." Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28492.

Full text
Abstract:
Microprocessors are now embedded in various kinds of equipment, such as home appliances and digital audio/video devices, providing a variety of services. Integrating these services and creating new ones is constantly needed to fulfil ongoing user demands. A number of issues limit the expandability of intelligent home networks; one of them is that most middleware for home computing networks is not compatible with the others. To solve this problem of middleware diversification and to ensure interoperability among them, one-to-one bridges have been developed, which connect two specific middleware varieties and perform protocol conversion; various one-to-any home network middleware bridges have also been introduced lately. In this thesis, an any-to-any bridge connecting home computing middleware is proposed. It enables any appliance under any middleware to communicate with and control any other appliance under different middleware. New middleware can also be easily integrated with the proposed architecture.
APA, Harvard, Vancouver, ISO, and other styles
29

Colon, Micah. "A New Operating System and Application Programming Interface for the EvBot Robot Platform." NCSU, 2010. http://www.lib.ncsu.edu/theses/available/etd-04012010-153840/.

Full text
Abstract:
The research presented in this thesis describes the development of a Linux distribution and a new control architecture for robots. The reasons Linux was chosen are enumerated, and the build system and setup used to generate the distribution, with support for multiple platforms, are discussed. The EvBot Abstraction Layer (EAL), a new robot control architecture and framework, is described, and its simple API is detailed.
APA, Harvard, Vancouver, ISO, and other styles
30

Dedekarginoglu, Ozgur. "Optimum Design Of Steel Structures Via Differential Evolution Algorithm And Application Programming Interface Of Sap2000." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614198/index.pdf.

Full text
Abstract:
The objective of this study is to investigate the use and efficiency of the Differential Evolution (DE) method in structural optimization. The solution algorithm developed with DE is implemented in software called SOP2011 using VB.NET. SOP2011 is automated to achieve size-optimum design of steel structures consisting of 1-D elements, such as trusses and frames, subject to the design provisions of ASD-AISC (2010) and LRFD-AISC (2010). SOP2011 works simultaneously with the structural analysis and design software SAP2000 in order to find global or near-optimum designs for real-size truss and frame structures, where the optimization problem is classified as constrained, discrete size optimization. The software interacts with SAP2000 through the Open Application Programming Interface (OAPI), which provides access to SAP2000 inputs and outputs. It is programmed to find reasonable, optimized designs for steel truss and frame structures by choosing appropriate ready sections for the structural members with minimum weight via the DE technique. Based on a comparison of the obtained results with the literature, the DE algorithm with a penalty-function implementation proves to be an efficient optimization technique among the major methods used for discrete constrained size optimization of real-size steel structures. It is also shown that by using the optimized designs obtained with DE, the weight of the structures can be reduced by up to 67.9% for steel truss structures and 41.7% for steel frame structures compared to the SAP2000 auto-design procedure, resulting in significant savings in materials, cost, work hours and energy required for the project.
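For readers unfamiliar with the DE operators mentioned above, here is a minimal, generic DE/rand/1/bin loop in Python with NumPy; it minimizes a toy continuous objective and only sketches the mutation, crossover and selection scheme, not SOP2011, its penalty functions, its discrete section tables, or its SAP2000 OAPI calls.

```python
import numpy as np

def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Plain DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(0)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([obj(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                # force at least one gene to cross
            trial = np.where(cross, mutant, pop[i])        # binomial crossover
            f = obj(trial)
            if f <= cost[i]:                               # greedy selection
                pop[i], cost[i] = trial, f
    best = cost.argmin()
    return pop[best], cost[best]

# Toy usage: minimize the sphere function in 3 dimensions.
x, fx = differential_evolution(lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)
print(x, fx)
```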
APA, Harvard, Vancouver, ISO, and other styles
31

Holzer, Scott Walter. "DESIGN AND IMPLEMENTATION OF USER LEVEL SOCKET APPLICATION PROGRAMMING INTERFACE WITH SOCKET SPLITTING AND MEDIATION." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/403.

Full text
Abstract:
Over the past few decades, the size and scope of the Internet has grown exponentially. In order to maintain support for legacy clients, new applications and services have been limited by dependence on traditional sockets and TCP, which provide no support for modifying endpoints after connection setup. This forces applications to implement their own logic to reroute communications to take advantage of composable services or handle failover. Some solutions have added socket operations that allow for endpoints to be redirected on the fly, but these have been limited in scope to handling failover and load balancing. We present two new sets of socket operations. The first set allows servers to dynamically insert and remove intermediaries into communication streams. This allows applications to decide in real time whether to use services provided by 3rd parties such as encryption, filtering, and compression. In this way, applications can employ dynamic service composition to customize communication between clients and servers. The second set of operations allows sockets to be split such that all frames written to the socket are sent to multiple recipients. This is useful for implementing fast failover and passive communication monitoring. All of these operations are implemented in user space and gracefully handle legacy TCP clients, making quick deployment of distributed Internet applications a real possibility. Performance tests of the new operations on remote hosts show that the overhead introduced is not prohibitive.
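The splitting idea described above, where every frame written to a socket is delivered to several recipients, can be pictured with a toy application-level wrapper; the sketch below is plain Python and not the user-level socket operations implemented in the thesis, and the endpoints in the usage comment are placeholders.

```python
import socket

class SplitSocket:
    """Toy illustration of socket splitting: every write is duplicated to
    all attached peers (for example a primary server plus a passive monitor).
    This is an application-level sketch, not the user-space implementation
    or the new socket operations described in the thesis."""

    def __init__(self, endpoints):
        self.peers = [socket.create_connection((host, port)) for host, port in endpoints]

    def sendall(self, data: bytes):
        for s in self.peers:          # duplicate the frame to every recipient
            s.sendall(data)

    def close(self):
        for s in self.peers:
            s.close()

# Hypothetical usage: mirror traffic to a backup and a monitoring host.
# split = SplitSocket([("primary.example", 9000), ("monitor.example", 9000)])
# split.sendall(b"frame 1")
# split.close()
```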
APA, Harvard, Vancouver, ISO, and other styles
32

Narayanaswami, Anand. "A prototype of an online speech-enabled information access tool using Java speech application programming interface." Ohio : Ohio University, 2001. http://www.ohiolink.edu/etd/view.cgi?ohiou1174063861.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Rodrigues, Fernando de Assis [UNESP]. "Coleta de dados em redes sociais: privacidade de dados pessoais no acesso via Application Programming Interface." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/149768.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The development of social networks is a topic of study for several areas, and with the increased use of the Internet in professional and leisure activities, online social networks have emerged: services with the goal of providing an interface between individuals. Some of these networks have millions of users, who agree and give their consent to the Terms of Use. The Terms of Use of these services contain the delimitation of the processes of data collection by external agents, creating a cascading effect of user identification, and can enhance activities which are detrimental to user privacy. This study seeks to verify whether systematic data collection processes over documents describing the characteristics of the data collection interfaces of Application Programming Interfaces (APIs), and over the Terms of Use, can help in identifying activities potentially harmful to the privacy of the referenced users, and can reveal the knowledge prerequisites about the technology involved in this process, the concepts required before those characteristics can be identified, and the professional areas involved in understanding information about API technology and the conditions of the Terms of Use. The objective is to propose an analysis-oriented data model for personal data privacy issues, based on the identification of the characteristics of the collection of referenced users' data via API, to assist in identifying potential actions and activities detrimental to privacy carried out during data collection. The research universe is limited to the services available on the Internet that use APIs as interoperability interfaces for their content, and the sample was defined as three APIs: those of Facebook, Twitter and LinkedIn. The methodology adopted was exploratory, qualitative analysis, with combined methods based on the exploration of the technical characteristics of the APIs and the reading of the available documents, segmented by the perspectives: professional areas involved, collection technology and knowledge prerequisites. To conduct this study, three cycles are proposed: first, the identification of the characteristics of the data collection structures and the functionality presented by the APIs; second, a data model built from the collected characteristics of the existing structures (Direct Modelling); and third, a Second-Order Modelling, with specific information about referenced users' data privacy for the analysis of privacy aspects of data shared with third parties. Finally, a list of criteria is presented for monitoring and evaluating the information in the reference documents and the Terms of Use of social networks, as a way of identifying possible relationships involving the absence of data. In the final considerations, we maintain that this environment is complex and obfuscated to the referenced users, but that the data model and the instruments developed can help to minimize the complexity of the reference documents on the interoperability of datasets with external agents and can aid the understanding of the Terms of Use.
APA, Harvard, Vancouver, ISO, and other styles
34

Rodrigues, Fernando de Assis. "Coleta de dados em redes sociais : privacidade de dados pessoais no acesso via Application Programming Interface /." Marília, 2017. http://hdl.handle.net/11449/149768.

Full text
Abstract:
Advisor: Ricardo Cesar Gonçalves Sant'Ana
Committee member: Silvana Aparecida Borsetti Gregório Vidotti
Committee member: Guilherme Ataíde Dias
Committee member: Ana Alice Baptista Rodrigues
Committee member: José Vicente Rodríguez Muñoz
Abstract: The development of social networks is a topic of study for several areas, and with the increased use of the Internet in professional and leisure activities, online social networks have emerged: services with the goal of providing an interface between individuals. Some of these networks have millions of users, who agree and give their consente to the Terms of Use. The Terms of Use of these services contain the delimitation of the processes of data collection by external agents, creating a cascading effect of user identification and can enhance activities which are detrimental to user privacy. This study looks to verify if the systematic data collection processes for documents which contain characteristics of the Application Programming Interfaces (APIs) data collection and the Terms of Use can help in identifying activities potentially harmful to user privacy (referenced) and reveal prerequisites of knowledge about the technology involved in this process, concepts prior to identifying characteristics and professional areas involved in understanding the technology of the API and the Terms of Use. The objective is to propose an analysis based data model on personal privacy data issues, from the identification of the characteristics of the collection of data from the referenced API to assisting in identifying potential actions and activities which are detrimental to privacy obtained through the data collection process. The research universe is limited to the services available on the Internet that use APIs as interoperability interfaces of their content and the sample was defined in three APIs: from Facebook, Twitter and LinkedIn. The methodology adopted was exploratory analysis, in qualitative form, with combined methods based on the exploitation of the technical characteristics of APIs and the reading of available documents, being segmented by the perspectives: professional ...
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
35

Francia, Matteo. "A Foundational Library for Aggregate Programming." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13090/.

Full text
Abstract:
The widespread diffusion of computational entities has contributed to the construction of highly heterogeneous distributed systems. Engineering self-organising systems around the interaction of individual devices is inherently complex, because low-level details such as communication and efficiency constrain the system design. A plethora of new languages and technologies makes it possible to design and coordinate the collective behaviour of such systems while abstracting away their individual components. Among them is the field calculus, which models distributed systems in terms of the composition and manipulation of fields, time-varying device-to-value "maps", through four operators generic and simple enough to make the model universal and to allow the verification of formal properties, such as the stabilisation of self-organising systems. Aggregate programming, built on the foundations of the field calculus, uses computational fields to provide elasticity, scalability and composition of distributed services through, for example, the Protelis language. This thesis contributes to the creation of a Protelis library for aggregate programming, through the creation of programming interfaces (APIs) suited to engineering self-organising systems of increasing complexity. The library gathers, within a single framework, heterogeneous algorithms and meta-patterns for the coordination of computational entities. Developing the library requires the design of a minimal testing environment and raises new challenges in the definition of unit and regression testing for self-organising environments. The efficiency and expressiveness of the proposed work are tested and evaluated empirically through the simulation of large-scale pervasive computing scenarios.
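To give a concrete flavour of what a computational field is, the sketch below iteratively computes the classic "gradient" building block, in which each device estimates its distance to the nearest source from its neighbours' values. It is written in plain Python as a conceptual illustration only, with a made-up device graph, and makes no claim about Protelis syntax or the library's actual API.

```python
# Conceptual sketch of a "gradient" field: each device repeatedly takes the
# minimum of (neighbour value + edge distance); sources stay at 0.
# Plain Python, not Protelis; device ids and distances are made up.
import math

neighbours = {            # device -> {neighbour: distance}
    "a": {"b": 1.0},
    "b": {"a": 1.0, "c": 2.0},
    "c": {"b": 2.0, "d": 1.0},
    "d": {"c": 1.0},
}
sources = {"a"}

field = {d: (0.0 if d in sources else math.inf) for d in neighbours}
for _ in range(len(neighbours)):          # enough synchronous rounds to stabilise
    field = {
        d: 0.0 if d in sources else min(
            (field[n] + w for n, w in neighbours[d].items()), default=math.inf)
        for d in neighbours
    }
print(field)   # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
```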
APA, Harvard, Vancouver, ISO, and other styles
36

Anstensen, Jan. "An investigation of the possibility of defining a new conditional access application programming interface for digital television receivers." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1283.

Full text
Abstract:

Digital television broadcasters use conditional access (CA) systems to protect some of their services from being viewed by people not subscribing for these services. A manufacturer of digital television receivers develops applications to handle the CA systems that the receiver shall support. A problem for the application developer is that a CA application developed for one specific CA system is usually not reusable for other CA systems because of the differences between CA systems. The CA systems are different in both their application programming interfaces (API) as well as the types of functionality that they support.

This master thesis presents a study of three APIs from different CA systems. The possibilities of defining a new CA API that supports all the functionality that is provided by existing CA APIs while still being as similar as possible to these existing APIs are investigated. The conclusion from the study is that it is not possible to define this new CA API because the studied CA systems are so different and only small parts of the provided functionality are shared between them.

APA, Harvard, Vancouver, ISO, and other styles
37

Araneda, Freccero Leonardo, and Caceres Jorge Munoz. "Towards a natural language interface for web based application programming interfaces : An analysis of state-of-the-art methods." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186433.

Full text
Abstract:
Natural language interfaces (NLIs) aim to provide users with seamless access to the functionality that computer systems offer. Much of this functionality is accessible online through web-based application programming interfaces. An NLI capable of leveraging this has the potential to be a powerful tool, and to our knowledge no previous research in this area exists. This thesis therefore investigates how existing methods and concepts can be applied to an NLI over several APIs. The thesis is based on an extensive literature study focused on the state of the art in NLI portability research. Methods from this study are discussed in the context of an NLI over APIs, and a suitable way of approaching a solution is presented. The approach divides the problem into identifying a domain for the conversation using available knowledge bases and handling specific domains individually using dialog state trackers. The main finding of the study is that it is feasible to create a scalable, general-purpose NLI over web APIs by leveraging existing research.
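The two-step approach described above, first identifying the domain of an utterance and then letting a domain-specific handler deal with it, can be pictured with a deliberately naive sketch; the keyword lists and handler behaviour below are invented for illustration and stand in for the knowledge-base lookup and dialog state trackers discussed in the thesis.

```python
# Naive sketch of the "identify domain, then dispatch" idea for an NLI over
# several web APIs. Keyword matching stands in for the knowledge-base lookup
# and the per-domain handlers stand in for dialog state trackers.
DOMAIN_KEYWORDS = {
    "weather": {"rain", "temperature", "forecast"},
    "music": {"play", "song", "album"},
}

def identify_domain(utterance):
    words = set(utterance.lower().split())
    for domain, keywords in DOMAIN_KEYWORDS.items():
        if words & keywords:
            return domain
    return None

def handle(utterance):
    domain = identify_domain(utterance)
    if domain is None:
        return "Sorry, I could not map that request to a known API."
    # A real system would update the dialog state and call the web API here.
    return f"Routing '{utterance}' to the {domain} service."

print(handle("play the new album by my favourite band"))
```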
APA, Harvard, Vancouver, ISO, and other styles
38

Ardi, Shanai. "A Nonlinear Programming Approach for Dynamic Voltage Scaling." Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2774.

Full text
Abstract:

Embedded computing systems in portable devices need to be energy efficient, yet they have to deliver adequate performance to the often computationally expensive applications. Dynamic voltage scaling is a technique that offers a speed versus power trade-off, allowing the application to achieve considerable energy savings and, at the same time, to meet the imposed time constraints.

In this thesis, we explore the possibility of using optimal voltage scaling algorithms based on nonlinear programming at the system level, for a complex multiprocessor scheduling problem. We present an optimization approach to the modeled nonlinear programming formulation of the continuous voltage selection problem excluding the consideration of transition overheads. Our approach achieves the same optimal results as the previous work using the same model, but due to its speed, can be efficiently used for design space exploration. We validate our results using numerous automatically generated benchmarks.
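To make the speed versus energy trade-off concrete: under the common simplified model in which energy grows quadratically with frequency, the continuous problem of minimizing total energy subject to a shared deadline has a uniform-frequency optimum. The toy Python script below computes that optimum and sanity-checks it against a feasible non-uniform assignment; the numbers and the single-deadline model are illustrative and are not the multiprocessor task graph, power model, or solver used in the thesis.

```python
# Toy continuous voltage/frequency selection: tasks with cycle counts C_i,
# a shared deadline D, and energy modelled as sum(C_i * f_i**2).
# Under this model the optimum runs every task at the same frequency
# f = sum(C_i) / D (a standard Lagrangian result), which we sanity-check
# against a feasible but non-uniform assignment.
C = [2e6, 5e6, 3e6]        # cycles per task (made-up numbers)
D = 0.05                   # shared deadline in seconds

f_opt = sum(C) / D         # uniform optimal frequency in Hz
energy = lambda freqs: sum(c * f**2 for c, f in zip(C, freqs))
deadline_ok = lambda freqs: sum(c / f for c, f in zip(C, freqs)) <= D + 1e-9

uniform = [f_opt] * len(C)
non_uniform = [1.5 * f_opt, f_opt, f_opt]   # still meets the deadline, costs more energy

print("uniform    :", energy(uniform), deadline_ok(uniform))
print("non-uniform:", energy(non_uniform), deadline_ok(non_uniform))
```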

APA, Harvard, Vancouver, ISO, and other styles
39

Moua, Theng C. "Application Programmer's Interface (API) for heterogeneous language environment and upgrading the legacy embedded software." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA397557.

Full text
Abstract:
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, Sept. 2001.
Thesis advisor: Berzins, Valdis. "September 2001." Includes bibliographical references (p. 87-88). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Yuefan. "Proximal Policy Optimization in StarCraft." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1505267/.

Full text
Abstract:
Deep reinforcement learning is an area of research that has blossomed tremendously in recent years and has shown remarkable potential in computer games. Real-time strategy games have been an important field of game artificial intelligence for several years. This thesis introduces an algorithm used to train agents to fight against computer bots. Games are excellent tools for testing deep reinforcement learning algorithms because they offer valuable insight into how well an algorithm can perform in isolated environments without real-life consequences, and real-time strategy games in particular are a very complex genre that challenges artificial intelligence agents in both short-term and long-term planning. The thesis first reviews the history of deep learning and reinforcement learning and then applies them to StarCraft. Proximal policy optimization (PPO) retains some of the benefits of trust region policy optimization (TRPO) while being much simpler to implement, more general across environments, and better in sample complexity. The StarCraft environment, the Brood War Application Programming Interface (BWAPI), is open source and available for testing. The results show that PPO works well in BWAPI and trains units to defeat the opponents; the algorithm presented in the thesis is corroborated by experiments.
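The core of PPO mentioned above is the clipped surrogate objective, which limits how far the new policy may move from the old one in a single update. The NumPy sketch below evaluates that objective for a batch of made-up probability ratios and advantage estimates; it is independent of StarCraft, BWAPI, and any particular network architecture.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from PPO:
    mean over the batch of min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))

# Made-up batch: probability ratios pi_new/pi_old and advantage estimates.
ratio = np.array([0.7, 1.0, 1.4, 2.0])
advantage = np.array([1.0, -0.5, 2.0, -1.0])

print(ppo_clip_objective(ratio, advantage))
# A gradient-ascent step on this objective (with respect to the policy
# parameters behind `ratio`) is what keeps each policy update small.
```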
APA, Harvard, Vancouver, ISO, and other styles
41

Blommendahl, Simon. "An analysis of API usability and Azure API management." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131750.

Full text
Abstract:
In today's computing environments, systems are getting bigger and more complex every day. The motivating factor is that customers want to achieve more and more with their computer systems than before, and the only way to solve this is to use even more APIs (application programming interfaces) in those systems. When a system uses more APIs, there is a chance that it provides the same type of API twice, which is of course a waste of storage and resources. In addition, the more APIs a system contains, the bigger the risk of mismanaging them; in the worst case this can result in security breaches or data leaks. This thesis investigates specific APIs provided for a customer of Sigma IT Consulting. The aim is to evaluate and organize the APIs according to usability criteria. The main focus of the evaluation is the available documentation, which is evaluated through a questionnaire survey distributed to senior software developers at Sigma IT Consulting in Växjö. Conclusions are then drawn from the survey results to determine whether Azure API management (a service intended to make a system more user-friendly) organizes APIs in a way that puts API usability first. Unfortunately, Azure API management offered no possibility whatsoever to customize the placement of APIs in a system; the only ordering available is alphabetical. Therefore, a prototype with more sorting functionality than Azure API management is also presented in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
42

Zemanovičová, Monika. "Řízení informačních toků využíváním systému Business Intelligence ve vybrané firmě." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2015. http://www.nusl.cz/ntk/nusl-224861.

Full text
Abstract:
This diploma thesis proposes the use of Business Intelligence tools in the chosen company. It considers the installation costs, assesses the economic benefits and, based on the analysis, proposes appropriate solutions to the currently unsatisfactory situation in the company.
APA, Harvard, Vancouver, ISO, and other styles
43

Domingues, Miguel Brazão. "Core language for web applications." Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/5103.

Full text
Abstract:
Dissertation presented within the scope of the Master's programme in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering.
Web applications face a very high demand for rapid development and constant change. Several languages were created for this kind of application; they are very flexible but often trade the benefits of strongly-typed programming languages for untyped interpreted ones. With such languages, the interaction between the different layers of a web application is usually developed using dialects and programming conventions with no real mechanical verification between the client and server sides, or between the SQL code within the application and the database. We present a typed core language for web applications that integrates the typing of the interface definition, the business logic, and the database manipulation, representing these interactions at a high level of abstraction. By using a single typed language with its own constructs for defining the interface and the interaction with the database, it becomes possible to perform static checks, thereby avoiding execution errors caused by the usual heterogeneity of web applications. We also describe the implementation of a prototype with a highly flexible programming environment for our language that allows application development and publishing to be done through a web interface, interacting directly with the application without losing the integrity checks. This kind of development relies on an agile development methodology: modifications to the application are activated through a dynamic reconfiguration mechanism, avoiding recompilation of the application and a system restart.
APA, Harvard, Vancouver, ISO, and other styles
44

Gu, Pei. "Prototyping the simulation of a gate level logic application program interface (API) on an explicit-multi-threaded (XMT) computer." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2626.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
45

Chuindja, Ngniah Christian. "Application of Web Mashup Technology to Oyster Information Services." ScholarWorks@UNO, 2012. http://scholarworks.uno.edu/td/1568.

Full text
Abstract:
Web mashup is a lightweight technology used to integrate data from remote sources without direct access to their databases. As a data consumer, a Web mashup application creates new content by retrieving data through the Web application programming interface (API) provided by the external sources. As a data provider, the service program publishes its Web API and implements the specified functions. In the project reported in this thesis, we have implemented two Web mashup applications to enhance the Web site oystersentinel.org: the Perkinsus marinus model and the Oil Spill model. Each model overlays geospatial data from a local database on top of a coastal map from Google Maps. In addition, we have designed a Web-based data publishing service. In this experimental system, we illustrate a successful Web mashup interface that allows outside developers to access the data about the local oyster stock assessment.
APA, Harvard, Vancouver, ISO, and other styles
46

Bhardwaj, Yogita. "Reverse Engineering End-user Developed Web Applications into a Model-based Framework." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/33150.

Full text
Abstract:
The main goal of this research is to facilitate end-user and expert developer collaboration in the creation of a web application. This research created a reverse engineering toolset and integrated it with Click (Component-based Lightweight Internet-application Construction Kit), an end-user web development tool. The toolset generates artifacts to facilitate collaboration between end-users and expert web developers when the end-users need to go beyond the limited capabilities of Click. By supporting smooth transition of workflow to expert web developers, we can help them in implementing advanced functionality in end-user developed web applications. The four artifacts generated include a sitemap, text documentation, a task model, and a canonical representation of the user interface. The sitemap is automatically generated to support the workflow of web developers. The text documentation of a web application is generated to document data representation and business logic. A task model, expressed using ConcurTaskTrees notation, covers the whole interaction specified by the end-user. A presentation and dialog model, represented in User Interface Markup Language (UIML), describe the user interface in a declarative language. The task model and UIML representation are created to support development of multi-platform user interfaces from an end-user web application. A formative evaluation of the usability of these models and representations with experienced web developers revealed that these representations were useful and easy to understand.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
47

Charles-Elie, Simon. "Development of a tool allowing to create and use JSON schemas so as to enhance the validation of existing projects." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210744.

Full text
Abstract:
A mobile application is typically divided into two sides that communicate with each other: the front-end (what the user can see and interact with on the phone) and the back-end (the hidden "server" side, which processes requests from the front-end). Ways to improve the production cycle are constantly investigated by companies such as Applidium, a French startup specialized in mobile applications. For instance, the firm often has to deal with external back-ends that are not properly documented, which makes product development intricate. Furthermore, test and documentation files for certain parts of projects are written manually, which is time consuming, and are all largely based on the same information (back-end descriptions). Hence, this information frequently finds itself scattered across different files, sometimes in different versions. Having identified the issues that most regularly disrupt the work of the company's employees, a number of goals were set to solve them, notably centralizing all back-end-related information into one authoritative source and automating the generation of test and documentation files. A tool (in the form of a web application) allowing users to describe back-ends, called Pericles, is then proposed as the outcome of the master thesis, to deal with the described problems and materialize the defined objectives. Finally, a qualitative evaluation is performed through a questionnaire designed to assess how users feel the tool helps them in their work, which constitutes the metric for this project. The evaluation suggests that the implemented tool is relevant with respect to the fixed goals and is likely to help Applidium's developers and project managers by making the development and validation of projects easier.
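The kind of machine-checkable back-end description the abstract refers to can be illustrated with a JSON Schema check. The schema, field names and payload below are invented, and the example uses the third-party jsonschema package rather than anything produced by Pericles.

```python
from jsonschema import validate, ValidationError  # third-party "jsonschema" package

# Hypothetical description of one back-end response, written as a JSON Schema.
USER_SCHEMA = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
    "required": ["id", "name"],
}

# A payload as the front-end might receive it from the back-end.
payload = {"id": 42, "name": "Ada"}

try:
    validate(instance=payload, schema=USER_SCHEMA)   # raises if the contract is broken
    print("payload matches the documented back-end contract")
except ValidationError as err:
    print("contract violation:", err.message)
```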
APA, Harvard, Vancouver, ISO, and other styles
48

Sun, Tao. "Carrier Grade Adaptation for an IP-based Multimodal Application Server: Moving the SoftBridge into SLEE." Thesis, University of the Western Cape, 2004. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7776_1370594549.

Full text
Abstract:

Providing carrier grade characteristics for Internet Protocol (IP) communication applications is a significant problem for IP application providers wishing to offer integrated services that span IP and telecommunication networks. This thesis addresses the provision of life-cycle management, which is only one carrier grade characteristic, for a SoftBridge application, which is an example of an IP communication application. A SoftBridge provides semi-synchronous multi-modal IP-based communication. The work related to IP-Telecommunication integrated services and the SoftBridge is analyzed with respect to life-cycle management in a literature review. It is suggested to use an Application Server in a Next Generation Network (NGN) to provide life-cycle management functionality for IP-Telecommunication applications. In this thesis, the Application Server is represented by a JAIN Service Logic Execution Environment (JSLEE), in which a SoftBridge application can be deployed, activated, deactivated, uninstalled and upgraded online. Two methodologies are applied in this research: exploratory prototyping, which evolves the development of a SoftBridge application, and empirical comparison, which is concerned with the empirical evaluation of a SoftBridge application in terms of carrier grade capabilities. A SoftBridge application called SIMBA provides a Deaf Telephony service similar to a previous Deaf Telephony SoftBridge; however, SIMBA's SoftBridge design and implementation are unique to this thesis. In order to test the life-cycle management ability of SIMBA, an empirical evaluation is carried out, including experiments on life-cycle management and call-processing performance. The final experimental results show that a JSLEE is able to provide life-cycle management for SIMBA without causing a significant decrease in performance. In conclusion, life-cycle management can be provided for a SoftBridge application by using an Application Server such as a JSLEE. Furthermore, the results indicate that the approach of using Application Server (JSLEE) integration should be sufficiently general to provide life-cycle management, and indeed other carrier grade capabilities, for other IP communication applications. This allows IP communication applications to be integrated into an NGN.

APA, Harvard, Vancouver, ISO, and other styles
49

Gardian, Ján. "Serverová aplikace pro zpracování dat z databáze MySQL a jejich interpretaci." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-242020.

Full text
Abstract:
This diploma thesis is about creating a server application that processes and interprets data from a database. The main aim of the application is to process a large number of database requests in a real-time environment. The provided database contains records of measured download speed and quality of mobile connections over different radio technologies from various providers. The measured data are sent by users all around the world, and the amount of collected data keeps growing. The server application can therefore adapt to the increasing size of the database thanks to aggregation. This aggregation method and the use of indexes in database tables are discussed further in the theoretical part; adding indexes to tables in particular produces a significant acceleration of database request processing. The final product of this thesis is an application that consists of three components: a server application running the aggregation, a website that interprets the measured data, and a back-end interface that also provides the measured data. The data on the website are presented as graphs for different countries and radio technologies. The web address and a user manual for the finished applications are provided in the fourth chapter of the thesis. The last part of the thesis presents various speed tests of the programmed application that confirm the effectiveness of the selected methods for accelerating work with the database.
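The two speed-ups singled out in the abstract, indexing the queried columns and pre-aggregating measurements into summary tables, can be sketched in a few lines. The example below uses Python's built-in sqlite3 module with made-up column names purely to illustrate the idea; it is not the thesis's MySQL schema or aggregation job.

```python
import sqlite3

# Illustration of the two speed-ups discussed above: an index on the queried
# columns and a pre-aggregated summary table. Schema and names are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE measurement (country TEXT, technology TEXT, download_kbps REAL);
    INSERT INTO measurement VALUES
        ('SE', 'LTE', 24000), ('SE', 'LTE', 31000), ('CZ', '3G', 4200);

    -- Index the columns the reporting queries filter and group by.
    CREATE INDEX idx_measurement_country_tech ON measurement (country, technology);

    -- Pre-aggregate once so the website reads small summary rows, not raw data.
    CREATE TABLE measurement_summary AS
        SELECT country, technology, COUNT(*) AS samples, AVG(download_kbps) AS avg_kbps
        FROM measurement GROUP BY country, technology;
""")
for row in con.execute("SELECT * FROM measurement_summary ORDER BY country"):
    print(row)
```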
APA, Harvard, Vancouver, ISO, and other styles
50

Monk, Andrew Michael. "Exploration into the Use of a Software Defined Radio as a Low-Cost Radar Front-End." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8742.

Full text
Abstract:
Inspection methods for satellites post-launch are currently expensive and/or dangerous. To address this, BYU, in conjunction with NASA, is designing a series of small satellites called CubeSats. These small satellites are designed to be launched from a satellite and to visually inspect the launching body. The current satellite revision passively tumbles through space and is appropriately named the passive inspection cube satellite (PICS). The next revision actively maintains translation and rotation relative to the launching satellite and is named the translation, rotation inspection cube satellite (TRICS). One of the necessary sensors aboard this next revision is a means to measure distance. This work explores the feasibility of using a software defined radio as a small, low-cost front end for a ranging radar to fulfill this need. For this work, the LimeSDR-Mini is selected due to its low cost, small form factor, full duplex operation, and open-source hardware/software. Additionally, due to the channel characteristics of space, the linear frequency modulated continuous-wave (LFMCW) radar is selected as the radar architecture because of its ranging capabilities and simplicity. The LFMCW radar theory and simulation are presented. Two programming methods for the LimeSDR-Mini are considered: GNU Radio Companion and the pyLMS7002Soapy API. GNU Radio Companion is used for initial exploration of the LimeSDR-Mini and confirms its data streaming (RX and TX) and full duplex capabilities. The pyLMS7002Soapy API demonstrates further refined control over the LimeSDR-Mini while providing platform independence and deployability. This work concludes that the LimeSDR-Mini is capable of acting as the front end for a ranging radar aboard a small satellite provided the pyLMS7002Soapy API is used for configuration and control. GNU Radio Companion is not recommended as a programming platform for the LimeSDR-Mini, and the pyLMS7002Soapy API requires further research to fine-tune the SDR's performance.
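The LFMCW principle behind the chosen architecture reduces to measuring a beat frequency: a chirp of bandwidth B swept over duration T places a target at range R at the beat frequency f_b = 2RB/(cT). The NumPy sketch below simulates one de-chirped echo and recovers the range from the FFT peak; all parameters are illustrative and unrelated to the LimeSDR-Mini configuration or the pyLMS7002Soapy code used in the thesis.

```python
import numpy as np

# Toy LFMCW range measurement: beat frequency f_b = 2*R*B / (c*T).
c = 3e8            # speed of light, m/s
B = 50e6           # sweep bandwidth, Hz (illustrative)
T = 1e-3           # sweep duration, s
fs = 2e6           # sample rate of the de-chirped (beat) signal, Hz
R_true = 600.0     # simulated target range, m

t = np.arange(0, T, 1 / fs)
tau = 2 * R_true / c                       # round-trip delay
k = B / T                                  # chirp slope, Hz/s
# The dominant term of the mixer output is a tone at the beat frequency k*tau.
beat = np.cos(2 * np.pi * (k * tau) * t)

# Estimate the beat frequency from the FFT peak, then convert back to range.
spec = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(len(beat), 1 / fs)
f_beat = freqs[np.argmax(spec[1:]) + 1]    # skip the DC bin
R_est = f_beat * c * T / (2 * B)
print(f"estimated range: {R_est:.1f} m (true {R_true} m)")
```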
APA, Harvard, Vancouver, ISO, and other styles
