Journal articles on the topic "Decompiler"

Below are the top 31 journal articles for research on the topic "Decompiler".

1

Gusarovs, Konstantins. "An Analysis on Java Programming Language Decompiler Capabilities". Applied Computer Systems 23, no. 2 (December 1, 2018): 109–17. http://dx.doi.org/10.2478/acss-2018-0014.

Abstract:
Along with new artifact development, software engineering also includes other tasks. One of these tasks is the reverse engineering of binary artifacts. This task can be performed by using special “decompiler” software. In the present paper, the author performs a comparison of four different Java programming language decompilers that have been chosen based on both personal experience and the results of a software developer survey.
2

Mikhailov, A. A., and A. E. Hmelnov. "Delphi object files decompiler". Proceedings of the Institute for System Programming of the RAS 29, no. 6 (2017): 105–16. http://dx.doi.org/10.15514/ispras-2017-29(6)-5.
3

Mihajlenko, Kristina, Mikhail Lukin, and Andrey Stankevich. "A method for decompilation of AMD GCN kernels to OpenCL". Information and Control Systems, no. 2 (April 29, 2021): 33–42. http://dx.doi.org/10.31799/1684-8853-2021-2-33-42.

Abstract:
Introduction: Decompilers are useful tools for software analysis and support in the absence of source code. They are available for many hardware architectures and programming languages. However, none of the existing decompilers support modern AMD GPU architectures such as AMD GCN and RDNA. Purpose: We aim at developing the first assembly decompiler tool for a modern AMD GPU architecture that generates code in the OpenCL language, which is widely used for programming GPGPUs. Results: We developed the algorithms for the following operations: preprocessing assembly code, searching data accesses, extracting system values, decompiling arithmetic operations and recovering data types. We also developed templates for decompilation of branching operations. Practical relevance: We implemented the presented algorithms in Python as a tool called OpenCLDecompiler, which supports a large subset of AMD GCN instructions. This tool automatically converts disassembled GPGPU code into the equivalent OpenCL code, which reduces the effort required to analyze assembly code.
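The full OpenCLDecompiler pipeline (preprocessing, data-access search, system-value extraction, type recovery, branching templates) is beyond a short example, but the core step of mapping disassembled instructions back to source-level expressions can be sketched. The Python snippet below is a hypothetical, minimal illustration rather than code from the tool; the mnemonic table and register names are assumptions chosen for readability.

# Toy illustration (not the OpenCLDecompiler tool): pattern-based translation of a few
# disassembled GCN-style vector instructions into OpenCL C expressions.
BINARY_OPS = {
    "v_add_f32": "+",
    "v_sub_f32": "-",
    "v_mul_f32": "*",
}

def decompile_instruction(line: str) -> str:
    """Translate one 'mnemonic dst, src0, src1' line into a C-like statement."""
    mnemonic, operands = line.split(None, 1)
    dst, *srcs = [op.strip() for op in operands.split(",")]
    if mnemonic in BINARY_OPS:
        return f"{dst} = {srcs[0]} {BINARY_OPS[mnemonic]} {srcs[1]};"
    if mnemonic == "v_mov_b32":
        return f"{dst} = {srcs[0]};"
    return f"/* unhandled: {line} */"

if __name__ == "__main__":
    asm = ["v_mov_b32 v0, s4", "v_add_f32 v1, v0, v2", "v_mul_f32 v3, v1, v1"]
    for line in asm:
        print(decompile_instruction(line))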
4

Li, Zhiming, Qing Wu, and Kun Qian. "Adabot: Fault-Tolerant Java Decompiler (Student Abstract)". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13861–62. http://dx.doi.org/10.1609/aaai.v34i10.7203.

Abstract:
Reverse engineering has been an extremely important field in software engineering; it helps us to better understand and analyze the internal architecture and interrelations of executables. Classical Java reverse engineering tasks include disassembly and decompilation. Traditional Abstract Syntax Tree (AST) based disassemblers and decompilers are strictly rule-defined and thus highly fault-intolerant when bytecode obfuscation is introduced for safety concerns. In this work, we view decompilation as a statistical machine translation task and propose a decompilation framework fully based on the self-attention mechanism. Through better adaptation to the linguistic uniqueness of bytecode, our model fully outperforms rule-based models and previous works based on the recurrence mechanism.
5

Harrand, Nicolas, César Soto-Valero, Martin Monperrus, and Benoit Baudry. "Java decompiler diversity and its application to meta-decompilation". Journal of Systems and Software 168 (October 2020): 110645. http://dx.doi.org/10.1016/j.jss.2020.110645.
6

Chen, Gengbiao, Zhengwei Qi, Shiqiu Huang, Kangqi Ni, Yudi Zheng, Walter Binder, and Haibing Guan. "A refined decompiler to generate C code with high readability". Software: Practice and Experience 43, no. 11 (July 13, 2012): 1337–58. http://dx.doi.org/10.1002/spe.2138.
7

Křoustek, Jakub, Fridolín Pokorný, and Dusan Kolář. "A new approach to instruction-idioms detection in a retargetable decompiler". Computer Science and Information Systems 11, no. 4 (2014): 1337–59. http://dx.doi.org/10.2298/csis131203076k.

Abstract:
Retargetable executable-code decompilation is one of the most complicated reverse-engineering tasks. Among others, it involves de-optimization of compiler-optimized code. One type of such optimization is the usage of so-called instruction idioms. These idioms are used to produce faster or even smaller executable files. On the other hand, decompilation of instruction idioms without any advanced analysis produces almost unreadable high-level language code that may confuse the user of the decompiler. In this paper, we revisit and extend the previous approach of instruction-idioms detection used in a retargetable decompiler developed within the Lissom project. The previous approach was based on detection of instruction idioms in a very early phase of decompilation (the front-end part) and it was inaccurate for architectures with a complex instruction set (e.g. Intel x86). The novel approach is based on delaying detection of idioms and reconstruction of code to a later phase (the middle-end part). For this purpose, we use the LLVM optimizer and we implement this analysis as a new pass in this tool. According to experimental results, this new approach significantly outperforms the previous approach as well as the other commercial solutions.
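As a flavor of what idiom detection in the middle end looks like, the sketch below rewrites two classic instruction idioms on a toy three-address IR; the IR and the idiom set are invented for this example and are not the Lissom implementation (which the paper realizes as an LLVM pass).

from typing import List, Tuple

Instr = Tuple[str, str, str, str]    # (op, dst, src1, src2) in an invented three-address IR

def rewrite_idioms(code: List[Instr]) -> List[Instr]:
    out = []
    for op, dst, a, b in code:
        if op == "xor" and a == b:
            # 'xor r, r' is a common compiler idiom for zeroing a register.
            out.append(("const", dst, "0", ""))
        elif op == "shl" and b.isdigit():
            # A left shift by k frequently stands for a strength-reduced multiply by 2**k.
            out.append(("mul", dst, a, str(2 ** int(b))))
        else:
            out.append((op, dst, a, b))
    return out

if __name__ == "__main__":
    program = [("xor", "r1", "r1", "r1"), ("shl", "r2", "r0", "3"), ("add", "r3", "r1", "r2")]
    for instr in rewrite_idioms(program):
        print(instr)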
8

Qasim, Syed Ali, Jared M. Smith, and Irfan Ahmed. "Control Logic Forensics Framework using Built-in Decompiler of Engineering Software in Industrial Control Systems". Forensic Science International: Digital Investigation 33 (July 2020): 301013. http://dx.doi.org/10.1016/j.fsidi.2020.301013.
9

Křoustek, Jakub, and Dusan Kolář. "Context parsing (not only) of the object-file-format description language". Computer Science and Information Systems 10, no. 4 (2013): 1673–701. http://dx.doi.org/10.2298/csis130120071k.

Abstract:
The very first step of each tool such as a linker, disassembler, or debugger is parsing of an input executable or object file. These files are stored in one of the existing object file formats (OFF). Retargetable tools are not limited to any particular target platform and they have to deal with handling of several OFFs. Handling of these formats is similar to parsing of computer languages - both of them have a predefined structure and a list of allowed constructions. However, OFF constructions are heavily mutually interconnected and they create context-sensitive units. At present, there is no generic system that can be used for OFF description and its effective parsing. In this paper, we propose a formal language that can be used for OFF description. Furthermore, we present a design of a context parser of this language that is based on formal models. The major advance of this solution is the ability to describe context-sensitive properties on the level of the language itself. This concept is planned to be used in the existing retargetable decompiler developed within the Lissom project. In this project, the language and its parser will be used for object file parsing and its automatic conversion into the internal uniform file format. It is important to say that the concept of this parser can be utilized within other programming languages.
10

Yang, Pin, Huiyu Zhou, Yue Zhu, Liang Liu, and Lei Zhang. "Malware Classification Based on Shallow Neural Network". Future Internet 12, no. 12 (December 2, 2020): 219. http://dx.doi.org/10.3390/fi12120219.

Abstract:
The emergence of a large number of new malicious code poses a serious threat to network security, and most of them are derivative versions of existing malicious code. The classification of malicious code is helpful to analyze the evolutionary trend of malicious code families and trace the source of cybercrime. The existing methods of malware classification emphasize the depth of the neural network, which has the problems of a long training time and large computational cost. In this work, we propose the shallow neural network-based malware classifier (SNNMAC), a malware classification model based on shallow neural networks and static analysis. Our approach bridges the gap between precise but slow methods and fast but less precise methods in existing works. For each sample, we first generate n-grams from their opcode sequences of the binary file with a decompiler. An improved n-gram algorithm based on control transfer instructions is designed to reduce the n-gram dataset. Then, the SNNMAC exploits a shallow neural network, replacing the full connection layer and softmax with the average pooling layer and hierarchical softmax, to learn from the dataset and perform classification. We perform experiments on the Microsoft malware dataset. The evaluation result shows that the SNNMAC outperforms most of the related works with 99.21% classification precision and reduces the training time by more than half when compared with the methods using DNN (Deep Neural Networks).
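As a rough illustration of the preprocessing step the abstract describes, the sketch below counts opcode n-grams while splitting the sequence at control-transfer instructions; this splitting rule is only one plausible reading of the paper's "improved n-gram algorithm based on control transfer instructions", and the opcode names are assumed examples rather than anything from the SNNMAC code.

from collections import Counter

CONTROL_TRANSFER = {"jmp", "je", "jne", "call", "ret"}   # assumed opcode set for the example

def opcode_ngrams(opcodes, n=3):
    """Count n-grams inside straight-line runs delimited by control-transfer opcodes."""
    counts = Counter()
    run = []
    for op in opcodes:
        if op in CONTROL_TRANSFER:
            run = []              # start a new run after a control transfer
            continue
        run.append(op)
        if len(run) >= n:
            counts[tuple(run[-n:])] += 1
    return counts

if __name__ == "__main__":
    seq = ["push", "mov", "add", "jmp", "mov", "xor", "sub", "ret", "mov", "add", "mul"]
    for gram, count in opcode_ngrams(seq).items():
        print(gram, count)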
11

Mateless, Roni, Daniel Rejabek, Oded Margalit, and Robert Moskovitch. "Decompiled APK based malicious code classification". Future Generation Computer Systems 110 (September 2020): 135–47. http://dx.doi.org/10.1016/j.future.2020.03.052.
12

Du, Yao, Mengtian Cui, and Xiaochun Cheng. "A Mobile Malware Detection Method Based on Malicious Subgraphs Mining". Security and Communication Networks 2021 (April 17, 2021): 1–11. http://dx.doi.org/10.1155/2021/5593178.

Abstract:
As mobile phone is widely used in social network communication, it attracts numerous malicious attacks, which seriously threaten users’ personal privacy and data security. To improve the resilience to attack technologies, structural information analysis has been widely applied in mobile malware detection. However, the rapid improvement of mobile applications has brought an impressive growth of their internal structure in scale and attack technologies. It makes the timely analysis of structural information and malicious feature generation a heavy burden. In this paper, we propose a new Android malware identification approach based on malicious subgraph mining to improve the detection performance of large-scale graph structure analysis. Firstly, function call graphs (FCGs), sensitive permissions, and application programming interfaces (APIs) are generated from the decompiled files of malware. Secondly, two kinds of malicious subgraphs are generated from malware’s decompiled files and put into the feature set. At last, test applications’ safety can be automatically identified and classified into malware families by matching their FCGs with malicious structural features. To evaluate our approach, a dataset of 11,520 malware and benign applications is established. Experimental results indicate that our approach has better performance than three previous works and Androguard.
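The sketch below is only a toy stand-in for the paper's subgraph mining and matching: an app's FCG is modeled as a set of caller-callee edges over named API calls, and a "malicious subgraph" signature is matched by simple edge containment. The API names are hypothetical, and a faithful implementation would need proper subgraph matching over mined signatures.

def contains_subgraph(fcg_edges, signature_edges):
    """Return True if every caller->callee edge of the signature appears in the app's FCG."""
    return signature_edges.issubset(fcg_edges)

if __name__ == "__main__":
    # Edges of a tiny, invented function call graph from a decompiled app.
    app_fcg = {
        ("onCreate", "getDeviceId"),
        ("onCreate", "openConnection"),
        ("getDeviceId", "sendTextMessage"),
    }
    # Hypothetical signature: read a device identifier, then exfiltrate it via SMS.
    malicious_signature = {("onCreate", "getDeviceId"), ("getDeviceId", "sendTextMessage")}
    print("malicious match:", contains_subgraph(app_fcg, malicious_signature))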
13

Cen, Lei, Christoher S. Gates, Luo Si, and Ninghui Li. "A Probabilistic Discriminative Model for Android Malware Detection with Decompiled Source Code". IEEE Transactions on Dependable and Secure Computing 12, no. 4 (July 1, 2015): 400–412. http://dx.doi.org/10.1109/tdsc.2014.2355839.
14

Guan, Jun, Huiying Liu, Baolei Mao, and Xu Jiang. "Android Malware Detection Based on API Pairing". Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 38, no. 5 (October 2020): 965–70. http://dx.doi.org/10.1051/jnwpu/20203850965.

Abstract:
Aiming at the problem that permission-based detection is too coarse-grained, a malware detection method based on sensitive application program interface (API) pairing is proposed. The method decompiles the application to extract the sensitive APIs corresponding to dangerous permissions, and uses the pairing of these sensitive APIs to construct an undirected graph of malicious applications and an undirected graph of benign applications. According to the importance of sensitive APIs in malware and benign applications, different weights on the same edge in the different graphs are assigned to detect Android malicious applications. Experimental results show that the proposed method can effectively detect Android malicious applications and has practical significance.
15

Escalada, Javier, Francisco Ortin, and Ted Scully. "An Efficient Platform for the Automatic Extraction of Patterns in Native Code". Scientific Programming 2017 (2017): 1–16. http://dx.doi.org/10.1155/2017/3273891.

Abstract:
Different software tools, such as decompilers, code quality analyzers, recognizers of packed executable files, authorship analyzers, and malware detectors, search for patterns in binary code. The use of machine learning algorithms, trained with programs taken from the huge number of applications in the existing open source code repositories, allows finding patterns not detected with the manual approach. To this end, we have created a versatile platform for the automatic extraction of patterns from native code, capable of processing big binary files. Its implementation has been parallelized, providing important runtime performance benefits for multicore architectures. Compared to single-processor execution, the average performance improvement obtained with the best configuration is a factor of 3.5, out of a maximum theoretical gain of a factor of 4.
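A minimal sketch of the kind of parallel pattern scanning the platform performs, assuming nothing about the authors' implementation: byte patterns are counted over chunks of a binary with a multiprocessing pool, which is how the multicore speedups of the sort reported above (around 3.5x on four cores) typically arise.

from multiprocessing import Pool
import os

PATTERNS = [b"\x55\x48\x89\xe5", b"\xc3"]   # example byte patterns (x86-64 prologue, ret)

def count_in_chunk(chunk: bytes):
    return [chunk.count(p) for p in PATTERNS]

def count_patterns(path: str, chunk_size: int = 1 << 20):
    # Note: patterns straddling a chunk boundary are missed in this simplified sketch.
    with open(path, "rb") as f:
        chunks = iter(lambda: f.read(chunk_size), b"")
        with Pool(os.cpu_count()) as pool:
            per_chunk = pool.map(count_in_chunk, chunks)
    totals = [sum(column) for column in zip(*per_chunk)]
    return dict(zip(PATTERNS, totals))

if __name__ == "__main__":
    print(count_patterns("/bin/ls"))        # any binary file path works here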
16

Zhao, Xiao Lin, Gang Hao, Chang Zhen Hu, and Zhi Qiang Li. "A Discovery Method of the Dirty Data Transmission Path Based on Complex Network". Applied Mechanics and Materials 651-653 (September 2014): 1741–47. http://dx.doi.org/10.4028/www.scientific.net/amm.651-653.1741.

Abstract:
With the increasing scale of software systems, the interaction between software elements becomes more and more complex, which leads to increased dirty data in the running software system. This may reduce system performance and cause system collapse. In this paper, we propose a discovery method for the dirty data transmission path based on complex networks. Firstly, the binary file is decompiled and the function call graph is drawn by using the source code. Then the software structure is described as a weighted directed graph based on the knowledge of complex networks. In addition, the dirty data node is marked by using the power-law distribution characteristics of the scale-free network construction of the complex network chart. Finally, we find the dirty data transmission path during the software running process. The experimental results show that the transmission path of dirty data is accurate, which confirms the feasibility of the method.
17

Chen, Tieming, Qingyu Mao, Yimin Yang, Mingqi Lv, and Jianming Zhu. "TinyDroid: A Lightweight and Efficient Model for Android Malware Detection and Classification". Mobile Information Systems 2018 (October 17, 2018): 1–9. http://dx.doi.org/10.1155/2018/4157156.

Abstract:
With the popularity of Android applications, Android malware has an exponential growth trend. In order to detect Android malware effectively, this paper proposes a novel lightweight static detection model, TinyDroid, using instruction simplification and machine learning techniques. First, a symbol-based simplification method is proposed to abstract the opcode sequence decompiled from Android Dalvik Executable files. Then, N-gram is employed to extract features from the simplified opcode sequence, and a classifier is trained for the malware detection and classification tasks. To improve the efficiency and scalability of the proposed detection model, a compression procedure is also used to reduce features and select exemplars for the malware sample dataset. TinyDroid is compared against state-of-the-art antivirus tools in the real world using the Drebin dataset. The experimental results show that TinyDroid can achieve a higher accuracy rate and a lower false alarm rate with satisfactory efficiency.
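As an illustration of the preprocessing TinyDroid describes, the sketch below abstracts Dalvik opcodes into one-letter symbols and counts N-grams over the symbol string; the symbol table here is an assumed example, not the paper's actual mapping.

from collections import Counter

def simplify(opcode: str) -> str:
    """Map a Dalvik/smali opcode to a one-letter symbol class (assumed mapping)."""
    if opcode.startswith("invoke"):
        return "I"
    if opcode.startswith(("if", "goto")):
        return "J"               # control transfer
    if opcode.startswith(("const", "move")):
        return "M"
    if opcode.startswith("return"):
        return "R"
    return "O"                   # everything else

def symbol_ngrams(opcodes, n=2):
    symbols = "".join(simplify(op) for op in opcodes)
    return Counter(symbols[i:i + n] for i in range(len(symbols) - n + 1))

if __name__ == "__main__":
    seq = ["const/4", "invoke-virtual", "move-result", "if-eqz", "invoke-direct", "return-void"]
    print(symbol_ngrams(seq))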
18

Mu, Zhiying, Zhihu Li, and Xiaoyu Li. "Structural similarity based common library detection method for Android". Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 2 (April 2021): 448–53. http://dx.doi.org/10.1051/jnwpu/20213920448.

Abstract:
The correct classifying and filtering of common libraries in Android applications can effectively improve the accuracy of repackaged application detection. However, the existing common library detection methods barely meet the requirement of large-scale app markets due to the low detection speed caused by their classification rules. Aiming at this problem, a structural similarity based common library detection method for Android is presented. The sub-packages with weak association to main package are extracted as common library candidates from the decompiled APK (Android application package) by using PDG (program dependency graph) method. With package structures and API calls being used as features, the classifying of those candidates is accomplished through coarse and fine-grained filtering. The experimental results by using real-world applications as dataset show that the detection speed of the present method is higher while the accuracy and false positive rate are both ensured. The method is proved to be efficient and precise.
19

Týma, Paul. "Transient Variable Caching in Java’s Stack-Based Intermediate Representation". Scientific Programming 7, no. 2 (1999): 157–66. http://dx.doi.org/10.1155/1999/501879.

Abstract:
Java’s stack‐based intermediate representation (IR) is typically coerced to execute on register‐based architectures. Unoptimized compiled code dutifully replicates transient variable usage designated by the programmer and common optimization practices tend to introduce further usage (i.e., CSE, Loop‐invariant Code Motion, etc.). On register based machines, often transient variables are cached within registers (when available) saving the expense of actually accessing memory. Unfortunately, in stack‐based environments because of the need to push and pop the transient values, further performance improvement is possible. This paper presents Transient Variable Caching (TVC), a technique for eliminating transient variable overhead whenever possible. This optimization would find a likely home in optimizers attached to the back of popular Java compilers. Side effects of the algorithm include significant instruction reordering and introduction of many stack‐manipulation operations. This combination has proven to greatly impede the ability to decompile stack‐based IR code sequences. The code that results from the transform is faster, smaller, and greatly impedes decompilation.
20

Yang, Yang, Xuehui Du, Zhi Yang, and Xing Liu. "Android Malware Detection Based on Structural Features of the Function Call Graph". Electronics 10, no. 2 (January 15, 2021): 186. http://dx.doi.org/10.3390/electronics10020186.

Abstract:
The openness of the Android operating system not only brings convenience to users, but also leads to attack threats from a large number of malicious applications (apps). Thus, malware detection has become a research focus in the field of mobile security. In order to solve the problems of coarse-grained feature selection and the loss of graph-structure features in current detection methods, we put forward a method named DGCNDroid for Android malware detection, which is based on the deep graph convolutional network. Our method starts by generating a function call graph for the decompiled Android application. Then the function call subgraph containing the sensitive application programming interface (API) is extracted. Finally, the function call subgraphs with structural features are trained as the input of the deep graph convolutional network. Thus the detection and classification of malicious apps can be realized. Through experimentation on a dataset containing 11,120 Android apps, the method proposed in this paper can achieve a detection accuracy of 98.2%, which is higher than other existing detection methods.
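A minimal sketch of the subgraph-extraction step described above, under assumptions of our own (a dictionary-based call graph and a fixed hop radius), not the DGCNDroid code: keep only the functions within a few call edges of a sensitive API.

from collections import deque

def sensitive_subgraph(fcg, sensitive, hops=2):
    """fcg: dict node -> set of callees; edges are treated as undirected for the walk."""
    adj = {}
    for caller, callees in fcg.items():
        for callee in callees:
            adj.setdefault(caller, set()).add(callee)
            adj.setdefault(callee, set()).add(caller)
    keep, queue = set(sensitive), deque((s, 0) for s in sensitive)
    while queue:
        node, dist = queue.popleft()
        if dist == hops:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in keep:
                keep.add(nxt)
                queue.append((nxt, dist + 1))
    # Return the induced subgraph over the kept nodes.
    return {n: fcg.get(n, set()) & keep for n in keep}

if __name__ == "__main__":
    fcg = {"main": {"a", "b"}, "a": {"sendTextMessage"}, "b": {"c"}, "c": {"d"}}
    print(sensitive_subgraph(fcg, {"sendTextMessage"}, hops=2))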
21

Yan, Jinpei, Yong Qi, and Qifan Rao. "LSTM-Based Hierarchical Denoising Network for Android Malware Detection". Security and Communication Networks 2018 (2018): 1–18. http://dx.doi.org/10.1155/2018/5249190.

Abstract:
Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning models heavily rely on expert knowledge for manual feature engineering, which still makes it difficult to fully describe malware. In this paper, we present the LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to learn directly from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequences are too long for LSTM to train on due to the gradient vanishing problem. Hence, HDN uses a hierarchical structure, whose first-level LSTM computes in parallel on opcode subsequences (we call them method blocks) to learn dense representations; the second-level LSTM can then learn and detect malware through method block sequences. Considering that malicious behavior only appears in partial sequence segments, HDN uses a method block denoise module (MBDM) for data denoising with an adaptive gradient scaling strategy based on a loss cache. We evaluate and compare HDN with the latest mainstream research on three datasets. The results show that HDN outperforms these Android malware detection methods, and it is able to capture longer sequence features and has better detection efficiency than N-gram-based malware detection, which is similar to our method.
22

Makaryan, Aleksandr, and Mikhail Karmanov. "Aspects of Analyzing the Security and Vulnerabilities of Mobile Applications". NBI Technologies, no. 1 (August 2018): 30–33. http://dx.doi.org/10.15688/nbit.jvolsu.2018.1.5.

Abstract:
This article deals with approaches to protecting mobile applications' local data on devices running the Android and iOS operating systems. The following programs have been investigated: the messengers WhatsApp, Viber, Telegram, WeChat and Signal. The analysis made it possible to classify the programs by protection mechanisms, types of stored data and required tools and technologies, as well as to identify techniques for improving the protection of locally stored data. As this research showed, locally stored application data on the device is not given enough attention in terms of protection, since in some cases this protection is based solely on the mechanisms of the device's operating system. For more reliable protection of an application's locally stored data, the following approaches should be implemented in the application: encryption of both the database as a whole and some critical data in it separately, as an additional layer of encryption; encryption of files that appear during program execution (media files, for example); coding and representation of data in the program using proprietary algorithms; the use of confusing names for critical files and data (the key file should not be called “key”, as in the case of WhatsApp), and data traps; encoding of configuration files containing sensitive information; and moving the core cryptographic transformations into a separate plug-in library, so that studying the decompiled source code for these transformations becomes meaningless.
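One of the recommendations above, adding an extra encryption layer over locally stored configuration data, could look like the following sketch using the third-party cryptography package; it is illustrative only, and the hard part in a real app is keeping the key in a platform keystore rather than next to the data.

# Minimal sketch of an extra encryption layer for locally stored configuration data.
# Real apps must fetch the key from a secure keystore, not store it beside the data
# (and certainly not in a file named "key").
from cryptography.fernet import Fernet

def protect(config_bytes: bytes):
    key = Fernet.generate_key()            # in practice: retrieved from a keystore
    token = Fernet(key).encrypt(config_bytes)
    return key, token

def recover(key: bytes, token: bytes) -> bytes:
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key, blob = protect(b'{"api_endpoint": "https://example.com", "session": "..."}')
    assert recover(key, blob) == b'{"api_endpoint": "https://example.com", "session": "..."}'
    print("encrypted blob length:", len(blob))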
23

Bodei, Chiara, Lorenzo Ceragioli, Pierpaolo Degano, Riccardo Focardi, Letterio Galletta, Flaminia Luccio, Mauro Tempesta, and Lorenzo Veronese. "FWS: Analyzing, maintaining and transcompiling firewalls". Journal of Computer Security 29, no. 1 (February 3, 2021): 77–134. http://dx.doi.org/10.3233/jcs-200017.

Abstract:
Firewalls are essential for managing and protecting computer networks. They permit specifying which packets are allowed to enter a network, and also how these packets are modified by IP address translation and port redirection. Configuring a firewall is notoriously hard, and one of the reasons is that it requires using low level, hard to interpret, configuration languages. Equally difficult are policy maintenance and refactoring, as well as porting a configuration from one firewall system to another. To address these issues we introduce a pipeline that assists system administrators in checking if: (i) the intended security policy is actually implemented by a configuration; (ii) two configurations are equivalent; (iii) updates have the desired effect on the firewall behavior; (iv) there are useless or redundant rules; additionally, an administrator can (v) transcompile a configuration into an equivalent one in a different language; and (vi) maintain a configuration using a generic, declarative language that can be compiled into different target languages. The pipeline is based on IFCL, an intermediate firewall language equipped with a formal semantics, and it is implemented in an open source tool called FWS. In particular, the first stage decompiles real firewall configurations for iptables, ipfw, pf and (a subset of) Cisco IOS into IFCL. The second one transforms an IFCL configuration into a logical predicate and uses the Z3 solver to synthesize an abstract specification that succinctly represents the firewall behavior. System administrators can use FWS to analyze the firewall by posing SQL-like queries, and update the configuration to meet the desired security requirements. Finally, the last stage allows for maintaining a configuration by acting directly on its abstract specification and then compiling it to the chosen target language. Tests on real firewall configurations show that FWS can be fruitfully used in real-world scenarios.
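FWS itself decompiles real iptables, ipfw, pf and Cisco IOS configurations into IFCL before reasoning about them; the toy sketch below only illustrates the equivalence-checking step (item ii) with the z3-solver Python bindings, a hand-written packet encoding and two hand-written rule sets, all of which are assumptions for the example.

from z3 import And, BitVec, Or, Solver, ULE, unsat

# Drastically simplified packet model: protocol number and destination port only.
proto = BitVec("proto", 8)
dport = BitVec("dport", 16)
tcp = proto == 6

# Two hand-written accept-predicates standing in for two firewall configurations.
cfg_a = Or(And(tcp, dport == 22), And(tcp, ULE(80, dport), ULE(dport, 443)))
cfg_b = And(tcp, Or(dport == 22, And(ULE(80, dport), ULE(dport, 443))))

solver = Solver()
solver.add(cfg_a != cfg_b)      # search for a packet the two configurations treat differently
if solver.check() == unsat:
    print("the two configurations are equivalent")
else:
    print("they differ, e.g. on packet:", solver.model())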
24

Picot, Jo, Vicky Copley, Jill L. Colquitt, Neelam Kalita, Debbie Hartwell, and Jackie Bryant. "The INTRABEAM® Photon Radiotherapy System for the adjuvant treatment of early breast cancer: a systematic review and economic evaluation". Health Technology Assessment 19, no. 69 (August 2015): 1–190. http://dx.doi.org/10.3310/hta19690.

Abstract:
Background: Initial treatment for early breast cancer is usually either breast-conserving surgery (BCS) or mastectomy. After BCS, whole-breast external beam radiotherapy (WB-EBRT) is the standard of care. A potential alternative to post-operative WB-EBRT is intraoperative radiation therapy delivered by the INTRABEAM® Photon Radiotherapy System (Carl Zeiss, Oberkochen, Germany) to the tissue adjacent to the resection cavity at the time of surgery. Objective: To assess the clinical effectiveness and cost-effectiveness of INTRABEAM for the adjuvant treatment of early breast cancer during surgical removal of the tumour. Data sources: Electronic bibliographic databases, including MEDLINE, EMBASE and The Cochrane Library, were searched from inception to March 2014 for English-language articles. Bibliographies of articles, systematic reviews, clinical guidelines and the manufacturer’s submission were also searched. The advisory group was contacted to identify additional evidence. Methods: Systematic reviews of clinical effectiveness, health-related quality of life and cost-effectiveness were conducted. Two reviewers independently screened titles and abstracts for eligibility. Inclusion criteria were applied to full texts of retrieved papers by one reviewer and checked by a second reviewer. Data extraction and quality assessment were undertaken by one reviewer and checked by a second reviewer, and differences in opinion were resolved through discussion at each stage. Clinical effectiveness studies were included if they were carried out in patients with early operable breast cancer. The intervention was the INTRABEAM system, which was compared with WB-EBRT, and study designs were randomised controlled trials (RCTs). Controlled clinical trials could be considered if data from available RCTs were incomplete (e.g. absence of data on outcomes of interest). A cost–utility decision-analytic model was developed to estimate the costs, benefits and cost-effectiveness of INTRABEAM compared with WB-EBRT for early operable breast cancer. Results: One non-inferiority RCT, TARGeted Intraoperative radioTherapy Alone (TARGIT-A), met the inclusion criteria for the review. The review found that local recurrence was slightly higher following INTRABEAM than WB-EBRT, but the difference did not exceed the 2.5% non-inferiority margin providing INTRABEAM was given at the same time as BCS. Overall survival was similar with both treatments. Statistically significant differences in complications were found for the occurrence of wound seroma requiring more than three aspirations (more frequent in the INTRABEAM group) and for a Radiation Therapy Oncology Group toxicity score of grade 3 or 4 (less frequent in the INTRABEAM group). Cost-effectiveness base-case analysis indicates that INTRABEAM is less expensive but also less effective than WB-EBRT because it is associated with lower total costs but fewer total quality-adjusted life-years gained. However, sensitivity analyses identified four model parameters that can cause a switch in the treatment option that is considered cost-effective. Limitations: The base-case result from the model is subject to uncertainty because the disease progression parameters are largely drawn from the single available RCT. The RCT median follow-up of 2 years 5 months may be inadequate, particularly as the number of participants with local recurrence is low. The model is particularly sensitive to this parameter. Conclusions and implications: A significant investment in INTRABEAM equipment and staff training (clinical and non-clinical) would be required to make this technology available across the NHS. Longer-term follow-up data from the TARGIT-A trial and analysis of registry data are required as results are currently based on a small number of events and economic modelling results are uncertain. Study registration: This study is registered as PROSPERO CRD42013006720. Funding: The National Institute for Health Research Health Technology Assessment programme. Note that the economic model associated with this document is protected by intellectual property rights, which are owned by the University of Southampton. Anyone wishing to modify, adapt, translate, reverse engineer, decompile, dismantle or create derivative work based on the economic model must first seek the agreement of the property owners.
25

Liang, Ruigang, Ying Cao, Peiwei Hu, and Kai Chen. "Neutron: an attention-based neural decompiler". Cybersecurity 4, no. 1 (March 5, 2021). http://dx.doi.org/10.1186/s42400-021-00070-0.

Abstract:
Decompilation aims to analyze and transform low-level programming language (PL) code such as binary code or assembly code to obtain an equivalent high-level PL. Decompilation plays a vital role in cyberspace security fields such as software vulnerability discovery and analysis, malicious code detection and analysis, and in software engineering fields such as source code analysis, optimization, and cross-language, cross-operating-system migration. Unfortunately, the existing decompilers mainly rely on experts to write rules, which leads to bottlenecks such as low scalability, development difficulties, and long cycles. The generated high-level PL code often violates code writing specifications. Further, its readability is still relatively low. The problems mentioned above hinder the efficiency of advanced applications (e.g., vulnerability discovery) based on decompiled high-level PL code. In this paper, we propose a decompilation approach based on the attention-based neural machine translation (NMT) mechanism, which converts low-level PL into high-level PL while improving legibility and keeping the result functionally similar. To compensate for the information asymmetry between the low-level and high-level PL, a translation method based on basic operations of the low-level PL is designed. This method improves the generalization of the NMT model and captures the translation rules between PLs more accurately and efficiently. Besides, we implement a neural decompilation framework called Neutron. The evaluation of two practical applications shows that Neutron’s average program accuracy is 96.96%, which is better than the traditional NMT model.
26

Ager, Mads Sig, Olivier Danvy, and Mayer Goldberg. "A Symmetric Approach to Compilation and Decompilation". BRICS Report Series 9, no. 37 (August 5, 2002). http://dx.doi.org/10.7146/brics.v9i37.21752.

Abstract:
Just as specializing a source interpreter can achieve compilation from a source language to a target language, we observe that specializing a target interpreter can achieve compilation from the target language to the source language. In both cases, the key issue is the choice of whether to perform an evaluation or to emit code that represents this evaluation. We substantiate this observation by specializing two source interpreters and two target interpreters. We first consider a source language of arithmetic expressions and a target language for a stack machine, and then the lambda-calculus and the SECD-machine language. In each case, we prove that the target-to-source compiler is a left inverse of the source-to-target compiler, i.e., it is a decompiler. In the context of partial evaluation, compilation by source-interpreter specialization is classically referred to as a Futamura projection. By symmetry, it seems logical to refer to decompilation by target-interpreter specialization as a Futamura embedding.
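The paper obtains its compilers and decompilers by partial evaluation of interpreters, which a short snippet cannot reproduce; the sketch below only restates the left-inverse property on the paper's first example pair (arithmetic expressions and a stack machine), with a directly written compiler and decompiler.

# Tiny, self-contained illustration: compile arithmetic expressions to stack code and
# decompile them back, so that decompile(compile_expr(e)) == e (a left inverse).
def compile_expr(e):
    """e is an int literal or a tuple ('+'|'*', left, right); returns stack code."""
    if isinstance(e, int):
        return [("push", e)]
    op, left, right = e
    return compile_expr(left) + compile_expr(right) + [(op,)]

def decompile(code):
    """Symbolically execute the stack code, rebuilding the expression tree."""
    stack = []
    for instr in code:
        if instr[0] == "push":
            stack.append(instr[1])
        else:
            right, left = stack.pop(), stack.pop()
            stack.append((instr[0], left, right))
    (result,) = stack
    return result

if __name__ == "__main__":
    expr = ("+", 1, ("*", 2, 3))
    code = compile_expr(expr)
    assert decompile(code) == expr      # the left-inverse property discussed in the paper
    print(code)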
27

"Decompile: automatic knowledge acquisition from concurrent reports using production system architectures". Knowledge-Based Systems 3, no. 2 (June 1990): 125. http://dx.doi.org/10.1016/0950-7051(90)90014-9.
28

Nghi Phu, Tran, Nguyen Dai Tho, Le Huy Hoang, Nguyen Ngoc Toan, and Nguyen Ngoc Binh. "An Efficient Algorithm to Extract Control Flow-Based Features for IoT Malware Detection". Computer Journal, October 28, 2020. http://dx.doi.org/10.1093/comjnl/bxaa087.

Abstract:
Control flow-based feature extraction methods can detect malicious code with higher accuracy than traditional text-based methods. Unfortunately, this approach runs into an NP-hard problem, which makes it infeasible for large and highly complex programs. To tackle this, we propose a dynamic programming algorithm for fast extraction of control flow-based features in polynomial time O(N^2), where N is the number of basic blocks in the decompiled executable code. The experimental results demonstrate that the proposed algorithm is more efficient and effective in detecting malware than the existing ones. Applying our algorithm to an Internet of Things dataset gives better results on three measures: Accuracy = 99.05%, False Positive Rate = 1.31% and False Negative Rate = 0.66%.
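The paper's exact feature definition is not reproduced here; as a generic illustration of polynomial-time control-flow feature extraction over basic blocks, the sketch below computes a reachability set for every block of a CFG by breadth-first search, roughly O(N^2) work for sparse graphs.

from collections import deque

def reachability(cfg):
    """cfg: dict block -> list of successor blocks; returns a per-block reachability set."""
    feats = {}
    for start in cfg:
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for succ in cfg.get(node, []):
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
        feats[start] = frozenset(seen - {start})
    return feats

if __name__ == "__main__":
    cfg = {"B0": ["B1", "B2"], "B1": ["B3"], "B2": ["B3"], "B3": []}
    for block, reach in sorted(reachability(cfg).items()):
        print(block, "->", sorted(reach))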
29

Baguna, Nabella L., Sifrid S. Pangemanan, and Treesje Runtu. "ANALISIS PERHITUNGAN DAN PELAPORAN PAJAK PENGHASILAN PASAL 21 PEGAWAI TETAP PADA PT. BANK RAKYAT INDONESIA KANTOR". GOING CONCERN : JURNAL RISET AKUNTANSI 12, no. 2 (November 29, 2017). http://dx.doi.org/10.32400/gc.12.2.17685.2017.

Abstract:
Income Tax Article 21 is a tax payable on income which the Taxpayer is obliged to pay. Such income takes the form of salaries, honoraria, allowances and other payments of any kind in respect of employment, services or activities performed by an individual Taxpayer in the country. The law used to regulate the tax rate, the procedure of payment and tax reporting is Act No. 36 of 2008. The purpose of this study is to find out how Income Tax Article 21 is calculated and reported at PT. Bank Rakyat Indonesia Branch Manado. The method of analysis used in this research is the descriptive method, which discusses the problem by collecting, decompiling, calculating, comparing and explaining a situation so that conclusions can be drawn covering the calculation and reporting of Article 21 for permanent employees at PT. Bank Rakyat Indonesia Branch Manado. Based on the results of the study, there is a mistake in the calculation of Income Tax Article 21 at PT. Bank Rakyat Indonesia Branch Manado, resulting in an overpayment that causes the taxpayer to incur losses. Keywords: Accounting, Income Tax Article 21
30

Rizqony, Yusril Izza, Denar Regata Akbi, and Fauzi Dwi Sumadi Setiawan. "Analisis Karakteristik Malware Joker Berdasarkan Fitur Menggunakan Metode Statik Pada Platform Android". Jurnal Repositor 2, no. 10 (September 21, 2020). http://dx.doi.org/10.22219/repositor.v2i10.1145.

Abstract:
Malware is a primary adversary of every operating system, including Android. One malware strain that circulated in mid-2019 is the Joker malware. Joker was embedded in or infected at least 23 applications and spread through Android application download platforms. To verify and determine the characteristics and behavior of the Joker malware, this study uses one of the malware analysis methods, namely static analysis. Static analysis makes it possible to identify several characteristics of the malware, such as how it works, where its activity is located within infected applications, and its origin. Two tools are used for the static analysis of the Joker malware: AndroPyTool, to extract several features from the APK, and Dex2jar, to decompile the DEX file in the APK so that some of the code it uses can be inspected. The static analysis results show that the malware collects SIM card information from the user's device, targets the countries of specific user devices via the MCC code, performs hidden broadcasts from or to infected applications, and communicates with a web server hosted on AWS services.
31

Zhu, Jie, Thanjai Vadivel, and C. B. Sivaparthipan. "Bigdata Assisted Energy Conversion Model for Innovative City Application". Journal of Interconnection Networks, August 4, 2021, 2141008. http://dx.doi.org/10.1142/s0219265921410085.

Abstract:
Centrally controlled energy conversion schemes in intelligent residential microgrids are a difficult optimization challenge because of their range of processing and power devices accessible. Typical steps to shrink the weight and seriousness of the issues are decreasing modelling precision, adding several weights, or adjusting the measurement accuracy. Nevertheless, because these interventions modify the specialization issue and thus result in various approaches as expected, this article introduces a Bigdata assisted energy conversion model (BD-ECM) and evaluates a decomposition approach to solve the initial problem recursively. Compared to the initial compact version, the decayed approach is tested to demonstrate that all versions differ less than 18.8%. Moreover, both methods contribute to the use of roughly similar structures. The results reveal that because of the existing constraints on computational capital and simulation techniques, condensed development of the common law can only be extended to moderate and limited intelligent grids. However, decentralized approaches can be dealing with sizeable dispersed generation structures. To assess the month’s environmental and strategic advantages as part of the system, researchers extend the decompiled approach to a massive smart grid. The data reveal that prices can be lowered by 14.0% in local energy exchanges and pollution by 23.9% in the situation studied.