IMEKO Event Proceedings Search


Gioele Barabucci, Paolo Ciancarini, Angelo Di Iorio, Fabio Vitali
Measuring the domain-oriented quality of diff algorithms

Software changes over time: not only the source code but also its models and documentation, configuration files, message payloads, and so on. Diff algorithms help users track such evolution. These algorithms vary widely in efficiency, resource consumption, internal strategies, and final results. The focus of this paper is on measuring and comparing the quality of the output produced by these algorithms. There is no univocal definition of quality in this context, so we propose a top-down approach to characterise deltas, in order to investigate which algorithm is the most suitable for a given domain and a given class of users. Some measurement experiments on XML diff algorithms are also presented.
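
The paper's own domain-oriented quality measures are not given in the abstract; as a minimal, purely illustrative stand-in for one naive proxy (delta size counted as the number of edit operations), the sketch below runs Python's difflib over two invented XML fragments. The fragments and the metric are assumptions, not the paper's benchmark.

```python
import difflib

# Two illustrative XML fragments (hypothetical; not from the paper's test set).
old = """<book>
  <title>Measurement</title>
  <year>2012</year>
</book>""".splitlines()

new = """<book>
  <title>Measurement and Estimation</title>
  <author>Anonymous</author>
  <year>2013</year>
</book>""".splitlines()

# Count edit operations produced by a line-based diff as a naive "delta size" proxy.
ops = [op for op in difflib.SequenceMatcher(None, old, new).get_opcodes()
       if op[0] != "equal"]
print(f"edit operations: {len(ops)}")
for tag, i1, i2, j1, j2 in ops:
    print(tag, old[i1:i2], "->", new[j1:j2])
```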

Luigi Buglione
Software Product Quality: Some Thoughts about its Evolution and Perspectives

From the mid-'70s on, plenty of attention has been devoted to evaluating and measuring software quality. As Tom DeMarco said: "you cannot control what you cannot measure". But it is also true that "you cannot measure what you cannot define" and, again, "you cannot define what you don't know". Thus, starting from a definition is the priority for any activity, and measures should also be derived from a common, shared definition. The FCM (Factor-Criteria-Metric) model, proposed in 1977 for the US Air Force, was the first 'quality model' to state a "three-tier" structure for defining what quality could be for a software product. Later, Boehm (1978) and ISO (1991), with the first version of the 9126 model (now evolved into the 25010:2011 one in the SQuaRE standard series), did the same exercise. Other, perhaps less known, models and taxonomies have also been created and proposed in the technical literature for the same purpose (e.g. FURPS+, ECSS-E-10A, ISO 21351:2005, etc.). What very few do is to understand (and deal with, accordingly) that 'quality' means 'non-functional' (or at least a large part of the ISO definition of NFR – Non-Functional Requirements). From a measurement perspective this means dealing with a relatively unexplored area, with plenty of possible developments. In fact, a FUR (Functional User Requirement) describes 'what' a piece of software can do through its functionalities, and FPA (Function Point Analysis) – whatever the variant adopted – tries to express such a 'functional dimension' in a single sizing number. Dealing with NFRs and software quality is a much more complex task, because of the large number of attributes composing 'quality'. Each category in one of the aforementioned quality models could become a separate issue, as 'functionality' is now. Taking several attributes into account at the same time and determining a 'quality profile' for a certain type of software will be one of the next decade's challenges. Estimators will need to understand better and better which NFR-related (quality) measures to include (at least two) as independent proxies in estimation models, allowing them to reduce MRE (Mean Relative Error) figures as much as possible, saving project resources and improving the overall project value for its stakeholders. In order to do that, this paper discusses, from an evolutionary and measurement perspective, what software quality has been, is, and should or could be perceived and defined as in the coming years.
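
As a concrete reminder of the accuracy figure the abstract refers to, the sketch below computes the relative error per project and its mean (MRE) over a small portfolio; the effort values are invented for illustration and are not data from the paper.

```python
def relative_error(actual: float, estimate: float) -> float:
    """Magnitude of the relative error for a single project."""
    return abs(actual - estimate) / actual

def mean_relative_error(actuals, estimates):
    """Mean Relative Error (MRE) over a portfolio of projects."""
    return sum(relative_error(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

# Invented example: actual vs estimated effort in person-hours.
actual_effort = [1200, 800, 450, 2100]
estimated_effort = [1000, 900, 500, 1800]
print(f"MRE = {mean_relative_error(actual_effort, estimated_effort):.2%}")
```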

Luca Santillo
Error Propagation in Software Measurement & Estimation

The real challenge in any activity is to minimize as much as possible the error between an estimate and an actual value, whatever the phenomenon to be evaluated. When dealing with software, the number of proxies can be quite high: an algorithm taking one or more independent variables (measures) is applied to produce one or more output variables (estimates) for a series of quantities, typically effort, cost, time, quality, or other aspects of the software being developed. ISO has also recently proposed a specific standard on measurement (ISO/IEC 15939), with a glossary aligned to the metrology field and to the International Vocabulary of Metrology (VIM). Estimation may be seen as a "black art", but error is intrinsic in estimates and must be managed. Thus, regardless of the estimation model (algorithm) being used, practitioners must face the uncertainty aspects of the process: errors in the initial measures do affect the derived metrics (or estimated values for indirect variables). Measurement theory provides an accurate way to evaluate such error propagation for the algorithmic derivation of variable values from direct measures, as in the GUM (Guide to the Expression of Uncertainty in Measurement). Although some software estimation models already propose confidence ranges on their results, the formal application of error propagation can yield some surprising results, depending on the mathematical functional form underlying the model being examined. Building on a previous paper, this one discusses the propagation of errors in software measurement, with applications and examples based on some of the most common software measurement methods and estimation models, such as Function Point Analysis (FPA) for product sizing and COCOMO (Constructive Cost Model) for effort and/or duration, as well as others, also updating the discussion with new advancements in the software quality field, in particular about product NFRs (Non-Functional Requirements). A few cases and examples will be shown, in order to stimulate a critical analysis of the methods and models being examined from a possibly new perspective, with regard to the accuracy they can offer in practice.
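
To make the idea of GUM-style propagation through an estimation model concrete, here is a minimal sketch for a COCOMO-like power-law effort model; the coefficients, the size estimate, and its standard uncertainty are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical COCOMO-like effort model: E = a * S**b, with S in KLOC.
# The coefficients a, b and the size uncertainty below are illustrative assumptions.
a, b = 2.94, 1.10
S, u_S = 50.0, 5.0           # estimated size and its standard uncertainty (KLOC)

E = a * S**b                 # nominal effort (person-months)
dE_dS = a * b * S**(b - 1)   # sensitivity coefficient dE/dS
u_E = abs(dE_dS) * u_S       # GUM first-order propagation (single input quantity)

print(f"E = {E:.1f} PM, u(E) = {u_E:.1f} PM "
      f"({u_E / E:.1%} relative vs {u_S / S:.1%} on size)")
# Note: for a power law the relative uncertainty is scaled by the exponent b.
```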

Vasilis Kolias, Ioannis Anagnostopoulos, Eleftherios Kayafas
Exploratory Analysis of a Terabyte Scale Web Corpus

In this paper we present a preliminary analysis of the largest publicly accessible web dataset: the Common Crawl Corpus. We measure nine web characteristics at two levels of granularity using MapReduce and comment on the initial observations over a fraction of it. To the best of our knowledge, two of the characteristics, the language distribution and the HTML version of pages, have not been analyzed in previous work, while the specific dataset has only been analyzed at the page level.
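
The abstract does not detail the MapReduce jobs; as a toy sketch of the counting pattern involved, the snippet below maps page metadata to (characteristic, value) pairs and reduces them to counts in plain Python. The field names and the in-memory page list are invented stand-ins; the real study runs over Common Crawl WARC/ARC records on a Hadoop-style cluster.

```python
from collections import Counter
from itertools import chain

# Tiny in-memory stand-in for crawled page metadata.
pages = [
    {"lang": "en", "doctype": "HTML 4.01"},
    {"lang": "de", "doctype": "HTML5"},
    {"lang": "en", "doctype": "HTML5"},
]

def mapper(page):
    # Emit one (characteristic, value) pair per measured characteristic.
    yield ("lang", page["lang"])
    yield ("doctype", page["doctype"])

def reducer(pairs):
    # Count occurrences per (characteristic, value) key.
    return Counter(pairs)

print(reducer(chain.from_iterable(mapper(p) for p in pages)))
```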

Zygmunt L. Warsza, Stefan Kubisa
Midrange as estimator of measured value for samples from population of uniform and Flatten-Gaussian distributions

In this paper the statistical properties of the midrange as an estimator of the measured value, for samples of varying numbers of observations taken from a population with a uniform distribution, are examined by Monte Carlo simulation. The midrange of such samples has a smaller standard deviation than the mean value recommended by the GUM (Fig. 1). A distribution similar to Student's t-distribution and an expanded uncertainty are also calculated for such samples (Sections 3 and 4). In Section 5 it is found that for samples from a population with a Flatten-Gaussian distribution, the advantage of the midrange decreases quickly as the share of the normal distribution increases. The considerations are illustrated with figures, and final conclusions are given.
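
A miniature version of the kind of Monte Carlo comparison described can be sketched as follows; the sample size, trial count, and uniform population are assumptions chosen only to illustrate the effect, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 100_000          # sample size and number of Monte Carlo trials

# Samples from a uniform population (true value = distribution midpoint = 0.5).
x = rng.uniform(0.0, 1.0, size=(trials, n))
mean = x.mean(axis=1)
midrange = 0.5 * (x.min(axis=1) + x.max(axis=1))

print(f"std of mean:     {mean.std(ddof=1):.4f}")     # ~ 1/sqrt(12*n) ~ 0.091 for n = 10
print(f"std of midrange: {midrange.std(ddof=1):.4f}")  # noticeably smaller for uniform data
```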

Nicola Giaquinto, Laura Fabbiano, Amerigo Trotta, Gaetano Vacca
About the Frequentist and the Bayesian Approach to Uncertainty

There are two well-known and different approaches to statistical inference and hypothesis testing, i.e. the frequentist (or orthodox) and the Bayesian one. Consequently, there are also (if one stays in the framework of probability theory) two rival approaches to uncertainty. The present work is partly a tutorial, aimed at explaining the basic aspects of the two approaches, and their relationship with the GUM; and partly a demonstration that the implementation of the Bayesian approach in the GUM Supplement 1 is too rigid. In particular, objective Bayesianism is incompatible with the propagation of distributions prescribed in Supplement 1.
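
For readers unfamiliar with the propagation of distributions prescribed by GUM Supplement 1, the sketch below shows the basic Monte Carlo mechanics on an arbitrary two-input model; the model Y = X1 * X2 and the input distributions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000  # number of Monte Carlo trials

# Illustrative measurement model Y = X1 * X2.
# GUM-S1 propagation of distributions: sample the input PDFs, push them
# through the model, then summarize the output distribution.
x1 = rng.normal(10.0, 0.2, M)        # Gaussian input
x2 = rng.uniform(1.9, 2.1, M)        # rectangular (uniform) input
y = x1 * x2

estimate = y.mean()
u = y.std(ddof=1)
lo, hi = np.percentile(y, [2.5, 97.5])   # 95 % coverage interval
print(f"y = {estimate:.3f}, u(y) = {u:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```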

Daniel Belega, Dominique Dallet, Dario Petri
Sine-Wave Amplitude Estimation by a Two-Point Interpolated DFT Method Robust to Spectral Interference from the Image Component

This paper proposes a new two-point Interpolated Discrete Fourier Transform (IpDFT) method for the amplitude estimation of a sine-wave, with a high capability of rejecting the spectral interference from the image component. The method is based on the Maximum Sidelobe Decay (MSD) windows. The analytical expressions of both the proposed amplitude estimator and its variance due to additive white noise are derived. Computer simulations and experimental results show that the proposed estimator outperforms both the classical IpDFT and the three-point IpDFT methods when the number of acquired sine-wave cycles is small.
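
The proposed estimator itself is not given in the abstract. As a point of reference only, here is a sketch of the classical two-point IpDFT amplitude estimator with a Hann window (the lowest-order MSD window), i.e. one of the baselines the paper compares against; the signal parameters and the use of Grandke's interpolation formula are assumptions for illustration.

```python
import numpy as np

# Classical two-point IpDFT amplitude estimation with a Hann window.
# This is a baseline sketch, not the paper's proposed image-rejecting estimator.
N, fs = 256, 1000.0
f0, A, phi = 52.3, 1.0, 0.7                  # true tone parameters (invented)
n = np.arange(N)
x = A * np.cos(2 * np.pi * f0 / fs * n + phi)

X = np.abs(np.fft.rfft(x * np.hanning(N)))
k = np.argmax(X[1:-1]) + 1                   # highest spectral line (skip DC/Nyquist)
sign = 1 if X[k + 1] >= X[k - 1] else -1     # interpolate toward the larger neighbour
alpha = X[k + sign] / X[k]
delta = sign * (2 * alpha - 1) / (alpha + 1) # fractional bin offset (Grandke, Hann window)

# Invert the Hann main-lobe shape to recover the amplitude.
corr = np.pi * delta / np.sin(np.pi * delta) if delta != 0 else 1.0
A_hat = 4.0 * X[k] / N * abs(corr) * abs(1 - delta**2)
print(f"estimated amplitude: {A_hat:.4f} (true {A})")
```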

Alessandro Ferrero, Marco Prioli, Simona Salicone
Possibility distribution: a new mathematical tool to express measurement uncertainty

In recent years, the authors have proposed the Theory of Evidence as a mathematical framework to handle measurement uncertainty, and the possibility distribution as the mathematical variable to represent it. The mathematical Theory of Evidence is more general than the well-known probability theory and appears to be more suitable for representing all possible kinds of uncertainty contributions. The aim of this paper is to give a general overview of this theory, showing its potential and the results obtained.
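
To make the possibility-distribution representation concrete, the sketch below builds a triangular possibility distribution around a measured value and extracts its alpha-cuts, which behave as nested uncertainty intervals; the measured value, the half-width, and the triangular shape are illustrative assumptions, not results from the paper.

```python
import numpy as np

# Triangular possibility distribution centred on the measured value (illustrative).
x_best, half_width = 10.0, 0.3   # measured value and maximum deviation

def possibility(x):
    """Triangular possibility (membership) function: 1 at x_best, 0 outside the support."""
    return np.clip(1.0 - np.abs(x - x_best) / half_width, 0.0, 1.0)

def alpha_cut(alpha):
    """Interval of values whose possibility is at least alpha (a nested uncertainty interval)."""
    spread = (1.0 - alpha) * half_width
    return x_best - spread, x_best + spread

for a in (0.0, 0.25, 0.5, 0.75, 1.0):
    lo, hi = alpha_cut(a)
    print(f"alpha = {a:4.2f}: [{lo:.3f}, {hi:.3f}]  possibility at edges = {possibility(lo):.2f}")
```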

Nicola Pompeo, Kostiantyn Torokhtii, Enrico Silva
Design of a bitonal dielectric resonator for the measurement of anisotropic surface impedance

The measurement of the microwave surface impedance is a fundamental characterization tool for a wide class of conducting, semiconducting and superconducting materials. In many cases, material anisotropy can show up as an intrinsic or tailored property, and its measurement is often desirable. Microwave resonators can be designed to provide measurements that are at the same time nondestructive and highly sensitive, in particular with the surface perturbation method for planar samples. Rectangular resonators can be designed to preserve sensitivity to the anisotropy of the samples under study, since they can induce straight currents on the sample. In this manuscript we report on the design, based on finite-element electromagnetic simulations, of a rectangular dielectric resonator which induces straight currents on the sample, with the additional feature of simultaneous operation at two different resonant frequencies, to allow multifrequency studies, e.g. for validation of the results.
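
As background on how such resonator data are typically used, the sketch below applies the standard surface-perturbation relations that convert changes in unloaded quality factor and resonant frequency into variations of the sample surface impedance; the geometrical factor, frequencies, and Q values are hypothetical placeholders (in practice the geometrical factor would come from the electromagnetic simulation), and none of this is taken from the paper.

```python
# Surface perturbation method (standard relations): convert changes in the
# unloaded Q and in the resonant frequency into changes of the sample surface
# impedance Zs = Rs + jXs. All numbers below are purely illustrative.
Gs = 2500.0            # geometrical (resonator) factor of the sample, ohm (hypothetical)
f0 = 13.0e9            # reference resonant frequency, Hz (hypothetical)

def delta_Zs(Q_ref, Q_meas, f_meas):
    """Variation of surface resistance and reactance with respect to the reference state."""
    dRs = Gs * (1.0 / Q_meas - 1.0 / Q_ref)
    dXs = -2.0 * Gs * (f_meas - f0) / f0
    return dRs, dXs

dRs, dXs = delta_Zs(Q_ref=20_000, Q_meas=15_000, f_meas=12.9995e9)
print(f"dRs = {dRs*1e3:.2f} mOhm, dXs = {dXs*1e3:.2f} mOhm")
```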

Lucio Fiscarelli, Olaf Dunkel, Stephan Russenschuck
Improvements on rotating coil systems at CERN

A large variety of magnetic measurement requirements arises from the multiple accelerator projects at CERN, such as MedAustron, SESAME, HIE-ISOLDE, ELENA, and Linac4. Limited resources and a narrow time scale impose optimized procedures and instrumentation. Standardization of measurement equipment becomes essential in order to increase efficiency in terms of installation time and workflow. This paper gives an overview of the ongoing effort to optimize CERN measurement resources while maintaining suitable measurement quality. Flexible control and acquisition software, a standard drive unit, rotating coil systems with a standard assembly of tangential search coils, and multipurpose measurement benches are described as the main elements of an optimized development of high-precision magnetic measurement systems.
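
For context on the measurement principle behind rotating coil systems, the sketch below Fourier-analyzes a synthetic flux signal sampled over one coil revolution to separate the field harmonics; the coil sensitivity (calibration) factors are omitted and the quadrupole/sextupole amplitudes are invented, so this is only an illustration of the harmonic-extraction step, not CERN's acquisition chain.

```python
import numpy as np

# Rotating-coil principle: the flux linked by the coil, sampled over one full
# revolution, is Fourier-analyzed; each harmonic n maps to the 2n-pole field
# component (via coil sensitivity factors, omitted here). The signal is synthetic.
P = 512                                   # angular samples per turn
theta = 2 * np.pi * np.arange(P) / P

# Synthetic flux: dominant quadrupole (n = 2) plus a small sextupole (n = 3) error.
flux = 1.0 * np.cos(2 * theta + 0.1) + 0.002 * np.cos(3 * theta - 0.4)

spectrum = np.fft.rfft(flux) / (P / 2)    # complex harmonic coefficients
amplitudes = np.abs(spectrum)
print(f"sextupole relative to quadrupole: {amplitudes[3] / amplitudes[2]:.4f}  (expected 0.0020)")
```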
