IMEKO Event Proceedings Search


Siva Venkatachalam, Jay Raja
AN INTERNET BASED GAGE R&R AND UNCERTAINTY ANALYSIS SYSTEM

In a world where the gap between the macro and the micro scales is increasing, measurement and measurement procedures assume significant importance. In an effort to produce high-quality products, industries employ state-of-the-art manufacturing techniques. The best manufacturing techniques would be meaningless without being complemented by the best measurement systems and procedures needed to obtain accurate measurement results. In the present industrial scenario, the measurement process has become increasingly tedious and subject to many constraints.
Traceability (VIM 6.10), a requirement for quality standard certification, is the property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties. In order to establish traceability, measurement results must be accompanied by a statement of uncertainty. The theory and computation of measurement uncertainty require a certain degree of knowledge and expertise, so in most cases industry follows a much simpler procedure, gage repeatability and reproducibility (Gage R&R), to represent the total variation in the measurement system. The need for simple tools for the estimation of Gage R&R and measurement uncertainty has motivated us to build an internet-based software system for the estimation of Gage R&R and measurement uncertainty.
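As a minimal illustration of the Gage R&R decomposition referred to above (not of the authors' internet-based system itself), the following Python sketch separates measurement variation into repeatability and reproducibility components for a small crossed study; the data and variable names are hypothetical, and the operator-part interaction is ignored for brevity.

```python
import numpy as np

# Hypothetical Gage R&R study data: shape (operators, parts, repeats).
data = np.array([
    [[2.48, 2.51, 2.49], [2.60, 2.62, 2.61], [2.39, 2.41, 2.40]],  # operator A
    [[2.50, 2.52, 2.51], [2.63, 2.61, 2.62], [2.42, 2.40, 2.41]],  # operator B
])
n_ops, n_parts, n_rep = data.shape

# Repeatability (equipment variation): pooled within-cell variance.
var_repeat = data.var(axis=2, ddof=1).mean()

# Reproducibility (operator variation): variance of operator means, corrected
# for the repeatability contribution (simple ANOVA estimate, no interaction term).
op_means = data.mean(axis=(1, 2))
var_reprod = max(op_means.var(ddof=1) - var_repeat / (n_parts * n_rep), 0.0)

grr = np.sqrt(var_repeat + var_reprod)  # combined gage R&R (standard deviation)
print(f"repeatability   u_EV  = {np.sqrt(var_repeat):.4f}")
print(f"reproducibility u_AV  = {np.sqrt(var_reprod):.4f}")
print(f"gage R&R        u_GRR = {grr:.4f}")
```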

Alexander I. Zaiko, Natalia A. Zaiko
ACCURACY OF STATISTIC AND SPECTRAL MEASUREMENTS

The paper investigates the influence of measurement result errors, the algorithms used to recover signal values between measurements, the discretization period, the number of samples, the signal observation interval, and other factors on the procedures for the statistical treatment of measurement results. Applying the obtained procedures for measurement precision assessment to knowledge base creation makes it possible to increase the efficiency of intelligent measurements.

Giovanni B. Rossi, Francesco Crenna, Michele Codda
COMPUTER AIDED EVALUATION OF MEASUREMENT UNCERTAINTY BY CONVOLUTION OF PROBABILITY DISTRIBUTIONS

The evaluation of measurement uncertainty, from the final user's standpoint, involves several issues which are only partially addressed by the GUM. An evaluation procedure assisted by a computer program can be of great support and provide the user with detailed results useful for further analysis. In this paper a code is presented for the assisted evaluation of measurement uncertainty, based upon the convolution of probability distributions. The method thus allows the final result of measurement to be expressed as a probability distribution, from which useful parameters such as the expanded uncertainty can be evaluated at any confidence level. It also permits evaluation, in probabilistic terms, of the risk associated with tolerance matching. Moreover, the uncertainty of dispersion parameters (sometimes called 'uncertainty of uncertainty') can be handled by considering a modified probability distribution. The code is presented together with some case studies showing the support available to the operator, the compatibility with the GUM, and the practical importance of the final probability distribution.
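A minimal sketch of the convolution approach described in the abstract, assuming a simple additive model Y = X1 + X2 with a Gaussian and a rectangular input; the grid, distributions and coverage level are illustrative and not taken from the paper's code.

```python
import numpy as np

dx = 1e-3
x = np.arange(-1.0, 1.0, dx)

pdf1 = np.exp(-0.5 * (x / 0.10) ** 2) / (0.10 * np.sqrt(2.0 * np.pi))  # Gaussian, u = 0.10
pdf2 = np.where(np.abs(x) <= 0.20, 1.0 / 0.40, 0.0)                    # rectangular, half-width 0.20

pdf_y = np.convolve(pdf1, pdf2) * dx           # density of Y = X1 + X2
y = 2.0 * x[0] + np.arange(pdf_y.size) * dx    # output grid of the convolution
pdf_y /= pdf_y.sum() * dx                      # renormalise against discretisation error

# Coverage interval at 95 % from the cumulative distribution.
cdf = np.cumsum(pdf_y) * dx
lo = y[np.searchsorted(cdf, 0.025)]
hi = y[np.searchsorted(cdf, 0.975)]
print(f"95 % coverage interval for Y: [{lo:.3f}, {hi:.3f}]")
```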

Tanasko Tasic, Roman Flegar
VALIDATION OF CUSTOM DEVELOPED SOFTWARE IN METROLOGY APPLICATIONS

Requirements for the validation of custom developed software in metrology applications are mainly determined by the technical realisation of the measuring instrument or system. In this article some issues related to the technical realisation of validated software are described. Furthermore, state-of-the-art validation approaches and tools are presented, including failure risk analysis and test case selection. A worked example presents the validation approach for the MIRS Mass Laboratory automation software. A synthesis of the presented methods, tools and techniques may serve as guidance for the validation of similar software applications.
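As a hedged illustration of reference-value-based test cases of the kind mentioned above (the function, data and acceptance limits are hypothetical and not part of the MIRS software), a table-driven check might look like this:

```python
import math

def conventional_mass(reading, reference_mass, sensitivity):
    """Hypothetical routine under validation: mass from a comparator reading."""
    return reference_mass + reading * sensitivity

# Each case: (inputs, independently computed expected output, acceptance limit).
test_cases = [
    ((0.00012, 1000.0, 1.0), 1000.00012, 1e-7),
    ((-0.00030, 500.0, 1.0), 499.99970, 1e-7),
]

for args, expected, tol in test_cases:
    result = conventional_mass(*args)
    assert math.isclose(result, expected, abs_tol=tol), (args, result, expected)
print("all validation test cases passed")
```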

J. Song, T. Vorburger, R. Clary, L. Ma, E. Whitenton, M. Ols
NIST REFERENCE MATERIAL (RM) 8240/8250 PROJECT – STANDARD BULLETS AND CASINGS

Standard bullets and casings are currently under development to support the National Integrated Ballistics Information Network (NIBIN) in the U.S. Based on a numerically controlled diamond turning technique, 20 RM 8240 standard bullets were fabricated in 2002. Test results show high repeatability and reproducibility for the bullet signatures on these RM bullets. Prototype standard casings were also manufactured using an electro-forming technique, and are currently under test. These RM bullets and casings are intended for measurement traceability and quality control for ballistics laboratories nationwide.

Jure Vindišar, Andrej Smrecnik, Ivan Bajsic
METROLOGICAL HISTORY ANALYSES OF PRESSURE STANDARDS FOR DETERMINATION OF THEIR METROLOGICAL CAPABILITIES

Knowledge of the metrological history of pressure standards is of great importance for determining their metrological capabilities. When providing traceability to lower metrological levels it is essential to determine the uncertainty of the generated reference pressure. This uncertainty can be estimated only by analysing the metrological history of the selected standard and by estimating all parameters that influence the generated reference pressure. Analyses of selected LMPS (Laboratory of Measurements in Process Engineering) pressure standards (mechanical and electromechanical manometers only) show non-negligible deviations of the estimated measurement capabilities from the declared accuracy of the selected standards. These deviations affect the best metrological capabilities and therefore require due attention.
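A minimal sketch of how a standard's calibration history might be turned into an uncertainty contribution for the generated reference pressure, as discussed above; the calibration data, drift model and numbers are hypothetical and do not reproduce the LMPS analyses.

```python
import numpy as np

years  = np.array([2000.0, 2001.0, 2002.0, 2003.0])
errors = np.array([0.010, 0.014, 0.019, 0.025])   # kPa, from calibration certificates
u_cal  = 0.008                                    # kPa, calibration uncertainty (k = 1)

# Linear drift fitted to the calibration history.
drift_per_year, intercept = np.polyfit(years, errors, 1)
predicted_error = drift_per_year * 2004.0 + intercept   # extrapolated correction

# Residual scatter around the drift line treated as an additional uncertainty.
u_drift = np.std(errors - (drift_per_year * years + intercept), ddof=2)
u_ref = np.hypot(u_cal, u_drift)                        # combined standard uncertainty

print(f"drift: {drift_per_year*1000:.2f} Pa/year, "
      f"extrapolated error for 2004: {predicted_error*1000:.1f} Pa, "
      f"u(reference pressure): {u_ref*1000:.1f} Pa")
```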

Renata Styblíková, Karel Draxler
CALIBRATION OF INSTRUMENT CURRENT TRANSFORMERS WITH ATYPICAL SECONDARY CURRENT

Methods enabling the determination of the errors eI and dI when there is a large difference in transformation ratio between the standard and the tested instrument current transformer (ICT) are described in this article. A widespread method uses an automatic transformer test set to measure the difference between the standard and the tested ICT. A method transforming the secondary currents into voltages and an indirect method deriving the errors from the magnetizing current are also described.
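For orientation, the errors eI and dI are interpreted below as the usual ratio error and phase displacement of a current transformer; the sketch computes them from phasor values of the primary and secondary currents, with purely illustrative numbers and a hypothetical rated ratio.

```python
import numpy as np

K_n = 1000.0 / 5.0                        # rated transformation ratio (hypothetical)
I_p = 1000.0 * np.exp(1j * 0.0)           # primary current phasor, A
I_s = 4.998 * np.exp(1j * 0.0006)         # secondary current phasor, A

ratio_error = (K_n * abs(I_s) - abs(I_p)) / abs(I_p)     # usually quoted in %
phase_displacement = np.angle(I_s) - np.angle(I_p)       # rad; usually in minutes of arc

print(f"ratio error eI = {ratio_error*100:.3f} %")
print(f"phase displacement dI = {phase_displacement*60*180/np.pi:.2f} arcmin")
```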

Klaus-Dieter Sommer, Manfred Kochsiek, Bernd Siebert, Albert Weckenmann
A GENERALIZED PROCEDURE FOR MODELLING OF MEASUREMENTS FOR EVALUATING THE MEASUREMENT UNCERTAINTY

The modelling of the measurement is a key element of the evaluation of measurement uncertainty in accordance with the basic concept of the Guide to the Expression of Uncertainty in Measurement (GUM). The model equation expresses the relationship between the measurand and all relevant input quantities contributing to the measurement result. It serves as the basis for the propagation of the probability distributions of the input quantities or, in the case of (almost) linear systems, for Gaussian propagation of the related standard uncertainty contributions. A practical and highly versatile modelling concept has been developed. It is based on both the idea of the classical measuring chain and the measurement method used, and therefore manages with only a few generic structures. The concept has led to a modelling procedure structured into five elementary steps, employing only three types of modelling components. It holds for most kinds of measurements performed in the steady state.
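A minimal sketch of the Gaussian propagation mentioned above (the GUM law of propagation of uncertainty) for an almost linear model; the model equation, estimates and uncertainties are hypothetical, and the sensitivity coefficients are obtained numerically.

```python
import numpy as np

def model(x):
    """Hypothetical model equation: length corrected for thermal expansion."""
    length_20, alpha, delta_t = x
    return length_20 * (1.0 + alpha * delta_t)

x_best = np.array([100.000, 11.5e-6, 0.5])   # best estimates of the input quantities
u_x    = np.array([0.002, 1.0e-6, 0.1])      # standard uncertainties u(x_i)

# Sensitivity coefficients c_i = df/dx_i by central finite differences.
c = np.empty_like(x_best)
for i, h in enumerate(1e-6 * np.maximum(np.abs(x_best), 1.0)):
    xp, xm = x_best.copy(), x_best.copy()
    xp[i] += h
    xm[i] -= h
    c[i] = (model(xp) - model(xm)) / (2 * h)

# Combined standard uncertainty: u_c(y)^2 = sum_i (c_i * u(x_i))^2.
u_c = np.sqrt(np.sum((c * u_x) ** 2))
print(f"y = {model(x_best):.6f}, u_c(y) = {u_c:.6f}")
```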

G. Iuculano, A. Lazzari, G. Pellegrini, A. Zanobini
THE EVALUATION OF MEASUREMENT UNCERTAINTY AND THE PRINCIPLE OF MINIMUM JOINT CROSS-ENTROPY

A measurement process represents a controlled learning process in which various aspects of uncertainty analysis are investigated.
A measurement process is performed if the information it supplies is likely to be considerably more accurate, stable and reliable than the pool of information already available.
The gain in information obtained once the measurement process is performed, relative to the conditions prior to it, can be connected to Kullback's principle of minimum cross-entropy.
This principle, as is well known, is a correct method of inductive inference when no sufficient knowledge about the statistical distributions of the random variables involved is available before the measurement process is carried out, beyond the permitted ranges, the essential model relationships, and some constraints gained from past experience, usually expressed as expectations of given functions or as bounds on them.
In this paper the authors point out the connection between the evaluation of uncertainty in a repeated-measurement process and Kullback's principle of minimum cross-entropy.
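For reference, the minimum cross-entropy principle invoked above can be stated as follows; the notation is generic and not taken from the paper.

```latex
% Minimum cross-entropy (Kullback) principle, generic statement.
% Given a prior density q(x) and constraints on expectations, choose the
% posterior density p(x) that minimises the directed divergence
\[
  D(p \,\|\, q) = \int p(x)\,\ln\frac{p(x)}{q(x)}\,\mathrm{d}x ,
\]
% subject to \(\int p(x)\,\mathrm{d}x = 1\) and
% \(\int f_k(x)\,p(x)\,\mathrm{d}x = F_k\).  The minimiser has the form
\[
  p(x) = q(x)\,\exp\!\Bigl(\lambda_0 + \sum_{k} \lambda_k f_k(x)\Bigr),
\]
% with the multipliers \(\lambda_0, \lambda_k\) fixed by the constraints.
```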

Anna Chunovkina, Maurice Cox
A MODEL-BASED APPROACH TO KEY COMPARISON DATA EVALUATION

The evaluation of key comparison data is discussed, the general case of correlated data being considered. Particular attention is paid to a simplified procedure for data evaluation, founded on a mixture of distributions associated with the results from the institutes participating in the comparison. The suggested approach uses the model of an interlaboratory experiment from ISO 5725 and uncertainty evaluation in accordance with the Guide to the Expression of Uncertainty in Measurement.
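A minimal numerical sketch of the two ingredients mentioned in the abstract, a mixture of the institutes' distributions and a conventional weighted-mean reference value for comparison; the results and uncertainties are hypothetical, and the paper's detailed procedure may differ.

```python
import numpy as np

x = np.array([10.003, 10.001, 10.006, 9.998])   # institutes' results (hypothetical)
u = np.array([0.002, 0.003, 0.004, 0.002])      # their standard uncertainties

# Equally weighted mixture of the Gaussians N(x_i, u_i^2).
mix_mean = x.mean()
mix_var = np.mean(u**2 + x**2) - mix_mean**2    # E[X^2] - (E[X])^2 for the mixture
print(f"mixture mean = {mix_mean:.4f}, mixture std = {np.sqrt(mix_var):.4f}")

# Classical weighted-mean reference value (GUM-type evaluation) for comparison.
w = 1.0 / u**2
kcrv = np.sum(w * x) / np.sum(w)
u_kcrv = 1.0 / np.sqrt(np.sum(w))
print(f"weighted mean = {kcrv:.4f}, u = {u_kcrv:.4f}")
```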
