MIL-STD-2165
26 JANUARY 1985
MILITARY STANDARD

TESTABILITY PROGRAM
FOR ELECTRONIC SYSTEMS
AND EQUIPMENTS

AMSC No. N3423
DEPARTMENT OF DEFENSE
WASHINGTON, DC 20301
MIL-STD-2165
FOREWORD
1. Testability addresses the extent to which a system or unit supports fault detection
and fault isolation in a confident, timely and cost-effective manner. The incorporation
of adequate testability, including built-in test (BIT), requires early and systematic
management attention to testability requirements, design and measurement.

2. ... tasks.

3. ... measure of testability early in the design phase.
MIL-STD-2165

CONTENTS
Paragraph 1. SCOPE
1.1 Purpose
1.2 Application
1.3 Tailoring of tasks

2. REFERENCED DOCUMENTS
2.1 Issues of documents

3. DEFINITIONS AND ACRONYMS

4. GENERAL REQUIREMENTS
4.1 Scope of testability program
4.2 Testability program requirements
4.3 Application of requirements

5. DETAILED REQUIREMENTS
5.1 Task descriptions
5.2 Task integration

6. NOTES
6.1 Data requirements

TASK SECTIONS

Task 100. PROGRAM MONITORING AND CONTROL
101. Testability program planning
102. Testability reviews
103. Testability data collection and analysis planning

200. DESIGN AND ANALYSIS
201. Testability requirements
202. Testability preliminary design and analysis
203. Testability detail design and analysis
MIL-STD-2165
1. SCOPE

1.1 Purpose. This standard provides uniform procedures and methods for establishing
a testability program, for assessing testability in designs and for integration of testability
into the acquisition process for electronic systems and equipments.
2. REFERENCED DOCUMENTS

2.1 Issues of documents. The following documents, of the issue in effect on the date
of invitation for bids or request for proposal, form a part of this standard to the extent
specified herein.

STANDARDS

MILITARY

MIL-STD-470          Maintainability Program for Systems and Equipment
3. DEFINITIONS AND ACRONYMS
4. GENERAL REQUIREMENTS

4.1 Scope of testability program. This standard is intended to impose and facilitate
interdisciplinary efforts required to develop testable systems and equipments. The
testability program scope includes:
c. Support of and integration with design engineering requirements, including
the hierarchical development of testability designs from the piece part to the
system.
4.2 Testability program requirements. A testability program shall be established which
accomplishes the following general requirements:

a. Preparation of a Testability Program Plan

b. Establishment of sufficient, achievable, and affordable testability, built-in
and off-line test requirements

c. Integration of testability into equipments and systems during the design
process in coordination with the maintainability design process
d. Evaluation of the extent to which the design meets testability requirements
e. Inclusion of testability in the program review process.
5. DETAILED REQUIREMENTS

5.1 Task descriptions. Individual task requirements are provided for the establishment
of a testability program for electronic system and equipment acquisition. The tasks are
categorized as follows:

6. NOTES

6.1 Data requirements. When this standard is used in an acquisition, the data
identified below shall be deliverable only when specified on the DD Form 1423 Contract
Data Requirements List (CDRL). When the DD Form 1423 is not used and Defense
Acquisition Regulation 7-104.9(n)(2) is cited, the data identified below shall be delivered
in accordance with requirements specified in the contract or purchase order.
Deliverable data associated with the requirements of this standard are cited in the
following tasks:
Review:

User:
MIL-STD-2165
TASK 101
TESTABILITY PROGRAM PLANNING
101.1 PURPOSE. To plan for a testability program which will identify and
integrate all testability design management tasks required to accomplish program
requirements.
101.2.1 Identify a single organizational element within the performing activity which
has overall responsibility and authority for implementation of the testability program.
Establish analyses and data interfaces between the organizational element responsible
for testability and other related elements.
101.3.2 Identification of the time period over which each task is to be conducted.*
MIL-STD-2165

TASK 102

TESTABILITY REVIEWS
102.2.1 Include the formal review and assessment of the testability program as an
integral part of each system program review (e.g., system design review, preliminary
design review, critical design review, etc.) specified by the contract. Reviews shall
cover all pertinent aspects of the testability program such as:

a. Status and results of testability-related tasks.

b. Documentation of task results in the testability analysis report.

c. Testability-related requirements in specifications.
d. Testability design, cost or schedule problems.
102.2.2 Conduct and document testability design reviews with performing activity
personnel and with subcontractors and suppliers. Coordinate and conduct testability
reviews, in conjunction with reliability, maintainability and logistic support reviews
whenever possible. Inform the requiring authority in advance of each review. Design
reviews shall cover all pertinent aspects of the design such as the following:
d. Review the testability techniques employed by the design groups.
Identify testability design guides or procedures used. Describe any
testability analysis procedures or automated tools to be used.

e. Review the extent to which testability criteria are being met. Identify
any technical limitations or cost considerations inhibiting full
implementation.

f. Review adequacy of Failure Modes and Effects Analysis (FMEA) data as
a basis for test design. Assess adequacy of testability/FMEA data
interface.
g. Review coordination between BIT hardware and BIT software efforts.

h. Review BIT interface to operator and maintenance personnel.

i. Review BIT fault detection and fault isolation measures to be used.
Identify models used and model assumptions. Identify any methods to
be used for automatic test generation and test grading.
MIL-STD-2165

TASK 103

TESTABILITY DATA COLLECTION AND ANALYSIS PLANNING
103.2.1 Develop a plan for the analysis of production test results to determine if BIT
hardware and software, ATE hardware and software, and maintenance documentation are
meeting specifications in terms of fault detection, fault resolution, fault detection times
and fault isolation times.
103.2.2 Develop a plan for the analysis of maintenance actions for the fielded system
to determine if BIT hardware and software, ATE hardware and software, and
maintenance documentation are meeting specifications in terms of fault detection, fault
resolution, false indications, fault detection times and fault isolation times.
103.2.3 Define data collection requirements to meet the needs of the testability
analysis. The data collected shall include a description of relevant operational anomalies
and maintenance actions. Data collection shall be integrated with similar data
collection procedures, such as those for reliability, maintainability, and Logistic
Support Analysis, and shall be compatible with specified data systems in use by the
military user organization.

103.3.3 Relationship of Task 103 to Task 104 of MIL-STD-785 and Task 104 of
MIL-STD-470.*
103.4 TASK OUTPUT

103.4.1 Testability data collection and analysis plan for production test; documented
in accordance with DI-R-7105. (103.2.1)

103.4.2 Testability data collection and analysis plan for analyzing maintenance
actions on fielded systems; documented in accordance with DI-R-7105. (103.2.2 and
103.2.3)
MIL-STD-2165

TASK 201

TESTABILITY REQUIREMENTS
201.1 PURPOSE. To (1) recommend system test and testability requirements which
best achieve availability and supportability requirements and (2) allocate those
requirements to subsystems and items.
201.2.2 Establish performance monitoring, built-in test and off-line test objectives
for the new system at the system and subsystem levels. Identify the risks and
uncertainties involved in achieving the objectives established.
201.2.3 Establish BIT, test equipment and testability constraints for the new system,
such as limitations on additional hardware for BIT, for inclusion in system specifications
or other requirement documents. These constraints shall include both quantitative and
qualitative constraints.
201.2.4 Evaluate alternative diagnostic concepts to include varying degrees of BIT,
manual and off-line automatic testing, diagnostic test points, etc., and identify the
selected diagnostic concept. The evaluation shall include:

d. An estimation of risk associated with each concept.
201.2.5 Establish BIT performance requirements at the system and subsystem level.
These requirements include specific numeric performance requirements imposed by the
requiring authority. Other requirements shall be based, in part, on:
a. Maximum allowable time between the occurrence of a failure condition
and the detection of the failure for each mission function.
MIL-STD-2165

TASK 202

TESTABILITY PRELIMINARY DESIGN AND ANALYSIS

202.1 PURPOSE.

... design practices ...

*To be specified by the requiring authority.
MIL-STD-2165

TASK 203

TESTABILITY DETAIL DESIGN AND ANALYSIS

203.2.2 Analyze that all critical functions of the prime equipment are exercised by
testing to the extent specified. The performing activity shall conduct functional test
analysis for each configuration item (CI) and for each physical partition of the CI
designated as a UUT.

203.2.3 Conduct an analysis of the test effectiveness of BIT and off-line test.
Identify the failures of each component and the failures between components which
correspond to the specified failure modes for each item to be tested. These failures
represent the predicted failure population and are the basis for test derivation (BIT and
off-line test) and test effectiveness evaluation. Maximum use shall be made of a failure
modes and effects analysis (FMEA), from Task 204 of MIL-STD-470A, if a FMEA is
required. The FMEA requirements may have to be modified or supplemented to provide
the level of detail needed.
b. Model components and interconnections for each item such that the
predicted failure population may be accurately modeled. The performing activity shall
develop or select models which are optimum considering accuracy required, cost of test
generation and simulation, standardization and commonality.

203.2.5 Develop system-level BIT hardware and software, integrating the built-in test
capabilities of each subsystem/item.

203.2.6 Predict the level of BIT fault detection for the overall system based upon the
BIT detection predictions, weighted by failure rate, of the individual items, including
GFE. Predict the level of fault isolation for the overall system through system-level
test. Predict the probability of BIT false alarms for the overall system.
203.2.7 Assemble cost data associated with BIT and design for testability on a per
unit basis (e.g., additional hardware, increased modularity, additional connector pins,
etc.). Extract and summarize cost data associated with the implementation of the
testability program, test generation efforts and production test. Provide test
effectiveness predictions as inputs to availability and life cycle cost analyses.

203.2.9 Incorporate changes and corrections into testability models, test generation
software, etc., which reflect an improved understanding of operations and failure modes
as the design progresses. Use updated models, software, etc., to update test
effectiveness predictions as necessary.
203.3.4 Identification of failure modes and effects and failure rates for each item
from Task 204 of MIL-STD-470A.
203.4.2 Description of built-in test and testability features for each item designated
as a Unit Under Test; documented in appropriate Test Requirements Document.
(203.2.1)
203.4.3 Test effectiveness prediction for each item; data provided in support of Task
205 of MIL-STD-470A and Task 401 of MIL-STD-1388-1A and documented in accordance
with DI-T-7199. (203.2.2, 203.2.3, 203.2.7 and 203.2.9)

203.4.4 System test effectiveness prediction; data provided in support of Task 205 of
MIL-STD-470A and documented in accordance with DI-T-7199. (203.2.6, 203.2.7 and
203.2.9)
MIL-STD-2165

TASK 301

TESTABILITY INPUTS TO MAINTAINABILITY DEMONSTRATION

301.1
d. The ability of the test equipment and associated TPSs to detect and
isolate failures.

f. The correlation of BIT fault detection and fault isolation indications
with off-line test results.

g. The validity of models used to predict testability parameters.
301.2.2 Develop plans for the demonstration of testability parameters and integrate
into the plans and procedures for maintainability demonstration.
301.2.3 Conduct additional demonstrations, as needed, using the methods and criteria
of MIL-STD-471 and MIL-STD-2077 as appropriate, to obtain sufficient testability data
for evaluation and document as a portion of the testability demonstration results. The
demonstrations are to be combined with other demonstrations whenever practical.
301.4.1 Testability demonstration plan; documented in accordance with DI-R-7112.
(301.2.2)

*To be specified by the requiring authority.
MIL-STD-2165
APPENDIX A
TESTABILITY PROGRAM APPLICATION GUIDANCE

CONTENTS

Paragraph 10. SCOPE
10.1 Purpose

20. REFERENCED DOCUMENTS
20.1 Issues of documents

30. DEFINITIONS
30.1 Definitions
APPENDIX A

CONTENTS (continued)
50.6.2 D&V system designs
50.6.3 Test design tradeoffs
50.6.4 General testability issues
50.6.5 UUT and ATE compatibility
50.6.6 Built-in test
50.6.7 BIT software
50.6.8 System-level built-in test
50.6.9 Application of testability measures
50.6.10 Qualitative inherent testability evaluation
50.6.11 Inherent testability assessment
50.6.11.1 Preliminary design activities
50.6.11.2 Checklist scoring
50.6.11.3 Threshold determination
50.7 Task 203 - Testability detail design and analysis
50.7.1 Testability design techniques
50.7.2 Inherent testability assessment
50.7.3 Test effectiveness measures
50.7.3.1 Fault coverage
50.7.3.2 Fault resolution
50.7.3.3 Fault detection time
50.7.3.4 Fault isolation time
APPENDIX A

CONTENTS (continued)

ILLUSTRATIONS

TABLES
MIL-STD-2165

APPENDIX A

TESTABILITY PROGRAM APPLICATION GUIDANCE

10. SCOPE

10.1 Purpose. This appendix provides rationale and guidance for the selection and
tailoring of tasks to define a testability program which meets established program
objectives. No contractual requirements are contained in this Appendix.
20. REFERENCED DOCUMENTS

20.1 Issues of documents. The following documents form a part of this Appendix
for guidance purposes.

STANDARDS

MILITARY

MIL-STD-781          Reliability Design Qualification and Product
                     Acceptance Tests: Exponential Distribution

MIL-STD-1345 (Navy)  Test Requirements Documents, Preparation of

MIL-STD-1519 (USAF)  Test Requirements Documents, Preparation of

MIL-STD-2076         UUT Compatibility with ATE, General Require-
                     ment for
PUBLICATIONS
JOINT SERVICE
NAVMATP 9405 Joint Service Built-in Test Design Guide
DARCOM 34-1
AFLCP 800-39
AFSCP 800-39
NAVMC 2721
19 March 1981
Naval Fleet Analysis Center     Joint Service Electronic Design for Testability
TM 824-1628                     Course Notes
1 October 1983

TECHNICAL REPORTS

Air Force Rome Air              BIT/External Test Figures of Merit and
Development Center              Demonstration Techniques
RADC-TR-79-309
December 1979
DIRECTIVE
DEPARTMENT OF DEFENSE
DOD Directive 5000.39
40. GENERAL APPLICATION GUIDANCE

40.1 Task selection criteria. The selection of tasks which can materially aid the
attainment of testability requirements is a difficult problem for both government and
industry organizations faced with severe funding and schedule constraints. This
Appendix provides guidance for the selection of tasks based upon identified program
needs. Once appropriate testability program tasks have been selected, each task must be
tailored in terms of timing, comprehensiveness and end products to meet the overall
program requirements.
40.2 Testability program in perspective. The planned testability program must be
an integral part of the systems engineering process and serve as an important link
between design and logistic support in accordance with DOD Directive 5000.39. The
tasks which influence and are influenced by the testability program are extracted from
DOD Directive 5000.39 in the following paragraphs.
a. Concept exploration phase

1. Identify manpower, logistic, reliability, maintainability and
testability parameters critical to system readiness and support costs.

2. Estimate what is achievable for each parameter.

b. Demonstration and validation (D&V) phase

1. Conduct tradeoffs among system design characteristics and
support concepts.

c. FSD phase

a. Correct deficiencies.

40.3 System testability program (Figure 1). For major systems, the testability
tasks for each program phase are summarized in Table I and listed below.
TABLE I. Testability tasks for each program phase.

                                                     Program phase
    Task                                       CON    D&V    FSD    P/D

    101  Testability program planning          NA     G      G      NA
    102  Testability reviews                   G1     G      G      S
    103  Testability data collection
         and analysis planning                 NA     S      G      G
    201  Testability requirements              G1     G      G      NA
    202  Testability preliminary
         design and analysis                   NA     S      G      S
40.4 Item testability program (Figure 2). For all items, whether developed as a
subsystem under a system acquisition program or developed under an equipment
acquisition program, the testability tasks are listed below.
a. Preliminary design
b. Detail design
40.6 Equipment testability program (Figure 3). For the acquisition of less-than-
major systems or individual equipments, the testability tasks are listed below.

40.7 Iteration. Certain tasks contained in this standard are highly iterative in
nature and recur at various times during the acquisition cycle, proceeding to lower levels
of hardware indenture and greater detail in the classical systems engineering manner.
50. DETAILED APPLICATION GUIDANCE
50.1.2 Submission of plan. When requiring a testability program plan, the requiring
authority should allow the performing activity to propose specifically tailored tasks with
supporting rationale to show overall program benefits. The testability program plan
should be a dynamic document that reflects current program status and planned actions.
Accordingly, procedures must be established for updates and approval of updates by the
requiring authority when conditions warrant. Program schedule changes, test results, or
testability task results may dictate a change in the testability program plan in order for
it to be used effectively as a management document.
50.1.3 Plan for D&V phase. When submitted at the beginning of a D&V phase, the
testability program plan should highlight the methodology to be used in establishing
qualitative and quantitative testability requirements for the system specification. The
plan should also describe the methodology to be used in allocating quantitative system
testability requirements down to the subsystem or configuration item level. The nature
of the D&V phase will vary considerably from program to program, ranging from a
"firming up" of preliminary requirements to a multi-contractor "fly off" of competing
alternatives. In all cases, sufficient data must be furnished to the Government to permit
a meaningful evaluation of testing and testability alternatives. The testability program
plan should indicate how the flow of information is to be accomplished: through informal
customer reviews, through CDRL data submissions, and through testability reviews as an
integral part of SDR.
50.2.2 Utilization. The testability analysis report should be used by the performing
activity to disseminate all aspects of the testability design status to the various
organizational elements. As such, the testability analysis report should be considered to
be a dynamic document, containing the latest available design information and issued
under an appropriate degree of configuration control. As a minimum, the testability
analysis report should accurately reflect the latest design data when informal testability
design reviews are held.
50.2.3 TRD interface. The testability analysis performed during the FSD phase and
documented in the testability analysis report should be used as a partial basis for the
TRD for each UUT. The TRD, developed in accordance with MIL-STD-1519 or MIL-STD-
1345, constitutes the formal interface between the activity responsible for detailed
hardware design and the activity responsible for TPS development. This document serves
as a single source of all performance verification and diagnostic procedures, and for all
equipment requirements to support each UUT in its maintenance environment, whether
supported manually or by ATE. The TRD also provides detailed configuration
identification for UUT design and test requirements data to ensure compatible test
programs.

Note: Each submission of the Testability Analysis Report should be required by the
CDRL to be delivered sufficiently in advance of each review such that the requiring
authority may review the material.
50.3.2 Additional data review. In addition to formal reviews, useful information can
often be gained from performing activity data which is not submitted formally, but
which can be made available through an accession list. A data item for this list must be
included in the CDRL. This list is a compilation of documents and data which the
requiring authority can order, or which can be reviewed at the performing activity's
facility.
task is to plan for the evaluation of the impact of actual operational and maintenance
environments on the ability of production equipment to be tested. The effectiveness of
testability design techniques for intermediate or depot level maintenance tasks is
monitored and analyzed as part of this evaluation. Much of the actual collection and
analysis of data and resulting corrective actions may occur beyond the end of the
contract under which the testability program is implemented and may be accomplished by
personnel other than those of the performing activity. Still, it is essential that the
planning for this task be initiated in the FSD phase, preferably by the critical design
review.
50.4.2 Data collection and analysis plans. Separate plans should be prepared for
testability data collection and analysis during (1) production phase (Subtask 103.2.1) and
(2) deployment phase (Subtask 103.2.2). The plans should clearly delineate which analysis
data are to be reported in various documents, such as T&E reports, production test
reports, factory acceptance test reports, etc.
50.4.3 Test maturation. Most test implementations, no matter how well conceived,
require a period of time for identification of problems and corrective action to reach
specified performance levels. This "maturing" process applies equally to BIT and off-line
test. This is especially true in setting test tolerances for BIT and off-line test used to
test analog parameters. The setting of test tolerances to achieve an optimum balance
between failure detection and false alarms usually requires the logging of considerable
test time. It should be emphasized, however, that the necessity for "fine-tuning" a test
system during production and deployment in no way diminishes the requirement to
provide a "best possible" design during FSD. One way of accelerating the test
maturation process is to utilize planned field or depot testers for portions of acceptance
test. BIT test hardware and software should be exercised for those failures discovered
and the BIT effectiveness documented and assessed.
50.4.4 Operational test and evaluation. The suitability of BIT should be assessed as
an integral part of operational test and evaluation. A closed-loop data tracking system
should be implemented to track initial failure occurrences, organizational-level
corrective actions, subsequent higher-level maintenance actions, and subsequent
utilization and performance of repaired and returned items. The data collection must be
integrated as much as possible with similar data collection requirements such as those
for tracking reliability and maintainability. The data tracking system must collect
sufficient data to support the analysis of 50.4.4.1 through 50.4.4.3. All maintenance
actions are first reviewed to determine if the failed item is relevant to BIT or off-line
test. For example, items with loose bolts are not relevant to testability analysis. If at
some point in the data tracking, an actual failure is found, the analysis for confirmed
failures (50.4.4.1 and 50.4.4.2) is applied. If an actual failure is not found, the analysis
for non-confirmed failures (50.4.4.3) is applied.
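A minimal sketch of the kind of record such a closed-loop tracking system might keep, and of
how each record is routed to the confirmed or non-confirmed failure analysis, is shown below.
The field names, maintenance levels and routing rule are illustrative assumptions, not
requirements of this standard.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MaintenanceAction:
    """One maintenance event in the closed-loop tracking chain (illustrative)."""
    level: str                               # e.g., "organizational", "intermediate", "depot"
    action_taken: str                        # e.g., "removed and replaced SRA 2A3"
    fault_confirmed: Optional[bool] = None   # None = not yet determined

@dataclass
class BitIndicationRecord:
    """Tracks one initial BIT indication through subsequent maintenance."""
    system_id: str
    bit_indication: str                      # BIT alarm or fault code reported
    relevant_to_testability: bool = True     # e.g., loose bolts -> False
    actions: List[MaintenanceAction] = field(default_factory=list)

    def classification(self) -> str:
        """Route the record to the analysis of 50.4.4.1/.2 or 50.4.4.3."""
        if not self.relevant_to_testability:
            return "not relevant"
        if any(a.fault_confirmed for a in self.actions):
            return "confirmed failure"       # analyze per 50.4.4.1 and 50.4.4.2
        if self.actions and all(a.fault_confirmed is False for a in self.actions):
            return "unconfirmed failure"     # analyze per 50.4.4.3 (cannot duplicate)
        return "open"                        # tracking not yet complete

# Example: a BIT alarm that was never duplicated at any maintenance level.
record = BitIndicationRecord("radar-01", "BIT code 214")
record.actions.append(MaintenanceAction("organizational", "reran BIT, no fault", False))
record.actions.append(MaintenanceAction("intermediate", "ATE test passed", False))
print(record.classification())               # -> "unconfirmed failure"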
50.4.4.1 Confirmed failure, BIT. For each confirmed failure, data on BIT
effectiveness are analyzed:
50.4.4.2 Confirmed failure, off-line test. For each confirmed failure, data on off-line
test compatibility are analyzed:

b. Did the ATE system provide failure detection results consistent with
those of the initial detection by BIT?

c. Did the UUT design inhibit the ATE system from providing accurate
fault isolation data?

50.4.4.3 Unconfirmed failure, BIT. For each unconfirmed failure situation (cannot
duplicate) resulting from a BIT indication or alarm, the following data are analyzed:

e. What are the operational costs of responding to the false alarm (in
terms of aborted missions, degraded mode operation, system
downtime)?

f. What are the maintenance costs associated with the false alarm?
50.4.5 Corrective action. The data on BIT effectiveness and off-line test
compatibility are summarized and corrective action, if needed, is proposed by the
performing activity or user activity. Those corrective actions dealing with redesign of
the prime system are submitted for review and implementation as part of the established
engineering change process.
during the concept exploration phase. However, a preliminary estimate of the critical
testability parameters should be made to ascertain if the required system availability
and maintenance and logistic support concepts can be supported using testability
parameters demonstrated as achievable on similar systems. The diagnostic concept
usually evolves as follows:
As more detailed design data becomes avaifable, usually during the D&V phase, the
diagnostic concept further evolves, making extensive use of readiness and life cycle cost
models:
Note: The sum of the requirements for BIT, off-line automatic test, semi-automatic test
and manual test must always provide for a complete (100%) maintenance capability at
each maintenance level.
50.5.6 Testability requirements. Prior to the Full Scale Development phase, firm
testability requirements are established (Subtask 201.2.5) which are not subject to
tradeoff. These represent the minimum essential levels of performance that must be
satisfied. Overall system objectives, goals and thresholds must be allocated and
translated to arrive at testability requirements to be included in the system specification
or other document for contract compliance (Subtask 201.2.6). This subtask is necessary
to assure that system specification or contract parameters include only those parameters
which the performing activity can control through design and support system
development. The support burden and other effects of the government furnished
material, administrative and logistic delay times, and other items outside the control of
the performing activity must be accounted for in this process.

Figure 5 (excerpt):

    Test points. Each item within the system shall have sufficient test
    points for the measurement or stimulus of internal circuit nodes so as
    to achieve an inherently high level of fault detection and isolation.

    h. Requirement for fault isolation to a replaceable item using BIT.

    i. Requirement for fault isolation times.
50.5.7 Testability requirements for system specification. Quantitative testability
requirements are established through analyses during the D&V phase and are
incorporated in the system specification. Requirements may be expressed in terms of
goals and thresholds rather than as a single number. Model requirements for testability
in a system specification are provided in Figure 5 and are discussed below.

The system specification includes testability requirements for failure
detection, failure isolation and BIT constraints. Requirement (a) defines the interface
between the prime system and an external monitoring system, if applicable. Particular
attention should be given to the use of BIT circuitry to provide performance and status
monitoring. Requirement (b) provides the basis for all subsequent test design and
evaluation. Failure modes are characterized based upon the component technology used,
the assembly process used, the detrimental environment effects anticipated in the
intended application, etc. Maximum use should be made of prior reliability analysis and
fault analysis data such as FMEA and fault trees. The data represent a profile of
estimated system failures to be constantly refined and updated as the design progresses.
Requirements (c) through (e) deal with test approaches. Requirement (c)
permits the use of all test resources and, as such, should always demand 100% failure
coverage. Requirement (d) indicates the proportion of failures to be detected
automatically. Excluded failures form the basis for manual troubleshooting procedures
(swapping large items, manual probing, etc.). Requirement (e) is a requirement for
dealing quickly with critical failures and is a subset of (d). The failure detection
approach selected is based upon the requirement for maximum acceptable failure
latency. Concurrent (continuous) failure detection techniques (utilizing hardware
redundancy, such as parity) are specified for monitoring those functions which are
mission critical or affect safety and where protection must be provided against the
propagation of errors through the system. The maximum permitted failure latency for
concurrent failure detection and other classes of automatic testing is imposed by
requirement (f). This requirement determines the frequency at which periodic diagnostic
software, etc. will run. The frequency of periodic and on-demand testing is based upon
function, failure rates, wear out factors, maximum acceptable failure latency, and the
specified operational and maintenance concept.

Requirement (g) is the maximum BIT false alarm rate. Alarms which occur
during system operation but cannot be later duplicated may actually be intermittent
failures or may indeed be a true problem with the BIT circuitry. It may be useful to use
the system specification to require sufficient instrumentation in the system to allow the
sorting out and correction of real BIT problems (e.g., BIT faults, wrong thresholds, etc.)
during operational test and evaluation.
Requirement (h) requires fault isolation by BIT to a subsystem or to a lower
level part, depending upon the maintenance concept. This requirement is usually
expressed as "fault isolation to one item X% of the time, fault isolation to N or fewer
items Y% of the time." Here, the total failure population (100%) consists of those
failures detected by BIT (requirement (d)). The percentages should always be weighted
by failure rates to accurately reflect the effectiveness of BIT in the field.
Requirement (j), BIT constraints, should not be arbitrarily imposed but should
be consistent with the BIT performance specified in requirements (a) through (i).
Historically, systems have needed about 5 to 20% additional hardware for
implementations of adequate BIT. However, for some systems, 1% may be sufficient
whereas other systems may need more than 20%.
Requirement (k), BIT reliability, again should not be arbitrarily imposed but
should be consistent with the required BIT performance. This requirement may also be
used to specify those critical functions for which a failed BIT must not interfere.
50.5.8 Testability requirements for item specifications. Testability requirements
for configuration items (CIs) support two distinct requirements: system test (primarily
BIT) and shop test (ATE and GPETE). Model requirements for testability are presented
in Figure 6.
50.5.8.1 System test. Quantitative testability requirements for each CI are allocated
from system testability requirements based upon relative failure rates of CIs, mission
criticality of CIs or other specified criteria. In many digital systems, BIT is
implemented, in whole or in part, through software. Here testability requirements will
appear in a computer program configuration item (CPCI) development specification. The
program may be dedicated to the BIT function (i.e., a maintenance program) or may be a
mission program which contains test functions.

50.5.8.2 UUT test. Shop test requirements are determined by how the CI is further
partitioned, if at all, into UUTs. Testability requirements for each UUT should be
included in the appropriate CI development specification.
50.6 Task 202 - Testability preliminary design and analysis

50.6.1 Scope of testability design. Testability addresses three major design areas:

a. The compatibility between the item and its off-line test equipment
b. The BIT (hardware and software) provided in the item to detect and
isolate faults

e. Improve the quality and reduce the cost of maintenance testing and
repair at all levels of maintenance.

Subtask 202.2.1 requires the performing activity to integrate testability into the design
process. Several design guides and handbooks are available which explain testability
design techniques which have proven successful in certain applications. (See Section 20,
Publications.) The following paragraphs provide a summary of some testability design
issues.
50.6.2 D&V system designs. During the D&V phase, alternate system designs are
evaluated. This includes the analysis of manpower requirements, support costs,
reliability, maintainability, and system readiness. There are usually no detailed,
quantitative specifications for testability in D&V system designs. In fact, the purpose of
the D&V phase, with respect to testability, is to determine quantitative testability
requirements that are achievable, affordable, and adequately support system operation
and maintenance. In making this determination, it is reasonable to apply Task 202 to
selected items to be implemented in the alternative systems. These items may be
selected because they have potential testing problems or are not expected to be modified
during FSD.
50.6.3 Test design tradeoffs. The overall test design will usually incorporate a mix
of BIT, off-line automatic test and manual test which provides a level of test capability
consistent with operational availability requirements and life-cycle cost requirements.
Alternate designs are analyzed and traded off against requirements of performance,
supportability, and cost to arrive at a configuration best meeting the requirements at
minimum cost.

a. Manual or automatic test tradeoffs. Decisions regarding the type of
test equipment to be used for system monitoring and maintenance are made based upon
repair policies, overall maintenance plans and planned number of systems. Tradeoffs are
made for test requirements at each maintenance level, considering test complexity, time
to fault isolate, operational environment, logistic support requirements, development
time and cost. The degree of testing automation must be consistent with the planned
skill levels of the equipment operators and maintenance personnel.

c. ATE does not impose the weight, volume, power and reliability
penalties on the prime system that BIT does.
c. Electrical partitioning. Whenever possible, the block of circuitry
currently being tested should be isolated from circuitry not being tested through the use
of blocking gates, tri-state devices, relays, etc. This "divide and conquer" approach is
based upon the concept that test time increases exponentially with the complexity of the
circuit.
f. Test control (controllability). Special test input signals, data paths, and
circuitry should be incorporated to provide the test system, whether BIT or ATE,
sufficient control over internal item or component operation for the detection and
isolation of internal faults. Special attention is given to the independent control of clock
lines, clear lines, breaking of feedback loops, and tri-state isolation of components.

g. Test access (observability). Test points, data paths, and circuitry
should be incorporated to provide the test system, whether BIT or ATE, sufficient
signature data for fault detection and isolation within the item. The selection of
physical (real) test points should be sufficient to accurately determine the value of
internal nodes (virtual test points) of interest. There should be no requirement to probe
internal points for organizational-level fault isolation.

h. Parts selection. In selecting between parts, each with satisfactory
performance characteristics, preference is given to integrated circuit components and
assembled items which have satisfactory testability characteristics. Preference is given
to those integrated circuits for which sufficient disclosure of internal structure and
failure modes has been provided as a basis for effective, economical testing.
a. Electrical partitioning for off-line test. The ATE should have sufficient
control over the electrical partitioning of the UUT such that relatively small,
independent, and manageable blocks of circuitry may be defined as the basis of test
derivation, test documentation, and test evaluation. The UUT design should support
running individual test program segments on an ATE independent of other test program
segments.

b. UUT test point selection. The number and placement of UUT test
points is based upon the following:

1. Test points are selected based upon fault isolation requirements.

2. Test points selected are readily accessible for connection to ATE
via system/equipment connectors or test connectors.

7. ... test points of low logic level signals.
protection must be provided against the propagation of errors through the system.
Periodic testing is used for monitoring those functions which provide backup or standby
capabilities or are not mission critical. On-demand testing is used for monitoring those
functions which require operator interaction, sensor simulation, and so forth, or which
are not easily, safely, or cost-effectively initiated automatically. The frequency and
length of periodic and on-demand testing is based upon function, failure rates, wear out
factors, maximum acceptable failure latency, and the specified maintenance concept.

b. Electrical partitioning for BIT. The BIT circuitry should have sufficient
control over the electrical partitioning of the item such that relatively small,
independent, and manageable blocks of circuitry can be defined as the basis of test
derivation, test documentation, and test evaluation. In particular, for computer-based
equipment, such control should be made available to BIT software.

c. BIT design tradeoffs. Some of the BIT design tradeoff issues are listed
below:
50.6.8 System-level built-in test. System BIT includes a mix of BIT hardware, BIT
software, and application software error checks to provide the required degree of fault
detection and isolation at the system level.
a. BIT intermittent failure detection. System BIT must be designed to
respond in a predictable manner to intermittent failures, considering both the
maximizing of safety and the minimizing of BIT alarms. Detection of a failure by BIT
should be followed by a second test of the failing operation, whenever practical. The
numbers of repeated tests and repeated failures necessary to establish a solid fault
condition need to be determined. Conditions under which the operator is to be notified
of recoverable intermittent failures should be determined, based upon failure criticality,
frequency of occurrence, and trends. For digital systems, failure data may be recorded
in a failure history queue. Data from the failure history queue could be made accessible
to assist in troubleshooting of intermittent failures and to identify hardware which is
trending toward solid failure. The initial implementation of system BIT should be
flexible. For example, test tolerances may be stored in software or firmware so that the
tolerances and filtering algorithms may be easily changed if BIT is generating too many
false alarms. (A sketch of such a filtering scheme follows subparagraph c below.)
b. BIT failure location. Suitable BIT features must be incorporated into
the system to localize failures to a small number of items and to advise operator
personnel of degraded mode options. In some cases, the BIT may need to isolate a failure
to a level lower than a replaceable item in order to determine what system functions are
lost and which system functions are operational. When subsystems are to be developed
by subcontractors, each subsystem specification should contain a requirement for self-
contained test with minimal reliance upon the system contractor to perform detailed
testing of each subsystem through system-level test. The interface between system and
subsystem test should be straightforward and relatively simple (e.g., test initiate, test
response, test signature). This allows for the evaluation and demonstration of BIT
quality for each subcontractor prior to system integration.

c. Fault-tolerant design coordination. If system availability or safety
requires continued system operation in the presence of certain faults, then the fault-
tolerant design and testability design efforts should be closely coordinated. Equipment
redundancy or functional redundancy may be used to assist in testing. Fault assessment,
reconfiguration into degraded mode, and configuration verification should make
maximum use of testing resources. The design should provide for the independent testing
of redundant circuitry.
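The intermittent-failure filtering and adjustable tolerances described in subparagraph a
above might be organized as in the following sketch. The tolerance values, retry count and
queue depth are illustrative assumptions, not requirements of this standard.

from collections import deque

class BitMonitor:
    """Illustrative BIT filtering: a measurement outside tolerance is retried, and
    only a run of repeated failures is reported as a solid fault; every out-of-
    tolerance event is logged in a failure history queue for trend analysis."""

    def __init__(self, low, high, retries=3, history_depth=100):
        self.low, self.high = low, high      # tolerances, adjustable in software
        self.retries = retries               # repeated failures needed for a solid fault
        self.history = deque(maxlen=history_depth)
        self._consecutive = 0

    def check(self, parameter, value, timestamp):
        if self.low <= value <= self.high:
            self._consecutive = 0
            return "pass"
        self.history.append((timestamp, parameter, value))   # failure history queue
        self._consecutive += 1
        if self._consecutive >= self.retries:
            return "solid fault"             # report to operator / fault log
        return "intermittent"                # recoverable; notify per criticality and trends

monitor = BitMonitor(low=4.75, high=5.25)    # e.g., +5 V supply tolerance
for t, v in enumerate([5.0, 5.4, 5.1, 5.6, 5.7, 5.8]):
    print(t, monitor.check("+5V", v, t))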
50.6.9 Application of testability measures. Testability achievement is tracked
through system development and in-service use utilizing the measures listed in
Table III, or similar measures, as appropriate to the phase of system development. Table
III provides general guidance on applicability of measures and is subdivided into three
basic areas:

a. Inherent (design) measures. Inherent testability measures are
evaluations of testability dependent only upon item design characteristics. The
evaluation identifies the presence or absence of hardware features which support testing
and identifies general problem areas. The analysis primarily serves as feedback to the
performing activity at a point in time when the design can be changed relatively easily.
(See 50.6.10 and 50.6.11.)

b. Test effectiveness measures. Test effectiveness measures are
evaluations of testability dependent upon item design, its relationship to the chosen
maintenance environment and the testing capability of that environment. (See 50.7.3
through 50.7.5.)

c. In-service measures. In-service testability measures are evaluations of
testability based upon measurable field (i.e., operational) experience. (See 50.7.6.)

50.6.10 Qualitative inherent testability evaluation. Subtask 202.2.3 requires that the
performing activity give early visibility to testability issues and show, in a qualitative
manner, that testability considerations have been included in the preliminary design.
Testability considerations include, as a minimum, those concepts described in 50.6.4
through 50.6.7.

50.6.11 Inherent testability assessment. Subtask 202.2.3 requires that the inherent
ability of an item to support high quality testing be assessed using Appendix B.
TABLE III. Applicability of testability measures.

Inherent                       x   x
Test Effectiveness
    Functional coverage        x   x   x
    Predicted FD/FI            x   x
    Predicted FD/FI times      x   x
    Predicted test cost        x   x   x
In-Service
    Achieved FD/FI             x
    Achieved FI time           x
    FA/CND/RTOK rates          x
    Actual test cost           x
approaches, such as scan path, to receive a high score simply because of their inclusion
in the design. The checklist should be finalized prior to the preliminary design review.
50.6.11.2 Checklist scoring. As the design progresses, each checklist issue is examined
and scored for testability compliance. The use of objective, automated testability
analysis tools (e.g., SCOAP, STAMP) for scoring is permitted. The scores are weighted
and summed, giving a single inherent testability figure of merit for the design. The
design for testability process continues until the figure of merit reaches a predetermined
threshold value.
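A minimal sketch of the weight-and-sum computation is shown below, assuming per-issue
scores on a 0 to 100 scale; the issues, weights, scores and threshold are illustrative
assumptions, not values taken from this standard.

# Each checklist issue carries a negotiated weight and a compliance score (0-100);
# the figure of merit is the weighted average of the scores.
checklist = [
    # (issue, weight, score)
    ("Test points accessible via connectors", 8, 90),
    ("Feedback loops can be broken",          5, 60),
    ("Tri-state isolation of bus elements",   6, 100),
    ("Known state after power-up",            4, 75),
]

total_weight = sum(w for _, w, _ in checklist)
figure_of_merit = sum(w * s for _, w, s in checklist) / total_weight

threshold = 85        # predetermined threshold assigned by the requiring authority
print(f"Inherent testability figure of merit: {figure_of_merit:.1f}")
print("Meets threshold" if figure_of_merit >= threshold
      else "Continue design-for-testability effort")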
50.6.11.3 Threshold determination. The requiring authority must assign a threshold
value to be used for the inherent testability assessment. Due to the wide variety of
possible items which may be analyzed, a single "best" threshold value cannot be
recommended although a value in the range of 80 to 90 should force proper attention to
the assessment process. The actual value chosen is not all that critical since another
degree of freedom is available through the negotiation of weighting factors after the
threshold is established. In fact, since each checklist issue is weighted according to its
perceived importance in achieving a testable design, both the eventual design and the
eventual meeting of the overall figure of merit criteria are essentially determined by the
requiring authority concurrence on the issues to be included and the scoring for each
issue. It is incumbent upon the requiring authority to be aware of the importance of
each proposed issue in achieving a testable design.
a. The failure is not detectable by any test sequence. Any failures, such
as those in unused circuitry, which are impossible to detect are deleted
from the failure population.
b. FD = λd / λ,   where   λd = Σ (i = 1 to K) λi

where λi is the failure rate of the ith detected failure, λ is the overall
failure rate, and K is the number of detected failures.
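The following sketch evaluates this expression for a small hypothetical failure population;
the failures and failure rates are illustrative assumptions.

# Predicted failure population: (failure, failure rate in failures per 10^6 hours,
# detected by the test under evaluation?).  Values are illustrative.
failures = [
    ("U1 output stuck-at-1", 12.0, True),
    ("U2 input stuck-at-0",   8.0, True),
    ("C3 short",              3.0, False),
    ("R7 open",               1.5, True),
]

lam_total    = sum(rate for _, rate, _ in failures)            # overall failure rate
lam_detected = sum(rate for _, rate, det in failures if det)   # sum over detected failures

FD = lam_detected / lam_total
print(f"Weighted fault detection FD = {FD:.2%}")               # -> 87.76%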
50.7.3.2 Fault resolution. To calculate predicted fault resolution, data are required
which correlate each detected failure with the signature it produces during testing. The
data are most conveniently ordered by signature and by failed module within each
signature (fault dictionary format).
Let i be the signature index (i = 1 to N), Mi the number of modules listed under
signature i, and λij the failure rate of the jth module under signature i. The overall
failure rate of detected failures is

        λd = Σ (i = 1 to N) Σ (j = 1 to Mi) λij

The fault resolution to an ambiguity group of L or fewer modules, weighted by failure
rate, is

        FRL = (100 / λd) Σ (i = 1 to N) xi Σ (j = 1 to Mi) λij,   where xi = 1 if Mi ≤ L
                                                                        xi = 0 if Mi > L

or, unweighted,

        FRL = (100 / K) Σ (i = 1 to N) xi Mi,   where K = number of detected faults.

If each of the L modules under the signature group is replaced, in turn, and the test rerun
for PASS or FAIL:
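A sketch of the failure-rate-weighted fault resolution computation from a fault dictionary
follows; the signatures, module lists and failure rates are illustrative assumptions.

# Fault dictionary: each test signature maps to the modules (with failure rates)
# that can produce it.  Names and rates are illustrative.
fault_dictionary = {
    "sig-01": [("A1", 10.0)],                           # ambiguity group of 1
    "sig-02": [("A2", 6.0), ("A3", 4.0)],               # ambiguity group of 2
    "sig-03": [("A4", 2.0), ("A5", 2.0), ("A6", 1.0)],  # ambiguity group of 3
}

def fault_resolution(fault_dict, L):
    """FR(L): percent of detected failures (weighted by failure rate) isolated
    to an ambiguity group of L or fewer modules."""
    lam_detected = sum(rate for mods in fault_dict.values() for _, rate in mods)
    lam_resolved = sum(rate for mods in fault_dict.values() if len(mods) <= L
                       for _, rate in mods)
    return 100.0 * lam_resolved / lam_detected

for L in (1, 2, 3):
    print(f"FR({L}) = {fault_resolution(fault_dictionary, L):.1f}%")   # 40.0, 80.0, 100.0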
50.7.3.3 Fault detection time. Fault detection time (or failure latency) is the time
which elapses between the occurrence of a fault and the detection (reporting) of the
fault by the test process. This measure is most useful in characterizing how BIT deals
with critical failures. A suggested format is shown in the following example:

                                        % of Class    Max. detection time
    Failure class 1 (most critical)        95%           ≤ 1 second
                                          100%           ≤ 1 minute
    Failure class 2 (critical)             85%           ≤ 1 minute

This measure requires the enumeration of signals considered most critical, critical, and
so forth and an estimation made of the worst case failure latency for each signal.
S0.7.3.4 Fault isolation time. During maintenance actions using BIT or off-line test,
the time to fault isolate is often the largest and most unpredictable element of repair
time. The testability program should not only attempt to reduce the fault isolation time
but should also try to provide accurate predictions of fault isolation times to
maintenance planners per Task 205 of MIL-STD-470A. The fault isolation time may be
expressed es en average time, a maximum time (at some percentile), or both. The time
is based not only on the length of the diagnostic test sequence but also must include en
estimation of time required for any manual intervention (e.g., the use of a sequential
substitution fault isolation procedure).
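As a sketch of how predicted or observed fault isolation times might be summarized as an
average and a percentile value, assuming an invented sample of times:

import statistics

# Fault isolation times in minutes, one entry per analyzed maintenance action
# or predicted diagnostic sequence (illustrative data).
isolation_times = [4, 6, 7, 9, 12, 15, 18, 22, 35, 60]

mean_time = statistics.mean(isolation_times)
p90 = statistics.quantiles(isolation_times, n=10)[8]   # approximate 90th percentile

print(f"Average fault isolation time: {mean_time:.1f} min")
print(f"90th-percentile fault isolation time: {p90:.1f} min")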
50.7.4 The physical insertion of a sufficient number of faults into
an item to determine its response to a test sequence has some obvious problems. The
two most serious problems are that the time and expense of inserting even a small
number of representative faults into the item is prohibitive and the ability to insert
        FDsys = [ Σ (i) λi FDi ] / [ Σ (i) λi ]

where λi is the failure rate of the ith item and FDi is the fault detection prediction for
the ith item. This applies equally for systems with centralized BIT and systems with
distributed BIT (i.e., BIT in each item).
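A sketch of this failure-rate-weighted roll-up for a hypothetical set of items; the item
names, failure rates and item-level predictions are illustrative assumptions.

# Per-item BIT fault detection predictions and failure rates (failures per 10^6 hours).
items = [
    # (item, failure rate, predicted item-level FD)
    ("receiver",         120.0, 0.95),
    ("signal processor",  80.0, 0.98),
    ("power supply",      40.0, 0.85),
]

lam_total = sum(lam for _, lam, _ in items)
fd_system = sum(lam * fd for _, lam, fd in items) / lam_total

print(f"Predicted system-level BIT fault detection: {fd_system:.3f}")   # -> 0.943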
50.7.6 Testability cost and benefit data. Ultimately, all test performance measures
translate into cost impacts. Higher quality tests usually cost more to produce but should
result in cost savings over the life cycle of the system. These cost data are critical in
setting reasonable testability requirements within the framework of supportability
analysis. Subtask 203.2.7 ensures that testability cost data for this acquisition program
are available for incorporation into appropriate data bases for use by future
supportability analysis efforts.

a. Non-recurring costs. The development costs associated with the
incorporation of testability into the system or equipment include, but are not limited to,
the following:
a. Level of automation. Are the testing tools provided consistent with the
training/skill levels of assigned personnel?

b. BIT fault detection. Does BIT provide timely and accurate detection of
faults so as to minimize reliance on manual detection (e.g., squawks)?
c. BIT false alarm rate. Are BIT false alarms adversely impacting
operational availability and maintenance workloads?
d. Retest okay. Are faults detected at one level of maintenance also
detected at the next level of maintenance?

e. BIT fault isolation time. Does BIT support system MTTR and system
availability requirements?

f. Off-line fault isolation time. Does ATE and its associated TPSs support
shop throughput requirements?

g. Fault resolution. Does poor fault resolution for BIT or ATE adversely
impact spares availability?
50.8.3 BIT false alarm rate. One important testability parameter, BIT false alarm
rate, is difficult to measure in the controlled environment of a demonstration procedure.
If the false alarm rate were relatively high, it would be possible to make use of a
reliability demonstration procedure from MIL-STD-781 to demonstrate the false alarm
rate, treating each BIT false alarm as a relevant failure. The environmental conditions
during the demonstration should be indicative of the expected operational environment in
order to experience a wide range of false alarm causes.
50.8.4 Model validation. Even with a reasonably large sample of inserted faults, a
demonstration can yield only limited data on actual test effectiveness. However, a
demonstration is also useful in validating some of the assumptions and models that were
used during the earlier testability analysis and prediction efforts (Task 203) which were
based upon a much larger fault set. If certain assumptions or models are invalidated by
the demonstration, appropriate portions of Task 203 should be repeated and new
predictions should be made.
MIL-STD-2165

APPENDIX B

10. SCOPE

10.1 Purpose. This appendix provides requirements for the assessment of the
inherent testability of system or equipment design.

20. REFERENCED DOCUMENTS

Not applicable.

30. DEFINITIONS

Not applicable.
50.1 Procedure. Assess the inherent testability of the system or equipment design
using the Inherent Testability Checklist, Table IV. Delete those testability criteria from
Table IV which are not applicable to the design.
Count the design attributes which are relevant to each testability item
(e.g., the total number of nodes in a circuit).

Count the design attributes which meet the testability criteria for each
item (e.g., the number of nodes accessible to the tester).

Apply the scoring system to each item (e.g., Score = accessible nodes ÷
total nodes, or Score = 100 if YES and = 0 if NO).

50.2 Criteria. Modify the design as necessary until the inherent testability equals
or exceeds the threshold value.
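A minimal sketch of the counting and scoring procedure for one quantitative criterion and
two yes/no criteria follows; the counts, weights and threshold are illustrative assumptions,
not values prescribed by this standard.

def ratio_score(meeting, total):
    """Score for a quantitative criterion, e.g., accessible nodes / total nodes."""
    return 100.0 * meeting / total if total else 0.0

def yes_no_score(meets_criterion):
    """Score for a qualitative criterion: 100 if YES, 0 if NO."""
    return 100.0 if meets_criterion else 0.0

# Illustrative checklist items remaining after deleting non-applicable criteria:
# (criterion, weight, score)
scored_items = [
    ("Nodes accessible to the tester",        10, ratio_score(meeting=84, total=100)),
    ("Standard grid layout used on boards",    3, yes_no_score(True)),
    ("Unused connector pins carry node data",  5, yes_no_score(False)),
]

inherent_testability = (sum(w * s for _, w, s in scored_items)
                        / sum(w for _, w, _ in scored_items))
threshold = 85
print(f"Inherent testability = {inherent_testability:.1f} (threshold {threshold})")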
TABLE IV. Inherent testability checklist.

WT   Criteria                                        Total     Number      Score   WT
                                                     Number    Meeting             Score

Mechanical Design

     Is a standard grid layout used on boards to
     facilitate identification of components?

     Is enough spacing provided between compo-
     nents to allow for clips and test probes?
Partitioning

Test Control
Test Access

     Are unused connector pins used to provide
     additional internal node data to the tester?

Parts Selection
Parts Selection (cont'd)

Analog Design
Analog Design (cont'd)

     Are stimulus rise time or pulse width
     requirements compatible with tester
     capabilities?

Digital Design
     Is a known output defined for every word in
     a Read Only Memory (ROM)? Will the
     improper selection of an unused address
     result in a well defined error state?
Digital Design (cont'd)

Built-in Test (BIT)
Test Requirements
Test Data (cont'd)
MIL-STD-2165

APPENDIX C
(d) Production and Deployment Phase - The period from production approval
until the last system is delivered and accepted.

Built-in test (BIT). An integral capability of the mission system or equipment which
provides an automated test capability to detect, diagnose or isolate failures.

Built-in test equipment (BITE). Hardware which is identifiable as performing the
built-in test function; a subset of BIT.

Cannot duplicate (CND). A fault indicated by BIT or other monitoring circuitry which
cannot be confirmed at the first level of maintenance.

Failure latency. The elapsed time between fault occurrence and failure indication.

False alarm. A fault indicated by BIT or other monitoring circuitry where no fault
exists.

Fault coverage, fault detection. The ratio of failures detected (by a test program or
test procedure) to failure population, expressed as a percentage.

Fault isolation time. The elapsed time between the detection and isolation of a fault;
a component of repair time.
Fault resolution, fault isolation. The degree to which a test program or procedure can
isolate a fault within an item; generally expressed as the percent of the cases for which
the isolation procedure results in a given ambiguity group size.

Item. A generic term which may represent a system, subsystem, equipment, assembly,
subassembly, etc., depending upon its designation in each task. Items may include
configuration items and assemblies designated as Units Under Test.

Off-line testing. The testing of an item with the item removed from its normal
operational environment.

Requiring authority. The activity which levies testability task or subtask performance
requirements on another activity (performing activity) through a contract or other
document of agreement.

Test program set (TPS). The combination of test program, interface device, test
program instruction, and supplementary data required to initiate and execute a given
test of a Unit Under Test.

Test requirements document. An item specification that contains the required
performance characteristics of a UUT and specifies the test conditions, values (and
allowable tolerances) of the stimuli, and associated responses needed to indicate a
properly operating UUT.