
MIL-STD-2165

26 JANUARY 1985

MILITARY STANDARD

TESTABILITY PROGRAM

FOR ELECTRONIC SYSTEMS

AND EQUIPMENTS

AMSC No. N3423
MIL-STD-2165

DEPARTMENT OF DEFENSE
WASHINGTON, DC 20301

Testability Program for Electronic Systems and Equipments

MIL-STD-2165

1. This Military Standard is approved for use by all Departments and Agencies of the Department of Defense.

2. Beneficial comments (recommendations, additions, deletions) and any pertinent data which may be of use in improving this document should be addressed to: Commander, Naval Electronic Systems Command (ELEX-6111), Washington, DC 20363-5100, by using the self-addressed Standardization Document Improvement Proposal (DD Form 1426) appearing at the end of this document or by letter.

FOREWORD

1. Testability addresses the extent to which a system or unit supports fault detection and fault isolation in a confident, timely and cost-effective manner. The incorporation of adequate testability, including built-in test (BIT), requires early and systematic management attention to testability requirements, design and measurement.

2. This standard prescribes a uniform approach to testability program planning, establishment of testability (including BIT) requirements, testability analysis, prediction and evaluation, and preparation of testability documentation. Included are:

a. Testability program planning
b. Testability requirements
c. Testability design
d. Testability prediction
e. Testability demonstration
f. Testability data collection and analysis
g. Documentation of testability program
h. Testability review.

3. This standard also prescribes the integration of these testability program requirements with other closely related, interdisciplinary program requirements, such as design engineering, maintainability and logistic support.

4. Three appendices are included to augment the tasks of this standard:

a. Appendix A provides guidance in the selection and application of testability tasks.

b. Appendix B describes the Inherent Testability Assessment, which provides a measure of testability early in the design phase.

c. Appendix C provides a glossary of terms used in this standard.

CONTENTS

Paragraph 1. SCOPE
1.1 Purpose
1.2 Application
1.3 Tailoring of tasks

2. REFERENCED DOCUMENTS
2.1 Issues of documents

3. DEFINITIONS AND ACRONYMS
3.1 Definitions
3.2 Acronyms and abbreviations

4. GENERAL REQUIREMENTS
4.1 Scope of testability program
4.2 Testability program requirements
4.3 Application of requirements

5. DETAILED REQUIREMENTS
5.1 Task descriptions
5.2 Task integration

6. NOTES
6.1 Data requirements

TASK SECTIONS

Task 100. PROGRAM MONITORING AND CONTROL
     101. Testability program planning
     102. Testability reviews
     103. Testability data collection and analysis planning
     200. DESIGN AND ANALYSIS
     201. Testability requirements
     202. Testability preliminary design and analysis
     203. Testability detail design and analysis
     300. TEST AND EVALUATION
     301. Testability inputs to maintainability demonstration

APPENDICES

Appendix A  Testability Program Application Guidance
         B  Inherent Testability Assessment
         C  Glossary

1. SCOPE

1.1 Purpose. This standard provides uniform procedures and methods for establishing a testability program, for assessing testability in design, and for integration of testability into the acquisition process for electronic systems and equipments.

1.2 Application. This standard is applicable to the development of electronic components, equipments, and systems for the Department of Defense. Appropriate tasks of this standard are to be applied during the Conceptual phase, Demonstration and Validation phase, Full Scale Development phase and Production phase of the system acquisition process.

1.3 Tailoring of tasks. Tasks described are intended to be tailored as appropriate to the particular needs of the system or equipment acquisition program. Application guidance and rationale for selecting and tailoring tasks are included in Appendix A.

2. REFERENCED DOCUMENTS

2.1 Issues of documents. The following documents, of the issue in effect on the date of invitation for bids or request for proposal, form a part of this standard to the extent specified herein.
STANDARDS

MILITARY

MIL-STD-470       Maintainability Program for Systems and Equipment

MIL-STD-471       Maintainability Verification/Demonstration/Evaluation

MIL-STD-721       Definition of Effectiveness Terms for Reliability and Maintainability

MIL-STD-785       Reliability Program for Systems and Equipment Development and Production

MIL-STD-1309      Definition of Terms for Test, Measurement and Diagnostic Equipment

MIL-STD-1388-1    Logistic Support Analysis

MIL-STD-2077      Test Program Sets, General Requirements for

(Copies of standards required by contractors in connection with specific procurement functions should be obtained from the procuring activity or as directed by the contracting officer.)

3. DEFINITIONS AND ACRONYMS

3.1 Definitions. The definitions included in MIL-STD-1309 and MIL-STD-721 shall apply. In addition, the definitions of Appendix C are applicable.

3.2 Acronyms and abbreviations. The following acronyms and abbreviations listed in this Military Standard are defined as follows:

a. ATE - automatic test equipment
b. ATLAS - abbreviated test language for all systems
c. BIT - built-in test
d. BITE - built-in test equipment
e. CDR - critical design review
f. CDRL - contract data requirements list
g. CFE - contractor furnished equipment
h. CI - configuration item
i. CND - cannot duplicate
j. DID - data item description
k. D&V - demonstration and validation
l. FMEA - failure modes and effects analysis
m. FQR - formal qualification review
n. FSD - full-scale development
o. GFE - government furnished equipment
p. GPETE - general purpose electronic test equipment
q. HITS - hierarchical interactive test simulator
r. ID - interface device
s. I/O - input or output
t. ILSMT - integrated logistic support management team
u. LSA - logistic support analysis
v. LSAR - logistic support analysis record
w. MTTR - mean time to repair
x. P/D - production and deployment
y. PDR - preliminary design review
z. R&M - reliability and maintainability
aa. ROM - read only memory
bb. SCOAP - Sandia controllability observability analysis program
cc. SDR - system design review
dd. STAMP - system testability and maintenance program
ee. T&E - test and evaluation
ff. TPS - test program set
gg. TRD - test requirements document
hh. UUT - unit under test

4. GENERAL REQUIREMENTS

4.1 Scope of testability program. This standard is intended to impose and facilitate interdisciplinary efforts required to develop testable systems and equipments. The testability program scope includes:

a. Support of and integration with maintainability design, including requirements for performance monitoring and corrective maintenance action at all levels of maintenance.

b. Support of integrated logistic support requirements, including the support and test equipment element and other logistic elements.

c. Support of and integration with design engineering requirements, including the hierarchical development of testability designs from the piece part to the system.

4.2 Testability program requirements. A testability program shall be established which accomplishes the following general requirements:

a. Preparation of a Testability Program Plan
b. Establishment of sufficient, achievable, and affordable testability, built-in and off-line test requirements
c. Integration of testability into equipments and systems during the design process in coordination with the maintainability design process
d. Evaluation of the extent to which the design meets testability requirements
e. Inclusion of testability in the program review process.

4.3 Application of requirements. Detailed requirements described in this standard are to be selectively applied and are intended to be tailored, as required, and as appropriate to particular systems and equipment acquisition programs. Appendix A provides rationale and guidance for the selection and tailoring of testability program tasks.

5. DETAILED REQUIREMENTS

5.1 Task descriptions. Individual task requirements are provided for the establishment of a testability program for electronic system and equipment acquisition. The tasks are categorized as follows:

TASK SECTION 100. PROGRAM MONITORING AND CONTROL

Task 101 Testability Program Planning
Task 102 Testability Reviews
Task 103 Testability Data Collection and Analysis Planning

TASK SECTION 200. DESIGN AND ANALYSIS

Task 201 Testability Requirements
Task 202 Testability Preliminary Design and Analysis
Task 203 Testability Detail Design and Analysis

TASK SECTION 300. TEST AND EVALUATION

Task 301 Testability Inputs to Maintainability Demonstration

5.2 Task integration. The individual task requirements provide for integration with other program engineering and management tasks to preclude duplication and overlap while assuring timely consideration and accomplishment of testability requirements.
6. NOTES

6.1 Data requirements. When this standard is used in an acquisition, the data identified below shall be deliverable only when specified on the DD Form 1423 Contract Data Requirements List (CDRL). When the DD Form 1423 is not used and Defense Acquisition Regulation 7-104.9(n)(2) is cited, the data identified below shall be delivered in accordance with requirements specified in the contract or purchase order. Deliverable data associated with the requirements of this standard are cited in the following tasks:

Task            Data Requirement                      Applicable Data Item
                                                      Description (DID)

101             Testability Program Plan              DI-T-7198

102             Program Review Documentation          DI-E-5423*

103             Data Collection and Analysis Plan     DI-R-7105

201, 202, 203   Testability Analysis Report           DI-T-7199

301             Maintainability Demonstration
                Test Plan                             DI-R-7112

                Maintainability Demonstration         DI-R-7113

* Equivalent approved DID may be used.

(Copies of DIDs required by contractors in connection with specific acquisition functions should be obtained from the Naval Publications and Forms Center or as directed by the contracting officer.)

Custodians:                     Preparing Activity:
  Army - CR                       Navy - EC
  Navy - EC
  Air Force - 17                (Project ATTS-0007)

Review:

User:

TASK SECTION 100

PROGRAM MONITORING AND CONTROL

TASK 101
TESTABILITY PROGRAM PLANNING

101.1 PURPOSE. To plan for a testability program which will identify and
integrate all testability design management tasks required to accomplish program
requirements.

101.2 TASK DESCRIPTION

101.2.1 Identify a single organizational element within the performing activity which has overall responsibility and authority for implementation of the testability program. Establish analyses and data interfaces between the organizational element responsible for testability and other related elements.

101.2.2 Develop a process by which testability requirements are integrated with other design requirements and disseminated to design personnel and subcontractors. Establish controls for assuring that each subcontractor's testability practices are consistent with overall system or equipment requirements.

101.2.3 Identify testability design guides and testability analysis models and procedures to be imposed upon the design process. Plan for the review, verification and utilization of testability data submissions.

101.2.4 Develop a testability program plan which describes how the testability program will be conducted. The testability program plan shall be included as part of the systems engineering management plan or other integrated planning documents when required. The plan describes the time phasing of each testability task included in the contractual requirements and its relationship to other tasks.

101.3 TASK INPUT

101.3.1 Identification of each testability task which is required to be performed as part of the testability program. *

101.3.2 Identification of the time period over which each task is to be conducted. *

101.3.3 Identification of approval procedures for plan updates. *

101.3.4 Identification of deliverable data items. *

101.4 TASK OUTPUT

101.4.1 Testability program plan in accordance with DI-T-7198 if specified as a standalone plan. When required to be a part of another engineering or management plan, use appropriate, specified DID.

*To be specified by the requiring authority.

TASK 102
TESTABILITY REVIEWS

102.1 PURPOSE. To establish a requirement for the performing activity to (1) provide for official review of testability design information in a timely and controlled manner, and (2) conduct in-process testability design reviews at specified dates to ensure that the program is proceeding in accordance with the contract requirements and program plans.

102.2 TASK DESCRIPTION

102.2.1 Include the formal review and assessment of the testability program as an integral part of each system program review (e.g., system design review, preliminary design review, critical design review, etc.) specified by the contract. Reviews shall cover all pertinent aspects of the testability program such as:

a. Status and results of testability-related tasks.
b. Documentation of task results in the testability analysis report.
c. Testability-related requirements in specifications.
d. Testability design, cost or schedule problems.

102.2.2 Conduct and document testability design reviews with performing activity personnel and with subcontractors and suppliers. Coordinate and conduct testability reviews in conjunction with reliability, maintainability and logistic support reviews whenever possible. Inform the requiring authority in advance of each review. Design reviews shall cover all pertinent aspects of the design such as the following:

a. Review the impact of the selected diagnostic concept on readiness, life cycle costs, manpower and training.

b. Review performance monitoring, built-in test and off-line test performance requirements and constraints to ensure that they are complete and consistent.

c. Review the rationale for the inherent testability criteria and weighting factors selected.

d. Review the testability techniques employed by the design groups. Identify testability design guides or procedures used. Describe any testability analysis procedures or automated tools to be used.

e. Review the extent to which testability criteria are being met. Identify any technical limitations or cost considerations inhibiting full implementation.

f. Review adequacy of Failure Modes and Effects Analysis (FMEA) data as a basis for test design. Assess adequacy of the testability/FMEA data interface.

g. Review coordination between BIT hardware and BIT software efforts.

h. Review BIT interface to operator and maintenance personnel. Review BIT fault detection and fault isolation measures to be used. Identify models used and model assumptions. Identify any methods to be used for automatic test generation and test grading.

i. Review BIT fault detection and fault isolation performance to determine if BIT specifications are met. Review efforts to improve BIT performance through improved tests or item redesign. Assess adequacy of testability/maintainability data interfaces.

j. Review testability parameters to be included in Maintainability Demonstration. Identify procedures through which testability concerns are included in Demonstration Plans and Procedures.

k. Review compatibility of signal characteristics at selected test points with planned test equipment. Assess adequacy of data interface between testability and Support and Test Equipment organizational elements.

l. Review performance monitoring, BIT design and off-line test requirements to determine completeness and consistency.

m. Review approaches to monitoring production testing and field maintenance actions to determine fault detection and fault isolation effectiveness.

n. Review plans for evaluating impact on testability for Engineering Change Proposals.
102.3 TASK INPUT
102.3.1 Identification of amount of time to be devoted to the testability program at
each formal review and the level of technical detail to be provided. *

102.3.2 Identification of level of participation desired by the requiring authority in


internal and subcontractor testability design reviews. *

102.4 TASK OUTPUT


102.4.1 Documented results of testability assessment as an integral part of system program review documentation. (102.2.1)

102.4.2 Documented results of testability design reviews, including action items pending. (102.2.2)

*To be specified by the requiring authority.
TASK 103

TESTABILITY DATA COLLECTION AND ANALYSIS PLANNING

103.1 PURPOSE. To establish a method for identifying and tracking testability-related problems during system production and deployment and identifying corrective actions.

103.2 TASK DESCRIPTION

103.2.1 Develop a plan for the analysis of production test results to determine if BIT hardware and software, ATE hardware and software, and maintenance documentation are meeting specifications in terms of fault detection, fault resolution, fault detection times and fault isolation times.
103.2.2 Develop a plan for the analysis of maintenance actions for the fielded system to determine if BIT hardware and software, ATE hardware and software, and maintenance documentation are meeting specifications in terms of fault detection, fault resolution, false indications, fault detection times and fault isolation times.
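The following sketch is illustrative only and not part of the standard: assuming maintenance-action records with hypothetical fields such as confirmed, detected_by_bit, detection_hours and isolation_hours, it shows one way the measures named in 103.2.1 and 103.2.2 (fault detection, false indications, and detection and isolation times) could be summarized for comparison against the specified values and against the Task 203 predictions.

```python
# Illustrative only: summarizes observed BIT effectiveness measures from a list of
# maintenance-action records. The record fields are assumed, not defined by MIL-STD-2165.
from statistics import mean

def bit_field_measures(actions):
    """Summarize fault detection, false indications, and detection/isolation times."""
    confirmed = [a for a in actions if a["confirmed"]]        # verified hardware failures
    unconfirmed = [a for a in actions if not a["confirmed"]]  # cannot-duplicate (CND) events
    detected = [a for a in confirmed if a["detected_by_bit"]]
    return {
        "fault_detection_fraction": len(detected) / len(confirmed) if confirmed else None,
        "false_indication_fraction": len(unconfirmed) / len(actions) if actions else None,
        "mean_detection_time_hr": mean(a["detection_hours"] for a in detected) if detected else None,
        "mean_isolation_time_hr": mean(a["isolation_hours"] for a in detected) if detected else None,
    }

# Example: three confirmed failures (two detected by BIT) and one CND event.
sample = [
    {"confirmed": True,  "detected_by_bit": True,  "detection_hours": 0.1, "isolation_hours": 0.5},
    {"confirmed": True,  "detected_by_bit": True,  "detection_hours": 0.2, "isolation_hours": 0.4},
    {"confirmed": True,  "detected_by_bit": False, "detection_hours": 2.0, "isolation_hours": 1.5},
    {"confirmed": False, "detected_by_bit": True,  "detection_hours": 0.0, "isolation_hours": 0.0},
]
print(bit_field_measures(sample))
```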

103.2.3 Define data collection requirements to meet the needs of the testability analysis. The data collected shall include a description of relevant operational anomalies and maintenance actions. Data collection shall be integrated with similar data collection procedures, such as those for reliability, maintainability, and Logistic Support Analysis, and shall be compatible with specified data systems in use by the military user organization.

103.3 TASK INPUT

103.3.1 Identification of field or depot test equipment (either government furnished equipment or contractor furnished equipment) to be available for production and deployment testing. *

103.3.2 Identification of existing data collection systems in use by the using command. *

103.3.3 Relationship of Task 103 to Task 104 of MIL-STD-785 and Task 104 of MIL-STD-470. *

103.4 TASK OUTPUT

103.4.1 Testability data collection and analysis plan for production test; documented in accordance with DI-R-7105. (103.2.1)

103.4.2 Testability data collection and analysis plan for analyzing maintenance actions on fielded systems; documented in accordance with DI-R-7105. (103.2.2 and 103.2.3)

*To be specified by the requiring authority.

TASK SECTION 200

DESIGN AND ANALYSIS

TASK 201
TESTABILITY REQUIREMENTS

201.1 PURPOSE. To (1) recommend system test and testability requirements which best achieve availability and supportability requirements and (2) allocate those requirements to subsystems and items.

201.2 TASK DESCRIPTION

201.2.1 Establish overall testability design objectives, goals, thresholds and constraints in support of the logistic support analysis process of MIL-STD-1388-1 or equivalent supportability analysis approved by the requiring authority. In this analysis, prime system design for testability is to be one of the elements to be traded off for improved supportability. Inputs to these requirements include:

a. Identification of technology advancements which can be exploited in system development and test development and which have the potential for increasing testing effectiveness, reducing test equipment requirements, reducing test costs, or enhancing system availability.

b. Identification of existing and planned test resources (e.g., family of testers in inventory) which have potential benefits. Identify tester limitations.

c. Identification of testing and testability problems on similar systems which should be avoided.

201.2.2 Establish performance monitoring, built-in test and off-line test objectives for the new system at the system and subsystem levels. Identify the risks and uncertainties involved in achieving the objectives established.

201.2.3 Establish BIT, test equipment and testability constraints for the new system, such as limitations on additional hardware for BIT, for inclusion in system specifications or other requirement documents. These constraints shall include both quantitative and qualitative constraints.

201.2.4 Evaluate alternative diagnostic concepts to include varying degrees of BIT, manual and off-line automatic testing, diagnostic test points, etc., and identify the selected diagnostic concept. The evaluation shall include:

a. A determination of the sensitivity of system readiness parameters to variations in key testability parameters. These parameters include BIT fault detection, false alarm rate, etc. (an illustrative sketch follows this list).

b. A determination of the sensitivity of life cycle costs to variations in key testability parameters.

c. An estimation of the manpower and personnel implications of alternative diagnostic concepts in terms of direct maintenance manhours per operating hour, job classifications, skill levels, and experience required at each level of maintenance.

d. An estimation of risk associated with each concept.
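As an illustration of the sensitivity determination called for in item a above, the sketch below uses an assumed single-equation availability model and assumed repair times; neither the model form nor the numbers come from this standard.

```python
# Illustrative sensitivity sketch (not from the standard): shows how a simple availability
# model responds to the BIT fault detection fraction. All numbers are assumptions chosen
# only to show the shape of such an analysis.

def availability(mtbf_hr, bit_detection, t_repair_bit_hr=0.5, t_repair_manual_hr=4.0):
    """Availability with mean downtime depending on whether faults are detected by BIT."""
    mean_downtime = bit_detection * t_repair_bit_hr + (1.0 - bit_detection) * t_repair_manual_hr
    return mtbf_hr / (mtbf_hr + mean_downtime)

for fd in (0.70, 0.80, 0.90, 0.95, 0.99):
    print(f"BIT fault detection {fd:.2f} -> availability {availability(200.0, fd):.4f}")
```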

201.2.5 Establish BIT performance requirements at the system and subsystem level. These requirements include specific numeric performance requirements imposed by the requiring authority. Other requirements shall be based, in part, on:

a. Maximum allowable time between the occurrence of a failure condition and the detection of the failure for each mission function.

b. Maximum allowable occurrence of system downtime due to erroneous failure indications (BIT false alarms).

c. Maximum allowable system downtime due to corrective maintenance actions at the organizational level.

d. Minimum life-cycle costs.

201.2.6 Recommend BIT and testability requirements for inclusion in system specifications. Appendix A, Figure 5, provides guidance on requirements to be specified.

201.2.7 Allocate BIT and testability requirements to configuration item specifications based upon reliability and criticality considerations.

201.3 TASK INPUT

201.3.1 Supportability analysis data in accordance with MIL-STD-1388-1 or other method approved by the requiring authority.

201.3.2 Reliability and maintainability analysis and requirements, such as from Task 203 of MIL-STD-785 and Task 205 of MIL-STD-470.

201.3.3 Specific numeric BIT and testability requirements. *

201.4 TASK OUTPUT

201.4.1 Testability data required for supportability analysis. (201.2.1 through 201.2.4)

201.4.2 Description of selected diagnostic concept and tradeoff methodology, evaluation criteria, models used, and analysis results; documented in accordance with DI-T-7199. (201.2.4)

201.4.3 Recommended BIT and testability requirements for system specification. (201.2.3, 201.2.5 and 201.2.6)

201.4.4 Recommended BIT and testability requirements for each configuration item specification. (201.2.7)

*To be specified by the requiring authority.

TASK 202
TESTABILITY PRELIMINARY DESIGN AND ANALYSIS

202.1 PURPOSE. To incorporate testability design practices

202.4.2 Description of testability design tradeoffs and testability features selected for implementation; documented in accordance with DI-T-7199. (202.2.2 and 202.2.4)

202.4.3 For each item, assignment of weighting factor and scoring method for each testability criterion (Appendix B). (202.2.3)
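A minimal sketch of how such a weighted scoring might be mechanized is shown below; the criteria, weights, scores and the simple weighted-average scheme are illustrative assumptions and do not reproduce the Appendix B procedure.

```python
# Illustrative weighted checklist scoring (assumed scheme, not the Appendix B procedure).
# Each criterion gets a weight and a score between 0 (not met) and 1 (fully met);
# the item score is the weighted average expressed as a percentage.

def inherent_testability_score(criteria):
    total_weight = sum(w for w, _ in criteria.values())
    weighted = sum(w * s for w, s in criteria.values())
    return 100.0 * weighted / total_weight

item_criteria = {                 # hypothetical criteria: (weight, score)
    "test points accessible":      (3, 1.0),
    "feedback loops breakable":    (2, 0.5),
    "on-board ROM test support":   (1, 1.0),
    "initialization controllable": (2, 1.0),
}

score = inherent_testability_score(item_criteria)
print(f"Inherent testability score: {score:.1f}%")  # compare against the agreed threshold
```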

202.4.4 Inherent testability assessment; documented in accordance with DI-T-7199. (202.2.3)

202.4.5 Description of methodologies, models and tools to be used in Task 203; documented in accordance with DI-T-7199. (202.2.5)

*To be specified by the requiring authority.

TASK 203
TESTABILITY DETAIL DESIGN AND ANALYSIS

203.1 PURPOSE. To incorporate features into the design of a system or equipment which will meet testability performance requirements and to predict the level of test effectiveness which will be achieved for the system or equipment.
203.2 TASK DESCRIPTION

203.2.1 Incorporate testability design features, including BIT, into the detailed design for each item.

203.2.2 Analyze that all critical functions of the prime equipment are exercised by testing to the extent specified. The performing activity shall conduct functional test analysis for each configuration item (CI) and for each physical partition of the CI designated as a UUT.

203.2.3 Conduct an analysis of the test effectiveness of BIT and off-line test.

a. Identify the failures of each component and the failures between components which correspond to the specified failure modes for each item to be tested. These failures represent the predicted failure population and are the basis for test derivation (BIT and off-line test) and test effectiveness evaluation. Maximum use shall be made of a failure modes and effects analysis (FMEA), from Task 204 of MIL-STD-470A, if a FMEA is required. The FMEA requirements may have to be modified or supplemented to provide the level of detail needed.

b. Model components and interconnections for each item such that the predicted failure population may be accurately modeled. The performing activity shall develop or select models which are optimum considering accuracy required, cost of test generation and simulation, standardization and commonality.

c. Analyze and evaluate the effectiveness of planned testing based upon the predicted failure population. The analysis shall give particular emphasis to fault detection and fault isolation for critical and high failure rate items and interconnections. The test effectiveness data shall be used to guide redesign of equipment and test programs, as required, and to assist in the prediction of spares requirements.

d. Prepare justification for any classes of faults which are undetected, cannot be isolated or are poorly isolated when using the developed test stimuli and submit to the requiring authority for review. Prepare additional or alternate diagnostic approaches. Identify hard to test faults to the LSA process.

203.2.4 Iterate the design of the prime item built-in test until each predicted test effectiveness value equals or exceeds the specified value.

203.2.5 Develop system-level BIT hardware and software, integrating the built-in test capabilities of each subsystem/item.

203.2.6 Predict the level of BIT fault detection for the overall system based upon the BIT detection predictions, weighted by failure rate, of the individual items, including GFE. Predict the level of fault isolation for the overall system through system-level test. Predict the probability of BIT false alarms for the overall system.
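A minimal sketch of the failure-rate-weighted roll-up described in 203.2.6 is shown below; the item failure rates and detection fractions are assumed values for illustration only.

```python
# Illustrative roll-up of item-level BIT detection predictions into a system-level
# prediction, weighted by item failure rate (item data are assumed).

def system_bit_detection(items):
    """items: list of (failure_rate_per_hour, bit_detection_fraction) tuples."""
    total_rate = sum(rate for rate, _ in items)
    return sum(rate * fd for rate, fd in items) / total_rate

items = [
    (120e-6, 0.95),   # CI 1, including GFE items with furnished test effectiveness data
    (45e-6,  0.90),   # CI 2
    (30e-6,  0.80),   # CI 3
]
print(f"Predicted system BIT fault detection: {system_bit_detection(items):.3f}")
```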
203.2.7 Assemble cost data associated with BIT and design for testability on a per unit basis (e.g., additional hardware, increased modularity, additional connector pins, etc.). Extract and summarize cost data associated with the implementation of the testability program, test generation efforts and production test. Provide test effectiveness predictions as inputs to availability and life cycle cost analyses.

203.2.8 Incorporate BIT and testability corrective design actions as determined by the maintainability demonstration results and initial testing.

203.2.9 Incorporate changes and corrections into testability models, test generation software, etc., which reflect an improved understanding of operations and failure modes as the design progresses. Use updated models, software, etc., to update test effectiveness predictions as necessary.

203.3 TASK INPUT

203.3.1 Identification of items to be included in test effectiveness predictions. *

203.3.2 System or item preliminary design data.

203.3.3 BIT specification.

203.3.4 Identification of failure modes and effects and failure rates for each item from Task 204 of MIL-STD-470A.

203.3.5 Test effectiveness data for GFE. *

203.3.6 Corrective action recommendations from maintainability demonstration.

203.4 TASK OUTPUT

203.4.1 System or item design which meets testability and maintainability requirements. (203.2.1, 203.2.4, 203.2.5 and 203.2.8)

203.4.2 Description of built-in test and testability features for each item designated as a Unit Under Test; documented in appropriate Test Requirements Document. (203.2.1)

203.4.3 Test effectiveness prediction for each item; data provided in support of Task 205 of MIL-STD-470A and Task 401 of MIL-STD-1388-1A and documented in accordance with DI-T-7199. (203.2.2, 203.2.3, 203.2.7 and 203.2.9)

203.4.4 System test effectiveness prediction; data provided in support of Task 205 of MIL-STD-470A and documented in accordance with DI-T-7199. (203.2.6, 203.2.7 and 203.2.9)

*To be specified by the requiring authority.

TASK SECTION 300

TEST AND EVALUATION

TASK 301
TESTABILITY INPUTS TO MAINTAINABILITY DEMONSTRATION

301.1 PURPOSE. To determine compliance with specified testability requirements and assess the validity of testability predictions.

301.2 TASK DESCRIPTION

301.2.1 Determine how testability requirements are to be demonstrated using maintainability demonstration, test program verification or other demonstration methods. The following elements are to be demonstrated to the extent each is specified in the contractual documents:

a. The ability of operational system checks to detect the presence of errors.

b. The ability of system or subsystem BIT to detect and isolate failures.

c. The compatibility of each item as a UUT with the selected test equipment.

d. The ability of the test equipment and associated TPSs to detect and isolate failures.

e. The adequacy of technical documentation with respect to fault dictionaries, probing procedures, manual troubleshooting, theory of operation, etc.

f. The correlation of BIT fault detection and fault isolation indications with off-line test results.

g. The validity of models used to predict testability parameters.

301.2.2 Develop plans for the demonstration of testability parameters and integrate into the plans and procedures for maintainability demonstration.

301.2.3 Conduct additional demonstrations, as needed, using the methods and criteria of MIL-STD-471 and MIL-STD-2077 as appropriate, to obtain sufficient testability data for evaluation and document as a portion of the testability demonstration results. The demonstrations are to be combined with other demonstrations whenever practical.

301.3 TASK INPUT

301.3.1 Identification of items to be demonstrated. *

301.3.2 Identification of MIL-STD-471 test method or alternative procedure for conducting a maintainability demonstration. *

301.3.3 Identification of MIL-STD-2077 quality assurance procedure or alternative procedure for evaluation of TPSs. *

301.4 TASK OUTPUT

301.4.1 Testability demonstration plan; documented in accordance with DI-R-7112. (301.2.2)

301.4.2 Testability demonstration results; documented in accordance with DI-R-7113. (301.2.3)

*To be specified by the requiring authority.

APPENDIX A
TESTABILITY PROGRAM APPLICATION GUIDANCE

CONTENTS
Paragraph 10. SCOPE
10.1 Purpose

20. REFERENCED DOCUMENTS
20.1 Issues of documents

30. DEFINITIONS
30.1 Definitions

40. GENERAL APPLICATION GUIDANCE
40.1 Task selection criteria
40.2 Testability program in perspective
40.3 System testability program
40.4 Item testability program
40.5 Criteria for imposing a testability program during the D&V phase
40.6 Equipment testability program
40.7 Iterations

50. DETAILED APPLICATION GUIDANCE
50.1 Task 101 - Testability program plan
50.1.1 Scope
50.1.2 Submission of plan
50.1.3 Plan for D&V phase
50.1.4 Organizational interfaces
50.2 Testability analysis report
50.2.1 Content
50.2.2 Utilization
50.2.3 TRD interface
50.2.4 Classified data
50.3 Task 102 - Testability review
50.3.1 Type of review
50.3.1.1 Program reviews
50.3.1.2 Testability design reviews
50.3.2 Additional data review
50.4 Task 103 - Testability data collection and analysis planning
50.4.1 Testability effectiveness tracking
50.4.2 Data collection and analysis plans
50.4.3 Test maturation
50.4.4 Operational test and evaluation

50.4.4.1 Confirmed failure, BIT
50.4.4.2 Confirmed failure, off-line test
50.4.4.3 Unconfirmed failure, BIT
50.4.5 Corrective action
50.5 Task 201 - Testability requirements
50.5.1 Supportability analysis interface
50.5.2 New technology
50.5.3 Standardization
50.5.4 Test objectives
50.5.5 Diagnostic concept
50.5.6 Testability requirements
50.5.7 Testability requirements for system specification
50.5.8 Testability requirements for item specifications
50.5.8.1 System test
50.5.8.2 UUT test
50.6 Task 202 - Testability preliminary design and analysis
50.6.1 Scope of testability design
50.6.2 D&V system design
50.6.3 Test design tradeoffs
50.6.4 General testability issues
50.6.5 UUT and ATE compatibility
50.6.6 Built-in test
50.6.7 BIT software
50.6.8 System-level built-in test
50.6.9 Application of testability measures
50.6.10 Qualitative inherent testability evaluation
50.6.11 Inherent testability assessment
50.6.11.1 Preliminary design activities
50.6.11.2 Checklist scoring
50.6.11.3 Threshold determination
50.7 Task 203 - Testability detail design and analysis
50.7.1 Testability design techniques
50.7.2 Inherent testability assessment
50.7.3 Test effectiveness measures
50.7.3.1 Fault coverage
50.7.3.2 Fault resolution
50.7.3.3 Fault detection time
50.7.3.4 Fault isolation time
50.7.4 Fault modeling
50.7.5 System level test effectiveness
50.7.6 Testability cost and benefit data
50.7.7 In-service testability measures
50.8 Task 301 - Testability demonstration
50.8.1 Demonstration parameters
50.8.2 BIT and off-line test correlation
50.8.3 BIT false alarm rate
50.8.4 Model validation

ILLUSTRATIONS

Figure 1  System Testability Program Flow Diagram
       2  Item Testability Design Detailed Flow Diagram
       3  Equipment Testability Program Flow Diagram
       4  Model Paragraphs, Preliminary System Specification
       5  Model Requirements, System Specification
       6  Model Requirements, CI Development Specification

TABLES

Table I    Task Application Guidance Matrix
      II   Testability Analysis Report Application Guidance Matrix
      III  Testability Measure Application Guidance Matrix

APPENDIX A
TESTABILITY PROGRAM APPLICATION GUIDANCE

10. SCOPE

10.1 Purpose. This appendix provides rationale and guidance for the selection and tailoring of tasks to define a testability program which meets established program objectives. No contractual requirements are contained in this Appendix.

20. REFERENCED DOCUMENTS

20.1 Issues of documents. The following documents form a part of this Appendix for guidance purposes.

STANDARDS

MILITARY

MIL-STD-781          Reliability Design Qualification and Production Acceptance
                     Tests: Exponential Distribution

MIL-STD-1345 (Navy)  Test Requirements Documents, Preparation of

MIL-STD-1519 (USAF)  Test Requirements Documents, Preparation of

MIL-STD-2076         UUT Compatibility with ATE, General Requirement for

PUBLICATIONS

JOINT SERVICE

NAVMAT P-9405        Joint Service Built-in Test Design Guide
DARCOM 34-1
AFLCP 800-39
AFSCP 800-39
NAVMC 2721
19 March 1981

Naval Fleet          Joint Service Electronic Design for Testability
Analysis Center      Course Notes
                     TM 824-1628
                     1 October 1983

TECHNICAL REPORTS

Naval Air Systems    Avionic Design Guide for ATE Compatibility
Command              1 August 1979
                     Contract N00140-79-C-0696

Air Force Aeronautical   Modular Automatic Test Equipment Guide 3 -
Systems Division         Avionics Testability Design Guide
                         September 1983
                         Contract F33657-78-C-0502

Air Force Rome Air       BIT/External Test Figures of Merit and
Development Center       Demonstration Techniques
                         December 1979
                         RADC-TR-79-309

Naval Fleet              Testability Requirements Analysis Handbook
Analysis Center          2 November 1984
                         TM-8243-1685

DIRECTIVE

DEPARTMENT OF DEFENSE

DOD Directive 5000.39    Acquisition and Management of Integrated Logistic
                         Support for Systems and Equipment
30. DEFINITIONS

30.1 Definitions. The definitions included in MIL-STD-1309, MIL-STD-721 and Appendix C shall apply.

40. GENERAL APPLICATION GUIDANCE

40.1 Task selection criteria. The selection of tasks which can materially aid the attainment of testability requirements is a difficult problem for both government and industry organizations faced with severe funding and schedule constraints. This Appendix provides guidance for the selection of tasks based upon identified program needs. Once appropriate testability program tasks have been selected, each task must be tailored in terms of timing, comprehensiveness and end products to meet the overall program requirements.

40.2 Testability program in perspective. The planned testability program must be an integral part of the systems engineering process and serve as an important link between design and logistic support in accordance with DOD Directive 5000.39. The tasks which influence and are influenced by the testability program are extracted from DOD Directive 5000.39 in the following paragraphs.

a. Concept exploration phase

1. Identify manpower, logistic, reliability, maintainability and testability parameters critical to system readiness and support costs.

2. Estimate what is achievable for each parameter.

b. Demonstration and validation (D&V) phase

1. Conduct tradeoffs among system design characteristics and support concepts.

2. Establish consistent set of goals and thresholds for readiness, reliability, maintainability, BIT, manpower and logistic parameters.

3. Establish test and evaluation (T&E) plans to assess achievement of support-related thresholds.

c. Full scale development (FSD) phase

1. Perform detailed analysis and tradeoffs of design, reliability and maintainability (R&M), manning levels and other logistic requirements.

2. Perform T&E of adequacy of planned manpower, support concepts, R&M and testability to meet system readiness and utilization objectives.

d. Production and deployment phase

1. Perform follow-on evaluation of maintenance plan, support capability, operation and support costs and manpower.

2. Correct deficiencies.

40.3 System testability program (Figure 1). For major systems, the testability tasks for each program phase are summarized in Table I and listed below.

a. Concept exploration phase - Establish testability objectives in preliminary system specification (Task 201).

b. D&V phase

1. Prepare testability program plan (Task 101).

2. Incorporate testability features into D&V items and evaluate effectiveness (see 40.4).

3. Select alternative system concept(s). Establish testability requirements in system specification. Allocate testability requirements to item development specifications (Task 201).

4. Conduct testability review as part of system design review (Task 102).

c. FSD phase

1. Prepare and revise testability program plan (Task 101).


Figure 1. System Testability Program Flow Diagram

Table I. Task application guidance matrix.

                                            Program phase
Task                                   CON    D&V    FSD    P/D

101 Testability program planning       NA     G      G      NA

102 Testability reviews                G1     G      G      S

103 Testability data collection
    and analysis planning              NA     S      G      G

201 Testability requirements           G1     G      G      NA

202 Testability preliminary
    design and analysis                NA     S      G      S

203 Testability detail design
    and analysis                       NA     S      G      S

301 Testability demonstration          NA     S      G      S

NA - Not applicable           CON - Concept Exploration
G  - Generally applicable     D&V - Demonstration and Validation
S  - Selectively applicable   FSD - Full Scale Development
                              P/D - Production and Deployment

2. Incorporate testability features into FSD items and evaluate effectiveness (see 40.4).

3. Demonstrate system testability effectiveness (Task 301).

d. Production and deployment phase - Collect data on achieved testability effectiveness. Take corrective action, as necessary.

40.4 Item testability program (Figure 2). For all items, whether developed as a subsystem under a system acquisition program or developed under an equipment acquisition program, the testability tasks are listed below.

a. Preliminary design

1. Prepare testability program plan, if a plan was not developed as part of a system acquisition program (Task 101).

2. Incorporate testability features into preliminary design (Task 202).

3. Prepare inherent testability checklist for each item (Task 202).

4. Conduct testability review as part of preliminary design review (Task 102).

b. Detail design

1. Incorporate testability features into detail design (Task 203), and predict inherent testability for each item (Task 202).

2. Predict test effectiveness for each item (Task 203).

3. Conduct testability review as part of the critical design review (Task 102).

4. Demonstrate item testability effectiveness (Task 301).

40.5 Criteria for imposing a testability program during the D&V phase. During the D&V phase, a formal testability program should be applied to the system integration effort and, in addition, should be selectively applied to those subsystems which present a high risk in testing. The high risk aspect of test design may be a result of:

a. Criticality of function to be tested,

b. Difficulty of achieving desired test quality at an affordable cost,

c. Difficulty of defining appropriate testability measures or demonstrations for technology being tested,

Figure 2. Item Testability Design Detailed Flow Diagram

d. Large impact on maintainability or elements if expected test quality, automation, throughput, etc., is not achieved, or

e. High probability that modifications to the subsystem during FSD will be limited.

40.6 Equipment testability program (Figure 3). For the acquisition of less-than-major systems or individual equipments, the testability tasks are listed below.

a. Establish system or equipment testability requirements (performed by requiring authority using Task 201 as guidance).

b. Prepare testability program plan (Task 101).

c. Incorporate testability features into items and evaluate effectiveness (see 40.4).

d. Collect data on achieved testability effectiveness (performed by requiring authority using Task 103 as guidance).

40.7 Iterations. Certain tasks contained in this standard are highly iterative in nature and recur at various times during the acquisition cycle, proceeding to lower levels of hardware indenture and greater detail in the classical systems engineering manner.
50. DETAILED APPLICATION GUIDANCE

50.1 Task 101 - Testability program planning

50.1.1 Scope. The testability program plan is the basic tool for establishing and executing an effective testability program. The testability program plan should document what testability tasks are to be accomplished, how each task will be accomplished, when they will be accomplished, and how the results of the task will be used. The testability program plan may be a stand-alone document but preferably should be included as part of the systems engineering planning when required. Plans assist the requiring authority in evaluating the prospective performing activity's approach to and understanding of the testability task requirements, and the organizational structure for performing testability tasks. The testability program plan should be closely coordinated with the maintainability program plan.

50.1.2 Submission of plan. When requiring a testability program plan, the requiring authority should allow the performing activity to propose specifically tailored tasks with supporting rationale to show overall program benefits. The testability program plan should be a dynamic document that reflects current program status and planned actions. Accordingly, procedures must be established for updates and approval of updates by the requiring authority when conditions warrant. Program schedule changes, test results, or testability task results may dictate a change in the testability program plan in order for it to be used effectively as a management document.

Figure 3. Equipment Testability Program Flow Diagram

50.1.3 Plan for D&V phase. When submitted at the beginning of a D&V phase, the testability program plan should highlight the methodology to be used in establishing qualitative and quantitative testability requirements for the system specification. The plan should also describe the methodology to be used in allocating quantitative system testability requirements down to the subsystem or configuration item level. The nature of the D&V phase will vary considerably from program to program, ranging from a "firming up" of preliminary requirements to a multi-contractor "fly off" of competing alternatives. In all cases, sufficient data must be furnished to the Government to permit a meaningful evaluation of testing and testability alternatives. The testability program plan should indicate how the flow of information is to be accomplished: through informal customer reviews, through CDRL data submissions, and through testability reviews as an integral part of SDR.

50.1.4 Organizational interfaces. In order to establish and maintain an effective testability program, the testability manager must form a close liaison with all design disciplines. In satisfying system support requirements, the prime system design must be treated as one of the elements which may be traded off through the supportability analysis process. As a result, the testability manager must be prepared to work aggressively with design engineers to ensure a proper balance between performance, cost and supportability. It is not efficient or effective for the testability manager to assume the role of post-design critic and risk large cost and schedule impacts. The testability influence must be apparent from the initiation of the design effort, through design guidelines, training programs, objective measures, etc.

50.2 Testability analysis report

50.2.1 Content. The testability analysis report collects a number of different testability results and documents the results in a single, standard format. The testability analysis report should be updated at least prior to every major program review and this requirement must be reflected in the CDRL. The actual content and level of detail in each submission of the testability analysis report will depend upon the program phase. Table II provides guidance as to which tasks/subtasks would be included for review by the requiring authority prior to four specific reviews. The first entry for a subtask indicates the time of initial submission. Any following entries indicate one or more updates to the original data as the design becomes defined in greater detail.

50.2.2 Utilization. The testability analysis report should be used by the performing activity to disseminate all aspects of the testability design status to the various organizational elements. As such, the testability analysis report should be considered to be a dynamic document, containing the latest available design information and issued under an appropriate degree of configuration control. As a minimum, the testability analysis report should accurately reflect the latest design data when informal testability design reviews are held.

50.2.3 TRD interface. The testability analysis performed during the FSD phase and documented in the testability analysis report should be used as a partial basis for the TRD for each UUT. The TRD, developed in accordance with MIL-STD-1519 or MIL-STD-1345, constitutes the formal interface between the activity responsible for detailed


Table II. Testability Analysis Report application guidance matrix.

Subtask                                        D&V   SDR   PDR   CDR
201.4.2 Requirements tradeoffs x x

202.4.2 Design tradeoffs x x x

202.4.2 Testability design data x x

202.4.3 Inherent testability checklist x

202.4.4 Inherent testability assessment x x

202.4.5 Detail design analysis procedures x x

203.4.3 Item test effectiveness prediction x x

203.4.4 System test effectiveness prediction x x

Note: Each submission of the Testability Analysis Report should be required by the
CDRL to be delivered sufficiently in advance of each review such that the requiring
authority may review the material.


hardware design and the activity responsible for TPS development. This document serves as a single source of all performance verification and diagnostic procedures, and for all equipment requirements to support each UUT in its maintenance environment, whether supported manually or by ATE. The TRD also provides detailed configuration identification for UUT design and test requirements data to ensure compatible test programs.

50.2.4 Classified data. If classified data is required to document the testability analysis, it should be placed in a separate attachment to the testability analysis report such that the testability analysis report may have the widest possible distribution.

50.3 Task 102 - Testability review

50.3.1 Type of review. This task is directed toward two types of review: (1) formal system program reviews (Subtask 102.2.1), and (2) review of design information within the performing activity from a testability standpoint (Subtask 102.2.2). The second type provides testability specialists the authority with which to manage design tradeoffs. For most developers this type of review is a normal operating practice. Procedures for this type of review would be included in the testability program plan.

50.3.1.1 Program reviews. System program reviews such as preliminary design reviews are an important management and technical tool of the requiring authority. They should be specified in statements of work to ensure adequate staffing and funding and are typically held periodically during an acquisition program to evaluate overall program progress, consistency, and technical adequacy. An overall testability program status should be an integral part of these reviews whether conducted with subcontractors or with the requiring authority.

50.3.1.2 Testability design reviews. Testability design reviews are necessary to assess the progress of the testability design in greater technical detail and at a greater frequency than is provided by system program reviews. The reviews shall ensure that the various organizational elements within the performing activity which impact or are impacted by testability are represented and have an appropriate degree of authority in making decisions. The results of the performing activity's internal and subcontractor system reviews should be documented and made available to the requiring authority on request. These reviews should be coordinated, whenever possible, with maintainability, ILSMT and program management reviews.

50.3.2 Additional data review. In addition to formal reviews, useful information can often be gained from performing activity data which is not submitted formally, but which can be made available through an accession list. A data item for this list must be included in the CDRL. This list is a compilation of documents and data which the requiring authority can order, or which can be reviewed at the performing activity's facility.

50.4 Task 103 - Testability data collection and analysis planning
50.4.1 Testability effectiveness tracking. A testability program cannot be totally


effective unless provisions are made for the systematic tracking and evaluation of
testability effectiveness beyond the system development phase. The objective of this
36

L ..—
tdIbS1’D-2165
APPENDIX A

o task is to plan for the eveiuetion of the impact of actuai operational end maintenance
environments on the abiiity of production equipment to be tested. The effectiveness of
testability design techniques for intermediate or depot level maintenance tasks is
monitored end anaiyzed es part of US”M evacuation. Much of the actual collection and
enelvsis of date and resuitine corrective actions mav occur bevond the end of the
I con~act under which the tes”hbitity program is im~ed end ma; be accomplished by
personnel other than those of the performing activity. Stiii, it is essentiet that the
I pfanning for this task be initiated in the FSD phase, preferably by the critical design
review.

50.4.2 Data coUection and anatysis ptans. separate ptens should be prepared for
tedabilitv data collection and enelvsis durhw (t) mwduction DhSSe (Subtesk 103.2.1) and
(2) deplo~ment phase (Subtesk 103.~.2). The ~~ns-should cle~ly delineate which enaiysis
data are to be reported in various documents, such es T&E reports, production test
reports, factory acceptance test reports, etc.

50.4.3 Test maturation. Most test implementations, no matter how well conceived, require a period of time for identification of problems and corrective action to reach specified performance levels. This "maturing" process applies equally to BIT and off-line test. This is especially true in setting test tolerances for BIT and off-line test used to test analog parameters. The setting of test tolerances to achieve an optimum balance between failure detection and false alarms usually requires the logging of considerable test time. It should be emphasized, however, that the necessity for "fine-tuning" a test system during production and deployment in no way diminishes the requirement to provide a "best possible" design during FSD. One way of accelerating the test maturation process is to utilize planned field or depot testers for portions of acceptance test. BIT test hardware and software should be exercised for those failures discovered and the BIT effectiveness documented and assessed.
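The sketch below illustrates the tolerance-setting balance described above under an assumed Gaussian measurement model for an analog BIT parameter; the distributions and candidate tolerances are assumptions, not values from this standard. Widening the tolerance reduces false alarms on good units but allows more degraded units to pass.

```python
# Illustrative tolerance-setting tradeoff for an analog BIT measurement (assumed model):
# a healthy unit reads nominal +/- Gaussian noise; a degraded unit is shifted off nominal.
from statistics import NormalDist

noise = NormalDist(mu=0.0, sigma=1.0)      # measurement spread for a good unit (assumed)
degraded = NormalDist(mu=3.0, sigma=1.0)   # measurement spread for a faulty unit (assumed)

for tol in (1.5, 2.0, 2.5, 3.0):           # candidate test tolerances (in sigma units)
    false_alarm = 2.0 * (1.0 - noise.cdf(tol))              # good unit flagged as failed
    missed_fault = degraded.cdf(tol) - degraded.cdf(-tol)   # faulty unit passes the test
    print(f"tolerance +/-{tol:.1f}: false alarm {false_alarm:.3%}, missed fault {missed_fault:.3%}")
```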

50.4.4 Operational test and evaluation. The suitability of BiT should be assessed es
en integral pert of operattonel test aad evaluation. A closed-loop date trackhtg system
should be implemented to track initief failure occurrences, organizational-level
corrective actions, subsequent higher-ievel maintenance actions, and subsequent
utilization end performance of repaired and returned items. The data collection must be
integrated es much as possible with simiter data collection requirements such es those
for tracking reflabitity end maintainability. The data trackimr svstem must collect
sufficient data to eup~rt the analysis of %0.4.4.1 through 50.4~4.{. AN maintenance
. actions are first reviewed to determine if the failed item is relevant to BIT or off-line
b test. For example, items with loose bolts are not relevant to testability aneiysis. if at
some point in the data tracking, en actual failure is found, the analysis for confirmed
k. failures (50.4.4.1 and 50.4.4.2) is applied. If en actuai faiiure is not found, the enetysie
for non-confirmed failures (50.4.4.3) is eppiied.

50.4.4.1 Confirmed failure, BiT. For each confirmed failure, data on BIT
effectiveness are analyzed:

a. Did BIT detect the failure?

b. Did BIT correctly indicate operational status to the operator?

37
MfL.+TW2165
APPENDIX A

c. Did BIT provide effective


maintenance actions?
fault isolation information for corrective ●
d. What was the ambiguity size (number of modules to be removed or
further tested) due to fault localization or isolation by BIT?
e. How much time was required for fault isolation at the organizational
level of maintenance?

50.4.4.2 Confirmed failure, off-line test. For each confirmed failure, data on off-line test compatibility are analyzed:

a. Were any workarounds required to overcome mechanical or electrical deficiencies in the UUT and ATE interface?

b. Did the ATE system provide failure detection results consistent with those of the initial detection by BIT?

c. Did the UUT design inhibit the ATE system from providing accurate fault isolation data?

50.4.4.3 Unconfirmed failure, BIT. For each unconfirmed failure situation (cannot duplicate) resulting from a BIT indication or alarm, the following data are analyzed:

a. What is the nature of the alarm?

b. What is the frequency of occurrence of the alarm?

c. What failure or failures are expected to cause the observed alarm?

d. What are the potential consequences of ignoring the alarm (in terms of crew safety, launching unreliable weapons, etc.)?

e. What are the operational costs of responding to the false alarm (in terms of aborted missions, degraded mode operation, system downtime)?

f. What are the maintenance costs associated with the false alarm?

g. What additional data are available from operational software dumps (e.g., soft failure occurrences) to characterize cannot duplicate occurrences?

50.4.5 Corrective action. The data on BIT effectiveness and off-line test compatibility are summarized and corrective action, if needed, is proposed by the performing activity or user activity. Those corrective actions dealing with redesign of the prime system are submitted for review and implementation as part of the established engineering change process.


50.5 Task 201 - Testability requirements


50.5.1 Supportability analysis interface. It is essential to conduct a Logistic Support Analysis or other supportability analyses (Subtask 201.2.1) early in an acquisition program to identify constraints, thresholds, and targets for improvement, and to provide supportability input into early system tradeoffs. It is during the early phases of an acquisition program that the greatest opportunity exists to influence the system design from a supportability standpoint. These analyses can identify supportability and maintainability parameters for the new system which are reasonably attainable, along with the prime drivers of support, manpower, personnel and training, cost, and readiness. The drivers, once identified, provide a basis for concentrated analysis effort to identify targets and methods of improvement. Mission and support systems definition tasks are generally conducted at system and subsystem levels early in the system acquisition process (Concept and D&V phases). Identification and analysis of risks plays a key role due to the high level of uncertainty and unknowns early in a system's life cycle. Performance of these tasks requires examination of current operational systems and their characteristics as well as projected systems and capabilities that will be available in the time frame that the new system will reach its operational environment. New system supportability and supportability related design constraints must be established based upon support systems and resources that will be available when the new system is fielded. Additional guidance may be found in the Testability Requirements Analysis Handbook (20.1).
50.5.2 New technology. Subtask 201.2.1a identifies new system supportability enhancements over existing systems during the concept exploration phase. However, a preliminary estimate of the critical testability parameters should be made to ascertain if the required system availability and maintenance and logistic support concepts can be supported using testability parameters demonstrated as achievable on similar systems. The diagnostic concept usually evolves as follows:

a. Determine on-line BIT requirements to ensure monitoring of critical functions and monitoring of functions which affect personnel safety.

b. Determine additional on-line BIT requirements to support high system availability through redundant equipments and functions, backup or degraded modes of operation, etc.

c. Determine additional BIT requirements to support confidence checks prior to system initiation or at periodic intervals during system operation.

As more detailed design data become available, usually during the D&V phase, the diagnostic concept further evolves, making extensive use of readiness and life cycle cost models:

d. Determine what fault isolation capability is inherent in (a) through (c); determine additional BIT requirements to support the preliminary maintenance concept.

e. Determine automatic, semi-automatic and manual off-line test requirements to fill voids due to technical or cost limitations associated with using BIT.

Note: The sum of the requirements for BIT, off-line automatic test, semi-automatic test and manual test must always provide for a complete (100%) maintenance capability at each maintenance level.

f. Determine what existing organizational level BIT capabilities are usable at the next higher maintenance level. Determine requirements for ATE, general purpose electronic test equipment (GPETE) and technical documentation to provide, with BIT, the total maintenance capability.

If testability requirements are included in a preliminary system specification, they should be qualitative in nature pending the more quantitative tradeoffs of the D&V design. Figure 4 provides some model paragraphs which may be included primarily to alert the performing activity that design for testability is considered to be an important aspect of design and that quantitative requirements will be imposed in the final system specification. Alternatively, the model paragraphs for the system specification, figure 5, could be used in the preliminary system specification with all quantitative requirements "to be determined."

50.5.6 Testability requirements. Prior to the Full Scale Development phase, firm testability requirements are established (Subtask 201.2.5) which are not subject to tradeoff. These represent the minimum essential levels of performance that must be satisfied. Overall system objectives, goals and thresholds must be allocated and translated to arrive at testability requirements to be included in the system specification or other document for contract compliance (Subtask 201.2.6). This subtask is necessary to assure that system specification or contract parameters include only those parameters which the performing activity can control through design and support system development. The support burden and other effects of government furnished material, administrative and logistic delay times, and other items outside the control of the performing activity must be accounted for in this process.


3.x.x Design for testability

3.x.x.1 Partitioning. The system shall be partitioned based, in part, upon the ability to confidently isolate faults.

3.x.x.2 Test points. Each item within the system shall have sufficient test points for the measurement or stimulus of internal circuit nodes so as to achieve an inherently high level of fault detection and isolation.

3.x.x.3 Maintenance capability. For each level of maintenance, BIT, off-line automatic test and manual test capabilities shall be integrated to provide a consistent and complete maintenance capability. The degree of test automation shall be consistent with the proposed personnel skill levels and corrective and preventive maintenance requirements.

3.x.x.4 BIT. Mission critical functions shall be monitored by BIT. BIT tolerances shall be set to optimize fault detection and false alarm characteristics. BIT indicators shall be designed for maximum utilization by intended personnel (operator or maintainer).

Figure 4. Model paragraphs, preliminary system specification.


3.x.x Design for testability

a. Requirement for status monitoring.

b. Definition of failure modes, including interconnection failures, specified to be the basis for test design.

c. Requirement for failure coverage (% detection) using full test resources.

d. Requirement for failure coverage using BIT.

e. Requirement for failure coverage using only the monitoring of operational signals by BIT.

f. Requirement for maximum failure latency for BIT.

g. Requirement for maximum acceptable BIT false alarm rate; definition of false alarm.

h. Requirement for fault isolation to a replaceable item using BIT.

i. Requirement for fault isolation times.

j. Restrictions on BIT resources in terms of hardware size, weight and power, memory size and test time.

k. Requirement for BIT hardware reliability.

l. Requirement for automatic error recovery.

m. Requirement for fault detection consistency between hardware levels and maintenance levels.

Figure 5. Model requirements, system specification.


50.5.7 Testability requirements for system specification. Quantitative testability requirements are developed through analysis during the D&V phase and are incorporated in the system specification. Requirements may be expressed in terms of goals and thresholds rather than as a single number. Model requirements for testability in a system specification are provided in Figure 5 and are discussed below.
The system specification includes testability requirements for failure detection, failure isolation and BIT constraints. Requirement (a) defines the interface between the prime system and an external monitoring system, if applicable. Particular attention should be given to the use of BIT circuitry to provide performance and status monitoring. Requirement (b) provides the basis for all subsequent test design and evaluation. Failure modes are characterized based upon the component technology used, the assembly process used, the detrimental environment effects anticipated in the intended application, etc. Maximum use should be made of prior reliability analysis and fault analysis data such as FMEA and fault trees. The data represent a profile of estimated system failures to be constantly refined and updated as the design progresses.

Requirements (c) through (e) deal with test approaches. Requirement (c) permits the use of all test resources and, as such, should always demand 100% failure coverage. Requirement (d) indicates the proportion of failures to be detected automatically. Excluded failures form the basis for manual troubleshooting procedures (swapping large items, manual probing, etc.). Requirement (e) is a requirement for dealing quickly with critical failures and is a subset of (d). The failure detection approach selected is based upon the requirement for maximum acceptable failure latency. Concurrent (continuous) failure detection techniques (utilizing hardware redundancy, such as parity) are specified for monitoring those functions which are mission critical or affect safety and where protection must be provided against the propagation of errors through the system. The maximum permitted failure latency for concurrent failure detection and other classes of automatic testing is imposed by requirement (f). This requirement determines the frequency at which periodic diagnostic software, etc. will run. The frequency of periodic and on-demand testing is based upon function, failure rates, wear out factors, maximum acceptable failure latency, and the specified operational and maintenance concept.

Requirement (g) is the maximum BIT false alarm rate. Alarms which occur during system operation but cannot be later duplicated may actually be intermittent failures or may indeed be a true problem with the BIT circuitry. It may be useful to use the system specification to require sufficient instrumentation in the system to allow the sorting out and correction of real BIT problems (e.g., BIT faults, wrong thresholds, etc.) during operational test and evaluation.

Requirement (h) requires fault isolation by BIT to a subsystem or to a lower level part, depending upon the maintenance concept. This requirement is usually expressed as "fault isolation to one item X% of the time, fault isolation to N or fewer items Y% of the time." Here, the total failure population (100%) consists of those failures detected by BIT (requirement (d)). The percentages should always be weighted by failure rates to accurately reflect the effectiveness of BIT in the field.

Requirement (i), fault isolation time, is derived from maintainability requirements, primarily Mean Time to Repair (MTTR) or repair time at some percentile.

Fault isolation time = repair time - (preparation time + disassembly time + interchange time + reassembly time + alignment time + verification time)

The preparation time, disassembly time, interchange time, reassembly time and alignment time may be estimated. The verification time is usually equal to the failure detection test time in that the same tests are likely to be used for each.
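For illustration only (all times hypothetical, not taken from this standard): if the allocated mean repair time is 30 minutes and the preparation, disassembly, interchange, reassembly, alignment and verification times are estimated at 2, 3, 4, 3, 2 and 4 minutes respectively, the resulting fault isolation time budget is

$$30 - (2 + 3 + 4 + 3 + 2 + 4) = 12 \text{ minutes.}$$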

Requirement (j), BIT constraints, should not be arbitrarily imposed but should be consistent with the BIT performance specified in requirements (a) through (i). Historically, systems have needed about 5 to 20% additional hardware for implementations of adequate BIT. However, for some systems, 1% may be sufficient whereas other systems may need more than 20%.

Requirement (k), BIT reliability, again should not be arbitrarily imposed but should be consistent with the required BIT performance. This requirement may also be used to specify those critical functions with which a failed BIT must not interfere.
50.5.8 Testability requirements for item specifications. Testability requirements for configuration items (CIs) support two distinct requirements: system test (primarily BIT) and shop test (ATE and GPETE). Model requirements for testability are presented in Figure 6.

50.5.8.1 System test. Quantitative testability requirements for each CI are allocated from system testability requirements based upon relative failure rates of CIs, mission criticality of CIs or other specified criteria. In many digital systems, BIT is implemented, in whole or in part, through software. Here testability requirements will appear in a computer program configuration item (CPCI) development specification. The program may be dedicated to the BIT function (i.e., a maintenance program) or may be a mission program which contains test functions.

50.5.8.2 UUT test. Shop test requirements are determined by how the CI is further partitioned, if at all, into UUTs. Testability requirements for each UUT should be included in the appropriate CI development specification.
50.6 Task 202 - Testability preliminary design and analysis

50.6.1 Scope of testability design. Testability addresses three major design areas:

a. The compatibility between the item and its off-line test equipment

The following CI requirements are allocated from system requirements:

a. Definition of fault modes to be used as the basis for test design.

b. Requirement for fault coverage using full test resources (Note: Requirement should be 100% fault coverage).

c. Requirement for fault coverage and failure reporting using full BIT resources of CI.

d. Requirement for fault coverage and failure reporting using only BIT monitoring of operational signals within CI.

e. Requirement for maximum failure latency for BIT and operational monitoring of critical signals.

f. Requirement for maximum BIT false alarm rate. Include criteria for reporting a transient or intermittent fault as a valid BIT alarm.

g. Requirement for fault isolation to one or more numbers of subitems using CI BIT.

h. Requirement for CI fault isolation times using BIT.

i. Restrictions on BIT resources.

j. Requirement for BIT hardware reliability.

k. Requirement for periodic verification of calibration of BIT sensors.

The following requirements apply to each partition within the configuration item which is identified as a UUT:

a. Requirements for compatibility (functional, electrical, mechanical) between the UUT and the selected ATE.

b. Requirements for test point access for the UUT.

c. Requirements for fault coverage using full test resources of the intermediate or depot maintenance facilities. (Note: Requirement should be 100% fault coverage).

d. Requirement for fault coverage using automatic test resources (ATE and TPS plus embedded BIT).

e. Requirement for average (or maximum) test time for GO/NO GO tests using automatic test resources.

f. Requirement for maximum rate of false NO GO indications resulting in cannot duplicates and retest okays using automatic test resources.

g. Requirement for fault isolation to one or more number of subitems within the UUT using automatic test resources.

h. Requirement for fault isolation times using automatic test resources.

NOTE: A UUT at an intermediate maintenance facility may contain several smaller UUTs to be tested at a depot maintenance facility and all should be listed in the CI specification.

Figure 6. Model requirements, CI development specification.


b. The BIT (hardware and software) provided in the item to detect and isolate faults

c. The structure of the item in terms of (1) partitioning to enhance fault isolation and (2) providing access and control to the tester (whether BIT or off-line test) for internal nodes within the item to enhance fault detection and isolation.

Testability concepts, when applied to weapon system and support system designs, have the potential to:

a. Facilitate the development of high quality tests

b. Facilitate manufacturing test

c. Provide a performance monitoring capability (through BIT)

d. Facilitate the development of fault isolation procedures for technical manuals

e. Improve the quality and reduce the cost of maintenance testing and repair at all levels of maintenance.

Subtask 202.2.1 requires the performing activity to integrate testability into the design process. Several design guides and handbooks are available which explain testability design techniques which have proven successful in certain applications (see Section 20, publications). The following paragraphs provide a summary of some testability design issues.

50.6.2 D&V system designs. During the D&V phase, alternate system designs are evaluated. This includes the analysis of manpower requirements, support costs, reliability, maintainability, and system readiness. There are usually no detailed, quantitative specifications for testability in D&V system designs. In fact, the purpose of the D&V phase, with respect to testability, is to determine quantitative testability requirements that are achievable, affordable, and adequately support system operation and maintenance. In making this determination, it is reasonable to apply Task 202 to selected items to be implemented in the alternative systems. These items may be selected because they have potential testing problems or are not expected to be modified during FSD.

50.6.3 Test design tradeoffs. The overall test design will usually incorporate a mix of BIT, off-line automatic test and manual test which provides a level of test capability consistent with operational availability requirements and life-cycle cost requirements. Alternate designs are analyzed and traded off against requirements of performance, supportability, and cost to arrive at a configuration best meeting the requirements at minimum cost.

a. Manual or automatic test tradeoffs. Decisions regarding the type of test equipment to be used for system monitoring and maintenance are made based upon repair policies, overall maintenance plans and planned number of systems. Tradeoffs are made for test requirements at each maintenance level, considering test complexity, time to fault isolate, operational environment, logistic support requirements, development time and cost. The degree of testing automation must be consistent with the planned skill levels of the equipment operators and maintenance personnel.

b. BIT or ATE tradeoffs. Within the category of automatic testing, the allocation of requirements to BIT or off-line test is driven by the natural differences between BIT capabilities and ATE capabilities:

1. BIT is used to provide initial fault detection for a system or equipment and to provide initial fault isolation to a small group of items. BIT has the advantage of operating in the mission environment and being self-sufficient.

2. Off-line ATE is used to provide fault detection for an item as a UUT and provide fault isolation to components within an item. ATE does not impose the weight, volume, power and reliability penalties on the prime system that BIT does.

c. Coordination. In developing off-line test designs, maximum utilization should be made of available BIT capability within each UUT. In addition, test tolerances used by off-line test should be tighter than those used by BIT to avoid the "retest okay" problem.

50.6.4 General testability issues. Testability features in a system or item design support both BIT and off-line test.

a. Physical partitioning. The ease or difficulty of fault isolation depends to a large extent upon the size and complexity of replaceable items:

1. The physical partitioning of a system into items should be based, in part, upon the enhancement of the fault isolation process.

2. The maximum number of item pins must be consistent with the interface capabilities of the proposed ATE.

3. Items should be limited to only analog or only digital circuitry, whenever practical, and when functional partitioning is not impaired.

4. Where practical, circuits belonging to an inherently large ambiguity group due to signal fan-out should be placed in the same package.

b. Functional partitioning. Whenever possible, each function should be implemented on a single replaceable item to make fault isolation straightforward. If more than one function is placed on a replaceable item, provisions should be made to allow for the independent testing of each function.


c. Electrical partitioning. Whenever possible, the block of circuitry currently being tested should be isolated from circuitry not being tested through the use of blocking gates, tri-state devices, relays, etc. This "divide and conquer" approach is based upon the concept that test time increases exponentially with the complexity of the circuit.

d. Initialization. The system or equipment should be designed such that it has a well-defined initial state to commence the fault isolation process. Non-achievement of the correct initial state should be reported to the operator along with sufficient signature data for fault isolation. The system or equipment should be designed to initialize to a unique state such that it will respond in a consistent manner for multiple testing of a given failure.

e. Module interface. Maximum use should be made of available connector pins to incorporate test control and access. For high density circuits and boards, preference may be given to additional circuitry (e.g., multiplexers, shift registers) rather than additional pins.

f. Test control (controllability). Special test input signals, data paths, and circuitry should be incorporated to provide the test system, whether BIT or ATE, sufficient control over internal item or component operation for the detection and isolation of internal faults. Special attention is given to the independent control of clock lines, clear lines, breaking of feedback loops, and tri-state isolation of components.
g. Test access (observability). Test points, data paths, and circuitry should be incorporated to provide the test system, whether BIT or ATE, sufficient signature data for fault detection and isolation within the item. The selection of physical (real) test points should be sufficient to accurately determine the value of internal nodes (virtual test points) of interest. There should be no requirement to probe internal points for organizational-level fault isolation.
h. Parts selection. In selecting between parts, each with satisfactory performance characteristics, preference is given to integrated circuit components and assembled items which have satisfactory testability characteristics. Preference is given to those integrated circuits for which sufficient disclosure of internal structure and failure modes has been provided as a basis for effective, economical testing.

i. Failure mode characterization. Test design and testability design features which support testing should be based upon expected failure modes and effects. Failure modes (e.g., stuck faults, open faults, out-of-tolerance faults) must be characterized based upon the component technology used, the assembly process used and the detrimental environment effects anticipated in the intended application. Maximum use is made of prior analysis and government data.
50.6.5 UUT and ATE compatibility. Each UUT should be designed such that it is compatible with selected or available ATE so as to reduce or eliminate the need for a large number of unique interface device designs. If the ATE has not been selected, the general compatibility requirements of MIL-STD-2076 may be applied to the UUT and ATE design as appropriate.


a. Electrical partitioning for off-line test. The ATE should have sufficient control over the electrical partitioning of the UUT such that relatively small, independent, and manageable blocks of circuitry may be defined as the basis of test derivation, test documentation, and test evaluation. The UUT design should support running individual test program segments on an ATE independent of other test program segments.
b. UUT test point selection. The number and placement of UUT test points is based upon the following:

1. Test points are selected based upon fault isolation requirements.

2. Test points selected are readily accessible for connection to ATE via system/equipment connectors or test connectors.

3. Test points are chosen so that high voltage and current measurements are consistent with safety requirements.

4. Test point measurements relate to a common equipment ground.

5. Test points are decoupled from the ATE to assure that degradation of equipment performance does not occur as a result of connections to the ATE.

6. Test points of high voltage or current are physically isolated from test points of low logic level signals.

7. Test points are selected with due consideration for ATE implementation and consistent with reasonable ATE frequency requirements.

8. Test points are chosen to segregate analog and digital circuitry for independent testing.

9. Test points are selected with due consideration for ATE implementation and consistent with reasonable ATE measurement accuracies.
50.6.6 Built-in test. Suitable BIT features are incorporated into each item to provide initial fault detection and fault isolation to a subitem. This BIT may also be utilized to verify the operability of the system following a corrective maintenance action or a reconfiguration into a degraded mode.
a. BIT fault detection approaches. Fault detection approaches may be categorized as concurrent (continuous) monitoring, periodic, and on-demand techniques. The selection is based upon the requirement for maximum acceptable fault latency. Concurrent (continuous) fault detection techniques (utilizing hardware redundancy) are used for monitoring those functions which are mission critical or affect safety and where protection must be provided against the propagation of errors through the system. Periodic testing is used for monitoring those functions which provide backup or standby capabilities or are not mission critical. On-demand testing is used for monitoring those functions which require operator interaction, sensor simulation, and so forth, or which are not easily, safely, or cost-effectively initiated automatically. The frequency and length of periodic and on-demand testing is based upon function, failure rates, wear out factors, maximum acceptable failure latency, and the specified maintenance concept.
b. Electrical partitioning for BIT. The BIT circuitry should have sufficient control over the electrical partitioning of the item such that relatively small, independent, and manageable blocks of circuitry can be defined as the basis of test derivation, test documentation, and test evaluation. In particular, for computer-based equipment, such control should be made available to BIT software.

c. BIT design tradeoffs. Some of the BIT design tradeoff issues are listed below:

1. Centralized versus distributed BIT.
2. Tailored versus flexible BIT.
3. Active stimulus versus passive monitoring.
4. Circuitry to test BIT circuitry.
5. Hardware versus software versus firmware BIT.
6. Placement of BIT failure indicators.
50.6.7 BIT software. BIT software includes confidence tests (GO/NO GO tests) for
fault detection and diagnostic tests for fault isolation.

a. BIT memory sizing. When estimating memory requirements for a digital system, it is essential that sufficient words are reserved:

1. In control memory for the storage of micro-diagnostics and initialization routines.

2. In main memory for the storage of error processing routines and confidence tests.

3. In secondary memory (e.g., disk memory) for the storage of diagnostic routines.

In addition, the width of each memory word should be sufficient to support BIT requirements:

1. In control memory to achieve controllability of hardware components.

2. In main and secondary memory to provide for error detection and error correction techniques, as required.


Finally, it is important that a sufficient number of memory words are assigned to non-alterable memory resources (e.g., read only memory, protected memory areas) to ensure the integrity of critical test routines and data. Sufficient hardware and software redundancy should exist to confidently load critical software segments.

b. Application software test provisions. The system application software (mission software) should provide for timely detection of hardware faults. The application software design should include sufficient interrupt and trap capability to support the immediate processing of errors detected by concurrent BIT hardware prior to the destruction of data bases or loss of information concerning the nature of the error. The operating system and each critical application program must contain software checks sufficient to meet failure latency requirements.

c. Application software error processing. Error processing routines in the application software invoked by interrupts and traps should be designed with the full participation of hardware design and test engineers. The processing to be performed (re-entry, automatic error correction, diagnostic call, operator message, error logging, immediate halt, etc.) must be consistent with the failure modes and effects analysis. The operating system hierarchy should be designed to allow the diagnostic software sufficient control and observation of hardware components.

50.6.8 System-level built-in test. System BIT includes a mix of BIT hardware, BIT software, and application software error checks to provide the required degree of fault detection and isolation at the system level.

a. BIT intermittent failure detection. System BIT must be designed to respond in a predictable manner to intermittent failures, considering both the maximizing of safety and the minimizing of BIT alarms. Detection of a failure by BIT should be followed by a second test of the failing operation, whenever practical. The numbers of repeated tests and repeated failures necessary to establish a solid fault condition needs to be determined. Conditions under which the operator is to be notified of recoverable intermittent failures should be determined based upon failure criticality, frequency of occurrence, and trends. For digital systems, failure data may be recorded in a failure history queue. Data from the failure history queue could be made accessible to assist in troubleshooting of intermittent failures and to identify hardware which is trending toward solid failure. The initial implementation of system BIT should be flexible. For example, test tolerances may be stored in software or firmware so that the tolerances and filtering algorithms may be easily changed if BIT is generating too many false alarms.
b. BIT failure location. Suitable BIT features must be incorporated into the system to localize failures to a small number of items and to advise operator personnel of degraded mode options. In some cases, the BIT may need to isolate a failure to a level lower than a replaceable item in order to determine what system functions are lost and which system functions are operational. When subsystems are to be developed by subcontractors, each subsystem specification should contain a requirement for self-contained test with minimal reliance upon the system contractor to perform detailed testing of each subsystem through system-level test. The interface between system and subsystem test should be straightforward and relatively simple (e.g., test initiate, test response, test signature). This allows for the evaluation and demonstration of BIT quality for each subcontractor prior to system integration.
c. Fault-tolerant design coordination. If system availability or safety requires continued system operation in the presence of certain faults, then the fault-tolerant design and testability design efforts should be closely coordinated. Equipment redundancy or functional redundancy may be used to assist in testing. Fault assessment, reconfiguration into degraded mode, and configuration verification should make maximum use of testing resources. The design should provide for the independent testing of redundant circuitry.
50.6.9 Application of testability measures. Testability achievement is tracked through system development and in-service use utilizing the measures listed in Table III, or similar measures, as appropriate to the phase of system development. Table III provides general guidance on applicability of measures and is subdivided into three basic areas:
a. Inherent (design) measures. Inherent testability measures are evaluations of testability dependent only upon item design characteristics. The evaluation identifies the presence or absence of hardware features which support testing and identifies general problem areas. The analysis primarily serves as feedback to the performing activity at a point in time when the design can be changed relatively easily. (See 50.6.10 and 50.6.11.)
b. Test effectiveness measures. Test effectiveness measures are evaluations of testability dependent upon item design, its relationship to the chosen maintenance environment and the testing capability of that environment. (See 50.7.3 through 50.7.6.)

c. In-service measures. In-service testability measures are evaluations of testability based upon measurable field (i.e., operational) experience. (See 50.7.7.)

50.6.10 Qualitative inherent testability evaluation. Subtask 202.2.3 requires that the performing activity give early visibility to testability issues and show, in a qualitative manner, that the testability considerations have been included in the preliminary design. Testability considerations include, as a minimum, those concepts described in 50.6.4 through 50.6.7.
50.6.11 Inherent testability assessment. Subtask 202.2.3 requires that the inherent ability of an item to support high quality testing be assessed using Appendix B.

50.6.11.1 Preliminary design activities. The performing activity prepares a checklist of testability-related design issues which are relevant to the design, and proposes a method of quantitatively scoring the degree of achievement of testability for each design issue. A checklist is to be prepared for each item specified by the requiring authority. Each checklist is tailored to the design characteristics of the item. The procuring authority reviews the checklist and the scoring criteria and either concurs or negotiates changes. The checklist approach allows "structured" testability design approaches, such as scan path, to receive a high score simply because of their inclusion in the design. The checklist should be finalized prior to the preliminary design review.
Table III. Testability Measure Application Guidance Matrix

Testability                 Preliminary   Detail
Measure                     Design        Design     Production   Deployment

Inherent                        x            x

Test Effectiveness
  Functional coverage           x            x           x
  Predicted FD/FI                            x           x
  Predicted FD/FI times                      x           x
  Predicted test cost           x            x           x

In-service
  Achieved FD/FI                                                       x
  Achieved FI time                                                     x
  FA/CND/RTOK rates                                                    x
  Actual test cost                                                     x

FD/FI          Fault detection and fault isolation
FA/CND/RTOK    False alarm, cannot duplicate and retest okay


50.6.11.2 Checklist scoring. As the design progresses, each checklist issue is examined and scored for testability compliance. The use of objective, automated testability analysis tools (e.g., SCOAP, STAMP) for scoring is permitted. The scores are weighted and summed, giving a single inherent testability figure of merit for the design. The design for testability process continues until the figure of merit reaches a predetermined threshold value.
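Stated compactly, using the weighting and scoring notation of the Appendix B procedure, the figure of merit is the weighted average of the individual checklist scores:

$$\text{Figure of merit} = \frac{\sum_{i} WT_i \times Score_i}{\sum_{i} WT_i}$$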
50.6.11.3 Threshold determination. The requiring authority must assign a threshold value to be used for the inherent testability assessment. Due to the wide variety of possible items which may be analyzed, a single "best" threshold value cannot be recommended, although a value in the range of 80 to 90 should force proper attention to the assessment process. The actual value chosen is not all that critical since another degree of freedom is available through the negotiation of weighting factors after the threshold is established. In fact, since each checklist issue is weighted according to its perceived importance in achieving a testable design, both the eventual design and the eventual meeting of the overall figure of merit criteria are essentially determined by the requiring authority concurrence on the issues to be included and the scoring for each issue. It is incumbent upon the requiring authority to be aware of the importance of each proposed issue in achieving a testable design.

50.7 Task 203 - Testability detail design and analysis


50.7.1 Testability design techniques. During detail design, the testability design
techniques of the preliminary design are further refined and implemented. Guidance
provided in 50.6.3 through 50.6.8 for preliminary design applies equally to detail design.

50.7.2 Inherent testability assessment. During detail design, Appendix B may be applied to the evolving design. Guidance provided in 50.6.10 and 50.6.11 for preliminary design applies equally to detail design. The inherent testability assessment should be completed prior to the critical design review.
50.7.3 Test effectiveness measures. At the completion of system or equipment design, test sequences should be generated for that design and test effectiveness measured. Analysis need not wait for the completion of TPSs or BIT software. The use of models (50.7.4) is encouraged since they can analyze test effectiveness on a large number of postulated faults (approaching 100% of the specified failure population) prior to incorporating the test stimulus in a test language (e.g., ATLAS) or embedded computer language. The results of the analysis can feed forward to influence TPS or BIT software design and feed back to influence redesign of the prime item to improve its testability. The identification of undetected or poorly isolated failures can lead to three actions:

a. The failure is not detectable by any test sequence. Any failures, such as those in unused circuitry, which are impossible to detect are deleted from the failure population.


b. The failure is potentially detectable, but the test sequence is deficient. Additional stimulus patterns are added to the test sequence.

c. The failure is potentially detectable, but the item's hardware design precludes the use of a test sequence of reasonable length or complexity. The prime item is redesigned to provide additional test control, test access, or both.

Test effectiveness measures include functional coverage (an enumeration of which functions in an item are exercised by a test), failure-based measures as described below, and cost or benefit measures (50.7.6).

50.7.3.1 Fault coverage. Fault detection coverage (FD) is calculated as follows:

$$FD = \frac{\lambda_d}{\lambda}, \qquad \lambda_d = \sum_{i=1}^{K} \lambda_i$$

where λ_i is the failure rate of the ith detected failure, λ is the overall failure rate, and K is the number of detected failures.
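As a minimal computational sketch of this calculation (the failure rates and detection outcomes below are hypothetical, for illustration only):

# Sketch: predicted fault detection coverage FD per 50.7.3.1:
# FD = (sum of failure rates of detected failures) / (overall failure rate).
def fault_detection_coverage(failure_rates, detected):
    """failure_rates[i]: failure rate of the ith postulated failure;
    detected[i]: True if the test sequence detects that failure."""
    overall = sum(failure_rates)
    detected_rate = sum(r for r, d in zip(failure_rates, detected) if d)
    return detected_rate / overall

if __name__ == "__main__":
    rates = [12.0, 3.5, 7.2, 0.8, 20.0]        # hypothetical failure rates
    flags = [True, True, False, True, True]    # hypothetical detection outcomes
    print(f"FD = {fault_detection_coverage(rates, flags):.3f}")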

50.7.3.2 Fault resolution. To calculate predicted fault resolution, data are required which correlate each detected failure with the signature it produces during testing. The data are most conveniently ordered by signature and by failed module within each signature (fault dictionary format).

N = number of unique signatures in dictionary (number of unique test responses)

i = signature index

M_i = number of modules listed in signature i

j = module index within signature

λ_ij = failure rate for jth module for failures providing signature i

λ_d = overall failure rate of detected failures = $\sum_{i=1}^{N} \sum_{j=1}^{M_i} \lambda_{ij}$

FR_L = fault resolution to L or fewer modules (% replacements with ambiguity ≤ L)


If all modules under a signature are replaced as a block:

$$FR_L = \frac{100}{\lambda_d} \sum_{i=1}^{N} X_i \sum_{j=1}^{M_i} \lambda_{ij}, \qquad X_i = \begin{cases} 1 & \text{if } M_i \le L \\ 0 & \text{if } M_i > L \end{cases}$$

If detailed failure rate information is unavailable or unreliable, or it is desired to consider each fault as having equal probability, the above equation reduces to:

$$FR_L = \frac{100}{K} \sum_{i=1}^{N} X_i M_i, \qquad K = \text{number of detected faults} = \sum_{i=1}^{N} M_i$$
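A minimal sketch of the block-replacement fault resolution calculation above (the fault dictionary below is hypothetical; each signature maps to the failure rates of the modules it implicates):

# Sketch: fault resolution FR_L (block-replacement case) per 50.7.3.2.
def fault_resolution(fault_dictionary, L):
    """fault_dictionary: one list of per-module failure rates (lambda_ij) per
    unique signature.  Returns FR_L in percent, weighted by failure rate."""
    lam_d = sum(sum(sig) for sig in fault_dictionary)       # overall detected failure rate
    covered = sum(sum(sig) for sig in fault_dictionary if len(sig) <= L)
    return 100.0 * covered / lam_d

if __name__ == "__main__":
    dictionary = [[5.0], [2.0, 3.0], [1.0, 1.0, 0.5, 0.5]]   # ambiguity groups of 1, 2, 4
    print(f"FR_1 = {fault_resolution(dictionary, 1):.1f}%")
    print(f"FR_2 = {fault_resolution(dictionary, 2):.1f}%")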

If each of the L modules under the signature group is replaced, in turn, and the test rerun
for PASS or FAIL:

50.7.3.3 Fault detection time. Fault detection time (or failure latency) is the time which elapses between the occurrence of a fault and the detection (reporting) of the fault by the test process. This measure is most useful in characterizing how BIT deals with critical failures. A suggested format is shown in the following example:

                                      % of Class    Max. detection time
Failure class 1 (most critical)          95%          ≤ 1 second
                                        100%          ≤ 1 minute
Failure class 2 (critical)               85%          ≤ 1 minute

This measure requires the enumeration of signals considered most critical, critical, and so forth and an estimation made of the worst case failure latency for each signal.

50.7.3.4 Fault isolation time. During maintenance actions using BIT or off-line test, the time to fault isolate is often the largest and most unpredictable element of repair time. The testability program should not only attempt to reduce the fault isolation time but should also try to provide accurate predictions of fault isolation times to maintenance planners per Task 205 of MIL-STD-470A. The fault isolation time may be expressed as an average time, a maximum time (at some percentile), or both. The time is based not only on the length of the diagnostic test sequence but also must include an estimation of time required for any manual intervention (e.g., the use of a sequential substitution fault isolation procedure).
50.7.4 Fault simulation. The physical insertion of a sufficient number of faults into an item to determine its response to a test sequence has some obvious problems. The two most serious problems are that the time and expense of inserting even a small number of representative faults into the item is prohibitive and the ability to insert representative faults is limited by circuit packaging. A computer program may be used to inject (through software) a large number of faults into a software model of the hardware item. The same program can simulate the behavior of an item containing one of the faults in response to the stimulus. The test stimulus may then be evaluated in terms of fault detection and fault resolution based upon a large number of fault cases. Computer programs are well established as tools to simulate the fault behavior of digital circuits and grade digital tests for Test Program Set development. These same programs may be used to grade built-in tests using bootstrap sequences, microdiagnostics, BIT software patterns, etc. as test stimulus. In addition, several programs automatically generate test sequences for the simulator to grade. An example of this is the Hierarchical Interactive Test Simulator (HITS) program. Programs are also available to simulate the fault behavior of analog circuits, but the test stimuli must be provided manually. The usefulness of this approach depends upon the ability of the models (item models and fault models) to accurately reflect actual operation and actual field failures. The item must be modeled at a level of detail which allows all important failure modes to be included. The fault-free behavior of the item model must be validated prior to modeling faults by applying a functional test and comparing the modeled response to the expected response or to the response of a known good hardware item.
50.7.5 System level test effectiveness. The level of BIT fault detection for the overall system is calculated as:

$$FD = \frac{\sum_i \lambda_i \, FD_i}{\sum_i \lambda_i}, \qquad i = 1 \text{ to number of items}$$

where λ_i is the failure rate of the ith item and FD_i is the fault detection prediction for the ith item. This applies equally for systems with centralized BIT and systems with distributed BIT (i.e., BIT in each item).
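A minimal sketch of this failure-rate-weighted roll-up (the item failure rates and FD predictions below are hypothetical):

# Sketch: system-level BIT fault detection per 50.7.5.
def system_fault_detection(items):
    """items: list of (failure_rate, FD_i) pairs, one per item in the system."""
    total_rate = sum(rate for rate, _ in items)
    return sum(rate * fd for rate, fd in items) / total_rate

if __name__ == "__main__":
    items = [(120.0, 0.95), (40.0, 0.80), (15.0, 0.60)]   # hypothetical (lambda_i, FD_i)
    print(f"System FD = {system_fault_detection(items):.3f}")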

50.7.6 Testability cost and benefit data. Ultimately, all test performance measures translate into cost impacts. Higher quality tests usually cost more to produce but should result in cost savings over the life cycle of the system. These cost data are critical in setting reasonable testability requirements within the framework of supportability analysis. Subtask 203.2.7 ensures that testability cost data for this acquisition program are available for incorporation into appropriate data bases for use by future supportability analysis efforts.

a. Non-recurring costs. The development costs associated with the incorporation of testability into the system or equipment include, but are not limited to, the following:

1. Testability program planning costs
2. Testability design costs
3. Testability modeling and analysis costs
4. Testability data preparation costs.

b. Recurring costs and penalties. The production, operation and maintenance costs and penalties associated with the incorporation of testability into the system or equipment include, but are not limited to, the following:


1. Per item costs of additional hardware required for BIT and testability capabilities.
2. Volume and weight required for additional hardware, additional connectors, and increased modularity.
3. Power required for additional hardware.
4. Computer memory required for BIT software.
5. Possibility of system interruption due to failure within BIT circuitry.
6. Reliability impact due to additional hardware.

c. Benefit assessment, development and production. The impact of testability on development and production costs includes, but is not limited to, the following:

1. Test generation costs
2. Production test costs
3. Test equipment costs
4. Interface device costs
d. Benefit assessment, operation and maintenance. The impact (actual or predicted) of testability on operation and maintenance costs includes, but is not limited to, the following:

1. Test and repair costs
2. Test and repair time
3. Manpower costs
4. Training costs
5. Spares cost

50.7.7 In-service testability measures. In-service testability measures evaluate the impact of actual operational and maintenance environments on the ability of production systems and equipments to be tested. The planning for the collection of data to measure test effectiveness in the operational and maintenance environment is accomplished through Task 103. It is important to realize that testing problems in the field may be corrected in many ways (e.g., personnel changes, organizational changes, procedural changes, etc.) and do not always result in engineering design changes. In-service testability measures include:

a. Level of automation. Are the testing tools provided consistent with the training/skill levels of assigned personnel?

b. BIT fault detection. Does BIT provide timely and accurate detection of faults so as to minimize reliance on manual detection (e.g., squawks)?

c. BIT false alarm rate. Are BIT false alarms adversely impacting operational availability and maintenance workloads?


d. Retest okay. Are faults detected at one level of maintenance also detected at the next level of maintenance?

e. BIT fault isolation time. Does BIT support system MTTR and system availability requirements?

f. Off-line fault isolation time. Does ATE and its associated TPSs support shop throughput requirements?

g. Fault resolution. Does poor fault resolution for BIT or ATE adversely impact spares availability?

h. BIT reliability. Is poor BIT reliability adversely impacting the mission?

50.8 Task 301 - Testability demonstration

50.8.1 Demonstration parameters. It is useful to distinguish between fault detection and isolation at the system level (organizational maintenance level) and fault detection and isolation performed off-line (higher maintenance levels). The former may be demonstrated as part of a standard maintainability demonstration if certain testability concerns (Subtask 301.2.1) are incorporated into the maintainability demonstration plans and procedures. The latter may be demonstrated as part of the evaluation procedures for Test Program Sets, including evaluation of software, interfaces and documentation.
50.8.2 BIT and off-line test correlation. Through the development of the testability demonstration plan, the items to be demonstrated under the maintainability demonstration and the TPS demonstration may be coordinated (e.g., some common faults to be inserted) so as to provide data on the correlation of BIT results and off-line test results. This can give an early indication of possible "cannot duplicate" (CND) problems in the field.

50.8.3 BIT false alarm rate. One important testability parameter, BIT false alarm rate, is difficult to measure in the controlled environment of a demonstration procedure. If the false alarm rate were relatively high, it would be possible to make use of a reliability demonstration procedure from MIL-STD-781 to demonstrate the false alarm rate, treating each BIT false alarm as a relevant failure. The environmental conditions during the demonstration should be indicative of the expected operational environment in order to experience a wide range of false alarm causes.
50.8.4 Model validation. Even with a reasonably large sample of inserted faults, a demonstration can yield only limited data on actual test effectiveness. However, a demonstration is also useful in validating some of the assumptions and models that were used during the earlier testability analysis and prediction efforts (Task 203) which were based upon a much larger fault set. If certain assumptions or models are invalidated by the demonstration, appropriate portions of Task 203 should be repeated and new predictions should be made.

APPENDIX B

INHERENT TESTABILITY ASSESSMENT

10. SCOPE
10.1 Purpose. This appendix provides requirements for the assessment of the inherent testability of system or equipment design.

10.2 Application. Appendix B shall be considered as forming a part of the standard.

20. REFERENCED DOCUMENTS

Not applicable.
30. DEFINITIONS

Not applicable.

40. GENERAL REQUIREMENTS

40.1 General. Conduct an analysis of the inherent (intrinsic) testability of the design. The analysis identifies the presence or absence of hardware features which support testing and identifies problem areas. The method of this Appendix shall be applied to each item identified for Inherent Testability Assessment by the requiring authority. The data required by this appendix shall be determined by the performing activity and approved by the requiring authority. Any testability criteria designated as mandatory by the requiring authority, and therefore not subject to design tradeoffs, should be assessed separately from this procedure.

50. DETAILED REQUIREMENTS

50.1 Procedure. Assess the inherent testability of the system or equipment design using the Inherent Testability Checklist, Table IV.

a. Delete those testability criteria from Table IV which are not applicable to the design.

b. Add additional testability criteria to Table IV which are relevant to the design (or modify original criteria).

c. Assign weighting factors (WT) to each item based upon its relative importance in achieving a testable product. (1 ≤ WT ≤ 10)

d. Develop a scoring system for each item (0 ≤ Score ≤ 100) where 100 represents maximum testability and 0 represents a complete lack of testability.

e. Obtain concurrence on (a) through (d) above from the requiring authority.

f. Count the design attributes which are relevant to each testability item (e.g., the total number of nodes in a circuit).

g. Count the design attributes which meet the testability criteria for each item (e.g., the number of nodes accessible to the tester).

h. Apply the scoring system to each item (e.g., Score = accessible nodes ÷ total nodes, or Score = 100 if YES and = 0 if NO).

i. Calculate the weighted score for each item, WT Score = WT x Score.

j. Calculate the inherent testability of the design, TESTABILITY = Sum (WT Score) / Sum (WT).

50.2 Criteria. Modify the design as necessary until the inherent testability equals or exceeds the threshold value.
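A minimal computational sketch of the scoring procedure of 50.1 and the criterion of 50.2 (the checklist weights, scores and threshold below are hypothetical, for illustration only):

# Sketch: inherent testability figure of merit, TESTABILITY = Sum(WT x Score) / Sum(WT).
def inherent_testability(checklist):
    """checklist: list of (WT, Score) pairs, 1 <= WT <= 10 and 0 <= Score <= 100."""
    total_weight = sum(wt for wt, _ in checklist)
    return sum(wt * score for wt, score in checklist) / total_weight

if __name__ == "__main__":
    checklist = [(10, 90), (7, 100), (5, 60), (3, 40)]   # hypothetical tailored checklist
    threshold = 85                                        # hypothetical threshold value
    score = inherent_testability(checklist)
    status = "meets" if score >= threshold else "does not meet"
    print(f"TESTABILITY = {score:.1f}; design {status} the threshold of {threshold}")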

Table IV. Inherent Testability Checklist

                                                        Number
                                               Total    Meeting               WT
                                        WT     Number   Criteria    Score     Score

Mechanical Design
Is a standard grid layout used on boards to facilitate identification of components?

Is enough spacing provided between components to allow for clips and test probes?

Are all components oriented in the same direction (pin 1 always in same position)?

Are standard connector pin positions used for power, ground, clock, test, etc., signals?

Are the number of input and output (I/O) pins in an edge connector or cable connector compatible with the I/O capabilities of the selected test equipment?

Are connector pins arranged such that the shorting of physically adjacent pins will cause minimum damage?

Is defeatable keying used on each board so as to reduce the number of unique interface adapters required?

When possible, are power and ground included in the I/O connector or test connector?

Have test and repair requirements impacted decisions on conformal coating?

Is the design free of special set-up requirements (e.g., special cooling) which would slow testing?

Does the item warm up in a reasonable amount of time?

Is each hardware component clearly labelled?

Partitioning

Is each function to be tested placed wholly upon one board?

If more than one function is placed on a board, can each be tested independently?

Within a function, can complex digital and analog circuitry be tested independently?

Within a function, is the size of each block of circuitry to be tested small enough for economical fault detection and isolation?

If required, are pull-up resistors located on the same board as the driving component?

Are analog circuits partitioned by frequency to ease tester compatibility?

Is the number of power supplies required compatible with the test equipment?

Is the number and type of stimuli required compatible with the test equipment?

Are elements which are included in an ambiguity group placed in the same package?

Test Control

Are unused connector pins used to provide test stimulus and control from the tester to internal nodes?


Can circuitry be quickly and easily driven to a known initial state? (e.g., master clear, less than N clocks for initialization sequence)

Are redundant elements in design capable of being independently tested?

Is it possible to disable on-board oscillators and drive all logic using a tester clock?

Can long counter chains be broken into smaller segments in test mode with each segment under tester control?

Can the tester electrically partition the item into smaller independent, easy-to-test segments? (e.g., placing tri-state elements in a high impedance state)

Is circuitry provided to by-pass any (unavoidable) one-shot circuitry?

Can feedback loops be broken under control of the tester?

In microprocessor-based systems, does the tester have access to the data bus, address bus and important control lines?

Are test control points included at those nodes which have high fan-in (i.e., test bottlenecks)?

Are input buffers provided for those control point signals with high drive requirements?
Test Access

Are unused connector pins used to provide additional internal node data to the tester?
Are signal lines and test points designed to drive the capacitive loading represented by the test equipment?
Are test points provided such that the tester can monitor and synchronize to onboard clock circuits?
Are test access points placed at those nodes which have high fan-out?
Are buffers employed when the test point is a latch and susceptible to reflections?
Are buffers or divider circuits employed to protect those test points which may be damaged by an inadvertent short circuit?
Are active components, such as multiplexers and shift registers, used to make necessary internal node test data available to the tester over available output pins?
Are all high voltages scaled down within the item prior to providing test point access so as to be consistent with tester capabilities?
Is the measurement accuracy of the test equipment adequate compared to the tolerance requirement of the item being tested?

Parts Selection

Is the number of different part types the minimum possible?


Have parts been selected which are well characterized in terms of failure modes?
Are the parts independent of refresh requirements? If not, are dynamic devices supported by sufficient clocking during testing?
Is a single logic family being used? If not, is a common signal level used for interconnections?

Analog Design

Is one test point per discrete active stage brought out to the connector?
Is each test point adequately buffered or isolated from the main signal path?
Are multiple, interactive adjustments prohibited for production items?
Are functional circuits of low complexity (amplifiers, regulators, etc.)?
Are circuits functionally complete without bias networks or loads on some other UUT?
Is a minimum number of multiple phase-related or timing-related stimuli required?
Is a minimum number of phase or timing measurements required?
Is a minimum number of complex modulation or unique timing patterns required?
Are stimulus frequencies compatible with tester capabilities?

Are stimulus rise time or pulse width requirements compatible with tester capabilities?
Do response measurements involve frequencies compatible with tester capabilities?
Are response rise time or pulse width measurements compatible with tester capabilities?
Are stimulus amplitude requirements within the capability of the test equipment?
Are response amplitude measurements within the capability of the test equipment?
Does the design avoid external feedback loops?
Does the design avoid or compensate for temperature-sensitive components?
Does the design allow testing without heat sinks?
Are standard types of connectors used?

Digital Design

Does the design contain only synchronous logic?
Are all clocks of differing phases and frequencies derived from a single master clock?
Are all memory elements clocked by a derivative of the master clock? (Avoid elements clocked by data from other elements.)


Does the design avoid resistance-capacitance one-shots and dependence upon logic delays to generate timing pulses?
Does the design support testing of "bit slices"?
Does the design include data wraparound circuitry at major interfaces?
Do all buses have a default value when unselected?
For multilayer boards, is the layout of each major bus such that current probes or other techniques may be used for fault isolation beyond the node?
Is a known output defined for every word in a Read Only Memory (ROM)? Will the improper selection of an unused address result in a well defined error state?
Is the number of fan-outs for each internal circuit limited to a predetermined value?
Is the number of fan-outs for each board output limited to a predetermined value?
Are latches provided at the inputs to a board in those cases where tester input skew could be a problem?
Is the design free of WIRED-ORs?
Does the design include current limiters to prevent domino effect failures?
If the design incorporates a structured testability design technique (e.g., Scan Path, Signature Analysis), are all the design rules satisfied?


Are sockets provided for microprocessors and other complex components?

Built-in Test (BIT)

Can BIT in each item be exercised under control of the test equipment?
Is the Test Program Set designed to take advantage of BIT capabilities?
Are on-board BIT indicators used for important functions? Are BIT indicators designed such that a BIT failure will give a FAIL indication?
Does the BIT use a building-block approach (e.g., all inputs to a function are verified before that function is tested)?
Does building-block BIT make maximum use of mission circuitry?
Is BIT optimally allocated in hardware, software and firmware?
Does on-board ROM contain self-test routines?
Does BIT include a method of saving on-line test data for the analysis of intermittent failures and operational failures which are non-repeatable in the maintenance environment?
Is the failure rate contribution of the BIT circuitry within stated constraints?
Is the additional weight attributed to BIT within stated constraints?
Is the additional volume attributed to BIT within stated constraints?


Is the additional power consumption attributed to BIT within stated constraints?
Is the additional part count due to BIT within stated constraints?
Does the allocation of BIT capability to each item reflect the relative failure rate of the items and the criticality of the items' functions?
Are BIT threshold values, which may require changing as a result of operational experience, incorporated in software or easily-modified firmware?
Is processing or filtering of BIT sensor data performed to minimize BIT false alarms?
Are the data provided by BIT tailored to the differing needs of the system operator and the system maintainer?
Is sufficient memory allocated for confidence tests and diagnostic software?
Does mission software include sufficient hardware error detection capability?
Is the failure latency associated with a particular implementation of BIT consistent with the criticality of the function monitored?
Are BIT threshold limits for each parameter determined as a result of considering the parameter's distribution statistics, measurement error and the required detection/false alarm characteristics?

Test Requirements

Has Level of Repair Analysis been accomplished?
For each maintenance level, has a decision been made for each item on how built-in test, automatic test equipment and general purpose electronic test equipment will support fault detection and isolation?
Is the planned degree of test automation consistent with the capabilities of the maintenance technician?
For each item, does the planned degree of testability design support the level of repair, test mix, and degree of automation decisions?
Are the test tolerances established for BIT consistent with those established for higher level maintenance test?
Test Data

Do state diagrams for sequential circuits identify invalid sequences and indeterminate outputs?
If a Computer-Aided Design system is used for design, does the CAD data base effectively support the test generation process and test evaluation process?
For Large Scale Integrated Circuits used in the design, are data available to accurately model the LSIC and generate high-confidence tests for it?


For computer-assisted test generation, is the available software sufficient in terms of program capacity, fault modeling, component libraries, and post-processing of test response data?
Are testability features included by the system designer documented in the TRD in terms of purpose and rationale for the benefit of the test designer?
Is a mechanism available to coordinate configuration changes with test personnel in a timely manner?
Are test diagrams included for each major test? Is the diagram limited to a small number of sheets? Are inter-sheet connections clearly marked?
Is the tolerance band known for each signal on the item?


APPENDIX C

GLOSSARY

10.1 Appendix C shall be considered as forming a part of this standard.

10.2 The purpose of this appendix is to provide definitions of terms used, for clarity of understanding and completeness of information. As a general rule, the definitions provided are currently accepted and have been extracted verbatim from other directives (regulations, manuals, MIL-STDs, DoD Directives, etc.). A limited number of terms are presented for which definitions were developed from several reference documents.
20. DEFINITIONS
Acquisition phases.

(a) Concept Exploration Phase - The identification and exploration of alternative solutions or solution concepts to satisfy a validated need.

(b) Demonstration and Validation Phase - The period when selected candidate solutions are refined through extensive study and analyses; hardware development, if appropriate; test; and evaluations.

(c) Full-Scale Development Phase - The period when the system and the principal items necessary for its support are designed, fabricated, tested, and evaluated.

(d) Production and Deployment Phase - The period from production approval until the last system is delivered and accepted.
Built-in test (BIT). An integral capability of the mission system or equipment which provides an automated test capability to detect, diagnose or isolate failures.

Built-in test equipment (BITE). Hardware which is identifiable as performing the built-in test function; a subset of BIT.

Cannot duplicate (CND). A fault indicated by BIT or other monitoring circuitry which cannot be confirmed at the first level of maintenance.

Failure latency. The elapsed time between fault occurrence and failure indication.

False alarm. A fault indicated by BIT or other monitoring circuitry where no fault exists.

Fault coverage, fault detection. The ratio of failures detected (by a test program or test procedure) to the failure population, expressed as a percentage.

Fault isolation time. The elapsed time between the detection and isolation of a fault; a component of repair time.


Fault resolution, fault isolation. The degree to which a test program or procedure can isolate a fault within an item; generally expressed as the percent of the cases for which the isolation procedure results in a given ambiguity group size.
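
To make the two ratio definitions above concrete, the short Python sketch below (illustrative only; the tallies and group-size limit are assumed example values, not data from this standard) computes fault coverage and fault resolution from a set of simulated or demonstrated test results.

    # Illustrative only: fault coverage and fault resolution from assumed example tallies.
    detected_faults = 940
    total_faults = 1000
    fault_coverage = 100.0 * detected_faults / total_faults   # percent of the failure population detected

    # One ambiguity-group size per isolated fault (assumed values).
    ambiguity_group_sizes = [1, 1, 2, 1, 3, 1, 2, 1, 1, 4]
    group_size_limit = 3
    within_limit = sum(1 for size in ambiguity_group_sizes if size <= group_size_limit)
    fault_resolution = 100.0 * within_limit / len(ambiguity_group_sizes)   # percent isolated to <= limit

    print(f"Fault coverage:   {fault_coverage:.1f}%")
    print(f"Fault resolution: {fault_resolution:.1f}% to {group_size_limit} or fewer items")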

Inherent testability. A testability measure which is dependent only upon hardware design and is independent of test stimulus and response data.

Interface device (ID). Provides mechanical and electrical connections and any signal conditioning required between the automatic test equipment (ATE) and the unit under test (UUT); also known as an interface test adapter or interface adapter unit.

Item. A generic term which may represent a system, subsystem, equipment, assembly, subassembly, etc., depending upon its designation in each task. Items may include configuration items and assemblies designated as Units Under Test.

Off-line testing. The testing of an item with the item removed from its normal operational environment.

Performing activity. That activity (government, contractor, subcontractor, or vendor) which is responsible for performance of testability tasks or subtasks as specified in a contract or other formal document of agreement.

Requiring authority. That activity (government, contractor, or subcontractor) which levies testability task or subtask performance requirements on another activity (performing activity) through a contract or other document of agreement.

Retest okay (RTOK). A unit under test that malfunctions in a specific manner during operation, but performs that specific function satisfactorily at a higher level maintenance facility.

Testability. A design characteristic which allows the status (operable, inoperable, or degraded) of an item to be determined and the isolation of faults within the item to be performed in a timely manner.

Test effectiveness. Measures which include consideration of hardware design, BIT design, test equipment design, and test program set (TPS) design. Test effectiveness measures include, but are not limited to, fault coverage, fault resolution, fault detection time, fault isolation time, and false alarm rate.

Test program set (TPS). The combination of test program, interface device, test program instruction, and supplementary data required to initiate and execute a given test of a Unit Under Test.

Test requirements document (TRD). An item specification that contains the required performance characteristics of a UUT and specifies the test conditions, values (and allowable tolerances) of the stimuli, and associated responses needed to indicate a properly operating UUT.


STANDARDIZATION DOCUMENT IMPROVEMENT PROPOSAL (DD Form 1426)

[The remainder of the document is the pre-addressed DD Form 1426 and its mailing instructions for submitting comments on MIL-STD-2165, Testability Program for Electronic Systems, to: Commander, Naval Electronic Systems Command, Defense Standardization Program Branch (ATTN: ELEX 8111), Department of the Navy, Washington, DC 20363.]