Using Computational Cognitive Modeling to Diagnose Possible Sources of Aviation Error
Alex Kirlik
Aviation Human Factors Division
University of Illinois at Urbana–Champaign
The purpose of models is not to fit the data but to sharpen the questions.
—Samuel Karlin, 1983
Requests for reprints should be sent to Alex Kirlik, Aviation Human Factors Division, Institute of
Aviation, University of Illinois, Savoy, IL 61874. Email: [email protected]
Aviation incident and accident investigators often find both cognitive and envi-
ronmental contributing factors to these events. Environmental sources include
such factors as flawed interface design (e.g., Degani, Shafto, & Kirlik, 1999),
confusing automation (e.g., Olson & Sarter, 2000), and unexpected weather con-
ditions (Wiegmann & Goh, 2001). Cognitive sources include such factors as
poor situation awareness (SA; Endsley & Smolensky, 1998), procedural non-
compliance (Bisantz & Pritchett, 2003), and poor crew coordination (Foushee &
Helmreich, 1988).
Many, if not most, significant incidents and accidents result from some combi-
nation of both cognitive and environmental factors. In fact, in a highly
proceduralized domain such as aviation, with highly trained and motivated crews,
accidents rarely result from either environmental or cognitive causes alone.
Training and experience are often sufficient to overcome even the most confusing
interface designs, and the environment is often sufficiently redundant, reversible,
and forgiving (Connolly, 1999) so that most slips and errors have few serious con-
sequences. Most significant incidents and accidents result when cognitive, envi-
ronmental, and perhaps even other (e.g., organizational) factors collectively
conspire to produce disaster (Reason, 1990).
For this reason, an increasing number of human factors and aviation psychol-
ogy researchers have realized that the common terms human error and pilot error
often paint a misleading picture of error etiology (e.g., Hollnagel, 1998; Woods,
Johannesen, Cook, & Sarter, 1994). By their nature, these terms predicate error as
a property of a human or pilot, in contrast to what has been learned about the sys-
temic, multiply caused nature of many operational errors. These often misleading
terms only contribute to the “train and blame” mindset still at work in many opera-
tional settings and perhaps contribute to the failure of such interventions to im-
prove the safety landscape in settings from commercial aviation to military
operations to medicine.
Runway incursions provide one such example: The problem has persisted despite attention from the National Transportation Safety Board and plans to remedy it with technologies such as the Airport Movement Area Safety System (“Runway Incursions,” 2003). The number of U.S. runway incursions in 1996, 1997, and 1998 totaled 287, 315, and
325, respectively. In 1999, a Korean Airlines airliner with 362 passengers swerved
during takeoff at Chicago O’Hare International Airport (ORD) to avoid hitting a
jet that entered the runway, and an Iceland Air passenger jet at John Fitzgerald
Kennedy Airport (JFK) came within 65 m of a cargo jet that mistakenly entered the
runway (Jones, 2000).
These problems show no immediate sign of going away. There were a total of
337 U.S. runway incursions in 2002, more than 1.5 times the number reported a de-
cade earlier. “Runway Incursions” (2003) noted that “Despite FAA programs to
reduce incursions, there were 23 reported in January, 2003, compared with only 14
in January 2002” (p. 15). Due in part to the inability to deal with incursion prob-
lems to date, NASA established an Aviation System-Wide Safety Program to ad-
dress this and other challenges to aviation safety. The NASA simulation and
technology evaluation study described in the following section represents one at-
tempt to use aviation psychology and human factors research techniques to ad-
dress critical challenges to aviation safety.
SIMULATION, EXPERIMENTATION,
AND DATA COLLECTION
The experimental scenario, referred to throughout this article as T–NASA2 (for more detail, see Hooey & Foyle, 2001; Hooey, Foyle, & Andre, 2000), required 18 flight crews, consisting of active pilots from six commercial airlines, to approach, land, and taxi to a gate at ORD. The flight crews had varying levels of
experience with the ORD surface configuration. Experimentation used baseline
conditions (for this study, chart technology only) as well as conditions in which
pilots were provided with various new display and communication technologies,
including a moving map and head-up displays with virtual signage (e.g., a super-
imposed STOP sign at a hold point). The modeling performed in this research
focused solely on performance in the baseline (current technology) conditions.
Nine different taxiway routes were used in the baseline trials of the T–NASA2
simulation. Each of the 18 crews was tested over a balanced subset of three dif-
ferent routes for a total of 54 trials. Each trial began approximately 12 nm out on
a level approach into ORD. Pilots performed an autoland, and the first officer
(FO) notified the captain of their location with respect to the runway exit on the
basis of clearance information obtained during the final stages of flight and the
paper airport diagram. As the aircraft cleared the runway, the crew tuned the ra-
dio to ground controller frequency, and the controller provided a taxi clearance
(a set of intersections and directions) from the current location to the destination
gate. Crews were then required to taxi to the gate in simulated, visually impov-
erished conditions (RVR¹ 1,000 ft). Further details can be found in Hooey and
Foyle (2001). It should be noted that the simulation represented neither all stan-
dard operating procedures (after-landing checklists, log, and company paper-
work) nor all communication activities (with the cabin crew, dispatch, and gate).
As a result, the level of crew workload was somewhat lower than a crew might
experience in operational conditions, lending support to the idea that the experi-
mental situation with respect to error was closer to best-case rather than
worst-case conditions (other than low visibility).
Across the 54 baseline T–NASA2 trials, a total of 12 major off-route navigation errors were committed. Major errors were defined as deviations of 50 ft or more from the centerline of the cleared taxi route. These major errors were used for the
modeling effort because they were objectively determined with simulation data
and did not require subjective interpretation for classification. In each case, crews pro-
ceeded down an incorrect route without any evidence of immediate awareness or
else required correction by ground control. The T–NASA2 research team desig-
nated these 12 to be major errors. Additionally, 14 other deviations were observed
but were detected and corrected by the crews. These latter 14 deviations were thus
classified as minor errors by the NASA team, and we were instructed that the mod-
eling effort should focus solely on the major errors. NASA provided our modeling
team with descriptions of each major error, in terms of intersection complexity,
turn severity, and their own classification of each in terms of planning, decision
making, or execution (Goodman, 2001; Hooey & Foyle, 2001).
Two aspects of the T–NASA2 data set provided the primary motivation for this
modeling effort. First, it was believed that modeling might shed light on the under-
lying causes of the errors observed in the experimental simulations. A second mo-
tivation was the fact that the suite of SA and navigation aids used in the new
technology conditions of the T–NASA2 experiments was observed to eliminate
navigation errors almost entirely (Hooey & Foyle, 2001). The goal of our research,
therefore, was to provide a systemic explanation for the errors that were observed
in a fashion that was consistent with the finding that no errors were observed when
the quality of information available to support navigating was improved.
¹Runway visual range (RVR) is the range over which the pilot of an aircraft on the centerline of a
runway can see the runway surface markings or the lights delineating the runway or identifying its
centerline.
ACT–R (Anderson & Lebiere, 1998; see also Anderson et al., 2004) is a compu-
tational architecture designed to support the modeling of human cognition and
performance at a detailed temporal grain size. Figure 1 depicts the general sys-
tem architecture. ACT–R allows for the modeling of the human in the loop, as
the output of the system is a time-stamped stream of behaviors at a very low level: individual shifts of visual attention, keystrokes, and primitive mental operations such as the retrieval of a simple fact. To produce this,
ACT–R must be provided two things: knowledge and a world or environment
(usually simulated) in which to operate. The environment must dynamically re-
spond to the outputs of ACT–R and, thus, must also often be simulated at a high
degree of fidelity. The knowledge that must be provided to ACT–R to complete
a model of a person in an environment is essentially of two types: declarative
and procedural. Declarative knowledge, such as “George Washington was the
first president of the United States,” or “‘IAH’ stands for Bush Intercontinental
Airport in Houston,” is represented in symbolic structures known as chunks.
Procedural knowledge, sometimes referred to as how-to knowledge, such as the
knowledge of how to lower the landing gear in a 747, is stored in symbolic struc-
tures known as production rules or simply productions. These consist of
IF–THEN pairs; IF a certain set of conditions hold, THEN perform one or more ac-
tions. In addition, both chunks and productions contain quantitative information
that represents the statistical history of that particular piece of knowledge. For ex-
ample, each chunk has associated with it a quantity called activation that is based
on the frequency and recency of access to that particular chunk as well as its rela-
tion to this context. Because the actual statistics are often not known, in many
cases, these values are left at system defaults or are estimated by the modeler, al-
though in principle, ACT–R can learn them as well.
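As background, the activation quantity mentioned above takes the following standard form in the ACT–R literature (Anderson & Lebiere, 1998); we restate it here only for orientation, not as a statement of the particular parameter settings used in our model:

```latex
% Standard ACT-R chunk activation:
%   B_i  : base-level activation of chunk i
%   t_k  : time since the kth access of chunk i
%   d    : base-level decay parameter (0.5 by default)
%   W_j  : attentional weighting of context element j
%   S_ji : strength of association from context element j to chunk i
A_i = B_i + \sum_j W_j S_{ji},
\qquad
B_i = \ln\!\Bigl(\sum_{k=1}^{n} t_k^{-d}\Bigr)
```

The base-level term captures the frequency and recency of use described above; the summation term captures the chunk's relation to the current context.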
The basic operation of ACT–R is as follows. The state of the system is repre-
sented in a set of buffers. The IF sides of all productions are matched against the
contents of those buffers. If multiple productions match, a procedure called con-
flict resolution is used to determine which production is allowed to fire, or apply its
THEN actions. This generally changes the state of at least one buffer, and then, this
cycle is repeated every 50 msec of simulated time. In addition, a buffer can change
without a production explicitly changing it. For example, there is a buffer that rep-
resents the visual object currently in the focus of visual attention. If that object
changes or disappears, this buffer will change as a result. That is, the various per-
ceptual and motor processes (and declarative memory as well) act in parallel with
each other and with the central cognitive production cycle. These processes are
modeled at varying levels of fidelity. For example, ACT–R does not contain any
advanced machine vision component that would allow it to recognize objects from
analog light input. Rather, ACT–R needs to be given an explicit description of the
object to which it is attending.
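To make the control flow just described concrete, the following is a minimal, illustrative sketch in Python. ACT–R itself is implemented in Lisp, and the class and method names below are our own assumptions rather than ACT–R's API; the sketch shows only the match–resolve–fire cycle and the asynchronous buffer updates described above.

```python
import random

CYCLE_TIME = 0.050  # seconds of simulated time per production cycle


def run_cycle(buffers, productions, clock):
    """One ACT-R-style recognize-act cycle (illustrative only).

    buffers     -- dict mapping buffer name -> current chunk/state
    productions -- objects assumed to offer .matches(buffers), .utility(),
                   and .fire(buffers)
    clock       -- current simulated time in seconds
    """
    # 1. Match: find all productions whose IF sides match the buffer contents.
    candidates = [p for p in productions if p.matches(buffers)]

    # 2. Conflict resolution: select one production to fire. ACT-R uses noisy
    #    utilities; here a small Gaussian perturbation stands in for that noise.
    if candidates:
        chosen = max(candidates,
                     key=lambda p: p.utility() + random.gauss(0.0, 0.1))
        # 3. Act: apply the THEN side, typically changing at least one buffer.
        chosen.fire(buffers)

    # Perceptual and motor modules run in parallel and may also change buffers
    # here, e.g., when the attended visual object changes or disappears.
    return clock + CYCLE_TIME
```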
ACT–R was originally designed to model the results of cognitive psychology
laboratory experiments and is often considered a bottom-up or first-principles ap-
proach to the problem of modeling human cognition and performance. Whether
ACT–R scales up to more complex domains is an empirical question, but so far, it
has done well in dynamic domains such as driving (Salvucci, 2001), and we be-
lieve it is now mature enough to be tested in aviation.
Model Scope
One of the first decisions that had to be made concerned the scope of the model. In one
sense, there are clearly multiple humans in the taxi loop, even in the somewhat
simplified NASA simulation. These include the captain, who is actually head up,
looking out the window, and controlling the aircraft, and the FO, who looks pri-
marily head down and assists both the captain and the ground-based controller.
To limit the scope of the project, we chose to model only the captain in ACT–R
and treated both the ground controller and the FO as items in the environment.
We thought this decision was a good balance between tractability and relevance
because the captain made the final decisions and also controlled the aircraft.
A second, important aspect of scoping model coverage was to select the psy-
chological activities on which we would focus our efforts. Our research team was
one of many teams also creating cognitive models of the same T–NASA2 data
(e.g., see Deutsch & Pew, 2002; Gore & Corker, 2002; Lebiere et al., 2002;
McCarley, Wickens, Goh, & Horrey, 2002). In this light, we weighed the strengths and weaknesses of our ACT–R approach against the alternative approaches taken by other research teams, with the goal of providing a unique contribution to
the overall research effort. For example, we ruled out focusing on multitasking, as
ACT–R is less mature in this area than some other models, and we ruled out focus-
ing on SA issues (losing track of one’s location on the airport surface), as our
model was less mature in this area than some other models. All things considered,
including our own previous experience in human performance modeling (e.g.,
Kirlik, 1998; Kirlik, Miller, & Jagacinski, 1993), we decided to focus on the inter-
active, dynamic decision-making aspects of the task in its closed-loop context. As
a result, we focused on those contributions to error that may result from the inter-
action of the structure of a task environment and the need to make often-rapid deci-
sions on the basis of imperfect information, resulting from a decay of clearance
information from memory, low visibility, and sluggish aircraft dynamics. Our fo-
cus on decision making, which assumed pilots had accurate knowledge of their
current location, was complemented by the focus of another modeling team on SA
errors associated with losing track of one’s location on the airport surface
(McCarley et al., 2002).
Model Environment
Thus, we created an ACT–R model of one human pilot, but this pilot model still
had to be situated in an accurate environment. In this research, three external en-
tities were modeled to describe the environment: the simulated aircraft con-
trolled by the pilot model, the simulated visual information available to the pilot
model, and the simulated runway and taxiway environment through which the
simulated aircraft traveled. Each of these three environmental entities was
computationally modeled and integrated with the cognitive components of the
pilot model to create an overall representation of the interactive human–air-
craft–environment system.
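One way to picture this integration is sketched below in Python. The class names (`pilot_model`, `aircraft`, `visual_scene`, `airport`) are hypothetical and the sketch illustrates only the closed-loop structure of the pilot–aircraft–environment system described above, not our actual implementation.

```python
def simulate_trial(pilot_model, aircraft, visual_scene, airport, dt=0.05):
    """Closed-loop pilot-aircraft-environment simulation (illustrative sketch).

    pilot_model  -- cognitive model; perceives the scene and issues actions
    aircraft     -- simplified vehicle dynamics (position, heading, speed)
    visual_scene -- determines which surface objects are currently visible
    airport      -- taxiway/runway geometry of the ORD surface
    """
    t = 0.0
    while not airport.at_gate(aircraft.position):
        # Environment side: what can the pilot see from the current position?
        visible = visual_scene.visible_objects(aircraft.position,
                                               aircraft.heading)

        # Cognitive side: one update of the pilot model, which may shift
        # attention, retrieve clearance chunks, choose a strategy, and act.
        controls = pilot_model.step(visible, dt)

        # Vehicle side: advance the simplified aircraft dynamics.
        aircraft.update(controls, dt)
        t += dt
    return t  # total simulated taxi time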
Code for the vehicle dynamics that was used to drive the actual NASA flight
simulator in which behavioral data was collected was unfortunately unavailable.
We, therefore, had to create a simplified vehicle model with which the pilot model
could interact. Even so, to get a handle on timing issues, we still required a reasonable approximation to the size, mass, and dynamics of the actual aircraft used in the experiments. Although we were not
interested in control issues per se, the dynamics of the aircraft played an important
role in the determination of decision-time horizons, a key factor in the cognitive model's decision making. The visual scene model represented the objects (e.g., runways, taxiways, signage) present on the ORD surface, or at least those objects presented
to flight crews during NASA experimentation. Distant objects became visible to
the pilot model at similar distances to which these same objects became visible to
human pilots in T–NASA2 experimentation.
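As an illustration of why the vehicle dynamics matter for decision-time horizons, the time remaining before a turn must be committed can be approximated from taxi speed and distance to the intersection. The formula and numbers below are our own illustrative assumptions, not values taken from the model.

```python
def decision_time_available(distance_to_intersection_m, ground_speed_mps,
                            commit_distance_m=30.0):
    """Seconds left before the crew must commit to a turn (illustrative).

    commit_distance_m -- assumed distance before the intersection at which a
                         large, sluggish aircraft must begin the maneuver.
    """
    usable_distance = max(distance_to_intersection_m - commit_distance_m, 0.0)
    return usable_distance / max(ground_speed_mps, 0.1)


# Example: an intersection 200 m ahead at an 8 m/s taxi speed leaves roughly
# (200 - 30) / 8, or about 21 s, in which to decide.
```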
Obviously, the environment and its dynamic properties are critically important
in understanding pilot performance in this domain, but they do not, of course,
completely determine pilot behavior; thus, the use of a knowledge-based perfor-
mance model such as ACT–R is necessary. As mentioned earlier, the ACT–R
model must be supplied with the knowledge of how to do this task. This part of
the model-building process is often referred to as knowledge engineering be-
cause the demands of gathering and structuring the knowledge necessary to per-
form the tasks in such domains are significant. We focused our efforts on the
identification of procedures and problem-solving strategies used by pilots in this
domain as well as the cost–benefit structure of those procedures and strategies.
Two features of the turn decision process stood out. First, pilots generally consider no intersection other than the immediately pending one. Second, the decision process tends to be hierar-
chical: Pilots first decide if the next intersection requires a turn and, if it does, de-
cide which turn to make. We found that in the error corpus available to us, errors in
the first decision (whether to turn or not) were rare (which was also consistent with
reports from our subject matter experts [SMEs]), and so we concentrated our efforts on understanding how pilots
made the second decision.
The first issue to be addressed was what kinds of knowledge and strategies pilots actually bring to bear under the conditions experienced in the NASA study. Largely through interviews with SMEs, we discovered a
number of key strategies employed by pilots and also discovered that some of these
strategies would not have been available to our model. Many of these strategies in-
volved open communications between ground-based controllers and other aircraft.
For example, if Qantas Flight 1132 has just been given a clearance that overlaps with
the clearance given to United Flight 302, one viable strategy for the United pilot is to
simply follow the Qantas aircraft for the overlapping portion of the clearance.
Similarly, pilots can use dissimilar clearances to rule out certain decision alter-
natives. For example, when faced with an intersection that forces the pilot to
choose between taxiways A10 and D, if the pilot has just heard another flight given a clearance that involves A10, then D is the more likely choice because the ground
controller is unlikely to assign two aircraft to be on the same taxiway approached
from different directions. The extent to which these strategies were available to the pilots in the T–NASA2 study is unclear because details of what clearances
were given to the (simulated) other aircraft and when such clearances were given
were not available to us. Thus, we had no choice but to exclude these strategies
from the model.
At the end of both our task analyses and SME interviews, we had identified five
primary decision strategies available for making turn decisions:

1. Recall the cleared route from memory and turn accordingly.
2. Turn toward the company gates.
3. Turn in the direction that minimizes the larger of the X or Y distances between the current location and the gates.
4. Derive the correct turn from knowledge of the airport layout (map knowledge).
5. Guess randomly.

We included random guessing (Strategy 5) as a possible heuristic in the model for two reasons: (a) It may be the only strategy available given the decision time available in some cases, and (b) it provides insights into chance performance levels.
The next modeling issue to be dealt with was how to choose between strategies
when faced with a time-constrained decision horizon.
This type of meta-decision is well modeled by the conflict-resolution mecha-
nism ACT–R uses to arbitrate between multiple productions matching the current
situation. The accuracy of Strategies 1 (recall the clearance) and 4 (derive from
map knowledge) is primarily a function of the accuracy of the primitive cognitive
operations required of these tasks, moderated by factors such as ACT–R’s memory
decay and constrained working memory. However, the accuracy of Strategies 2, 3,
and 5 is less cognitively constrained and instead is critically dependent on the ge-
ometry of actual clearances and taxiways. As such, we used an SME as a partici-
pant in the study to provide data for an analysis of the heuristic decision Strategies
2 and 3 (the accuracy of Strategy 5, random guessing, was determined by the taxi-
way geometry itself).
For this study, Jeppesen charts for all major U.S. airports were made available to
the SME, a working B–767 pilot for a major U.S. carrier. He was asked to select
charts for those airports for which he had significant experience of typical taxi
routes, and he was asked to draw, with a highlighter on the charts themselves, the
likely or expected actual taxi routes at each airport from touchdown to the gate area
of his company. We would have perhaps never thought of performing this study had
the ACT–R model not required us to provide it with high-level (i.e., airport-neutral)
strategies pilots might use in deciding what turns to make during taxi operations
along with their associated costs (times required) and benefits (accuracy).
We estimated the accuracy of heuristic Strategies 2 (turn toward the company gates) and 3 (turn in the direction that minimizes the larger of the X or Y distances between the current location and
the gates) by comparing the predictions these heuristics would make with the data
provided by the SME for the nine airports studied. We recognize that these accu-
racy estimates may be specific to the (major) carrier for whom the SME flew be-
cause the gates for other carriers may be located in areas at these nine airports
such that their pilots are provided more or less complex or geometrically intuitive
clearances than those providing the basis of our SME’s experience. However, we
do believe that this study resulted in enlightening results regarding the surprisingly
high level of accuracy of simple, fast, and frugal decision heuristics (Gigerenzer &
Goldstein, 1996) in this complex, operational environment.
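To make the two heuristics concrete, the sketch below gives one possible implementation in Python over a simplified planar coordinate representation of the airport surface. It is our own illustration; in the model itself these strategies were realized within ACT–R.

```python
import math


def toward_terminal(current_xy, gate_xy, option_headings):
    """Strategy 2 (illustrative): choose the turn option whose heading points
    most nearly toward the company gate area.

    option_headings -- dict mapping option label -> heading in radians
    """
    bearing = math.atan2(gate_xy[1] - current_xy[1], gate_xy[0] - current_xy[0])

    def angular_error(opt):
        diff = option_headings[opt] - bearing
        return abs(math.atan2(math.sin(diff), math.cos(diff)))  # wrap to [-pi, pi]

    return min(option_headings, key=angular_error)


def minimize_larger_xy(current_xy, gate_xy):
    """Strategy 3 (illustrative): turn so as to reduce whichever of the X or Y
    distances to the gate is currently the larger one."""
    dx = gate_xy[0] - current_xy[0]
    dy = gate_xy[1] - current_xy[1]
    if abs(dx) >= abs(dy):
        return "east" if dx > 0 else "west"
    return "north" if dy > 0 else "south"
```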
Figure 2 presents the results of an analysis of the effectiveness of these two heu-
ristic strategies. Note that the XY heuristic was quite good across the board, and the
even simpler toward-terminal heuristic was reasonably accurate at many major U.S.
airports. As such, we created the turn decision-making components of the pilot
model to make decisions according to the set of the five strategies described previ-
ously, including the two surprisingly frugal and robust toward-terminal and XY
heuristics portrayed in Figure 2. One can think of these five strategies as being hier-
archically organized in terms of their costs (time requirements) and benefits (accura-
cies). The decision components of the cognitive model worked by choosing the
strategy that achieved the highest accuracy given the decision time available.
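The cost–benefit selection just described can be summarized with a small sketch. The time costs and accuracies below are placeholders for illustration only; the model's actual costs followed from ACT–R's operation timing and its accuracies from the analyses described above.

```python
# Illustrative strategy ladder: (name, assumed time cost in s, assumed accuracy).
# These numbers are placeholders, NOT values from the model.
STRATEGIES = [
    ("recall clearance",          4.0, 0.95),
    ("derive from map knowledge", 6.0, 0.90),
    ("minimize larger XY dist",   1.5, 0.80),
    ("turn toward terminal",      1.0, 0.70),
    ("guess randomly",            0.5, 0.50),
]


def choose_strategy(time_available_s):
    """Pick the most accurate strategy whose time cost fits within the decision
    horizon; fall back to guessing if nothing else fits (illustrative)."""
    feasible = [s for s in STRATEGIES if s[1] <= time_available_s]
    if not feasible:
        return "guess randomly"
    return max(feasible, key=lambda s: s[2])[0]
```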
FIGURE 2 Accuracy of the toward-terminal and minimize the greater of the XY distance
heuristics.
EMPIRICAL ADEQUACY
How best to establish the empirical adequacy of a computational model such as ours is an issue of current interest (see, e.g., Leiden, Laughery, & Corker, 2001), and it is fair to say that there are no unanimously agreed-on criteria in this area. In the follow-
ing, we present two sources of empirical evidence in support of our dynamic, in-
tegrated, computational model of this pilot–aircraft–visual scene–taxiway sys-
tem. The first source of support is a global analysis of the frequency of taxi
navigation errors as a function of intersection type. The second is a more finely
grained analysis at an error-by-error level.
Nine different taxiway routes were used in the T–NASA2 baseline scenarios,
covering a total of 97 separate intersection crossings. Because each route was
run six times, a total of 582 intersection crossings occurred in the baseline trials.
As mentioned earlier, in only 12 instances were crews observed to make signifi-
cant departures from the cleared route, resulting in an error rate (per intersection,
rather than per trial) of approximately 2% (Goodman, 2001).
As Goodman (2001) reported, of the 582 intersections crossed, the clearance in-
dicated that crews should have proceeded in a direction toward the destination gate
in 534 cases (91.8%), whereas the clearance directed crews in directions away
from the gate in only 48 cases (8.2%). On examining this information with respect
to the predictions of both the toward-terminal and XY heuristics embodied in our
model, we discovered that at every one of the 97 intersection crossings in the
T–NASA2 scenarios at which the cleared route conflicted with both these two
heuristics, at least one taxi error was made. These accounted for 7 of the 12 taxi er-
rors observed.
In addition, and as discussed in the following section, 4 of the 12 taxi errors
were attributed not to decision making but rather to a loss of SA (i.e., losing track
of one’s position on the airport surface, see Goodman, 2001; Hooey & Foyle,
2001), a cognitive phenomenon beyond the scope of this modeling. Our modeling
approach assumed that loss of location knowledge (i.e., loss of SA) was not the primary factor contributing to taxi error; rather, the primary factor was time-stressed decision making combined with what might be called counterintuitive intersection and clearance pairs, that
is, those at which both the toward-terminal and XY heuristics failed due to either
atypical geometry or clearances.
The Goodman (2001) report provided a detailed analysis of each of the 12 taxi
errors observed in the baseline conditions of T–NASA2 experimentation. In the
following, we briefly consider each error in turn. When we use the term classifi-
cation, we refer to the terms adopted by Hooey and Foyle (2001) and have
bolded errors we believe to provide evidence for our model, especially for the
fast and frugal decision heuristics used to make decisions under time stress.
Italics are used to indicate errors due to loss of SA, as such are beyond the pur-
view of our research and thus provide neither support for nor evidence against our
model, given our initial modeling focus. In the following, all quotations are from
Goodman:
Error 1: This error was classified as a decision (as opposed to planning or exe-
cution) error, and it confirmed our modeling as the crew turned toward the gate
when the clearance indicated a turn away from the gate.
Error 2: This error was also classified as a decision error associated with “lack
of awareness of airport layout and concourse location” (p. 5). We thus considered
this error due to a loss of SA.
Error 3: This error was classified as a planning error, in which the “crew ver-
balized that Tango didn’t seem to make sense because it was a turn away from the
concourse” (p. 7). They thus turned in a direction toward the destination gate.
Error 4: This error was classified as an execution error due to perceptual con-
fusion over center lines; the crew, nonetheless, prematurely turned in the direction
of the concourse.
Error 5: This error was classified as an execution error, as vocalizations indi-
cated the crew was aware of the proper clearance. However, they made a prema-
ture turn toward the gate.
Error 6: This error was classified as an execution error, as the captain stated
that the lines were confusing but made a premature turn into the ramp area near the
destination gate.
Error 7: This error was classified as a planning error, as the FO verbally omit-
ted an intermediate turn in the clearance to Foxtrot. However, “the turn to Foxtrot
would have led pilots away from concourse—Instead, FO suggested turning to-
ward concourse on Alpha” (p. 15).
Error 8: This error was classified as a decision error, as the crew immediately
made a turn toward the gate after exiting the runway, whereas the clearance pre-
scribed a turn away from gate.
Error 9: This error was classified as an execution error, as the FO voiced con-
fusion over center lines. Crews made a (one-gate) premature turn into the con-
course area, whereas the clearance indicated they should proceed ahead further
prior to turning into the concourse.
Errors 10, 11, and 12: Each of these errors was classified as due to a loss of
SA, due to the FO being “head down with Jepp chart, [and] didn’t realize where
they were on the airport surface” (p. 21; Error 10), the crew’s “demonstrated lack of
awareness of airport layout” (p. 23; Error 11), and “FO lacked awareness of their lo-
cation on the airport surface” (p. 25; Error 12).
Although several of these errors were not, strictly speaking, classified as de-
cision errors, we think it is revealing to note that the bulk of the errors classified
as planning and execution errors were consistent with the same decision-making
heuristics.
Summary
CONCLUSIONS
Implications
Obviously, the model presented here does not generalize directly to operational
taxiing situations due to practical limitations in both the original study and the
modeling effort itself. However, we believe that the ultimate lessons learned
from this effort are relevant. This includes the general lesson that the details and
dynamics of both the human cognitive system and the structure of the environ-
ment in which that system operates must be considered jointly, not in isolation
from one another. More directly in the taxiing domain, this research suggests
that taxi routes that are inconsistent with the heuristics available to time-pressured flight crews are likely to be error prone and will continue to be
so until a system that makes the correct route computable with greater speed and
accuracy than those heuristics is made available to flight crews.
ACKNOWLEDGMENTS
REFERENCES
Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Quin, Y. (2004). An integrated
theory of the mind. Psychological Review, 111, 1036–1060.
Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Lawrence
Erlbaum Associates, Inc.
Bisantz, A. M., & Pritchett, A. R. (2003). Measuring judgment in complex, dynamic environments: A
lens model analysis of collision detection behavior. Human Factors, 45, 266–280.
Cassell, R., Smith, A., & Hicok, D. (1999). Development of airport surface required navigation perfor-
mance (RNP) (Report No. NASA/CR–1999–209109). Moffett Field, CA: NASA Ames Research
Center.
Cheng, V. H. L., Sharma, V., & Foyle, D. C. (2001). Study of aircraft taxi performance for enhancing
airport surface traffic control. IEEE Transactions on Intelligent Transportation Systems, 2(2),
39–54.
Connolly, T. (1999). Action as a fast and frugal heuristic. Minds and Machines, 9, 479–496.
Degani, A., Shafto, M., & Kirlik, A. (1999). Modes in human–machine systems: Review, classification,
and application. The International Journal of Aviation Psychology, 9, 125–138.
Deutsch, S., & Pew, R. (2002). Modeling human error in a real-world teamwork environment. In W. D.
Gray & C. D. Schunn (Eds.), Proceedings of the 24th annual meeting of the Cognitive Science Soci-
ety (pp. 274–279). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Endsley, M. R., & Smolensky, M. W. (1998). Situation awareness in air traffic control: The big picture.
In M. W. Smolensky & E. S. Stein (Eds.), Human factors in air traffic control (pp. 115–154). San
Diego, CA: Academic.
Foushee, H. C., & Helmreich, R. L. (1988). Group interaction and flight crew performance. In E. L.
Wiener & D. C. Nagel (Eds.), Human factors in aviation (pp. 189–227). San Diego, CA: Academic.
Foyle, D. C., Andre, A. D., McCann, R. S., Wenzel, E., Begault, D., & Battiste, V. (1996). Taxiway navi-
gation and situation awareness (T–NASA) system: Problem, design, philosophy, and description of
an integrated display suite for low visibility airport surface operations. SAE Transactions: Journal
of Aerospace, 105, 1511–1418.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded ratio-
nality. Psychological Review, 103, 650–669.
Goodman, A. (2001). Enhanced descriptions of off-route navigation errors in T–NASA2. Moffett Field,
CA: National Aeronautics and Space Administration Ames Research Center.
Gore, B., & Corker, K. M. (2002). Increasing aviation safety using human performance modeling tools:
An air man–machine design and analysis system application. In M. J. Chinni (Ed.), 2002 military,
government and aerospace simulation (Vol. 34, No. 3, pp. 183–188). San Diego, CA: Society for
Modeling and Simulation International.
Hollnagel, E. (1998). Cognitive reliability and error analysis method. Oxford, England: Elsevier.
Hollnagel, E. (2000). Looking for errors of omission and commission or The hunting of the snark revis-
ited. Reliability Engineering and System Safety, 68, 135–145.
Hooey, B. L., & Foyle, D. C. (2001). A post-hoc analysis of navigation errors during surface operations:
Identification of contributing factors and mitigating strategies. In Proceedings of the 11th Sympo-
sium on Aviation Psychology. Columbus: Ohio State University.
Hooey, B. L., Foyle, D. C., & Andre, A. D. (2000). Integration of cockpit displays for surface operations:
The final stage of a human-centered design approach. SAE Transactions: Journal of Aerospace,
109, 1053–1065.
Jones, D. (2000, February). Runway incursion prevention system (RIPS). Paper presented at the SVS
CONOPS Workshop, National Aeronautics and Space Administration Langley Research Center,
Virginia.
Karlin, S. (1983). 11th R. A. Fisher Memorial Lecture. Lecture presented to the Royal Society, London.
Kirlik, A. (1998). The ecological expert: Acting to create information to guide action. In Fourth Sympo-
sium on Human Interaction with Complex Systems. Piscataway, NJ: IEEE Computer Society Press.
Retrieved October 2001, from http://www.computer.org/proceedings/hics/8341/83410015abs.htm
Kirlik, A., Miller, R. A., & Jagacinski, R. J. (1993). Supervisory control in a dynamic uncertain environ-
ment: A process model of skilled human–environment interaction. IEEE Transactions on Systems,
Man, and Cybernetics, 23(4), 929–952.
Lebiere, C., Bielfeld, E., Archer, R., Archer, S., Allender, L., & Kelly, T. D. (2002). Imprint/ACT–R: In-
tegration of a task network modeling architecture with a cognitive architecture and its application to
human error modeling. In M. J. Chinni (Ed.), 2002 military, government and aerospace simulation
(Vol. 34, No. 3, pp. 13–19). San Diego, CA: Society for Modeling and Simulation International.
Leiden, K., Laughery, R., & Corker, K. (2001). Verification and validation of simulations. Retrieved
January 2002, from https://postdoc.arc.nasa.gov/postdoc/t/folder/main.ehtml?url_id=90738
McCarley, J. S., Wickens, C. D., Goh, J., & Horrey, W. J. (2002). A computational model of atten-
tion/situation awareness. In Proceedings of the 46th annual meeting of the Human Factors and Er-
gonomics Society (pp. 1669–1673). Santa Monica, CA: Human Factors and Ergonomics Society.
Olson, W. A., & Sarter, N. B. (2000). Automation management strategies: Pilot preferences and opera-
tional experiences. The International Journal of Aviation Psychology, 10, 327–341.
Payne, J. W., & Bettman, J. (2001). Preferential choice and adaptive strategy use. In G. Gigerenzer & R.
Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 123–146). Cambridge, MA: MIT
Press.
Rasmussen, J. (1980). What can be learned from human error reports? In K. D. Duncan, M. M.
Gruneberg, & D. Wallis (Eds.), Changes in working life (pp. 97–113). Chichester, England: Wiley.
Reason, J. (1990). Human error. Cambridge, England: Cambridge University Press.
Runway incursions. (2003). NTSB Reporter, 21(3), 4–5.
Salvucci, D. D. (2001). Predicting the effects of in-car interface use on driver performance: An inte-
grated model approach. International Journal of Human–Computer Studies, 55, 85–107.
Wiegmann, D. A., & Goh, J. (2001). Pilots’ decisions to continue visual flight rules (VFR) flight into ad-
verse weather: Effects of distance traveled and flight experience (Tech. Rep. No.
ARL-01–11/FAA-01–3). Savoy: University of Illinois, Aviation Research Laboratory.
Wiegmann, D. A., & Shappell, S. A. (1997). Human factors analysis of post-accident data: Applying
theoretical taxonomies of human error. The International Journal of Aviation Psychology, 7, 67–81.
Woods, D. D., Johannesen, L. J., Cook, R. I., & Sarter, N. B. (1994). Behind human error: Cognitive sys-
tems, computers and hindsight. Columbus, OH: CSERIAC.