Forecasting practice: a review of the empirical literature and an agenda for future research
Heidi Winklhofer, Adamantios Diamantopoulos, Stephen F. Witt*
European Business Management School, University of Wales Swansea, Swansea, SA2 8PP, UK
Abstract
An up-to-date overview of empirical studies on forecasting practice is presented. Surveys and case studies reporting on forecasting practice in industry are identified and their major methodological characteristics summarised. An overall framework for organisational forecasting practice is then developed, which is used to categorise and discuss the substantive findings of the various empirical studies. Finally, the framework is used to identify areas for future research based upon the gaps identified in the literature review.
Keywords: Forecasting practice; Empirical research; Framework
1. Introduction
Forecasting is essential for decision making, unless insurance or hedging is selected to deal with the future (Armstrong, 1988). The growing importance of the forecasting function within companies is reflected in an increased level of commitment in terms of money, hiring of operational researchers and statisticians, and purchasing computer software (Wheelwright and Clarke, 1976; Pan et al., 1977; Fildes and Hastings, 1994). Makridakis et al. (1983) note several factors which have caused the importance of forecasting within an organisation to increase in recent years:
* Corresponding author. Tel.: (01792) 295639; fax: (01792) 295626.
- the increasing complexity of organisations (e.g. number of submarkets served and products offered) and their environments (e.g. changes in technology and demand structures) has made it more difficult for decision makers to take into account all the factors relating to the future development of the organisation;
- organisations have moved towards more systematic decision making that involves explicit justifications for individual actions, and formalised forecasting is one way in which actions can be supported; and
- the further development of forecasting methods and their practical application has enabled not only forecasting experts but also managers (decision makers) to understand and use these techniques.

0169-2070/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved. SSDI 0169-2070(95)00647-8

With particular reference to the last point, it is evident that a knowledge of forecasting is only useful if applied to an organisation's decision making and planning processes; in this context, "[p]ractical applications may derive from theory, but they [the forecasting methods] require considerable modifications before they can be used. Strong bridges are required to connect theory and practice, and many problems must be solved before forecasting methods can be used efficiently and effectively in management situations" (Makridakis and Wheelwright, 1979, p. 3). While the importance of applying forecasting techniques in practice has long been recognised and researchers have been repeatedly urged to investigate such issues (e.g. Armstrong, 1988; DeRoeck, 1991; Mahmoud et al., 1992), there is little doubt that most empirical research on forecasting still deals with methodological issues (e.g. the development of more accurate forecasting methods). The fact that application issues are rather underexplored was most recently highlighted by Schultz (1992, p. 410) in an editorial in the International Journal of Forecasting: "It is virtually impossible to work with real organizations - or even simply read the business press - without realizing that the gap between the development of forecasting techniques and their application is huge". Indeed, Makridakis et al. (1983, p. 13) had predicted that the greatest gains in forecasting research during the 1980s would be in the areas of implementation and practice: "[w]hile there undoubtedly will be some improvements in available methodologies, it is management's knowledge and use of existing methods, in their specific organizational context, that hold the greatest promise".
Against this background, the aim of the present paper is to draw on empirical findings on forecasting practices in order to: provide an up-to-date overview of empirical studies on forecasting practices; develop a framework within which to organise the diverse findings of prior research; and identify areas for future study based upon the above framework. In the section that follows, empirical studies (surveys and case studies) reporting on forecasting practice in industry are identified and their
major methodological characteristics summarised. Next, Levenbach and Cleary's (1981, 1982, 1984) outline of the forecasting process is considered together with Armstrong et al.'s (1987) list of practical questions relating to forecast application, and these are used to generate an overall framework for organisational forecasting practice. This framework is then employed to categorise and discuss the substantive findings of the various empirical studies identified previously. Finally, a research agenda is drawn up based upon the gaps identified from the review of past studies undertaken in the previous step.
2. Empirical research on forecasting practices

Although no fewer than five literature reviews of forecasting practices have been published in the past 20 years (Turner, 1974; Rao and Cox, 1978; Makridakis et al., 1983; Wheelwright and Makridakis, 1985; Makridakis and Wheelwright, 1989), a number of reasons underlie the decision to undertake yet another review in this paper. First, the four most recent reviews cover only six empirical studies between them (namely those by the Conference Board, 1970; Dalrymple, 1975; Wheelwright and Clarke, 1976; Pan et al., 1977; Mentzer and Cox, 1984a; Dalrymple, 1987); however, many more investigations of forecasting practices have been conducted, both within the periods covered by these reviews and since the publication of the latest review by Makridakis and Wheelwright (1989). Moreover, in contrast to previous efforts, the present review also includes case studies; the latter not only provide detailed insights into the forecasting process within companies, but also address issues additional to those covered in survey-type investigations (e.g. applying subjective judgement). Secondly, the earliest review by Turner (1974) covers very old studies (Thompson, 1947; MacGowan, 1952; Strong, 1956; Sord and Welsch, 1958; British Institute of Management, 1964; Reichard, 1966; Jones and Morrell, 1966); thus, it is debatable whether their findings are still of relevance today.
Thirdly, none of the previous reviews was explicitly concerned with devising a research agenda for studying forecasting practices; in fact, only the Rao and Cox (1978) review provided any future research directions, the most important being "to learn more about the ways in which new techniques and new applications of sales forecasting are diffused among the firms in industry" (Rao and Cox, 1978, p. 84). A systematic literature search, based upon a combination of manual and computer-based scanning methods and covering the period 1970-1995, resulted in the identification of no fewer than 35 surveys and six case studies pertaining to forecasting practices (Table 1).¹ Of these studies, the majority (64%) were conducted in the USA, while 15% of the investigations focused solely on UK companies and 11% examined just Canadian firms. The remaining 10% of surveys either used cross-national samples (e.g. USA and Canada) or concentrated on other countries (e.g. Brazil, Australia). Almost half (49%) of all studies identified focused specifically on sales forecasting. While the majority of the latter examined sales forecasting practices in the light of specific variables such as time horizon or
¹ The literature search involved the consultation of the following bibliographies and databases, as well as a manual tracing of additional studies by inspecting the reference lists of the studies already obtained. Databases: ABI/INFORM, Jan. 1971-Feb. 1995; BPI, July 1982-March 1995; BIDS, Jan. 1988-Feb. 1995; LAMBDA, 1988 Vol. 1(1)-1991 Vol. 4(2) and 1993 Vol. 6(2). Bibliographies: Rao, V.R. and J.E. Cox, Jr., 1978, Sales forecasting methods: A survey of recent developments (Marketing Science Institute, Cambridge, Massachusetts). Fildes, R., D. Dews and S. Howell, 1981, A bibliography of business and economic forecasting (Gower Publishing, Farnborough). Fildes, R. and D. Dews, 1984,
forecasting level (e.g. Dalrymple, 1975, 1987; Mentzer and Cox, 1984a,b), some looked at the practice of sales forecasting in broad terms (e.g. Cerullo and Avila, 1975; Rothe, 1978). In terms of data collection methodologies, the most popular approach (used in 61% of the investigations) was the mail questionnaire, followed by personal interviews (15%) and in-depth investigations (10%); a few studies relied on a combination of methods (e.g. personal interviews and a mail questionnaire, or personal interviews coupled with telephone discussions). Most of the firms represented in the samples studied were manufacturers of industrial products, followed by consumer goods, services and utilities.³ The number of usable responses (i.e. sample sizes) and response rates obtained in the survey-type investigations were fairly mixed. The former ranged from ten responses (Lawrence, 1983) to 324 responses (Hanke, 1984), while the latter ranged from 10% (White, 1986) to 55% (Pan et al., 1977). Overall, inspection of Table 1 reveals a predominance of North American studies (they account for 76% of all investigations), a bias towards large firms and industrial goods sectors, and substantial variability both in sample sizes and response rates. Of particular concern is the fact that some empirical studies (e.g. Greenley, 1983; Wilson and Daubek, 1989) do not explicitly specify the kind of forecasting problem(s) under study (e.g. market potential assessment, price forecasting, competitive response forecasting). This inevitably raises questions as to the applicability of the reported results (and any accompanying recommendations) to specific forecasting situations.
A wide variety of issues relating to forecasting practice have been subject to investigation in the studies identified in the literature search.

³ The calculation of exact figures is not possible because of mixed industry samples or non-reporting in the surveys.

[Table 1. Methodological characteristics of the empirical studies on forecasting practice. The table itself could not be recovered from the source text.]

Given
the large number of studies involved (41 in all), any attempt to summarise their findings on an ad hoc basis is bound to be tedious, repetitive and not very illuminating. Recognising this, a conscious effort was made to develop a basic framework within which to place the various issues examined and, thus, provide some structure for the classification and subsequent discussion of individual findings. In constructing such a framework two sources proved particularly useful, namely Levenbach and Cleary's (1981, 1982, 1984) description of the forecasting process and Armstrong et al.'s (1987) list of considerations relating to forecasting for marketing decision making. While Levenbach and Cleary's (1981, 1982, 1984) detailed outline of the forecasting process is specific to statistical forecasting only (judgemental forecasting is not included), it nevertheless highlights the key phases of the process as reflected in design, specification and evaluation; each of these interrelated phases incorporates a variety of issues relating to forecasting inputs, forecasting methods and forecasting outputs respectively. Armstrong et al. (1987), on the other hand, present a list of factors affecting forecasting practice in an organisation, ranging from the purpose and time horizon of the forecast to the presentation of forecasts to decision makers; thus, the factors listed represent practical considerations associated with the forecasting process. Fig. 1 shows a framework for organisational forecasting practice developed by integrating the (largely complementary) perspectives of Levenbach and Cleary (1981, 1982, 1984) and Armstrong et al. (1987). The framework distinguishes between three different sets of issues, relating to design, selection/specification and evaluation. Design issues comprise the purpose and type of forecast required, the resources committed to forecasting, the characteristics of forecast preparers and users, and the data sources used.
Selection/specification issues are concerned with forecasting techniques and address questions of familiarity with, selection of, and usage of alternative forecasting methods. Finally, evaluation issues focus on the outcomes of forecasting activity as reflected in the presentation and review of forecasts, the evaluation of forecast performance and the forces adversely affecting forecast accuracy.

Fig. 1. Framework for organisational forecasting practice. [Design issues: purpose/use of forecast; forecast level; time horizon and frequency of forecast preparation; resources committed to forecasting; forecast preparers; forecast users; data sources. Selection/specification issues: familiarity with forecasting techniques; criteria for technique selection; usage of alternative forecasting methods. Evaluation issues: forecast presentation to management; forecast review and use of subjective judgement; standards for forecast evaluation; forecast performance; forecasting problems and forecast improvement.]

It should be noted that, as indicated by the two-way arrows in Fig. 1, the three sets of issues are interlinked in that each can have implications for the others; for example, the adoption of a particular forecasting technique (a specification issue) will have implications for forecast accuracy (an evaluation issue) which, in turn, may lead to adjustments in, say, the data inputs used to develop the forecast (a design issue). Fig. 1 provides a logical and orderly representation of organisational forecasting practice and establishes a clear overview of the latter; thus, it is a convenient 'navigation guide' for discussing the diverse findings of the empirical
studies summarised earlier in Table 1 and for presenting future research directions.
4. Design issues
in relation to other variables, such as the frequency of preparation (e.g. Dalrymple, 1987) and adoption of different techniques (Sparkes and McHugh, 1984); the relevant findings are discussed in Sections 4.3 and 5.3, respectively.
ing horizons. Cerullo and Avila (1975), White (1986) and Peterson (1990) also found that the majority of firms prepared sales forecasts on a yearly basis. Naylor (1981) reported that firms which used econometric models developed forecasts for up to 7.7 years ahead on average; some companies employed them to generate forecasts as far as 25 years ahead. Investigating the time horizon of a firm's forecasts in relation to company characteristics, Small (1980, p. 21) discovered that "factors such as the industry of a firm, its market orientation and the forecasting role in which a technique is used have a significant impact on the time horizon over which a technique is used to forecast sales". In this context, McHugh and Sparkes (1983) found that firms operating in highly competitive markets put more emphasis on short-term than on long-term forecasts, and that subsidiaries prepared forecasts more frequently than independent firms. McHugh and Sparkes (1983) also reported that the frequency of forecast preparation is dependent on the forecast application. Forecasts for cash flow, profit planning, levels of capital employed, market share, production planning and stock control achieved relatively high average levels of forecast frequency. In contrast, forecasts for investment appraisal, market size and research and development were not undertaken so often. Dalrymple (1987) also found that production forecasting was undertaken more often than sales forecasting. The findings on time horizon and forecast frequency confirm the conclusion of White (1986, p. 8), that "companies evolve the forecasting frequencies that best suit their type of product, market, and method of operation. There is no one 'best' frequency mix".
4.4. Resources committed to forecasting

4.4.1. Money and personnel
Cerullo and Avila's (1975) results indicated a weak commitment among Fortune 500 firms towards sales forecasting, as the forecasting function was generally not formally organised within these companies; two-thirds of respondents had no full-time employees working in sales forecasting, 24% had fewer than five employees and only 9% had more than five employees. In contrast, Wheelwright and Clarke (1976) (who surveyed large US firms with high involvement and concern for forecasting) observed a significant level of financial commitment (e.g. the resources typically committed by a company with annual sales up to $500 million comprised one or more specialist forecasting staff and a forecasting budget of between $10 000 and $50 000). Pan et al. (1977, p. 76) also concluded that "large industrial firms recognize the importance of sales forecasting and commit resources to these efforts on a planned, regular basis". Cerullo and Avila (1975), Rothe (1978) and Dalrymple (1987) collected information on whether firms kept records on forecasting expenditures, and all three studies found that the majority of firms did not. Dalrymple's (1987, p. 389) interpretation was that "business firms apparently treat forecasting as a free good and that forecasting managers should buy more computers and hire more people. Another possibility is that forecasting is treated so casually that it is not even given a formal budget. Obviously, neither condition is particularly desirable".

4.4.2. Computers
Some 20 years ago, the majority (86%) of UK companies which utilised econometric forecasting models already worked with computers (Simister and Turner, 1973). However, McHugh and Sparkes (1983) found that, despite the fact that more than half of their UK respondents possessed computers and a large majority (86%) were aware of existing forecasting packages/programs, only 12% actually used computers for formal forecasting. Reasons given for the low incidence of usage were high costs, the fact that the programs seemed to be inappropriate for the company's needs, and inadequate computer capacity. This is in contrast to findings by Hurwood et al. (1978), who concluded that cost was becoming less of a deterrent to computer-based forecasting because refinements in programs and methodology had made computer-based forecasting easier to use, more flexible and faster. A similar investigation was undertaken in the United States in 1975 and
again in 1983 by Dalrymple (1975, 1987). During the intervening period, the availability of computers and software increased dramatically, with computer usage ("always" and "frequently") climbing from 44% in 1975 to 64% in 1983 (Dalrymple, 1987); the studies by Mentzer and Cox (1984a) and Cerullo and Avila (1975) also showed a trend towards increased computer usage. Dalrymple (1987) found that internally developed software was used by 71% of respondents (the corresponding 1975 figure was 55%), whereas 21% (compared with 10% in 1975) of the respondent companies purchased software packages. However, a survey published in the same year by Davidson (1987) showed a rather different figure, with only 15% of programs having been developed by in-house programming staff (although over 90% of the firms surveyed regularly employed software in forecasting). Fildes and Hastings (1994, p. 10) found that, from 1987 onwards, "computer usage was routine and well-supported and did not present a barrier to information flow" in their case-study company. Finally, Lawrence (1983) investigated reasons for abandoning computer-based forecasting systems and identified three main causes: first, the problem of manual review was neglected; second, the system was designed more to supplant than to support the user; and, third, manually massaging the database in order to remove extraordinary events (e.g. large customer returns, strikes, significant promotional activity) required considerable effort by the user.
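Lawrence's third cause of abandonment can be illustrated in miniature. The sketch below is purely illustrative (the data, function names and adjustment rule are our own assumptions, not drawn from Lawrence (1983) or any surveyed firm): a period distorted by a one-off event is replaced before a simple exponential smoothing forecast is computed, the kind of 'manual massaging' of the database that respondents found burdensome.

```python
# Purely illustrative sketch: data, names and the adjustment rule are assumed.

def adjust_history(sales, event_periods):
    """Replace observations distorted by one-off events (large customer
    returns, strikes, heavy promotions) with the mean of their neighbours."""
    adjusted = list(sales)
    for t in event_periods:
        left = adjusted[t - 1] if t > 0 else adjusted[t + 1]
        right = adjusted[t + 1] if t < len(adjusted) - 1 else adjusted[t - 1]
        adjusted[t] = (left + right) / 2
    return adjusted

def ses_forecast(sales, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecast."""
    level = sales[0]
    for y in sales[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

raw = [100, 104, 103, 310, 106, 108]    # period 3 inflated by a promotion
clean = adjust_history(raw, event_periods=[3])
print(round(ses_forecast(raw), 1))      # badly distorted by the one-off event
print(round(ses_forecast(clean), 1))    # close to the underlying sales level
```

Even in this toy form, the unadjusted forecast is pulled far above the series' typical level by a single extraordinary observation, which suggests why users felt obliged to keep intervening by hand, and why that effort contributed to systems being abandoned.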
4.5. Forecast preparers

4.5.1. Responsibility for forecast preparation
Drury (1990) reported that 14% of his sample had not defined responsibility for forecasting at all, while in 52% of cases forecasting was delegated to the controllership (or Vice President Finance) function; only one in five companies
had their forecasts prepared by separate forecasting/planning staff. The existence of dedicated forecasting/planning staff is more common among larger organisations (Simister and Turner, 1973; Wheelwright and Clarke, 1976), while the popularity of the finance function being responsible for forecast preparation probably reflects "the necessity of linking forecasts with plans and especially budgets" (Drury, 1990, p. 326). In Reilly's (1981) study, the finance group's access to sales history records, related advertising and marketing expenditure data, and other quantitative historical data made it an ideal place for setting up the forecasting function; moreover, finance personnel were deemed to be more familiar with quantitative techniques and with management information systems. On a different issue, West (1994) reported that the most popular way of organising the forecasting process was a modified bottom-up approach whereby subunits initially establish the forecast and top management adjusts it to conform with overall goals. Peterson (1993), on the other hand, observed that among smaller retailers a top-down approach was the most popular, whereas for larger firms a bottom-up approach was preferred; this suggests that firm size may affect the organisation of forecasting within the company. White (1986, p. 12, emphasis in the original) concluded that "[t]here seems to be a growing trend in all companies to get more participation in the forecasting process. By doing this, they not only get input from those who can make good contributions to the forecast, but also assure greater acceptance of the forecast and commitment to the plans based upon it" (see also McHugh and Sparkes, 1983). 
Kahn and Mentzer (1994), who focused on team-based forecasting, found that almost half of the firms questioned used such an approach; in these firms there was either a team responsible for forecast preparation, or if each department separately developed its own forecasts, the final forecast was decided collectively by a team. Group forecasting was particularly emphasised for company and industry forecasts, which firms "typically perceive as the more critical forecasts" (Sanders
and Manrodt, 1994, p. 95). The team-based approach has been found to be most popular among larger firms, where a combination of executives was typically responsible for forecast preparation (White, 1986); in smaller firms the responsibility for forecast preparation lay mostly with the chairman or president (White, 1986; Peterson, 1993). West (1994) carried out a detailed survey of the participants involved in different phases of the forecasting process: input, draft, inspection and approval. Marketing/sales personnel were most strongly involved in data input and drafting the initial forecasts, while top management acted more as 'approvers' of forecasts; the roles of the finance and production departments were mainly to inspect the forecasts. Other surveys, focusing on preparation only, also emphasised the dominance of marketing/sales personnel as forecast preparers (Cerullo and Avila, 1975; Wotruba and Thurlow, 1976; Rothe, 1978; Davidson, 1987). Additional parties involved were production and finance (Davidson, 1987; Peterson, 1990, 1993; Walker and McClelland, 1991). Comparing consumer and industrial goods companies, Peterson (1990) reported that the latter displayed a weaker orientation to expert opinion forecasting by marketing personnel than did consumer firms. In contrast, in econometric forecasting, Naylor (1981) found that typical preparers were personnel from corporate economics and planning, with only 11.8% of marketing personnel developing such forecasts.
for forecast preparers in companies, with only half of Mentzer and Cox's (1984a) sample having received formal training (see also Cerullo and Avila, 1975). On the other hand, Davidson (1987, p. 19) reported that his sample of forecasters regarded college courses in "quantitative methods, computer literacy, production/management, statistics, forecasting and market research" as "most important". Surveys that focused on forecasting courses offered at universities (Hanke, 1984, 1989; Kress, 1988; Hanke and Weigand, 1994) agreed that business schools emphasise different techniques (more quantitative than qualitative) than those commonly used in the business world. Furthermore, training in data collection, monitoring and evaluation of forecasts seemed to be rather neglected and a cause for concern given that "the intended audience of most forecasting courses is future managers/decision makers and not forecasters" (Kress, 1988, p. 28), and managers/decision makers are likely to spend a considerable time on such activities (Fildes and Hastings, 1994). Makridakis et al. (1983, pp. 805-806) stated that lack of formal training is often overrated, in that "the emphasis should not be on increasing the forecaster's knowledge of sophisticated methods, since doing so does not necessarily lead to improved performance. Perhaps the training should consider such issues as how to select a time horizon, how to choose the length of a time period, how judgment can be incorporated into a quantitative forecast, how large changes in the environment can be monitored, and the level of aggregation to be forecast". Forecasting managers in service firms appear, overall, to have a lower education level than their counterparts in manufacturing companies (Sanders, 1992), while forecast preparers with graduate level education are likely to employ more sophisticated techniques than their colleagues without graduate education (Cerullo and Avila, 1975). 
Despite the high level of education found among forecasters in manufacturing firms (Sanders, 1992) and the proposed positive correlation between education level and use of more sophisticated forecasting techniques (Cerullo and Avila, 1975), forecast preparers in Sparkes and
McHugh's (1984) sample of manufacturing firms claimed to have the highest level of working knowledge in subjective techniques such as executive assessment and surveys; one out of two respondents declared only an "awareness", but no working knowledge, of exponential smoothing, regression and correlation. The authors concluded that "the perceived 'complexity' of the technique [has] a direct influence on the state of awareness and ultimate working knowledge of formal techniques" (Sparkes and McHugh, 1984, p. 38).
4.6. Forecast users
In contrast to the large amount of empirical research on forecast preparers, relatively little is known about forecast users. However, findings on forecast purpose/use (see Section 4.1 earlier) provide at least some insight, since they highlight the functional areas (e.g. production and marketing) in which the forecasts are applied. It should also be borne in mind that forecast preparers may themselves be the principal forecast users as shown, for example, in the case study by Fildes and Hastings (1994). Peterson (1993) found that top management, marketing, finance and accounting executives were the major users of forecasts, while five key user groups were identified in the study by Rothe (1978): production planning and operations management, sales and marketing management, finance and accounting, top corporate management, and personnel. Wheelwright and Clarke (1976) looked in more depth at the forecast user-preparer relationship. They observed a lack of communication between users and preparers of forecasts, and a lack of skills required for effective forecasting, especially on the part of the users. Furthermore, a disparity in user-preparer perceptions of the company's forecasting status and needs was apparent.
4.7. Data sources
A number of studies have concentrated on the external and internal information sources utilised
for forecast preparation. Wotruba and Thurlow (1976, p. 11) found strong reliance on the sales force as a source of forecasting input and concluded that: "[c]ompanies' outside sales force is potentially one of its best sources of market and sales forecasting information, especially under unique economic conditions which make historical data unreliable". In a study of information sources for sales forecast preparation utilised by subsidiaries of large multinationals (Hulbert et al., 1980), the sales force, historical data, the marketing research department and the management of other subsidiaries were used most often; trade sources and commercial suppliers were less important, as was information supplied by the parent office. Similar results were obtained by McHugh and Sparkes (1983), with only half of their sampled subsidiaries receiving data input such as broad economic guidelines and identification of external influences from the parent company. With regard to information needed for predictions of exogenous economic variables for econometric models, official government or treasury forecasts and forecasts from the National Institute of Economic and Social Research (UK) were the most common sources used in Simister and Turner's (1973) sample. In Naylor's (1981) study, 90% of respondents included national macroeconomic variables in econometric models, and the same percentage subscribed to econometric service bureaus in order to receive macroeconomic data and forecasts. Cerullo and Avila (1975) mentioned that two-thirds of the organisations using causal forecasting models incorporated outside information into their forecasting models. Less attention was paid to external information by the respondents to Rothe's (1978) survey, with only 55% of firms (mainly larger ones) using macroeconomic data as input variables; furthermore, this type of information was mostly integrated in a subjective manner: only 10% of firms incorporated it into a quantitative system.
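The distinction drawn above between subjective and quantitative integration of external information can be made concrete with a minimal causal model. The following sketch is hypothetical throughout (the figures and variable names are invented, and a real econometric model would use many regressors and diagnostic checking): sales are regressed on an external indicator, and a published forecast of that indicator then drives the sales forecast quantitatively rather than judgementally.

```python
# Hypothetical illustration: all figures and names below are invented.

def fit_simple_regression(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    b = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
         / sum((xi - x_mean) ** 2 for xi in x))
    return y_mean - b * x_mean, b

gdp_growth = [1.0, 2.0, 3.0, 2.5]     # external macroeconomic indicator (invented)
sales = [110.0, 120.0, 130.0, 125.0]  # firm's own sales history (invented)
a, b = fit_simple_regression(gdp_growth, sales)

# A published forecast of the indicator (e.g. bought from a service bureau)
# then feeds the sales forecast through the fitted relationship.
print(round(a + b * 2.8, 1))
```

Subjective integration, by contrast, would amount to a manager eyeballing the published indicator forecast and nudging the sales figure accordingly, which is what the majority of Rothe's (1978) respondents appear to have done.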
Fildes and Hastings (1994), who compared the actual with the desired performance of certain information sources in their case study, found that the respondents were highly dissatisfied. In
particular, they complained about the lack of sound market research data, which they regarded as vital for improving forecast validity.
and Box-Jenkins) than their counterparts in service industries (Sanders, 1992).
Fildes and Hastings, 1994). Lastly, with regard to econometric modelling, Naylor (1981) reported that major obstacles were management bias, lack of understanding, insufficient time and inadequate expertise (see also Simister and Turner, 1973).
ferred method was the naive approach. Quantitative methods, such as regression/econometric models (Mentzer and Cox, 1984a), leading indicators (Mahmoud et al., 1988) and trend curve analysis (Sparkes and McHugh, 1984) were the most often used techniques for medium-term forecasts. These studies also reported an increase in the use of subjective techniques such as surveys (Sparkes and McHugh, 1984), jury of executive opinion (Mentzer and Cox, 1984a) and sales force composite (Mahmoud et al., 1988). The results of Dalrymple (1987), Sanders (1992) and Sanders and Manrodt (1994) also indicate that companies mainly employ subjective techniques for medium-term forecasts. Long-term forecasting appears to be dominated by jury of executive opinion (Mentzer and Cox, 1984a; Sparkes and McHugh, 1984; Sanders, 1992; Sanders and Manrodt, 1994) and regression/econometric models (Mentzer and Cox, 1984a; Dalrymple, 1987; Mahmoud et al., 1988; Sanders, 1992; Sanders and Manrodt, 1994). Overall, subjective techniques tend to be very popular across all time horizons, while some objective techniques gain in popularity when moving from a short-term to a medium-term time horizon but then lose in popularity when moving from a medium-term to a long-term time horizon. Sparkes and McHugh (1984) found that, for market size forecasts, the survey technique was the most popular, while executives' assessment was used most for market share, production and financial forecasts. The results of Dalrymple (1987), Peterson (1989) and Herbig et al. (1994) also indicate that the type of product sold (consumer vs industrial product) helped to explain differences in method usage. Industrial product companies employed the sales force composite method more and regarded exponential smoothing as more important than did consumer goods firms. Industrial firms were also found to rely significantly more on expert judgement than consumer firms in a study conducted by Peterson (1990).
However, in comparing manufacturing and service firms, Sanders and Manrodt (1994) found that the two groups did not differ greatly in the use of techniques. Dalrymple (1987), Peterson (1993) and Sanders and Manrodt (1994) also
observed differences between the adoption of forecasting methods by small vs large companies (see also White, 1986); small firms relied more on subjective and extrapolation methods, while large companies used sophisticated quantitative techniques more often. American companies employed, on average, 2.7 forecasting methods (Dalrymple, 1987), whereas UK companies used 3.3 (McHugh and Sparkes, 1983) and Canadian companies 3.5 techniques (West, 1994); the latter study also reported that firms which employed more than five different forecasting techniques used at least one or more objective methods. A study by Rothe (1978, p. 116) concluded that "the typical company used multiple forecasting techniques rather than relying on a single method". This result was partly supported by Dalrymple (1987), who found that 38% of firms used a forecast combination, and further confirmed by Wilson and Daubek (1989) whose figure was over 60%. A high proportion of respondents utilising a combination of subjective and objective methods was also observed in the Simister and Turner (1973) and Cerullo and Avila (1975) studies; both studies also stressed that the majority of firms which did not combine forecasts relied on judgemental procedures.
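The combination of subjective and objective forecasts reported in these surveys can be sketched as follows. This is a purely illustrative example, not taken from any of the studies reviewed: the function names, the equal-weight scheme and all figures are assumptions.

```python
# Illustrative sketch of combining objective and subjective forecasts,
# as many surveyed firms reported doing. All values are hypothetical.

def naive_forecast(history):
    """Objective component: naive method - next value equals the last."""
    return history[-1]

def trend_forecast(history):
    """Objective component: a simple linear trend fitted through the
    first and last observations."""
    n = len(history)
    slope = (history[-1] - history[0]) / (n - 1)
    return history[-1] + slope

def combine(forecasts, weights=None):
    """Weighted combination of forecasts; equal weights by default."""
    if weights is None:
        weights = [1 / len(forecasts)] * len(forecasts)
    return sum(w * f for w, f in zip(weights, forecasts))

sales = [100, 104, 110, 113, 121]   # illustrative monthly sales history
judgemental = 126                   # e.g. a sales force composite estimate
objective = [naive_forecast(sales), trend_forecast(sales)]
combined = combine(objective + [judgemental])
```

Equal weighting is only the simplest option; a firm could instead weight each method by its past accuracy record.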
6. Evaluation issues

6.1. Forecast presentation

Miller's (1985) case study emphasised how crucial the employment of graphical output was when presenting forecasts to decision makers. Graphical displays were "[t]he most important step toward the overall satisfaction of needs" of the organisation (Miller, 1985, p. 74). Surprisingly, none of the other empirical investigations raised the issue of how forecasts were presented to management. In the study reported by Davidson (1987), 70% of managers asked their forecasters about the statistical basis and the quantitative method employed when preparing the forecast; this "suggest(s) that before managers take ownership in the forecasts to be used in operations planning they need to understand how the techniques work" (Davidson, 1987, p. 19). McHugh and Sparkes (1983) also found that forecasts produced judgementally had a more extensive influence on decision making than statistically or model-built forecasts (see also Simister and Turner, 1973). Only White (1986) enquired who received copies of sales forecasts. Interestingly, it was mainly small firms that kept their forecasts highly confidential, because "competition is more keenly felt and a copy of a competitor's sales forecast could be a tip-off of its selling or production strategies in the near future" (White, 1986, p. 15); independent of firm size, the majority of firms in his sample distributed their sales forecasts to all executive personnel.

6.2. Forecast review and use of subjective judgement
Davidson (1987) examined whether interdepartmental meetings were used as a means of reviewing forecasts. Half of his respondent companies held such meetings, but some firms did not invite the forecast preparers! Peterson (1989, 1990) investigated whether different parties were involved in the forecast review process for different forecasting techniques, and found that the kinds of individuals involved in reviewing forecasts depended both upon the type of firm and the forecasting technique. For example, in terms of sales force forecasts, finance managers were highly involved in reviewing in consumer goods firms but to a lesser extent in industrial goods firms. The converse was true regarding the revision of expert opinion forecasts, i.e. finance managers were more involved in industrial goods firms than in consumer goods firms. Only Dalrymple (1987) asked in his survey whether companies prepared alternative forecasts. The results obtained indicated that firms showed little support for this activity; 27.8% prepared ("frequently" or "usually") forecasts for alternative strategies, 19.6% for alternative
environments and 18.8% for alternative capabilities. Almost one in two forecasters used point estimation only, whereas only one in five used forecasting intervals frequently or usually. A comparison between manufacturing and service companies by Sanders (1992) revealed that the majority of both groups "always" or "frequently" adjusted their quantitative forecasts to incorporate knowledge about the environment, the product and past experience. In this context, Hurwood et al.'s (1978, pp. 9-10) respondents stressed "the importance of common sense and judgement throughout the forecasting process, regardless of the specific technique that may be employed along the way". Political reasons also seem to affect forecast modifications. In Fildes and Hastings' (1994) study, 64% of respondents agreed that forecasts were frequently modified for political reasons and 84% thought that there is a need to superimpose judgement on base forecasts. The authors also observed a "broad agreement amongst the forecasters that forecasts were confused with targets (and therefore modified for political reasons)" (Fildes and Hastings, 1994, p. 9). Political reasons were also the cause of forecast adjustments in a Fortune 500 firm as reported by Walker and McClelland (1991); sales management had to take two conflicting goals into consideration when establishing sales volumes: first, high volumes were necessary for maximum expense budget allocations; and, second, more realistic (lower) volume estimates were needed in order to keep the bonus incentive programme as a motivational tool for the employees (see also Peterson, 1989). Walker and McClelland (1991) further reported, in their case study, that annual forecasts were judgementally modified in order to incorporate advertising and sales promotion activities for a weekly sales volume plan; the sales forecasts were generally adjusted by the Vice President.
The reasons given for adjustments "ranged from gut feelings about future sales trends to more specific anticipated effects of planned selling and marketing programs" (Walker and McClelland, 1991, pp. 378-379).
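The contrast noted above between point estimation and forecast intervals can be sketched in a few lines. This is an illustrative construction only: the error history, the 80% coverage level and the empirical-quantile approach are assumptions, and firms could equally build intervals from a fitted error distribution.

```python
# Hypothetical sketch: turning a point forecast into a forecast
# interval using the empirical distribution of past forecast errors
# (actual minus forecast). All numbers are illustrative.

def forecast_interval(point, past_errors, coverage=0.8):
    """Return (low, high) bounds around a point forecast.

    Bounds come from the empirical quantiles of past errors; the
    80% default coverage is an arbitrary illustrative choice.
    """
    errs = sorted(past_errors)
    lo_idx = int((1 - coverage) / 2 * (len(errs) - 1))
    hi_idx = int((1 + coverage) / 2 * (len(errs) - 1))
    return point + errs[lo_idx], point + errs[hi_idx]

past_errors = [-8, -5, -3, -1, 0, 2, 3, 4, 6, 9]   # illustrative history
low, high = forecast_interval(120, past_errors)
```

A forecaster reporting only the point estimate 120 hides the asymmetry and width of the likely range that the interval makes explicit.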
6.3. Forecast evaluation criteria

Studies investigating the criteria used for evaluating forecasts agreed that accuracy was the most important factor, followed by ease of use, ease of interpretation, credibility and cost (Carbone and Armstrong, 1982; Mentzer and Cox, 1984a; Martin and Witt, 1988). Inaccurate forecasts led, in Sanders' (1992) sample, to inventory/production and scheduling problems, wrong pricing decisions, customer service failures, etc. However, the accuracy aspect seemed to be more important for academics than for practitioners, the latter putting more emphasis on ease of interpretation, cost and time (Carbone and Armstrong, 1982). Speed, or timeliness of a forecast, also tended to be an important evaluation criterion for industrial goods producers but not for consumer goods producers in Herbig et al.'s (1994) sample (see also Small, 1980). Lastly, Martin and Witt (1988) reported that, with the extension of the forecasting horizon, the speed with which the forecast became available lost importance for their respondents. Wright (1988, p. 71) argued that "[t]he appropriate method of evaluation is critically dependent on the purpose for which management requires the forecast". He showed this in an example of inventory planning, where a forecasting method might not be chosen because it was the most accurate, but because it led to a least cost inventory management policy. This was based on the fact that, in the company studied, the correlation between forecast accuracy and inventory management cost was low. Other firms, as reported by White (1986, p. 11), put more emphasis on forecast consistency than on accuracy, because "[t]hey feel they can get along all right as long as their forecasts fall within familiar margins".
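Accuracy, the criterion ranked first in these studies, is typically operationalised through simple error measures computed against realised outcomes. The following sketch is illustrative only: the measures shown are two common choices, and the data values are hypothetical.

```python
# Illustrative sketch of the kind of accuracy record a firm might keep
# when evaluating forecasts against actual outcomes. Data are made up.

def mae(actual, forecast):
    """Mean absolute error, in the units of the series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error; actuals must be non-zero."""
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

actual = [100, 110, 120, 130]     # hypothetical realised sales
forecast = [98, 115, 118, 127]    # hypothetical forecasts for same periods
```

MAE is easy to interpret in sales units, while MAPE allows comparison across products of different volume, which matches the practitioners' stated preference for ease of interpretation.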
6.4. Forecast performance
From the empirical evidence, maintaining records of forecast accuracy on a regular basis is not a universal practice. While Dalrymple (1975, 1987) reported that four out of five firms kept such records, in Rothe's (1978) sample some 40% did not have an objective accuracy figure. Similarly, more than a third of Drury's (1990) respondents did not have systems and procedures for analysing forecast errors. One explanation for this may well be the lack of a mechanism for keeping records on the various types of forecasts, as shown in the case study company by Capon et al. (1975); this, in itself, may cause forecasts to be perceived as inadequate (Miller, 1985). Several studies have attempted to identify the factors influencing forecast accuracy; these are summarised in Table 2.

Table 2
Factors influencing forecast accuracy

Company characteristics
- Firm size: + (Dalrymple, 1975; Rothe, 1978; Small, 1980; White, 1986; Dalrymple, 1987)
- Industry type: a (Dalrymple, 1975; Rothe, 1978; Small, 1980; Mentzer and Cox, 1984b; Peterson, 1990)
- Market area served: - (Dalrymple, 1975; Rothe, 1978)

Forecasting process characteristics
- Forecast time horizon: - (Small, 1980; Mentzer and Cox, 1984a,b; Dalrymple, 1987)
- Forecast level: + (Mentzer and Cox, 1984a,b; White, 1986)
- Individual function of forecast (e.g. market size): a (McHugh and Sparkes, 1983)
- Number of products forecast: not significant (Mentzer and Cox, 1984b)
- Number of applications of the forecast: + (Dalrymple, 1975; McHugh and Sparkes, 1983)
- $ sales volume forecast: - (Mentzer and Cox, 1984b)
- Number of people preparing forecasts: U-shaped relationship (Dalrymple, 1975, 1987)
- Team-based forecasting (yes/no): not significant (Kahn and Mentzer, 1994)
- Company level at which forecast is prepared: + (Mentzer and Cox, 1984b)
- Formal training of forecast preparers: + (Mentzer and Cox, 1984b)
- Technique: varies (Small, 1980; Dalrymple, 1987)
- Sophistication of techniques: small (though significant for selected situations) (Mentzer and Cox, 1984b)
- Number of forecasting methods: + (Small, 1980; West, 1994)
- Use of forecast combination (yes/no): no effect (Dalrymple, 1987)
- Seasonal adjustments (never/sometimes/always): + (Dalrymple, 1975, 1987)
- Use of consultants: + (Dalrymple, 1975)
- Use of computers: + (Dalrymple, 1975, 1987)

a Concept of directionality not applicable as nominal variable involved. +, positively related to forecasting accuracy; -, negatively related to forecasting accuracy.
The majority of surveys found that larger firms achieved more accurate forecasts than smaller firms (Small, 1980; White, 1986; Dalrymple, 1987). Furthermore, while an increase in forecast level improved forecasting accuracy (Mentzer and Cox, 1984a,b; White, 1986), an increase in time horizon decreased the accuracy of forecasts (Small, 1980; Mentzer and Cox, 1984a,b; Dalrymple, 1987). An increase in the market area served also had a negative influence on accuracy (Dalrymple, 1975; Rothe, 1978), as did the magnitude of sales volume forecast (Mentzer and Cox, 1984b). Better forecasting results were obtained by preparing forecasts at higher levels in the company hierarchy, through better formal training of forecast preparers (Mentzer and Cox, 1984b), and by seasonally adjusting the forecasts and using consultants and computers (Dalrymple, 1975, 1987). Firms that utilised a greater number of forecasting techniques (Small, 1980; West, 1994) and prepared their forecasts for more applications (McHugh and Sparkes, 1983) also reported better forecast performance. The accuracy of forecasts can also be influenced by the individual function of the forecast, the forecasting technique used and the industry context. Thus, market size forecasts were claimed to be accurate more often than forecasts for investment appraisals by McHugh and Sparkes (1983), while techniques such as time series, regression and indicator analysis (Small, 1980) or life-cycle and leading index (Dalrymple, 1987) have also been found to have a positive impact on accuracy. Consumer goods manufacturers reported far better forecasting results than their industrial counterparts in the studies by Rothe (1978) and Small (1980), but the opposite was found in Peterson's (1990) survey; wholesale and retail industries have also been found to be superior to manufacturing industries in terms of reported accuracy levels (Dalrymple, 1975; Mentzer and Cox, 1984b).
No influence on forecasting accuracy was observed in relation to the number of products forecast (Mentzer and Cox, 1984b), the utilisation of a team approach (Kahn and Mentzer, 1994) or the use of forecast combinations (Dalrymple, 1987). Adoption of more sophisticated techniques resulted in accuracy gains only in certain situations (Mentzer and Cox, 1984b). Pan et al. (1977) found that firms which desired greater accuracy utilised techniques which they thought were more sophisticated. Furthermore, they found evidence that desired and achieved accuracy were positively related, and concluded that "forecasting success is indeed related to forecasting aspirations" (Pan et al., 1977, p. 76). In this context, a few studies looked into the perceptions of decision makers regarding the accuracy of forecasting techniques. Wilson and Daubek (1989) reported that multiple regression was perceived to be the most accurate technique, whereas Mahmoud et al. (1988) found Holt's two-parameter and Winters' three-parameter exponential smoothing to have the highest perceived accuracy.
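Holt's two-parameter method mentioned above maintains a smoothed level and a smoothed trend, each with its own smoothing constant. The following is a minimal sketch for orientation only; the smoothing constants, the initialisation and the data are illustrative assumptions, not values from the surveyed firms.

```python
# Minimal sketch of Holt's two-parameter (linear) exponential
# smoothing. Alpha, beta and the series are illustrative.

def holt(series, alpha=0.5, beta=0.5, horizon=1):
    """Return the h-step-ahead forecast from Holt's linear method.

    Level and trend are initialised from the first two observations,
    a common simple choice.
    """
    level, trend = series[0], series[1] - series[0]
    for y in series[2:]:
        prev_level = level
        # Smooth the level towards the new observation.
        level = alpha * y + (1 - alpha) * (level + trend)
        # Smooth the trend towards the latest change in level.
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

Winters' three-parameter method extends this scheme with a third smoothing equation for a seasonal component.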
(1989). Lastly, expert opinion forecasters seemed to lack information, forecasting training, experience and time, and suffered from deadlines which were too short (Peterson, 1990). Peterson (1990) also discovered significant differences between consumer and industrial goods firms, with the former complaining more about being inexperienced in forecasting, having inadequate time available and having deadlines which were too short to prepare such forecasts. Furthermore, consumer goods firms regarded their forecasts more often as too optimistic, and forecasting in general was more often considered unimportant than was the case with their industrial goods counterparts. Sanders and Manrodt (1994) reported that only a small percentage (15%) of their respondents preferred over- to underforecasting, while 70% preferred underforecasting; the reason for this was that management reviews occurred less often when forecasts were surpassed. The actions taken if the forecasting error was not within acceptable limits were examined by Drury (1990). Excluding cases where the reason for the error was clearly attributable to external (i.e. uncontrollable) events (and thus preventative action by the company was not possible), 20% of the respondents made only minor adjustments and 4% did nothing. However, the majority undertook major re-evaluations or initiated serious action. Small (1980) investigated the issue of forecast revision in more detail. Specifically, he linked the frequency of forecast revision with the use of certain forecasting techniques and found that reviews on an annual basis were most often undertaken for forecasts developed through survey of users' expectations, time series, and regression analysis; quarterly reviews were typical for forecasts based on jury of executive opinion and sales force composite estimates. 
The findings of Simister and Turner (1973) suggested that companies utilising econometric models realised the importance of including the most recent information into their forecasts; all but one company in their sample updated the forecasting models and revised the forecasts at least once a year. Several authors reported that the most common revision periods
were quarterly and monthly (Dalrymple, 1975; Cerullo and Avila, 1975; Pan et al., 1977). In Drury's (1990) study, only a small proportion of the sample (13%) revised the forecasts between normal preparation dates; however, forecasts were prepared more regularly by the firms studied and, therefore, revision was probably less necessary. A final stream of literature looked at what can be done to improve/assist the forecasting task. Sanders (1992) and Sanders and Manrodt (1994) mentioned advancements in terms of better data, greater management support and better training, in that order. Better data about the industry, customers, competition and the economy were also needed according to the study conducted by Rothe (1978); his respondents also saw a need for better forecasting techniques and more resources for the forecasting task. Finally, in the company investigated by Fildes and Hastings (1994), forecast improvement was found to be a question of organisational design (e.g. better links to the marketing research department).
Although considerable empirical research has focused on the forecasting practices of firms, not all issues have received equal attention. For example, while questions concerning the utilisation of forecasting methods have attracted a lot of study, issues such as the role and level of forecasting have been relatively neglected. Further, while variables such as company size and industry type have been systematically linked to some aspects of forecasting practice (e.g. resources available and forecast accuracy), such linkages have been left unexplored for other aspects (e.g. data sources utilised). A related point concerns the types of variables that have been linked to the individual elements of forecasting practice as listed in Fig. 1; thus, while company size and industry membership have been widely employed to explain differences in practices, other potentially relevant variables (e.g. environmental turbulence and degree of formalisation/centralisation within the firm) have
not been considered. Lastly, while the interrelationships among certain aspects of forecasting practice have been examined (e.g. between forecast horizon and use of forecast), potential linkages between other aspects (e.g. between the resources committed to forecasting and forecast performance) have yet to be studied. Taken together, the above observations suggest that future research in forecasting can take three broad directions (or a combination of them): first, to relate organisational and environmental variables known to affect forecasting to a wider range of issues than has hitherto been the case; second, to explore the impact of additional firm-specific and environment-specific variables on forecasting; and, third, to examine neglected interlinkages between different aspects of organisational forecasting. In what follows, some specific research suggestions are put forward under the three key headings of organisational forecasting practice issues shown in Fig. 1.
the use of marketing research information (e.g. Deshpande and Zaltman, 1982, 1987) and the utilisation of marketing plans (e.g. John and Martin, 1984), and may well have an influence on, say, the use/purpose of the forecast and the relationship between preparers and users. For example, highly centralised organisations would be expected to have a preference for a top-down approach, while a bottom-up approach is more likely to match the needs of a decentralised organisation. Finally, as a topic in its own right, the question of how forecasts are used in management decision making needs addressing. Is the use made of forecasts only instrumental (i.e. are they used in specific decisions, such as profit planning) or is it also symbolic (i.e. are they used to justify decisions already made or actions already implemented)? A forecast is a piece of information and it is well known that information can be used in very different ways and for very different purposes (see Menon and Varadarajan, 1992, for a review).
forecasts are often the results of combining estimates generated by different methods (see Section 5.3), little is known about the specific combination approaches followed by firms. Finally, what is the impact of environmental complexity on the perceived usefulness and actual adoption of different forecasting methods? For example, firms exporting a wide range of products to a large number of dissimilar foreign destinations may be expected to use a wider range of forecasting approaches than firms serving, say, a few countries which are economically (and/or psychologically) close to their domestic market.
7.3. Evaluation issues
The vast majority of empirical studies examining forecast performance have focused almost exclusively on accuracy as a performance criterion (see Section 6.4). However, as already mentioned in Section 6.3, the evaluation of forecasts by management is more complex, comprising dimensions of timeliness, bias and opportunity cost. Thus, a broader, multi-dimensional conceptualisation of forecasting is warranted in future research. For example, with regard to bias, only one study (specific to the sales force composite method) has been conducted (Peterson, 1989); thus, it would be useful to find out whether other forecasting techniques or particular forecasting levels are prone to over- or underestimation in practical applications. Another issue worthy of future study centres on the parties involved in the forecasting process. Specifically, the participation of different hierarchical levels in the firm and/or functional areas at different stages of forecast development (data input, preparation, approval) could be linked to forecast performance; such participation patterns could be investigated at different forecasting levels and/or time horizons and also linked to organisational characteristics (e.g. size, centralisation, etc.). A related issue concerns the potential involvement of parties outside the organisation in the latter's forecasting activities and the impact of such interdependencies upon forecast performance. For example, delegation of forecasting responsibility to downstream channel members (e.g. distributors and retailers) may have important implications for the accuracy, bias and subsequent utilisation of forecasts. Moreover, in some situations (e.g. when supplying distant export markets), such delegation may not be a matter of choice but of necessity; the existing empirical literature is practically silent on how such interorganisational interdependencies are managed, and this is obviously an area where the current level of understanding is very basic indeed. Finally, further attention needs to be drawn to any relation existing between forecast performance and the use of the forecast for particular purposes (see Section 6.4). Although the direction of causality may be a subject for debate, it could be the case that the number and types of specific uses to which the forecast is put is, at least, partly dependent upon its (past) performance (see also the point made earlier concerning the instrumental and symbolic use of forecasts). Needless to say, the research agenda drawn above is by no means complete. Imaginative use of Fig. 1, hopefully coupled with the insights provided here, should help identify several issues worthy of research and help improve our understanding of organisational forecasting practice.
Acknowledgements
The authors have benefited from a presentation of a previous version of this paper at the 23rd European Marketing Academy conference (Maastricht, the Netherlands, 17-20 May 1994). The comments of Peter Leeflang (University of Groningen) and Scott Armstrong (Wharton School) were particularly appreciated. The authors also wish to thank the editor, Randall Schultz, for his very helpful suggestions for improving the paper.
References
Armstrong, J.S., 1988, Research needs in forecasting, International Journal of Forecasting, 4, 449-465.
Hulbert, J.M., W.K. Brandt and R. Richers, 1980, Marketing planning in the multinational subsidiary: practices and problems, Journal of Marketing, 44, 7-15. Hurwood, D.L., E.S. Grossman and E.L. Bailey, 1978, Sales forecasting, The Conference Board, Report No. 730. John, G. and J. Martin, 1984, Effects of organizational structure of marketing planning on credibility and utilization of plan output, Journal of Marketing Research, 21, 170-183. Jones, E.O. and J.G. Morrell, 1966, Environmental forecasting in British industry, Journal of Management Studies, February. Kahn, K.B. and J.T. Mentzer, 1994, The impact of team-based forecasting, Journal of Business Forecasting, 13(2), 18-21. Keating, B. and J.H. Wilson, 1987-88, Forecasting: practices and teachings, Journal of Business Forecasting, 6, 10-13, 16. Kress, G., 1988, Forecasting courses for managers, in: C.L. Jain, ed., Understanding business forecasting (Graceway Publishing Company, Flushing). Lawrence, M.J., 1983, An exploration of some practical issues in the use of quantitative forecasting models, Journal of Forecasting, 2, 169-179. Levenbach, H. and J.P. Cleary, 1981, The Beginning Forecaster (Lifetime Learning Publications, Belmont, California). Levenbach, H. and J.P. Cleary, 1982, The Professional Forecaster (Lifetime Learning Publications, Belmont, California). Levenbach, H. and J.P. Cleary, 1984, The Modern Forecaster: The Forecasting Process Through Analysis (Lifetime Learning Publications, Belmont, California). MacGowan, A.C., 1952, Techniques in forecasting consumer durable goods sales, Journal of Marketing, 17, 1952-1953. Mahmoud, E., G. Rice and N. Malhotra, 1988, Emerging issues in sales forecasting and decision support systems, Journal of the Academy of Marketing Science, 16, 47-61. Mahmoud, E., R. DeRoeck, R.G. Brown and G. Rice, 1992, Bridging the gap between theory and practice in forecasting, International Journal of Forecasting, 8, 251-267. Makridakis, S. and S.C.
Wheelwright, 1979, Forecasting: framework and overview, in: S. Makridakis and S.C. Wheelwright, eds., Forecasting, TIMS Studies in the Management Sciences, 12 (North Holland, Amsterdam). Makridakis, S. and S.C. Wheelwright, 1989, Forecasting Methods for Management, 5th edn. (John Wiley & Sons, Chichester). Makridakis, S., S.C. Wheelwright and V.E. McGee, 1983, Forecasting: Methods and Applications, 2nd edn. (John Wiley & Sons, Chichester). Martin, C.A. and S.F. Witt, 1988, Forecasting performance, Tourism Management, 9, 326-329. McHugh, A.K. and J.R. Sparkes, 1983, The forecasting dilemma, Management Accounting, 61, 30-34. Menon, A. and R. Varadarajan, 1992, A model of marketing knowledge use within firms, Journal of Marketing, 56, 53-71.
Mentzer, J.T. and J.E. Cox, Jr., 1984a, Familiarity, application and performance of sales forecasting techniques, Journal of Forecasting, 3, 27-36. Mentzer, J.T. and J.E. Cox, Jr., 1984b, A model of the determinants of achieved forecast accuracy, Journal of Business Logistics, 5, 143-155. Miller, D.M., 1985, The anatomy of a successful forecasting implementation, International Journal of Forecasting, 1, 69-78. Naylor, T.H., 1981, Experience with corporate econometric models: a survey, Business Economics, 16, 79-83. Pan, J., D.R. Nichols and O. Joy, 1977, Sales forecasting practices of large U.S. industrial firms, Financial Management, 6, 72-77. Peterson, R.T., 1989, Sales force composite forecasting - an exploratory analysis, Journal of Business Forecasting, 8, 23-27. Peterson, R.T., 1990, The role of experts' judgement in sales forecasting, Journal of Business Forecasting, 9, 16-21. Peterson, R.T., 1993, Forecasting practices in retail industry, Journal of Business Forecasting, 12, 11-14. Rao, V.R. and J.E. Cox, Jr., 1978, Sales Forecasting Methods: A Survey of Recent Developments (Marketing Science Institute, Cambridge, MA). Reichard, R.S., 1966, Practical Techniques of Sales Forecasting (McGraw-Hill, New York). Reilly, R.F., 1981, Developing a sales forecasting system, Managerial Planning, July/August, 24-26, 30. Rothe, J., 1978, Effectiveness of sales forecasting methods, Industrial Marketing Management, 7, 114-118. Sanders, N.R., 1992, Corporate forecasting practices in the manufacturing industry, Production and Inventory Management, 33, 54-57. Sanders, N.R. and K.B. Manrodt, 1994, Forecasting practices in US corporations: survey results, Interfaces, 24(2), 92-100. Schultz, R.L., 1992, Editorial: Fundamental aspects of forecasting in organizations, International Journal of Forecasting, 7, 409-411. Simister, L.T. and J. Turner, 1973, The development of systematic forecasting procedures in British industry, Journal of Business Policy, 3, 43-54.
Small, R.L., 1980, Sales forecasting in Canada: a survey of practices, The Conference Board of Canada, Study No. 66. Sord, B.H. and G.A. Welsch, 1958, Business Budgeting (Controllership Foundation Inc, New York). Sparkes, J.R. and A.K. McHugh, 1984, Awareness and use of forecasting techniques in British industry, Journal of Forecasting, 3, 37-42. Strong, L., 1956, Survey of sales forecasting practices, The Management Review, September, 790-799. Thompson, G.C., 1947, Forecasting Sales, A Conference Board Report: Studies in Business Policy, No. 25 (National Industrial Conference Board Inc, New York). Turner, J., 1974, Forecasting Practices in British Industry (University Press, London). Walker, K.B. and C.A. McClelland, 1991, Management
al Journal of Research in Marketing, Industrial Marketing Management, Journal of Forecasting, Journal of International Marketing, Journal of the Market Research Society, Journal of Strategic Marketing, European Journal of Marketing and Journal of Marketing Management. He is the founder
member of the Consortium for International Marketing Research (CIMaR), Associate Editor of the International Journal of Research in Marketing and a referee for various academic journals, professional associations and funding bodies. He is also an Editorial Review Board member of the
Investment Management, Practical Financial Management, Practical Business Forecasting, The Management of International Tourism and Modeling and Forecasting Demand in Tourism, and Co-editor of the Tourism Marketing and
Management Handbook. He has published widely in journals on tourism forecasting, and is on the editorial boards of Tourism Management, the Journal of Travel Research, Tourism Economics, the Journal of International Consumer Marketing and the Journal of Euromarketing.