NRES2 Prelims Reviewer


NURSING RESEARCH 2 PRELIMS

Lesson 1
EMPIRICAL PHASE

Phases of Research Process
• Conceptual Phase
• Planning and Design Phase
• EMPIRICAL PHASE
• Analytical Phase
• Dissemination Phase

Empirical Phase
• This involves the collection of data and the preparation for analysis.
• Most time consuming
• Gaining results, sorting them, and evaluating them

Levels of Measurement
• Nominal (e.g., eye color) – named categories
• Ordinal (e.g., level of satisfaction) – named categories with a natural order
• Interval (e.g., temperature) – named, natural order, equal intervals between values
• Ratio (e.g., height) – named, natural order, equal intervals between values, and a true zero value, so ratios between values can be calculated

Nominal Data
• Categories (no ordering or direction)
• E.g. Marital status, type of car owned

Ordinal Data
• Ordered categories (ranking, order, or scaling)
• E.g. Service quality rating, student letter grades

Interval Data
• Differences between measurements but no true zero
• E.g. Temperature in Fahrenheit, standardized exam score

Ratio Data
• Differences between measurements, true zero
• E.g. Height, age, weekly food spending

Criteria for Assessing Quality of Measurements

Reliability
• It is the ability of an instrument to create reproducible results
• Each time it is used, similar scores should be obtained.
• A questionnaire is reliable if we get the same or similar answers repeatedly

Validity
• It measures what it is supposed to measure
• It answers the question, “Is the questionnaire providing answers to the research questions for which it was undertaken?”
• If so, is it using the appropriate tool?

Sensitivity
• Defined as the probability of correctly identifying some condition or disease state.
• Sensitivity is one of four related statistics used to describe the accuracy of an instrument for making a dichotomous classification (i.e., positive or negative test outcome).
• Sensitivity is calculated based on the relationship of the following two types of dichotomous outcomes: (1) the outcome of the test, instrument, or battery of procedures and (2) the true state of affairs. (An illustrative sketch appears at the end of this lesson.)

Objectivity
• It means that everyone follows the same rules and does not interpret the instrument subjectively.

Assessment of Qualitative Data

Trustworthiness
• Qualitative research is subjective, so the most important thing is to respond to the concerns of outsiders
• To what extent can we place confidence in the outcomes of the study?
• Do the readers believe what we have reported?

Credibility
• Credibility refers to the confidence in the truth of research findings.
• Credibility will be assessed by using purposive sampling so that the participants have the same knowledge of and experiences with the phenomenon under study.

Dependability
• Dependability shows that the findings are consistent and could be repeated.
• To provide for the dependability of the findings, constant revisions by the researcher with the assistance of the adviser, critic, and participants will be done.

Confirmability
• Confirmability is the degree of neutrality, or the extent to which the findings of a study are shaped by the respondents and not by the researchers’ bias, motivation, or interest.

Transferability
• This refers to the probability that the study findings have meanings to others in similar situations.
• The process of member checking confirms that the experiences of one participant are the same experiences, with the same meanings, for other participants.

Objectivity = Confirmability
Sensitivity = Trustworthiness
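To make the dichotomous-outcome idea under Sensitivity above more concrete, here is a minimal illustrative sketch in Python. The counts and variable names are hypothetical assumptions, not from the lesson; specificity is shown only as the companion statistic.

```python
# Hypothetical counts from comparing a screening test against the true state of affairs.
true_positives = 45   # test positive, condition truly present
false_negatives = 5   # test negative, condition truly present
true_negatives = 120  # test negative, condition truly absent
false_positives = 30  # test positive, condition truly absent

# Sensitivity: probability of correctly identifying the condition when it is present.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: probability of correctly ruling out the condition when it is absent.
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.90
print(f"Specificity: {specificity:.2f}")  # 0.80
```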
Lesson 2
THE FINAL RESEARCH OUTPUT

WRITING THE FINAL OUTPUT
• The researcher should know not only the parts of the research process but also the forms and style used in writing the research proposal and the research paper.

PRELIMINARY PAGES

Title Page / Title of the Study
• Is a phrase that describes the research study.
• It should be neither too long nor too short, and neither too vague nor too general.

Endorsement Page
• Is a section which states that the study has been examined and recommended for oral examination.

Approval Sheet
• Is a section which presents that the study has been approved by the Committee on Oral Examination.

Acknowledgement Page
• Is a section wherein the researcher expresses deep gratitude to the persons who assisted and helped in making the study a successful one.

Dedication
• A section that allows the researcher to personally dedicate the study to family members, spouses, friends, or community groups.

Table of Contents
• It contains all the parts of the research paper, including the pages.
• Indicates all the contents of the research paper; the page number for each section is placed at the right-hand margin.
• The page for the table of contents is usually written in Roman numerals and indicated at the bottom of the paper.

List of Tables
• This follows the table of contents and indicates the titles of the tables in the research paper.
• The caption should be exactly how it appears in the text.
• In numbering the tables, use Arabic numerals.

List of Figures
• Is composed of paradigms, diagrams, graphs and charts or flowcharts.

Abstract
• An abstract is a short summary of the completed research. Abstracts should be self-contained and concise, explaining the research study as briefly and clearly as possible.
• The abstract of the thesis ought to be included in the copy to be evaluated and defended. It consists of concise statements (more or less 150 words) of:
  o what the study is all about,
  o the methodology,
  o the most important findings.

MAIN BODY
Chapter I
• Introduction: This section refers to:
  o “What this study is all about” or “What makes the researcher interested in doing the study”.
  o Purpose: to introduce the reader to the subject matter.
  o The introduction serves as a springboard for the statement of the problem, as stated by Dr. Barrientos-Tan.

Chapter II
• Methodology, RRL

Chapter III
• Results and Discussions

Chapter IV
• Conclusions and Recommendations
  o Conclusions
    ▪ Answer the sub-problems
    ▪ Summary of the study and concluding remarks that highlight thoughts
  o Recommendations
    ▪ Revision of the plan (if necessary), or improvement

    ▪ Satisfy the following questions:
      a. Did the intervention work?
      b. What should be changed?
      c. What should be the next step?

SUPPLEMENTARY PAGES
• Bibliography – Books / Online References (Articles, Books)
• Appendices
  A. Communication letters
  B. Questionnaire
  C. Validity result of the questionnaire
  D. Grammarly check result
  ▪ Informed Consent Form
  ▪ Approval Letter from LC REC
  ▪ Interview Guide
• Curriculum Vitae

Lesson 3
STATISTICS IN RESEARCH

PERCENTAGE
• One of the most frequent ways to represent statistics is by percentage.

MEAN
• The mean is a parameter that measures the central location of the distribution of a random variable and is an important statistic that is widely reported in scientific literature.
• Mean implies average and it is the sum of a set of data divided by the number of data. Mean can prove to be an effective tool when comparing different sets of data; however, this method might be disadvantaged by the impact of extreme values.

T TEST
• A t-test is a type of inferential statistic used to determine if there is a significant difference between the means of two groups, which may be related in certain features.
• The t-test is one of many tests used for the purpose of hypothesis testing in statistics.
• Calculating a t-test requires three key data values. They include the difference between the mean values from each data set (called the mean difference), the standard deviation of each group, and the number of data values of each group.
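As a hedged illustration of the two-group comparison just described, the sketch below uses SciPy's independent-samples t-test on small hypothetical score sets; the group names and values are assumptions for demonstration only.

```python
from scipy.stats import ttest_ind

# Hypothetical exam scores for two independent respondent groups.
group_a = [78, 85, 90, 72, 88, 81, 79, 84]
group_b = [70, 75, 80, 68, 74, 77, 73, 71]

# ttest_ind compares the means of two independent groups.
t_statistic, p_value = ttest_ind(group_a, group_b)

print(f"t = {t_statistic:.3f}, p = {p_value:.4f}")
# A p-value below the chosen level of significance (e.g., 0.05)
# suggests a significant difference between the two group means.
```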
ANOVA
• Analysis of variance (ANOVA) is an analysis tool used in statistics that splits an observed aggregate variability found inside a data set into two parts: systematic factors and random factors.
• The systematic factors have a statistical influence on the given data set, while the random factors do not. Analysts use the ANOVA test to determine the influence that independent variables have on the dependent variable in a regression study.
• Example of How to Use ANOVA:
  o A researcher might, for example, test students from multiple colleges to see if students from one of the colleges consistently outperform students from the other colleges. In a business application, an R&D researcher might test two different processes of creating a product to see if one process is better than the other in terms of cost efficiency.
  o The type of ANOVA test used depends on a number of factors. It is applied when data need to be experimental. Analysis of variance is employed if there is no access to statistical software, resulting in computing ANOVA by hand. It is simple to use and best suited for small samples. With many experimental designs, the sample sizes have to be the same for the various factor level combinations.
  o ANOVA is helpful for testing three or more variables. It is similar to multiple two-sample t-tests. However, it results in fewer type I errors and is appropriate for a range of issues. ANOVA groups differences by comparing the means of each group and includes spreading out the variance into diverse sources. It is employed with subjects, test groups, between groups and within groups.

PEARSON R / PEARSON CORRELATION
• The Pearson correlation coefficient (also known as the Pearson product-moment correlation coefficient) r is a measure used to determine the relationship (instead of difference) between two quantitative variables (interval/ratio) and the degree to which the two variables coincide with one another, that is, the extent to which two variables are linearly related: changes in one variable correspond to changes in another variable.
• Pearson’s Correlation Coefficient:
  o Pearson’s correlation coefficient is the test statistic that measures the statistical relationship, or association, between two continuous variables. It is known as the best method of measuring the association between variables of interest because it is based on the method of covariance. It gives information about the magnitude of the association, or correlation, as well as the direction of the relationship.
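A minimal sketch of the Pearson r just described, using SciPy on hypothetical paired measurements; the variable names and values are assumptions.

```python
from scipy.stats import pearsonr

# Hypothetical paired interval/ratio measurements for the same respondents.
hours_of_study = [2, 4, 5, 3, 6, 8, 7, 1]
exam_score = [65, 72, 78, 70, 85, 92, 88, 60]

# pearsonr returns the correlation coefficient r and its p-value.
r, p_value = pearsonr(hours_of_study, exam_score)

print(f"r = {r:.3f}, p = {p_value:.4f}")
# r close to +1 or -1 indicates a strong linear relationship;
# the sign of r shows the direction of the relationship.
```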
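For the one-way ANOVA scenario given earlier in this lesson (students from multiple colleges), the following is a minimal sketch using SciPy; the college groups and scores are hypothetical.

```python
from scipy.stats import f_oneway

# Hypothetical exam scores from students of three colleges.
college_a = [88, 92, 85, 90, 87]
college_b = [78, 82, 80, 75, 79]
college_c = [84, 86, 83, 88, 85]

# f_oneway performs a one-way analysis of variance across the groups.
f_statistic, p_value = f_oneway(college_a, college_b, college_c)

print(f"F = {f_statistic:.3f}, p = {p_value:.4f}")
# A small p-value suggests at least one college mean differs from the others.
```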
Lesson 4
STATISTICAL MEASUREMENTS IN NURSING RESEARCH

STATISTICS
• Statistics is a branch of mathematics used to summarize, organize, present, analyze and interpret numerical data, such as the numerical characteristics of sample parameters and the numerical characteristics of a population.
• Statistics improve the quality of data with the design of experiments and survey sampling. Statistics also provide tools for prediction and forecasting using data and statistical models. Statistics is applicable to a wide variety of academic disciplines, including the natural and social sciences, government, business, and nursing.

KINDS OF STATISTICS
1. Descriptive Statistics
• Statistical methods that can be used to summarize or describe a collection of data. These are statistics intended to organize and summarize numerical data from the population and sample.
• Uses:
  1. Measures and condenses data in:
     a. Frequency distribution – scores are arranged from highest to lowest or from lowest to highest.
     b. Graphic presentation – data are presented in graphic form to make frequency distribution data readily apparent.
  2. Measures of central tendency, used to describe the mean, median, and mode.
• Descriptive statistics is the term given to the analysis of data that helps describe, show or summarize data in a meaningful way such that, for example, patterns might emerge from the data.
• They are simply a way to describe data. Descriptive statistics therefore enable us to present the data in a more meaningful way, which allows simpler interpretation of the data.

2. Inferential Statistics
• These are concerned with the population and the use of sample data to predict future occurrences.
• Uses:
  1. To estimate population parameters
     a. Sampling error – the difference between data obtained from a randomly sampled population and the data that would be obtained if the entire population were measured. Sampling error also occurs when the sample does not accurately reflect the population.
     b. Sampling distribution – is a theoretical frequency distribution based on an infinite number of samples. The researcher never actually draws an infinite number of samples from a population.
     c. Sampling bias – occurs when samples are not carefully selected, as in non-probability sampling.
  2. Testing the null hypothesis
• Inferential statistics are techniques that allow us to use these samples to make generalizations about the populations from which the samples were drawn.
• Inferential statistics arise out of the fact that sampling naturally incurs sampling error and thus a sample is not expected to perfectly represent the population.

3. Use of Decision Theory
• This theory is based on the assumptions associated with the theoretical normal curve, used in testing for differences between groups with the expectation that all groups are members of the same population. This is expressed as a null hypothesis, and the level of significance (alpha) is set at 0.05 before data collection. According to this theory, two types of errors can occur when the researcher is deciding what the result of a statistical test means (Burns & Grove, 2007).
• Types of errors:
  A. Type I error – occurs when the null hypothesis is rejected when in reality it is true. There is a greater risk of a Type I error when the level of significance is 0.05 than with a 0.01 level of significance; the risk decreases as the level of significance becomes more extreme.
  B. Type II error – occurs when the null hypothesis is regarded as true but it is in fact false. A statistical analysis may indicate no significant differences between groups but in reality, the groups are different. There is a greater risk of a Type II error when the level of significance is 0.01 than when it is 0.05.

4. Power Analysis
• This is the way to control Type II error. Power analysis will determine the probability of the statistical test detecting a significant difference that exists. The researcher determines the sample size, the level of significance, and the effect size on the outcome variable (Cohen, 1988).

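The relationship among effect size, level of significance, power, and sample size can be illustrated with the standard normal-approximation formula for comparing two group means. This is a minimal sketch, not something specified in the lesson; the planning values are chosen only as examples.

```python
from math import ceil
from scipy.stats import norm

# Example planning values (assumptions for illustration):
effect_size = 0.5   # expected standardized difference between groups (Cohen's d)
alpha = 0.05        # level of significance (two-tailed)
power = 0.80        # desired probability of detecting a true difference

# Normal-approximation formula for the required sample size per group
# in a two-group comparison of means.
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(f"Approximate sample size per group: {n_per_group}")  # 63 with these values
```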
5. Degrees of Freedom
• The interpretation of a statistical test, in most cases, depends on the number of values that can vary. Although the degrees of freedom (df) indicate the number of values that can vary, attention is actually focused on the values that are not free to vary. This is generally expressed by the df sign and a number that denotes the significance level (e.g., df = 0.01 or 0.05).

6. Frequency Distribution
• This is the method used to organize the research data.
• Types:
  1. Grouped frequency distribution – these are used with nominal data or when continuous variables are being examined, such as age, weight, blood pressure, etc.
  2. Ungrouped frequency distribution – this is used when data are categorized and presented in tabular form to display all numerical values obtained for a particular variable.

7. Percentage Distribution
• This shows the percentage of subjects in a sample whose scores fall into a specific group and the number of scores in that group. This is useful in comparing the present data with findings from other studies that have different sample sizes.

STATISTICAL TOOLS FOR TREATMENT OF DATA
1. Percentage
• Is computed to determine the proportion of a part to a whole, such as a given number of respondents in relation to the entire population.
• Formula: % = (f / N) × 100, where f is the frequency (part) and N is the total number of respondents (whole).

2. Ranking
• Is used to determine the order of decreasing or increasing magnitude of variables. The largest frequency is ranked 1, the second 2, and so on.

3. Weighted Mean
• Refers to the overall average of the responses/perceptions of the study respondents. It is obtained from the sum of the products of each score and its frequency of responses on a 5-point Likert scale, divided by the total number of responses.
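A minimal sketch of the weighted mean for a single 5-point Likert item, using hypothetical frequencies of responses (the numbers are assumptions for illustration):

```python
# Hypothetical frequency of responses for one 5-point Likert item.
# Keys are the scale scores (5 = strongly agree ... 1 = strongly disagree).
frequencies = {5: 14, 4: 18, 3: 5, 2: 2, 1: 1}

# Weighted mean = sum of (score x frequency) divided by total responses.
total_responses = sum(frequencies.values())
weighted_mean = sum(score * freq for score, freq in frequencies.items()) / total_responses

print(f"Weighted mean: {weighted_mean:.2f}")  # 4.05 for these frequencies
```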

4. T test
• Compares the responses of two respondent groups in the study on the phenomenon under investigation. This is used to test for significant differences between samples.

5. ANOVA
• Tests the differences between two or more means and can be used to examine data from two or more groups.

6. Factor Analysis
• Examines relationships among a large number of variables and isolates those relationships to identify clusters of variables that are most closely linked. This is necessary in developing instruments for measurement.

7. Regression Analysis
• Used to predict the value of one variable when the value of one or more other variables is known. The variable to be predicted in regression analysis is referred to as the dependent variable (see the illustrative sketch below).

8. Multiple Regression Analysis
• Is used to correlate more than two variables.

9. The Complete Randomized Block Design
• Is the same as the ANOVA except that complete blocks are used instead of items. For instance, the use of different antibiotics per patient per room is tested. The heterogeneity of respondents will give different results.

Lesson 5
ANALYSIS AND INTERPRETATION OF DATA
• Data analysis is the most crucial part of any research.
• Data analysis summarizes collected data.
• It involves the interpretation of data gathered using analytical and logical reasoning to determine patterns, relationships or trends.

Stage of Data Analysis (figure)
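To give item 7 above (regression analysis) a concrete form, here is a minimal simple-linear-regression sketch in plain Python. The data are hypothetical and the least-squares formulas are the standard ones, not something specified in the lesson.

```python
# Hypothetical data: predict a dependent variable (e.g., exam score)
# from one known independent variable (e.g., hours of review).
x = [1, 2, 3, 4, 5, 6]          # independent variable (known values)
y = [62, 68, 71, 77, 83, 88]    # dependent variable (to be predicted)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Ordinary least-squares slope and intercept.
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x

predicted = intercept + slope * 7   # predicted score for 7 hours of review
print(f"y = {intercept:.2f} + {slope:.2f}x, prediction at x = 7: {predicted:.1f}")
```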
Lesson 6
ANALYSIS OF QUALITATIVE DATA

Thematic Analysis
Verbatim → Significant statements → Codes → Sub Themes → Major themes

• Code – a shorthand representation of some more complex set of issues or ideas.
• Coding – identifying themes across qualitative data by reading transcripts.
• Focused coding – collapsing or narrowing down codes, defining codes, and recoding each transcript using a final code list.
• Open coding – reading through each transcript, line by line, and noting any categories or themes that seem to jump out to you.
• Transcript – a complete, written copy of the recorded interview or focus group containing each word that is spoken on the recording, noting who spoke which words.

Lesson 7
DATA ANALYSIS AND PRESENTATION

Data Analysis
• Purpose
  o To answer the research questions and to help determine the trends and relationships among the variables.
• Steps in Data Analysis
  o Before data collection, the researchers should accomplish the following:
    a. Determine the methods of data analysis
    b. Determine how to process the data
    c. Consult a statistician
    d. Prepare dummy tables
  o After data collection:
    a. Process the data
    b. Prepare tables and graphs
    c. Analyze and interpret findings
    d. Consult the statistician again
    e. Prepare for editing
    f. Prepare for presentation

Kinds of Data Analysis
• Descriptive Analysis
  o Refers to the description of the data from a particular sample; hence the conclusion must refer only to the sample.
  o In other words, these summarize the data and describe sample characteristics.
  o Descriptive Statistics – are numerical values obtained from the sample that give meaning to the collected data.

Classification of Descriptive Analysis
1. Frequency Distribution
  • A systematic arrangement of numeric values from the lowest to the highest or highest to lowest.
  • Formula: Σf = N
  • Where:
    o Σ = the sum of
    o f = frequency
    o N = sample size

2. Measure of Central Tendency
  • A statistical index that describes the average of a set of values.
  • Kinds of Averages:
    1. Mode – the numeric value in a distribution that occurs most frequently
    2. Median – an index of average position in a distribution of numbers
    3. Mean – the point on the score scale that is equal to the sum of the scores divided by the total number of scores.
  • Formula: x̄ = Σx / n
  • Where:
    o x̄ = the mean
    o Σ = the sum of
    o x = each individual raw score
    o n = the number of cases

3. Measure of Variability
  • Statistics that concern the degree to which the scores in a distribution are different from or similar to each other.
  • Two Commonly Used Measures of Variability are:
    1. Range – the distance between the highest score and the lowest score in the distribution. E.g., the range for learning center A is 500 (750 − 250) and the range for learning center B is about 300 (650 − 350).
    2. Standard Deviation – the most commonly used measure of variability, which indicates the average degree to which the scores deviate from the mean.

4. Bivariate Descriptive Statistics
  • Derived from the simultaneous analysis of two variables to examine the relationships between the variables.
  • Two Commonly Used Bivariate Descriptive Analyses:
    1. Contingency tables – essentially a two-dimensional frequency distribution in which the frequencies of two variables are cross-tabulated.
    2. Correlation – the most common method of describing the relationship between two measures.
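The measures of central tendency and variability listed above can be computed with Python's standard statistics module; this is a minimal sketch on hypothetical scores (the values are assumptions for illustration).

```python
import statistics

# Hypothetical set of scores from one sample.
scores = [750, 700, 650, 600, 550, 500, 450, 400, 350, 250]
ratings = [3, 4, 4, 5, 2, 4, 1]  # hypothetical small set for the mode

mean = statistics.mean(scores)        # sum of scores / number of scores
median = statistics.median(scores)    # average position in the distribution
mode = statistics.mode(ratings)       # most frequently occurring value
value_range = max(scores) - min(scores)  # highest score minus lowest score
std_dev = statistics.stdev(scores)    # average degree of deviation from the mean (sample SD)

print(mean, median, mode, value_range, round(std_dev, 2))
```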
Inferential Analysis
• The use of statistical tests, either to test for significant relationships among variables or to find statistical support for hypotheses.
• Inferential Statistics – are numerical values that enable the researchers to draw conclusions about a population based on the characteristics of a population sample. This is based on the laws of probability.

Level of Significance
• An important factor in determining the representativeness of the sample population and the degree to which chance affects the findings.
• The level of significance is a numerical value selected by the researcher before data collection to indicate the probability of erroneous findings being accepted as true.
• This value is represented typically as 0.01 or 0.05. (Massey, 1991)

Uses of Inferential Analysis
• Some statistical tests cited for inferential analysis:
  1. t-test – is used to examine the difference between the means of two independent groups.
  2. Analysis of Variance (ANOVA) – is used to test the significance of the difference between the means of two or more groups.
  3. Chi-square – this is used to test hypotheses about the proportion of elements that fall into the various cells of a contingency table (see the illustrative sketch below).

Hypothesis
• The outcome of the study may retain, revise or reject the hypothesis, and this determines the acceptability of the hypotheses and the theory from which they were derived.
• Steps in Testing a Hypothesis:
  1. Determine the test statistic to be used
  2. Establish the level of significance
  3. Select a one-tailed or two-tailed test
  4. Compute the test statistic
  5. Calculate the degrees of freedom
  6. Obtain a tabled value for the statistical test
  7. Compare the test statistic to the tabled value

Presentation of Findings
• Findings are presented in different forms such as:
  1. Narrative or textual form
    o This is composed of a summary of findings, direct quotations and implications of the study.
  2. Tables
    o Tables are used to present clear and organized data.
    o This is utilized for easy analysis and interpretation of data.

The parts of tabular data are presented in the following:
• Rows – horizontal entries (indicate the outcome or the dependent variable)
• Columns – vertical entries (indicate the cause or the independent variable)
• Cells – are boxes where rows and columns intersect.

Interpretation of Data
• After the analysis of data with the appropriate statistical procedures, the next chapter of the research paper presents the interpretation of the data, which is the final step of the research process.
• Three areas:
  1. Summary of findings – This portion summarizes the results of the data analysis from the previous chapter.
  2. Conclusions – A conclusion is drawn from the summary of findings. It focuses on the answers to the problem, including the outcome of the hypotheses, whether rejected or accepted.
  3. Recommendations – This is based on the results of the conclusions. The main goal is geared toward improvement or development.
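The chi-square test listed above operates on exactly the kind of contingency table described in the tabular-data section (rows as the outcome, columns as the independent variable). A minimal sketch with a hypothetical 2x2 table, using SciPy:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (rows = outcome, columns = group).
observed = [
    [30, 10],  # e.g., improved (Group A, Group B)
    [20, 40],  # e.g., did not improve
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")
# A small p-value suggests the proportions differ across the cells of the table.
```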
TYPES OF VALIDITY IN RESEARCH

Face Validity
• This refers to whether the instrument appears, at a glance, to be measuring the appropriate construct.
• It may be established through consultation with people whose attribute is being studied.
• Ex. If the study pertains to the social support of type 2 diabetes patients, then consultation should be done with type 2 diabetic patients and possibly the individuals who make up the support system of these patients.

Content Validity
• This concerns the degree to which an instrument has an appropriate sample of items for the construct being measured and adequately covers the domain.
• Is measured by subjecting the instrument to analysis by a group of experts, those who are knowledgeable about the subject both in theory and in practice.
• 3-5 experts (see the illustrative sketch below)

Criterion-related Validity
• This involves determining the relationship between an instrument and an external criterion.
• The instrument is said to be valid if its scores correlate highly with the scores on the criterion.
• Ex. If a measure of performance in the NLE correlates highly with the academic performance of a sample of nursing graduates, then the instrument scale for performance in the NLE would be of good quality.

Construct Validity
• Refers to whether the test corresponds with its theoretical construct; the process of construct validation is theory-laden.

Concurrent Validity
• Refers to an instrument’s ability to distinguish individuals who differ on a present criterion.
• Ex. A nursing achievement test to differentiate between students who belong to the honor section and those who belong to a regular section could be correlated with the concurrent behavioral ratings of the teaching personnel.
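Content validation with a small expert panel (the "3-5 experts" noted above) is often quantified with an item-level content validity index (I-CVI). The index itself is not named in this lesson, so treat the following as an optional, hedged illustration with hypothetical ratings on a 4-point relevance scale.

```python
# Hypothetical relevance ratings (1 = not relevant ... 4 = highly relevant)
# given by a panel of five experts for one questionnaire item.
expert_ratings = [4, 3, 4, 4, 3]

# I-CVI = proportion of experts rating the item as relevant (3 or 4).
relevant_votes = sum(1 for rating in expert_ratings if rating >= 3)
i_cvi = relevant_votes / len(expert_ratings)

print(f"Item-level CVI: {i_cvi:.2f}")  # 1.00 here: all five experts rated the item relevant
```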
THE NATURE OF DATA ANALYSIS
• DATA ANALYSIS is a process to sort, reduce, organize and give meaning to the data collected. The techniques in data analysis include descriptive and inferential statistics. This is based on the research questions, objectives, hypotheses, and methodology used and described in the results or findings of the study.

STRATEGY FOR QUANTITATIVE ANALYSIS
• QUANTITATIVE ANALYSIS of data is a process of organizing numerical data into a series of steps for purposes of analysis and interpretation and to ensure precise, accurate and conclusive findings of the study.
• Steps:
  1. Pre-analytic phase. This phase consists of various activities, such as clerical and administrative tasks, which include the following steps:
     a. Log in and edit raw data for completeness.
     b. Choose a software package for data analysis.
     c. Code data, where information is transformed into symbols or numbers. Data must be recorded in a precise and consistent manner.
     d. Enter data into a computer file and verify.
     e. Inspect data for accurateness and irregularities, such as values that lie outside the normal range.
     f. Clean the data through editing and checking the consistency of internal data; check for errors by testing the compatibility of the data with the objective of the study.
     g. Create and document an analysis file. Prepare a codebook by listing and coding each variable and other basic information. The codebook can be generated through a statistics or data entry program.
  2. Preliminary Assessment of Data. The researcher outlines activities to anticipate major problems that may arise while dealing with data.
     a. Assess missing data and problems and determine the number of variables missing for each subject.
     b. Assess the quality of data and examine key variables, data distribution, limited variability, extreme skewness, and the appropriateness of the statistical test.
     c. Assess bias and the extent of extraneous factors that affect the data, such as:
        • Nonresponse bias – for those people who participated in the study but gave no response on a certain variable.
        • Selection bias – these are characteristics of the subjects who did not conform with the inclusion criteria.
        • Attrition bias or dropout rate – those people who did not continue their participation in the study.
     d. Assess assumptions for inferential tests. Statistical tests are based on conditions presumed to be true; when these conditions are violated, the results can be misleading or invalid.
  3. Preliminary Actions. The researcher prepares data for computer use prior to analysis and interpretation.
     a. Reform and transform data. This is done through commands from the computer to create a new variable that is equal to the original value through item reversal or recoding.
     b. Address missing values and problems.
     c. Construct scales and composite indexes.
     d. Perform a peripheral analysis through pooling of data from different sources and comparing them in terms of key research variables.
  4. Principal Analysis. The researcher can now proceed with substantive data analysis, having cleaned the data, resolved problems regarding missing data, and completed data transformation.
     a. Perform descriptive statistical analysis.
     b. Perform bivariate inferential statistical analysis.
     c. Perform multivariate analysis.
     d. Perform needed post-hoc analysis.
  5. Interpretation of Results. Research findings must have substantive bearing on the objectives of the study, existing theories, related literature and research methods.
     Five aspects in interpreting the results of the study:
     a. Credibility of results. Inferences of empirical studies must be based on the truthfulness of the evidence gathered. Results must be examined for consistency with other studies and existing theories. Any discrepancies must be verified and validated.
     b. Meaningfulness of results. Research outcomes must be within the context of the hypothesis testing, with precise information and estimates.
     c. Importance of the results. Research findings are significant or important if group responses and differences are well observed and proven through empirical testing.
     d. Generalizability of the results. Research findings are applicable to a wider population and can generate more similar responses regardless of sample size.
     e. Implications of the results. Once the study has established the credibility, meaning, importance and generalizability of the results, the researcher can now make recommendations in terms of theory development, nursing education, nursing practice and future research.
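As a closing illustration of the pre-analytic and preliminary-assessment steps above (inspecting for out-of-range values and assessing missing data), here is a minimal sketch in plain Python; the variable, valid range, and records are hypothetical.

```python
# Hypothetical encoded responses for one variable (age in years);
# None marks a missing value.
ages = [34, 29, None, 41, 132, 27, None, 56, -3, 48]

VALID_MIN, VALID_MAX = 18, 100  # assumed plausible range for this study

missing = [i for i, value in enumerate(ages) if value is None]
out_of_range = [i for i, value in enumerate(ages)
                if value is not None and not (VALID_MIN <= value <= VALID_MAX)]

print(f"Missing values at positions: {missing}")             # [2, 6]
print(f"Out-of-range values at positions: {out_of_range}")   # [4, 8] (132 and -3)
# Flagged cases would be verified against the raw questionnaires
# and corrected or treated as missing before the principal analysis.
```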
