Hypotheses
A hypothesis is a precise, testable prediction of a study's outcome.
There are four main types of hypothesis:
A null hypothesis (H0) predicts no difference in findings between conditions; it typically begins ‘There will be no difference...’.
An alternative hypothesis (Ha or H1) predicts a significant difference in findings between conditions; in an experiment it is also called the experimental hypothesis.
One-tailed (directional) hypotheses predict the direction of the outcome: higher, lower, more, less. In correlational studies they predict a positive or a negative correlation.
Two-tailed (non-directional) hypotheses predict a difference between the conditions of the independent variable without specifying its direction; they typically begin ‘There will be a difference...’.
Every study has both a null and an alternative hypothesis.
After the study, the psychologist must accept one hypothesis and reject the other.
If a difference is found, the psychologist accepts the alternative hypothesis and rejects the null; if no difference is found, the reverse applies.
Sampling methods
Sampling involves choosing a representative sample from a population.
Target Population
The target population is the group a researcher wants to generalize about; because the whole population cannot usually be studied, a sample is chosen from it.
A representative sample is one whose characteristics match those of the target population.
Volunteer (self-selected) samples are recruited through advertisements in newspapers, on noticeboards, or online.
Opportunity sampling, often called convenience sampling, uses participants who are available and willing at the time of the research, chosen purely for convenience.
Random sampling gives every member of the target population an equal chance of being chosen, for example by drawing names from a hat.
Systematic sampling selects every Nth member of the target population, where N = population size / sample size (illustrated in the sketch after this list).
Stratified sampling selects participants from each subgroup of the population in proportion to the subgroup's size.
Snowball sampling: researchers identify a few initial participants and ask them to recruit further participants.
Quota sampling sets a fixed quota for each subgroup, filled non-randomly; for example, researchers might be required to recruit 90 people, 30 of whom are unemployed.
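A minimal sketch of how random, systematic, and stratified selection could be simulated, assuming a hypothetical population of 90 people (30 of them unemployed, as in the quota example above); the numbers and field names are illustrative, not from any real study.

```python
import random

# Hypothetical target population: 90 people, 30 of them unemployed.
population = [{"id": i, "employed": i >= 30} for i in range(90)]
sample_size = 9

# Random sampling: every member has an equal chance of being chosen.
random_sample = random.sample(population, sample_size)

# Systematic sampling: every Nth member, where N = population size / sample size.
n = len(population) // sample_size              # N = 90 / 9 = 10
systematic_sample = population[::n][:sample_size]

# Stratified sampling: sample each subgroup in proportion to its share of the population.
unemployed = [p for p in population if not p["employed"]]
employed = [p for p in population if p["employed"]]
stratified_sample = (
    random.sample(unemployed, round(sample_size * len(unemployed) / len(population)))  # 3 unemployed
    + random.sample(employed, round(sample_size * len(employed) / len(population)))    # 6 employed
)

print(len(random_sample), len(systematic_sample), len(stratified_sample))  # 9 9 9
```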
Variables
Experiments usually involve independent and dependent variables.
The independent variable (IV) is the variable the experimenter changes between conditions; it is expected to directly affect the dependent variable.
The dependent variable (DV) is the outcome of the experiment, the variable that is measured.
Operationalization defines variables in measurable terms so that they can be tested.
For example, we cannot measure a person's happiness directly, but we can count how many times they smile over two hours.
Operationalizing variables also makes a study easier to replicate, which is crucial for verifying results.
Extraneous variables are all variables other than the independent variable that could affect the results.
They might include a participant's IQ, gender, or age, or features of the surroundings such as lighting or noise.
Demand characteristics are extraneous variables that arise when participants work out the study's aims and begin to behave in a particular way.
For example, critics of Milgram's study claimed that participants realized the shocks were fake and simply administered them as instructed.
Controlling extraneous variables prevents them from becoming confounding variables.
Randomly allocating participants to conditions, or using a matched pairs design, helps reduce participant variables.
Standardized procedures ensure that all participants in a condition are treated in the same way.
Experimental design
Experimental design involves assigning individuals to each independent variable condition, such as a control or experimental group.
Independent groups design (between-groups design): each participant is assigned to only one group. Most independent designs use random allocation to assign people to groups.
Matched pairs design: each participant takes part in only one group, but the two groups are matched on some significant attribute (e.g., ability, sex, age).
Repeated measures design (within groups): each participant takes part in both conditions, so participants are compared with themselves.
Order effects are the key problem with repeated measures: taking part in one condition may change how participants perform in the next.
Participants may do better in the second condition through practice or learning about the task, or worse through fatigue or boredom.
Counterbalancing prevents order effects by ensuring that each condition is completed first by half the participants and second by the other half (sketched below).
To compare two groups on an independent variable, we must ensure that they do not vary in any other significant manner.
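A minimal sketch of counterbalancing, assuming two hypothetical conditions labelled A and B: alternating which condition each participant completes first spreads any order effects evenly across the two conditions.

```python
# Counterbalancing sketch: alternate which condition is completed first,
# so each condition is done first by half the sample and second by the other half.
participants = [f"P{i}" for i in range(1, 9)]      # eight hypothetical participants

orders = []
for index, person in enumerate(participants):
    if index % 2 == 0:
        orders.append((person, ["A", "B"]))        # condition A first, then B
    else:
        orders.append((person, ["B", "A"]))        # condition B first, then A

for person, order in orders:
    print(person, "->", " then ".join(order))
```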
Experimental Methods
Every experiment has an IV and DV.
Laboratory experiments are conducted in a highly controlled setting (not necessarily an actual laboratory), allowing reliable and objective data to be gathered.
Using a standardized procedure, the researcher decides where the experiment takes place, at what time, with which participants, and in what circumstances.
Field experiments are conducted in participants' natural environments. The researcher still manipulates the IV, but in a real-life setting; extraneous variables can be controlled, though less easily than in a lab experiment.
Natural experiments study an IV that occurs naturally rather than being manipulated by the researcher; such events are infrequent and participants are not randomly assigned to conditions.
Case Study
A case study is an in-depth investigation of a person, group, organization, event, or community. Information is gathered from the individual and from their family and friends.
Data may be collected through interviews, psychological tests, observations, and experiments, and the subject is typically followed over time.
Freud's case studies are famous in psychology. To comprehend and treat his patients, he meticulously investigated their private lives.
Case studies provide ecological validity and rich qualitative data, but it is hard to generalize from a single case.
Correlational Studies
Correlation measures the relationship between two variables: one variable is the predictor and the other the outcome.
Correlational studies collect two measurements from a set of individuals and examine their connection.
The predictor variable is measured before the outcome variable and is used to predict it, hence its name.
Variable relationships may be graphed or scored as a correlation coefficient.
Positive correlation occurs when one variable rises with the other.
Negative correlations occur when one variable increases and the other decreases.
Zero correlation happens when variables are unrelated.
After inspecting the scattergraph, Spearman's rho can be used to test whether the association between the two variables is significant (see the sketch at the end of this section).
The test produces a correlation coefficient between -1 and +1; the closer the value is to 1 (or -1), the stronger the relationship. For example, +0.63 or -0.63 are possible values.
Correlations can be described as perfect, strong, or weak, and as positive, negative, or zero; these patterns are typically shown on charts or scattergraphs.
However, a correlation does not indicate that one variable's change causes the other's change. Correlations only indicate relationships between variables.
Correlation does not necessarily indicate causation, because a third variable may be responsible for the relationship.
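A minimal sketch of computing Spearman's rho in code, assuming scipy is available; the paired scores below are made up purely for illustration.

```python
from scipy.stats import spearmanr

# Hypothetical paired measurements from the same ten participants
hours_revised = [2, 5, 1, 8, 6, 3, 7, 4, 9, 10]
exam_score    = [35, 60, 30, 82, 66, 40, 75, 62, 88, 90]

rho, p_value = spearmanr(hours_revised, exam_score)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.4f}")

# A rho close to +1 suggests a strong positive correlation, but on its own
# it does not show that more revision causes higher scores.
```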
Interview Methods
Interviews may be structured or unstructured.
Structured interviews are kept as uniform as possible; a job interview is a familiar example.
Every participant is asked the same questions in the same order. The researcher decides the wording and order of the questions and the response options, often using a questionnaire.
The interviewer maintains a formal distance from the interviewee.
Unstructured interviews are more like casual conversations. The researcher takes an informal approach to break down social barriers and usually starts with casual discussion.
The researcher may ask whatever questions they choose, phrased in their own style, with follow-up questions based on the participant's responses.
This type of interview yields qualitative data.
Qualitative study on attitudes and values uses unstructured interviews. They allow researchers to explore social actors' subjective perspectives, but they seldom allow generalization.
Survey Method
Questionnaires can be thought of as written interviews; they may be administered face-to-face, by telephone, or by post.
Questions may be open-ended, inviting a free response, or closed, requiring a brief reply or a choice from the options provided.
Question wording is crucial to minimize bias, ambiguity, "leading" the respondent, or causing offense.
Observations
Covert observation: the researcher observes participants without their knowledge until the study is complete. This approach may raise deception and consent issues.
Overt observation: the researcher informs participants that they are being observed and why.
Controlled observation: behavior is observed in a structured setting such as a lab, as in Bandura's Bobo doll research.
Naturalistic observation: spontaneous behavior is observed in its natural setting.
Participant observation: the researcher joins the group and interacts with its members.
"Fly on the wall": The researcher does not interact with the subjects. Participants are observed remotelyilot studies test the viability of major project milestones on a small scale.
A pilot research entails choosing a few persons to test an investigation's methods. Identifying researcher method problems may save time and money.
A pilot study can help the researcher identify ambiguities in the information given to participants or problems with the task.
If the task is too hard and no one can score or complete it, the researcher gets a floor effect.
If the task is too easy, everyone "hits the ceiling" and achieves full marks or peak performance (a ceiling effect).
Research Design
Cross-sectional research compares different demographic groups at the same point in time.
Reliability means that repeating a measurement gives the same result.
Test-retest reliability — giving the same test to the same individuals on a later occasion to check that the results are consistent.
Inter-observer reliability—how well two or more observers agree.
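A minimal sketch of one way to check inter-observer reliability, assuming two hypothetical observers who coded the same ten observation intervals; simple percentage agreement is used here, though correlational measures are also common.

```python
# Hypothetical codings from two observers watching the same ten intervals
observer_a = ["aggressive", "passive", "aggressive", "passive", "passive",
              "aggressive", "passive", "aggressive", "aggressive", "passive"]
observer_b = ["aggressive", "passive", "aggressive", "aggressive", "passive",
              "aggressive", "passive", "aggressive", "passive", "passive"]

# Count the intervals on which the two observers agree
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = 100 * agreements / len(observer_a)
print(f"Inter-observer agreement: {percent_agreement:.0f}%")   # 80% for these made-up codings
```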
Meta-Analysis
A meta-analysis involves identifying an objective and searching for research papers with comparable aims or hypotheses.
Multiple databases are searched, and studies are included or excluded from the analysis.
Strengths: combining studies gives a wider range of samples, which strengthens the results.
Weaknesses: differences in research design can make studies difficult to compare.
Peer Review
The journal asks two or more psychologists working in a related area to review the manuscript, usually unpaid. Peer reviewers evaluate the study's methodology and design, originality, validity, content, structure, and language.
Reviewer feedback decides the article's fate: accepted as is, accepted with adjustments, sent back to the author to revise and resubmit, or rejected outright.
Based on reviewer feedback, the editor accepts or rejects the research report.
Peer review prevents bad data from being released, verifies results and methods, and rates university research units.
Some doubt peer review can prevent bogus research.
The internet has led to more research and academic commentary being published without peer review; however, online platforms are emerging where anyone can voice their ideas and evaluate research.
Data Types
Quantitative data is numerical, such as response times or number of errors, indicating how much, how long, or how many. It is collected from closed questionnaire questions and behavioral categories in observations.
Qualitative data is any non-numerical information that can be observed and recorded in writing or speech. It comes from open questions and observational research.
Primary data is collected firsthand by the researcher for the purposes of the investigation.
Secondary data is collected by someone else, for example from journals, books, and articles.
Validity
Validity refers to whether a study measures or represents what it claims to, and whether the observed effect is genuine.
Face validity: does the test appear to measure what it is meant to? This is checked by eyeballing the measure or having an expert confirm it.
Ecological validity is the extent to which a study's findings apply to real life.
Temporal validity is the extent to which a study's findings hold across different historical periods.
Science Features
Paradigm - Scientific assumptions and procedures.
Paradigm shift - a scientific revolution in which the dominant unifying theory of a scientific field is replaced.
Objectivity—minimizing personal bias to avoid influencing research.
Empirical method—Science based on actual observation and experience.
Replicability—How well other researchers can replicate scientific processes and discoveries.
Falsifiability—the principle that a scientific theory must be capable of being proved false.
Statistical Testing
A significant finding is one where there is a low probability that the observed difference, correlation, or relationship in the variables arose by chance.
If the test is not significant, we accept the null hypothesis and reject the alternative hypothesis; the null hypothesis asserts that there is no effect.
A Type I error occurs when the null hypothesis is rejected when it should have been accepted (a false positive), typically through optimism and too lenient a significance threshold.
A Type II error occurs when the null hypothesis is accepted when it should have been rejected (a false negative), typically through pessimism and too strict a threshold.
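A minimal sketch of how a significance decision might look in code, assuming scipy is available and using two made-up groups of scores; the 0.05 threshold is the conventional significance level, and the test choice (Mann-Whitney U) is just one example.

```python
from scipy.stats import mannwhitneyu

# Hypothetical scores for two independent groups
group_a = [12, 15, 14, 10, 13, 16, 11, 14]
group_b = [18, 21, 19, 17, 22, 20, 19, 18]

stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")

alpha = 0.05   # conventional significance level; a much more lenient level risks a Type I error
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis and accept the alternative.")
else:
    print(f"p = {p_value:.4f} >= {alpha}: retain the null hypothesis.")
```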
Ethical Issues
Informed consent means participants are given enough information about the study to decide whether to take part. A drawback is that knowing the research aims may lead participants to adjust their behavior.
Seeking full formal consent can undermine the study's aims, so presumptive consent may be used instead; even then, participants may not fully understand what they are agreeing to.
Deception involves intentionally misleading participants or concealing information, so it must be approved by an ethics committee; debriefing afterwards cannot undo what participants experienced.
Right to withdraw: participants should be told at the start that they may leave the study if they feel uncomfortable.
Withdrawal can introduce bias, since those who remain may be more obedient, and some may not withdraw because they were offered incentives or feel they would be spoiling the research. Researchers may also allow participants to withdraw their data afterwards.
Participants must be protected from harm; if harm is detected, the researcher should stop the study, although the harm may not become apparent during the research.
Personal data must be kept confidential. Researchers should use numbers or pseudonyms instead of names, although it may still be possible to identify participants.