Responding to a disproven or failed research hypothesis

  • Charlesworth Author Services
  • 03 August, 2022

When confronted with a disproven or failed hypothesis, after having expended so much time and effort, precisely how should researchers respond? Responding well to a disproven or failed hypothesis is an essential component of scientific research. As a researcher, it helps to learn ‘research resilience’: the ability to carefully analyse, effectively document and broadly disseminate failed hypotheses, all with an eye towards learning and future progress. This article explores common reasons why a hypothesis fails, as well as specific ways you can respond and lessons you can learn from this.

Note: This article assumes that you are working on a hypothesis (not a null hypothesis): in other words, you are seeking to prove that the hypothesis is true, rather than to disprove it.

Reasons why a hypothesis is disproven/fails

Hypotheses are disproved or fail for a number of reasons, including:

  • The researcher’s preconception is incorrect, which leads to a flawed and failed hypothesis.
  • The researcher’s findings are correct, but those findings aren’t relevant.
  • The data set/sample size may not be sufficiently large to yield meaningful results. (If interested, learn more about this here: The importance of having Large Sample Sizes for your research)
  • The hypothesis itself lies outside the realm of science: it cannot be tested by experiments whose results have the potential to show that the idea is false.

Responding to a disproved hypothesis

After weeks or even months of intense thinking and experimenting, you have come to the conclusion that your hypothesis is disproven. So, what can you do to respond to such a disheartening realisation? Here are some practical steps you can take.

  • Analyse the hypothesis carefully, as well as your research. Performing a rigorous, methodical ‘post-mortem’ evaluation of your hypothesis and experiments will enable you to learn from them and to effectively and efficiently share your reflections with others. Use the following questions to evaluate how the research was conducted:
  • Did you conduct the experiment(s) correctly?
  • Was the study sufficiently powered to truly provide a definitive answer? (A quick way to check this is sketched after this list.)
  • Would a larger, better powered study – possibly conducted collaboratively with other research centres – be necessary, appropriate or helpful?
  • Would altering the experiment — or conducting different experiments — more appropriately answer your hypothesis?
  • Share the disproven hypothesis, and your experiments and analysis, with colleagues. Sharing negative data can help to interpret positive results from related studies and can help you adjust your experimental design.
  • Consider the possibility that the hypothesis was not an attempt at gaining true scientific understanding, but rather was a measure of a prevailing bias.
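
If your outcome was compared between groups with a t-test, a quick retrospective power check can show whether the design ever had a realistic chance of detecting the smallest effect you care about. The following is only an illustrative R sketch; the group size, effect size and standard deviation are made-up numbers, not values from any particular study:

    # Hypothetical numbers: 20 participants per group, smallest effect of interest = 10,
    # outcome standard deviation = 15, two-sample two-sided t-test at alpha = .05
    power.t.test(n = 20, delta = 10, sd = 15, sig.level = 0.05)
    # If the reported power is well below 0.8, a null result says little about the
    # hypothesis, and a larger or collaborative follow-up study may be warranted.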

Positive lessons to be gained from a disproved hypothesis

Even the most successful, creative and thoughtful researchers encounter failed hypotheses. What makes them stand out is their ability to learn from failure. The following considerations may assist you to learn and gain from failed hypotheses:

  • Failure can be beneficial if it leads directly toward future exploration.
  • Does the failed hypothesis definitively close the door on further research? If so, such definitive knowledge is progress.
  • Does the failed hypothesis simply point to the need to wait for a future date when more refined experiments or analysis can be conducted? That knowledge, too, is useful. 
  • ‘Atomising’ (breaking down and dissecting) the reasoning behind the conceptual foundation of the failed hypothesis may uncover flawed yet correctable thinking in how the hypothesis was developed. 
  • Failure leads to investigation and creativity in the pursuit of viable alternative hypotheses, experiments and statistical analyses. Better theoretical or experimental models often arise out of the ashes of a failed hypothesis, as do studies with more rigorously attained evidence (such as larger-scale, low-bias meta-analyses ). 

Considering a post-hoc analysis

A failed hypothesis can then prompt you to conduct a post-hoc analysis. (If interested, learn more about it here: Significance and use of Post-hoc Analysis studies )

All is not lost if you conclude you have a failed hypothesis. Remember: A hypothesis can’t be right unless it can be proven wrong.  Developing research resilience will reward you with long-term success.


Statistics LibreTexts

7.1: Basics of Hypothesis Testing


  • Kathryn Kozak
  • Coconino Community College


To understand the process of a hypothesis test, you first need to understand what a hypothesis is: an educated guess about a parameter. Once you have the hypothesis, you collect data and use the data to determine whether there is enough evidence to show that the hypothesis is true. However, in hypothesis testing you actually assume something else is true, and then you look at your data to see how likely it would be to get a result like yours under that assumption. If such a result would be very unusual, then you might think that your assumption is actually false. If you are able to say this assumption is false, then your hypothesis must be true. This is similar to a proof by contradiction: you assume the opposite of your hypothesis is true and show that it can’t be true. If this happens, then your hypothesis must be true. All hypothesis tests go through the same process. Once you have the process down, the concept is much easier. It is easier to see the process by looking at an example. Concepts that are needed will be detailed in this example.

Example \(\PageIndex{1}\) basics of hypothesis testing

Suppose a manufacturer of the XJ35 battery claims the mean life of the battery is 500 days with a standard deviation of 25 days. You are the buyer of this battery and you think this claim is inflated. You would like to test your belief because without a good reason you can’t get out of your contract.

What do you do?

Well first, you should know what you are trying to measure. Define the random variable.

Let x = life of an XJ35 battery

Now you are not just trying to find different x values. You are trying to find what the true mean is. Since you are trying to find it, it must be unknown. You don’t think it is 500 days. If you did, you wouldn’t be doing any testing. The true mean, \(\mu\), is unknown. That means you should define that too.

Let \(\mu\) = mean life of an XJ35 battery

You may want to collect a sample. What kind of sample?

You could ask the manufacturers to give you batteries, but there is a chance that there could be some bias in the batteries they pick. To reduce the chance of bias, it is best to take a random sample.

How big should the sample be?

A sample of size 30 or more means that you can use the central limit theorem. Pick a sample of size 30.

Example \(\PageIndex{1}\) contains the data for the sample you collected:

Now what should you do? Looking at the data set, you see some of the times are above 500 and some are below. But looking at all of the numbers is too difficult. It might be helpful to calculate the mean for this sample.

The sample mean is \(\overline{x} = 490\) days. Looking at the sample mean, one might think that you are right. However, the standard deviation and the sample size also play a role, so maybe you are wrong.

Before going any farther, it is time to formalize a few definitions.

You have a guess that the mean life of a battery is less than 500 days. This is opposed to what the manufacturer claims. There really are two hypotheses, which are just guesses here – the one that the manufacturer claims and the one that you believe. It is helpful to have names for them.

Definition \(\PageIndex{1}\)

Null Hypothesis: the historical value, claim, or product specification. The symbol used is \(H_{o}\).

Definition \(\PageIndex{2}\)

Alternate Hypothesis: what you want to prove. This is what you want to accept as true when you reject the null hypothesis. There are two symbols that are commonly used for the alternative hypothesis: \(H_{A}\) or \(H_{1}\). The symbol \(H_{A}\) will be used in this book.

In general, the hypotheses look something like this:

\(H_{o} : \mu=\mu_{o}\)

\(H_{A} : \mu<\mu_{o}\)

where \(\mu_{o}\) just represents the value that the claim says the population mean is actually equal to.

Also, \(H_{A}\) can be less than, greater than, or not equal to.

For this problem:

\(H_{o} : \mu=500\) days, since the manufacturer says the mean life of a battery is 500 days.

\(H_{A} : \mu<500\) days, since you believe that the mean life of the battery is less than 500 days.

Now back to the mean. You have a sample mean of 490 days. Is this small enough to believe that you are right and the manufacturer is wrong? How small does it have to be?

If you calculated a sample mean of 235, you would definitely believe the population mean is less than 500. But even if you had a sample mean of 435 you would probably believe that the true mean was less than 500. What about 475? Or 483? There is some point where you would stop being so sure that the population mean is less than 500. That point separates the values of where you are sure or pretty sure that the mean is less than 500 from the area where you are not so sure. How do you find that point?

Well it depends on how much error you want to make. Of course you don’t want to make any errors, but unfortunately that is unavoidable in statistics. You need to figure out how much error you made with your sample. Take the sample mean, and find the probability of getting another sample mean less than it, assuming for the moment that the manufacturer is right. The idea behind this is that you want to know what is the chance that you could have come up with your sample mean even if the population mean really is 500 days.

You want to find \(P\left(\overline{x}<490 | H_{o} \text { is true }\right)=P(\overline{x}<490 | \mu=500)\)

To compute this probability, you need to know how the sample mean is distributed. Since the sample size is at least 30, then you know the sample mean is approximately normally distributed. Remember \(\mu_{\overline{x}}=\mu\) and \(\sigma_{\overline{x}}=\dfrac{\sigma}{\sqrt{n}}\)

A picture is always useful.

[Figure: the sampling distribution of \(\overline{x}\) under \(H_{o}\): a normal curve centred at 500 days, with the observed sample mean of 490 marked in the left tail.]

Before calculating the probability, it is useful to see how many standard deviations away from the mean the sample mean is. Using the formula for the z-score from chapter 6, you find

\(z=\dfrac{\overline{x}-\mu_{o}}{\sigma / \sqrt{n}}=\dfrac{490-500}{25 / \sqrt{30}}=-2.19\)

This sample mean is more than two standard deviations away from the mean. That seems pretty far, but you should look at the probability too.

On TI-83/84:

\(P(\overline{x}<490 | \mu=500)=\text { normalcdf }(-1 \mathrm{E} 99,490,500,25 / \sqrt{30}) \approx 0.0142\)

Using R:

\(P(\overline{x}<490 | \mu=500)=\operatorname{pnorm}(490,500,25 / \operatorname{sqrt}(30)) \approx 0.0142\)
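
The same calculation can also be scripted directly in R; a minimal sketch using the numbers above:

    xbar <- 490; mu0 <- 500; sigma <- 25; n <- 30
    z <- (xbar - mu0) / (sigma / sqrt(n))   # test statistic, about -2.19
    pnorm(z)                                # left-tail probability, about 0.0142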

There is a 1.42% chance that you could find a sample mean less than 490 when the population mean is 500 days. This is really small, so the chances are that the assumption that the population mean is 500 days is wrong, and you can reject the manufacturer’s claim. But how do you quantify really small? Is 5% or 10% or 15% really small? How do you decide?

Before you answer that question, a couple more definitions are needed.

Definition \(\PageIndex{3}\)

Test Statistic: \(z=\dfrac{\overline{x}-\mu_{o}}{\sigma / \sqrt{n}}\), so called because it is calculated as part of the testing of the hypothesis.

Definition \(\PageIndex{4}\)

p-value: the probability that the test statistic will take on a value as extreme as, or more extreme than, the observed test statistic, given that the null hypothesis is true. It is the probability that was calculated above.

Now, how small is small enough? To answer that, you really want to know the types of errors you can make.

There are actually only two errors that can be made. The first error is if you say that \(H_{o}\) is false, when in fact it is true. This means you reject \(H_{o}\) when \(H_{o}\) was true. The second error is if you say that \(H_{o}\) is true, when in fact it is false. This means you fail to reject \(H_{o}\) when \(H_{o}\) is false. The following table organizes this for you:

Type of errors:

  • Reject \(H_{o}\) when \(H_{o}\) is true: Type I error.
  • Reject \(H_{o}\) when \(H_{o}\) is false: correct decision.
  • Fail to reject \(H_{o}\) when \(H_{o}\) is true: correct decision.
  • Fail to reject \(H_{o}\) when \(H_{o}\) is false: Type II error.

Definition \(\PageIndex{5}\)

Type I Error is rejecting \(H_{o}\) when \(H_{o}\) is true, and

Definition \(\PageIndex{6}\)

Type II Error is failing to reject \(H_{o}\) when \(H_{o}\) is false.

Since these are the errors, then one can define the probabilities attached to each error.

Definition \(\PageIndex{7}\)

\(\alpha\) = P(type I error) = P(rejecting \(H_{o} | H_{o}\) is true)

Definition \(\PageIndex{8}\)

\(\beta\) = P(type II error) = P(failing to reject \(H_{o} | H_{o}\) is false)

\(\alpha\) is also called the level of significance .

Another common concept that is used is Power = \(1-\beta \).

Now there is a relationship between \(\alpha\) and \(\beta\). They are not complements of each other. How are they related?

If \(\alpha\) increases that means the chances of making a type I error will increase. It is more likely that a type I error will occur. It makes sense that you are less likely to make type II errors, only because you will be rejecting \(H_{o}\) more often. You will be failing to reject \(H_{o}\) less, and therefore, the chance of making a type II error will decrease. Thus, as \(\alpha\) increases, \(\beta\) will decrease, and vice versa. That makes them seem like complements, but they aren’t complements. What gives? Consider one more factor – sample size.

Consider if you have a larger sample that is representative of the population; then it makes sense that you have more accuracy than with a smaller sample. Think of it this way: which would you trust more, a sample mean of 490 from a sample size of 35 or from a sample size of 350 (assuming a representative sample)? Of course the 350, because there are more data points and so more accuracy. If you are more accurate, then there is less chance that you will make any error. By increasing the sample size of a representative sample, you decrease both \(\alpha\) and \(\beta\).

Summary of all of this:

  • For a certain sample size, n , if \(\alpha\) increases, \(\beta\) decreases.
  • For a certain level of significance, \(\alpha\), if n increases, \(\beta\) decreases.

Now how do you find \(\alpha\) and \(\beta\)? Well, \(\alpha\) is actually chosen. There are only three values that are usually picked for \(\alpha\): 0.01, 0.05, and 0.10. \(\beta\) is very difficult to find, so usually it isn’t found. If you want to make sure it is small, you take as large a sample as you can afford, provided it is a representative sample. This is one use of the power: you want \(\beta\) to be small and the power of the test to be large.
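
One way to get a feel for \(\beta\) is to commit to a specific alternative value of the mean and compute it directly. Here is a minimal R sketch for the battery example, assuming purely for illustration that the true mean is really 490 days:

    alpha <- 0.10; mu0 <- 500; mu_alt <- 490; sigma <- 25; n <- 30
    xbar_crit <- mu0 + qnorm(alpha) * sigma / sqrt(n)      # reject Ho when the sample mean falls below this cutoff
    beta <- 1 - pnorm(xbar_crit, mu_alt, sigma / sqrt(n))  # about 0.18, so power is about 0.82
    # Re-running with n <- 100 shrinks beta sharply while alpha stays fixed at 0.10.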

Which \(\alpha\) do you pick? Well, that depends on what you are working on. Remember, in this example you are the buyer who is trying to get out of a contract to buy these batteries. If you make a type I error, you say that the batteries are bad when they aren’t, and most likely the manufacturer will sue you. You want to avoid this, so you might pick \(\alpha\) to be 0.01. This way you have a small chance of making a type I error. Of course, this means you have more of a chance of making a type II error. No big deal, right? But what if the batteries are used in pacemakers, and you tell the person that their pacemaker’s batteries are good for 500 days when they actually last less? If you make a type II error, you say that the batteries do last 500 days when they last less, and then you have the possibility of killing someone. You certainly do not want to do this. In this case you might want to pick \(\alpha\) as 0.10. If both errors are equally bad, then pick \(\alpha\) as 0.05.

The above discussion is why the choice of \(\alpha\) depends on what you are researching. As the researcher, you are the one who needs to decide what \(\alpha\) level to use based on your analysis of the consequences of making each error.

If a type I error is really bad, then pick \(\alpha\) = 0.01.

If a type II error is really bad, then pick \(\alpha\) = 0.10.

If neither error is bad, or both are equally bad, then pick \(\alpha\) = 0.05.

The main thing is to always pick the \(\alpha\) before you collect the data and start the test.

The above discussion was long, but it is really important information. If you don’t know what the errors of the test are about, then there really is no point in making conclusions with the tests. Make sure you understand what the two errors are and what the probabilities are for them.

Now it is time to go back to the example and put this all together. This is the basic structure of testing a hypothesis, usually called a hypothesis test. Since this one has a test statistic involving z, it is also called a z-test. And since there is only one sample, it is usually called a one-sample z-test.

Example \(\PageIndex{2}\) battery example revisited

  • State the random variable and the parameter in words.
  • State the null and alternative hypotheses and the level of significance.
  • State and check the assumptions for the test: a random sample of size n is taken; the population standard deviation is known; and the sample size is at least 30 or the population of the random variable is normally distributed.
  • Find the sample statistic, test statistic, and p-value.
  • Conclusion
  • Interpretation

1. x = life of battery

\(\mu\) = mean life of an XJ35 battery

2. \(H_{o} : \mu=500\) days

\(H_{A} : \mu<500\) days

\(\alpha = 0.10\) (from above discussion about consequences)

3. Every hypothesis test has some assumptions that must be met to make sure that the results of the test are valid. The assumptions are different for each test. This test has the following assumptions.

  • This occurred in this example, since it was stated that a random sample of 30 battery lives was taken.
  • This is true, since it was given in the problem.
  • The sample size was 30, so this condition is met.

4. The test statistic depends on how many samples there are, what parameter you are testing, and assumptions that need to be checked. In this case, there is one sample and you are testing the mean. The assumptions were checked above.

Sample statistic:

\(\overline{x} = 490\)

Test statistic:

\(z=\dfrac{\overline{x}-\mu_{o}}{\sigma / \sqrt{n}}=\dfrac{490-500}{25 / \sqrt{30}} \approx-2.19\)

Using TI-83/84:

\(P(\overline{x}<490 | \mu=500)=\text { normalcdf }(-1 \mathrm{E} 99,490,500,25 / \sqrt{30}) \approx 0.0142\)

Using R:

\(P(\overline{x}<490 | \mu=500)=\operatorname{pnorm}(490,500,25 / \operatorname{sqrt}(30)) \approx 0.0142\)

5. Now what? Well, this p-value is 0.0142. This is a lot smaller than the amount of error you were willing to accept in the problem, \(\alpha = 0.10\). That means that finding a sample mean less than 490 days would be unusual if \(H_{o}\) were true. This should make you think that \(H_{o}\) is not true. You should reject \(H_{o}\).

In fact, in general:

Reject \(H_{o}\) if the p-value < \(\alpha\) and

Fail to reject \(H_{o}\) if the p-value \(\geq \alpha\).

6. Since you rejected \(H_{o}\), what does this mean in the real world? That is what goes in the interpretation. Since you rejected the claim by the manufacturer that the mean life of the batteries is 500 days, then you now can believe that your hypothesis was correct. In other words, there is enough evidence to show that the mean life of the battery is less than 500 days.

Now that you know that the batteries last less than 500 days, should you cancel the contract? Statistically, there is evidence that the batteries do not last as long as the manufacturer says they should. However, based on this sample the batteries last only ten days less on average. There may not be practical significance in this case. Ten days does not seem like a large difference. In reality, if the batteries are used in pacemakers, then you would probably tell the patient to have the batteries replaced every year. You have a large buffer whether the batteries last 490 days or 500 days. It seems that it might not be worth it to break the contract over ten days. What if the 10 days were practically significant? Are there any other things you should consider? You might look at the business relationship with the manufacturer. You might also look at how much it would cost to find a new manufacturer. These are also questions to consider before making any changes. What this discussion should show you is that just because a hypothesis has statistical significance does not mean it has practical significance. The hypothesis test is just one part of a research process. There are other pieces that you need to consider.

That’s it. That is what a hypothesis test looks like. All hypothesis tests are done with the same six steps. Those general six steps are outlined below.

  • State the random variable and the parameter in words. This is where you define the unknowns in the problem: x = the random variable, and \(\mu\) = the mean of the random variable, if the parameter of interest is the mean. There are other parameters you can test, and you would use the appropriate symbol for that parameter.
  • State the null and alternative hypotheses and the level of significance. \(H_{o} : \mu=\mu_{o}\), where \(\mu_{o}\) is the known mean. \(H_{A} : \mu<\mu_{o}\), \(H_{A} : \mu>\mu_{o}\), or \(H_{A} : \mu \neq \mu_{o}\); use the appropriate one for your problem. Also, state your \(\alpha\) level here.
  • State and check the assumptions for a hypothesis test. Each hypothesis test has its own assumptions. They will be stated when the different hypothesis tests are discussed.
  • Find the sample statistic, test statistic, and p-value. This depends on what parameter you are working with, how many samples you have, and the assumptions of the test. The p-value depends on your \(H_{A}\). If you are doing the \(H_{A}\) with the less than, then it is a left-tailed test, and you find the probability of being in that left tail. If you are doing the \(H_{A}\) with the greater than, then it is a right-tailed test, and you find the probability of being in the right tail. If you are doing the \(H_{A}\) with the not equal to, then you are doing a two-tailed test, and you find the probability of being in both tails. Because of symmetry, you could find the probability in one tail and double this value to find the probability in both tails.
  • Conclusion. This is where you write reject \(H_{o}\) or fail to reject \(H_{o}\). The rule is: if the p-value < \(\alpha\), then reject \(H_{o}\). If the p-value \(\geq \alpha\), then fail to reject \(H_{o}\).
  • Interpretation. This is where you interpret, in real-world terms, the conclusion of the test. The conclusion of a hypothesis test is that you either have enough evidence to show \(H_{A}\) is true, or you do not have enough evidence to show \(H_{A}\) is true.
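
To make the recipe concrete, here is a minimal R sketch of the one-sample z-test described above. The function name and its defaults are invented for illustration and are not part of the text; the assumption checks (random sample, known \(\sigma\), n at least 30 or a normal population) still have to be done by hand before trusting the output:

    one_sample_z_test <- function(xbar, mu0, sigma, n, alpha = 0.05,
                                  alternative = c("less", "greater", "two.sided")) {
      alternative <- match.arg(alternative)
      z <- (xbar - mu0) / (sigma / sqrt(n))                  # test statistic
      p <- switch(alternative,
                  less      = pnorm(z),                      # left-tailed test
                  greater   = pnorm(z, lower.tail = FALSE),  # right-tailed test
                  two.sided = 2 * pnorm(-abs(z)))            # two-tailed test
      decision <- if (p < alpha) "reject Ho" else "fail to reject Ho"
      list(z = z, p.value = p, decision = decision)
    }
    one_sample_z_test(490, 500, 25, 30, alpha = 0.10, alternative = "less")  # battery example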

Sorry, one more concept about the conclusion and interpretation. First, the conclusion is that you reject \(H_{o}\) or you fail to reject \(H_{o}\). Why was it said like this? It is because you never accept the null hypothesis. If you wanted to accept the null hypothesis, then why do the test in the first place? In the interpretation, you either have enough evidence to show \(H_{A}\) is true, or you do not have enough evidence to show \(H_{A}\) is true. You wouldn’t want to go to all this work and then find out you wanted to accept the claim. Why go through the trouble? You always want to show that the alternative hypothesis is true. Sometimes you can do that and sometimes you can’t. It doesn’t mean you proved the null hypothesis; it just means you can’t prove the alternative hypothesis. Here is an example to demonstrate this.

Example \(\PageIndex{3}\) conclusion in hypothesis tests

In the U.S. court system a jury trial could be set up as a hypothesis test. To really help you see how this works, let’s use OJ Simpson as an example. In the court system, a person is presumed innocent until he/she is proven guilty, and this is your null hypothesis. OJ Simpson was a football player in the 1970s. In 1994 his ex-wife and her friend were killed. OJ Simpson was accused of the crime, and in 1995 the case was tried. The prosecutors wanted to prove OJ was guilty of killing his wife and her friend, and that is the alternative hypothesis

\(H_{0}\): OJ is innocent of killing his wife and her friend

\(H_{A}\): OJ is guilty of killing his wife and her friend

In this case, a verdict of not guilty was given. That does not mean that he is innocent of this crime. It means there was not enough evidence to prove he was guilty. Many people believe that OJ was guilty of this crime, but the jury did not feel that the evidence presented was enough to show there was guilt. The verdict in a jury trial is always guilty or not guilty!

The same is true in a hypothesis test. There is either enough or not enough evidence to show that alternative hypothesis. It is not that you proved the null hypothesis true.

When identifying hypotheses, it is important to state your random variable and the appropriate parameter you want to make a decision about. If you count something, then the random variable is the number of whatever you counted, and the parameter is the proportion of what you counted. If the random variable is something you measured, then the parameter is the mean of what you measured. (Note: there are other parameters you can calculate, and some analysis of those will be presented in later chapters.)

Example \(\PageIndex{4}\) stating hypotheses

Identify the hypotheses necessary to test the following statements:

  • The average salary of a teacher is more than $30,000.
  • The proportion of students who like math is less than 10%.
  • The average age of students in this class differs from 21.

a. x = salary of teacher

\(\mu\) = mean salary of teacher

The guess is that \(\mu>\$ 30,000\) and that is the alternative hypothesis.

The null hypothesis has the same parameter and number with an equal sign.

\(\begin{array}{l}{H_{0} : \mu=\$ 30,000} \\ {H_{A} : \mu>\$ 30,000}\end{array}\)

b. x = number of students who like math

p = proportion of students who like math

The guess is that p < 0.10 and that is the alternative hypothesis.

\(\begin{array}{l}{H_{0} : p=0.10} \\ {H_{A} : p<0.10}\end{array}\)

c. x = age of students in this class

\(\mu\) = mean age of students in this class

The guess is that \(\mu \neq 21\) and that is the alternative hypothesis.

\(\begin{array}{c}{H_{0} : \mu=21} \\ {H_{A} : \mu \neq 21}\end{array}\)
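
As a minimal sketch (not part of the text), here is how each of these hypotheses might be tested in R once data are collected. The data vectors and counts below are made up purely for illustration:

    set.seed(1)
    salary <- rnorm(25, mean = 31000, sd = 4000)               # hypothetical teacher salaries
    t.test(salary, mu = 30000, alternative = "greater")        # part a: Ho mu = 30000 vs HA mu > 30000
    prop.test(x = 7, n = 100, p = 0.10, alternative = "less")  # part b: 7 of 100 sampled students like math (made up)
    age <- rnorm(25, mean = 22, sd = 3)                        # hypothetical ages of students in the class
    t.test(age, mu = 21, alternative = "two.sided")            # part c: Ho mu = 21 vs HA mu != 21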

Example \(\PageIndex{5}\) Stating Type I and II Errors and Picking Level of Significance

  • The plant-breeding department at a major university developed a new hybrid raspberry plant called YumYum Berry. Based on research data, the claim is made that from the time shoots are planted 90 days on average are required to obtain the first berry with a standard deviation of 9.2 days. A corporation that is interested in marketing the product tests 60 shoots by planting them and recording the number of days before each plant produces its first berry. The sample mean is 92.3 days. The corporation wants to know if the mean number of days is more than the 90 days claimed. State the type I and type II errors in terms of this problem, consequences of each error, and state which level of significance to use.
  • A concern was raised in Australia that the percentage of deaths of Aboriginal prisoners was higher than the percent of deaths of non-indigenous prisoners, which is 0.27%. State the type I and type II errors in terms of this problem, consequences of each error, and state which level of significance to use.

a. x = time to first berry for YumYum Berry plant

\(\mu\) = mean time to first berry for YumYum Berry plant

\(\begin{array}{l}{H_{0} : \mu=90} \\ {H_{A} : \mu>90}\end{array}\)

Type I Error: If the corporation makes a type I error, then they will say that the plants take longer than 90 days to produce when they don’t. They probably will not want to market the plants if they think they will take longer. They will not market them even though in reality the plants do produce in 90 days. They may lose future earnings, but that is all.

Type II error: The corporation does not say that the plants take longer than 90 days to produce when they do take longer. Most likely they will market the plants. The plants will take longer, and so customers might get upset and then the company would get a bad reputation. This would be really bad for the company.

Level of significance: It appears that the corporation would not want to make a type II error. Pick a 10% level of significance, \(\alpha = 0.10\).

b. x = number of Aboriginal prisoners who have died

p = proportion of Aboriginal prisoners who have died

\(\begin{array}{l}{H_{o} : p=0.27 \%} \\ {H_{A} : p>0.27 \%}\end{array}\)

Type I error: Rejecting that the proportion of Aboriginal prisoners who died was 0.27%, when in fact it was 0.27%. This would mean you would say there is a problem when there isn’t one. You could anger the Aboriginal community, and spend time and energy researching something that isn’t a problem.

Type II error: Failing to reject that the proportion of Aboriginal prisoners who died was 0.27%, when in fact it is higher than 0.27%. This would mean that you wouldn’t think there was a problem with Aboriginal prisoners dying when there really is a problem. You risk causing deaths when there could be a way to avoid them.

Level of significance: It appears that both errors may be issues in this case. You wouldn’t want to anger the Aboriginal community when there isn’t an issue, and you wouldn’t want people to die when there may be a way to stop it. It may be best to pick a 5% level of significance, \(\alpha = 0.05\).

Hypothesis testing is really easy if you follow the same recipe every time. The only differences in the various problems are the assumptions of the test and the test statistic you calculate so you can find the p-value. Do the same steps, in the same order, with the same words, every time and these problems become very easy.

Exercise \(\PageIndex{1}\)

For the problems in this section, a question is being asked. This is to help you understand what the hypotheses are. You are not to run any hypothesis tests and come up with any conclusions in this section.

  • Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made in a given time period and found that 11% of all lenses had defects of some type. Looking at the type of defects, they found in a three-month time period that out of 34,641 defective lenses, 5865 were due to scratches. Are there more defects from scratches than from all other causes? State the random variable, population parameter, and hypotheses.
  • According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints ("Consumer fraud and," 2008). Does this data provide enough evidence to show that Alaska had a lower proportion of identity theft than 23%? State the random variable, population parameter, and hypotheses.
  • The Kyoto Protocol was signed in 1997, and required countries to start reducing their carbon emissions. The protocol became enforceable in February 2005. In 2004, the mean CO2 emission was 4.87 metric tons per capita. Is there enough evidence to show that the mean CO2 emission is lower in 2010 than in 2004? State the random variable, population parameter, and hypotheses.
  • The FDA regulates that fish that is consumed is allowed to contain 1.0 mg/kg of mercury. In Florida, bass fish were collected in 53 different lakes to measure the amount of mercury in the fish. The data for the average amount of mercury in each lake is in Example \(\PageIndex{5}\) ("Multi-disciplinary niser activity," 2013). Do the data provide enough evidence to show that the fish in Florida lakes has more mercury than the allowable amount? State the random variable, population parameter, and hypotheses.
  • Eyeglassomatic manufactures eyeglasses for different retailers. They test to see how many defective lenses they made in a given time period and found that 11% of all lenses had defects of some type. Looking at the type of defects, they found in a three-month time period that out of 34,641 defective lenses, 5865 were due to scratches. Are there more defects from scratches than from all other causes? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the manufacturer, and the appropriate alpha level to use. State why you picked this alpha level.
  • According to the February 2008 Federal Trade Commission report on consumer fraud and identity theft, 23% of all complaints in 2007 were for identity theft. In that year, Alaska had 321 complaints of identity theft out of 1,432 consumer complaints ("Consumer fraud and," 2008). Does this data provide enough evidence to show that Alaska had a lower proportion of identity theft than 23%? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the state of Alaska, and the appropriate alpha level to use. State why you picked this alpha level.
  • The Kyoto Protocol was signed in 1997, and required countries to start reducing their carbon emissions. The protocol became enforceable in February 2005. In 2004, the mean CO2 emission was 4.87 metric tons per capita. Is there enough evidence to show that the mean CO2 emission is lower in 2010 than in 2004? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the agency overseeing the protocol, and the appropriate alpha level to use. State why you picked this alpha level.
  • The FDA regulates that fish that is consumed is allowed to contain 1.0 mg/kg of mercury. In Florida, bass fish were collected in 53 different lakes to measure the amount of mercury in the fish. The data for the average amount of mercury in each lake is in Example \(\PageIndex{5}\) ("Multi-disciplinary niser activity," 2013). Do the data provide enough evidence to show that the fish in Florida lakes has more mercury than the allowable amount? State the type I and type II errors in this case, consequences of each error type for this situation from the perspective of the FDA, and the appropriate alpha level to use. State why you picked this alpha level.

1. \(H_{o} : p=0.11, H_{A} : p>0.11\)

3. \(H_{o} : \mu=4.87 \text { metric tons per capita, } H_{A} : \mu<4.87 \text { metric tons per capita }\)

5. See solutions

7. See solutions


6a.1 - Introduction to Hypothesis Testing: Basic Terms

The first step in hypothesis testing is to set up two competing hypotheses. The hypotheses are the most important aspect. If the hypotheses are incorrect, your conclusion will also be incorrect.

The two hypotheses are named the null hypothesis and the alternative hypothesis.

The goal of hypothesis testing is to see if there is enough evidence against the null hypothesis. In other words, to see if there is enough evidence to reject the null hypothesis. If there is not enough evidence, then we fail to reject the null hypothesis.

Consider the following example where we set up these hypotheses.

Example 6-1

A man, Mr. Orangejuice, goes to trial and is tried for the murder of his ex-wife. He is either guilty or innocent. Set up the null and alternative hypotheses for this example.

Putting this in a hypothesis testing framework, the hypotheses being tested are:

  • The man is guilty
  • The man is innocent

Let's set up the null and alternative hypotheses.

\(H_0\colon \) Mr. Orangejuice is innocent

\(H_a\colon \) Mr. Orangejuice is guilty

Remember that we assume the null hypothesis is true and try to see if we have evidence against the null. Therefore, it makes sense in this example to assume the man is innocent and test to see if there is evidence that he is guilty.

The Logic of Hypothesis Testing

We want to know the answer to a research question. We determine our null and alternative hypotheses. Now it is time to make a decision.

The decision is either going to be...

  • reject the null hypothesis or...
  • fail to reject the null hypothesis.

Consider the following table. The table shows the decision/conclusion of the hypothesis test and the unknown "reality", or truth. We do not know if the null is true or if it is false. If the null is false and we reject it, then we made the correct decision. If the null hypothesis is true and we fail to reject it, then we made the correct decision.

So what happens when we do not make the correct decision?

When doing hypothesis testing, two types of mistakes may be made and we call them Type I error and Type II error. If we reject the null hypothesis when it is true, then we made a type I error. If the null hypothesis is false and we failed to reject it, we made another error called a Type II error.

Types of errors

  • Reject \(H_0\) when \(H_0\) is true: Type I error.
  • Reject \(H_0\) when \(H_0\) is false: correct decision.
  • Fail to reject \(H_0\) when \(H_0\) is true: correct decision.
  • Fail to reject \(H_0\) when \(H_0\) is false: Type II error.

The “reality”, or truth, about the null hypothesis is unknown and therefore we do not know if we have made the correct decision or if we committed an error. We can, however, define the likelihood of these events.

\(\alpha\) and \(\beta\) are probabilities of committing an error so we want these values to be low. However, we cannot decrease both. As \(\alpha\) decreases, \(\beta\) increases.

Example 6-1, continued

A man, Mr. Orangejuice, goes to trial and is tried for the murder of his ex-wife. He is either guilty or not guilty. We found before that...

  • \( H_0\colon \) Mr. Orangejuice is innocent
  • \( H_a\colon \) Mr. Orangejuice is guilty

Interpret Type I error, \(\alpha \), Type II error, \(\beta \).

Type I error: concluding that Mr. Orangejuice is guilty when in fact he is innocent (convicting an innocent man); \(\alpha\) is the probability of this error. Type II error: failing to conclude that he is guilty when in fact he is guilty (letting a guilty man go free); \(\beta\) is the probability of this error.

As you can see here, the Type I error (putting an innocent man in jail) is the more serious error. Ethically, it is more serious to put an innocent man in jail than to let a guilty man go free. So to minimize the probability of a type I error we would choose a smaller significance level.

Try it!

An inspector has to choose between certifying a building as safe or saying that the building is not safe. There are two hypotheses:

  • Building is safe
  • Building is not safe

Set up the null and alternative hypotheses. Interpret Type I and Type II error.

\( H_0\colon\) Building is not safe vs \(H_a\colon \) Building is safe

Type I error: certifying the building as safe (rejecting \(H_0\)) when it is actually not safe. Type II error: failing to certify the building as safe when it actually is safe. Here the Type I error is the more serious one, so a smaller significance level would be chosen.

Power and \(\beta \) are complements of each other. Therefore, they have an inverse relationship, i.e. as one increases, the other decreases.


Statistical Hypothesis Testing: What Can Go Wrong?


Making decisions with data inevitably means working with statistics and one of its most common frameworks: Null Hypothesis Significance Testing (NHST).

Hypothesis testing can be confusing (and controversial), so in an earlier article we introduced the core framework of statistical hypothesis testing in four steps:

  • Define the null hypothesis (H0). This is the hypothesis that there is no difference.
  • Collect data. Compute the difference you want to evaluate.
  • Compute the p-value. Use an appropriate test of significance to obtain the p-value—the probability of getting a difference this large or larger if there is no difference.
  • Determine statistical significance. If the value of p is lower than the criterion you’re using to determine significance (i.e., the alpha criterion, usually set to .05 or .10), reject the null hypothesis and declare a statistically significant result; otherwise, the outcome is ambiguous, so you fail to reject H0.
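
As a minimal R sketch of these four steps, using simulated data and a two-sample t-test as the test of significance (all numbers are hypothetical):

    set.seed(42)
    a <- rnorm(30, mean = 68, sd = 15)   # scores for design A (simulated)
    b <- rnorm(30, mean = 60, sd = 15)   # scores for design B; H0 is "no difference"
    res <- t.test(a, b)                  # computes the observed difference and its p-value
    res$p.value < 0.05                   # TRUE means statistically significant at alpha = .05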

But just because you’re using the hypothesis testing framework doesn’t mean you will always be right. The bad news is that you can go through all the pain of learning and applying this statistical framework and still make the wrong decision. The power in this approach isn’t in guaranteeing always being right (that’s not achievable), but in knowing the likelihood that you’re right while reducing the chance you’re wrong over the long run.

But what can go wrong? Fortunately, there are only two core errors, Type I and Type II, which we cover in this article.

Two Ways to Be Wrong: The Jury Example

One of the best ways to understand errors in statistical hypothesis testing is to compare it to an event that many of us are called upon to perform as a civic duty: the jury trial.

When empaneled on a jury in a criminal trial, jurors are presented with evidence from the prosecution and defense. Based on the strength of the evidence, they decide to acquit or convict.

The defense argues that the accused is not guilty, or at least that the prosecution has failed to prove their case “ beyond a reasonable doubt .” The prosecution insists the evidence is in their favor. Who should the jury believe? What’s the “truth?”

Unfortunately, the “truth”—the reality of whether the defendant is guilty or not—is unknown. We can compare the binary decision the jury has to make with the reality of what happened in the 2×2 matrix shown in Figure 1. There are two ways to be right and two ways to be wrong.


Figure 1: A binary decision (as in jury trials) can be displayed on a 2×2 grid showing the two correct outcomes.

We want to be in the green checkboxes, making the correct decision about someone’s future. Of course, juries can make wrong decisions. Figure 2 illustrates the two errors.

Juries sometimes let the guilty person go free (acquit the guilty; see the lower-left corner of Figure 2). Juries can also convict the innocent. For example, DNA evidence has exonerated some who have been convicted for crimes they didn’t commit. Such stories often make headlines , as we in the U.S. generally see these cases as more significant errors in the justice system. This more egregious error (convicting the innocent) is in the upper-right corner of Figure 2 (four Xs).


Figure 2: Jury decisions—two ways to be right and two ways to be wrong.

Statistical Decision Making: Type I and Type II Errors

What does this have to do with statistical decision making? It’s not like a jury conducts a chi-square test to reach an outcome. But a researcher, like a juror, is making a binary decision. Instead of acquitting and convicting, the decision is whether a difference is or is not statistically significant.

As we covered in the earlier article , differences can be computed using metrics such as completion rates, conversion rates, and perceived ease from comparisons of interfaces (e.g., websites, apps, products, and prototypes).

Figure 3 shows how our decisions in hypothesis testing map to juror decisions. Our decision that something is or isn’t significantly different is similar to the jury’s decision to convict or acquit. Likewise, the reality (which we often never know) of whether there’s really a difference is akin to whether the defendant was really innocent or guilty.


Figure 3: Jury and statistical decisions are similar.

Researchers use the p-value from a test of significance to guide their decisions. If the p-value is low (below alpha), then we conclude the difference is statistically significant and that the result is strong enough to say the difference is real. If this decision is correct, then we’re in the upper-left quadrant of Figure 3 (correct decision). But if there isn’t a difference and we just happened to get an unusual sample, we’re in the upper-right quadrant of Figure 3 (decision error).

On the other hand, if the p-value is higher than alpha, then we would conclude the difference isn’t statistically significant. If the decision matches reality, we’re in the lower-right quadrant of Figure 3 (correct decision). But if there really is a difference that we failed to detect through bad luck or an underpowered experimental design, then we’d be in the lower-left quadrant of Figure 3 (decision error).

Figure 4 shows the creative name statisticians came up with for these two types of errors: Type I and Type II.


Figure 4: Type I and Type II errors in jury and statistical decisions.

A Type I error can be thought of as a false positive —saying there’s a difference when one doesn’t exist. In statistical hypothesis testing, it’s falsely identifying a difference as real. In a trial, this is convicting the innocent person. Or in thinking of COVID testing, a false positive is a test indicating someone has COVID-19 when they really don’t.

A Type II error is when you say there’s no difference when one exists. It is also called a false negative—failing to detect a real difference. In a trial, this is letting the guilty go free. For COVID testing, that would be someone who has COVID, but the COVID test failed to detect it.

Note that the consequences of Type I and Type II errors can be dramatically different. We already discussed the preference for Type II errors (failing to convict the guilty) over Type I errors (convicting the innocent) in jury trials.

The opposite can be the case for COVID testing, where committing a Type II error is more dangerous (failing to detect the disease, further spreading the infection) than a Type I error (falsely saying someone has the disease, which additional testing will disconfirm). Although even with COVID testing, context matters; false positives may lead to incorrectly quarantining dozens of kids who would lose two weeks of in-school instruction—illustrating the balancing act between Type I and Type II errors.

When researchers use statistical hypothesis testing in scientific publications, they consider Type II errors less damaging than Type I errors. Type I errors introduce false information into the scientific discourse, but Type II errors merely delay the publication of accurate information. In industrial hypothesis testing, researchers must think carefully about the consequences of Type I and Type II errors when planning their studies (e.g., sample size) and choosing appropriate alpha and beta criteria.

Figure 5 includes additional details from the null hypothesis testing framework. We set the Type I error rate (the α criterion) to a tolerable threshold. In scientific publications and many other contexts, the conventional setting is .05 (5%).


Figure 5: The NHST applied to statistical and jury decisions, with conventional alpha and beta criteria.

We set the Type II error rate at another tolerable threshold, conventionally .20 (20%), four times higher than the Type I error rate. It’s denoted by the Greek letter β in the lower-left corner. This 5% to 20% ratio is the convention used in most scientific research, but as mentioned above, researchers adjust it depending on the relative consequences of the two types of error. (We’ll cover this in more detail in another article.)

You can also see in Figure 5 how the green check marks (the correct decisions) apply to the “truth” of the null hypothesis: there’s one check when there’s really no difference (H0 is true) and we say there’s no difference (lower-right square), and the other when there IS a difference (H0 is false) and we say so (upper-left square).

This framework also guides the development of sample size estimation strategies that balance Type I and Type II errors and holds them to their designated alpha and beta criteria over the long run.

Examples of Decisions

Building on the examples from our previous article on hypothesis testing (all using an alpha criterion of .05), what could go wrong?

1. Rental Cars Ease: 14 users attempted five tasks on two rental car websites. The average SUS scores were 80.4 (sd = 11) for rental website A and 63.5 (sd = 15) for rental website B. The observed difference was 16.9 points (the SUS can range from 0 to 100).

Significance decision. A paired t-test was used to assess the difference of 16.9 points (t(13) = 3.48; p = .004). The value of p was less than .05, so the result was deemed statistically significant.

What could go wrong? A p-value of .004 suggests it isn’t likely that this result occurred by chance, but if there really is no difference, it would be a Type I error (false positive).
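
As a quick check (a sketch using the rounded values reported above), the p-value can be reproduced in R from the t statistic and its degrees of freedom:

    2 * pt(-abs(3.48), df = 13)   # two-sided p for t(13) = 3.48, about .004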

2. Flight Booking Friction: 10 users booked a flight on airline site A and 13 on airline site B. The average SEQ score for Airline A was 6.1 (sd = .83) and for Airline B, it was 5.1 (sd = 1.5). The observed difference was 1 point (the SEQ can range from 1 to 7).

Significance decision. The observed SEQ difference was 1 point, assessed with an independent groups t-test (t(20) = 2.1; p = .0499). Because .0499 is less than .05, the result is statistically significant.

What could go wrong? With the p-value so close to the alpha criterion, we’re less confident in the declaration of significance. If there really is no difference, then we’ve made a Type I error (false positive).

3. CRM Ease: 11 users attempted tasks on Product A and 12 users attempted the same tasks on Product B. Product A had a mean SUS score of 51.6 (sd = 4.07) and Product B had a mean SUS score of 49.6 (sd = 4.63). The observed difference was 2 points.

Significance decision. An independent groups t-test on the difference of 2 points found t(20) = 1.1; p = .28. A p-value of .28 is greater than the alpha criterion of .05, so the decision is that the difference is not statistically significant.

What could go wrong? A p-value of .28 is substantially higher than .05, but this can happen for several reasons, including having a sample size too small to provide enough power to detect small differences. If there is a difference between the perceived usability of Products A and B, then we’ve made a Type II error (false negative).

4. Numbers and Emojis: 240 respondents used two versions of the UMUX-Lite, one using the standard numeric format and the other using face emojis in place of numbers. The mean UMUX-Lite rating using the numeric format was 85.9 (sd = 17.5) and for the face emojis version was 85.4 (sd = 17.1). That’s a difference of .5 points on a 0-100–point scale.

Significance decision. A dependent groups t-test on the difference of .5 points found t(239) = .88; p = .38. Because .38 is larger than .05, the result is not statistically significant.

What could go wrong? Even though the sample size was large and the observed difference was small, there is always a chance that there is a real difference between the formats. If so, a Type II error (false negative) has occurred.

Note that you can’t make both a Type I and a Type II error in the same decision. You can’t convict and acquit in the same trial. If you determine a result to be statistically significant, you might make a Type I error (false positive), but you can’t make a Type II error (false negative). When your result is not significant, the decision might be a Type II error (false negative), but it can’t be a Type I error (false positive).

Summary and Additional Implications

Statistical hypothesis testing can be a valuable tool when properly used. Proper use of hypothesis testing includes understanding the different ways things can go wrong.

The logic of formal statistical hypothesis testing can seem convoluted at first, but it’s ultimately comprehensible (we hope!). Think of it as similar to a jury trial with a binary outcome. There are several key elements:

There are two ways to be right and wrong. When making a binary decision such as significant/nonsignificant, there are two ways to be right and two ways to be wrong.

The first way to be wrong is a Type I error. Concluding a difference is real when there really is no difference is a Type I error (a false positive). This is like convicting an innocent person. Type I errors are associated with statistical confidence and the α criterion (usually .05 or .10).

The second way to be wrong is a Type II error. Concluding a real difference doesn’t exist when one exists is a Type II error (a false negative). This is like acquitting the guilty defendant. Type II errors are associated with statistical power and the β criterion (usually .20).

You can make only one type of error for each decision. Fortunately, you can make only a Type I or a Type II error and not both because you can’t say there’s a difference AND say there’s NO difference at the same time. You can’t convict and acquit for the same decision.

The worse error is context dependent. While the Type I error is the worse error in jury trials and academic publishing (we don’t want to publish fluke results), a Type II error may have a deleterious impact as well (e.g., failing to detect a contagious disease).

Reducing Type I and Type II errors requires an increase in sample size. Reducing the alpha and beta thresholds to unrealistic levels (e.g., both to .001) massively increases sample size requirements. One of the goals of sample size estimation for hypothesis tests is to control the likelihood of these Type I and Type II errors over the long run. We do that by making decisions about acceptable levels of α (associated with Type I errors and the concept of confidence) and β (associated with Type II errors and the concept of power), along with the smallest difference that we want to be able to detect (d).
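To see how α, β (via power), and the smallest difference of interest translate into a required sample size, here is a minimal Python sketch using statsmodels. The effect size, alpha, and power shown are illustrative choices, not values taken from the examples above.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# alpha = .05 (controls Type I errors), power = .80 (so beta = .20, controls Type II errors),
# effect_size = 0.5 (the smallest standardized difference d we want to be able to detect)
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80, alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")
# Tightening alpha and beta toward levels like .001 makes this number grow dramatically.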

We rarely know if we’re right. We rarely know for certain if our classification decision matches reality. Hypothesis testing doesn’t eliminate decision errors, but over the long run, it holds them to a specified frequency of occurrence (the α and β criteria).

A nonsignificant result doesn’t mean the real difference is 0. Keep in mind that when an outcome isn’t statistically significant, you can’t claim that the true difference is 0. When that happens, and when the decision you’re trying to make is important enough to justify the expenses, continue collecting data to improve precision until the appropriate decision becomes clear. One way to enhance that clarity is to compute a  confidence interval  around the difference.
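As an illustration of that confidence interval, the sketch below computes a 95% interval around a difference in means from summary statistics, reusing the CRM Ease numbers from earlier purely as example inputs.

import math
from scipy import stats

mean_a, sd_a, n_a = 51.6, 4.07, 11
mean_b, sd_b, n_b = 49.6, 4.63, 12

diff = mean_a - mean_b
df = n_a + n_b - 2
# Pooled variance and standard error of the difference for independent groups
sp2 = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / df
se = math.sqrt(sp2 * (1 / n_a + 1 / n_b))
t_crit = stats.t.ppf(0.975, df)        # two-sided 95% critical value

low, high = diff - t_crit * se, diff + t_crit * se
print(f"95% CI for the difference: ({low:.1f}, {high:.1f})")
# An interval that includes 0 is consistent with a nonsignificant result,
# but it does not show that the true difference is 0.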

Statistical significance doesn’t mean practical significance. Some statistically significant results may turn out to have limited practical significance, and some results that aren’t statistically significant can lead to practical acceptance of the null hypothesis. We’ll cover this in more detail in our next article on this topic.


The Scientific Method


Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis , or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.
2. Ask a question.
3. Propose a hypothesis.
4. Make predictions.
5. Test the predictions.

  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

Logical possibility

Practical possibility

Building a body of evidence

6. Iterate.

  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.



How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method , whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question which is then explored through background research. At this point, researchers then begin to develop a testable hypothesis.

Unless you are creating an exploratory study, your hypothesis should always explain what you  expect  to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the  journal articles you read . Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method ,  falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse the idea of falsifiability with the idea that it means that something is false, which is not the case. What falsifiability means is that  if  something was false, then it is possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable " test anxiety " as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression ? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis : This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis : This type suggests a relationship between three or more variables, such as two independent variables and one dependent variable.
  • Null hypothesis : This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis : This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis : This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis : This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the  dependent variable  if you change the  independent variable .

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."​
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research such as  case studies ,  naturalistic observations , and surveys are often used when  conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a  correlational study  can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.

Experimental Research Methods

Experimental methods  are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.



Statology


How to Write Hypothesis Test Conclusions (With Examples)

A   hypothesis test is used to test whether or not some hypothesis about a population parameter is true.

To perform a hypothesis test in the real world, researchers obtain a random sample from the population and perform a hypothesis test on the sample data, using a null and alternative hypothesis:

  • Null Hypothesis (H 0 ): The sample data occurs purely from chance.
  • Alternative Hypothesis (H A ): The sample data is influenced by some non-random cause.

If the p-value of the hypothesis test is less than some significance level (e.g. α = .05), then we reject the null hypothesis .

Otherwise, if the p-value is not less than some significance level then we fail to reject the null hypothesis .

When writing the conclusion of a hypothesis test, we typically include:

  • Whether we reject or fail to reject the null hypothesis.
  • The significance level.
  • A short explanation in the context of the hypothesis test.

For example, we would write:

We reject the null hypothesis at the 5% significance level.   There is sufficient evidence to support the claim that…

Or, we would write:

We fail to reject the null hypothesis at the 5% significance level.   There is not sufficient evidence to support the claim that…

The following examples show how to write a hypothesis test conclusion in both scenarios.

Example 1: Reject the Null Hypothesis Conclusion

Suppose a biologist believes that a certain fertilizer will cause plants to grow more during a one-month period than they normally do, which is currently 20 inches. To test this, she applies the fertilizer to each of the plants in her laboratory for one month.

She then performs a hypothesis test at a 5% significance level using the following hypotheses:

  • H 0 : μ = 20 inches (the fertilizer will have no effect on the mean plant growth)
  • H A : μ > 20 inches (the fertilizer will cause mean plant growth to increase)

Suppose the p-value of the test turns out to be 0.002.

Here is how she would report the results of the hypothesis test:

We reject the null hypothesis at the 5% significance level.   There is sufficient evidence to support the claim that this particular fertilizer causes plants to grow more during a one-month period than they normally do.

Example 2: Fail to Reject the Null Hypothesis Conclusion

Suppose the manager of a manufacturing plant wants to test whether or not some new method changes the number of defective widgets produced per month, which is currently 250. To test this, he measures the mean number of defective widgets produced before and after using the new method for one month.

He performs a hypothesis test at a 10% significance level using the following hypotheses:

  • H 0 : μ after = μ before (the mean number of defective widgets is the same before and after using the new method)
  • H A : μ after ≠ μ before (the mean number of defective widgets produced is different before and after using the new method)

Suppose the p-value of the test turns out to be 0.27.

Here is how he would report the results of the hypothesis test:

We fail to reject the null hypothesis at the 10% significance level.   There is not sufficient evidence to support the claim that the new method leads to a change in the number of defective widgets produced per month.
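The wording of the conclusion follows mechanically from the p-value and the significance level, so it is easy to script. Here is a minimal Python sketch (the function name and claim wording are just for illustration) that reproduces the decisions in the two examples above:

def conclusion(p_value, alpha, claim):
    """Return a hypothesis test conclusion in the standard wording."""
    if p_value < alpha:
        return (f"We reject the null hypothesis at the {alpha:.0%} significance level. "
                f"There is sufficient evidence to support the claim that {claim}.")
    return (f"We fail to reject the null hypothesis at the {alpha:.0%} significance level. "
            f"There is not sufficient evidence to support the claim that {claim}.")

print(conclusion(0.002, 0.05, "this fertilizer causes plants to grow more during a one-month period"))
print(conclusion(0.27, 0.10, "the new method changes the number of defective widgets produced per month"))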

Additional Resources

The following tutorials provide additional information about hypothesis testing:

  • Introduction to Hypothesis Testing
  • 4 Examples of Hypothesis Testing in Real Life
  • How to Write a Null Hypothesis


What Is a Testable Hypothesis?


A hypothesis is a tentative answer to a scientific question. A testable hypothesis is a  hypothesis that can be proved or disproved as a result of testing, data collection, or experience. Only testable hypotheses can be used to conceive and perform an experiment using the scientific method .

Requirements for a Testable Hypothesis

In order to be considered testable, the following criteria must be met:

  • It must be possible to prove that the hypothesis is true.
  • It must be possible to prove that the hypothesis is false.
  • It must be possible to reproduce the results of the hypothesis.

Examples of a Testable Hypothesis

All the following hypotheses are testable. It's important, however, to note that while it's possible to say that the hypothesis is correct, much more research would be required to answer the question " why is this hypothesis correct?" 

  • Students who attend class have higher grades than students who skip class.  This is testable because it is possible to compare the grades of students who do and do not skip class and then analyze the resulting data. Another person could conduct the same research and come up with the same results.
  • People exposed to high levels of ultraviolet light have a higher incidence of cancer than the norm.  This is testable because it is possible to find a group of people who have been exposed to high levels of ultraviolet light and compare their cancer rates to the average.
  • If you put people in a dark room, then they will be unable to tell when an infrared light turns on.  This hypothesis is testable because it is possible to put a group of people into a dark room, turn on an infrared light, and ask the people in the room whether or not an infrared light has been turned on.

Examples of a Hypothesis Not Written in a Testable Form

  • It doesn't matter whether or not you skip class.  This hypothesis can't be tested because it doesn't make any actual claim regarding the outcome of skipping class. "It doesn't matter" doesn't have any specific meaning, so it can't be tested.
  • Ultraviolet light could cause cancer.  The word "could" makes a hypothesis extremely difficult to test because it is very vague. There "could," for example, be UFOs watching us at every moment, even though it's impossible to prove that they are there!
  • Goldfish make better pets than guinea pigs.  This is not a hypothesis; it's a matter of opinion. There is no agreed-upon definition of what a "better" pet is, so while it is possible to argue the point, there is no way to prove it.

How to Propose a Testable Hypothesis

Now that you know what a testable hypothesis is, here are tips for proposing one.

  • Try to write the hypothesis as an if-then statement. If you take an action, then a certain outcome is expected.
  • Identify the independent and dependent variable in the hypothesis. The independent variable is what you are controlling or changing. You measure the effect this has on the dependent variable.
  • Write the hypothesis in such a way that you can prove or disprove it. For example, if a person has skin cancer, you can't prove they got it from being out in the sun. However, you can demonstrate a relationship between exposure to ultraviolet light and increased risk of skin cancer.
  • Make sure you are proposing a hypothesis you can test with reproducible results. If your face breaks out, you can't prove the breakout was caused by the french fries you had for dinner last night. However, you can measure whether or not eating french fries is associated with breaking out. It's a matter of gathering enough data to be able to reproduce results and draw a conclusion.


From the Editors

Notes from The Conversation newsroom

How we edit science part 1: the scientific method


We take science seriously at The Conversation and we work hard to report it accurately. This series of five posts is adapted from an internal presentation on how to understand and edit science by our Australian Science & Technology Editor, Tim Dean. We thought you might also find it useful.

Introduction

If I told you that science was a truth-seeking endeavour that uses a single robust method to prove scientific facts about the world, steadily and inexorably driving towards objective truth, would you believe me?

Many would. But you shouldn’t.

The public perception of science is often at odds with how science actually works. Science is often seen to be a separate domain of knowledge, framed to be superior to other forms of knowledge by virtue of its objectivity, which is sometimes referred to as it having a “ view from nowhere ”.

But science is actually far messier than this - and far more interesting. It is not without its limitations and flaws, but it’s still the most effective tool we have to understand the workings of the natural world around us.

In order to report or edit science effectively - or to consume it as a reader - it’s important to understand what science is, how the scientific method (or methods) work, and also some of the common pitfalls in practising science and interpreting its results.

This guide will give a short overview of what science is and how it works, with a more detailed treatment of both these topics in the final post in the series.

What is science?

Science is special, not because it claims to provide us with access to the truth, but because it admits it can’t provide truth .

Other means of producing knowledge, such as pure reason, intuition or revelation, might be appealing because they give the impression of certainty , but when this knowledge is applied to make predictions about the world around us, reality often finds them wanting.

Rather, science consists of a bunch of methods that enable us to accumulate evidence to test our ideas about how the world is, and why it works the way it does. Science works precisely because it enables us to make predictions that are borne out by experience.

Science is not a body of knowledge. Facts are facts, it’s just that some are known with a higher degree of certainty than others. What we often call “scientific facts” are just facts that are backed by the rigours of the scientific method, but they are not intrinsically different from other facts about the world.

What makes science so powerful is that it’s intensely self-critical. In order for a hypothesis to pass muster and enter a textbook, it must survive a battery of tests designed specifically to show that it could be wrong. If it passes, it has cleared a high bar.

The scientific method(s)

Despite what some philosophers have stated , there is a method for conducting science. In fact, there are many. And not all revolve around performing experiments.

One method involves simple observation, description and classification, such as in taxonomy. (Some physicists look down on this – and every other – kind of science, but they’re only greasing a slippery slope .)

how to prove a hypothesis wrong

However, when most of us think of The Scientific Method, we’re thinking of a particular kind of experimental method for testing hypotheses.

This begins with observing phenomena in the world around us, and then moves on to positing hypotheses for why those phenomena happen the way they do. A hypothesis is just an explanation, usually in the form of a causal mechanism: X causes Y. An example would be: gravitation causes the ball to fall back to the ground.

A scientific theory is just a collection of well-tested hypotheses that hang together to explain a great deal of stuff.

Crucially, a scientific hypothesis needs to be testable and falsifiable .

An untestable hypothesis would be something like “the ball falls to the ground because mischievous invisible unicorns want it to”. If these unicorns are not detectable by any scientific instrument, then the hypothesis that they’re responsible for gravity is not scientific.

An unfalsifiable hypothesis is one where no amount of testing can prove it wrong. An example might be the psychic who claims the experiment to test their powers of ESP failed because the scientific instruments were interfering with their abilities.

(Caveat: there are some hypotheses that are untestable because we choose not to test them. That doesn’t make them unscientific in principle, it’s just that they’ve been denied by an ethics committee or other regulation.)

Experimentation

There are often many hypotheses that could explain any particular phenomenon. Does the rock fall to the ground because an invisible force pulls on the rock? Or is it because the mass of the Earth warps spacetime , and the rock follows the lowest-energy path, thus colliding with the ground? Or is it that all substances have a natural tendency to fall towards the centre of the Universe , which happens to be at the centre of the Earth?

The trick is figuring out which hypothesis is the right one. That’s where experimentation comes in.

A scientist will take their hypothesis and use that to make a prediction, and they will construct an experiment to see if that prediction holds. But any observation that confirms one hypothesis will likely confirm several others as well. If I lift and drop a rock, it supports all three of the hypotheses on gravity above.

Furthermore, you can keep accumulating evidence to confirm a hypothesis, and it will never prove it to be absolutely true. This is because you can’t rule out the possibility of another similar hypothesis being correct, or of making some new observation that shows your hypothesis to be false. But if one day you drop a rock and it shoots off into space, that ought to cast doubt on all of the above hypotheses.

So while you can never prove a hypothesis true simply by making more confirmatory observations, you need only one solid contrary observation to prove a hypothesis false. This notion is at the core of the hypothetico-deductive model of science.

This is why a great deal of science is focused on testing hypotheses, pushing them to their limits and attempting to break them through experimentation. If the hypothesis survives repeated testing, our confidence in it grows.

So even crazy-sounding theories like general relativity and quantum mechanics can become well accepted, because both enable very precise predictions, and these have been exhaustively tested and come through unscathed.

The next post will cover hypothesis testing in greater detail.


Online Learning College

Hypotheses and Proofs


What is a hypothesis?

A hypothesis is essentially a claim that somebody states and that needs to be tested in order to see if it is true. Most of the time a hypothesis is a statement which someone claims is true, and a series of tests is then carried out to see whether the person is correct.

Hypothesis – a proposed true statement that acts as a starting point for further investigation.

Devising theories is how all scientists progress, not just mathematicians, and the evidence that is found must be collected and interpreted to see if it sheds any light on the truth of the statement. Statistics can either prove or disprove a theory, which is why we need the evidence that we gather to be as close to the truth as possible: so that we can give an answer to the question with a high level of confidence.

‘Hypotheses’ is simply the plural of ‘hypothesis’. A hypothesis is the first thing that someone must come up with when doing a test, as we must initially know what it is we wish to find out rather than blindly carrying out surveys and tests.

Some examples of hypotheses are shown below:

  • Britain is colder than Spain
  • A dog is faster than a cat
  • Blondes have more fun
  • The square of the hypotenuse of a triangle is equal to the sum of the squares of the other two sides

Obviously, some of these hypotheses are correct and others are not. Even though some may look wrong or right we still need to test the hypothesis either way to find out if it is true or false.

Some hypotheses may be easier to test than others, for example it is easy to test the last hypothesis above as this is very mathematical. However, when it comes to measuring something like ‘fun’ which is shown in the hypothesis ‘Blondes have more fun’ we will begin to struggle! How do you measure something like fun and in what units? This is why it is much easier to test certain hypotheses when compared with others.

Another way to come up with a hypothesis is by doing some ‘trial and error’ type testing. When finding data you may realise that there is in fact a pattern and then state this as a hypothesis of your findings. This pattern should then be tested using mathematical skills to test its authenticity. There is still a big difference between finding a pattern in something and finding that something will always happen no matter what. The pattern that is found at any point may just be a coincidence as it is much harder to prove something using mathematics rather than simply noticing a pattern. However, once something is proved with mathematics it is a very strong indication that the hypothesis is not only a guess but is scientific fact.

A hypothesis must always:

  • Be a statement that needs to be proven or disproven, never a question
  • Be applied to a certain population
  • Be testable, otherwise the hypothesis is rather pointless as we can never know any information about it!

There are also two different types of hypothesis which are explained here:

An Experimental Hypothesis –  This is a statement which should state a difference between two things that should be tested. For example, ‘Cheetahs are faster than lions’.

A Null Hypothesis –  This kind of hypothesis does not say something is more than another, instead it states that they are the same. For example, ‘There is no difference between the number of late buses on Tuesday and on Wednesday’.

Subjects and samples

We have already talked in an earlier lesson of different types of samples and how these are formed, so we will not dwell for too long on this. The main thing to make sure of when choosing subjects for a test is to link them to the hypothesis that we are looking into. This will then give a much better data set that will be a lot more relevant to the questions we are asking. There is no point in us gathering data from people that live in Ireland if our original hypothesis states something about Scottish people, so we need to also make sure that the sample taken is as relevant to the hypothesis as possible. As with all samples that are taken, there should never be any bias towards one subject or another (unless we are using something like quota sampling as outlined in an earlier lesson). This will then mean that a random collection of subjects is taken into account and will mean that the information that is acquired will be more useful to the hypothesis that we wish to look at.

The experimental method

By treating the hypothesis and the data collection as an experiment, we should use as many scientific methods as possible to ensure that the data we are collecting is very accurate.

The most important and best way of doing this is the  control of variables . A variable is basically anything that can change in a situation, which means most experiments involve a great many of them, as lots of different things can be altered. By keeping all variables the same and only changing the ones which we wish to test, we will get data that is as reliable as possible. However, if variables that can affect the outcome are allowed to change, we may end up getting false data.

For example, when testing ‘A cheetah is faster than a lion’ we could simply make the two animals run against each other and see which is quickest. However, if we allowed the cheetah to run on flat ground and made the lion run up hill, then the times would not be accurate to the truth as it is much harder to run up a slope than on flat ground. It is for this reason that any variables should be the same for all subjects.

The only variable that is mentioned in the hypothesis ‘A cheetah is faster than a lion’ is the animal that runs. Therefore, this is called the  independent variable  and is the only thing that we wish to change between experiments, as it is the thing we wish to  prove has an effect on other results.

A  dependent variable  is something that we wish to measure in experiments to see if there is an effect. This is the speed at which something runs in our example, as we are changing the animal and measuring the speed.

Independent variable – something that stands alone and is not changed by other variables in the experiment. This variable is changed by the person carrying out the investigation to see if it influences the dependent variables. This can also be seen as an input when an experiment is created.

Dependent variable – this variable is measured in an experiment to see if it changes when the independent variable is changed. These represent an output after the experiment is carried out.

Standardised instructions

Another thing that is essential when carrying out experiments is to give all participants the same instructions for what you wish them to do. Although this may seem a little picky, there will be a definite difference in how a subject performs if they are given clear and concise instructions as opposed to misleading and rushed ones.

Turning data into information

Experiments are carried out to produce a set of data but this is not the end of the problem! We will then need to interpret and change this information into something that will tell us what we need to know. This means we need to turn data in the form of numbers into actual information that can be useful to our investigation. Figures that are found through experiments are first shown as ‘raw data’ before we can use different tables and charts to show the patterns that have been found in the surveys and experiments that have been carried out. Once all the data is collected and in tables we can move on to using these to find patterns.

Once a hypothesis has been stated, we can look to prove or disprove it. In mathematics, a proof is a little different to what people usually think. A mathematical proof must show that something is the case without any doubt. We do this by working through step-by-step to build a proof that shows the hypothesis as being either right or wrong. Each small step in the proof must be correct so that the entire thing cannot be argued.

Setting out a proof

Being able to write a proof does not mean that you must work any differently to how you would usually answer a question. It simply means that you must show that something is the case. Questions on proofs may ask you to ‘prove’, ‘verify’ or ‘check’ a statement.

When doing this you will need to first understand the hypothesis that has been stated. Look at the example below to see how we would go about writing a simple proof.

Prove that 81 is not a prime number.

Here we have a hypothesis that 81 is not prime. So, to prove this, we can try to find a factor of 81 that is not 1 as we know the definition of a prime number is that it is only divisible by itself and 1. Therefore, we could simply show that:

81 \div 9 = 9

The fact that 81 divided by 9 gives us 9 proves the hypothesis that 81 is not prime.

A proof for a hypothesis does not have to be very complex – it simply has to show that a statement is either true or false. Doing this will use your problem-solving skills though, as you may need to think outside the box and ensure that all of the information that you have is fully understood.

Harder examples

Being able to prove something can be very challenging. It is true that some mathematical equations are still yet to be proved and many mathematicians work on solving extremely complex proofs every day.

When looking at harder examples of proofs you will need to find like terms in equations and then think about how you can work through the proof to get the desired result.

(n+3)^2-(3n+5)=(n+1)(n+2)+2

Here we need to use the left-hand side to get to the right-hand side in order to prove that they are equal. We can do this by expanding the brackets on the left and collecting the like terms:

(n+3)^2-(3n+5)=n^2+6n+9-3n-5

We have now expanded the brackets and collected the like terms. It is now that we will need to look at our hypothesis again and try to make the above equation into the right-hand side by moving terms around. We can see from the right-hand side of our hypothesis that we have a double bracket and then 2 added to this so we can begin by bringing 2 out of the above:

n^2+3n+4=(n^2+3n+2)+2=(n+1)(n+2)+2

So we have now worked through an entire proof from start to finish. Here it is again using only mathematics and no writing:

(n+3)^2-(3n+5)=n^2+6n+9-3n-5=n^2+3n+4=(n^2+3n+2)+2=(n+1)(n+2)+2

In the above we have shown that the hypothesis is true by working through step-by-step and rearranging the equation on the left to get the one on the right.

A similar identity that can be proved in the same way is \frac{1}{2}(n+1)(n+2)-\frac{1}{2}n(n+1)=n+1.
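As a quick check, this identity can be proved in one line by taking out the common factor \frac{1}{2}(n+1):

\frac{1}{2}(n+1)(n+2)-\frac{1}{2}n(n+1)=\frac{1}{2}(n+1)\left((n+2)-n\right)=\frac{1}{2}(n+1)\times 2=n+1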

The step-by-step approach to proofs

To prove something is correct we have used a step-by-step approach so far. This method is a very good way to get from the left-hand side of an equation to the right-hand side through different steps. To do this we can use specific rules:

1) Try to multiply out brackets early on where possible.  This will help you to cancel out certain terms in order to simplify the equation.


3) Take small steps each time.  A proof is about working through a problem slowly so that it is easy to spot what has been done in each step. Do not take big leaps in your work such as multiplying out brackets and collecting like terms all at once. Remember that the person marking your paper needs to see your working, so it is good to work in small stages.

4) Go back and check your work.  Once you have finished your proof you can go back and check each individual stage. One of the good things about carrying out a proof is that you will know if a mistake has been made in your arithmetic because you will not be able to get to the final solution. If this happens, go back and check your working throughout.

Harder proofs

When working through a proof that is more difficult it can be quite tricky. Sometimes we may have to carry out a lot of different steps or even prove something using another piece of knowledge. For example, it might be that we are asked to prove that an expression will always be even or that it will always be positive.

(4n+1)^2-(4n+1)=(4n+1)(4n+1-1)=4n(4n+1)

In the above equation we have worked through to get an answer that is completely multiplied by 4. This must therefore be even as any number (whether even or odd) will be even when multiplied by 4.

In this example we have had to use our knowledge that anything multiplied by 4 must be even. This information was not included in the question but is something that we know from previous lessons. Some examples of information that you may need to know in order to solve more difficult proofs are:

  • Any number that is multiplied by an even number must be even
  • A number multiplied by an even number and then added to an odd number will be odd
  • A whole number multiplied by n will give an answer that is divisible by n (e.g. 3n must be divisible by 3)
  • Any number that is squared cannot be negative

(x-2)(x+1)+(x+2)

Above we have come to an answer that is multiplied by 3. This means that the answer has to be divisible by 3 also.



Expert Commentary

Don’t say ‘prove’: How to report on the conclusiveness of research findings

This tip sheet explains why it's rarely accurate for news stories to report that a new study proves anything — even when a press release says it does.


by Denise-Marie Ordway, The Journalist's Resource February 13, 2023


When news outlets report that new research studies prove something, they’re almost certainly wrong.

Studies conducted in fields outside of mathematics do not “prove” anything. They find evidence — sometimes, extraordinarily strong evidence.

It’s important journalists understand that science is an ongoing process of collecting and interrogating evidence, with each new discovery building on or raising questions about earlier discoveries. A single research study usually represents one small step toward fully understanding an issue or problem.

Even when scientists have lots of very strong evidence, they rarely claim to have found proof because proof is absolute. To prove something means there is no chance another explanation exists.

“Even a modest familiarity with the history of science offers many examples of matters that scientists thought they had resolved, only to discover that they needed to be reconsidered,” Naomi Oreskes , a professor of the history of science at Harvard University, writes in a July 2021 essay in Scientific American. “Some familiar examples are Earth as the center of the universe, the absolute nature of time and space, the stability of continents, and the cause of infectious disease.”

Oreskes points out in her 2004 paper “ Science and Public Policy: What’s Proof Got To Do With It? ” that “proof — at least in an absolute sense — is a theoretical ideal, available in geometry class but not in real life.”

Math scholars routinely rely on logic to try to prove something beyond any doubt. What sets mathematicians apart from other scientists is their use of mathematical proofs, a step-by-step argument written using words, symbols and diagrams to convince another mathematician that a given statement is true, explains Steven G. Krantz , a professor of mathematics and statistics at Washington University in St. Louis.

“It is proof that is our device for establishing the absolute and irrevocable truth of statements in our subject,” he writes in “ The History and Concept of Mathematical Proof .” “This is the reason that we can depend on mathematics that was done by Euclid 2300 years ago as readily as we believe in the mathematics that is done today. No other discipline can make such an assertion.”

If you’re still unsure how to describe the conclusiveness of research findings, keep reading. These four tips will help you get it right.

1. Avoid reporting that a research study or group of studies “proves” something — even if a press release says so.

Press releases announcing new research often exaggerate or minimize findings, academic studies have found . Some mistakenly state researchers have proven something they haven’t.

The KSJ Science Editing Handbook urges journalists to read press releases carefully. The handbook, a project of the Knight Science Journalism Fellowship at MIT , features guidance and insights from some of the world’s most talented science writers and editors.

“Press releases that are unaccompanied by journal publications rarely offer any data and, by definition, offer a biased view of the findings’ value,” according to the handbook, which also warns journalists to “never presume that everything in them is accurate or complete.”

Any claim that researchers in any field outside mathematics have proven something should raise a red flag for journalists, says Barbara Gastel , a professor of integrative biosciences, humanities in medicine, and biotechnology at Texas A&M University.

She says journalists need to evaluate the research themselves.

“Read the full paper,” says Gastel, who’s also director of Texas A&M University’s master’s degree program in science and technology journalism . “Don’t go only on the news release. Don’t go only on the abstract to get a full sense of how strong the evidence is. Read the full paper and be ready to ask some questions — sometimes, hard questions — of the researchers.”

2. Use language that correctly conveys the strength of the evidence that a research study or group of studies provides.

Researchers investigate an issue or problem to better understand it and build on what earlier research has found. While studies usually unearth new information, it’s seldom enough to reach definitive conclusions.

When reporting on a study or group of studies, journalists should choose words that accurately convey the level of confidence researchers have in the findings, says Glenn Branch , deputy director of the nonprofit National Center for Science Education , which studies how public schools, museums and other organizations communicate about science.

For example, don’t say a study “establishes” certain facts or “settles” a longstanding question when it simply “suggests” something is true or “offers clues” about some aspect of the subject being examined.

Branch urges journalists to pay close attention to the language researchers use in academic articles. Scientists typically express themselves in degrees of confidence, he notes. He suggests journalists check out the guidance on communicating levels of certainty across disciplines offered by the Intergovernmental Panel on Climate Change , created by the United Nations and World Meteorological Organization to help governments understand, adapt to and mitigate the impacts of climate change.

“The IPCC guidance is probably the most well-developed system for consistently reporting the degree of confidence in scientific results, so it, or something like it, may start to become the gold standard,” Branch wrote via email.

Gastel says it is important journalists know that even though research in fields outside mathematics do not prove anything, a group of studies, together, can provide evidence so strong it gets close to proof.

It can provide “overwhelming evidence, particularly if there are multiple well-designed studies that point in the same direction,” she says.

To convey very high levels of confidence, journalists can use phrases such as “researchers are all but certain” and “researchers have as much confidence as possible in this area of inquiry.”

Another way to gauge levels of certainty: Find out whether scholars have reached a scientific consensus ,  or a collective position based on their interpretation of the evidence.

Independent scientific organizations such as the National Academy of Sciences, American Association for the Advancement of Science and American Medical Association issue consensus statements on various topics, typically to communicate either scientific consensus or the collective opinion of a convened panel of subject experts.

3. When reporting on a single study, explain what it contributes to the body of knowledge on that given topic and whether the evidence, as a whole, leans in a certain direction. 

Many people are unfamiliar with the scientific process, so they need journalists’ help understanding how a single research study fits into the larger landscape of scholarship on an issue or problem. Tell audiences what, if anything, researchers can say about the issue or problem with a high level of certainty after considering all the evidence, together.

A great resource for journalists trying to put a study into context: editorials published in academic journals. Some journals, including the New England Journal of Medicine and JAMA , the journal of the American Medical Association, sometimes publish an editorial about a new paper along with the paper, Gastel notes.

Editorials, typically written by one or more scholars who were not involved in the study but have deep expertise in the field, can help journalists gauge the importance of a paper and its contributions.

“I find that is really handy,” Gastel adds.

4. Review headlines closely before they are published. And read our tip sheet on avoiding mistakes in headlines about health and medical research.

Editors, especially those who are not familiar with the process of scientific inquiry, can easily make mistakes when writing or changing headlines about research. And a bad headline can derail a reporter’s best efforts to cover research accurately.

To prevent errors, Gastel recommends reporters submit suggested headlines with their stories. She also recommends they review their story’s headline right before it is published.

Another good idea: Editors, including copy editors, could make a habit of consulting with reporters on news headlines about research, science and other technical topics. Together, they can choose the most accurate language and decide whether to ever use the word ‘prove.’

Gastel and Branch agree that editors would benefit from science journalism training, particularly as it relates to reporting on health and medicine. Headlines making erroneous claims about the effectiveness of certain drugs and treatments can harm the public. So can headlines claiming researchers have “proven” what causes or prevents health conditions such as cancer, dementia and schizophrenia.

Our tip sheet on headline writing addresses this and other issues.

“‘Prove’ is a short, snappy word, so it works in a headline — but it’s usually wrong,” says Branch. “Headline writers need to be as aware of this as the journalists are.”

About The Author


Denise-Marie Ordway

Could these black hole 'morsels' finally prove Stephen Hawking's famous theory?

Stephen Hawking suggested nothing lasts forever, including black holes. Scientists may have a way to prove it at last.

[Illustration: two black holes spiraling toward each other ahead of a collision.]

One of the most profound messages Stephen Hawking left humanity with is that nothing lasts forever — and, at last, scientists could be ready to prove it.

This idea was conveyed by what was arguably Hawking's most important work: the hypothesis that black holes "leak" thermal radiation, evaporating in the process and ending their existence with a final explosion. This radiation would eventually come to be known as "Hawking radiation" after the great scientist. To this day, however, it remains undetected and purely hypothetical. But now, some scientists think they may have found a way to finally change that; perhaps we'll soon be on our way toward cementing Hawking radiation as fact.

The team suggests that, when larger black holes catastrophically collide and merge, tiny and hot "morsel" black holes may be launched into space — and that could be the key.

Importantly, Hawking had said that the smaller a black hole is, the faster it leaks Hawking radiation. So, supermassive black holes with masses millions or billions of times that of the sun would theoretically take longer than the predicted lifetime of the cosmos to fully "leak." In other words, how would we even detect such immensely long-term leakage? Well, maybe we can't — but when it comes to these asteroid-mass black hole morsels, dubbed "Bocconcini di Buchi Neri" in Italian, we may be in luck.

Tiny black holes like these could evaporate and explode on a time scale that is indeed observable to humans. Plus, the end of these black holes' lifetimes should be marked by a characteristic signal, the team says, that indicates their deflation and death via the leaking of Hawking radiation.
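As a rough point of reference (this is the textbook estimate for an isolated, non-rotating black hole radiating only photons, not the team's detailed emission model), the evaporation time grows as the cube of the mass:

```latex
t_{\mathrm{evap}} \;\approx\; \frac{5120\,\pi\, G^{2} M^{3}}{\hbar\, c^{4}}
```

So the lighter the morsel, the sooner its final flash. The specific lifetimes quoted later in this article come from the researchers' own modeling, which accounts for additional emission channels and therefore differs from this back-of-envelope formula.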


"Hawking predicted that black holes evaporate by emitting particles," Francesco Sannino , a scientist behind this proposal and a theoretical physicist at the University of Southern Denmark, told Space.com. "We set out to study this and the observational impact of the production of many black hole morsels, or 'Bocconcini di Buchi Neri,' that we imagined forming during a catastrophic event such as the merger of two astrophysical black holes."


Morsel black holes can't keep their cool

The origin of Hawking radiation dates back to a 1974 letter written by Stephen Hawking called "Black hole explosions?" that was published in Nature. The letter came about when Hawking considered the implications of quantum physics on the formalism of black holes, phenomena that arise from Albert Einstein's theory of general relativity. This was interesting because quantum theory and general relativity are two theories that notoriously resist unification, even today.

Hawking radiation has remained troubling, and undetected, for 50 years now, for two possible reasons: first, most black holes might not emit this thermal radiation at all; and second, if they do, the emission may not be detectable. Black holes are, in general, very strange objects to begin with, and therefore complex to study.

"What is mind-bending is that black holes have temperatures that are inversely proportional to their masses. This means the more massive they are, the colder they are, and the less massive they are, the hotter they are," Sannino said.

Even in the emptiest regions of space, you'll find temperatures of around minus 454 degrees Fahrenheit (minus 270 degrees Celsius). That's because of a uniform field of radiation left over from just after the Big Bang, called the "cosmic microwave background," or "CMB." This field is often called a "cosmic fossil," too, because of how utterly old it is. Furthermore, according to the second law of thermodynamics, heat cannot spontaneously flow from a colder body to a hotter one.

" Black holes heavier than a few solar masses are stable because they are colder than the CMB," Sannino said. "Therefore, only smaller black holes are expected to emit Hawking radiation that could potentially be observed."


Research author Giacomo Cacciapaglia of the French National Centre for Scientific Research told Space.com that, because the vast majority of black holes in today's universe are of astrophysical origin, with masses exceeding a few times that of the sun, they cannot emit observable Hawking radiation. 

"Only black holes lighter than the moon can emit Hawking radiation. We propose that this type of black hole may be produced and ejected during a black hole merger and start radiating right after its production," Cacciapaglia added. "Black hole morsels would produced in large numbers in the vicinity of a black hole merger."

However, these black holes are too small to create effects that allow them to be imaged directly, as the Event Horizon Telescope has been doing for supermassive black holes by focusing on the glowing material that surrounds them.

The team suggests there is a unique signature that could be used to indicate the existence of these morsel black holes. This would come in the form of a powerful blast of high-energy radiation called a gamma-ray burst occurring in the same region of the sky where a black hole merger has been detected.

[Illustration: a jet produced by an unusually bright gamma-ray burst. Scientists think next-generation laser facilities will be able to recreate the fundamental physics at the heart of these gamma-ray explosions.]

The researchers said these Bocconcini di Buchi Neri black holes would radiate Hawking radiation faster and faster as they lose mass, hastening their explosive demises. Those possessing masses of around 20,000 tons would take an estimated 16 years to evaporate, while examples of morsel black holes with masses of at least 100,000 kilotons would potentially last as long as hundreds of years.

The morsels' evaporation and destruction would produce photons exceeding the trillion electron volts (TeV) energy range. To get an idea of how energetic that is, Sannino said that CERN's Large Hadron Collider (LHC) in Europe, the largest particle accelerator on the planet, collides protons head-on with a total energy of 13.6 TeV.
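For a sense of scale (a standard unit conversion, not a figure from the study), one TeV works out to:

```latex
1\ \mathrm{TeV} \;=\; 10^{12}\ \mathrm{eV} \;\approx\; 1.6\times10^{-7}\ \mathrm{J}
```

That is tiny in everyday terms, often compared to the kinetic energy of a flying mosquito, but it is an enormous amount of energy for a single photon to carry.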


However, the researchers do have an idea of how to detect these morsel black holes as they evaporate. First, black hole mergers could be detected via the emission of gravitational waves, the tiny ripples in spacetime predicted by Einstein that are emitted as the objects collide.

Astronomers could then follow up on those mergers with gamma-ray telescopes, such as the High-Altitude Water Cherenkov gamma-ray Observatory, which can spot photons with energies between 100 gigaelectronvolts (GeV) and 100 TeV.

The team acknowledges there is a long way to go before the existence of morsel black holes can be confirmed, and therefore a long way to go before we can validate Hawking radiation once and for all.

"As this is a new idea, there is a lot of work to do. We plan to better model the Hawking radiation emission at high energies beyond the TeV scale, where our knowledge of particle physics becomes less certain, and this will involve experimental collaborations in searching for these unique signatures within their dataset," Cacciapaglia concluded. "On a longer timeline, we plan to investigate in detail the production of morsels during catastrophic astrophysical events like black hole mergers."

The team's research is available as a preprint paper on the repository arXiv.


Robert Lea is a science journalist in the U.K. whose articles have been published in Physics World, New Scientist, Astronomy Magazine, All About Space, Newsweek and ZME Science. He also writes about science communication for Elsevier and the European Journal of Physics. Rob holds a bachelor of science degree in physics and astronomy from the U.K.’s Open University. Follow him on Twitter @sciencef1rst.



