It is common in some academic fields to report only the p-value from a hypothesis test. For example, “We found a significant difference between the treatment group and control group (p=0.002)”. Other fields require including the test statistic value and additional information, e.g., https://my.ilstu.edu/~jhkahn/apastats.html.
As long as the hypothesis test was performed correctly, reporting a p-value is probably sufficient. But consider the following two versions of the previous example:
We found a significant difference between the treatment group and control group (t(99.3)=4.5, p=0.002).
vs.
We found a significant difference between the treatment group and control group (F(3,99), p=0.002).
In the first, the non-integer degrees of freedom paired with a t statistic suggest the author performed a two-sample t test assuming unequal variances (i.e., a Welch test, the default for t.test() in the R language). This makes sense given the context. In the second, we may have caught an error. The F statistic suggests an ANOVA-type test, in this case a one-way ANOVA with 4 groups since the numerator degrees of freedom is 3, which does not test the difference between any two particular groups. Testing pairwise differences after an ANOVA requires a post hoc test.
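To make the distinction concrete, here is a minimal R sketch using simulated (made-up) data; the sample sizes, means, and group labels are assumptions chosen only so the degrees of freedom resemble the examples above.

```r
# Welch two-sample t test: R's default t.test() assumes unequal variances
# and produces non-integer degrees of freedom.
set.seed(1)
treatment <- rnorm(60, mean = 1.0, sd = 1.2)
control   <- rnorm(55, mean = 0.4, sd = 0.8)

welch <- t.test(treatment, control)   # var.equal = FALSE by default
welch$parameter                       # df: non-integer under the Welch approximation
welch$statistic                       # t value
welch$p.value

# An F statistic on (3, 99) degrees of freedom implies a one-way ANOVA
# with 4 groups and 103 observations. The overall F test does not compare
# any two specific groups; that requires a post hoc test such as Tukey's HSD.
grp <- factor(rep(c("control", "A", "B", "C"), length.out = 103))
y   <- rnorm(103) + 0.5 * (grp == "A")
fit <- aov(y ~ grp)
summary(fit)     # F reported on 3 and 99 degrees of freedom
TukeyHSD(fit)    # post hoc pairwise comparisons between groups
```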
To go one step further, I think it is also good practice to report an effect size measure, a confidence interval, or some other measure of the magnitude of the effect, such as
We found a significant difference between the treatment group and control group (t(99.5)=4.2, p=0.002, Cohen’s d=0.2).
Now we can see an effect size measure to help us gauge the “practical significance”.
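Here is a rough sketch of how these extra quantities might be computed in R, again with simulated data; the pooled-standard-deviation formula below is one common definition of Cohen’s d, and the confidence interval comes directly from t.test().

```r
# Sketch: add an effect size and a confidence interval to the report.
set.seed(1)
treatment <- rnorm(60, mean = 1.0, sd = 1.2)
control   <- rnorm(55, mean = 0.4, sd = 0.8)

# Cohen's d using the pooled standard deviation (one common definition).
cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  pooled_sd <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / pooled_sd
}

welch <- t.test(treatment, control)
welch$conf.int                    # 95% CI for the difference in means
cohens_d(treatment, control)      # standardized mean difference
```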
Including these additional details of a hypothesis test takes little extra effort but adds much clarity to the statistical reporting.