Statistical measures such as ‘significant’ test results and P values always need careful interpretation, because of what they really mean:
the probability of observing data at least as extreme as those obtained, under the assumption of a null hypothesis (of no correlation or no effect). They therefore only reflect how likely the observed data would be if the null hypothesis were true, not how likely the null hypothesis itself is to be true.
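As a minimal sketch of this distinction (hypothetical data; Python with NumPy and SciPy assumed), a standard two-sample t-test returns a P value that quantifies only how surprising the observed difference would be if the null hypothesis held:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=30)   # hypothetical control group
treated = rng.normal(loc=10.5, scale=2.0, size=30)   # hypothetical treated group

# The P value is the probability of a difference at least this extreme
# arising if the null hypothesis (no difference) were true; it is not the
# probability that the null hypothesis is true.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")

A small P value here says only that such a difference would rarely arise from sampling variation alone; it says nothing about how large, or how biologically relevant, the difference is.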
Reporting and graphically displaying effect sizes and confidence intervals can help to avoid the yes/no decision trap of statistical tests and to illustrate the size of effects in the context of biological relevance.
Emphasizing the size of effects, and the confidence we have in them, avoids two problems: a small, biologically unimportant effect being declared statistically significant, and the artificiality of dichotomizing a result into a positive or negative finding on the basis of a P value.
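The following sketch (again with hypothetical data, using the difference in means with a t-based 95% confidence interval as the effect measure) illustrates the point: with a large sample, a trivially small effect can yield a ‘significant’ P value, whereas the effect size and its interval make the actual magnitude visible:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5000
control = rng.normal(10.00, 2.0, n)   # hypothetical measurements
treated = rng.normal(10.15, 2.0, n)   # true difference of only 0.15 units

# Effect size: difference in means, with a t-based 95% confidence interval.
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
half_width = stats.t.ppf(0.975, df=2 * n - 2) * se
_, p_value = stats.ttest_ind(treated, control)

print(f"P = {p_value:.4f}")   # very likely 'significant' at this sample size
print(f"Effect = {diff:.2f} units (95% CI {diff - half_width:.2f} to {diff + half_width:.2f})")

Readers can then judge whether an effect of roughly 0.15 units matters biologically, a judgement that the P value alone cannot support.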