6 hours ago, wildturkey said: Once again I am not selecting anyone, therefore there is no self-selection Bias, if the results are distorted it does not matter as I am merely seeking to report the results, and if distorted they will be reported as such.
That is exactly self-selection bias. The participants of the survey select themselves to participate, so you only get results from those who choose to respond. Such surveys have a name in research circles: "SLOP," from "Self-Selected Opinion Poll," a mocking term because SLOP results are nearly worthless academically.
Let's look at a few of the biases at work here.
Survey participants are already biased toward people who use this site, a social discussion board, and who frequent the Lounge in particular. Results would have been somewhat different had you posted in other forums, like the networking forum. Simply being on this board means participants are more social than a random, representative sample would be, and more likely to actively discuss the topics covered here with other people online. Opinions on the board may also differ considerably from a representative sample's, for example if we collectively believe certain work belongs on other systems, or that particular vulnerabilities are a problem even when that doesn't match actual industry data. Because you don't know how the self-selected respondents compare to a representative sample, you have no idea how far their answers are shifted.
Survey participants are also biased toward those who want to answer surveys and are willing to jump through the hoops to do so. Industry professionals generally can't be bothered, so you will get people with more time on their hands than a random sample, and those people are less likely to be the ones you want. Again, you won't be able to tell how far the results are shifted. Are you seeing more industry professionals, fewer, or even raw beginners? Are respondents misrepresenting themselves, perhaps calling themselves expert after a few months of experience, or intermediate when they are far above the norm? The Dunning-Kruger effect applies here, both for those below the norm and those above it.
The difficulty is that, with no proper representative sample to compare against, you cannot compute how biased your results are, nor can you determine their accuracy (more properly, their inaccuracy). This is why self-selected surveys are generally wildly inaccurate.
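The effect is easy to demonstrate with a toy simulation. Here's a minimal sketch (all population sizes and response rates are invented for illustration): when the people who hold an opinion are more motivated to respond, the self-selected estimate lands far from the true rate, and without outside data you couldn't know by how much.

```python
import random

random.seed(0)

# Hypothetical population: 1 = "agrees", 0 = "disagrees".
# True agreement rate is 30% (an assumed number for this sketch).
population = [1] * 300 + [0] * 700

def responds(opinion):
    # Assumed response propensities: people who agree are much more
    # motivated to answer the poll (60%) than people who don't (10%).
    p = 0.6 if opinion == 1 else 0.1
    return random.random() < p

# Only the self-selected respondents are ever observed.
respondents = [x for x in population if responds(x)]

true_rate = sum(population) / len(population)
slop_rate = sum(respondents) / len(respondents)
print(f"true agreement rate: {true_rate:.2f}")  # 0.30 by construction
print(f"SLOP estimate:       {slop_rate:.2f}")  # lands far above the truth
```

In expectation the SLOP estimate here is roughly 180/(180+70) ≈ 0.72, more than double the true rate, and nothing in the respondents' answers alone reveals the distortion.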
For the results to be meaningful, the pollster must identify the potential respondents for the survey, must select them from representative pools, and must account for non-response in the results. The survey itself must also be designed in non-biasing ways, including carefully crafted questions that do not presume a specific result, and question ordering that avoids leading, or "pushing," respondents toward expected answers. The allowed responses must account for this as well; forcing respondents into particular buckets introduces further bias.
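"Accounting for non-response" is usually done by reweighting. A common technique is post-stratification: when the population shares of each stratum are known from outside data, each stratum's answers are weighted by its true share rather than by how many of its members happened to respond. A minimal sketch, with invented strata and numbers:

```python
# Known population shares for each stratum (e.g. from industry census
# data) -- these figures are invented for illustration.
population_share = {"junior": 0.5, "senior": 0.5}

# Observed respondents: seniors over-responded relative to their share.
respondents = {
    "junior": {"n": 20, "mean_answer": 2.0},
    "senior": {"n": 80, "mean_answer": 4.0},
}

total_n = sum(s["n"] for s in respondents.values())

# Naive (unweighted) estimate: just average whoever showed up.
naive = sum(s["n"] * s["mean_answer"] for s in respondents.values()) / total_n

# Post-stratified estimate: weight each stratum's mean by its known
# population share instead of its respondent count.
weighted = sum(population_share[k] * s["mean_answer"]
               for k, s in respondents.items())

print(f"naive estimate:    {naive:.2f}")     # 3.60, skewed toward seniors
print(f"weighted estimate: {weighted:.2f}")  # 3.00
```

Note the catch: this correction only works if response is effectively random *within* each stratum, which is exactly the assumption a self-selected poll cannot verify.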
Designing surveys for objective data and concrete results is hard enough; subjective opinion polls are extremely difficult to craft with any research credibility.
But if your research adviser is willing to let you do 'research' using SLOP data, go right ahead. Just don't expect any reputable journal to publish it; the better ones will spot the methodology and reject the paper.