Is it too much effort, with too many studies to collect data from, to be practical? No.

The creation of clinical guidelines involves massive reviews of existing research. The same is true of large systematic reviews and meta-analyses.

These reviews involve inclusion criteria because some types of research study matter more than others. For example, double-blind randomised controlled trials are the most favoured type of single study, and recent studies are preferred to older ones.

In the first iteration of applying this method of collecting single-symptom data from pre-existing research, the inclusion criteria limit the amount of work necessary. It does involve more work, time and effort than typically goes into creating and updating clinical guidelines, but only a small percentage more.

At the same time, the inclusion criteria can be varied. Not everyone agrees on the same inclusion criteria, or on which qualities of research meet a high enough standard of science to inform clinical decisions about treatment (solutions).

I am imagining how software would help in applying the science to clinical practice. Once the single-measure data is collected from pre-existing studies, the software would allow the user to choose their own inclusion criteria – simple drop-down menus would, for example, allow the choice between using all studies or only randomised controlled trials, and between using the most recent studies alone or including older ones too.
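To make the idea concrete, here is a minimal sketch of how such filtering might work. The study fields, names and example entries are all hypothetical illustrations, not real data – the point is only that once single-measure data sits in a structured form, applying a user's chosen inclusion criteria is a simple filter.

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    year: int
    design: str          # e.g. "RCT", "cohort", "case series"
    double_blind: bool
    symptom: str         # the single symptom the study measured
    effect_size: float   # standardised change on that one measure

def filter_studies(studies, rct_only=False, double_blind_only=False, min_year=None):
    """Apply the user's chosen inclusion criteria (the drop-down choices)."""
    selected = []
    for s in studies:
        if rct_only and s.design != "RCT":
            continue
        if double_blind_only and not s.double_blind:
            continue
        if min_year is not None and s.year < min_year:
            continue
        selected.append(s)
    return selected

# Illustrative entries only
studies = [
    Study("Trial A", 2021, "RCT", True, "insomnia", 0.62),
    Study("Trial B", 1998, "RCT", False, "insomnia", 0.40),
    Study("Survey C", 2019, "cohort", False, "insomnia", 0.25),
]

# Strict criteria: only recent double-blind RCTs
strict = filter_studies(studies, rct_only=True, double_blind_only=True, min_year=2010)
# strict contains only "Trial A"
```

Relaxing the criteria (for example, calling `filter_studies(studies)` with no arguments) includes all three studies, which is exactly the flexibility the drop-down menus would give different users with different standards.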

There are a lot of diagnostic labels in psychiatry. The most common ones have clinical guidelines, but others don't. Yet research does exist for the victims of uncommon psychiatric diagnoses and labels. Collecting the unpublished single-measure data offers a hope of giving good solutions (treatments) for things that don't yet have specific clinical guidelines.

There's also another advantage when looking at treatments that fail to be proven to work overall but are successful at changing one single symptom (one trait or thing important to the individual to fix). As I explain elsewhere, averaging multiple single-symptom measures loses the specificity of the scientific method that makes it so successful in endeavours of science outside psychiatric science.

What I am saying is that there's effort involved, but it creates so many more possibilities to give the right solution (treatment) with the minimum of trial and error. The scientific method, used correctly, will give the right solution on the first attempt. The method I'm describing here is one small way to reach towards that goal: yet another thing so important to the individual, which is to get the right solution recommended on the first attempt.

Using the data collected, there's a way to make the application of science to clinical practice even better: aligning multiple single symptoms with the solutions that are best for those multiple symptoms. This requires much more time and effort in terms of human value. By that I mean it requires more time to measure the severity of every single symptom, and then use the single-symptom data to personalise the clinical recommendation – but this is only the time and effort of one of the monsters who call themselves the human race. Once the single-symptom data is collected together, this possibility requires little more effort to create a software program where multiple single-symptom measures can be inputted to give the best solution for the specific presentation of the individual.
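One possible sketch of that personalisation step, under loudly stated assumptions: the per-symptom effect sizes and symptom names below are invented placeholders, and weighting each treatment's single-symptom effects by the individual's reported severities is just one simple scoring rule among many that such software could offer.

```python
# Hypothetical per-treatment effect sizes on single symptoms,
# as might be extracted from pre-existing studies (illustrative numbers).
effects = {
    "treatment_x": {"insomnia": 0.6, "anxiety": 0.2, "low_mood": 0.1},
    "treatment_y": {"insomnia": 0.1, "anxiety": 0.5, "low_mood": 0.4},
}

def rank_treatments(effects, severities):
    """Score each treatment by its single-symptom effect sizes weighted by
    the individual's reported severity for each symptom, best first."""
    scores = {}
    for treatment, per_symptom in effects.items():
        scores[treatment] = sum(
            per_symptom.get(symptom, 0.0) * severity
            for symptom, severity in severities.items()
        )
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# An individual who rates anxiety as their most severe symptom
ranking = rank_treatments(effects, {"insomnia": 1, "anxiety": 3, "low_mood": 2})
# treatment_y ranks first: 0.1*1 + 0.5*3 + 0.4*2 = 2.4, versus 1.4 for treatment_x
```

Because the severities come from the individual, the same evidence base produces different recommendations for different presentations – which is the point of collecting the data at the single-symptom level rather than as an average.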

(It’s not just the beauty of the scientific method I am trying to achieve here. It is also about how much consent and free will and voluntary choices about what is important should matter to care – it is the beauty of the democratic method (for all its flaws) I’m also trying to achieve. The concept of user involvement is a democratic force against medical tyranny.)
