Teachers and paid researchers need pay rises; wannabe teachers and paid researchers need employment. In order to gauge who deserves a pay rise, or who is the best candidate for a post, several factors are considered. One of the minor ones is the quality of their research work.
However, what determines quality? The answer may be one of the following two points.
1. The number of people whom the work has influenced positively.
2. The new leads that the work has given to people who already know quite a bit about such work. This group is usually smaller than the one mentioned in the previous point.
Now, how does one measure the number of people mentioned in point 1 or in point 2?
Conventional academic practice requires the research work to be made public in some form (part of a journal article or book).
However, other forms of publication also exist, especially digital ones such as websites, blogs and posts on social network forums. There are also less recognised forms of publication: performance, for instance, is also a way of expressing the result of research work, yet measuring the quality of a performance has not so far been attempted in academic quantitative assessment. None of these less conventional forms of expressing research results is acknowledged in conventional academic assessment.
Even if one sticks to conventional modes of publishing academic research work, measuring its quality is still a problematic issue. Standard modes of assessment rely on citations of an existing research publication. However, much research work exerts great influence, or gives new leads, in ways that cannot be acknowledged within the formal space of a future publication.
For instance, a researcher may be heavily influenced by the presentation of an existing research work. That work may not be directly linked to the later publication, but the researcher may be trying to emulate, or deviate from, certain modes of presentation. Such influences are rarely acknowledged, as acknowledging them seems unconventional. It is a silent form of intertextuality.
Or, one may not feel the need to cite an earlier publication directly. This happens when one feels that the point being made in the new publication is too well known to need backing up with references to earlier work.
Thus, one is left with only direct citations as a way of measuring the quality of research. The conventional methods of measuring these citations (some of which apply only to science and social science journals) are as follows.
1. Impact factor – Impact factor refers to journals, rather than to a specific research publication. Wikipedia states that “The impact factor of a journal is the average number of citations received per paper published in that journal during the two preceding years. For example, if a journal has an impact factor of 3 in 2008, then its papers published in 2006 and 2007 received 3 citations each on average in 2008. The 2008 impact factor of a journal would be calculated as follows:
A = the number of times that articles published in that journal in 2006 and 2007, were cited by articles in indexed journals during 2008.
B = the total number of “citable items” published by that journal in 2006 and 2007. (“Citable items” are usually articles, reviews, proceedings, or notes; not editorials or letters to the editor.)
2008 impact factor = A/B.
(Note that 2008 impact factors are actually published in 2009; they cannot be calculated until all of the 2008 publications have been processed by the indexing agency.)”
2. Eigenfactor – The Eigenfactor also refers to journals, rather than to a specific research publication. This, as Wikipedia puts it, “is a rating of the total importance of a ... journal. Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the eigenfactor than those from poorly ranked journals.” This is similar to the PageRank logic adopted by the founders of Google (a second sketch after this list mimics this weighting with a simple power iteration).
3. h-index – The h-index refers to an individual rather than to a journal or a specific research publication. A researcher has index “h” if “h” of the researcher’s “N” (total) papers have at least “h” citations each, and the other “(N - h)” papers have no more than “h” citations each. Only the most highly cited articles of a researcher contribute to the h-index, and only up to a point: if one of these highly cited articles is path-breaking, has changed the way the world thinks about itself, and has far more citations than the other highly cited articles, those extra citations will not add anything to the h-index. The h-index value, unlike the impact factor and the Eigenfactor, cannot be a fraction; it has to be a whole number.
4. g-index – The g-index also refers to an individual rather than to a journal or a specific research publication. “Given a set of articles ranked in decreasing order of the number of citations that they received, the g-index is the (unique) largest number such that the top g articles received (together) at least g*g (or g-square) citations.” However, unlike the h-index, the g-index gives credit to the full citation counts of a researcher’s most cited papers. Thus, if one of the researcher’s articles is very highly cited, it will raise that researcher’s g-index even if the other articles have not been cited much. The g-index value, like the h-index value, is always a whole number and never a fraction. (A sketch after this list illustrates how both indices, together with the impact factor formula, work out on concrete numbers.)
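To make the arithmetic behind these definitions concrete, here is a minimal Python sketch of how an impact factor, an h-index and a g-index can be computed. It only illustrates the formulas quoted above and is not any official implementation; the function names and the citation figures are entirely hypothetical.

```python
def impact_factor(citations_to_recent_items, citable_items):
    """Impact factor = A / B: citations received in the target year to items
    from the two preceding years, divided by the number of citable items
    published in those two years."""
    return citations_to_recent_items / citable_items


def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break  # counts are sorted, so no later paper can qualify
    return h


def g_index(citation_counts):
    """Largest g such that the top g papers together have at least g*g citations."""
    counts = sorted(citation_counts, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g


# Hypothetical example: one researcher with seven papers.
papers = [25, 8, 5, 3, 3, 1, 0]
print(impact_factor(150, 60))  # 2.5 -- 150 citations to 60 citable items
print(h_index(papers))         # 3  -- there are not 4 papers with 4+ citations each
print(g_index(papers))         # 6  -- the top 6 papers total 45 >= 36 citations
```

Note that in this sketch the g-index cannot exceed the number of papers; some formulations allow it to grow beyond that by padding the list with uncited papers.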
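The Eigenfactor's idea that a citation from a highly ranked journal should count for more can be sketched, very loosely, with the same power-iteration approach that underlies PageRank. The journal names, the citation matrix and the damping factor below are hypothetical; the real Eigenfactor calculation uses a five-year citation window, excludes journal self-citations and applies further normalisations that are omitted here.

```python
import numpy as np

# cites[i][j] = citations that journal i gives to journal j (made-up data)
journals = ["Journal A", "Journal B", "Journal C"]
cites = np.array([
    [0.0, 30.0, 10.0],
    [20.0, 0.0, 5.0],
    [5.0, 15.0, 0.0],
])

# Normalise each row so that every journal distributes one unit of
# "influence" across the journals it cites.
transition = cites / cites.sum(axis=1, keepdims=True)

# Power iteration: influence flows along citations, so a citation coming
# from a highly ranked journal ends up counting for more than one coming
# from a poorly ranked journal.
damping = 0.85  # same damping idea as PageRank
rank = np.full(len(journals), 1.0 / len(journals))
for _ in range(100):
    rank = (1 - damping) / len(journals) + damping * (rank @ transition)

for name, score in sorted(zip(journals, rank), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The loop converges quickly for a matrix this small; the point is only that the final scores depend on who cites a journal, not just on how often it is cited.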
As the Wikipedia articles on these four conventional methods of measuring research quality point out, all of them have drawbacks that people have noted. Yet one or more of these methods is used to determine the quality of research in conventional academic practice, not only in the sciences and social sciences but also in other academic streams. Humanities research, which, unlike most scientific research, often remains valid beyond three or five years, is measured using the same parameters.
Added to this is the issue of paid publication. Conventional publication takes the form of either books or articles. Whereas books are usually published by established publishers, articles are sometimes also published by small institutes. Most books and articles, though, are not free: either the author has to pay to get the material published, or the reader has to pay to read it. Open-access publications (no author fees, no reader fees) exist in the digital domain, but most journals that attract high citation counts avoid this open-access model. It is worth pointing out that researchers usually make no profit at all from paid publications; the profit goes solely to the publishing house. Yet authors continue to publish in these high-citation paid publications because doing so earns them higher credit points in academic assessments. Governments, such as those of the UK and India, advocate that all publicly funded research output be open access, yet such a policy is not rigorously implemented.