We present a new method for visualizing similarities between objects. The method is called VOS, which is an abbreviation for visualization of similarities. The aim of VOS is to provide a low-dimensional visualization in which objects are located in such a way that the distance between any pair...
Persistent link: https://www.econbiz.de/10010730862
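The record above describes a layout in which more similar objects are placed closer together. As a minimal sketch only (the truncated abstract does not give the exact objective, so this assumes one common formulation: minimize a similarity-weighted sum of squared distances under a scale constraint on the mean pairwise distance), a VOS-style layout can be prototyped as follows:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import pdist

    def vos_layout(sim, dim=2, seed=0):
        # Sketch of a VOS-style layout: minimize the similarity-weighted sum
        # of squared distances. The scale constraint (mean pairwise distance
        # equal to 1) is handled by optimizing a scale-invariant ratio and
        # rescaling the solution at the end.
        n = sim.shape[0]
        s = sim[np.triu_indices(n, k=1)]      # upper-triangular similarities

        def objective(flat):
            d = pdist(flat.reshape(n, dim))   # pairwise Euclidean distances
            return np.sum(s * d ** 2) / np.sum(d) ** 2

        x0 = np.random.default_rng(seed).standard_normal(n * dim)
        res = minimize(objective, x0, method="L-BFGS-B")
        x = res.x.reshape(n, dim)
        d = pdist(x)
        return x * (d.size / d.sum())         # mean pairwise distance = 1

The ratio form is used because it is invariant under rescaling of the coordinates, so the scale constraint can be imposed afterwards by a single multiplication.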
In a recent article in JASIST, L. Leydesdorff and L. Vaughan (2006) asserted that raw cocitation data should be analyzed directly, without first applying a normalization such as the Pearson correlation. In this communication, it is argued that there is nothing wrong with the widely adopted...
Persistent link: https://www.econbiz.de/10010730955
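For readers unfamiliar with the normalization at issue, the step the two sides are debating is replacing a raw cocitation matrix with the matrix of Pearson correlations between its rows (the cocitation profiles). A toy illustration with invented numbers:

    import numpy as np

    # Hypothetical 4x4 raw cocitation matrix (symmetric; values invented)
    C = np.array([[ 0., 12.,  3.,  1.],
                  [12.,  0.,  5.,  2.],
                  [ 3.,  5.,  0.,  9.],
                  [ 1.,  2.,  9.,  0.]])

    # Pearson normalization: correlate each pair of rows (cocitation
    # profiles). How to treat the diagonal is itself debated in this
    # literature; this toy example simply leaves the zeros in place.
    R = np.corrcoef(C)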
We introduce two new measures of the performance of a scientist. One measure, referred to as the hα-index, generalizes the well-known h-index or Hirsch index. The other measure, referred to as the gα-index, generalizes the closely related g-index. We analyze theoretically the relationship...
Persistent link: https://www.econbiz.de/10010731216
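The abstract is truncated before the definitions, so the sketch below assumes the natural reading of the generalization: the hα-index as the largest h such that h publications each have at least α·h citations (α = 1 recovers the h-index), and the gα-index as the largest g such that the g most cited publications together have at least α·g² citations (α = 1 recovers the g-index).

    def h_alpha(citations, alpha=1.0):
        # Largest h with h publications having >= alpha*h citations each.
        cits = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(cits, start=1):
            if c >= alpha * i:
                h = i
            else:
                break
        return h

    def g_alpha(citations, alpha=1.0):
        # Largest g whose top g publications total >= alpha*g^2 citations.
        cits = sorted(citations, reverse=True)
        total, g = 0, 0
        for i, c in enumerate(cits, start=1):
            total += c
            if total >= alpha * i * i:
                g = i
        return g

    # Example: h_alpha([10, 8, 5, 4, 3]) == 4, but with alpha=2.0 it is 2.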
In this paper, a bibliometric study of the computational intelligence field is presented. Bibliometric maps showing the associations between the main concepts in the field are provided for the periods 1996–2000 and 2001–2005. Both the current structure of the field and the evolution of the...
Persistent link: https://www.econbiz.de/10010731283
In scientometric research, the use of co-occurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this paper, we...
Persistent link: https://www.econbiz.de/10010731291
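Two of the similarity measures most often discussed for this kind of normalization are the association strength (a probabilistic measure) and the cosine. A sketch, assuming occurrence totals are taken as the row sums of the co-occurrence matrix, which is one common convention:

    import numpy as np

    def association_strength(C):
        # c_ij / (s_i * s_j): co-occurrence relative to what statistical
        # independence of the occurrences would predict.
        s = C.sum(axis=1)
        return C / np.outer(s, s)

    def cosine(C):
        # c_ij / sqrt(s_i * s_j): the Salton cosine on occurrence totals.
        s = C.sum(axis=1)
        return C / np.sqrt(np.outer(s, s))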
In a recent paper, Egghe [Egghe, L. (in press). Mathematical derivation of the impact factor distribution. Journal of Informetrics] provides a mathematical analysis of the rank-order distribution of journal impact factors. We point out that Egghe’s analysis relies on an unrealistic assumption,...
Persistent link: https://www.econbiz.de/10010731304
In a recent paper in the Journal of the American Society for Information Science and Technology, Leydesdorff and Vaughan assert that raw cocitation data should be analyzed directly, without first applying a normalization like the Pearson correlation. In this report, it is argued that there is...
Persistent link: https://www.econbiz.de/10010731313
We are concerned with evolutionary algorithms that are employed for economic modeling purposes. We focus in particular on evolutionary algorithms that use a binary encoding of strategies. These algorithms, commonly referred to as genetic algorithms, are popular in agent-based computational...
Persistent link: https://www.econbiz.de/10010731317
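As background for readers who have not seen a binary-encoded genetic algorithm, the following is a generic textbook sketch (fitness-proportionate selection, one-point crossover, bit-flip mutation), not the authors' economic model:

    import random

    def genetic_algorithm(fitness, n_bits, pop_size=50, generations=100,
                          crossover_rate=0.9, mutation_rate=0.01, seed=0):
        # Generic binary-encoded GA: roulette-wheel selection, one-point
        # crossover, bit-flip mutation. Fitness values must be positive.
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            weights = [fitness(ind) for ind in pop]
            parents = rng.choices(pop, weights=weights, k=pop_size)
            next_pop = []
            for i in range(0, pop_size - 1, 2):
                a, b = parents[i][:], parents[i + 1][:]
                if rng.random() < crossover_rate:     # one-point crossover
                    point = rng.randrange(1, n_bits)
                    a[point:], b[point:] = b[point:], a[point:]
                for child in (a, b):                  # bit-flip mutation
                    for j in range(n_bits):
                        if rng.random() < mutation_rate:
                            child[j] ^= 1
                next_pop.extend([a, b])
            pop = next_pop
        return max(pop, key=fitness)

    # Example: evolve a 20-bit string that maximizes its number of ones.
    best = genetic_algorithm(lambda ind: 1 + sum(ind), n_bits=20)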
We present a mathematical analysis of the long-run behavior of genetic algorithms that are used for modeling social phenomena. The analysis relies on commonly used mathematical techniques in evolutionary game theory. Assuming a positive but infinitely small mutation rate, we derive results that...
Persistent link: https://www.econbiz.de/10010731342
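The truncated abstract does not say which techniques the analysis uses, but the standard workhorse of evolutionary game theory, to which small-mutation analyses of this kind are typically related, is the replicator dynamics (given here in LaTeX as general background, not as a claim about the paper's derivation):

    \dot{x}_i = x_i \Big( f_i(x) - \sum_j x_j f_j(x) \Big),

where x_i is the population share of strategy i and f_i(x) its expected payoff; a small positive mutation rate adds a perturbation term that keeps all shares strictly positive.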
We provide a number of new insights into the methodological discussion about author cocitation analysis. We first argue that the use of the Pearson correlation for measuring the similarity between authors’ cocitation profiles is not very satisfactory. We then discuss what kind of similarity...
Persistent link: https://www.econbiz.de/10010731473