We present a new method for visualizing similarities between objects. The method is called VOS, which is an abbreviation for visualization of similarities. The aim of VOS is to provide a low-dimensional visualization in which objects are located in such a way that the distance between any pair...
Persistent link: https://www.econbiz.de/10010730862
In a recent article in JASIST, L. Leydesdorff and L. Vaughan (2006) asserted that raw cocitation data should be analyzed directly, without first applying a normalization such as the Pearson correlation. In this communication, it is argued that there is nothing wrong with the widely adopted...
Persistent link: https://www.econbiz.de/10010730955
The decision tree algorithm for monotone classification presented in [4, 10] requires strictly monotone data sets. This paper addresses the problem of noise due to violation of the monotonicity constraints and proposes a modification of the algorithm to handle noisy data. It also presents...
Persistent link: https://www.econbiz.de/10010730978
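The monotonicity constraints mentioned above can be checked mechanically: a data set violates monotonicity whenever one example dominates another on every attribute yet receives a strictly lower class label. A minimal sketch of such a check (the function name and sample data are hypothetical, not from the paper):

```python
from itertools import combinations

def monotonicity_violations(X, y):
    """Return pairs (i, j) where example i dominates example j on every
    attribute but has a strictly lower label. A strictly monotone data
    set, as required by the original algorithm, yields an empty list."""
    def dominates(a, b):
        return all(ai >= bi for ai, bi in zip(a, b))
    viol = []
    for i, j in combinations(range(len(X)), 2):
        if dominates(X[i], X[j]) and y[i] < y[j]:
            viol.append((i, j))
        if dominates(X[j], X[i]) and y[j] < y[i]:
            viol.append((j, i))
    return viol

# Example 1 scores at least as well as example 0 on both attributes
# but gets a lower label, so the pair is flagged as noise.
print(monotonicity_violations([(1, 1), (2, 2)], [1, 0]))
```

Any pairs returned are exactly the noise the modified algorithm must tolerate.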
We introduce two new measures of the performance of a scientist. One measure, referred to as the hα-index, generalizes the well-known h-index or Hirsch index. The other measure, referred to as the gα-index, generalizes the closely related g-index. We analyze theoretically the relationship...
Persistent link: https://www.econbiz.de/10010731216
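One common reading of the generalization above (the precise definition is given in the paper itself) takes the hα-index to be the largest h such that h publications each have at least α·h citations; with α = 1 this reduces to the ordinary h-index. A sketch under that assumption:

```python
def h_alpha(citations, alpha=1.0):
    """Largest h such that h publications each have at least alpha*h
    citations. With alpha=1 this is the ordinary h-index (Hirsch index)."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= alpha * i:
            h = i
        else:
            break
    return h

counts = [10, 8, 5, 4, 3]
print(h_alpha(counts))           # ordinary h-index of this record
print(h_alpha(counts, alpha=2))  # stricter threshold, smaller index
```

Raising α tightens the citation threshold per paper, so hα is non-increasing in α.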
The bankruptcy prediction problem can be considered an ordinal classification problem. The classical theory of Rough Sets describes objects by discrete attributes, and does not take into account the ordering of the attribute values. This paper proposes a modification of the Rough Set...
Persistent link: https://www.econbiz.de/10010731259

In this paper, a bibliometric study of the computational intelligence field is presented. Bibliometric maps showing the associations between the main concepts in the field are provided for the periods 1996–2000 and 2001–2005. Both the current structure of the field and the evolution of the...
Persistent link: https://www.econbiz.de/10010731283
In scientometric research, the use of co-occurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this paper, we...
Persistent link: https://www.econbiz.de/10010731291
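Three similarity measures commonly compared for this normalization purpose are the cosine, the Jaccard index, and the association strength; whether these are exactly the measures analyzed in the paper is an assumption here, and the association strength is given only up to a constant factor. With c_ij the number of co-occurrences of items i and j, and s_i, s_j their total occurrence counts:

```python
import math

def cosine(cij, si, sj):
    """Cosine similarity on occurrence totals: c_ij / sqrt(s_i * s_j)."""
    return cij / math.sqrt(si * sj)

def jaccard(cij, si, sj):
    """Jaccard index: co-occurrences over occurrences of either item."""
    return cij / (si + sj - cij)

def association_strength(cij, si, sj):
    """Association strength (proximity index), up to a constant factor."""
    return cij / (si * sj)

# Hypothetical counts: 10 co-occurrences, totals of 100 and 25.
print(cosine(10, 100, 25), jaccard(10, 100, 25), association_strength(10, 100, 25))
```

The measures disagree systematically when s_i and s_j are very unequal, which is why the choice of measure matters for normalization.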
In a recent paper, Egghe [Egghe, L. (in press). Mathematical derivation of the impact factor distribution. Journal of Informetrics] provides a mathematical analysis of the rank-order distribution of journal impact factors. We point out that Egghe’s analysis relies on an unrealistic assumption,...
Persistent link: https://www.econbiz.de/10010731304
This paper focuses on the problem of monotone decision trees from the point of view of the multicriteria decision aid methodology (MCDA). By taking into account the preferences of the decision maker, an attempt is made to bring closer similar research within machine learning and MCDA. The paper...
Persistent link: https://www.econbiz.de/10010731315
We are concerned with evolutionary algorithms that are employed for economic modeling purposes. We focus in particular on evolutionary algorithms that use a binary encoding of strategies. These algorithms, commonly referred to as genetic algorithms, are popular in agent-based computational...
Persistent link: https://www.econbiz.de/10010731317
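A binary-encoded genetic algorithm of the kind discussed above can be sketched as follows. This is a generic illustration on the OneMax problem (maximize the number of 1-bits), not the paper's economic model; the operator choices (tournament selection, one-point crossover, bit-flip mutation) and all parameter values are assumptions.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=20,
                      generations=50, p_mut=0.02, seed=0):
    """Minimal binary-encoded GA: tournament selection, one-point
    crossover, and per-bit mutation with probability p_mut."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: the fitness of a binary strategy is its number of 1-bits.
best = genetic_algorithm(sum)
print(sum(best))
```

In an agent-based economic setting the bit string would encode an agent's strategy and the fitness function its payoff; only the encoding and operators carry over from this sketch.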