Bibliography on academic evaluation

Updated 3 years ago


On this page I will be developing a review (in chronological order) of articles that are important to the debate on academic evaluation and, in particular, on the evaluation of our graduate programs. The comments on the articles, as well as the selection of topics, reflect my own interpretation of the subject.



Stop the numbers game, David Lorge Parnas, Communications of the ACM, Vol. 50 No. 11, Pages 19-21, November 2007. DOI: 10.1145/1297797.1297815

“As a senior researcher, I am saddened to see funding agencies, department heads, deans, and promotion committees encouraging younger researchers to do shallow research. As a reader of what should be serious scientific journals, I am annoyed to see the computer science literature being polluted by more and more papers of less and less scientific value. As one who has often served as an editor or referee, I am offended by discussions that imply that the journal is there to serve the authors rather than the readers. Other readers of scientific journals should be similarly outraged and demand change. The cause of all of these manifestations is the widespread policy of measuring researchers by the number of papers they publish, rather than by the correctness, importance, real novelty, or relevance of their contributions. The widespread practice of counting publications without reading and judging them is fundamentally flawed for a number of reasons: …”

Let’s make science metrics more scientific, Julia Lane, Nature 464, 488-489, 25 March 2010. DOI: 10.1038/464488a. Published online 24 March 2010.

“Measuring and evaluating academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet the current measurement systems are inadequate. Widely used metrics, from the currently fashionable h-index to the 50-year-old citation index, are of limited use.”
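
To make this limitation concrete, the h-index compresses an entire publication record into a single integer: the largest h such that the author has h papers cited at least h times each. A minimal Python sketch (citation counts invented for illustration) shows how very different records collapse to the same value:

    # Sketch of the h-index computation; the input numbers are made up.
    def h_index(citations):
        # Largest h such that there are h papers with at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(ranked, start=1):
            if c < i:
                break
            h = i
        return h

    # Two very different careers reduce to the same single number:
    print(h_index([10, 8, 5, 4, 3]))     # 4
    print(h_index([400, 200, 5, 4, 1]))  # 4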

Hypercriticality, Moshe Y. Vardi, Communications of the ACM, Vol. 53 No. 7, Page 5, July 2010. DOI: 10.1145/1785414.1785415

In this text, Moshe Vardi, editor-in-chief of CACM, analyzes several hundred messages he received and makes clear that the computing community has developed such a critical stance that it can be called fratricidal. His analysis shows that this hypercritical attitude is discouraging the community; criticism should be directed at improving the work under review, not at destroying it.

Relatório do Seminário de Acompanhamento dos Programas de Pós-graduação da Área de Ciência da Computação (Report of the Monitoring Seminar for Graduate Programs in the Computer Science Area), March 18-21, 2013, CAPES headquarters, Brasília.

At this seminar, international consultants reviewed the evaluation process for graduate programs in Computer Science in Brazil. The conclusions are serious and far-reaching, and reading the document (in English) is essential to understanding the current problems with this evaluation.

“The meeting brought together the coordinators of the graduate programs in Computer Science, the members of the CAPES Computer Science committee, and four international guests: Prof. Hans-Ulrich Heiss (TU-Berlin), Prof. John Hopcroft (Cornell University), Prof. Michel Robert (Université Montpellier 2), and Prof. Eli Upfal (Brown University). On March 18, the coordinators of the programs rated 5, 6, and 7 (UFF, IME-USP, UNICAMP, UFPE, ICMC-USP, UFRGS, UFMG, COPPE-UFRJ, and PUC-Rio) presented a summary of their programs’ main indicators.”

The Leiden Manifesto for research metrics, Diana Hicks, Paul Wouters, Ludo Waltman, Sarah de Rijcke and Ismael Rafols, Nature, Vol. 520, No. 7548, Pages 429-431, 23 April 2015.

“Research evaluation has become routine and often relies on metrics. But it is increasingly driven by data and not by expert judgement. As a result, the procedures that were designed to increase the quality of research are now threatening to damage the scientific system. To support researchers and managers, five experts led by Diana Hicks, professor in the School of Public Policy at Georgia Institute of Technology, and Paul Wouters, director of CWTS at Leiden University, have proposed ten principles for the measurement of research performance: the Leiden Manifesto for Research Metrics, published as a comment in Nature.”

The Beckman report on database research, Daniel Abadi, Rakesh Agrawal, Anastasia Ailamaki, Magdalena Balazinska, Philip A. Bernstein, Michael J. Carey, Surajit Chaudhuri, Jeffrey Dean, AnHai Doan, Michael J. Franklin, Johannes Gehrke, Laura M. Haas, Alon Y. Halevy, Joseph M. Hellerstein, Yannis E. Ioannidis, H. V. Jagadish, Donald Kossmann, Samuel Madden, Sharad Mehrotra, Tova Milo, Jeffrey F. Naughton, Raghu Ramakrishnan, Volker Markl, Christopher Olston, Beng Chin Ooi, Christopher Ré, Dan Suciu, Michael Stonebraker, Todd Walter, and Jennifer Widom, Communications of the ACM 59, 2 (January 2016), 92-99. DOI: 10.1145/2845915

“Research culture. Finally, there is much concern over the increased emphasis of citation counts instead of research impact. This discourages large systems projects, end-to-end tool building, and sharing of large datasets, since this work usually takes longer than solving point problems. Program committees that value technical depth on narrow topics over the potential for real impact are partly to blame. It is unclear how to change this culture. However, to pursue the big data agenda effectively, the field needs to return to a state where fewer publications per researcher per time unit is the norm, and where large systems projects, end-to-end tool sets, and data sharing are more highly valued”. 

Academic Rankings Considered Harmful!, Moshe Y. Vardi, Communications of the ACM, Vol. 59 No. 9, Page 5, September 2016. DOI: 10.1145/2980760

This Letter from the Editor examines the consequences of applying a multi-objective optimization that maps the complex space describing a program onto a linear scale. The choice of mapping is entirely arbitrary, following the criteria of the organization that produces the ranking. The raw data should be made available, together with a tool, so that interested parties could run the analysis under their own criteria, in a form suited to their own decision-making process.
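
As a hypothetical sketch of this point (programs, indicators, and weights below are all invented), a ranking of this kind is just a weighted sum over indicators, and an equally defensible choice of weights produces a different ordering:

    # Invented indicators for three programs, each normalized to [0, 1].
    programs = {
        "A": {"publications": 0.9, "funding": 0.3, "teaching": 0.8},
        "B": {"publications": 0.4, "funding": 0.9, "teaching": 0.6},
        "C": {"publications": 0.6, "funding": 0.7, "teaching": 0.9},
    }

    def rank(weights):
        # Weighted sum: one arbitrary linearization of a multi-objective space.
        def score(indicators):
            return sum(w * indicators[k] for k, w in weights.items())
        return sorted(programs, key=lambda p: score(programs[p]), reverse=True)

    print(rank({"publications": 0.7, "funding": 0.2, "teaching": 0.1}))  # ['A', 'C', 'B']
    print(rank({"publications": 0.1, "funding": 0.7, "teaching": 0.2}))  # ['B', 'C', 'A']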

The Impact of Academic Mobility on the Quality of Graduate Programs, Thiago H. P. Silva, Alberto H. F. Laender, Clodoveu A. Davis Jr., Ana Paula Couto da Silva and Mirella M. Moro, D-Lib Magazine, September/October 2016. DOI: 10.1045/september2016-silva

This is an excellent article, very well grounded in data; I only disagree with its final conclusion: “Our findings indicate that the number of faculty members who are educated abroad are an important indicator of the quality of these graduate programs because they tend to publish more often, and in higher quality venues.” In my view, these programs are better because their researchers develop better projects and build a good network of co-authors, which in turn leads to publications in higher-quality venues. It is simply a problem of identifying the independent variable.
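
To illustrate the identification problem, here is a toy simulation (all quantities invented) in which a latent variable, the quality of a researcher's projects and co-author network, drives both going abroad and publication output; the two then correlate even though neither causes the other:

    import random

    random.seed(0)

    def researcher():
        quality = random.gauss(0, 1)               # latent: projects + co-author network
        abroad = quality + random.gauss(0, 1) > 0  # stronger researchers go abroad more often
        output = quality + random.gauss(0, 1)      # quality, not mobility, drives output
        return abroad, output

    sample = [researcher() for _ in range(10_000)]
    mean = lambda xs: sum(xs) / len(xs)
    print(mean([o for a, o in sample if a]))       # clearly higher than...
    print(mean([o for a, o in sample if not a]))   # ...this, despite no direct causal link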

“The large amount of publicly available scholarly data today has allowed exploration of new aspects of research collaboration, such as the evolution of scientific communities, the impact of research groups and the social engagement of researchers. In this paper, we discuss the importance of characterizing the trajectories of faculty members in their academic education and their impact on the quality of the graduate programs they are associated with. In that respect, we analyze the mobility of faculty members from top Brazilian Computer Science graduate programs as they progress through their academic education (undergraduate, master’s, PhD and post-doctorate). Our findings indicate that the number of faculty members who are educated abroad are an important indicator of the quality of these graduate programs because they tend to publish more often, and in higher quality venues”. 


Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition, Marc A. Edwards and Siddhartha Roy, Environmental Engineering Science, September 2016, ahead of print. DOI: 10.1089/ees.2016.0223

“Over the last 50 years, we argue that incentives for academic scientists have become increasingly perverse in terms of competition for research funding, development of quantitative metrics to measure performance, and a changing business model for higher education itself. Furthermore, decreased discretionary funding at the federal and state level is creating a hypercompetitive environment between government agencies (e.g., EPA, NIH, CDC), for scientists in these agencies, and for academics seeking funding from all sources—the combination of perverse incentives and decreased funding increases pressures that can lead to unethical behavior. If a critical mass of scientists become untrustworthy, a tipping point is possible in which the scientific enterprise itself becomes inherently corrupt and public trust is lost, risking a new dark age with devastating consequences to humanity. Academia and federal agencies should better support science as a public good, and incentivize altruistic and ethical outcomes, while de-emphasizing output.”
