An Empirical Comparison Between Tutorials and Crowd Documentation of Application Programming Interface

Abstract: API (application programming interface) documentation is critical for developers to learn APIs. However, it is unclear whether API documentation indeed improves API learnability for developers. In this paper, we focus on two types of API documentation, i.e., official API tutorials and API crowd documentation. First, we analyze API coverage and check API consistency in API documentation based on API traceability. Then, we conduct a survey and extract several characteristics to analyze which type of API documentation can help developers learn APIs. Our findings show that: 1) API crowd documentation can, to some extent, be regarded as a supplement to the official API tutorials; 2) the concerns for frequently used APIs differ greatly between the two types of API documentation, which may prevent developers from deeply understanding API usage through only one type of documentation; 3) official API tutorials can help developers seek API information on a long page, while API crowd documentation can provide long code examples for a particular programming task. These findings may help developers select the suitable API documentation and find the useful information they need.
Source: Journal of Computer Science & Technology (SCIE, EI, CSCD), 2021, Issue 4, pp. 856-876 (21 pages).
Funding: the National Key Research and Development Program of China under Grant No. 2018YFB1003900; the National Natural Science Foundation of China under Grant Nos. 61722202, 61772107, and 61572097; and the Fundamental Research Funds for the Central Universities of China under Grant No. DUT18JC08.
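
To make the coverage analysis mentioned in the abstract concrete, below is a minimal sketch (not the authors' tool) of how the API coverage of a documentation source might be estimated once API mentions have been traced to fully qualified method names. The file names and the prior extraction/traceability step are assumptions introduced for illustration only.

# Illustrative sketch: estimating API coverage of a documentation source.
# Assumes API mentions have already been extracted and resolved to fully
# qualified method names, one per line, in plain-text files (hypothetical names).

def load_apis(path):
    """Read one fully qualified API name per line, ignoring blank lines."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def coverage(documented, universe):
    """Fraction of the API universe mentioned at least once in the documentation."""
    return len(documented & universe) / len(universe) if universe else 0.0

if __name__ == "__main__":
    universe = load_apis("all_public_apis.txt")         # every public API of the library
    tutorial = load_apis("tutorial_api_mentions.txt")   # APIs traced to the official tutorial
    crowd = load_apis("crowd_api_mentions.txt")         # APIs traced to crowd documentation

    print(f"Tutorial coverage: {coverage(tutorial, universe):.1%}")
    print(f"Crowd coverage:    {coverage(crowd, universe):.1%}")
    # APIs covered only by crowd documentation -- the sense in which it can
    # supplement the official tutorial.
    print(f"Crowd-only APIs:   {len((crowd - tutorial) & universe)}")

Comparing the two "mention" sets against the same API universe is one simple way to quantify both the supplementary coverage of crowd documentation and the mismatch in which APIs each source emphasizes.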
