
Automatic text classification using concept primitive features (Cited by: 6)

Abstract: Automatic text classification is a key technology for processing large-scale document collections. Classification normally begins with text representation, i.e., converting each text into a feature vector; commonly used features include feature words, word frequencies, and N-grams. This paper studies a new kind of feature: the HNC concept symbols of words. These symbols come from the semantic network built by HNC (Hierarchical Network of Concepts) and express the semantic information of a word as a symbolic expression. Using HNC concept symbols as features therefore means classifying on the semantic information carried by the text, which is fundamentally different from surface-level features such as word frequency. A classifier was built with the Maximum Entropy Model and tested with word-segmentation features and HNC concept-symbol features respectively; a comparison of the classification results shows that the HNC features outperform the word features.
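The following is a minimal illustrative sketch of the approach described in the abstract, not the paper's implementation: the maximum entropy classifier is realized as multinomial logistic regression (its standard equivalent), scikit-learn is assumed to be available, and the tiny word-to-symbol lexicon and the symbol strings in it are hypothetical placeholders standing in for the real HNC semantic lexicon.

```python
# Sketch only: maximum entropy classification implemented as multinomial
# logistic regression; HNC_LEXICON below is a hypothetical placeholder,
# not the actual HNC concept-symbol dictionary.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical lexicon: segmented word -> HNC concept-symbol expression.
HNC_LEXICON = {
    "银行": "w822",      # placeholder symbols, for illustration only
    "利率": "w82a*23",
    "球队": "p331",
    "比赛": "p33",
}

def to_concept_features(segmented_text):
    """Replace each word with its HNC concept symbol when the lexicon has one."""
    return " ".join(HNC_LEXICON.get(w, w) for w in segmented_text.split())

# Toy pre-segmented training data (whitespace-separated words).
docs = ["银行 利率 上调", "球队 比赛 获胜"]
labels = ["finance", "sports"]

def train_and_predict(feature_fn):
    """Train a maximum-entropy-style classifier on the chosen feature type."""
    texts = [feature_fn(d) for d in docs]
    vec = CountVectorizer(token_pattern=r"\S+")   # one feature per token
    X = vec.fit_transform(texts)
    clf = LogisticRegression(max_iter=1000)       # multinomial logistic regression
    clf.fit(X, labels)
    test = feature_fn("银行 利率 下调")
    return clf.predict(vec.transform([test]))[0]

print("word features:   ", train_and_predict(lambda d: d))          # surface words
print("concept features:", train_and_predict(to_concept_features))  # HNC symbols
```

The point of the concept-symbol representation is that different surface words mapping to the same or related HNC symbols share features, which is what lets the classifier exploit semantic rather than purely lexical information.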
Author: 贾宁 (Jia Ning)
Source: Computer Engineering and Applications (《计算机工程与应用》; indexed in CSCD and the Peking University Core Journals list), 2007, No. 1, pp. 24-26 (3 pages)
Funding: National Key Basic Research Program of China (973 Program) project 2004CB318104; Innovation Fund of the Institute of Acoustics, Chinese Academy of Sciences.
Keywords: text classification; HNC; feature

References (4)

  • 1 Adwait R. Maximum entropy models for natural language ambiguity resolution [D]. University of Pennsylvania, 1998.
  • 2 Kamal N, John L, Andrew M. Using maximum entropy for text classification [C]// Proceedings of the IJCAI-99 Workshop on Information Filtering, Stockholm, Sweden, 1999.
  • 3 Jin Rong, Yan Rong, Zhang Jian. A faster iterative scaling algorithm for conditional exponential model [C]// Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003), Washington DC, 2003.
  • 4 Li Ronglu, Wang Jianhui, Chen Xiaoyun, Tao Xiaopeng, Hu Yunfa. Using maximum entropy model for Chinese text categorization [J]. Journal of Computer Research and Development, 2005, 42(1): 94-101. (Cited by: 96)

