Latent semantic analysis and indexing

The educational technology and digital learning wiki


== Introduction ==


{{quotation|Latent Semantic Indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called Singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.}} (Deerwester et al., 1988, cited by [http://en.wikipedia.org/wiki/Latent_semantic_indexing Wikipedia])


{{quotation|Latent Semantic Analysis (LSA) is a theory and method for extracting and representing the contextual-usage meaning of words by statistical computations applied to a large corpus of text. The underlying idea is that the totality of information about all the word contexts in which a given word does and does not appear provides a set of mutual constraints that largely determines the similarity of meaning of words and set of words to each other. The adequacy of LSA's reflection of human knowledge has been established in a variety of ways. For example, its scores overlap those of humans on standard vocabulary and subject matter tests, it mimics human word sorting and category judgments, simulates word-word and passage-word lexical priming data and, as reported in Group Papers, accurately estimates passage coherence, learnability of passages by individual students and the quality and quantity of knowledge contained in an essay.}} ([http://lsa.colorado.edu/whatis.html What is LSA?], retrieved 12:10, 12 March 2012 (CET)).
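The SVD step described above can be sketched in a few lines. The following is a minimal illustration only, using a small hypothetical term-document matrix (the term list and counts are invented for the example, not taken from the sources quoted here): the matrix is factored with SVD, truncated to a low rank, and term similarities are then measured in the reduced "semantic" space.

```python
import numpy as np

# Hypothetical toy term-document matrix: rows = terms, columns = documents.
terms = ["human", "interface", "computer", "user", "system"]
X = np.array([
    [1, 0, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

# Singular value decomposition: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep only k latent dimensions; X_k is the best rank-k approximation of X.
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Term vectors in the latent space; similar usage contexts -> similar vectors.
term_vecs = U[:, :k] * s[:k]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Similarity of "human" and "interface" in the reduced space.
print(cosine(term_vecs[0], term_vecs[1]))
```

Real applications differ mainly in scale (tens of thousands of terms and documents, usually with tf-idf weighting before the SVD), but the core computation is the same truncated factorization.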
== Software ==
Free '''latent semantic analysis''' software is difficult to find.
* [http://cran.at.r-project.org/web/packages/lsa/index.html LSA package for R] (developed by Fridolin Wild)
* [http://radimrehurek.com/gensim/ Gensim - Topic Modelling for Humans], implemented in Python. {{quotation|Gensim aims at processing raw, unstructured digital texts (“plain text”). The algorithms in gensim, such as Latent Semantic Analysis, Latent Dirichlet Allocation or Random Projections, discover semantic structure of documents, by examining word statistical co-occurrence patterns within a corpus of training documents. These algorithms are unsupervised, which means no human input is necessary – you only need a corpus of plain text documents.}} ([http://radimrehurek.com/gensim/intro.html introduction]), retrieved 12:10, 12 March 2012 (CET)
* [http://www.d.umn.edu/~tpederse/senseclusters.html SenseClusters] by Ted Pedersen et al. This {{quotation|is a package of (mostly) Perl programs that allows a user to cluster similar contexts together using unsupervised knowledge-lean methods. These techniques have been applied to word sense discrimination, email categorization, and name discrimination. The supported methods include the native SenseClusters techniques and Latent Semantic Analysis.}} ([http://www.d.umn.edu/~tpederse/senseclusters.html SenseClusters], retrieved 12:10, 12 March 2012 (CET)).


== Links ==
'''Introductions'''
* [http://www.knowledgesearch.org/lsi/ Patterns in Unstructured Data], A Presentation to the Andrew W. Mellon Foundation by Clara Yu, John Cuadrado, Maciej Ceglowski, J. Scott Payne (undated). A good introduction to LSI and its use in search engines.
* [http://lsa.colorado.edu/ lsa.colorado.edu].
* [http://iv.slis.indiana.edu/sw/lsa.html Latent Semantic Analysis] (Infoviz)
'''Technical introductions'''


* [http://en.wikipedia.org/wiki/Latent_semantic_indexing Latent semantic indexing] (Wikipedia)
* [http://en.wikipedia.org/wiki/Latent_semantic_analysis Latent semantic analysis] (Wikipedia)
 




== Bibliography ==
* Landauer, T. K., & Dumais, S. T. (1996). How come you know so much? From practical problem to theory. In D. Hermann, C. McEvoy, M. Johnson, & P. Hertel (Eds.), Basic and applied memory: Memory in context. Mahwah, NJ: Erlbaum, 105-126.
* Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240.
* Landauer, T. K., Foltz, P. W., & Laham, D. (1998). Introduction to Latent Semantic Analysis. Discourse Processes, 25 (2-3), 259-284. http://dx.doi.org/10.1080/01638539809545028 - [http://lsa.colorado.edu/papers/dp1.LSAintro.pdf PDF]
* Dumais, S. T. (2005). Latent Semantic Analysis. Annual Review of Information Science and Technology, 38, 188.


[[Category: Analytics]]
[[Category: Research methodologies]]

Revision as of 12:10, 12 March 2012

Draft
