Search by property


This page provides a simple browsing interface for finding entities described by a property and a named value. Other available search interfaces include the page property search and the ask query builder.


A list of all pages that have the property "Has description" with the value "Online tools to assist in the conversion of JSON to CSV.". Since there are only a few exact results, nearby values are also displayed.

Showing up to 26 results, starting with #1.



List of results

  • Maps (MediaWiki extension)  + (Maps is a MediaWiki extension that provides the ability to visualize geographic data with dynamic, JavaScript-based mapping APIs. It has built-in support for geocoding, displaying maps, displaying markers, adding pop-ups, and more.)
  • Nactem software tools  + (NaCTeM has developed a number of high-quality text mining tools for the UK academic community, and at least some appear to be available to all for ''non-commercial purposes''. NaCTeM's tools and services offer benefits to a wide range of users, e.g. reduced time and effort for finding and linking pertinent information from large-scale textual resources, and customised solutions in semantic data analysis (Our Aims and Objectives, retrieved March 2014). NaCTeM tools are available in different ways: web services exist for the basic tools, while others require download and sometimes configuration/installation.)
  • NetDraw  + (NetDraw is a free Windows program for visualizing social network data. NetDraw is also included in UCINET, a fairly inexpensive commercial SNA program developed by the same company.)
  • NetMiner  + (NetMiner is a software application for exploratory analysis and visualization of large network data based on social network analysis (SNA). It can be used for general research and teaching in social networks. The tool allows researchers to explore their network data visually and interactively, and helps them detect underlying patterns and structures of the network. It features data transformation, network analysis, statistics, visualization of network data, charts, and a programming language based on the [[Python]] scripting language.)
  • Neural Designer  + (Neural Designer is a data mining application intended for professional data scientists. It uses neural networks, mathematical models of brain function that can be trained to perform tasks such as function regression, pattern recognition, time series prediction or auto-association. The software provides a graphical user interface with a wizard-style sequence of pages that lets you run tasks and obtain comprehensive results as a report. Neural Designer stands out in terms of performance: it is developed in C++, has been subjected to code optimization techniques, and makes use of parallel processing, so it can analyze bigger data sets in less time.)
  • Orange  + (Open-source data visualization and analysis for novices and experts. Data mining through visual programming or Python scripting. Components for machine learning. Add-ons for bioinformatics and text mining. Packed with features for data analytics. Various add-ons like [[Orange Textable]] expand the functionality of this software.)
  • OpenSesame  + (OpenSesame is a graphical, open-source experiment builder for the social sciences. It sports a modern and intuitive user interface that allows you to build complex experiments with a minimum of effort. With OpenSesame you can create a wide range of experiments. The plug-in framework and [[Python]] scripting allow you to incorporate external devices, such as eye trackers, response boxes, and parallel port devices, into your experiment. OpenSesame is freely available under the General Public Licence.)
  • Piwik  + (Piwik is an open source web analytics platform. Piwik displays reports regarding the geographic location of visits, the source of visits (i.e. whether they came from a website, directly, or something else), the technical capabilities of visitors (browser, screen size, operating system, etc.), what the visitors did (pages they viewed, actions they took, how they left), the time of visits and more. In addition to these reports, Piwik provides other features that help users analyze the data it accumulates, such as:
    * Annotations: the ability to save notes (such as one's analysis of data) and attach them to dates in the past.
    * Transitions: a Click-path-like feature that shows how visitors navigate a website, but displays navigation information for only one page at a time.
    * Goals: the ability to set goals for actions you want visitors to take (such as visiting a page or buying a product); Piwik tracks how many visits result in those actions being taken.
    * E-commerce: the ability to track if and how much people spend on a website.
    * Page Overlay: a feature that displays analytics data overlaid on top of a website.
    * Row Evolution: a feature that displays how metrics change over time within a report.
    * Custom Variables: the ability to attach data, like a user name, to visit data.)
  • QDA Miner  + (QDA Miner is a qualitative "mixed methods" data analysis package. There are two versions: a free QDA Miner Lite version and an expensive commercial version. Quote from the official product page: “QDA Miner is an easy-to-use qualitative data analysis software package for coding, annotating, retrieving and analyzing small and large collections of documents and images. QDA Miner qualitative data analysis tool may be used to analyze interview or focus group transcripts, legal documents, journal articles, speeches, even entire books, as well as drawings, photographs, paintings, and other types of visual documents. Its seamless integration with SimStat, a statistical data analysis tool, and [[WordStat]], a quantitative content analysis and text mining module, gives you unprecedented flexibility for analyzing text and relating its content to structured information including numerical and categorical data.”)
  • WordSmith  + (Quotation from the getting started page (11/2014): “WordSmith Tools is an integrated suite of programs for looking at how words behave in texts. You will be able to use the tools to find out how words are used in your own texts, or those of others. The WordList tool lets you see a list of all the words or word-clusters in a text, set out in alphabetical or frequency order. The concordancer, Concord, gives you a chance to see any word or phrase in context -- so that you can see what sort of company it keeps. With KeyWords you can find the key words in a text. The tools have been used by Oxford University Press for their own lexicographic work in preparing dictionaries, by language teachers and students, and by researchers investigating language patterns in lots of different languages in many countries world-wide.”)
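The WordList and Concord functions described in the quote correspond to two classic corpus-linguistics operations: a frequency-ordered word list and a keyword-in-context (KWIC) concordance. A minimal Python sketch of both ideas (an illustration only, not WordSmith's implementation):

```python
import re
from collections import Counter

def word_list(text):
    """Frequency-ordered word list, like WordSmith's WordList tool produces."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common()

def concordance(text, keyword, width=20):
    """KWIC lines: each occurrence of `keyword` with `width` chars of context."""
    lines = []
    for m in re.finditer(re.escape(keyword), text, re.IGNORECASE):
        lines.append(text[max(0, m.start() - width):m.end() + width]
                     .replace("\n", " "))
    return lines

text = "The cat sat on the mat. The mat was flat."
print(word_list(text)[0])          # -> ('the', 3)
print(concordance(text, "mat"))    # two context lines, one per occurrence
```

Real concordancers add sorting of the right/left context and cluster (n-gram) lists, but the core operations are these.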
  • KNOT  + (Quote from the software home page (11/2014): The Knowledge Network Organizing Tool (KNOT) is built around the Pathfinder network generation algorithm, along with several other components. Pathfinder algorithms take estimates of the proximities between pairs of items as input and define a network representation of the items. The network (a PFNET) consists of the items as nodes and a set of links (which may be either directed or undirected for symmetrical or non-symmetrical proximity estimates) connecting pairs of the nodes. The set of links is determined by patterns of proximities in the data and parameters of Pathfinder algorithms. For details on the method and its applications see R. Schvaneveldt (Ed.), Pathfinder Associative Networks: Studies in Knowledge Organization. Norwood, NJ: Ablex, 1990. The Pathfinder software includes several programs and utilities to facilitate Pathfinder network analyses of proximity data. The system is oriented around producing pictures of the solutions, but representations of networks and other information are also available as text files which can be used with other software. The positions of nodes for displays are computed using an algorithm described by Kamada and Kawai (1989, Information Processing Letters, 31, 7-15).)
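The Pathfinder pruning idea described above can be sketched in a few lines: a direct link between two items survives only if no indirect path offers a lower cost, where in one common parameterisation (r = infinity, q = n - 1) the cost of a path is its largest single link. A rough Python illustration of that variant, not the KNOT implementation:

```python
def pfnet(dist):
    """Sketch of PFNET(r=inf, q=n-1) pruning.

    dist: symmetric matrix of dissimilarities (0 on the diagonal).
    Returns the set of surviving undirected links (i, j) with i < j.
    """
    n = len(dist)
    # Minimax path distance: the cost of a path is its largest single link.
    # Floyd-Warshall relaxation with max() in place of addition.
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    # A direct link survives only if no indirect path beats its weight.
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if dist[i][j] <= d[i][j]}

links = pfnet([[0, 1, 3],
               [1, 0, 1],
               [3, 1, 0]])
print(sorted(links))  # -> [(0, 1), (1, 2)]; the weak direct link (weight 3) is pruned
```

Other (r, q) settings use Minkowski path costs and bounded path lengths; the minimax case shown here is the most commonly used one.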
  • Orange Textable  + (Quote from the Textable page (oct. 2, 2014): Orange Textable is an open-source software tool for building data tables on the basis of raw text sources. Orange Textable offers the following features:
    * text data import from keyboard, files, or URLs
    * systematic recoding
    * segmentation and annotation of various text units
    * extraction and exploitation of XML-encoded annotations
    * automatic, random, and arbitrary selection of unit subsets
    * unit context examination using concordance and collocation tables
    * frequency and complexity measures
    * recoded text data and table export)
  • Gensim  + (Quote from the about page (12/2016): Gensim started off as a collection of various Python scripts for the Czech Digital Mathematics Library in 2008, where it served to generate a short list of the most similar articles to a given article (gensim = “generate similar”). I also wanted to try these fancy “Latent Semantic Methods”, but the libraries that realized the necessary computation were not much fun to work with. By now, gensim is—to my knowledge—the most robust, efficient and hassle-free piece of software to realize unsupervised semantic modelling from plain text. It stands in contrast to brittle homework-assignment-implementations that do not scale on one hand, and robust java-esque projects that take forever just to run “hello world”.)
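Gensim's founding use case, ranking documents by similarity to a given document, can be illustrated without gensim itself using bag-of-words vectors and cosine similarity. A minimal sketch of the idea (gensim does this at scale, over streamed corpora and richer models such as LSI/LDA):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a sparse term -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, docs):
    """Index of the document most similar to `query` ('generate similar')."""
    q = bow(query)
    sims = [cosine(q, bow(d)) for d in docs]
    return max(range(len(docs)), key=sims.__getitem__)

docs = ["graph theory and networks",
        "semantic analysis of plain text",
        "cooking with garlic"]
print(most_similar("plain text semantic modelling", docs))  # -> 1
```

Semantic models like LSI improve on raw counts by matching documents that share topics rather than exact words, but the similarity-ranking skeleton is the same.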
  • Textalyser  + (Quote from the home page: “Welcome to the online text analysis tool, the detailed statistics of your text, perfect for translators (quoting), for webmasters (ranking) or for normal users, to know the subject of a text. Now with new features as the analysis of words groups, finding out the keyword density, analyse the prominence of word or expressions.”)
  • Bitext  + (Quote from the home page (11/2014): Bitext provides B2B multilingual semantic engines with “documentably” the highest accuracy in the market. Bitext works for companies in two main markets: Text Analytics (Concept and Entity Extraction, Sentiment Analysis) for Social CRM, Enterprise Feedback Management or Voice of the Customer; and Natural Language Interfaces for Search Engines.)
  • Juxta  + (Quote from the About Page (11/2014): “Juxta is an open-source tool for comparing and collating multiple witnesses to a single textual work. Originally designed to aid scholars and editors examine the history of a text from manuscript to print versions, Juxta offers a number of possibilities for humanities computing and textual scholarship. [...] As a standalone desktop application, Juxta allows users to complete many of the necessary operations of textual criticism on digital texts (TXT and XML). With this software, you can add or remove witnesses to a comparison set, switch the base text at will. Once you’ve collated a comparison, Juxta also offers several kinds of analytic visualizations. By default, it displays a heat map of all textual variants and allows the user to locate — at the level of any textual unit — all witness variations from the base text. Users can switch to a side by side collation view, which gives a split frame comparison of a base text with a witness text. A histogram of Juxta collations is particularly useful for long documents; this visualization displays the density of all variation from the base text and serves as a useful finding aid for specific variants.”)
  • ALA-Reader  + (Quote from the software home page (11/2014): Here is a software tool that can translate written text summaries directly into proximity files (prx) that can be analyzed by Pathfinder KNOT. It also generates text proposition files that can be imported by CMAP Tools to automatically form concept maps from the text. It should be of use to researchers who want to visualize "text" for various instructional and research-related reasons, and it should work with different languages. ALA-Reader contains a rudimentary scoring system: it converts the written summary into a cognitive map and then scores the cognitive map using an approach that we developed for scoring concept maps. The "score" produced is percent agreement with an expert referent. As I narrow down what algorithms work, I plan to release updated versions periodically.)
  • Lexico  + (Quote from the home page: “Lexico3 is the 2001 edition of the Lexico software, first published in 1990. Functions present from the first version (segmentation, concordances, breakdown in graphic form, characteristic elements and factorial analyses of repeated forms and segments) were maintained and for the most part significantly improved. The Lexico series is unique in that it allows the user to maintain control over the entire lexicometric process from initial segmentation to the publication of final results. Beyond identification of graphic forms, the software allows for study of the identification of more complex units composed of form sequences: repeated segments, pairs of forms in co-occurrences, etc., which are less ambiguous than the graphic forms that make them up.” A free version is available for "personal work"; see the bottom of this page.)
  • Wordcruncher  + (Quote from the home page: “WordCruncher is a free eBook reader with research tools to help students and scholars study important texts.
    * You can look for specific references, search for words or phrases, follow cross-reference hyperlinks, and enlarge images.
    * You can copy and paste text, add bookmarks, highlight text, and make searchable notes.
    * Additional study aids include complex searches, word frequencies, word frequency distributions, synchronized windows to compare translations, word tags, and various text analysis reports (e.g., collocation, vocabulary dispersion, vocabulary usage).”)
  • Netlytic  + (Quote from the home page (11/2014): Netlytic is a cloud-based text and social networks analyzer that can automatically summarize and discover social networks from online conversations on social media sites.)
  • Meaning Cloud  + (Quote from the home page: “Textalytics is a text analysis engine that extracts meaningful elements from any type of content and structures it, so that you can easily process and manage it. Textalytics features a set of high-level web services — adaptable to the characteristics of every type of business — which can be flexibly integrated into your processes and applications.”)
  • Lexos  + (Quote from the home page: “This web-based tool enables you to "scrub" (clean) your unicode text(s), cut a text(s) into various size chunks, manage chunks and chunk sets, tokenize with character- or word-ngrams or TF-IDF weighting, and choose from a suite of analysis tools for investigating those texts. Functionality includes building dendrograms, making graphs of rolling averages of word frequencies or ratios of words or letters, and playing with visualizations of word frequencies including word clouds and bubble visualizations. To facilitate subsequent text mining analyses beyond the scope of this site, users can also transpose and download their matrices of word counts or relative proportions as comma- or tab-separated files (.csv, .tsv).”)
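TF-IDF weighting, mentioned in the quote, scores a term highly when it is frequent in one document but rare across the collection. A minimal sketch of one common variant, tf * log(N/df); Lexos may use a different normalisation:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights using the tf * log(N/df) variant."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    # document frequency: in how many documents each term appears
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

w = tfidf(["the cat", "the dog", "the cat sat"])
print(w[0]["the"])  # -> 0.0, since "the" occurs in every document (log(3/3) = 0)
```

This is why word clouds built on TF-IDF rather than raw counts suppress function words like "the" without needing a stopword list.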
  • KoRpus  + (Quote: “koRpus is an R package I originally wrote to measure similarities/differences between texts. Over time it grew into what it is now, a hopefully versatile tool to analyze text material in various ways, with an emphasis on scientific research, including readability and lexical diversity features.”)
  • OpenRefine  + (Quote: OpenRefine (formerly Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; extending it with web services; and linking it to databases like Freebase. (oct. 2, 2014))
  • Stanford NLP toolkits  + (Quote: The Stanford NLP Group makes parts of our Natural Language Processing software available to everyone. These are statistical NLP toolkits for various major computational linguistics problems. They can be incorporated into applications with human language technology needs.)
  • Tweet NLP  + (Quote: We provide a tokenizer, a part-of-speech tagger, hierarchical word clusters, and a dependency parser for tweets, along with annotated corpora and web-based annotation tools.)
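Tweet tokenization differs from ordinary tokenization mainly in treating @mentions, #hashtags and URLs as single tokens. A rough regex-based sketch of that idea (the actual Tweet NLP tokenizer is considerably more thorough, handling emoticons, elongated words, and more):

```python
import re

# Alternation order matters: try URLs first, then @mentions/#hashtags,
# then plain words, then single punctuation characters.
TOKEN = re.compile(r"https?://\S+|[@#]\w+|\w+|[^\w\s]")

def tokenize(tweet):
    """Split a tweet into tokens, keeping mentions, hashtags and URLs whole."""
    return TOKEN.findall(tweet)

print(tokenize("@nlproc check http://example.com #tweets!"))
# -> ['@nlproc', 'check', 'http://example.com', '#tweets', '!']
```

A whitespace or standard word tokenizer would instead shred the URL and strip the @ and # markers, losing exactly the information that matters on social media text.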
  • TOko  + (Quoted from the tOko homepage (oct 2014):
    * tOKo is an open source tool for text analysis and browsing a corpus of documents. It implements a wide variety of text analysis and browsing functions in an interactive user interface.
    * An important application area of tOKo is ontology development. It supports both ontology construction from a corpus, as well as relating the ontology back to a corpus (for example by highlighting concepts from the ontology in a document).
    * Another application area is community research. Here the objective is to analyse the exchange of information, for example in a community forum or through a collection of interconnected weblogs.)
  • Redash  + (Quotes from the FAQ: Redash is an open source tool for teams to query, visualize and collaborate. Redash is quick to set up and works with any data source you might need, so you can query from anywhere in no time. [...] Redash was built to allow fast and easy access to billions of records that we process and collect using Amazon Redshift (a “petabyte scale data warehouse” that “speaks” PostgreSQL). Today Redash has support for querying multiple databases, including: Redshift, Google BigQuery, Google Spreadsheets, PostgreSQL, MySQL, Graphite, Axibase Time Series Database and custom scripts.
    Main features:
    * Query editor: enjoy all the latest standards like auto-complete and snippets; share both your results and queries to support an open and data-driven approach within the organization.
    * Visualization: once you have your dataset, select one of the 9 types of visualizations for your query; you can also export or embed it anywhere.
    * Dashboard: combine several visualizations into a topic-targeted dashboard.
    * Alerts: get notified via email, Slack, Hipchat or a webhook when your query's results need attention.
    * API: anything you can do with the UI, you can do with the API; easily connect results to other systems or automate your workflows.)
  • R  + (R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered a different implementation of S; there are some important differences, but much code written for S runs unaltered under R.
    R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. R is available as Free Software for data manipulation, calculation and graphical display. It includes:
    * an effective data handling and storage facility,
    * a suite of operators for calculations on arrays, in particular matrices,
    * a large, coherent, integrated collection of intermediate tools for data analysis,
    * graphical facilities for data analysis and display, either on-screen or on hardcopy, and
    * a well-developed, simple and effective programming language which includes conditionals, loops, user-defined recursive functions and input and output facilities.
    R can be considered an environment within which statistical techniques are implemented, and it can be extended via packages. For example, try:
    * RQDA
    * CRAN Task View: Natural Language Processing)
  • RapidAnalytics  + (RapidAnalytics is an open source server for data mining and business analytics. It is based on the data mining solution RapidMiner and includes ETL, data mining, reporting, and dashboards in a single server solution.)