
Linguistic Information Processing System

Updated: Jun 19





Overview

We describe the Linguistic Information Processing System (LIPS) – the System – for research of natural language and speech. The goal of the System is to provide linguist researchers and educators (including philologists, lexicographers, psycho- and socio-linguists, pragmatists, etc.) with a toolkit for gathering, storing, analyzing, organizing, and rendering linguistic data.

The System consists of three major subsystems or modules: 1) the natural language model (NLM), whose ultimate goal is text parsing; 2) the Thesaurus, a corpus of different types of monolingual and multilingual Dictionaries: linguistic, ontological (thesaurus), etymological, bilingual, etc.; and 3) different types of Corpora: literary, current spoken dialects, news and media, legal, scientific, etc. The three modules are controlled via the Linguistic Workbench, a set of UIs that provides access to the functionality of the modules.

The Thesaurus is singled out from the other types of corpora because of its special relation to the rest of the system, its different internal structure, and its end-user audience. The Thesaurus content is updated not just from the existing Dictionaries, but also from the corpora content. Unlike the other corpora, it receives data not only from outside the LIPS but also from those corpora. Its usage is also different.

The audience for this essay is primarily linguists interested in Digital Humanities (DH). Other interested humanities professionals might include philosophers of science and language, ethnographers (cultural anthropologists), epistemologists, etc. However, it might be interesting for software engineers, computer scientists, and IT professionals too.

Specifying the System Requirements

Overview

One of the most important stages in software system development is requirements specification. The complexity arises from the fact that the party that is the subject matter expert typically is not an expert in IT. On the other hand, the IT professional is rarely an expert in the humanities. This difference in background knowledge creates a schism in the language they communicate in and the vocabulary they use.

Recently a whole discipline – Digital Humanities – emerged in Humanities Departments to address the schism. The main goal of DH is to teach humanities experts to communicate with the IT community fluently and to use vocabulary that is familiar to the latter.

User Story

In the last couple of decades, the standard way of communicating requirements to programmers and IT specialists has been the User Story (US).

The US is typically a single sentence – the US Phrase - of this format:

As a <role>, I would like to <activity>[, so that <result>],

followed by Acceptance Criteria (AC) – a set of tests to ensure that the <activity> in the US Phrase is implemented correctly and produces the expected <result>. The square brackets in the US Phrase signify that the “so that…” phrase is optional. The US might contain a more detailed Description, which provides more functional or technical details.

This is an example of a US:

US Phrase: As a Middle English Literature Researcher, I would like to analyze the lexicon of Middle English literature and compare it with the lexicon of the Spanish mystery plays of the same period.

US Description: The Middle English literature Digital Library is available at mel-digilib.com. For the Spanish mystery plays, statistics are available in spreadsheet format (attached). The Middle English literature lexicon statistics should be formatted similarly.

AC:

1.  Save the spreadsheet statistics for The Canterbury Tales in a local folder.

2.  List titles for a specified century: 12th to 15th.

3.  Ensure that for the lexeme ‘love’ the system returns results for ‘loving’, ‘loved’, ‘loves’, and ‘lover’ too, as separate positions.

This compact requirement communicates a lot of information valuable to the IT specialist in an unambiguous, precise manner.
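For DH students comfortable with code, the same structure can be captured as data. Below is a minimal Python sketch of a US as a structured record; the field names are illustrative, not part of any standard:

from dataclasses import dataclass, field

@dataclass
class UserStory:
    role: str                 # the <role> in the US Phrase
    activity: str             # the <activity>
    result: str = ""          # the optional "so that <result>" part
    description: str = ""     # the more detailed US Description
    acceptance_criteria: list = field(default_factory=list)

    def phrase(self):
        base = f"As a {self.role}, I would like to {self.activity}"
        return base + (f", so that {self.result}." if self.result else ".")

us = UserStory(
    role="Middle English Literature Researcher",
    activity="analyze the lexicon of Middle English literature",
    acceptance_criteria=["Save the spreadsheet statistics for The Canterbury Tales in a local folder."],
)
print(us.phrase())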

Use Case

The Use Case (UC) is also a requirements specification format, geared towards capturing the interaction of an Actor with the System – the software piece being developed. An Actor is either a human or another software module or application.

This is a template for UC definition:

Actors

List of Actors, for example: Lexicographer, Administrator, Data Analyst, Thesaurus, etc.

Description

A title that captures the essence of the interaction with the system

Preconditions

Conditions that should be satisfied before the start of the UC

Post Conditions

What is accomplished by the Actor at the end of UC – what is the state of the System after executing the Steps below 

Normal Course

Defines the Steps: the actions performed by the System in response to the Actor’s requests.

Alternative Course

Branchings from the main flow of activities, described in a manner similar to the Normal Course

Exceptions

List of activities in case of a System malfunction or Actor errors

Notes 

Comments to clarify any of the above items

For any set of UCs, short descriptions of the Actors and the System are provided.
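In code terms, the template above can be thought of as a simple record. A minimal Python sketch, with illustrative field names only:

use_case = {
    "actors": ["Lexicographer", "Administrator", "Data Analyst", "Thesaurus"],
    "description": "A title that captures the essence of the interaction",
    "preconditions": [],        # conditions satisfied before the UC starts
    "postconditions": [],       # the state of the System after the Steps
    "normal_course": [],        # the ordered Steps
    "alternative_courses": [],  # branchings from the main flow
    "exceptions": [],           # System malfunction or Actor errors
    "notes": "",
}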

See The Major Use Case section for an example of a UC.

Structure of Modules

High-level view and Actors

All three modules – the NLM, the Thesaurus, and the Corpora – exchange data and provide services to each other; Diagram 1 is a high-level view.

Diagram 1

The Diagram shows interactions between the modules (non-human Actors) and the human Actors. The NLM tags texts and the Thesaurus stores exhaustive information about lexemes, while the Corpora are repositories of different types and kinds of speech (text).

The whole System is controlled by the Linguist via the Linguistic Workbench. This Actor is supposed to be a very proficient [linguistic] morphologist and lexicographer, though not necessarily a single individual. When the Linguist puts on the morphologist hat, she/he should understand the structure of lexemes [Հայ2022::277-278], be able to identify the stem and the type of stem [Հայ2022::276-277], and find or construct relevant paradigmatic trees [Հայ2022::124-127, 139] – in general, understand the structure of the NLM Morpheme Dictionaries and the morpheme and lexeme descriptions [Հայ2022::119-123].

When the lexicographer’s hat is on, it is expected that the Linguist can correctly add new lexeme information into the System: create the relevant entries in the different types of Dictionaries of the Thesaurus.

The Linguist controls all major data and processing flows of the System.

The end-user can be a layman or philosopher, philologist, lexicographer, etymologist, psycho-, socio-, or theoretical linguist, editor, educator, machine (ML/AI), etc.

NLM and Thesaurus Structure

Diagram 2 schematically represents the major data structures and data flows in the System.

  Diagram 2

The Morpheme Dictionary (A1 on the Diagram) stores stems, prefixes, suffixes (postfixes), and particles. The entries contain the morpheme structure and tags (i.e., types like NOML, VERB, NOUN, ADJ, ADV, ABL, GEN, INF, FUT, etc.). The prefix, suffix, and particle Dictionaries are depicted with dashed lines because they are used by the algorithmic stemmer/parser. The Diagram assumes a different, “semi-brute-force” parser, which has a minor disadvantage – more storage is used – but has implementational (relative simplicity) and functional advantages, in particular, the implementation of spelling error correction suggestions when the parser is used in the spell checking mode. It also simplifies the construction of particular or all text forms.
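To illustrate the spell-checking advantage: because the semi-brute-force approach stores every text form, suggestions can be produced by fuzzy matching an unknown wordform against the stored forms. A minimal Python sketch, with an illustrative word list:

import difflib

# Normally the full Text Forms (C1) repository; an illustrative list here.
text_forms = ["jump", "jumps", "jumped", "jumping", "jumper"]

def suggest(wordform, n=3):
    # Return up to n stored text forms closest to the unknown wordform.
    return difflib.get_close_matches(wordform, text_forms, n=n, cutoff=0.7)

print(suggest("jumpde"))  # ['jumped', 'jumper'] – close stored forms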

It is beneficial to separate the stem Dictionaries into Base, First/Last Names, Toponyms, scientific terms, etc. The Base Dictionary is mandatory, while all others can be added when necessary. For example, if the analyzed texts are of a chemical nature, we have the flexibility of using the Base, First/Last Names, and Chemical Nomenclature Dictionaries.

The Generative (Morphological and Paradigmatic) Trees (B2) repository contains tree-like structures [Հայ2022::124-127] that guide the Text Forms Generator (B1) in building all possible derivative wordforms from the stems in the Stems Dictionary. The generated forms are tagged and stored in the Text Forms (C1) repository, which is indexed for searching convenience.

Along with the entry itself, the entries in the Text Forms repository contain structure, tag, and lemma fields. Homonyms are separate entries. For example [Հայ2022::139], the “կաղապարում” lexeme, which means 1: in the model, 2: modeling (name of a process), 3: modeling (name of an activity – gerund) – see below – will have these entries:

1.  {"entry": "կաղապարում", "structure": "կաղապար-ում", "tag": "NOUN.LOC", "lemma": "կաղապար"}

2.  {"entry": "կաղապարում", "structure": "կաղապար-ում", "tag": "NOUN.NOM", "lemma": "կաղապարում"}

3.  {"entry": "կաղապարում", "structure": "կաղապար-ում", "tag": "VERB.NOM", "lemma": "կաղապարել"}
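A minimal Python sketch of how such homonym entries might be looked up, indexing the three example entries above by surface form (the repository structure here is an assumption):

from collections import defaultdict

entries = [
    {"entry": "կաղապարում", "structure": "կաղապար-ում", "tag": "NOUN.LOC", "lemma": "կաղապար"},
    {"entry": "կաղապարում", "structure": "կաղապար-ում", "tag": "NOUN.NOM", "lemma": "կաղապարում"},
    {"entry": "կաղապարում", "structure": "կաղապար-ում", "tag": "VERB.NOM", "lemma": "կաղապարել"},
]

index = defaultdict(list)              # surface form -> list of analyses
for e in entries:
    index[e["entry"]].append(e)

for analysis in index["կաղապարում"]:   # homonyms come back as separate analyses
    print(analysis["tag"], analysis["lemma"])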

The Text Forms Generator identifies lemmas that are not yet in the Thesaurus Repository (E1) and puts them into the D1 queue. The Lexicographer reviews the queued items and adds missing data entries in the Thesaurus.
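The D1 queue logic itself amounts to a set difference; a minimal sketch with illustrative contents:

# Lemmas produced by the Generator but absent from the Thesaurus (E1).
generated_lemmas = {"կաղապար", "կաղապարում", "կաղապարել"}
thesaurus_lemmas = {"կաղապար"}            # illustrative E1 content
d1_queue = sorted(generated_lemmas - thesaurus_lemmas)
print(d1_queue)                           # items for the Lexicographer to review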

The Tags repository is a hierarchical list of tags (also used by the algorithmic stemmer/parser). The Tags and suffix Dictionaries are used for informational purposes.

The Parser/Tagger (C3) reads the text, extracts lexemes, and tags them according to the Text Forms repository data. The lexeme signature is a context definition [Հայ2022::200] that refines grammatical role and meaning [Hay2025]. The Signatures repository (B3) contains verbal [phrase] signatures [Հայ2022::207, 284-288] that the NLM Parser uses for building the trees.

The words that the Tagger cannot tag are sent into the B4 queue. There are two possible causes for a wordform getting into the queue: 1) the wordform is misspelled, or 2) it is unknown to the System – it is not in the Text Forms (C1) repository.

The Morpheme Dictionary (A1), the Generative (Morphological and Paradigmatic) Trees (B2), and the Tags and Signatures (B3) are parts of the Morphological repository.

The Major Use Case

This is the Linguistic Workbench UI (the System) Use Case for parsing a text using the modules of Diagram 1:

Actors

Linguist, NLM, Thesaurus, Corpus

Description

Parsing and tagging a text for storage in the Corpus

Preconditions

The Linguist is authenticated and logged in

Post Conditions

Parsed text is available in the Corpus. 

Normal Course

1.     The Linguist navigates the System menu, specifies the location of the file, and requests Tagging.

2.     The System reads the file and submits it to the NLM for tagging

3.     The NLM identifies and tags the lexemes, and:

1.     Sends the tagged text to the Corpus

2.     Queues unknown words for the Linguist’s review

3.     Sends the tagged text to the Linguist for review

4.     The Linguist reviews the Queue (the Queue processing UC) and:

1.     Sends new stems to the NLM for Dictionary update (the Dictionary update UC)

2.     The NLM updates the Dictionary

3.     Updates the tagged text and saves it in the Corpus (the Text tagging UC)

4.     The Linguist requests Parsing

5.     The System reads the tagged text and submits it to the NLM for parsing

6.     The NLM builds the requested tree-like structure using Signatures and saves it in the Corpus

7.     The Linguist reviews, corrects, and approves parsed text (the Text parsing UC).

8.     End of UC: The parsed text becomes available in the Corpus.
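For readers who think in code, the Normal Course can be summarized as orchestration logic. The module interfaces below (system, nlm, corpus, linguist) are hypothetical, not a prescribed API:

def tag_and_parse(system, nlm, corpus, linguist, path):
    text = system.read(path)                    # steps 1-2
    tagged, unknown = nlm.tag(text)             # step 3
    corpus.save_tagged(tagged)                  # step 3.1
    linguist.queue(unknown)                     # steps 3.2-3.3
    for stem in linguist.review_queue():        # step 4
        nlm.update_dictionary(stem)             # steps 4.1-4.2
    tagged = linguist.correct(tagged)           # step 4.3
    corpus.save_tagged(tagged)
    tree = nlm.parse(tagged)                    # steps 4.4-6
    corpus.save_parsed(linguist.approve(tree))  # steps 7-8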



Alternative Course

 

Exceptions


Notes 


NLM Functionality

The NLM functionality includes:

1.     Stemming – extraction of a lexeme stem. More generally, chopping a lexeme into a sequence of morphemes: <prefix>…<prefix>-<stem>…<stem>-<suffix>…<suffix>, for example, un-believ-able.

2.     Tagging – determining the type of a lexeme and assigning tags. For example, jumps: VERB.PRES.3

3.     Lemmatization – construction of the Dictionary form from the text form. For example, for jumps and jumped the lemma is jump. The lemma for jumping can be jump, if it is in a verbal phrase, or jumping, if in a noun phrase.

4.     Lexeme generation – construction of all or specified text forms for a stem or lemma. For example, for jump the text forms are: jump, jumps (NOUN.PLU, VERB.PRES.3), jumped, jumping, jumper.

5.     Parsing – determining the tree-like structure of a phrase based on tagging information. It creates tree-like structures in a common (same for all languages) format [Հայ2022::207], using unified nominal (declension) and verbal (conjugation) paradigmatic systems [Հայ2022::166-181] and verbal signatures; the target representation is a matter of personal choice: dependency, phrase, or other structure.

6.     Phrase generation – the opposite of parsing: conversion of a tree-like structure into a phrase (sentence). [Note. This cannot be considered true generation, because any tree-like phrase structure created outside the human brain is a result of parsing natural speech – human-generated phrases. The human brain so far is the only known entity that supposedly constructs a concept tree (semantic structure) without language involvement; it is later converted into a syntactic structure by the brain. At best this is a conversion of one phrase into another. This should be taken into consideration when we talk about generative AI (LLMs) – they convert one sequence of tokens into another, and sometimes one or both of the token sequences correspond to natural speech [Հայ2022::42]. It is probably more accurate to call this process phrase conversion.]
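A minimal Python sketch of functions 1-4, hard-coding the English jump example above; a real NLM would derive this table from the Morpheme Dictionary and the Generative trees:

FORMS = {
    "jump":    [("VERB.PRES", "jump"), ("NOUN.SING", "jump")],
    "jumps":   [("VERB.PRES.3", "jump"), ("NOUN.PLU", "jump")],
    "jumped":  [("VERB.PAST", "jump")],
    "jumping": [("VERB.GER", "jump"), ("NOUN.SING", "jumping")],
    "jumper":  [("NOUN.SING", "jumper")],
}

def stem(wordform):
    # Function 1: chop a lexeme into morphemes (illustrative, one stored example).
    return {"unbelievable": ["un", "believ", "able"]}.get(wordform, [wordform])

def tag(wordform):
    # Function 2: all possible tags for a text form.
    return [t for t, _ in FORMS.get(wordform, [])]

def lemmatize(wordform):
    # Function 3: candidate lemmas; the phrase context disambiguates.
    return sorted({lemma for _, lemma in FORMS.get(wordform, [])})

def generate(lemma):
    # Function 4: all text forms whose analyses point back to the lemma's stem.
    return [w for w, analyses in FORMS.items() if any(l.startswith(lemma) for _, l in analyses)]

print(tag("jumps"))          # ['VERB.PRES.3', 'NOUN.PLU']
print(lemmatize("jumping"))  # ['jump', 'jumping']
print(generate("jump"))      # ['jump', 'jumps', 'jumped', 'jumping', 'jumper']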

Parsing (tagging) is the most important function of the NLM. It is a pivotal part of corpora building because of the wide use of its results in the statistical analysis of texts.

In addition to the above functionality, the NLM can be used for informational purposes as a repository of classified stems and suffixes, the tag hierarchy, production rules, etc.

Thesaurus

The Thesaurus [Rog1852], unlike a monolingual (explanatory) dictionary, contains ontological (logical) relations – synonym, antonym, meronym, etc. – between concepts [Հայ2022::23]. However, it is not just a list of synonyms, antonyms, or neutral words (isoaposonym – a made-up term: ισο+απόσ[ταση]+ὄνομ[α] for միջանուն) that are equidistant in meaning from a synonym and an antonym. It is also organized as a hierarchy of categories. This is what P.M. Roget has to say about his creation: “The present Work is intended to supply, with respect to the English language, a desideratum hitherto unsupplied in any language; namely, a collection of the words it contains and of the idiomatic combinations peculiar to it, arranged not in alphabetical order as they are in a Dictionary, but according to the ideas which they express. The purpose of an ordinary dictionary is simply to explain the meaning of the words; and the problem of which it professes to furnish the solution may be stated thus: - The word being given, to find its signification, or the idea it is intended to convey. The object aimed at in the present undertaking is exactly the converse of this: namely, - The idea being given, to find the word, or words, by which that idea may be most fitly and aptly expressed. For this purpose, the words and phrases of the language are here classed, not according to their sound or their orthography, but strictly according to their signification” [Rog1852::xiii].

P.M. Roget introduced fixed levels of hierarchy: Class, Division, Section, Subsection, and 1000 Heads – groups of concepts. In the electronic Thesaurus there is no need for an a priori setting of the hierarchy: it will come about during the building of the Thesaurus and will be revealed at search time.

It is remarkable that P.M. Roget supplied his book with an alphabetical inverted index, where for each word he listed all the relevant Subsections. This might be the first implementation of the idea of an inverted index.

One of the dictionaries in the Thesaurus repository (E1 on Diagram 2) is the Ontological Dictionary, which is the Thesaurus (in its original sense). The entry in the Ontological Dictionary contains the entry name and the fields that specify the relations of synonymy, antonymy, and hypernymy. The neutral relation is implemented by “more” and “less” fields. For example, for the word hot the “antonym” is cold and the “less” is lukewarm. We can decide that there is no “more” for hot, since it is the highest polar value in the cold, lukewarm, hot triplet. However, if the lexicographers and philosophers find that scorching is more than hot, then we will add scorching as a “more” field value to the hot entry. For example:

{"entry": "hot", "category": "heat |degree|", "synonym": ["sunny", "heated", "smoking"], "antonym": ["cold", "frozen", "icy", "frosty"], "less": ["warm", "lukewarm"], "more": ["scorching", "boiling", "broiling"]}

The “more” and “less” approach allows representing not just a triplet, but a whole spectrum of degrees.
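A minimal sketch of how the “more”/“less” fields might be traversed to assemble such a spectrum; the dictionary content is illustrative, and it is assumed that “less” lists the nearest degree first:

ONTO = {
    "hot":  {"less": ["warm", "lukewarm"], "more": ["scorching"]},
    "cold": {"less": [], "more": ["lukewarm"]},
}

def spectrum(entry):
    # "less" is assumed ordered nearest-first, so reverse it for display.
    e = ONTO.get(entry, {})
    return list(reversed(e.get("less", []))) + [entry] + e.get("more", [])

print(spectrum("hot"))  # ['lukewarm', 'warm', 'hot', 'scorching']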

In the Electronic Thesaurus, in addition to the Ontological Dictionary (the traditional Thesaurus [Rog1852]), we store information from Explanatory, Etymological, and other dictionaries. Grammatical and morphological information about lexemes is available in the Text Forms (C1) repository. Bilingual and multilingual dictionaries, and other dictionaries, for example Rhymes, are also available. The translations of a lexeme into other languages are also a form of description.

The Electronic Thesaurus is a corpus of different dictionaries that contain comprehensive and exhaustive information about each lexeme-entry, including extended synonymy – corresponding lexemes in foreign languages.

Corpora Creation and Support

Definition and functions

A Corpus is a collection of texts in a specific language (dialect) – a database which, like a library, allows you to search for texts (books) by author, title, and perhaps other characteristics. However, in practice, corpora, unlike such electronic libraries, have other functions, for example: richer search capabilities (lexemes, lexeme types, phrase structure or dependency type, regular expression searching); tagging – labeling the lexemes per their linguistic form; statistical calculations; data views configuration; mathematical model specifications and their applications.

An important characteristic of new-generation corpora is close integration with the Thesaurus and the NLM. Such integration is mutually beneficial: the former enables quick retrieval of exhaustive lexicographic data, while the latter provides the grammatical data about lexemes. Using the corpora data, the linguist can correct and improve the NLM and the Thesaurus. The data in corpora contains information about the “real” use of lexemes and grammar rules.

Tagged text allows displaying phrases as different tree structures: phrase structure (of different flavors), dependency structure (of different flavors), or other.

Text Corpus

What are the main functions of a Corpus?

1.     Repository of original texts. A digital library is an electronic library that stores visual (books, manuscripts, born-digital texts) and audio (field interviews, TV and radio news and programs, soap operas, podcasts, etc.) images.

2.     Perpetual influx of images.

3.     Conversion of text (OCR) and audio (ASR) images to plain text.

4.     Text parsing – Tagging, lemmatization; dependency or phrase structure representation

5.     Indexing – creating an inverted index of texts (see the sketch after this list).

6.     Searching – regular search (regex search), syntactic search, semantic search, etc.

7.     Mathematical, statistical modeling

8.     Data analysis and visualization.
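A minimal sketch of functions 5 and 6 over a toy corpus: building an inverted index from lemmatized documents and intersecting posting sets for a conjunctive search. The document contents are illustrative:

from collections import defaultdict

docs = {                       # document id -> lemmatized text (toy data)
    "doc1": ["love", "conquer", "all"],
    "doc2": ["all", "world", "stage"],
}

index = defaultdict(set)       # lemma -> set of document ids (the inverted index)
for doc_id, lemmas in docs.items():
    for lemma in lemmas:
        index[lemma].add(doc_id)

def search(*lemmas):
    # Documents containing all the requested lemmas (posting-set intersection).
    sets = [index[l] for l in lemmas]
    return set.intersection(*sets) if sets else set()

print(search("all"))           # {'doc1', 'doc2'}
print(search("all", "stage"))  # {'doc2'}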

Audio Corpus

These functions are specific to audio Corpora:

1.     The ASR converter creates two types (copies) of text:

a.     Encoded with written (alphabetic) symbols (language-specific script letters): ordinary written text, which is subsequently tagged (see #4 in the previous section).

b.     Encoded with the International Phonetic Alphabet

2.     In the ordinary written text, the beginning of each segment is anchored to its position in the sound source. This allows the researcher to easily compare the visual (written) image of the text obtained by the converter with the pronunciation of the original.

3.     Interlocutor identification

4.     Audio spectral analysis to identify syntagmatic units, as well as prosodic and other vocal characteristics of speech
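A time-anchored segment carrying both encodings (items 1 and 2 above) might look like the record below; all field names and values are assumptions, not a fixed schema:

segment = {
    "audio_file": "news_2024-06-01.wav",     # the sound source
    "start_sec": 12.40, "end_sec": 15.85,    # anchor into the recording (item 2)
    "speaker": "interlocutor_1",             # item 3: interlocutor identification
    "orthographic": "the quick brown fox",   # item 1a: written-symbol encoding
    "ipa": "ðə kwɪk braʊn fɒks",             # item 1b: IPA encoding
}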

Structure of Corpus

Major Parts

There are two major types of Corpus software: 1) for building and 2) for using the Corpus. The former mostly covers data input, data conversion, and indexing; the latter, data search and rendering.

Diagram 3 is a schematic representation of the major modules of the Corpus and data flows among them:

Diagram 3

The Corpus is a three-layered system: a) data collection and storage (the Data Repository in Diagram 3), b) data processing (the NLM Tagger, A3 and partially A2, B2), and c) data rendering (A1, A2).

Data Collection

From external sources, such as YouTube, Google, an Electronic Library, an Internet Dictionary, etc., data is directed to a NoSQL text database (B1 and B3) and image repositories (D1 and D2), when this is legally allowed and there is no convenient access to the source. Visual and audio data are converted (C1, C2) to text using ASR and OCR converters.

Data Flow Management

Data inflow is controlled by the Linguist via the Workbench UI. Upon saving a text image into the designated folder, the file is picked up by the System, moved into the image repository, and fed into the OCR (C1) module. Audio files are saved in the same designated folder by an external scheduler that extracts specified news programs from the internet. These files are picked up by the System and fed into the ASR (C2) module. The latter, unlike the OCR module, creates two plain text files: one in IPA notation, the other in the relevant Unicode character set.

Plain text can also come from “born digital” sources; it is passed to the NLM Tagger.

The Tagger saves the text into the Tagged Text (B1) repository. Unidentified lexemes are passed to the Linguist’s queue (A1) for review and manual correction of the automatically tagged text.

After the Linguist’s review and approval, the parsed data pieces are indexed and become available for end-user viewing. See more details in The Major Use Case section above.

As mentioned before, the Thesaurus is a special form of Corpus – a corpus of different types of dictionaries. The difference is in the unit of storage (file) and indexing: for the Thesaurus it is the dictionary entry; for other corpora it can be the whole document (book, journal, article) or a page, chapter, section, etc.

For either type of corpora, the unit of storage does not “know” about other units. At most, the units can be grouped into folders per document. The Catalog is built at search time using the Inverted Index.

Data Rendering

Data Rendering is one of the major functionalities (along with data flow control) of the front-end, the UI.

The linguist can define Data Views (A3) and research them using Analytics and Reporting tools (A2).

Simple and complex queries (A4) can be constructed and submitted to retrieve data elements from the Data Views or the whole data repository. For example, the System can display a phrase aligned with the audio spectrum and IPA representation.

Summary

We described an information processing system architecture as an interconnection of IT tools that control and support data flows into big data repositories of texts for building and accessing corpora to study natural language and speech. It is controlled and operated by linguists.

Linguists and other humanities experts now participate in the design and development of the software systems they use. To communicate effectively with the developers and IT professionals, the humanities specialists become proficient in the area of DH.

One of the important goals of DH education is establishing a vocabulary, or lexicon, for communicating with IT specialists in a precise and clear way. And it is not just the memorization of concepts. Formulating system requirements accurately also requires the understanding and correct usage of such concepts as mathematical function, logical operation, probability (and statistics), data structure, data storage and retrieval, algorithm, Turing machine, finite state automaton, grammar (Chomsky) hierarchy, regular expression, etc. In addition to those concepts, they need to understand the software development lifecycle and, most importantly, the phase of requirements gathering.

In the beginning of this essay we introduced several IT concepts related to system requirements specification, which seem neglected in DH courses.

It is plain that these courses are not supposed to produce IT systems developers, but they should prepare students to play the role of subject matter expert or business analyst on software design and development teams.

The proposed System design emphasizes the benefits of closer integration of the available scattered NLM, Thesaurus, and Corpus modules and other applications. The dynamic nature of such integration is very important. It allows for accurate capture and perpetual improvement of the understanding of the lexicon and the grammar of a language. This understanding relies not just on the opinions expressed in monographs or textbooks, but rather on the “real”, “live” usage of language in relevant communities.

There are several levels of dynamism in the System:

1.     There is a perpetual influx of data

2.     No rigid categorization: it is created on the fly at search time, rather than at storage time. For example:

a.     In a Digital Library – which is technically a multilingual Corpus – the instances of books are not put into a catalog; a book is either assigned an existing category (registered under a super category) or a new category is created for it.

b.     In the Thesaurus – which is technically a Corpus of Dictionaries – the lexemes are not put into predefined category buckets or folders; a category is assigned to the lexeme.

3.     Content dynamics – error corrections and refinements of concepts, categories, grammatical rules, etc., due to the influx of new information (data).

4.     The functionality is not fixed; new functionality is made available via published interfaces of the System (or the interfaces of the external tools).

This is not a static, closed, or “petrified” system with separate creation and exploitation times. It is a dynamic, live system in perpetual upgrade, expansion, correction, and improvement in both functionality and data volume.

The three modules of the System can be implemented for multiple languages. Since the result of parsing is in a common format (see #5 in the NLM Functionality section), the Phrase generators (#6) can be considered translators.

Terms

The majority of the term descriptions are from Wikipedia. However, Wikipedia is credited only for verbatim (at the time of this writing) quotes.

Actor – either a human or a mechanical (software module, hardware unit, application, etc.) role player in the requirements specification for a System design

Automated Speech Recognition (ASR) - interdisciplinary subfield of computer science and computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers (Wikipedia)

Acceptance Criteria (AC) – a list of statements that are true, if a feature of the system defined in a US is implemented correctly.

International Phonetic Alphabet (IPA) - an alphabetic system of phonetic notation based primarily on the Latin script. It was devised by the International Phonetic Association in the late 19th century as a standard written representation for the sounds of speech (Wikipedia).

Lemma – lexeme dictionary form

Lemmatization – extracting lemma from the text form

NoSQL database - or "Not Only SQL" database, is a non-relational database designed to handle unstructured or semi-structured data. Other database types are: relational normalized, relational denormalized (warehouse), hierarchical (file system), etc.

Optical Character Recognition (OCR) - electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document (Wikipedia).

Parser – a module that performs analysis (breakdown) of a message (natural language text, programming language source code) and its transformation into a structured format. Parsing involves breaking down the input into its constituent parts according to predefined rules or a grammar. We assume that the “language organ” in the human brain parses natural speech into a tree-like structure before passing it to other layers for storage and analysis. Computer programming language parsers also parse source code into a tree-like structure before passing it to other modules for linearizing it into a sequence of commands.

Portable document format (PDF) - standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems (Wikipedia)

System – product, service, organization that is being developed.

Tagger - in corpus linguistics, part-of-speech tagging (POS tagging, PoS tagging, or POST), also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc. (Wikipedia) [Note. This is written from an analytical language grammar perspective. Instead of part-of-speech the paradigmatic form should be used and taught to school-age children.]

User Story - standardized narrative format for describing system features from the end-user perspective. It specifies “who wants to achieve what” – the User Story Phrase – and Acceptance Criteria.

Use Case - structured description of the System behavior as responses to a sequence of the Actor’s requests for achieving a tangible goal.

Data View – data extract from one or more underlying data sources through a predefined query. It allows the users to interact with the extracted data as if it were a separate data repository.

References

[Hay2025] A. Hayrapetyan. Conjunctions in Eastern Armenian. https://www.academia.edu/129638433.

[Rog1852] P.M. Roget. Thesaurus of English Words and Phrases, 1852 (1879, expanded, ed. J.L. Roget, London; 1925, revised, ed. S.R. Roget, London). Avenel Books (Crown Publishers), NY, 1988.

Agoulis, Concord, 2022


 
 
 


 
 
 


