Paper types

This is a description of the different types of paper at COLING 2018. It is important to pick the right paper type to help ensure you get good-quality reviews for your work. Note that all paper types use the same template; download the Word and LaTeX templates here: coling2018.zip

Computationally-aided linguistic analysis

The focus of this paper type is new linguistic insight. It might take the form of an empirical study of some linguistic phenomenon, or of a theoretical result about a linguistically-relevant formal system.

  1. Relevance: Is this paper relevant to COLING?
  2. Readability/clarity: From the way the paper is written, can you tell what research question was addressed, what was done and why, and how the results relate to the research question?
  3. Originality: How original and innovative is the research described? Originality could be in the linguistic question being addressed, in the methodology applied to the linguistic question, or in the combination of the two.
  4. Technical correctness/soundness: Is the research described in the paper technically sound and correct? Can one trust the claims of the paper—are they supported by the analysis or experiments and are the results correctly interpreted?
  5. Reproducibility: Is there sufficient detail for someone in the same field to reproduce/replicate the results? [n/a for certain types of theoretical results]
  6. Data/code availability: Is the data/code (as appropriate) available to the research community or is there a compelling reason given why this is not possible?
  7. Generalizability: Does the paper show how the results generalize, either by deepening our understanding of some linguistic system in general or by demonstrating methodology that can be applied to other problems as well? [n/a for certain types of theoretical results]
  8. Meaningful comparison: Does the paper clearly place the described work with respect to existing literature? Is it clear both what is novel in the research presented and how it builds on earlier work?
  9. Substance: Does this paper have enough substance for a full-length paper, or would it benefit from further development?
  10. Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

NLP engineering experiment paper

This paper type matches the bulk of submissions at recent CL and NLP conferences.

  1. Relevance: Is this paper relevant to COLING?
  2. Readability/clarity: From the way the paper is written, can you tell what research question was addressed, what was done and why, and how the results relate to the research question?
    1. Is it clear what the authors’ hypothesis is? What is it?
      [A text input response]
    2. Is it clear how the authors have tested their hypothesis? [y/n]
  3. Originality: How original and innovative is the research described? Note that originality could involve a new technique or a new task, or it could lie in the careful analysis of what happens when a known technique is applied to a known task (where the pairing is novel) or in the careful analysis of what happens when a known technique is applied to a known task in a new language.
  4. Technical correctness/soundness: Is the research described in the paper technically sound and correct? Can one trust the claims of the paper—are they supported by the analysis or experiments and are the results correctly interpreted?
    1. Is it clear how the results confirm/refute the hypothesis, or are the results inconclusive?
    2. Do the authors explain how the results follow from their hypothesis (as opposed to, say, some other possible confounding factor)?
    3. Are the datasets used clearly described and are they appropriate for testing the hypothesis as stated?
  5. Reproducibility: Is there sufficient detail for someone in the same field to reproduce/replicate the results?
  6. Data/code availability: Is the data/code (as appropriate) available to the research community or is there a compelling reason given why this is not possible?
  7. Error analysis: Does the paper provide a thoughtful error analysis, which looks for linguistic patterns in the types of errors made by the system(s) evaluated and sheds light on either avenues for future work or the source of the strengths/weaknesses of the systems?
  8. Meaningful comparison: Does the paper clearly place the described work with respect to existing literature? Is it clear both what is novel in the research presented and how it builds on earlier work?
  9. Substance: Does this paper have enough substance for a full-length paper, or would it benefit from further work?
  10. Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

Reproduction paper

The contribution of a reproduction paper lies in analyses of and in insights into existing methods and problems—plus the added certainty that comes with validating previous results.

  1. Relevance: Is this paper relevant to COLING?
  2. Readability/clarity: Is the paper well-written and well-structured?
  3. Data/code availability: Is the data/code (as appropriate) available to the research community or is there a compelling reason given why this is not possible?
  4. Analysis: If the paper was able to replicate the results of the earlier work, does it clearly lay out what information needed to be filled in to do so? If it was not able to replicate the results of the earlier work, does it clearly identify what information was missing and/or the likely causes?
  5. Generalizability: Does the paper go beyond replicating the results of the original work to explore whether they can be reproduced in another setting? Alternatively, in cases of non-replicability, does the paper discuss the broader implications of that result?
  6. Informativeness: To what extent does the analysis reported in the paper deepen our understanding of the methodology used or the problem approached? Will the information in the paper help practitioners with their choice of technique/resource?
  7. Meaningful comparison: In addition to identifying the experimental results being replicated, does the paper motivate why these particular results are an important target for reproduction and what the future implications are of their having been reproduced or been found to be non-reproducible?
  8. Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

Resource paper

Papers in this track present a new language resource. This could be a corpus, but it could also be an annotation standard, a tool, and so on.

  1. Relevance: Is this paper relevant to COLING? Will the resource presented likely be of use to our community?
  2. Readability/clarity: From the way the paper is written, can you tell how the resource was produced, how the quality of annotations (if any) was evaluated, and why the resource should be of interest?
  3. Originality: Does the resource fill a need in the existing collection of accessible resources? Note that originality could be in the choice of language/language variety or genre, in the design of the annotation scheme, in the scale of the resource, or still other parameters.
  4. Resource quality: What kind of quality control was carried out? If appropriate, was inter-annotator agreement measured, and if so, with appropriate metrics? Otherwise, what other evaluation was conducted, and how satisfactory were the results?
  5. Resource accessibility: Will it be straightforward for researchers to download or otherwise access the resource in order to use it in their own work? To what extent can work based on this resource be shared?
    [answers to include: Yes, I have verified]
  6. Metadata: Do the authors make clear whose language use is captured in the resource and to what populations experimental results based on the resource could be generalized? In the case of annotated resources, are the demographics of the annotators also characterized?
  7. Meaningful comparison: Is the new resource situated with respect to existing work in the field, including similar resources it took inspiration from or improves on? Is it clear what is novel about the resource?
  8. Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Will people learn a lot by reading this paper or seeing it presented? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

Position paper

A position paper presents a challenge to conventional thinking or a futuristic new vision. It could open up a new area or novel technology, propose changes in existing research, or give a new set of ground rules.

  1. Relevance: Is this paper relevant to COLING?
  2. Readability/clarity: Is it clear what the position is that the paper is arguing for? Are the arguments for it laid out in an understandable way?
  3. Soundness: Are the arguments presented in the paper relevant and coherent? Is the vision well-defined, with success criteria? (Note: It should be possible to give a high score here even if you don’t agree with the position taken by the authors)
  4. Creativity: How novel or bold is the position taken in the paper? Does it represent well-thought-through and creative new ground?
  5. Scope: How much scope for new research is opened up by this paper? What effect could it have on existing areas and questions?
  6. Meaningful comparison: Is the paper well-situated with respect to previous work, both position papers (taking the same or opposing side on the same or similar issues) and relevant theoretical or experimental work?
  7. Substance: Does the paper have enough substance for a full-length paper? Is the issue sufficiently important? Are the arguments sufficiently thoughtful and varied?
  8. Overall recommendation: There are many good submissions competing for slots at COLING 2018; how important is it to feature this one? Please be decisive—it is better to differ from other reviewers than to grade everything in the middle.

Survey paper

A survey paper provides a structured overview of the literature to date on a specific topic, helping the reader understand the kinds of questions being asked about the topic, the various approaches that have been applied, how they relate to each other, and what further research areas they open up. A conference-length survey paper should address a sufficiently focused topic that it can do this successfully within the page limits.

  1. Relevance: Is the paper relevant to COLING?
  2. Readability/clarity: Is the paper generally easy to follow and well structured?
  3. Organization: Does the paper organize the relevant literature in a narrative and identify common strands of inquiry?
  4. Scope: Does the paper identify a reasonably focused area to survey?
  5. Thoroughness: Given the area identified to survey, does the paper cover all of the relevant literature? Is the literature reviewed represented accurately?
  6. Outlook: Does the paper identify areas for future work and/or clearly point out what is not yet handled within the literature surveyed?
  7. Context: Does the paper situate current research appropriately within its historical context? (We don’t expect papers to start with Pāṇini, yet at the same time something that only cites work from 2017 probably doesn’t capture how current work relates to the bigger picture.)