
Contextualization

Writing documentation is considered a burden by most developers. Yet, code without documentation is worthless, as not all users are tech-savvy enough to go through the source code to figure out how to accomplish a given task (and even if they were, for large and complex codebases it would be too time consuming to do so).

To protect themselves from accusations of missing documentation, most projects rely on automatic tools to create it. That’s nearly useless on its own (well, AI can already write code, so this sentence may become outdated). Wait, we don’t mean you shouldn’t use such tools (you should!), but that they produce, according to Daniele Procida’s Grand Unified Theory of Documentation, only a quarter of your docs (more on it later).

We believe this attitude towards documentation is not due to laziness, but due to a lack of clear guidelines to follow. Writing documentation should be simple and become part of your coding routine (as should testing, of course). This post intends to summarize the main guidelines accepted in the community and to point to useful resources. It also intends to standardize documentation writing within COOP, so we can ensure our projects follow high quality standards.

Before we dive deeper, remember that we are at the start of our journey. That means our documentation still lacks several of the features that will be shown next. Trying to apply everything at once may be overwhelming. Let’s not do it! The goal should be to slowly introduce these standards, to the point where writing great documentation becomes automatic. The pyramidal approach suggested by Carol Willing may be a good guide here: get practical, start with the essentials, and then keep moving towards the top of the pyramid by adding new features to the docs.

Where to start?

There are tons of resources about documentation on the internet (just search “How to write documentation”). This shows that people take documentation seriously and acknowledge its value. It also shows that they feel the need to present their view on how documentation should be written. In fact, at first sight, the answer to the question “How to write good documentation?” appears to be a matter of taste. But should it be? Writing is a technical skill (in this context) and documentation has clear goals… therefore, shouldn’t it be possible to formulate a unifying theory of documentation?

Daniele Procida’s talk presents a documentation system that aims to be such a theory (you can also find a short blog post on it here). This should be, in our modest opinion, the starting point of your readings about documentation. In fact, after reading other resources, you’ll find out most of them recycle Procida’s message, adding slight variations. And that’s fair, as you should always adapt the guiding principles to your use case (but it should not scare you, as you don’t need to be aware of all the variants).

Another great resource you should read (even if painfully long) is James Mertz’s blog post on documenting Python code. It is very comprehensive and covers topics such as the difference between commenting and documenting, documentation structure (it follows Procida), docstring formats, and type hints.

Here, we list several additional resources where you can learn more about documentation:

  • Write the Docs: “a global community of people who care about documentation”. Contains several learning resources.
  • The Good Docs Project: “best practice templates and writing instructions for documenting open source software”.
  • Eric Holscher’s blog: Eric is the co-founder of Read the Docs and Write the Docs. You can find several blog posts on the topic (see e.g. this post on semantic meaning). His slides are also very informative.
  • Google’s Season of Docs: support to open source projects to improve their documentation by hiring technical writers. Going through the accepted project proposals can bring valuable ideas.

Several other resources cover the technical writing aspect of documentation. While valuable, this topic will not be covered in this post, as it is a world by itself. Nevertheless, if you are used to writing papers, this will not be an obstacle for you.

A warning note before moving on: a lot of resources contain outdated information. That’s especially true when they speak about tools (e.g. sphinx), as tools get updated. Make sure to double-check what you’ve learned against more up-to-date resources (or by trying it out).

Benefits, rules, principles, and caveats

So, why bother writing documentation? Here is a non-exhaustive list of benefits:

  • users become self-sufficient (less support, more time for coding!)
  • can attract contributors, more users and talented developers
  • can help to build reputation with customers and attract funding
  • can make your code much more pleasurable to use
  • can reduce time required for new employees to start making contributions

You can surely find many more benefits your team can gain from good documentation (please let us know what you come up with). If you want a more in-depth look at the benefits of writing good documentation, this podcast episode and Topic 7 (“Communicate!”) of The Pragmatic Programmer may be good starting points.

In terms of what should be at the top of your head when writing documentation, Adam Scott’s eight rules are a good guide. In short, you should aim for clarity, comprehensiveness, and skimmability, offer examples (how-tos), allow repetition when useful (ARID - accept (some) repetition in documentation - instead of DRY), keep it up-to-date, and make it easy to contribute to and to find. Let’s also add navigability (search must be possible) and conciseness (if you can explain a concept in a sentence, why use more?).

Another interesting, principles-centered resource is this blog post from Write the Docs. There, they suggest approaching documentation like coding, i.e. following philosophies and principles (e.g. KISS), in order to foster clean and intuitive content.

Before closing this section, let’s add a few more caveats:

  • outdated documentation is worse than no documentation
  • documentation should be near the code (otherwise the likelihood of updating it accordingly will decrease significantly)
  • favour semi-automatic tools: try to automate what you can, but do not forget to add a human in the loop to ensure your automations are not simply creating “clutter”
  • automatically test code snippets (doctest may help here; see the sketch after this list), to ensure your docs remain consistent when the code is updated
  • good documentation, as good code, is time consuming to write
  • remember docs, as code, have to be maintained
  • appearance is important, so make it look good
  • involve your audience: they’ll be the main users of both your code and your docs, so ensure you use their feedback wisely
  • make the process of writing documentation so easy that it cannot be avoided
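
To illustrate the snippet-testing point above, here is a minimal sketch of how doctest keeps an example honest (the add function is purely illustrative):

def add(a, b):
    """Adds two numbers.

    Examples:
        >>> add(1, 2)
        3
    """
    return a + b

if __name__ == "__main__":
    import doctest

    doctest.testmod()  # fails loudly if the example output above becomes outdated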

In the end, you can see there’s not much difference between writing code and writing documentation.

How to structure documentation

The Grand Unified Theory of Documentation

The next figure summarizes the documentation system proposed by Daniele Procida:

[Figure: Daniele Procida’s documentation system]

Simply put, he proposes to split documentation into four parts (Tutorials, How-to guides, Explanations/Discussions/Topical Guides, and Reference), each with its own specific goals. Note the split between practical and theoretical, usefulness when studying or when coding, and the task each part is oriented to.

Tutorials serve as the entry point to your project (do not underestimate the importance of first impressions!). You take a wannabe user by the hand and help them explore the capabilities of your code. At the end, they must be proficient enough to be mostly autonomous and to start building on top of your code (referring to the other sections when some clarification is needed). Tutorials follow a learning-by-doing style. They must be quick (“a new user should be able to experience success within thirty minutes”), easy, but not too easy (the goal is to smooth the learning curve, but, depending on your project, more complex matters will arise sooner or later - do not postpone them so much that less experienced users feel deceived), and concrete (they’re not the place for abstractions). Explanations should be kept to a minimum (avoid distractions) and the different parts of the project should be covered (holistic view). Tutorials turn a learner into a user.

How-To Guides take the user through a series of steps required to solve a specific problem (problem-oriented). Contrary to tutorials, where you take responsibility for the learner, How-To Guides attempt to answer meaningful questions formulated by skilled users. They are focused (practicality beats completeness), easier and more fun to write, usable, and slightly more flexible (meaning you can show how slightly different problems can be tackled). Again, explanations should be kept to a minimum.

Reference contains specific information on how to use a method, class or function. It is descriptive (information-oriented), structured, consistent (dictionary-like), and accurate.

Explanations/Discussions/Topical Guides clarify a particular topic (understanding-oriented). They’re meant for comprehensiveness and give the why. This is the place where different approaches and examples are provided, and where connections are made. Unfortunately, they are commonly absent and left to books.

It is important to avoid a section’s content leaking into another’s: the greater the leakage, the larger the decrease in documentation quality. It makes maintenance harder and confuses the reader (“where should I go to find this information?”). By explicitly stating the purpose of each section we improve the user experience and avoid disappointment.

For an in-depth analysis of each of these sections, read Kaplan-Moss’s blog post (check also the quick summary made here).

In addition to these standard sections, the documentation should contain a Getting Started section that quickly summarizes the project, deals with installation, and potentially shows very simple examples (the typical contents of a README file), and a Changelog/Release Notes section that keeps the user updated on new features, changes, fixes, and deprecations (git is not a replacement for changelogs). Less importantly, a Development Guide section can lay out code design choices and contribution guidelines, and present the roadmap for the future.

Let’s get practical

Let’s now leave the realm of ideas and learn something of practical usefulness. The first step is to choose the documentation generation tool to use. That’s a no-brainer for the Python community: sphinx.

sphinx

sphinx is a tool written in Python that was initially designed to create Python documentation. Since then, support for other languages has been added, and it can also be viewed as a static site generator. Its markup language is reStructuredText (.rst) (parsed by docutils). reStructuredText, similarly to Markdown, is very easy to use and read. Yet, additional concepts such as directives and interpreted text roles make it very powerful for non-trivial use cases. In fact, these are the concepts that allow the representation of semantic information, which cannot be easily accomplished in Markdown. You can find the reStructuredText specification here.

Another strength of sphinx is the availability and ease of development of extensions, which can be split into built-in and third-party (you can find a collection here). If used wisely, extensions can make your life much easier and make your docs look much better. For example, when we mentioned above that sphinx uses reStructuredText we were only telling half the truth: sphinx uses reStructuredText, and Markdown, and jupyter notebooks… as long as the right extensions are added.

The follow-up question is, of course, why would you want to use Markdown in your docs? Well, if you are writing documents from scratch, you should not use Markdown (remember semantic meaning). Yet, sometimes you want to add your README.md or CHANGELOG.md directly to your docs, without having to perform any kind of manual document conversion. Or you have just finished writing a COOP blog post that fits beautifully in your project documentation. For that you just have to use MyST (recommonmark, a widely used “docutils-compatibility bridge to CommonMark“, was recently deprecated in favour of MyST). Simple, isn’t it? Find out more configuration details here.
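
As a reference, a minimal conf.py sketch for Markdown support could look like the following (assuming the myst-parser package is installed):

# conf.py (sketch)
extensions = [
    "myst_parser",  # lets sphinx parse Markdown (CommonMark plus MyST extras)
]

# map file extensions to the parser that should handle them
source_suffix = {
    ".rst": "restructuredtext",
    ".md": "markdown",
}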

And what about jupyter notebooks? We see a great strength in them: it is very easy to quickly validate the code examples you’re writing or to catch outdated examples (they loudly fail to run!). This makes them very suitable for writing tutorials and how-tos. You’ll need nbsphinx and pandoc. Find out more configuration details here (wait, what about DRY?). (Yes, we know doctest can be used to automate code block validation, but we really like jupyter notebooks - did you know we can even create presentations with them?)
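
Again as a sketch (assuming nbsphinx and pandoc are installed), the configuration side is just another extension plus a choice of execution policy:

# conf.py (sketch)
extensions = [
    "nbsphinx",  # renders .ipynb files as documentation pages
]

# "never" keeps the stored notebook outputs; "always" re-runs them at build time
nbsphinx_execute = "never"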

We are not finished with sphinx’s strengths yet. This next one refers to output formats: besides HTML, you can export your docs to LaTeX (which then compiles into PDF), ePub, Texinfo, and more.

You can learn how to set up sphinx from scratch here.

Commenting vs documenting

Before proceeding to the docstrings section, it is important to make a clear distinction between commenting and documenting. The former is intended for maintainers and developers and helps in understanding the code, its purpose, and its design. On the other hand, documenting describes use and functionality and is intended for users. Commenting happens alongside the code. Documenting is done in docstrings.
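
A tiny, purely illustrative example of the two side by side:

def normalize(values):
    """Scales a sequence of numbers to the [0, 1] range."""  # documenting: tells users what it does
    # commenting: tells maintainers why - guard against a zero span
    lowest = min(values)
    span = (max(values) - lowest) or 1
    return [(v - lowest) / span for v in values]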

Docstrings

Docstrings, besides being immensely valuable while we’re coding (since they inform us about the task a method performs, what it expects, and what it returns), are of great importance for the creation of an informative API reference section. Remember our groaning about automatic documentation? Well, an API reference powered by great docstrings is exactly the kind of semi-automatic documentation that can clearly be leveraged.

scipy, for example, places tons of information in docstrings, from extensive notes to examples. It comes, of course, with trade-offs: you get substantial information about a method near its definition, but your source code gets super cluttered with text, which can be annoying (it is worth noting that most IDEs have plugins to hide docstrings - check e.g. Fold Python Docstrings if you use Sublime Text).

Besides their importance for documentation, docstrings are printed out when the built-in help function is used. In case you need to access them, they are stored in the __doc__ attribute of your Python objects.
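
For instance (using a throwaway function just for illustration):

def greet(name):
    """Returns a greeting for `name`."""
    return f"Hello, {name}!"

print(greet.__doc__)  # prints: Returns a greeting for `name`.
help(greet)           # prints the signature plus the same docstring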

In terms of standards/guidelines on how to write docstrings, there are essentially three widely accepted docstring formats: Google docstrings, Numpy docstrings, and reStructuredText, Python’s official docstring format. It does not really matter which one you select (all get rendered the same if written properly), but you should be consistent within your project.

Numpy docstrings are widely used in scientific computing open-source projects (likely due to the importance of numpy in this community), which makes their adoption easier (they benefit from a network effect, as it is very easy to find projects that use them consistently and to imitate the best practices). Yet, the introduction of type hints exposed a major flaw in them: they place too much importance on types. This code snippet exemplifies the problem:

def manipulate_str(string: str) -> str:
    """Manipulates a string.

    Parameters
    ----------
    string: str
        A string.
    """
    pass

This is a clear violation of the DRY principle. Google docstrings solve this problem easily: the type is simply omitted from the docstring (sphinx-autodoc-typehints then renders it in the exact same way as if the type had been written directly in the docstring). (Yet, this solution is imperfect, as the docstrings do not contain type information when accessed via __doc__.)

def manipulate_string(string: str) -> str:
    """Manipulates a string.

    Args:
        string: A string.
    """
    pass

Numpy docstrings are also more verbose. On the other hand, we lose column space when using Google docstrings, as indentation always follows a header.

reStructuredText docstrings are a clear no-go: they’re much harder to read and write! Yet, reStructuredText directives may be useful in particular situations (e.g. the deprecated directive). Before using a directive, verify whether it is supported by napoleon (more on it later), and use the corresponding napoleon header if available.

Google docstrings are, therefore, the docstring format of choice at COOP. Guidelines and examples on how to write docstrings under this standard can be found here, here and here. The next subsections will show how to treat the most common use cases.

napoleon is the extension that enables the parsing of Google and Numpy docstrings (as sphinx, by itself, deals only with reStructuredText). It works as a pre-processor, converting the docstrings to reStructuredText before sphinx attempts to parse them. The available section headers are pretty much everything you need to know about napoleon.
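
Enabling it is, once more, a matter of configuration; a minimal sketch could be:

# conf.py (sketch)
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.napoleon",  # converts Google/Numpy docstrings to reStructuredText
]

napoleon_google_docstring = True   # parse Google-style sections
napoleon_numpy_docstring = False   # we standardize on Google style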

A warning note before moving on: be careful about over-docstringing your code. Docstrings need to be maintained, so overdoing it can backfire. Public methods must have docstrings. Yet, not all private methods should have them (in fact, you could even argue that private methods should not have docstrings at all). In a lot of cases, good naming conventions are enough (let the code speak for itself). When you feel that is not enough and that docstrings will help your fellow developers (users are of no interest here), then add them. Knowing that private methods are more prone to change also supports this idea. Curiously, Google suggests a function must have a docstring unless it is: not externally visible, very short, or obvious.

Here is a collection of tools that can help achieve consistency in docstrings (to be updated as we go):

  • darglint: “A functional docstring linter which checks whether a docstring’s description matches the actual function/method implementation”.

Guidelines

The guidelines presented here are a summary of Google’s style guide (with a few additions from our side), so please read it for completeness. PEP 257 also provides great guidance here. Please refer to this example repository and the corresponding documentation for further examples and sphinx configuration.

Docstrings always use double-quotes (") rather than single-quotes ('). Triple double-quotes should be used ("""), even for one-liner docstrings (easier to expand later). If any backslash is used within them, add an r prefix: r""". Use a u prefix for unicode docstrings (u""").

Multi-line docstrings always start with a summary line (followed by an empty line). This is of great importance when using our preferred way of creating an API reference: autosummary. This extension creates a table (that serves as a table of contents) with the module/function/class name and a brief description (the first line of the docstring). Check out this example. A more elaborate description then follows (in practice, rarely used). Then comes another blank line and further information such as arguments, returns, examples, references, and so on (always separated by blank lines).
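
A sketch of that layout (the function and its arguments are hypothetical):

def fit(model, data):
    """Fits a model to the given data.

    A more elaborate description could go here, separated from the
    summary line above by a blank line (in practice, rarely needed).

    Args:
        model: An object exposing a `train` method.
        data (array-like): The training data.
    """
    pass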

Packages and modules

Docstrings of packages are placed in the corresponding __init__ files. PEP 257 suggests that package docstrings should list the modules and subpackages within them, and that module docstrings should list the classes, exceptions, and functions defined in the module. In practice, this is hard to maintain and prone to getting outdated. Again, autosummary easily achieves this, but remember it is not reflected in __doc__.
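
As a sketch, a package docstring (mypackage is a made-up name) can be as simple as:

# mypackage/__init__.py
"""Tools for preprocessing and merging array-like data.

The summary line above is what autosummary picks up for its overview table.
"""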

Functions and methods

A docstring should give enough information such that someone can use the function without reading the code. The following guidelines apply:

  • it should be descriptive, rather than imperative
  • return (or yield) is not mandatory if the docstring summary line explicitly and clearly states what the function returns
  • properties follow attribute style
  • focus on syntax and semantics, not implementation
  • for an overriding method with no substantial behavior change """See base class.""" is enough

The most relevant headers for functions/methods are: Args, Returns (or Yields for generators) and Raises.

A very simple example (and the corresponding rendering):

def merge_arrays(a, b):
    """Merges a list and a tuple.

    Args:
        a (list): A list.
        b (tuple): A tuple.

    Returns:
        list: The result of merging the list and the tuple.

    Notes:
        Do not forget to respect indentations when
        changing line.

    .. deprecated:: 0.1.0
        `merge_arrays` will be removed in `pyretem` 0.2.0, find out another
        way of merging arrays.
    """
    pass

Classes

The main consideration for classes is that they should have a docstring below their definition describing not only the instantiation arguments, but also the public attributes (methods should not be listed). Attributes are documented exactly like function arguments (nevertheless, they get rendered differently).

A very simple example (and the corresponding rendering):

class AnObject:
    """Defines an object

    Args:
        a (bool): This is param `a` description.
        b (np.array): This is param `b` description.

    Attributes:
        a (bool): An attribute.
        b (np.array): Another attribute.

    Notes:
        There's a little repetition between parameters and attributes.
        Nevertheless, the output is quite different (take a look).
    """

    def __init__(self, a, b):
        self.a = a
        self.b = b

    def operate_on_attributes(self, operator):
        """Operates on attributes accordingly to passed method.

        Args:
            operator (callable): A function that receives `a` and `b`.
        """
        pass

Types

Google guidelines don’t give any information on how to specify types within docstrings, as they appear to have fully adopted type annotations. So, let’s define our own guidelines (borrowing heavily from numpy docstrings). Most of the time, you should simply write the type as it is recognized by Python: int, float, list, str, dict, or set. This also applies if the object was created by you (an object can also be viewed as a type) or comes from a third-party library. Sometimes you can be more abstract: for example, if you accept any kind of array, use array-like. Another useful term is callable, for when a function is expected (in this case, do not forget to specify in the description which arguments the function should take). You will often find yourself writing int or None, or similar. or is normally used when you accept two or more types (e.g. float or str) - nevertheless, napoleon does not deal very well with this and assumes the full text is a single type (which is not a concern most of the time).
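
A sketch putting these conventions together (the function itself is made up):

def resample(data, factor, rng=None):
    """Resamples an array by a given factor.

    Args:
        data (array-like): The values to resample.
        factor (int or float): The resampling factor.
        rng (callable or None): A function returning random indices;
            if None, a default generator is used.
    """
    pass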

numpy.ndarray

numpy arrays are used extensively by scientific computing libraries (which includes almost all the code we develop at COOP). We suggest using np.array in the docstrings (alternatives could be numpy.ndarray or np.ndarray, but we should be consistent). Then, with napoleon_type_aliases and intersphinx we can render the type as a hyperlink to the numpy documentation, so that an inexperienced user can learn about this type without needing to search outside our documentation.
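
The relevant configuration could look roughly like this (note that the alias mapping only takes effect with napoleon_use_param enabled):

# conf.py (sketch)
extensions = [
    "sphinx.ext.napoleon",
    "sphinx.ext.intersphinx",
]

napoleon_use_param = True
napoleon_type_aliases = {"np.array": "numpy.ndarray"}  # short alias -> fully qualified name

# lets sphinx resolve numpy objects to their online documentation
intersphinx_mapping = {"numpy": ("https://numpy.org/doc/stable/", None)}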

A very simple example (and the corresponding rendering):

def sum_np_arrays(a, b):
    """Sums two `numpy` arrays.

    Args:
        a (np.array): First array.
        b (np.array): Second array.

    Returns:
        np.array: Sum of passed arrays.
    """
    pass

Notice the rendering does not work very well for the return type (probably a place for improvement in napoleon; let us know if we are missing any configuration detail).

Returning multiple values

Python does not support returning multiple values as such, so Google docstrings are not really prepared to deal with it (neither is napoleon). “What? I’m always returning multiple objects from functions”… wait, that’s only partially true: you return multiple objects WITHIN another object (a tuple). Therefore, when you return a tuple containing several objects, you should specify that you are returning a tuple containing those objects. (Numpy docstrings handle this situation better by treating returns the same way as arguments.) Nevertheless, napoleon will not understand the types of the objects you are returning and will only highlight the tuple component (yet another place for improvement in napoleon).

A very simple example (and the corresponding rendering):

def get_values(a):
    """Gets first two values of an array.

    Args:
        a (array-like): An array where only the first two elements are relevant.

    Returns:
        tuple: A tuple containing:

            - *float*: First element of `a`.
            - *float*: Second element of `a`.
    """
    return a[0], a[1]

There’s a workaround that consists of deceiving napoleon into treating returns as arguments, but it does not work properly (if you don’t believe it, try it out with text following the returns definition).

Conclusion

Thank you for coming this far! It was a long journey, but hopefully you now have a much better idea of how to write good documentation. Please reach out to us if you have any comments, disagreements, suggestions of interesting tools, ideas, or if you simply want to get in touch.

[Figure: pydata documentation example]
