
nydus-snapshotter's Introduction



containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability. It is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc.

containerd is a member of CNCF with 'graduated' status.

containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users.

(containerd architecture diagram)

Announcements

Now Recruiting

We are a large, inclusive OSS project that welcomes help of any kind, shape, or form:

  • Documentation help is needed to make the product easier to consume and extend.
  • We need OSS community outreach and organizing help to get the word out; manage and create messaging and educational content; and help with social media, community forums/groups, and Google Groups.
  • We are actively inviting new security advisors to join the team.
  • New subprojects are being created, core and non-core that could use additional development help.
  • Each of the containerd projects has a list of issues currently being worked on or that need help resolving.
    • If the issue has not already been assigned to someone or has not made recent progress, and you are interested, please inquire.
    • If you are interested in starting with a smaller/beginner-level issue, look for issues with an exp/beginner tag, for example containerd/containerd beginner issues.

Getting Started

See our documentation on containerd.io.

To get started contributing to containerd, see CONTRIBUTING.

If you are interested in trying out containerd, see our example at Getting Started.

Nightly builds

Nightly builds are available for download here. Binaries are generated from the main branch every night for Linux and Windows.

Please be aware: nightly builds may have critical bugs; they are not recommended for production use and come with no support.

Kubernetes (k8s) CI Dashboard Group

The k8s CI dashboard group for containerd contains test results regarding the health of Kubernetes when run against main and a number of containerd release branches.

Runtime Requirements

Runtime requirements for containerd are very minimal. Most interactions with the Linux and Windows container feature sets are handled via runc and/or OS-specific libraries (e.g. hcsshim for Microsoft). The current required version of runc is described in RUNC.md.

There are specific features used by containerd core code and snapshotters that will require a minimum kernel version on Linux. With the understood caveat of distro kernel versioning, a reasonable starting point for Linux is a minimum 4.x kernel version.

The overlay filesystem snapshotter, used by default, relies on features that were finalized in the 4.x kernel series. If you choose to use btrfs, there may be more flexibility in kernel version (minimum recommended is 3.18), but it will require the btrfs kernel module and btrfs tools to be installed on your Linux distribution.

To use Linux checkpoint and restore features, you will need criu installed on your system. See more details in Checkpoint and Restore.

Build requirements for developers are listed in BUILDING.

Supported Registries

Any registry which is compliant with the OCI Distribution Specification is supported by containerd.

For configuring registries, see the registry host configuration documentation.

Features

Client

containerd offers a full client package to help you integrate containerd into your platform.

import (
  "context"
  "log"

  containerd "github.com/containerd/containerd/v2/client"
  "github.com/containerd/containerd/v2/pkg/cio"
  "github.com/containerd/containerd/v2/pkg/namespaces"
)

func main() {
	// connect to containerd over its default unix socket
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}

Namespaces

Namespaces allow multiple consumers to use the same containerd without conflicting with each other. They provide the benefit of sharing content while keeping containers and images separate.

To set a namespace for requests to the API:

// create a base context; the variable shadows the context package,
// matching the snippets that follow
context := context.Background()
// create a namespaced context for docker
docker := namespaces.WithNamespace(context, "docker")

container, err := client.NewContainer(docker, "id")

To set a default namespace on the client:

client, err := containerd.New(address, containerd.WithDefaultNamespace("docker"))

Distribution

// pull an image
image, err := client.Pull(context, "docker.io/library/redis:latest")

// push an image
err = client.Push(context, "docker.io/library/redis:latest", image.Target())

Containers

In containerd, a container is a metadata object. Resources such as an OCI runtime specification, image, root filesystem, and other metadata can be attached to a container.

redis, err := client.NewContainer(context, "redis-master")
defer redis.Delete(context)

OCI Runtime Specification

containerd fully supports the OCI runtime specification for running containers. We have built-in functions to help you generate runtime specifications based on images as well as custom parameters.

When creating a container, you can specify options that control how the specification is modified.

redis, err := client.NewContainer(context, "redis-master", containerd.WithNewSpec(oci.WithImageConfig(image)))

Root Filesystems

containerd allows you to use overlay or snapshot filesystems with your containers. It comes with built-in support for overlayfs and btrfs.

// pull an image and unpack it into the configured snapshotter
image, err := client.Pull(context, "docker.io/library/redis:latest", containerd.WithPullUnpack)

// allocate a new RW root filesystem for a container based on the image
redis, err := client.NewContainer(context, "redis-master",
	containerd.WithNewSnapshot("redis-rootfs", image),
	containerd.WithNewSpec(oci.WithImageConfig(image)),
)

// use a read-only filesystem with multiple containers
for i := 0; i < 10; i++ {
	id := fmt.Sprintf("id-%d", i)
	container, err := client.NewContainer(context, id,
		containerd.WithNewSnapshotView(id, image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
}

Tasks

Taking a container object and turning it into a runnable process on a system is done by creating a new Task from the container. A task represents the runnable object within containerd.

// create a new task
task, err := redis.NewTask(context, cio.NewCreator(cio.WithStdio))
defer task.Delete(context)

// the task is now running and has a pid that can be used to set up networking
// or other runtime settings outside of containerd
pid := task.Pid()

// start the redis-server process inside the container
err = task.Start(context)

// wait for the task to exit and get the exit status
status, err := task.Wait(context)

Checkpoint and Restore

If you have criu installed on your machine you can checkpoint and restore containers and their tasks. This allows you to clone and/or live migrate containers to other machines.

// checkpoint the task then push it to a registry
checkpoint, err := task.Checkpoint(context)

err = client.Push(context, "myregistry/checkpoints/redis:master", checkpoint)

// on a new machine pull the checkpoint and restore the redis container
checkpoint, err := client.Pull(context, "myregistry/checkpoints/redis:master")

redis, err = client.NewContainer(context, "redis-master", containerd.WithNewSnapshot("redis-rootfs", checkpoint))
defer redis.Delete(context)

task, err = redis.NewTask(context, cio.NewCreator(cio.WithStdio), containerd.WithTaskCheckpoint(checkpoint))
defer task.Delete(context)

err = task.Start(context)

Snapshot Plugins

In addition to the built-in Snapshot plugins in containerd, additional external plugins can be configured using GRPC. An external plugin is made available using the configured name and appears as a plugin alongside the built-in ones.

To add an external snapshot plugin, add the plugin to containerd's config file (by default at /etc/containerd/config.toml). The string following proxy_plugins. will be used as the name of the snapshotter and the address should refer to a socket with a GRPC listener serving containerd's Snapshot GRPC API. Remember to restart containerd for any configuration changes to take effect.

[proxy_plugins]
  [proxy_plugins.customsnapshot]
    type = "snapshot"
    address = "/var/run/mysnapshotter.sock"

See PLUGINS.md for how to create plugins.

Releases and API Stability

Please see RELEASES.md for details on versioning and stability of containerd components.

Downloadable 64-bit Intel/AMD binaries of all official releases are available on our releases page.

For other architectures and distribution support, you will find that many Linux distributions package their own containerd and provide it across several architectures, such as Canonical's Ubuntu packaging.

Enabling command auto-completion

Starting with containerd 1.4, the urfave client feature for auto-creation of bash and zsh autocompletion data is enabled. To use the autocomplete feature in a bash shell for example, source the autocomplete/ctr file in your .bashrc, or manually like:

$ source ./contrib/autocomplete/ctr

Distribution of ctr autocomplete for bash and zsh

For bash, copy the contrib/autocomplete/ctr script into /etc/bash_completion.d/ and rename it to ctr. The zsh_autocomplete file is also available and can be used similarly for zsh users.

Provide documentation to users to source this file into their shell if you don't place the autocomplete file in a location where it is automatically loaded for the user's shell environment.

CRI

cri is a containerd plugin implementation of the Kubernetes container runtime interface (CRI). With it, you are able to use containerd as the container runtime for a Kubernetes cluster.

cri

CRI Status

cri is a native plugin of containerd. Since containerd 1.1, the cri plugin is built into the release binaries and enabled by default.

The cri plugin has reached GA status.

See results on the containerd k8s test dashboard

Validating Your cri Setup

A Kubernetes incubator project, cri-tools, includes programs for exercising CRI implementations. More importantly, cri-tools includes the program critest which is used for running CRI Validation Testing.

CRI Guides

Communication

For async communication and long-running discussions please use issues and pull requests on the GitHub repo. This will be the best place to discuss design and implementation.

For sync communication catch us in the #containerd and #containerd-dev Slack channels on Cloud Native Computing Foundation's (CNCF) Slack - cloud-native.slack.com. Everyone is welcome to join and chat. Get Invite to CNCF Slack.

Security audit

Security audits for the containerd project are hosted on our website. Please see the security page at containerd.io for more information.

Reporting security issues

Please follow the instructions at containerd/project.

Licenses

The containerd codebase is released under the Apache 2.0 license. The README.md file and files in the "docs" folder are licensed under the Creative Commons Attribution 4.0 International License. You may obtain a copy of the license, titled CC-BY-4.0, at http://creativecommons.org/licenses/by/4.0/.

Project details

containerd is the primary open source project within the broader containerd GitHub organization. However, all projects within the repo have common maintainership, governance, and contributing guidelines which are stored in a project repository commonly for all containerd projects.

Please find all of these core project documents in our containerd/project repository.

Adoption

Interested to see who is using containerd? Are you using containerd in a project? Please add yourself via pull request to our ADOPTERS.md file.

nydus-snapshotter's People

Contributors

adamqqqplay, akihirosuda, austinvazquez, bbolroc, bergwolf, billie60, changweige, chengyuzhu6, darfux, dependabot[bot], desiki-high, eryugey, fengshunli, fidencio, hangvane, hsiangkao, imeoer, jiangliu, liubin, liubogithub, loheagn, luodw, mofishzz, pkizzle, power-more, raoxiang1996, sctb512, taoohong, wllenyj, zyfjeff


nydus-snapshotter's Issues

Make nydus-snapshotter config-path optional

nydus-snapshotter's config-path option is currently required, although in practice only the registry/OSS auth has to be passed to nydusd.
nydus-snapshotter can now pick up auth from the local host's docker configuration, which means it can compose a complete JSON configuration file for nydusd by itself. End users could then skip the configuration step entirely, which is convenient.

		&cli.StringFlag{
			Name:        "config-path",
			Required:    true,
			Usage:       "path to the configuration file",
			Destination: &args.ConfigPath,
		},
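
A minimal sketch of the proposed change, assuming the surrounding urfave/cli flag definitions stay as they are: drop the Required constraint and let the snapshotter synthesize a nydusd configuration when no path is given.

		&cli.StringFlag{
			Name:        "config-path",
			Required:    false, // auth can be picked up from the host's docker config
			Usage:       "path to the nydusd configuration template (optional)",
			Destination: &args.ConfigPath,
		},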

Log nydusd's stderr and stdout to nydus-snapshotter log

At present, nydusd's output is not written to the log file, so when nydusd panics, the panic message is lost. Therefore, nydusd's stdout and stderr should be captured in the log file as well.

	args = append(args, "--apisock", d.GetAPISock())
	args = append(args, "--log-level", d.LogLevel)
	if !d.LogToStdout {
		args = append(args, "--log-file", d.LogFile())
	}

	log.L.Infof("start nydus daemon: %s %s", m.nydusdBinaryPath, strings.Join(args, " "))

	cmd := exec.Command(m.nydusdBinaryPath, args...)
	if d.LogToStdout {
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stdout
	}
	return cmd, nil
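
A hedged sketch of one way to capture the output, reusing d.LogFile() and the cmd built above; file rotation and ownership handling are left out.

	if !d.LogToStdout {
		// nydusd already writes its own log via --log-file, but capturing the
		// process's stdout/stderr as well preserves panic backtraces.
		f, err := os.OpenFile(d.LogFile(), os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			return nil, err
		}
		cmd.Stdout = f
		cmd.Stderr = f
	}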

Support downloading blob layers to the cache dir

Nydusd supports starting in localfs mode, i.e., the blob layers are placed on the local filesystem in advance. However, we currently provide no helper to put the blob layers into the corresponding directory, so users have to manually download them from the registry and extract them to the right place. This is complicated to operate and maintain. nydus-snapshotter could support this scenario itself, downloading the blob layers from the registry and placing them into the blob cache directory configured for nydusd.

use `findmnt` to judge `IsLikelyNotMountPoint`

When nydusd dies unexpectedly, IsLikelyNotMountPoint does not behave as expected: stat on the FUSE mountpoint fails, and the mountpoint cannot be cleaned up when the image is removed. Having IsLikelyNotMountPoint use the findmnt command may be more suitable.

A possible implementation:

func (m *Mounter) IsLikelyNotMountPoint(file string) (bool, error) {
	file, err := filepath.Abs(file)
	if err != nil {
		return true, err
	}
	cmdPath, err := exec.LookPath("findmnt")
	if err != nil {
		// no findmnt found; fall back to judging the mountpoint by device numbers
		log.L.Printf("no findmnt command found, use device to judge")
		return m.isLikelyNotMountPoint(file)
	}
	args := []string{"--types", "fuse", "-o", "target", "--noheadings", "--target", file}
	log.L.Printf("findmnt command: %v %v", cmdPath, args)

	out, err := exec.Command(cmdPath, args...).CombinedOutput()
	if err != nil {
		// if findmnt failed, just claim it's not a mount point
		return true, err
	}
	strOut := strings.TrimSuffix(string(out), "\n")
	log.L.Printf("IsLikelyNotMountPoint findmnt output: %v", strOut)
	if strOut == file {
		return false, nil
	}

	return true, nil
}

func (m *Mounter) isLikelyNotMountPoint(file string) (bool, error) {
	stat, err := os.Stat(file)
	if err != nil {
		return true, err
	}
	rootStat, err := os.Stat(filepath.Dir(strings.TrimSuffix(file, "/")))
	if err != nil {
		return true, err
	}
	// If the directory has a different device as parent, then it is a mountpoint.
	if stat.Sys().(*syscall.Stat_t).Dev != rootStat.Sys().(*syscall.Stat_t).Dev {
		return false, nil
	}

	return true, nil
}

Monitor nydusd

The snapshotter should have a mechanism to monitor its nydusd daemons. If a nydusd dies for any reason, nydus-snapshotter should be notified.
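
A minimal sketch of such a mechanism, assuming the snapshotter keeps the *exec.Cmd for every nydusd it forks; the notification channel is illustrative:

import "os/exec"

// watchDaemon notifies the snapshotter when a forked nydusd exits.
// cmd.Wait blocks until the process terminates for any reason, including
// a crash, so the receiver can restart or clean up the daemon.
func watchDaemon(cmd *exec.Cmd, died chan<- error) {
	go func() {
		died <- cmd.Wait()
	}()
}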

report nydus image usage

Let nydus-snapshotter report more accurate disk usage.

When performing ctr snapshot --snapshotter nydus usage, the total usage reported for all layers is not accurate.
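
One possible approach, sketched with github.com/containerd/continuity/fs; the assumption that the snapshotter can enumerate every directory backing a snapshot (including the blob cache) is mine:

import (
	"context"

	"github.com/containerd/continuity/fs"
)

// snapshotUsage sums the on-disk usage of every directory that actually
// backs a nydus snapshot, not just the snapshot directory itself.
func snapshotUsage(ctx context.Context, dirs ...string) (int64, error) {
	var total int64
	for _, dir := range dirs {
		du, err := fs.DiskUsage(ctx, dir)
		if err != nil {
			return 0, err
		}
		total += du.Size
	}
	return total, nil
}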

sharedMode snapshotter doesn't shut down nydusd when handling OS signals INT and TERM

As an e2e test case comment says:

          # After the snapshotter container is stopped, it seems that Nydusd doesn't umount it
          # so we need to umount it here, otherwise you cannot delete this directory. 
          # Frankly, I don't know why Nydusd didn't clean up these resources.
          sudo umount -f /var/lib/containerd-test/io.containerd.snapshotter.v1.nydus/mnt

The reason is that nydus-snapshotter always forks a nydusd when it starts, but exits leaving the nydusd unsignaled.

In fact, we can't simply terminate a sharedMode nydusd when handling SIGINT and SIGTERM, since it may still be serving container images while nydus-snapshotter restarts. But we can terminate it once the snapshotter knows that no container image is being served.
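
A minimal sketch of the signal handling, assuming a shutdown callback that first checks whether any image is still being served:

import (
	"os"
	"os/signal"
	"syscall"
)

// handleSignals tears down nydusd daemons on SIGINT/SIGTERM.
// shutdown is assumed to skip sharedMode daemons that still serve images.
func handleSignals(shutdown func()) {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-sig
		shutdown()
		os.Exit(0)
	}()
}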

Support shared domain for erofs + fscache daemon

Previously, the commit "erofs: basic support for erofs + fscache daemon" added support for erofs with an fscache daemon.
The config field DomainID is unused because the shared domain feature is not yet implemented in the Linux kernel.
Once the kernel implements this feature, nydus-snapshotter should adopt the DomainID field.

Introduce nydus-snapshotter toml configuration file

At present, nydus-snapshotter is configured through command line parameters, some of which are passed on to the nydusd daemon.
At the same time, users have to provide a nydusd JSON configuration template: a minimal version of the nydusd JSON configuration that nydus-snapshotter enriches with necessary extra information like registry auth. This is not very friendly to end users, especially since some items in the JSON file may be overwritten by nydus-snapshotter.
On the other hand, nydusd's configuration file is evolving to its next version, which means nydus-snapshotter's configuration loading and parsing logic has to adapt to it. A dedicated configuration file would also mean we don't have to change the systemd service unit file when changing nydus-snapshotter's work mode and parameters.

I am proposing a TOML format nydus-snapshotter configuration file:

cleanup_on_close = false
enable_stargz = false
root = "/var/lib/containerd-nydus"
version = 1

[binaries]
nydusd_path = "/usr/local/bin/nydusd"
nydusimage_path = "/usr/local/bin/nydus-image"

[log]
# Snapshotter's log level
level = "info"
log_rotate_compress = true
log_rotate_local_time = true
log_rotate_max_age = 
log_rotate_max_backups = 
log_rotate_max_size = 
log_to_stdout = false

[system]
collect_metrics = false
# Management API server unix domain socket path
socket = 

[remote.auth]
enable_kubeconfig_keychain = false
kubeconfig_path = "/home/foo/.kube"

[snapshot]
enable_nydus_overlayfs = false
sync_remove = false

[daemon]
# fuse or fscache
fs_driver = "fuse"
# Specify nydusd log level
log_level = "info"
# How to process when daemon dies: "none", "restart", "failover"
recover_policy = "restart"
# Specify a configuration template file
template_path = ""

# configuration of remote backend storage. fuse and fscache 
# can share the same backend configuration.
[daemon.storage]
connect_timeout = 5
#  NOTE: mirrors and proxy can't be set at the same time
mirrors = [{host = , headers = , auth_through = }]
# proxy =
disable_indexed_map = false
# container images data can be cached locally
enable_cache = true
prefetch_config = {enable = true, threads_count = 8, merging_size = 1048576}
retry_limit = 2
scheme = "https"
timeout = 5
type = "registry"

[daemon.fuse]
# loading rafs metadata mode
digest_validate = false
enable_xattr = true
iostats_files = false
mode = "direct"

# Nydusd works as a fscache/cachefiles userspace daemon
[daemon.fscache]
config = {cache_type = "fscache"}
type = "bootstrap"

[cache_manager]
enable = true
gc_period = "24h"

[image]
public_key_file = "/path/to/key/file"
validate_signature = true

Config file given to nydusd looks strange

The config file generated by the snapshotter and given to nydusd looks strange: it is a mixture of fusedev/rafs and fscache configuration.
Originally, the config file only configured rafs, not nydusd.

{
  "device": {
    "backend": {
      "type": "registry",
      "config": {
        "readahead": false,
        "host": "xxx.com",
        "repo": "foor/bar",
        "auth": "<AUTH>",
        "scheme": "https",
        "proxy": {
          "fallback": false
        },
        "timeout": 5,
        "connect_timeout": 5,
        "retry_limit": 2
      }
    },
    "cache": {
      "type": "blobcache",
      "config": {
        "work_dir": "/var/lib/containerd-nydus-grpc/cache",
        "disable_indexed_map": false
      }
    }
  },
  "mode": "direct",
  "digest_validate": false,
  "enable_xattr": true,
  "fs_prefetch": {
    "enable": true,
    "prefetch_all": true,
    "threads_count": 4,
    "merging_size": 0,
    "bandwidth_rate": 0
  },
  "type": "",
  "id": "",
  "domain_id": "",
  "config": {
    "id": "",
    "backend_type": "",
    "backend_config": {
      "readahead": false,
      "proxy": {
        "fallback": false
      }
    },
    "cache_type": "",
    "cache_config": {
      "work_dir": ""
    },
    "metadata_path": ""
  }
}

containerd.toml version 2 support

Since containerd config default produces a version 2 containerd.toml for the latest containerd, README.md should document how to set up the nydus working environment with a version 2 containerd config.
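
A hedged example of the relevant version 2 snippet; the socket path below is illustrative, so adjust it to wherever your nydus-snapshotter actually listens:

version = 2

[proxy_plugins]
  [proxy_plugins.nydus]
    type = "snapshot"
    address = "/run/containerd-nydus/containerd-nydus-grpc.sock"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "nydus"
  disable_snapshot_annotations = false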

Logs of nydusd in fscache mode are missing

Unlike in fusedev mode, nydusd running in fscache mode has no log file to store its messages. We'd better keep this consistent with fusedev mode;
otherwise it is hard for us to maintain and investigate.

Snapshotter occasionally reports an error message

sudo nerdctl --snapshotter nydus   run -it --net none gechangwei/python:3.7-nydus bash
FATA[0001] wait until daemon ready by checking status: failed to check status: failed to create new nydus client: failed to build transport for nydus client: stat /var/lib/containerd-nydus-grpc/socket/1_jWaFHnQcezLGcQdXDfwg/api.sock: no such file or directory: unknown

In fact, the socket file exists.

The blobs annotation in manifest should be deprecated

Currently, nydus image adds an annotation to the manifest to track all blobs referenced by the bootstrap: nydus

This causes containerd's label key/value size limit to be exceeded when acceld/buildkit writes a nydus manifest that includes a large number of blobs into the content store: containerd

I noticed that the blobs annotation is only used for blob cache GC in nydus-snapshotter: nydus-snapshotter

A feasible workaround is to use the nydusd socket API to get the blobs in use, instead of the list in the manifest annotation, so we can remove the annotation from acceld/buildkit.

Use temp file/rename to ensure atomic file ops

If nydus-snapshotter is forcibly terminated, intermediate files may be left on disk, causing an inconsistent system state. We should therefore use a temp-file-plus-rename pattern to ensure atomic file operations.
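
A minimal sketch of the pattern in Go; rename(2) is atomic within a filesystem, so readers never observe a half-written file:

import (
	"os"
	"path/filepath"
)

// atomicWriteFile writes data to a temp file in the target directory,
// flushes it, then renames it over the destination in one atomic step.
func atomicWriteFile(path string, data []byte, perm os.FileMode) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), ".tmp-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename has succeeded
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // make sure data hits the disk first
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	if err := os.Chmod(tmp.Name(), perm); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}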

nydus sdk lacks error context

Most errors returned in pkg/nydussdk/client.go carry no context, so it is hard to determine where an error originated.
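
A sketch of the convention with hypothetical names (NydusClient and statusURL are illustrative, not the sdk's actual identifiers): wrap with %w so callers keep errors.Is/errors.As while still seeing where the failure happened.

import (
	"fmt"
	"net/http"
)

type NydusClient struct {
	statusURL string // hypothetical field, for illustration only
}

// CheckStatus annotates the underlying error with the operation and
// endpoint before returning it up the stack.
func (c *NydusClient) CheckStatus() (*http.Response, error) {
	resp, err := http.Get(c.statusURL)
	if err != nil {
		return nil, fmt.Errorf("check nydusd status via %s: %w", c.statusURL, err)
	}
	return resp, nil
}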

support different nydus configurations

Nydus supports several different types of storage backend, "localfs", "oss" and "registry".

Since users may use different backends for different images to meet different scenarios, it would be better for nydus-snapshotter to support multiple nydus configurations at once.

That said, how to choose a specific nydus config is not straightforward, as it depends on a container image's metadata.
Maybe we can leave an annotation in the bootstrap.

Try to enhance Mount struct of containerd

When the runtime starts nydusd, it has to provide nydusd with necessary information such as configuration or auth.
The current Mount struct does not accommodate such information, so nydus has to provide a mount helper binary on the host.
Another way to address this is to pass the necessary information to the runtime via the Mount struct, as sketched below.
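
A hedged sketch of what that could look like with containerd's Mount struct; the option keys are illustrative, not an agreed-upon convention:

import "github.com/containerd/containerd/mount"

// nydusMount smuggles extra information through Options as key=value
// strings, which the runtime (or a mount helper) would have to parse.
func nydusMount() mount.Mount {
	return mount.Mount{
		Type:   "fuse.nydus-overlayfs",
		Source: "overlay",
		Options: []string{
			"lowerdir=/path/to/lower",
			"upperdir=/path/to/upper",
			"workdir=/path/to/work",
			"extraoption=<base64-encoded nydus config/auth>", // hypothetical carrier
		},
	}
}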

Add end-to-end CI

The current CI is missing e2e tests. We should run end-to-end tests to make sure changes work.

Provide CRI like configuration for registry access

Currently the downloading of layers (including the bootstrap layer) is done entirely by the snapshotter, not containerd, so we need to consider the following:

  1. registry mirror support;
  2. choose http/https registry scheme;
  3. client https tls cert support;
  4. retryable http request;
  5. registry auth for multiple registries;
  6. retryable multiple mirrors;

Maybe we can support CRI-like configuration along these lines:

(screenshot of a proposed CRI-like registry configuration)
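
A hedged sketch of such a configuration, loosely modeled on containerd's CRI registry settings; every key below is illustrative:

[remote]
  # retryable http requests (4)
  retry_limit = 3

[remote.mirrors."docker.io"]
  # registry mirrors with retryable fallback (1, 6)
  endpoints = ["https://mirror-a.example.com", "https://mirror-b.example.com"]

[remote.configs."registry.example.com"]
  # http/https registry scheme (2)
  scheme = "https"

  [remote.configs."registry.example.com".tls]
    # client https tls certs (3)
    cert_file = "/etc/nydus/certs/client.cert"
    key_file  = "/etc/nydus/certs/client.key"

  [remote.configs."registry.example.com".auth]
    # per-registry auth (5)
    username = "foo"
    password = "bar"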

Implement hybrid mode of nydus-snapshotter

Once nydus-snapshotter has been started in multiple-daemon or single-daemon mode, it cannot switch away from the daemon mode set at the last startup.
This is not very user-friendly.

Replace standard http client with go-retryablehttp

At present, our snapshotter downloads images through the standard Go http client, which does not retry when a request fails. Ideally we should retry internally, so we could replace the standard http client with go-retryablehttp, as sketched below.
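
A minimal sketch using github.com/hashicorp/go-retryablehttp, which can hand back a standard *http.Client so existing call sites keep working:

import (
	"net/http"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

// newRetryingClient returns an *http.Client that transparently retries
// failed requests with exponential backoff.
func newRetryingClient() *http.Client {
	rc := retryablehttp.NewClient()
	rc.RetryMax = 3 // retry each request up to 3 times before giving up
	return rc.StandardClient()
}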
