blackstork-io / fabric

An open-source command-line tool for cybersecurity reporting automation and a configuration language for reusable templates. Reporting-as-Code

Home Page: https://blackstork.io/fabric/

License: Apache License 2.0

Go 99.91% Just 0.08% Dockerfile 0.01%
cti cybersecurity reporting secops compliance compliance-reporting pentesting security-reporting

fabric's People

Contributors

andrew-morozko, charliemacnamara, dependabot[bot], dobarx, traut


fabric's Issues

`data.snyk_issues` data source in `snyk` plugin

Description

Snyk is a popular solution for vulnerability detection in code and applications. Issues created by Snyk should be tracked as part of the cybersecurity practice.

Use Case

The Snyk API provides two endpoints for fetching issues -- per group and per organization:

  • /groups/{group_id}/issues endpoint (docs)
  • /orgs/{org_id}/issues endpoint (docs)

Requirements

  • configuration:
    • api_key - a required string attribute
  • parameters:
    • project_id - (optional) a string attribute, must be a UUID
    • group_id - (optional) a string attribute, must be a UUID
    • scan_item_id -- (optional) a string attribute
    • scan_item.type -- (optional) a string attribute. Supported values are project and environment
    • type -- (optional) a string attribute. Supported values are package_vulnerability, license, cloud, code, custom, config.
    • updated_before -- (optional) a string attribute
    • updated_after -- (optional) a string attribute
    • created_before -- (optional) a string attribute
    • created_after -- (optional) a string attribute
    • effective_severity_level -- (optional) a string attribute. Supported values are: info, low, medium, high and critical
    • status -- (optional) an array of strings. Supported array values are open and resolved
    • ignored -- (optional) a boolean attribute
    • limit -- (optional) an int attribute
  • constraints:
    • either project_id or group_id attribute must be set

Immutable query parameters:

  • version must be set to at least 2024-01-23

The plugin takes care of pagination: the limit value caps the overall number of results (not per page!).
The plugin returns the list of issues, concatenated over multiple pages.
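A hypothetical usage sketch built from the attributes above (the block name follows this spec; the group ID and values are illustrative placeholders, and credentials are loaded with the from_env_var() function proposed elsewhere in this backlog):

// hypothetical plugin configuration
config data snyk_issues {
  api_key = from_env_var("SNYK_API_KEY")
}

// fetch open critical code issues for a group
data snyk_issues "critical_code_issues" {
  group_id                 = "00000000-0000-0000-0000-000000000000"
  type                     = "code"
  effective_severity_level = "critical"
  status                   = ["open"]
  limit                    = 100
}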

Additional Information

`data.splunk_search` plugin

Background

Splunk is a leader in the SIEM market and is widely used in the industry.

  • Splunk Cloud Platform is a cloud solution by Splunk.
  • Splunk Enterprise is an on-prem solution by Splunk.

The APIs for both solutions are similar, but Splunk Cloud Platform requires additional access controls. This issue describes the plugin that works with Splunk Enterprise API.

Features

The plugin should be able to send a search request to the API and return the data.

Specification

  • configuration:
    • auth_token -- a string with a Splunk authentication token
    • host -- (optional) a string with a hostname for a Splunk instance
    • deployment_name -- (optional) a string with a deployment name for Splunk Cloud instance.
  • interface:
    • search_query -- (required) string attribute that contains a Splunk query
    • max_count -- (optional) an int, limits the number of events returned by search
    • status_buckets -- (optional) an int value, the maximum number of status buckets to generate
    • rf -- (optional) a list of strings, defines additional fields to be returned
    • earliest_time -- (optional) a string value
    • latest_time -- (optional) a string value

Other non-configurable parameters for API calls:

  • for search/jobs endpoint
    • id should be set to fabric_<randomized-string>
    • exec_mode should be set to blocking
  • for search/jobs/<sid>/results endpoint - output_mode should be set to json

For the exact format of the parameters, see POST request parameters for search/jobs endpoint here.

If host is provided, the base URL is https://<host>:8089/
If deployment_name is provided, the base URL is https://<deployment-name>.splunkcloud.com:8089/

This plugin should be a part of splunk plugin package.

Usage example

  • auth_token is used as a basic auth token (docs)
  • search_query value is submitted via HTTP POST to /services/search/jobs endpoint with all set parameters.
  • the results are fetched from /services/search/v2/jobs/<sid>/results and returned
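A hypothetical configuration sketch based on the attributes above (the hostname, token and search query are placeholders):

// hypothetical plugin configuration
config data splunk_search {
  auth_token = from_env_var("SPLUNK_AUTH_TOKEN")
  host       = "splunk.example.org"
}

// run a search over the last 24 hours
data splunk_search "failed_logins" {
  search_query  = "search index=main sourcetype=auth action=failure | stats count by user"
  max_count     = 100
  earliest_time = "-24h"
  latest_time   = "now"
}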

Additional context

`data.opencti` plugin

Depends on

Design

The plugin is a GraphQL-based client for the OpenCTI API. It provides syntactic sugar for common GraphQL queries, based on OpenCTI's spec.

Specification

  • Configuration options:
    • graphql_url -- a required string attribute that contains a URL for a GraphQL endpoint
    • auth_token -- an optional string attribute that contains a bearer token to be used in Authorization header value: Authorization: Bearer <token>
  • API interface:
    • graphql_query -- an optional string attribute that contains a raw GraphQL query string. The query is parsed and validated using OpenCTI's GraphQL schema
    • ... TBD more STIX-object-specific query fields

Deliverables

  • new data.opencti plugin
  • the unit tests for the plugin

Support plugin configurations

Background

Some plugins require additional configuration -- credentials, API keys, etc. -- similar to Terraform's provider configuration.

Design

In the input *.fabric files, there can be configuration blocks that contain additional configuration properties for the plugins.

If the plugin exposes a config schema with required attributes, the config must be present. If the schema has only optional fields, the config block is not required.

Config blocks examples:

// named data configuration
config data elasticsearch "clusterA" {
    username = "john"
    password = "smith"
}

// named data configuration
config data elasticsearch "clusterB" {
    cloud_id = "x"
    api_key = "y"
}

// "default" configuration for `llm_text` content plugin
config content llm_text {
    parameter_a = "foo"
}

config content llm_text "llama2" {
    parameter_a = "bar"
}

// "default" configuration for `table` content plugin
config content table {
    parameter_b = "foo"
}
  • each config block must have 3 labels:

    • a block type (data or content)
    • a data plugin name (elasticsearch, for example)
    • a plugin config instance name (clusterA or none)
  • each content_config block must have 2 labels:

    • a content plugin name (table for example)
    • a plugin config instance name (llama2 or none)
  • both content or data blocks can have config attribute set or config block specified. The attribute must point to an existing config block (content_config or data_config correspondingly). For example:

    config data elasticsearch "clusterB" {
        username = "jimmy"
        password = "page"
    }
    
    config data llm_text "openai" {
        api_key = "test-key"
    }
    
    document "test-doc" {
    
        data elasticsearch "my-results" {
            config = config.data.elasticsearch.clusterB
            query = "event.type:alert"
        }
    
        content llm_text {
            config = config.content.llm_text.llama2
            prompt = "hey, LLM!"
        }
    
        data elasticsearch "other-results" {
            config {
                username = "john"
                password = "smith"
            }
        }
    }
  • if the config block is specified, it is applied only to the invocation of its parent block

  • the config is parsed according to the "configuration parameters" schema the plugin provides

  • a request to a plugin must support config properties in addition to execution parameters

    • if config attribute is set in a block, it is resolved and the value is passed as a config object in a plugin request
    • if config attribute in a block is not set, a default config - a config object for data/content type without a name - is used
    • if config attribute in a block is not set and a default config object is not specified, nil is passed in a request
      For example, when this template is executed:
    config data elasticsearch {
        username = "john"
        password = "smith"
    }
    
    document "test-doc" {
        data elasticsearch "my-values" {
            index = "my-index"
            query = "entity.type:alert"
            size = 10
        }
    }

    a call to elasticsearch plugin has 2 inputs: config object (with username and password fields set) and execution parameters (with index, query and size set). The same applies to content blocks.

`data.sqlite` plugin

Design

Specification

  • Configuration options:
    • database_uri -- a required string attribute. The URI with the path to the database file to be opened in a read-only mode. The value must be a valid DSN connection string.
  • API interface:
    • sql_query -- a required string attribute. The SQL query to execute against the database. Only SELECT queries are allowed.
    • sql_args -- an optional list of strings attribute. Contains the arguments to be passed alongside the query, as values for the placeholders in the sql_query.

Behavior

The plugin creates a connection to the SQLite3 database, executes a query and returns the rows as a list of JSON maps, with keys corresponding to the columns.

For example:

[
  {
    "columnA": 1,
    "columnB": 2,
    "columnC": 3
  },
  {
    "columnA": 4,
    "columnB": 5,
    "columnC": 6
  }
]
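A hypothetical data block matching the interface above, using an inline config block for the connection string (the path and query are placeholders):

data sqlite "open_findings" {
  // configuration: read-only DSN for the database file
  config {
    database_uri = "file:./findings.db?mode=ro"
  }

  // parameters: a SELECT query with a placeholder argument
  sql_query = "SELECT id, title, severity FROM findings WHERE status = ?"
  sql_args  = ["open"]
}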

Deliverables

  • new data.sqlite plugin
  • the unit tests for the plugin

Refs

HCL expressions

Description

Investigate and document the extent to which HCL expressions are supported in Fabric configuration files.

For example, some of the features might be:

  • HCL-native templating
  • loops
  • variables
  • ...

Built-in `data.txt` plugin

Background

There should be a way to import existing static text, stored in a separate text file, into a template. This use case covers the use of existing signatures, disclaimers, footers, etc, in a template.

Design

data.txt plugin should be a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • API interface:
    • required path string attribute that accepts a path to a file on a local filesystem.

Behavior

Using the provided path value, the plugin reads the file and returns its body as its result. The file should be treated as a text file.
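A minimal usage sketch, assuming the block is invoked like the other built-in data plugins (the path is illustrative):

// import a static disclaimer stored next to the templates
data txt "disclaimer" {
  path = "./snippets/disclaimer.md"
}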

Deliverables

  • new built-in data.txt plugin
  • the unit tests for the plugin

`data.hackerone_reports` plugin

Description

HackerOne provides an API for companies to pull data about submitted vulnerability reports.

Use Case

The metrics from HackerOne are often used in the SecOps reports for assessing the enterprise's bug bounty program.

Requirements

  • plugin configuration:
    • api_username -- (required) a string attribute
    • api_token -- (required) a string attribute
  • plugin parameters:
    • supporting all filter parameters for Get All Reports request:
      • program -- (required) a string attribute
      • reporter -- (optional) a string attribute
      • assignee -- (optional) a string attribute
      • state -- (optional) a string attribute
      • ...
    • size -- (optional) an int parameter

The size parameter is used for calculating the number of pages to be fetched, if page_number (page[number] in the API) is not provided.

The plugin returns the list of reports, concatenated from multiple pages if needed.

The plugin should be a part of data.hackerone package.
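A hypothetical usage sketch based on the parameters above (the program name is a placeholder; credentials are loaded from environment variables):

// hypothetical plugin configuration
config data hackerone_reports {
  api_username = from_env_var("HACKERONE_API_USERNAME")
  api_token    = from_env_var("HACKERONE_API_TOKEN")
}

// fetch resolved reports for a program
data hackerone_reports "resolved_reports" {
  program = "example-program"
  state   = "resolved"
  size    = 50
}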

Additional Information

Streamline Fabric CLI interface

Fabric CLI:

  • 2 subcommands:
    • render
      • the subcommand renders the specified document into Markdown and outputs it either to stdout or to a file.
      • target -- a required positional argument. Specifies the name of the document to be rendered as document.<name>.
      • --out-file -- an optional argument. Specifies the name of the output file where the rendered document must be saved to.
        • if --out-file is not set, the Markdown is printed to stdout
    • data
      • the subcommand executes the data block and prints out prettified (and highlighted) JSON to stdout

      • target -- a required positional argument. Specifies a path to a data block to be executed -- a data block must be inside a document, so the path should have syntax of document.<doc-name>.data.<plugin-name>.<data-name>.

  • subcommand-independent arguments
    • --source-dir -- an optional argument that accepts a path to a directory with *.fabric files. Default value: . (current directory)
    • --log-output -- an optional argument that accepts plain or json. Configures a handler for the logging. Default value: plain (colored plain text output). The logs (both plain and json formats) should be written to stderr
    • --logging-level -- an optional argument that specifies the logging level. Default value: info
    • -v -- a shortcut to --logging-level debug

Replace nested `content` blocks with `section` block

Background

Nesting content blocks inside another content block mixes up concerns: it combines a content plugin's job of generating content with block grouping.

Design

Introducing a dedicated section block -- the only block type that allows nesting -- as the grouping construct would:

  • separate block grouping from plugin invocation
  • allow us to drop content generic construct

So, instead of

content generic _ {
    content text {
        test = "some text"
    }

    content table {
        query = ".data.plugin_b.data_plugin_b.result | length"
        text = "The length of the list is {{ .query_result }}"
        columns = ["ColumnA", "ColumnB", "ColumnC"]
    }
}

we can write

section {
    content text {
        test = "some text"
    }

    content table {
        query = ".data.plugin_b.data_plugin_b.result | length"
        text = "The length of the list is {{ .query_result }}"
        columns = ["ColumnA", "ColumnB", "ColumnC"]
    }
}

section spec

  • the blocks can have a name label or be anonymous:
    section {
        content text {
            text = "some a"
        }
    }
    
    section "section1" {
        content text {
            text = "some a"
        }
    }
  • section blocks can be nested:
    section {
        content text {
            text = "some a"
        }
    
        section "section1" {
            content text {
                text = "some b"
            }
        }
    }
  • named section block defined on a root level can be referenced:
    section {
        content text {
            text = "some a"
        }
    
        section ref {
            base = section.section1
        }
    } 
  • section block accepts an optional title argument:
    section {
        title = "Some title"
    
        content text {
            text = "test text"
        }
    }
    This is syntactic sugar for
    section {
        content text {
            text = "Some title"
            format_as = "title"
        }
    
        content text {
            text = "test text"
        }
    }
    (see #15 for formatting options for content.text)

Changes

content blocks cannot contain other content blocks

Built-in `content.text` plugin

Background

In addition to the installable plugins (#3), fabric must have built-in plugins to allow generation of simple content without the need to download extra components.

Related: #13, #18

Design

content.text plugin is a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • API interface:
    • local context object
    • required text string attribute
    • optional format_as enum string attribute with allowed values title, code and blockquote (nil by default)
    • optional absolute_title_size int attribute (nil by default)
    • optional code_language string attribute

Behavior

The plugin renders and returns a text value as a template string using local context as input data and formats it in Markdown according to format_as parameter (if provided).

If format_as is set to code, the text is wrapped in triple backticks. If code_language is provided, the language should be set after the first triple backtick.

If format_as is title and absolute_title_size is set, the Markdown title prefix is # repeated absolute_title_size times.
If format_as is title and absolute_title_size is not set, the relative title size is calculated from the local context object that contains the document body (issue TBD) -- the number of # characters in the Markdown title prefix is derived from the titles seen previously in the document.
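For illustration, two hypothetical blocks using the attributes above: one renders a title with an absolute size, the other renders a code snippet:

// a level-2 title
content text {
  text                = "Detection summary"
  format_as           = "title"
  absolute_title_size = 2
}

// a fenced code block with a language hint
content text {
  text          = "SELECT * FROM alerts WHERE severity = 'critical'"
  format_as     = "code"
  code_language = "sql"
}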

Deliverables

  • new built-in content.text plugin
  • unit test for the plugin

`data.postgresql` plugin

Design

Specification

  • Configuration options:
  • API interface:
    • sql_query -- a required string attribute. The SQL query to execute against the database. Only SELECT queries are allowed.
    • sql_args -- an optional list of strings attribute. Contains the arguments to be passed alongside the query, as values for the placeholders in the sql_query.

Behavior

The plugin creates a connection to the PostgreSQL database, executes a query and returns the rows as a list of JSON maps, with keys corresponding to the columns.

For example:

[
  {
    "columnA": 1,
    "columnB": 2,
    "columnC": 3
  },
  {
    "columnA": 4,
    "columnB": 5,
    "columnC": 6
  }
]

Deliverables

  • new data.postgresql plugin
  • the unit tests for the plugin

Refs:

`data.elasticsearch` plugin

Background

The plugin for executing queries against an Elasticsearch instance.

Design

Specification

  • the configuration options:
    • optional base_url string attribute
    • optional cloud_id string attribute
    • optional api_key_str string attribute
    • optional api_key string list of 2 items
    • optional basic_auth_username string attribute (elastic by default)
    • optional basic_auth_password string attribute
    • optional bearer_auth string attribute
    • optional ca_certs string attribute
  • API interface:
    • required index string attribute
    • optional id string attribute
    • optional query_string string attribute
    • optional query map attribute
    • optional fields list of strings attribute
    • optional return_only_hits boolean attribute, true by default. If false, the raw search response is returned; if true, only the documents are returned (response['hits']['hits'])

Important

Depends on #4

Ref:

Behavior

Using the configuration options, the plugin creates an Elasticsearch client instance.

If id attribute is set, the document is fetched from the Elasticsearch index specified in index attribute by id.
If query attribute is set, it is used as an Elasticsearch DSL query against the index specified in the index attribute.
If query_string is set, the DSL query {"query_string": {"query": "<query_string>"}} is executed against the index specified in index attribute.
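A hypothetical sketch combining the configuration and interface above (the cloud ID, API key and query string are placeholders):

// hypothetical plugin configuration
config data elasticsearch {
  cloud_id    = from_env_var("ELASTIC_CLOUD_ID")
  api_key_str = from_env_var("ELASTIC_API_KEY")
}

// fetch critical alerts from the alerts index
data elasticsearch "critical_alerts" {
  index        = ".alerts-security.alerts-*"
  query_string = "kibana.alert.severity:critical"
  fields       = ["kibana.alert.rule.name", "@timestamp"]
}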

Deliverables

  • new data.elasticsearch plugin
  • unit test for the plugin

`data.github_issues` plugin

Design

The plugin implements a client for GitHub Issues API.

Specification

  • Configuration options:
    • github_token -- a required string attribute that contains a valid GitHub auth token. It will be used in the HTTP header during API calls: Authorization: Bearer $TOKEN (docs)
  • API interface:
    • all parameters for the Issues API endpoint, except per_page and page:
      • repository -- a required string attribute in the format of <owner>/<repo>
      • milestone -- an optional string attribute
      • state -- an optional string attribute
        ...
    • limit -- an optional int attribute

The plugin sends an HTTP request to the GitHub API Issues endpoint and returns the de-serialized JSON response, with the number of results capped by limit.
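A hypothetical usage sketch based on the parameters above (the repository and token are placeholders):

// hypothetical plugin configuration
config data github_issues {
  github_token = from_env_var("GITHUB_TOKEN")
}

// fetch up to 50 open issues
data github_issues "open_issues" {
  repository = "blackstork-io/fabric"
  state      = "open"
  limit      = 50
}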

Deliverables

  • new data.github_issues plugin
  • the unit tests for the plugin

Refs

`data.virustotal_api_usage` plugin

Description

VirusTotal has built-in quotas on API usage per user and group. Security teams that extensively use VirusTotal must be aware of their usage to ensure the caps are not hit, and the critical data pipelines continue to work.

Use Case

Monitoring API usage and quotas is necessary to ensure proper utilization of VirusTotal as an enrichment / threat hunting / CTI resource. The plugin should allow users to fetch the usage metrics for user and group accounts.

Requirements

  • configuration:
    • api_key -- a required string attribute
  • parameters
    • user_id -- (optional) a string attribute
    • group_id -- (optional) a string attribute
    • start_date -- (optional) a string attribute, YYYYMMDD formatted date
    • end_date -- (optional) a string attribute, YYYYMMDD formatted date
  • constraints:
    • either user_id or group_id must be provided

The plugin should be a part of data.virustotal package.
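A hypothetical usage sketch based on the parameters above (the group ID and date range are placeholders):

// hypothetical plugin configuration
config data virustotal_api_usage {
  api_key = from_env_var("VT_API_KEY")
}

// fetch January usage metrics for a group
data virustotal_api_usage "group_usage" {
  group_id   = "example-group"
  start_date = "20240101"
  end_date   = "20240131"
}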

Additional Information

Programmatic API

Background

To build more tools for the ecosystem, we should be able to reuse features implemented in fabric. This means fabric must provide a programmatic API to be used in other Go-powered tools.

The goal of the programmatic API for fabric is to provide parsing, data fetching and content rendering capabilities to other tools.

Design

The API must provide the functions:

  • a function that accepts a Fabric config, provided as a string, and returns all parsed root-level blocks as structs.
  • a function that accepts all parsed root-level blocks as structs, name of the document block, and either a path to the content block in the document or a parsed content block struct to be rendered. The function returns a struct with the content block details and the rendered text
  • a function that accepts all parsed root-level blocks as structs, name of the document block, and either a path to the data block in the document or a parsed data block struct to run. The function returns a struct with the data block details and the data returned by the plugin
  • a function that accepts all parsed root-level blocks as structs, and either the name of the root-level data block or a parsed data block struct to run. The function returns a struct with the data block details and the data returned by the plugin.

Built-in `content.table` plugin

Background

In addition to the installable plugins (#3), fabric must have built-in plugins to allow generation of simple content without the need to download extra components.

Related: #13, #15

Design

content.table plugin is a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • API interface:
    • local context object
    • required columns attribute, a list of strings
    • optional datapoints attribute, a list of strings

Behavior

The plugin formats and returns a Markdown-formatted table with the header row.

The values in the columns attribute are the column names. The datapoints attribute contains a list of jq queries to be applied to every object in the query_result list.

If query is not set, or datapoints is not set, or query_result is nil or an empty list, an empty table with the header is returned.

For example, with the content block

content table {
  query = <<-EOT
        data.elasticsearch.critical_alerts | (
            group_by(."kibana.alert.rule.name") |
            map({rule_name: .[0]."kibana.alert.rule.name", count: length})
        )
    EOT

  columns    = ["Rule Name", "Alerts Count"]
  datapoints = [".rule_name", ".count"]
}

and with the query_result set to

[
  {
    "rule_name": "Rule A",
    "count": 11
  },
  {
    "rule_name": "Rule B",
    "count": 22
  }
]

the expected result table would be

|Rule Name|Alerts Count|
|-|-|
|Rule A|11|
|Rule B|22|

Deliverables

  • new built-in content.table plugin
  • unit test for the plugin

Updates to plugin spec

Since #78, if a plugin provides nil in the arg spec, it receives the full contents of the plugin invocation block as a cty.Value

  • labels of nested blocks are ignored right now, this probably should be changed? How best to do it?

Built-in `data.csv` plugin

Background

There should be a way to import existing CSV files.

Design

data.csv plugin should be a part of the fabric binary.

Specification

  • Configuration options:
    • delimiter -- a one-character string used to separate fields. It defaults to ,
  • API interface:
    • required path string attribute that accepts a path to a file on a local filesystem.

Behavior

Using the provided path value, the plugin reads the CSV file and returns a JSON object with the data.

The plugin expects the CSV file to have a header, so the first row is always treated as a header. The CSV file is parsed using the configured delimiter.

The plugin produces a list of dictionaries -- one dictionary per row, with fields corresponding to the column names.

For example, for the CSV file

column_a,column-b,column C
1,2,3
4,5,foo

the plugin will return

[
  {
    "column_a": 1,
    "column-b": 2,
    "column C": 3
  },
  {
    "column_a": 4,
    "column-b": 5,
    "column C": "foo"
  }
]

Deliverables

  • new built-in data.csv plugin
  • the unit tests for the plugin

Built-in `content.frontmatter` plugin

Background

Frontmatter is a common way to add metadata to Markdown files. Frontmatter has to be at the top of the file and begin and end with three dashes (---), three pluses (+++), or { and } (see references), depending on the serialization format: yaml, toml or json.

Design

content.frontmatter plugin is a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • API interface:
    • content -- an optional map attribute, defaults to nil.
    • format -- an optional string attribute. Defaults to yaml; also supports json and toml

Behavior

Either query or content must be set in the block:

  • if content attribute is set, its value is used as input data
  • If content is not set but query is set, query_result is used as input data

The plugin serializes the input data in the requested format and surrounds it with the frontmatter delimiters for that format (---, +++, or { }).

For example,

content frontmatter "foo" {
  query = ".data.inline.fontmatter"
  format = "toml"
}

with query_result value

{
  "fieldA": "valueA",
  "fieldB": "valueB"
}

will produce:

+++
fieldA = "valueA"
fieldB = "valueB"
+++

Alternatively, the content block can be

content frontmatter "foo" {
  content = {
    fieldA = "valueA",
    fieldB = "valueB,
  }
  format = "toml"
}

The content plugin does not define where the block is placed in the document. It is up to a template writer to define this content block at the top of the document.

Deliverables

  • new built-in content.frontmatter plugin
  • the unit tests for the plugin

References

`content.mermaid_image` plugin

Background

Users can add Mermaid diagrams to the documents using content.text with format_as set to code and code_language set to mermaid, but in case the user's Markdown renderer does not support Mermaid diagrams, fabric should support static rendering.

Design

The plugin draws Mermaid diagrams as static images.

Specification

  • No configuration options:
  • API interface:
    • code -- a required string attribute that contains a Mermaid code.
    • alt_text -- an optional string attribute with alt text for the image tag
    • output_file -- a required string attribute that contains a path for the output file

Behaviour

The plugin renders mermaid diagram code into a static image and returns a Markdown image tag (similar to content.image).

The code value is treated as a Go template string (with the query_result object from the local context as input data).

For example,

content mermaid_image "foo" {
    code =  <<-EOT
      graph TD;
        A-->B;
        A-->C;
        B-->D;
        C-->D;
    EOT

    output_file = "/tmp/diagram.png"
    alt_text = "Mermaid diagram"
}

renders the diagram image into /tmp/diagram.png and returns

![<alt_text>](<output_file>)

Important

Things to consider:

  • Mermaid code can be rendered with
    • mermaid-cli (mmdc)
    • CDP-based (Chrome DevTools Protocol) library used to drive a headless browser on the system for rendering the diagrams (for example, mermaid.go)
  • both approaches add an external requirement for the plugin -- either the external tool or the Chrome browser must be installed on the system where fabric runs
  • mermaid-cli provides a convenient way of converting all mermaid code tags into images during post-processing of the Markdown file -- https://github.com/mermaid-js/mermaid-cli?tab=readme-ov-file#transform-a-markdown-file-with-mermaid-diagrams -- which lowers the need for this Fabric plugin.

Deliverables

  • the new content.mermaid_image plugin
  • the unit tests for the plugin

Create a centralized document describing the fabric syntax

Right now there is no single source of truth documenting the up-to-date expected syntax for fabric files. Documentation and examples are spread out over many issues and discussions.

@traut Why not use GitHub's wiki to document the fabric syntax? You can enable it in repo settings

Un-named data refs make things awkward?

The data returned by the plugin is set in the global context map under path data..

This makes the fact that refs are not nameable kind of awkward. If a document contains more than a single data ref to the same plugin, the results of the first invocation would be overwritten by the second (since they share the "<result-name>"), which prevents us from doing parallel execution and generally is an unexpected side effect.

We don't need refs to be anonymous to prevent ref chaining; we can forbid it by looking at the type of the block... Off-topic, but actually, I don't really know why we are preventing ref chaining -- it seems a pretty OK idea to me. The first ref may set some common parameters, and the ref in the document might refine them further for the specific invocation.

Switch to hcl templating and reduce reliance on jq (and maybe even the context map)

As I was investigating hcl deeper for #59 and #69, I had an idea. #17 and #29 felt too clunky to implement, so perhaps this is a better way.

TLDR:

Issues #20 and #59 mean we're moving towards using more of the hcl/cty ecosystem. I propose to replace gojq (at least partially) and to replace text/template fully.

content text{
    query = ".document.meta.name"
    // instead of replacing the data right here, we're sending it to the plugin with the whole copy of the context
    text = "{{.query_result}}, {{.data.block.value}}"
}

using native hcl templating becomes

content text{
    // doing everything even before the `text` attribute is sent to the plugin
    text = "${get(document.meta.name)}, ${get(data.block.value)}"
}

As for advanced JQ functions, we can offer many pre-made functions from the cty stdlib or write custom ones.

If we decide that it's not enough, we can add gojq back in, for example like this

content text{
    text = "${jq("$args[0].meta.name | length", document)}, ${data.block.value}"
}

Proposed syntax:
jq("<jq query>", <hcl path to contents of $args[0]>, <$args[1]>, ...)
or we can add the args to the root array:
jq("<jq query>", <hcl path to contents of .[0]>, <.[1]>, ...)

This allows us to observe the dependencies between blocks. At the moment this is just better UX and error reporting, but in principle it allows us to build a full dependency graph and execute both content and data blocks in parallel.

Details

Go-cty doesn't play well with variables. All cty.Values are constants, so if we're providing them in expression evaluation context ("Hello ${document.content[0].result}"), every change of the underlying data (update to rendered content block list) would result in recreating the whole hierarchy in cty, starting at document.

We can get away with providing data block values directly ("${data.block.value}" instead of "${get(data.block.value)}"), since all data blocks are parsed before content blocks are evaluated, and we can create the data cty.Value just once.

However with content values, there's another trick: if we provide a custom function (like get), then we can override default hcl behavior (lookup of document.content[0].result in hcl.EvalContext) via customdecode hcl extension. This allows us to manually query the document content in native go types, get the rendered result, and only then wrap it in cty.Value.
This also solves an issue with local context (#17) forcing us to execute all content blocks sequentially, because any content block might access the document.content. Now, since the path is not in the opaque jq expression, but in the hcl one, we can notice that block X accesses only document.content[2], so it's ok to run it, in parallel with other content blocks, any time after document.content[2] is ready.

I propose that if we're going with this approach, we enforce the "${get(...)}" syntax for data block values, just to keep everything uniform.

Also: #29 is about adding predictability to the data shape of the content block for reuse in refs. But in the bigger picture, the context map[string]any that we encode each time and send to content block plugins is itself unpredictable, so the plugin itself can't rely on anything being in it, and the only use for the context is to be templated into user-supplied strings. If hcl templating replaces text/template, then what's the use of sending the global map to each plugin? If the plugin wants some info from it, it can just define a parameter and request it. This can work even if a plugin actually wants the full data structure:

content someplugin {
    all_data_blocks = get(data)
    // but most plugins would request only some data
    some_data_block = get(data.block.name.value)
}

This way there's no implicit (and rather large) global map sent to each plugin: plugins must request, and the user must approve, transferring the information. Also, this helps with potential parallel execution once again: the content block above relies on the whole data, so it must be executed strictly after all data blocks, but this won't be the typical case -- most plugins only need some of the data.

Parser does not accept inline configuration for `data.csv` blocks

Description

When processing the document with data.csv blocks that contain inline configuration, the parsing fails.

Environment

Fabric version: 8d84549
Operating System: macOS Sonoma 14.2.1 (23C71)
Terminal/Shell: zsh

Steps to Reproduce

2.fabric contains:

config data csv {
  delimiter = ";"
}

config data csv "dash-separated" {
  delimiter = "-"
}

data csv "events_a" {
  path = "/tmp/events-a.csv"
}

document "test-document2" {

   data ref {
     base = data.csv.events_a
   }

   data csv "events_b" {
     config {
       delimiter = ","
     }

     path = "/tmp/events-b.csv"
   }

   data csv "events_c" {
     config = config.data.csv.dash-separated
     path = "/tmp/events-b.csv"
   }
}

Expected Behavior

The rendering fails because CSV files are not found.

Actual Behavior

Warnings "Plugin 'data csv' does not support configuration, but was provided with one. Remove it." are raised:

$ ../../dist/fabric_darwin_arm64/fabric -document test-document2 -path . -plugins ../../dist/plugins/plugins
Error: Missing block name

  on 2.fabric line 15, in document "test-document2":
  15:    data ref {

Block name was not specified

Warning: Plugin doesn't support configuration

  on  line 0:
  (source code not available)

Plugin 'data csv' does not support configuration, but was provided with one. Remove it.

Error: Failed to read csv file

open /tmp/events-b.csv: no such file or directory

Warning: Plugin doesn't support configuration

  on 2.fabric line 28, in document "test-document2":
  27:    data csv "events_c" {
  28:      config = config.data.csv.dash-separated
  29:      path = "/tmp/events-b.csv"
  30:    }

Plugin 'data csv' does not support configuration, but was provided with one. Remove it.

Error: Failed to read csv file

open /tmp/events-b.csv: no such file or directory

Syntax updates for refs and block names

A) Refs

Now:

data plugin_b "orig_name" {
    parameter_z = ["a", "b", "c", "d"]
}

data ref "new_name" {
    ref = data.plugin_b.orig_name
    parameter_z = ["x", "y", "z"]
}

Next version:
Ref blocks do not need names, remove them.

Proposition 1

Don't duplicate the block type in the ref block header; we can determine it from the ref destination, and users see it in the ref attribute.

ref "ref_name" {
    ref = data.plugin_b.orig_name
    parameter_z = ["x", "y", "z"]
}

Proposition 2

Since the ref attribute is the heart and soul of the ref block, perhaps it should be a label?

ref data.plugin_b.orig_name "ref_name" {
    parameter_z = ["x", "y", "z"]
}

I think this communicates intent even better. Having ref as an attribute feels wrong: it looks just like an attribute override for the referenced block, but it's actually a part of the ref block itself. Also, since the attribute order is arbitrary, this is currently possible:

data ref "new_name" {
    parameter_x = ["x", "y", "z"]
    parameter_y = ["x", "y", "z"]
    ref = data.plugin_b.orig_name // this line is fundamentally dissimilar (ref block parameter), but is lost amongst attribute overrides of the referenced block
    parameter_z = ["x", "y", "z"]
}

B) Block names

Now:

content generic _ {}
content text _ {}

Next version:
Block names are optional

content generic {}
content text {}

Built-in `content.image` plugin

Design

content.image plugin is a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • API interface:
    • src -- a required string attribute
    • alt -- an optional string attribute

Behavior

The plugin returns a Markdown-formatted image tag:

![<alt-text>](<src-url>)

for

content image "foo" {
  src = "src-url"
  alt = "alt-text"
}

Download and install missing plugins

Background

Templates might use plugins that are not installed on the local FS. In this case, fabric should be able to download and install the specified versions of those plugins.

Design

  • when fabric executes, before the template execution, it populates the in-memory registry of available plugins from the plugins present in the FS cache (cache_local_path path set in global settings)
    • the local cache directory structure is <cache_local_path>/plugins/<data|content>/<name>/<version>/
  • if the template uses the data / content plugins not present in the cache, download and install the missing plugins:
    • read the list of plugins from plugin_versions from the global configuration (#5)
      • try fetching the appropriate (latest up to the ceiling) version of the plugin from the local mirror, if mirror_local_path is provided in the global settings. The path schema is <mirror_local_path>/<data|content>/<name>/<version>.zip
      • try fetching the appropriate (latest up to the ceiling) version of the plugin from the registry using base_url from the global config in the URL schema -- <base_url>/<data|content>/<name>/<version>.zip
    • if the archive is downloaded, unpack it into <cache_local_path>/plugins/<data|content>/<name>/<version>/ and update the in-memory plugins registry.
    • if no plugin found, error out and die

Some of the plugins must be built-in -- #13

References

`data.graphql` plugin

Design

The plugin provides a naive GraphQL client implementation (without strict schema-based types). The plugin sends an HTTP request with the provided query and returns the deserialized "data" value from the JSON response.

Specification

  • Configuration options:
    • url -- a required string attribute that contains a URL to a GraphQL endpoint
    • auth_token -- an optional string attribute that contains a bearer token to be used in Authorization header value: Authorization: Bearer <token>
  • API interface:
    • query -- a required string attribute that contains a GraphQL query.
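A hypothetical usage sketch based on the options above (the endpoint URL and query are placeholders):

// hypothetical plugin configuration
config data graphql {
  url        = "https://api.example.com/graphql"
  auth_token = from_env_var("GRAPHQL_TOKEN")
}

// run a raw GraphQL query
data graphql "recent_incidents" {
  query = <<-EOT
    query {
      incidents(last: 10) {
        id
        title
        severity
      }
    }
  EOT
}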

Deliverables

  • new data.graphql plugin
  • the unit tests for the plugin

Moving away from net/rpc as a go-plugin transport

net/rpc (and specifically the underlying encoding/gob) is no longer supported by go-cty as of 1.11.0

hashicorp/packer suffers from the same issue. Their proposed plan is to slowly migrate to the gRPC go-plugin backend and deprecate net/rpc.

Our current mitigation is in pkg/gobfix/gobfix.go. I replace cty.Types in hcldec.Specs with a custom wrapper that adds support for encoding/gob back in via json ser/deser.

Proposed solutions

  • Take a page from the packer's book, migrate our RPC to gRPC
    ➕ Known working approach
    ➕ gRPC is simply more powerful than net/rpc
    ➖ Adds a codegen step and complicates things
  • Try using msgpack as an encoder and keep the net/rpc backend (since it would only be transmitting simple byte arrays -- this is doable)
    ❔ cty does support msgpack. Do hcldec types?
    ➕ Probably simpler to implement
    ➖ Keeps us using semi-deprecated technology. go-plugin may drop net/rpc support once hashicorp migrates away fully

My proposal: the workaround is not that awful, we can use it for the time being. As we develop plugins we can collect ideas about plugin design, and implement pluginInterface/v2 with those ideas, simultaneously migrating to gRPC.

Add `from_env_var` HCL function

Background

Hard-coding credentials in .fabric files is a bad practice, so there should be a way to load credentials during execution dynamically.

Design

Provide an HCL function from_env_var() that takes the name of an environment variable and loads the value from it during execution (if the variable is set).

Important

Related issue: #4

Usage example:

// plugin configuration
config data elasticsearch "clusterB" {
    cloud_id = from_env_var("CLUSTER_B_CLOUD_ID")
    api_key = from_env_var("CLUSTER_B_API_KEY")
}

or for the inline config block

data elasticsearch "critical_alerts" {
  config {
    cloud_id = from_env_var("CLUSTER_B_CLOUD_ID")
    api_key  = from_env_var("CLUSTER_B_API_KEY")
  }

  index        = ".alerts-security.alerts-*"
  query_string = "kibana.alert.severity:critical AND @timestamp:[now-1d/d TO now]"
  size         = 10
}

Deliverables

  • from_env_var() function, to be executed during parsing of the templates
  • unit tests for the function

Local context for content plugins invocation

Background

Content plugins require the local context, in addition to invocation parameters, for execution. In addition to the data fetched from data plugins, the local context must contain the body of the template and the results of previous executions -- this will allow us to create meta content blocks -- the blocks that query the template / other blocks instead of just data.

Design

  • initial local context is a struct that contains:
    • data -- a map with the data returned by data plugins
    • document -- a JSON-style map of the current document template
      • already processed content blocks contain a result field with the result of the execution of the block

content plugin execution steps:

  • the local context is extended with:
    • query root field that contains a value of query attribute from the block definition
    • query_result -- the result of the JQ query from the query attribute applied to the initial local context (with data and document top fields)
  • the local context is passed to the plugin via API, together with the arguments specified in the block
  • the execution results are added to the context for the next plugin execution

For example, we should be able to perform these queries:

document "test-document" {
    meta {
        name = "Test Document Template"
        tags = ["tag-a", "tag-b"]
    }

    content text {
        query = ".document.meta.name"
        text = "Template name is is {{ .query_result }}"
    }

    content text {
        query = ".document.content[0].result"
        test = "First block result is \"{{ .query_result }}\""
    }
}

Built-in `data.inline` plugin

Background

There should be an easy way to specify a data structure inline in the template

Design

data.inline plugin should be a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • the attributes set inside a data inline <name> block define a map. The attribute names are valid HCL attribute names and the values can be any data structure.

For example, the block

data inline "foobar" {
    field_a = 123
    field_b = 456
}

represents a {"field_a": 123, "field_b": 456} dictionary

Deliverables

  • new built-in data.inline plugin
  • the unit tests for the plugin

Plugin architecture

content and data blocks are handled in almost the same way (even the linter yelled at me for large chunks of identical code). With the current struct-based parsing, I had to write a 200-line schema_impl.go, which is mostly a lot of boilerplate to make content and data blocks conform to the same interface and de-duplicate the code handling them.

Since we're moving to dynamic plugin-determined schemas, combining the code that handles both kinds of plugins would be even easier. From the main app's perspective, the only difference between data and content blocks is whether they can contain other blocks or not. Everything else is determined by the plugin.

Proposed plugin RPC interface v1:

go-plugin binary exposes the following functions:

GetPlugins() []Plugin

// individual plugin, like "content.text" or "data.plugin_a"
type Plugin struct{ 
    Kind           string // "content" or "data" for now
    Name           string // "text", "plugin_a", etc.
    PluginVersion SemVer // version of this particular plugin
    ConfigSpec     Spec // Specification of the `config` block
    InvocationSpec Spec // Specification of the invocation block (parameter_x and parameter_y for data.plugin_a)
    ResultSpec     Spec // Specifies how the plugin modifies the block that invoked it, for example "sets result" for data plugins or "sets text" for content plugins
}

Call(kind string, name string, configData, invocationData) result, diagnostics
// configData and invocationData conform to ConfigSpec and InvocationSpec respectively
// Result conforms to ResultSpec

The go-plugin version is the version of the interface above. It's generic enough that we wouldn't need to update it often, only when something like "we added the ability to configure plugins" has changed.

Each plugin can be updated on its own, which would change the returned PluginVersion and ConfigSpec/InvocationSpec/ResultSpec.

Syntax sugar for no-query-result condition

Background

#142 introduces dynamic blocks. One typical pattern is to select which content to render based on the query result.

Design

Implement syntactic sugar so that

content text {
  query = ".data.inline.my_elements"
  value = "The elements: {{ .data.inline.my_elements }}"

  when_empty_result content text {
    ...
  }
}

is unpacked into

dynamic content text {
  query = ".data.inline.my_elements"
  condition_query = "(.query_result == {} or .query_result == [] or .query_result == null) | not"

  value = "The elements: {{ .data.inline.my_elements }}"
}

dynamic content text {
  query = ".data.inline.my_elements"
  condition_query = ".query_result == {} or .query_result == [] or .query_result == null"

  value = "There are no elements"
}

Deliverables

  • the unit tests covering the new behavior

Built-in `data.json` plugin

Background

In addition to the installable plugins (#3), fabric must have built-in plugins to allow generation of simple content without the need to download extra components.

Related: #15, #18

Design

data.json plugin should be a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • API interface:
    • required glob string attribute that contains a path glob to match against the local FS

Behavior

Using the provided glob value, the plugin reads all matching files and returns a list of items, where each item is the content of a matching file (JSON-deserialized). If there are no files matching the provided glob, an empty list is returned.
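A minimal usage sketch, assuming the block is invoked like the other built-in data plugins (the glob is illustrative):

// load every JSON scan result from a local directory
data json "scan_results" {
  glob = "./scans/*.json"
}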

Deliverables

  • new built-in data.json plugin
  • unit test for the plugin

Support global configuration

Background

Users should be able to configure the behavior of some of the features in fabric. The easiest way to do that is to allow global configuration, similar to Terraform settings (link).

Design

In the input *.fabric files, there can be a single configuration block that contains global settings:

fabric {

    cache_dir = "./.fabric"

    plugins_registry {
        mirror_dir = "/tmp/plugins/"
    }

    plugin_versions = {
        "blackstork/data.elasticsearch" = "1.2.3"
        "blackstork/content.openai" = "=11.22.33"
    }
}
  • cache_dir attribute (optional) contains a path to a directory on the local FS where a local cache will be kept.
    • default value: ./.fabric (a relative path to a dir from the directory where the execution is happening)
  • plugins_registry block (optional)
    • mirror_dir (optional) attribute that contains a path on a local FS from which the archives of the plugins can be taken
  • plugin_versions attribute (optional) contains a dict with pinned versions of the plugins. The plugin names are namespaced and the versions are in SemVer, following Terraform's version constraint syntax.

The global configuration block should be deserialized and validated when input templates are loaded.

If the path specified in cache_dir (or the default value) does not exist, it should be created.

TODO:

`content.stixview` plugin

Background

Stixview JS library renders STIX2 graphs.

Specification

  • the plugin has no configuration options
  • interface (see stixview data attributes for the detailed descriptions):
    • gist_id -- (optional) a string attribute
    • stix_url -- (optional) a string attribute
    • caption -- (optional) a string attribute
    • show_footer -- (optional) a boolean attribute
    • show_sidebar -- (optional) a boolean attribute
    • show_tlp_as_tags -- (optional) a boolean attribute
    • show_marking_nodes -- (optional) a boolean attribute
    • show_labels -- (optional) a boolean attribute
    • show_idrefs -- (optional) a boolean attribute
    • width -- (optional) int attribute
    • height -- (optional) int attribute

If query is provided, gist_id and stix_url are ignored.
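A hypothetical content block sketch using the attributes above (the gist ID and caption are placeholders):

// render a STIX2 graph from a gist
content stixview {
  gist_id      = "0123456789abcdef0123456789abcdef"
  caption      = "Campaign infrastructure overview"
  show_footer  = true
  show_sidebar = true
  width        = 800
  height       = 400
}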

Behavior

If the query attribute is not set, but gist_id or stix_url are, the plugin returns something similar to

<script src="https://unpkg.com/stixview/dist/stixview.bundle.js" type="text/javascript"></script>
<div data-stix-gist-id="<GIST-ID>"
     data-show-sidebar=true
     data-show-marking-nodes=false
     data-graph-height=400>
</div>

if query attribute is set, the plugin returns slightly more complicated HTML:

<script src="https://unpkg.com/stixview/dist/stixview.bundle.js" type="text/javascript"></script>
<div id="graph-<UNIQUE-STR>"
     data-show-sidebar=true
     data-enable-mouse-zoom=false
     data-show-footer=false
     data-graph-height=400>
</div>
<script>
window.stixview.init(
    document.getElementById('graph-<UNIQUE-STR>'),
    (graph) => {
        graph.loadData({
                "type": "bundle",
                "id": "bundle--<UNIQUE-UUID>",
                "spec_version": "2.0",
                "objects": <JSON-SERIALIZED-LIST-OF-OBJECTS>
        });
    }
);
</script>

with

  • graph-<UNIQUE-STR> -- unique randomized graph div id
  • <UNIQUE-UUID> -- random UUID
  • <JSON-SERIALIZED-LIST-OF-OBJECTS> -- JSON serialized objects from query_result

For example:

<script src="https://unpkg.com/stixview/dist/stixview.bundle.js" type="text/javascript"></script>
<div id="graph-cfd073c4-c5da-4228-adf3-7bcafed88f98"
     data-show-sidebar=true
     data-enable-mouse-zoom=false
     data-show-footer=false
     data-graph-height=400>
</div>
<script>
window.stixview.init(
    document.getElementById('graph-cfd073c4-c5da-4228-adf3-7bcafed88f98'),
    (graph) => {
        graph.loadData(
            {
                "type": "bundle",
                "id": "bundle--ac946f1d-6a0e-4a9d-bc83-3f1f3bfda6ba",
                "spec_version": "2.0",
                "objects": [
                    {
                        "type": "malware",
                        "id": "malware--591f0cb7-d66f-4e14-a8e6-5927b597f920",
                        "created": "2015-05-15T09:12:16.432Z",
                        "modified": "2015-05-15T09:12:16.432Z",
                        "name": "Poison Ivy",
                        "description": "Poison Ivy is a remote access tool, first released in 2005 but unchanged since 2008. It includes features common to most Windows-based RATs, including key logging, screen capturing, video capturing, file transfers, system administration, password theft, and traffic relaying.",
                        "labels": [
                            "remote-access-trojan"
                        ]
                    },
                    {
                        "type": "identity",
                        "id": "identity--81cade27-7df8-4730-836b-62d880e6d9d3",
                        "created": "2015-05-15T09:12:16.432Z",
                        "modified": "2015-05-15T09:12:16.432Z",
                        "name": "FireEye, Inc.",
                        "identity_class": "organization",
                        "sectors": [
                            "technology"
                        ]
                    },
                ]
            }
        );
    }
);
</script>

Additional context

Lazy evaluation

Since we're moving to dynamic schemas all of the parsing is done in our code (right now gohcl decodes the whole struct and its children, whether we need it or not).

This means that we do not need to parse the bodies of the blocks or resolve refs if we don't have to. As far as I understood, the primary goal of this tool is to render a specific document. What if we do it like this:

  1. Parse top-level block headers
  2. Find the requested document and begin parsing its body
  3. Execute plugins, resolve refs, and evaluate blocks when they are needed by the document body

Built-in `content.toc` plugin

Design

content.toc plugin renders a Table of Contents for the document

Specification

  • the plugin has no configuration options
  • API interface (inspired by Hugo TOC config):
    • start_level -- (optional) a positive int attribute. Default value: 1
    • end_level -- (optional) a positive int attribute. Default value: 3
    • ordered -- (optional) a boolean attribute. Default value: false
    • scope -- (optional) a string attribute. Accepted values are: document and section. Default value: document.

Behavior

The plugin accepts a local context with the parsed document tree inside (#17), walks the tree (depending on the scope -- section or document -- the walk is constrained or not) and assembles a table of contents from content.text blocks with format_as set to title (#15).

This plugin might reuse relative-title-size calculation that the content.text plugin performs if absolute_title_size is not set.
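A hypothetical usage sketch based on the interface above:

// an ordered table of contents covering heading levels 2-4
content toc {
  start_level = 2
  end_level   = 4
  ordered     = true
  scope       = "document"
}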

Deliverables

  • new built-in content.toc plugin
  • the unit tests for the plugin

`content.openai_text` plugin

Design

The plugin utilizes an OpenAI API call for rendering text.

Specification

  • Configuration options:
    • api_key -- a required string attribute that contains an OpenAI API key
    • system_prompt -- an optional string attribute that contains a system prompt (OpenAI docs). Set to [TBD] by default.
  • API interface:
    • prompt -- a required string attribute that contains a user-defined prompt for API call

Behaviour

The plugin uses system_prompt, prompt and query_result to assemble an API request.

The user prompt is created by joining the prompt value with a serialized query_result value (surrounded with three backticks).

For example,

content openai_text "foo" {
    prompt = "Describe in plain text the events using only data provided in JSON below. Do not format the text in any way."
    query = ...
}

with query_result value of

[
  {
    "event_type": "endpoint",
    "name": "Event Name 1"
  },
  {
    "event_type": "firewall",
    "name": "Event Name 2"
  }
]

creates this user prompt for the API call:

Describe in plain text the events using only data provided in JSON below. Do not format the text in any way.
```
[
  {
    "event_type": "endpoint",
    "name": "Event Name 1"
  },
  {
    "event_type": "firewall",
    "name": "Event Name 2"
  }
]
```

with the result:

Event Name 1 is of type "endpoint," and Event Name 2 is of type "firewall."

Deliverables

  • the new content.openai_text plugin
  • the unit tests for the plugin

Support variable requirements in referenced blocks

Background

The reference-able content blocks, defined on the root level of the codebase, need to access specific data points from the context. The content block might not know the exact keys available in the context, making the logic inside the block fragile and error-prone. This makes block reuse difficult.

The blocks that allow vars blocks (content, document, section blocks) and that can be referenced should be able to declare their requirements: a list of variables a block expects to find in the context during execution.

Design

We introduce a new attribute to the content, document and section blocks:

required_vars -- (optional) a list of strings

required_vars defines the names of the variables that the block expects to find in the context under the .vars namespace.

For example:

content text "hello" {
  text = "Hello, {{ .vars.name }}"
  required_vars = ["name"]
}

content text "greeting" {
  text = "Greetings, {{ .vars.other_name }}"
  required_vars = ["other_name"]
}

document "bar" {
  vars {
    name = "Bruce"
  }

  content ref {
    base = content.text.hello
  }

  content ref {
    vars {
      other_name = query_jq(".vars.name")
    }
    base = content.text.greeting
  }
}

renders into

Hello, Bruce
Greetings, Bruce

Note: Required variables should be asserted during evaluation of the block, not during ref block resolution (since the data is not available yet)

References

Built-in `content.list` plugin

Design

content.list plugin is a part of the fabric binary.

Specification

  • the plugin has no configuration options
  • API interface:
    • item_template -- a required string attribute. Defines a template string to be used for rendering a list item.

Behavior

The item_template template is applied to every object in query_result list.

For example,

content list "foo" {
  item_template = "* {{ event_type }}: {{ name }}": 
}

for query_result value

[
  {
    "event_type": "endpoint",
    "name": "Event Name 1"
  },
  {
    "event_type": "firewall",
    "name": "Event Name 2"
  }
]

will produce:

* endpoint: Event Name 1
* firewall: Event Name 2

It is enough to use "1. " at the start of the line (and render every line with "1. ") to get a proper ordered list, according to GitHub Markdown docs.

Deliverables

  • new built-in content.list plugin
  • the unit tests for the plugin
