
fluent-plugin-record-reformer's Introduction

fluent-plugin-record-reformer

Build Status

Fluentd plugin to add or replace fields of an event record

Requirements

See .travis.yml

Note that fluent-plugin-record-reformer supports both v0.14 API and v0.12 API in one gem.

Installation

Use RubyGems:

gem install fluent-plugin-record-reformer

Configuration

Example:

<match foo.**>
  type record_reformer
  remove_keys remove_me
  renew_record false
  enable_ruby false
  
  tag reformed.${tag_prefix[-2]}
  <record>
    hostname ${hostname}
    input_tag ${tag}
    last_tag ${tag_parts[-1]}
    message ${record['message']}, yay!
  </record>
</match>

Assume the following input is coming (indented for readability):

foo.bar {
  "remove_me":"bar",
  "not_remove_me":"bar",
  "message":"Hello world!"
}

then the output becomes as below (indented for readability):

reformed.foo {
  "not_remove_me":"bar",
  "hostname":"YOUR_HOSTNAME",
  "input_tag":"foo.bar",
  "last_tag":"bar",
  "message":"Hello world!, yay!"
}

Configuration (Classic Style)

Example:

<match foo.**>
  type record_reformer
  remove_keys remove_me
  renew_record false
  enable_ruby false
  tag reformed.${tag_prefix[-2]}
  
  hostname ${hostname}
  input_tag ${tag}
  last_tag ${tag_parts[-1]}
  message ${record['message']}, yay!
</match>

This results in the same output, but please note that the following option parameters are reserved, so they cannot be used as record keys.

Option Parameters

  • output_tag (obsolete)

    The output tag name. This option is deprecated; use the tag option instead.

  • tag

    The output tag name.

  • remove_keys

    Specify record keys to be removed as a comma-separated string, like

      remove_keys message,foo
    
  • renew_record bool

    renew_record true creates the output record from scratch, without merging in the input record fields. Default is false.

  • renew_time_key string

    renew_time_key foo overwrites the time of events with the value of the record field foo, if it exists. The value of foo must be a unix time.

  • keep_keys

    You may want to keep some record fields even though you specify renew_record true. In that case, specify record keys to be kept as a comma-separated string, like

      keep_keys message,foo
    
  • enable_ruby bool

    Enable Ruby code in placeholders. See the Placeholders section. Default is true (kept for compatibility with lower versions).

  • auto_typecast bool

    Automatically cast the field types. Default is false. NOTE: This option is effective only for field values comprised of a single placeholder.

    Effective Examples:

      foo ${foo}
    

    Non-Effective Examples:

      foo ${foo}${bar}
      foo ${foo}bar
      foo 1
    

    Internally, this keeps the type of the value if the value text consists of a single placeholder; otherwise, values are treated as strings.

    When you need to cast field types manually, out_typecast and filter_typecast are available.
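The single-placeholder rule above can be sketched in plain Ruby. This is a hypothetical illustration of the behavior, not the plugin's actual implementation; the regex and the `expand` helper are assumptions for the sketch:

```ruby
# If the whole value text is exactly one placeholder, keep the original
# value (and its type); otherwise interpolate everything as strings.
SINGLE_PLACEHOLDER = /\A\$\{([^}]+)\}\z/

def expand(text, record)
  if (m = text.match(SINGLE_PLACEHOLDER))
    record[m[1]]                                     # type preserved
  else
    text.gsub(/\$\{([^}]+)\}/) { record[$1].to_s }   # always a String
  end
end

record = { 'foo' => 1, 'bar' => 2 }
expand('${foo}', record)        # => 1 (Integer kept)
expand('${foo}${bar}', record)  # => "12" (String)
```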

Placeholders

The following placeholders are available:

  • ${record["key"]} Record value of key such as ${record["message"]} in the above example (available from v0.8.0).

    • Originally, record placeholders were available as ${key}, such as ${message}. This form is still kept for backward compatibility, but may be removed in the future.
  • ${hostname} Hostname of the running machine

  • ${tag} Input tag

  • ${time} Time of the event

  • ${tags[N]} (Obsolete. Use tag_parts.) Input tag split by '.'

  • ${tag_parts[N]} Input tag split by '.' and indexed with N, such as ${tag_parts[0]}, ${tag_parts[-1]}.

  • ${tag_prefix[N]} Tag parts up to and including index N. For example,

      Input tag: prefix.test.tag.suffix
      
      ${tag_prefix[0]}  => prefix
      ${tag_prefix[1]}  => prefix.test
      ${tag_prefix[-2]} => prefix.test.tag
      ${tag_prefix[-1]} => prefix.test.tag.suffix
    
  • ${tag_suffix[N]} Tag parts from index N onward. For example,

      Input tag: prefix.test.tag.suffix
    
      ${tag_suffix[0]}  => prefix.test.tag.suffix
      ${tag_suffix[1]}  => test.tag.suffix
      ${tag_suffix[-2]} => tag.suffix
      ${tag_suffix[-1]} => suffix
    
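As an illustration, the prefix/suffix indexing above could be computed like this in plain Ruby. This is a sketch only; `tag_prefix` and `tag_suffix` here are hypothetical helpers, not the plugin's code:

```ruby
# Join tag parts up to and including index n ('.'-separated tags assumed).
def tag_prefix(tag, n)
  parts = tag.split('.')
  idx = n >= 0 ? n : parts.size + n
  parts[0..idx].join('.')
end

# Join tag parts from index n onward.
def tag_suffix(tag, n)
  parts = tag.split('.')
  idx = n >= 0 ? n : parts.size + n
  parts[idx..-1].join('.')
end

tag = 'prefix.test.tag.suffix'
tag_prefix(tag, -2)  # => "prefix.test.tag"
tag_suffix(tag, 1)   # => "test.tag.suffix"
```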

It is also possible to write Ruby code in placeholders if you set the enable_ruby true option, so you may write code such as

  • ${time.strftime('%Y-%m-%dT%H:%M:%S%z')}
  • ${tag_parts.last}

but please note that enabling Ruby code is discouraged for security reasons, and also hurts performance.

Relatives

The following plugins look similar:

ChangeLog

See CHANGELOG.md for details.

Contributing

  1. Fork it
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create new Pull Request

Copyright

Copyright (c) 2013 - 2015 Naotoshi Seo. See LICENSE for details.

fluent-plugin-record-reformer's People

Contributors

cosmo0920, gyamxxx, okkez, piroor, sonots, tagomoris, xthexder


fluent-plugin-record-reformer's Issues

Support for rename

It would be awesome if there were a feature to rename keys. With the example configuration below, the current implementation adds the new field even if old_name is not defined in the record. It would be great if new_name were not created when old_name does not exist; in other words, a true rename feature.

<record>
   new_name ${old_name}
</record>
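A conditional rename along the lines the reporter asks for could look like this in plain Ruby. This sketches the desired behavior, not existing plugin functionality; `rename_key` is hypothetical:

```ruby
# Move old_name to new_name only when old_name actually exists,
# so a missing source key does not produce an empty target key.
def rename_key(record, old_name, new_name)
  record[new_name] = record.delete(old_name) if record.key?(old_name)
  record
end

rename_key({ 'old_name' => 'v' }, 'old_name', 'new_name')
# => { "new_name" => "v" }
rename_key({ 'other' => 1 }, 'old_name', 'new_name')
# => { "other" => 1 }  (no empty new_name added)
```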

add field below another field

Hi,

Is it possible to somehow create an additional field such as

<match foo.**>
  type record_reformer
  <record>
    _metadata <record>service bah</record>
    _metadata <record>something else</record>
  </record>
</match>

The event would then have a hash field _metadata:

{"_metadata": {"service": "bah", "something": "else"}}

record transformer and ruby problem !

Hi,

I'm trying to migrate our netflow from ES 2.4.1 to ES 6.1.2. I use the same config for the new Elasticsearch, with the latest fluentd and package updates.

Did something change, and do I have to modify the fluentd config to index our netflow now?

Thanks for your help.

2018-05-03 17:53:19 +0200 [warn]: #0 dump an error event: error_class=RuntimeError error="failed to expand Resolv.getname(ipv4_src_addr) : error = undefined local variable or method ipv4_src_addr' for #<Fluent::Plugin::RecordTransformerFilter::RubyPlaceholderExpander::CleanroomExpander:0x007f1a9be38048>" location="/opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluentd-1.1.0/lib/fluent/plugin/filter_record_transformer.rb:310:in rescue in expand'"

my fluentd.conf

<filter netflow.event.**>
@type record_transformer
enable_ruby true

ipname_src_addr ${Resolv.getname(ipv4_src_addr)}
ipname_dst_addr ${Resolv.getname(ipv4_dst_addr)}
.....

Multiple tag match error

Fluentd: 0.14.23

I've got an issue with a wildcard tag definition. When I use a *.team match pattern this rewrite doesn't work, but when I use some.team instead of *.team it works.

<match *.team>
  @type rewrite_tag_filter
  <rule>
    key     team
    pattern (.*)
    tag     other.team
  </rule>
</match>

Support adding new element to existing array field

I hope to convert an input field to an output array field. For example:
<record>
  tags ["${tag}"]
</record>

However, this conversion seems unavailable for now.

BTW, I also hope to add a new element to an existing array-type field, like:

<record>
  tags tags + ["new-extra-tag"]
</record>

Support for ${uuid}

I really like the flexibility of your plugin as it allows me to modify tags as well as do lots of other helpful processing of the logs. One feature I was looking to be able to add in is a UUID identifier.

I saw that there is a mixin that gives support for the hostname and uuid variables ( https://github.com/tagomoris/fluent-mixin-config-placeholders ) which I see the fluent-plugin-record-modifier that you reference is using. Right now I use that plugin to add in the uuid, then pass it to your plugin to do more processing:

<match uuid.**>
  type record_reformer

  remove_keys l_tag
  output_tag out.${l_tag}

  source ${hostname}
  foo bar
</match>

<match **>
  type record_modifier

  include_tag_key true

  lid ${uuid}
  tag_key l_tag
  tag uuid
</match>

It seems like that's a very clunky solution just to get the uuid into the data so that I can use your plugin to do much more processing.

Would it be hard to add in the uuid feature? Or if you have a better solution on how to do that, that would be great.

Thanks

Typecast

Problem:

Currently, record-reformer outputs all reformed fields as strings, but some fields should be integers and some should be arrays.

Solution:

One way to solve this is to use out_typecast or filter_typecast to convert the types of fields (although array forms like ["Foo", "Bar"] are not supported).

Roadmap?

Introduce a types option for record-reformer and record-transformer themselves? The implementation is very easy because the Fluentd API is available: https://github.com/sonots/fluent-plugin-filter_typecast/blob/master/lib/fluent/plugin/filter_typecast.rb

But, thinking of the UNIX philosophy (Do One Thing and Do It Well), it may be better to keep these plugins separate, so keep them as is?

Undefined local variable or method `log`

When running the plugin, it looks like there is a warning that should be fixed:

2014-03-25 10:10:56 -0700 [warn]: record_reformer: NameError undefined local variable or method `log' for #<Fluent::RecordReformerOutput::PlaceholderExpander:0x00000002f3d480> /usr/lib64/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluent-plugin-record-reformer-0.2.6/lib/fluent/plugin/out_record_reformer.rb:139:in `block in expand'

Thanks!

json type not supported in ruby placeholder

if I have:

 <match hubbleCamelRouteJmx>
   type record_reformer
   remove_keys request
   renew_record false
   enable_ruby true
   auto_typecast true
   tag reformedhubbleCamelRouteJmx
   <record>
     value "[{\"hubbleBean\" : { \"name\" : \"myBean1\", \"stat1\":0}}, {\"hubbleBean\" : { \"name\" : \"myBean2\", \"stat1\":0}}]"
#    value ${JSON.parse('[{"hubbleBean" : { "name" : "myBean1", "stat1":0}}, {"hubbleBean" : { "name" : "myBean2", "stat1":0}}]')}
   </record>
 </match>

The first value works well, but if I use the Ruby placeholder, record reformer sends a string instead of JSON.

Thanks,
Phil

field type?

I am using timestamp ${time.strftime('%s')}, but it is saved as a string... is there a way to tell record_reformer that the field is an integer?

Thanks!

Only the tag pattern placed first in the match directive is forwarded to ElasticSearch.

I use fluent-plugin-record-reformer to add a timestamp to 2 kinds of records.

Output to 2 destinations:

  • output to td-agent.log via stdout
  • output to ElasticSearch via secure_forward

My configuration

<match syslog.**  netflows.** >
  type record_reformer
  tag  logs.${tag}.mylab-mytoken
  <record>
    @timestamp ${time} 
  </record>
</match>

<match logs.**>
  type copy
  <store>
        type stdout
  </store>
  <store>
        type secure_forward
        shared_key  log_metric_pipline
        self_hostname fluentd-client-ca.io
        secure false
        keepalive 10
        <server>
                host datacenter.mylab.io
        </server>
  </store>
</match>

Problem: in <match syslog.** netflows.**>

Only the tag pattern placed first is forwarded to ElasticSearch.

Pay attention to <match syslog.** netflows.** >

  • only logs matching the first pattern (syslog.**) are forwarded to ElasticSearch.
  • both kinds of logs are successfully output to td-agent.log via stdout.

If I change the order:
<match syslog.** netflows.** > ===> <match netflows.** syslog.** >
Then

  • the kind of log forwarded to ElasticSearch becomes netflows.**

Trying to split tag from docker log driver

So I was pointed here when I asked about splitting the 'tag' passed by the docker --log-opt. I think I am a little lost as to tag vs tags vs fields and how to accomplish what I want.

This is what my json looks like:

{
  "_index": "logstash-2016.04.25",
  "_type": "fluentd",
  "_id": "AVRPJPrIK0GE7Gr8enZd",
  "_score": null,
  "_source": {
    "container_id": "04a08d96da339a0786829134c6ee252feaec1b8536d",
    "container_name": "/api-production-2",
    "source": "stderr",
    "log": "blah",
    "tag": "docker.api-1.company/api:production-latest.api-production-2",
    "@timestamp": "2016-04-25T20:38:22+00:00"
  },
  "fields": {
    "@timestamp": [
      1461616702000
    ]
  },
  "sort": [
    1461616702000
  ]
}

This is a snippet of what I want to end up with:

reformed.docker {
  "hostname": "api-1",
  "image": "company/api:production-latest",
  "name": "api-production-2"
}

This is what I have but it doesn't seem to be doing anything and I'm not sure how to troubleshoot:

<match docker.**>
  type record_reformer
  enable_ruby false

  tag reformed.docker
  <record>
    hostname ${tag_parts[-1]}
    image ${tag_parts[-2]}
    name ${tag_parts[-3]}
  </record>
</match>
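As a sanity check outside fluentd, plain Ruby shows what the tag_parts indices resolve to for the tag in question (assuming the tag splits only on '.'):

```ruby
# Split the docker tag the same way ${tag_parts[N]} does.
parts = 'docker.api-1.company/api:production-latest.api-production-2'.split('.')

parts[-1]  # => "api-production-2"               (name)
parts[-2]  # => "company/api:production-latest"  (image)
parts[-3]  # => "api-1"                          (hostname)
```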

Any help in pointing me in the right direction would be greatly appreciated πŸ˜„

Support integer type field with placeholder (including Ruby codes)

For example, there is a record like { "timestamp" => "2015-06-10T10:00:00" } and I hope to convert it to a unix time. Then the configuration would be:

enable_ruby true
<record>
  timestamp ${Time.parse(timestamp).to_i}
</record>

But it doesn't work as expected, because placeholders are always expanded as strings.
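The conversion itself works in plain Ruby; the problem is only that the placeholder result is stringified. For example (assuming a UTC timestamp):

```ruby
require 'time'

# Parsing the timestamp and converting to a unix time yields an Integer.
epoch = Time.parse('2015-06-10T10:00:00 UTC').to_i
epoch.class  # => Integer
```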

Array support

Is there any support for reforming fields into an array?
e.g.
I have a field lon:0.2193 and a field lat:129.53;
now I want the output to be coordinate:[0.2193,129.53].
Is that possible using record reformer?

Breaks when trying to reform nested records

Here's the definition of a reformer plugin that we have:

cat /etc/td-agent/conf.d/005-match-reformer.conf

# this reformer will rewrite any record that has a channel attribute present in
# the payload. e.g.
# a message with key "message.balog" (which is set by default from the tail source)
# with the payload `{ "channel": "foo.bar"}` will have the routing key rewritten
# to "foo.bar".

<match **.balog>
    type record_reformer
    tag ${header["channel"]}
</match>

If the message is well formed then this plugin works:

echo '{"header": {"channel": "service.debug.test"}, "payload": {"x": 10}}' >> /var/log/td-agent/debug.log

However, if it is malformed (note, we are missing the header):
echo '{"channel": "service.debug.test", "payload": {"x": 10}}' >> /var/log/td-agent/debug.log

It will fail with the error

2014-10-06 20:42:51 +0000 [warn]: record_reformer: NoMethodError undefined method `[]' for nil:NilClass /usr/lib/fluent/ruby/lib/ruby/gems/1.9.1/gems/fluent-plugin-record-reformer-0.3.0/lib/fluent/plugin/out_record_reformer.rb:189:in `block in expand'                          

This code is https://github.com/sonots/fluent-plugin-record-reformer/blob/master/lib/fluent/plugin/out_record_reformer.rb#L187-L190

This crashes any other plugins that run after this error, which is not desirable. It only happens when trying to parse nested tag attributes, e.g. tag ${header["channel"]} will crash but tag ${channel} will not.

${message} support for attributes in various JSON levels

Hi @sonots ,

Is it possible to add a small tweak to the code to pull out attribute values from various levels of the JSON? It looks like attributes can only be pulled from the first level.

For example:

{
    "level_1_a": "value-of-level_1_a",
    "b": "value-of-b",
    "sub": {
        "level_2_a": "value-of-level_2_a",

        "sub_sub": {
            "level_3_a": "value-of-level_3_a"
        }
    }
}

The following does not work (giving empty values for all except level_1_a):

<record>
    hostname ${hostname}
    all_a_values  ${record["level_1_a"]},${record["sub.level_2_a"]},${record["sub.sub_sub.level_3_a"]}
</record>

The following works:

<record>
    hostname ${hostname}
    all_a_values  ${record["level_1_a"]}
</record>
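A dotted-path lookup like the one requested could be sketched in plain Ruby as follows (`fetch_path` is a hypothetical helper, not part of the plugin):

```ruby
# Walk a nested hash following a dotted path such as "sub.sub_sub.level_3_a";
# return nil if any intermediate node is missing or not a hash.
def fetch_path(record, path)
  path.split('.').reduce(record) { |node, key| node.is_a?(Hash) ? node[key] : nil }
end

record = {
  'level_1_a' => 'value-of-level_1_a',
  'sub' => { 'sub_sub' => { 'level_3_a' => 'value-of-level_3_a' } }
}
fetch_path(record, 'sub.sub_sub.level_3_a')  # => "value-of-level_3_a"
fetch_path(record, 'missing.key')            # => nil
```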

Sub elements and variables

Hello,

I am having an issue with the following stanza:

<match ips.suricata>
  type record_reformer

  tag ips.suricata.reformed

  ips_signature ${alert.gid}:${alert.signature_id}:${alert.rev}
  ips_action ${alert.action}
  ips_category ${alert.category}
</match>

When it is parsed, the ips_* variables are never set. It seems the plugin does not handle variables nested inside a sub-object. Here is the source record:

20140507T150926+0100    ips.geo.suricata.reformed       {"timestamp":"2014-05-07T15:09:26.390491","event_type":"alert","src_ip":"218.77.79.34","src_port":41397,"dest_ip":"123.123.123.123","dest_port":80,"proto":"TCP","alert":{"action":"blocked","gid":1,"signature_id":2402000,"rev":3334,"signature":"ET DROP Dshield Block Listed Source group 1","category":"Misc Attack","severity":2},"ips_signature":"","ips_action":"","ips_category":"","city":"Changsha","latitude":28.17919921875,"longitude":113.11360168457031,"country_code3":"CHN","country":"CN","country_name":"China","dma":null,"area":null,"region":"11","location_properties":{"lat":28.17919921875,"lon":113.11360168457031},"location_string":"28.17919921875,113.11360168457031","location_array":[113.11360168457031,28.17919921875]}

Any thoughts on how to resolve please ? Thank you.

Placeholders overridden unexpectedly

I found that placeholders are overridden if the same key appears in the input record.
Is this expected behavior?

For example, if I change the below lines of test code

https://github.com/sonots/fluent-plugin-record-reformer/blob/master/test/test_out_record_reformer.rb#L28-L31

record = {
  'eventType0' => 'bar',
  'message'    => msg,
}

to this,

record = {
  'eventType0' => 'bar',
  'message'    => msg,
  'tag'        => 'foo' # add this line
}

many tests fail because the ${tag} placeholder is now evaluated as 'foo'.

From the documentation (https://github.com/sonots/fluent-plugin-record-reformer#placeholders), I expect ${tag} to always be replaced by the input tag.
If one wants to use the record value keyed with 'tag', it should be specified as ${record["tag"]}.
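The precedence the reporter expects could be sketched like this (a hypothetical illustration of the desired behavior, not the plugin's code):

```ruby
# Reserved placeholders such as ${tag} always win; record values are
# reachable only through the explicit ${record["key"]} form.
def lookup(name, tag, record)
  reserved = { 'tag' => tag }
  if (m = name.match(/\Arecord\["(.+)"\]\z/))
    record[m[1]]
  else
    reserved.fetch(name, nil)
  end
end

record = { 'tag' => 'from_record' }
lookup('tag', 'foo.bar', record)            # => "foo.bar" (input tag wins)
lookup('record["tag"]', 'foo.bar', record)  # => "from_record"
```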

support for multi worker

Fluentd v0.14.12 supports multi process workers.
https://www.fluentd.org/blog/fluentd-v0.14.12-has-been-released
but this plugin raises an error like this:

2017-07-26 16:48:26 +0900 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="Plugin 'record_reformer' does not support multi workers con
figuration (Fluent::Plugin::RecordReformerOutput)"

Could you support multi-process workers?

fluent-plugin-gcs may support multi process workers.
https://github.com/daichirata/fluent-plugin-gcs

Catching syntax error in embedded ruby code

In the td-agent.conf file I had a quote mismatch in my embedded ruby code:

rcvtime ${time.strftime("%Y-%m-%dT%H:%M:%S%:z')}

Everything just silently crashed and failed. Maybe a rescue SyntaxError somewhere in the call tree would catch it.
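The suggested rescue could look roughly like this in plain Ruby (a sketch only; `safe_eval` is hypothetical, not the plugin's actual expander):

```ruby
# Evaluate embedded code, but turn a SyntaxError (or runtime error) into a
# warning instead of letting it crash the whole process.
def safe_eval(code)
  eval(code)
rescue SyntaxError, StandardError => e
  warn "failed to expand #{code.inspect}: #{e.class}: #{e.message}"
  nil
end

safe_eval('1 + 1')        # => 2
safe_eval('"unbalanced')  # => nil (warns instead of crashing)
```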

td-agent v1 and fluent-plugin-record-reformer

Hi,
Not sure about compatibility, but td-agent (td-agent-1.1.21-0.x86_64, CentOS 6, ruby-1.8.7) with this plugin keeps restarting:
...

type record_reformer
tag access_with_hostname
hostname ${hostname}

...

2015-02-06 08:02:07 +0100 [info]: connection established to .....
2015-02-06 08:02:07 +0100 [info]: connection established to .....

2015-02-06 08:02:07 +0100 [info]: shutting down fluentd
2015-02-06 08:02:07 +0100 [info]: process finished code=256
2015-02-06 08:02:07 +0100 [error]: fluentd main process died unexpectedly. restarting.
2015-02-06 08:02:07 +0100 [info]: starting fluentd-0.10.55
2015-02-06 08:02:07 +0100 [info]: reading config file path="/etc/td-agent/td-agent.conf"
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-mixin-config-placeholders' version '0.3.0'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-mixin-config-placeholders' version '0.2.4'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-mixin-plaintextformatter' version '0.2.6'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-datacounter' version '0.4.3'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-flume' version '0.1.1'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-grepcounter' version '0.5.5'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-mongo' version '0.7.3'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-multi-format-parser' version '0.0.2'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-record-modifier' version '0.2.0'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-record-reformer' version '0.4.0'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '1.4.1'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-s3' version '0.4.1'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-sampling-filter' version '0.1.3'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-scribe' version '0.10.12'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-secure-forward' version '0.2.5'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-td' version '0.10.22'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-td-monitoring' version '0.1.3'
2015-02-06 08:02:07 +0100 [info]: gem 'fluent-plugin-webhdfs' version '0.3.1'
2015-02-06 08:02:07 +0100 [info]: gem 'fluentd' version '0.10.55'
2015-02-06 08:02:07 +0100 [info]: using configuration file:
....

Removing time key

Hi, I think there is a problem with removing the "time" key; I don't know whether it is mandatory or an issue, so please enlighten me.
When I check my logs with "docker logs", every fluentd message contains a time key as the last field of the event, so I presume fluentd is adding it.
I am aware that the record_transformer plugin can remove keys with the remove_keys parameter, but in the sample below it does not seem to work.

Below is my simplified fluentd config,

<source>
	type forward
	port 24224
</source>
<filter SomeTag.*>
  @type record_transformer
  enable_ruby true
  remove_keys time # this doesn't seem to work
  <record>
	LogDate ${time.strftime('%Y/%m/%d %H:%M:%S')}
  </record>
</filter>
<match SomeTag.*>
	type copy
	<store>
		type elasticsearch
                ......
	</store>
	<store>
		type mongo
                ......
	</store>
	<store>
		type stdout
	</store>
</match>

Thanks for any help
