
srcache-nginx-module's Issues

Location header rewritten coming from cache

We're running into an issue where the Location header sent in a 302 response is getting changed depending on whether the response is a cache MISS or HIT. On misses, a relative URL is returned (as desired). On hits, an absolute URL is returned despite a relative URL being saved to the cache.

The srcache configuration:

srcache_response_cache_control on;
srcache_default_expire 0s;

set $key $scheme$host$request_uri$bb_accept$requested_with$cookie__bb_locale$cookie__bb_country;
set_escape_uri $escaped_key $key;

srcache_fetch GET /redis $key;
srcache_store PUT /redis2 key=$escaped_key&exptime=$srcache_expire;

When submitting a request that returns a 302 response and is a cache MISS, the Location header is relative.

curl -i -H "Host: foobar.example.com" localhost/categories/non-gmo
HTTP/1.1 302 Moved Temporarily
Server: openresty
Date: Thu, 05 Feb 2015 21:03:42 GMT
Content-Type: text/plain; charset=UTF-8
Content-Length: 64
Connection: keep-alive
Vary: X-HTTP-Method-Override, Accept
Cache-Control: public, max-age=3600
Location: /categories/non-gmo-2/products
127.0.0.1 - - [05/Feb/2015:20:59:49 +0000] "GET /categories/non-gmo HTTP/1.1" 302 64 "-" "curl/7.35.0" 0.003 0.001 - - foobar.example.com public, max-age=3600 - MISS

We can see that the response is now cached in our redis instance:

redis-cache:6379> GET foobar.example:938810f609eeeddf722e3ac68ecca339
"HTTP/1.1 302 Moved Temporarily\r\nContent-Type: text/plain; charset=UTF-8\r\nVary: X-HTTP-Method-Override, Accept\r\nCache-Control: public, max-age=3600\r\nLocation: /categories/non-gmo-2/products\r\n\r\nMoved Temporarily. Redirecting to /categories/non-gmo-2/products"

Subsequent requests that generate a cache HIT are returned with a FQDN in the Location header rather than the relative path:

curl -i -H "Host: foobar.example.com" localhost/categories/non-gmo
HTTP/1.1 302 Moved Temporarily
Server: openresty
Date: Thu, 05 Feb 2015 21:08:15 GMT
Content-Type: text/plain; charset=UTF-8
Content-Length: 64
Location: http://foobar.example.com/categories/non-gmo-2/products
Connection: keep-alive
Vary: X-HTTP-Method-Override, Accept
Cache-Control: public, max-age=3600
127.0.0.1 - - [05/Feb/2015:21:08:15 +0000] "GET /categories/non-gmo HTTP/1.1" 302 64 "-" "curl/7.35.0" 0.003 - - - foobar.example.com - - HIT

I've tried using proxy_redirect off; in the nginx configuration, but it doesn't seem to have an effect on the responses coming from srcache.

If I use srcache_store_statuses 200; to explicitly not cache redirect pages, responses include relative Location headers and aren't cached. This is less desirable in our environment, but is a temporary workaround.

Can you confirm that srcache is rewriting the Location header returned in cached responses? Is there an option to disable the behavior we're experiencing?

Thank you.

could not build the variables_hash on 1.4.1

After upgrading from 1.2.9 to 1.4.1, I receive this on startup:

[emerg] could not build the variables_hash, you should increase either variables_hash_max_size: 512 or variables_hash_bucket_size: 64

Setting variables_hash_max_size to 1024 fixes the issue, but this wasn't required for 1.2.9.

Using tag v0.21
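
For reference, the workaround as an nginx.conf snippet; placing it in the http block is my assumption, and 1024 is simply the value mentioned above:

http {
    # Raise the hash size so all module-provided variables fit again.
    variables_hash_max_size 1024;

    # ... rest of the configuration ...
}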

Ctrl + F5 in browser -> need request

Good day!

May I request a new, useful feature?
Right now, if an object is in the cache, your module will always fetch it from memcached, even if I press Ctrl + F5 in the browser.
When I press Ctrl + F5 in Firefox, for example, Firefox sends the request with a "Pragma: no-cache" HTTP header.
Common proxies and caches should then fetch the object from the origin, not from the cache.

Can I ask you to add this to the TODO:

If the request carries a "Pragma: no-cache" HTTP header, should the module answer as if the object were missing from memcached?

With this feature, an operator would be able to evict any object with Ctrl + F5 if that object was deleted at its original location.
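
For what it's worth, a minimal sketch of roughly what I mean, assuming the existing srcache_request_cache_control directive can be made to cover this case (untested):

location / {
    # When on, request headers such as "Pragma: no-cache" and
    # "Cache-Control: no-cache" should make srcache skip the cache lookup,
    # as if the object were missing from memcached.
    srcache_request_cache_control on;

    set $key $uri;
    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;

    # proxy_pass/fastcgi_pass/...
}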

Thanks! :)

memcached down

Hi
If the memcached server is down, very strange output is generated, and I see no way to control it through the module. The expected behaviour is to pass the request to the proxy without querying the cache.

srcache version: 0.12rc5

Nginx conf:

upstream storage {
  server localhost:8081;
}

upstream memcached {
  server localhost:11211;
  keepalive 512;
}

server {
    # Get static content from storage (control center) via memcached caching
    location /storage {
        set $key $uri;
        srcache_fetch GET /memc key=$key;
        srcache_store PUT /memc key=$key;
        proxy_pass http://storage;
    }

    location = /memc {
        internal;
        set $memc_key $arg_key;
        set $memc_exptime 10;
        memc_pass memcached;
    }

    # redirect server error pages to the static page /50x.html
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

Request:

curl -v http://localhost:8080/1001/1000/1.png

If memcached is up, I get my .png picture.

If memcached is down, the result is:

> GET /1001/1000/1.png HTTP/1.1
> User-Agent: curl/7.21.6 (x86_64-pc-linux-gnu) libcurl/7.21.6 OpenSSL/1.0.0e zlib/1.2.3.4 libidn/1.22 librtmp/2.3
> Host: localhost:8080
> Accept: */*
>
<html>
<head>
<title>The page is temporarily unavailable</title>
<style>
body { font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body bgcolor="white" text="black">
<table width="100%" height="100%">
<tr>
<td align="center" valign="middle">
The page you are looking for is temporarily unavailable.<br/>
Please try again later.
</td>
</tr>
</table>
</body>
</html>
HTTP/1.1 200 OK
Server: nginx/1.0.4
Date: Tue, 06 Mar 2012 08:54:00 GMT
Content-Type: image/png
Connection: keep-alive
Last-Modified: Thu, 18 Nov 2010 14:37:40 GMT
ETag: "40000a6a-8ca-49554bb626c14"
Accept-Ranges: bytes
Content-Length: 2250

�PNG
......  picture data  ......
<html>
<head>
<title>The page is temporarily unavailable</title>
<style>
body { font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body bgcolor="white" text="black">
<table width="100%" height="100%">
<tr>
<td align="center" valign="middle">
The page you are looking for is temporarily unavailable.<br/>
Please try again later.
</td>
</tr>
</table>
</body>
</html>
* Connection #0 to host localhost left intact
* Closing connection #0
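
For reference, an untested guess at a mitigation: confine the error handling of the /memc subrequest so that a dead memcached yields a plain 404 (which srcache treats as a cache miss) instead of the server-level 50x error page body leaking into the main response. The timeouts and status codes below are assumptions:

    location = /memc {
        internal;
        set $memc_key $uri;
        set $memc_exptime 10;

        memc_connect_timeout 100ms;
        memc_send_timeout 100ms;
        memc_read_timeout 100ms;
        memc_pass memcached;

        # Map memcached failures to a bare 404 inside this internal location.
        error_page 500 502 503 504 = @memc_down;
    }

    location @memc_down {
        return 404;
    }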

set_hashed_upstream in examples - incorrect?

srcache-nginx-module / README.markdown

Here is your example:

location = /memc {
    internal;

    set $memc_key $query_string;
    set_hashed_upstream $backend universe $memc_key;
    set $memc_exptime 3600; # in seconds
    memc_pass $backend;
}

location / {
    set $key $uri;
    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;

    # proxy_pass/fastcgi_pass/content_by_lua/drizzle_pass/...
}

I may be mistaken, but I think there is a potential issue here:

set $memc_key $query_string;
set_hashed_upstream $backend universe $memc_key;

I think the GET and PUT memcached subrequests will see different $query_string values, so for the same $uri in "location /" the $memc_key variable will differ between the GET and the PUT request, and set_hashed_upstream will pick different $backend values for GET and PUT. I think this will thrash the upstream servers: for GET requests of a URI, $backend might be A, for example, while for PUT requests of the same URI $backend is B.
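
In case it helps the discussion, here is the variant I had in mind, hashing on an explicit key= argument so the GET and PUT subrequests always land on the same backend (a sketch only, following the key-escaping pattern used elsewhere in these issues):

location = /memc {
    internal;

    # Hash on the cache key itself, not on the whole $query_string.
    set_unescape_uri $memc_key $arg_key;
    set_hashed_upstream $backend universe $arg_key;
    set $memc_exptime 3600; # in seconds
    memc_pass $backend;
}

location / {
    set $key $uri;
    set_escape_uri $escaped_key $key;
    srcache_fetch GET /memc key=$escaped_key;
    srcache_store PUT /memc key=$escaped_key;

    # proxy_pass/fastcgi_pass/content_by_lua/drizzle_pass/...
}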

What do you think?

Thanks :)

Forward to named route or rewrite?

Hello,

How can I forward requests that did not match any cached entry to a named route?

I have something like:

location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    rewrite / /index.php;
}

location ~ \.php$ {
    # catch 404s
    if (!-e $request_filename) { rewrite / /index.php last; }

    expires off;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Should I place srcache_fetch and srcache_store in location / or @backend? Are there any issues if the fallback is try_files or rewrite?
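
For comparison, here is a sketch of one placement I have considered: attaching srcache to the location that finally produces the response, i.e. the PHP location, so it still applies after the try_files/rewrite internal redirect (the cache key and the /memc fetch/store locations are assumptions on my part):

location ~ \.php$ {
    # catch 404s
    if (!-e $request_filename) { rewrite / /index.php last; }

    set $key $host$request_uri;
    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;

    expires off;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}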

Thank you in advance!

proxy_cache_use_stale-like directive

The proxy_cache_use_stale directive determines in which cases a stale cached response can be used when an error occurs during communication with the proxied server.

But when I am using srcache + redis, redis always deletes the keys...

How do you use redis with password?

Hey!

I would really like to use this module but the redis in my environment needs to have authentication turned on.

I tried to solve this by using content_by_lua, but I don't know how.

This is what I would want to do, but the nginx redis and redis2 modules don't support redis passwords:

location /api {
     default_type text/css;

     set $key $uri;
     set_escape_uri $escaped_key $key;

     srcache_fetch GET /redis $key;
     srcache_store PUT /redis2 key=$escaped_key&exptime=120;

     # fastcgi_pass/proxy_pass/drizzle_pass/postgres_pass/echo/etc
 }

 location = /redis {
     internal;

     set_md5 $redis_key $args;
     redis_pass 127.0.0.1:6379;
 }

 location = /redis2 {
     internal;

     set_unescape_uri $exptime $arg_exptime;
     set_unescape_uri $key $arg_key;
     set_md5 $key;

     redis2_query set $key $echo_request_body;
     redis2_query expire $key $exptime;
     redis2_pass 127.0.0.1:6379;
 }
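
One idea I have not been able to verify: for the store side at least, the redis2 module can pipeline several queries per request, so an AUTH command could perhaps be sent before the SET (a sketch only; the password is a placeholder):

 location = /redis2 {
     internal;

     set_unescape_uri $exptime $arg_exptime;
     set_unescape_uri $key $arg_key;
     set_md5 $key;

     # Authenticate first, then store; all queries are pipelined.
     redis2_query auth "my-redis-password";
     redis2_query set $key $echo_request_body;
     redis2_query expire $key $exptime;
     redis2_pass 127.0.0.1:6379;
 }

The fetch side is the part I am stuck on, since the plain redis module used in /redis has no equivalent of an AUTH query as far as I can tell.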

If-None-Match makes the cache MISS always

Hi!

When If-None-Match is set, the cache always misses.

Test:

curl -I --header "If-None-Match: 23333333" http://xx.xx.fm/api/xxxxx
I think it's a bug, or it should at least be configurable.

HTTP/1.1 200 OK
X-Cached-From: MISS
X-Cached-Store: BYPASS

I used srcache-nginx-module-0.30 with openresty 1.9.7.4

this is my config

    location ^~ /api/ {

        add_header X-Cached-From $srcache_fetch_status;
        add_header X-Cached-Store $srcache_store_status;

       set $key $uri;
       set_escape_uri $escaped_key $key;

       set_by_lua $exptime '
           if string.find(ngx.var.uri, "/api/liveshow/rank/") then
               return 300
           end
           return 60
       ';

       srcache_fetch GET /redis $key;
       srcache_store PUT /redis2 key=$escaped_key&exptime=$exptime;

       srcache_methods GET;

       proxy_pass http://127.0.0.1:8881/api/;
   }
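
One workaround I am considering (untested; it assumes the headers-more module bundled with OpenResty): strip the conditional request headers before srcache runs, so the cache and the upstream always see an unconditional GET:

    location ^~ /api/ {
        # Drop conditional headers so If-None-Match can no longer force a MISS.
        more_clear_input_headers "If-None-Match" "If-Modified-Since";

        # ... the srcache_fetch/srcache_store setup from above ...
    }

The obvious downside is that clients lose 304 revalidation entirely, so a proper fix or a configuration switch in the module would still be preferable.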

thx:)

Request for variables to use in logging and/or headers

As a lot of people have already said, this is a fantastic nginx module - and I agree, many thanks.

I am currently using it with a couchbase server and will eventually extend to running multiple nginx caches behind a couchbase cluster on a loadbalancer (or more).

I have been trying to find a way to determine how srcache served the content.

That is, the cache HIT/MISS/STORE status, so I can include it in a header and/or in cache logging.

I would ultimately like to log bytes sent during a cache hit so I can calculate monthly usage to graph alongside Apache's monthly usage for comparison.

Is this already possible?
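
From the other issues here it looks like the module exposes $srcache_fetch_status and $srcache_store_status; combined with nginx's own $bytes_sent, a sketch of what I am after might be (untested):

add_header X-Cache-Fetch $srcache_fetch_status;
add_header X-Cache-Store $srcache_store_status;

log_format cachelog '$remote_addr [$time_local] "$request" $status '
                    '$bytes_sent fetch=$srcache_fetch_status store=$srcache_store_status';
access_log /var/log/nginx/cache.log cachelog;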

Storing > 1MB objects in memcached is actually possible

Typically by starting memcached with -I 10m.
It also seems to require nginx's client_max_body_size 10M;, or else I get

client intended to send too large body: 1222117 bytes, client: 127.0.0.1, server: , request: "GET /css/semantic.css HTTP/1.1", subrequest: "/_memc", host: "localhost:3001", referrer: "http://localhost:3001/"
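
Since the complaint is about the store subrequest body (the whole cached response travels as the body of the PUT to the internal memc location), the directive presumably needs to be visible to that location; a sketch of what I mean, with 10m mirroring memcached -I (whether the internal location or the server level is the right scope is an assumption on my part):

location = /_memc {
    internal;

    # Allow store subrequest bodies up to the memcached item size limit.
    client_max_body_size 10m;

    set $memc_key $query_string;
    memc_pass 127.0.0.1:11211;
}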

Help with configuration of specific folder

Hi!

As I mentioned earlier, I'm using srcache to serve assets like CSS/JS/images.

My application does the mapping and generates URLs like:

http://frontdoor.ricardo/gimme/a04da9f9fd/pkg/css/index.css
http://frontdoor.ricardo/gimme/e82d990d14/js/backbone.js

So I'm trying to enable srcache for just this folder: /gimme

Here is my config:

upstream memcached {
    server    127.0.0.1:11211;
    #keepalive 1024 single;
}

# frontdoor
server {
    server_name frontdoor.*;

#   memc_connect_timeout 100ms;

    root   /Users/ricardo/Sites/frontdoor/www;

    fastcgi_read_timeout 3600;

    location = /memc {
        internal;

        memc_connect_timeout 250ms;
        memc_send_timeout 250ms;
        memc_read_timeout 250ms;

        set $memc_key $query_string;
        set $memc_exptime 0; # never expires

        memc_pass memcached;
    }

    location / {            
        try_files $uri /index.php$is_args$args;
    }

    location /gimme {
        set $key $http_host$request_uri;
        srcache_fetch GET /memc $key;
        srcache_store PUT /memc $key;
        srcache_store_statuses 200 301 302 304 404;

        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {     
        include        fastcgi.conf;
        fastcgi_pass   localhost:9000;
    }
}

This way it does not write to memcached, and it's a miss every time. If I move the srcache directives inside location ~ \.php$ it works, but I do not want the overhead of a cache lookup for requests outside the /gimme folder.

Can you help me do this the best way possible?
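
In case it clarifies what I am after, here is a sketch of the direction I am experimenting with: keep the srcache directives in the PHP location (where they do work) and switch them off for everything that did not come in under /gimme, using srcache_fetch_skip/srcache_store_skip driven by a map (untested):

# http {} level
map $request_uri $skip_cache {
    default      1;
    ~^/gimme/    0;
}

    location ~ \.php$ {
        include        fastcgi.conf;
        fastcgi_pass   localhost:9000;

        set $key $http_host$request_uri;
        srcache_fetch_skip $skip_cache;
        srcache_store_skip $skip_cache;
        srcache_fetch GET /memc $key;
        srcache_store PUT /memc $key;
        srcache_store_statuses 200 301 302 304 404;
    }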

Thx

What to use for SSI/ESI like feature?

Hello,

I am working on a project where I need an SSI/ESI-like feature. For full page caching I am now using the srcache-nginx-module with Redis and a PHP backend. There will be at least 3 blocks per page which I need hole-punched with something like SSI/ESI, and these will also be cached.

SSI does not play well with Lua; on the other hand, if I use SSI with srcache_fetch alone, it won't work on my sub-requests.

What would be a good (elegant and also performance-wise) approach to replace SSI/ESI tags in both the backend and srcache_fetch responses? I liked replace-filter-nginx-module, but it does not support Lua replacements, which I need.

I've done some research and testing, but nothing has worked so far:

  • Use a template engine - there is no buffering, and how would cached responses be processed?
  • Use body_filter_by_lua - it would fail when a chunk ends in the middle of a placeholder.

Thank you in advance!

Save gzip and normal version

Hi!

First of all, really GREAT work. I'm using it with our JS/CSS packer and it is working really well, because the URLs are hashed so I can store the responses in memcached forever, all automatically. With just the memc module I had to write the assets to memcached from the application backend, and I did not like that.

Your TODO mentions gzip. So I would like to check whether it matches what I have in mind:

Right now I'm using the nginx gzip module to gzip CSS/JS/HTML on the fly, after it comes from srcache or the backend. If srcache could save a gzipped version of the content, it could be served directly. Whatever doesn't go through srcache would still be gzipped on the fly.

Note that it would also be necessary to save a non-gzipped version for IE6. You could even read the gzip_disable variable.

Is that the way you are thinking?

Thanks, and keep up the great work!

srcache won't cache

I tried for many hours to solve this on my own, to no avail. I tested nginx versions 1.4.0 and 1.3.11; neither worked. I also ignored all headers that could interfere. No matter what I try, I only ever get a fetch MISS and a store BYPASS.

debug log (grep srcache): http://pastebin.com/4c9FnvJw
nginx -V: http://pastebin.com/2WkAF62Y
config sample 1: http://pastebin.com/n7tyf5bx
config sample 2: http://pastebin.com/rCSxY1Bh

In sample 2 I tried adding everything I could think of to rule out any problem with the backend (content-encoding etc.). The page being proxied is http://www.x4b.org/main.1364042305.css

Why does it still get into the srcache_store handler even if $srcache_expire=0?

I just put my srcache settings directly under the server block:

server {
    set $canonical_host www.example.com;

    # ...

    srcache_default_expire 0;
    set_escape_uri $pagecache_key $canonical_host$uri$is_args$args;
    srcache_fetch GET /pagecache key=$pagecache_key;
    srcache_store PUT /pagecache key=$pagecache_key&exptime=$srcache_expire;
    include srcache_status_headers;

    # ...
}

You see, I use srcache_default_expire, so $srcache_expire will default to 0;
I expect my upstreams to set an appropriate exptime via max-age.

However, my srcache_store gets invoked even when $srcache_expire == 0,
which then causes an extra, useless redis request.

location = /redis_put {
    # tested OK with SSDB
    internal;
    set_unescape_uri $exptime $arg_exptime;

    redis2_query setx $redis_key $echo_request_body $exptime;

    redis2_pass pagecache_ssdb;
}
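
The best I have come up with so far is a guard inside the store location itself, which at least avoids the round trip to redis/SSDB when the expire is zero (untested sketch; srcache will still issue the store subrequest, it just fails fast):

location = /redis_put {
    # tested OK with SSDB
    internal;
    set_unescape_uri $exptime $arg_exptime;

    # Bail out before talking to the backend when no expiry was computed.
    if ($exptime = 0) {
        return 403;
    }

    redis2_query setx $redis_key $echo_request_body $exptime;

    redis2_pass pagecache_ssdb;
}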

$srcache_expire is always empty

Hey!

I have been trying srcache with redis for a few weeks now, and today I noticed that the cache keys are not expiring automatically.

I'm using this config in a local docker environment:

##
# Adds internal locations for storing and getting full page cache from redis
##

srcache_default_expire 10s;
srcache_max_expire 10s;

location /redis-fetch {
    internal;

    ##
    # In order to use password authentication we use custom redis module which adds $redis_auth:
    # - https://github.com/Yongke/ngx_http_redis-0.3.7
    ##

    # Read the configuration from system envs
    set $redis_auth '';
    set $redis_db 0;

    set $redis_key $args;

    redis_pass 172.17.0.6:6379;
}

location /redis-store {
    internal;

    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;

    # the redis2 module pipelines these commands into a single request
    redis2_query auth '';
    redis2_query select 0;
    redis2_query set $key $echo_request_body;
    redis2_query expire $key $srcache_expire;

    # Pass the request to redis
    redis2_pass 172.17.0.6:6379;

}

Then I open redis-cli to monitor incoming requests:

$ redis-cli
> monitor
1478704802.63890 [0 172.17.0.8:54906] "select" "0"
1478704802.638100 [0 172.17.0.8:54906] "get" "wp_:nginx:httpsGETwordpress.test/robots.txt"
1478704802.638122 [0 172.17.0.8:54900] "auth" ""
1478704802.638137 [0 172.17.0.8:54900] "select" "0"
1478704802.638148 [0 172.17.0.8:54900] "set" "wp_:nginx:httpsGETwordpress.test/robots.txt" "HTTP/1.1 200 OK\r\nContent-Type: text/plain; charset=utf-8\r\nExpires: Thu, 19 Nov 1981 08:52:00 GMT\r\nCache-Control: no-store, no-cache, must-revalidate\r\nPragma: no-cache\r\nX-Robots-Tag: noindex, follow\r\nLink: <https://wordpress.test/wp-json/>; rel=\"https://api.w.org/\"\r\nLink: <https://wordpress.test/wp-json>; rel=\"https://github.com/WP-API/WP-API\"\r\nX-Cache: MISS\r\n\r\nUser-agent: *\nDisallow: /wp-admin/\nAllow: /wp-admin/admin-ajax.php\n"
1478704802.638189 [0 172.17.0.8:54900] "expire" "wp_:nginx:httpsGETwordpress.test/robots.txt" ""

Where I should see 10s (because of the srcache_max_expire directive) or 0 (because of the Cache-Control: no-store, no-cache, must-revalidate header), I instead see "" being sent to redis.
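
For reference, the pattern I expected to need, based on the other configurations in this tracker: pass the computed expire out of the main location as a query argument of the store subrequest, and use the unescaped $arg_exptime (not $srcache_expire) inside the store location. The surrounding location and key variables below are assumptions, since my actual srcache_store line is not shown above:

# in the location being cached
srcache_fetch GET /redis-fetch $key;
srcache_store PUT /redis-store key=$escaped_key&exptime=$srcache_expire;

location /redis-store {
    internal;

    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;

    redis2_query auth '';
    redis2_query select 0;
    redis2_query set $key $echo_request_body;
    # Use the argument that arrived with the subrequest, not $srcache_expire.
    redis2_query expire $key $exptime;

    redis2_pass 172.17.0.6:6379;
}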

Version:

nginx -V
nginx version: openresty/1.11.2.1
built by gcc 4.9.2 (Debian 4.9.2-10)
built with OpenSSL 1.0.2j  26 Sep 2016
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx/nginx --with-cc-opt=-O2 --add-module=../ngx_devel_kit-0.3.0 --add-module=../echo-nginx-module-0.60 --add-module=../xss-nginx-module-0.05 --add-module=../ngx_coolkit-0.2rc3 --add-module=../set-misc-nginx-module-0.31 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.06 --add-module=../srcache-nginx-module-0.31 --add-module=../ngx_lua-0.10.6 --add-module=../ngx_lua_upstream-0.06 --add-module=../headers-more-nginx-module-0.31 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.17 --add-module=../redis2-nginx-module-0.13 --add-module=../rds-json-nginx-module-0.14 --add-module=../rds-csv-nginx-module-0.07 --with-ld-opt=-Wl,-rpath,/etc/nginx/luajit/lib --with-http_addition_module --with-http_auth_request_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_geoip_module=dynamic --with-file-aio --with-ipv6 --with-pcre-jit --with-stream --with-stream_ssl_module --with-threads --without-http_autoindex_module --without-http_browser_module --without-http_userid_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --without-http_split_clients_module --without-http_uwsgi_module --without-http_scgi_module --without-http_referer_module --user=nginx --group=nginx --sbin-path=/usr/sbin --modules-path=/usr/lib/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx/nginx.lock --http-fastcgi-temp-path=/tmp/nginx/fastcgi --http-proxy-temp-path=/tmp/nginx/proxy --http-client-body-temp-path=/tmp/nginx/client_body --add-module=/tmp/ngx_http_redis-0.3.7-master --add-module=/tmp/ngx_pagespeed-1.11.33.4-beta --with-openssl=/tmp/openssl-1.0.2j

A queue for multiple simultaneous requests to the same content.

I am trying to use srcache + redis as a replacement for Varnish, since srcache offers a shared cache. However, there is one thing I am missing. Let me describe a simple scenario.

We have 1000 concurrent users on the front page. We need to flush the redis cache for some reason. When the cache is empty, each of the concurrent requests for the same content (same cache key) will be sent to the backend, which might result in backend failure.

It would be great if srcache supported some queue for multiple requests to the same content, just like proxy_cache does, so the load on the backend is reduced.

@see: https://www.nginx.com/resources/admin-guide/content-caching/#slice
“Sometimes, the initial cache fill operation may take some time, especially for large files. When the first request starts downloading a part of a video file, next requests will have to wait for the entire file to be downloaded and put into the cache.”

example using srcache with php-fpm

Hi,

We are trying to use srcache with php-fpm. Please advise if you have any reference we can use for our implementation.
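
For context, here is roughly the shape of setup we have in mind, pieced together from the README and other issues here (addresses, key, and expiry are placeholders; untested):

location ~ \.php$ {
    set $key $host$request_uri;
    srcache_fetch GET /memc $key;
    srcache_store PUT /memc $key;

    fastcgi_pass 127.0.0.1:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

location = /memc {
    internal;

    set $memc_key $query_string;
    set $memc_exptime 300;
    memc_pass 127.0.0.1:11211;
}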

Thank you,
Vishal

send 304 header by this module

I made a location that uses Lua code to check the ETag and send a 304 status code.
But when I use this module to cache that location, it always hits the cache, sends a 200 status code, and sends the body every time.

help for srcache with redis for elgg

Hello,
I am trying to set up nginx with redis for Elgg 1.11.
It is working fine, but I am running into the following issue:
login and logout are not working.

I have attached my nginx conf file.

nginx.txt
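
One thing I still plan to check (a sketch only; the cookie name "Elgg" is an assumption about the session cookie my install uses): skip both the fetch and the store for requests that carry a session cookie, so logged-in traffic and the login/logout flow never touch the page cache:

map $http_cookie $skip_cache {
    default    0;
    ~Elgg      1;
}

server {
    # ...
    srcache_fetch_skip $skip_cache;
    srcache_store_skip $skip_cache;
}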

Please correct doc :)

My English is not good, so I will not do a pull request.

README.markdown -> Caveats -> "So it's necessary to use the charset, default_type and add_header directives ..."

I am working with your module now and noticed that the default_type set in the location that also holds your directives is only used by nginx when the response from proxy_pass (or a static file) does not match a MIME type by its extension suffix in the mime.types config.
For example, if I have the following config:

  location / {
      default_type  application/octet-stream;
      proxy_pass    http://proxy_address;
      ....

      set $key $http_host$uri;
      set_md5 $key;
      srcache_fetch GET /memc $key;
      srcache_store PUT /memc $key;
      expires       12h;
  }

For images (*.jpg) I get image/jpeg.
I think this HTTP header is set by the local nginx when the object is already in the cache (nginx sees that there is no content type and looks one up in the mime.types config file).

I think this should be in the docs, because somebody might stop using your module if they think they have to describe a separate location for every MIME type.

Thanks

multiple definition of `ngx_http_srcache_filter_module' when compiling openresty

Version: openresty-1.11.2.5

config

./configure \
--prefix=/usr/local/openresty \
--sbin-path=/usr/local/openresty/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/data/logs/error.log \
--http-log-path=/data/logs/access.log \
--pid-path=/var/run/openresty.pid  \
--lock-path=/var/lock/openresty.lock \
--user=nginx \
--group=nginx \
--with-cc-opt=-DTCP_FASTOPEN=23 \
--with-ipv6 \
--with-file-aio \
--with-threads \
--with-http_iconv_module \
--with-http_gzip_static_module \
--with-http_v2_module \
--with-http_ssl_module \
--with-http_realip_module \
--with-http_degradation_module \
--with-http_geoip_module \
--with-stream \
--with-stream_ssl_module \
--with-openssl=./bundle/openssl-1.0.2j \
--with-openssl-opt=-DOPENSSL_THREADS\ -pthread\ -D_REENTRANT\ -D_THREAD_SAFE\ -D_THREADSAFE \
--with-pcre=./bundle/pcre-8.39 \
--add-module=./bundle/srcache-nginx-module-0.31 \
--without-http_redis2_module \
--with-debug

error stack

cc -o objs/nginx \
        objs/src/core/nginx.o \
        objs/src/core/ngx_log.o \
        objs/src/core/ngx_palloc.o \
        objs/src/core/ngx_array.o \
        objs/src/core/ngx_list.o \
        objs/src/core/ngx_hash.o \
        objs/src/core/ngx_buf.o \
        objs/src/core/ngx_queue.o \
        objs/src/core/ngx_output_chain.o \
        objs/src/core/ngx_string.o \
        objs/src/core/ngx_parse.o \
        objs/src/core/ngx_parse_time.o \
        objs/src/core/ngx_inet.o \
        objs/src/core/ngx_file.o \
        objs/src/core/ngx_crc32.o \
        objs/src/core/ngx_murmurhash.o \
        objs/src/core/ngx_md5.o \
        objs/src/core/ngx_sha1.o \
        objs/src/core/ngx_rbtree.o \
        objs/src/core/ngx_radix_tree.o \
        objs/src/core/ngx_slab.o \
        objs/src/core/ngx_times.o \
        objs/src/core/ngx_shmtx.o \
        objs/src/core/ngx_connection.o \
        objs/src/core/ngx_cycle.o \
        objs/src/core/ngx_spinlock.o \
        objs/src/core/ngx_rwlock.o \
        objs/src/core/ngx_cpuinfo.o \
        objs/src/core/ngx_conf_file.o \
        objs/src/core/ngx_module.o \
        objs/src/core/ngx_resolver.o \
        objs/src/core/ngx_open_file_cache.o \
        objs/src/core/ngx_crypt.o \
        objs/src/core/ngx_proxy_protocol.o \
        objs/src/core/ngx_syslog.o \
        objs/src/event/ngx_event.o \
        objs/src/event/ngx_event_timer.o \
        objs/src/event/ngx_event_posted.o \
        objs/src/event/ngx_event_accept.o \
        objs/src/event/ngx_event_connect.o \
        objs/src/event/ngx_event_pipe.o \
        objs/src/os/unix/ngx_time.o \
        objs/src/os/unix/ngx_errno.o \
        objs/src/os/unix/ngx_alloc.o \
        objs/src/os/unix/ngx_files.o \
        objs/src/os/unix/ngx_socket.o \
        objs/src/os/unix/ngx_recv.o \
        objs/src/os/unix/ngx_readv_chain.o \
        objs/src/os/unix/ngx_udp_recv.o \
        objs/src/os/unix/ngx_send.o \
        objs/src/os/unix/ngx_writev_chain.o \
        objs/src/os/unix/ngx_udp_send.o \
        objs/src/os/unix/ngx_channel.o \
        objs/src/os/unix/ngx_shmem.o \
        objs/src/os/unix/ngx_process.o \
        objs/src/os/unix/ngx_daemon.o \
        objs/src/os/unix/ngx_setaffinity.o \
        objs/src/os/unix/ngx_setproctitle.o \
        objs/src/os/unix/ngx_posix_init.o \
        objs/src/os/unix/ngx_user.o \
        objs/src/os/unix/ngx_dlopen.o \
        objs/src/os/unix/ngx_process_cycle.o \
        objs/src/os/unix/ngx_linux_init.o \
        objs/src/event/modules/ngx_epoll_module.o \
        objs/src/os/unix/ngx_linux_sendfile_chain.o \
        objs/src/os/unix/ngx_linux_aio_read.o \
        objs/src/core/ngx_thread_pool.o \
        objs/src/os/unix/ngx_thread_cond.o \
        objs/src/os/unix/ngx_thread_mutex.o \
        objs/src/os/unix/ngx_thread_id.o \
        objs/src/event/ngx_event_openssl.o \
        objs/src/event/ngx_event_openssl_stapling.o \
        objs/src/core/ngx_regex.o \
        objs/src/http/ngx_http.o \
        objs/src/http/ngx_http_core_module.o \
        objs/src/http/ngx_http_special_response.o \
        objs/src/http/ngx_http_request.o \
        objs/src/http/ngx_http_parse.o \
        objs/src/http/modules/ngx_http_log_module.o \
        objs/src/http/ngx_http_request_body.o \
        objs/src/http/ngx_http_variables.o \
        objs/src/http/ngx_http_script.o \
        objs/src/http/ngx_http_upstream.o \
        objs/src/http/ngx_http_upstream_round_robin.o \
        objs/src/http/ngx_http_file_cache.o \
        objs/src/http/ngx_http_write_filter_module.o \
        objs/src/http/ngx_http_header_filter_module.o \
        objs/src/http/modules/ngx_http_chunked_filter_module.o \
        objs/src/http/v2/ngx_http_v2_filter_module.o \
        objs/src/http/modules/ngx_http_range_filter_module.o \
        objs/src/http/modules/ngx_http_gzip_filter_module.o \
        objs/src/http/ngx_http_postpone_filter_module.o \
        objs/src/http/modules/ngx_http_ssi_filter_module.o \
        objs/src/http/modules/ngx_http_charset_filter_module.o \
        objs/src/http/modules/ngx_http_userid_filter_module.o \
        objs/src/http/modules/ngx_http_headers_filter_module.o \
        objs/src/http/ngx_http_copy_filter_module.o \
        objs/src/http/modules/ngx_http_not_modified_filter_module.o \
        objs/src/http/v2/ngx_http_v2.o \
        objs/src/http/v2/ngx_http_v2_table.o \
        objs/src/http/v2/ngx_http_v2_huff_decode.o \
        objs/src/http/v2/ngx_http_v2_huff_encode.o \
        objs/src/http/v2/ngx_http_v2_module.o \
        objs/src/http/modules/ngx_http_static_module.o \
        objs/src/http/modules/ngx_http_gzip_static_module.o \
        objs/src/http/modules/ngx_http_autoindex_module.o \
        objs/src/http/modules/ngx_http_index_module.o \
        objs/src/http/modules/ngx_http_auth_basic_module.o \
        objs/src/http/modules/ngx_http_access_module.o \
        objs/src/http/modules/ngx_http_limit_conn_module.o \
        objs/src/http/modules/ngx_http_limit_req_module.o \
        objs/src/http/modules/ngx_http_realip_module.o \
        objs/src/http/modules/ngx_http_geo_module.o \
        objs/src/http/modules/ngx_http_geoip_module.o \
        objs/src/http/modules/ngx_http_map_module.o \
        objs/src/http/modules/ngx_http_split_clients_module.o \
        objs/src/http/modules/ngx_http_referer_module.o \
        objs/src/http/modules/ngx_http_rewrite_module.o \
        objs/src/http/modules/ngx_http_ssl_module.o \
        objs/src/http/modules/ngx_http_proxy_module.o \
        objs/src/http/modules/ngx_http_fastcgi_module.o \
        objs/src/http/modules/ngx_http_uwsgi_module.o \
        objs/src/http/modules/ngx_http_scgi_module.o \
        objs/src/http/modules/ngx_http_memcached_module.o \
        objs/src/http/modules/ngx_http_empty_gif_module.o \
        objs/src/http/modules/ngx_http_browser_module.o \
        objs/src/http/modules/ngx_http_degradation_module.o \
        objs/src/http/modules/ngx_http_upstream_hash_module.o \
        objs/src/http/modules/ngx_http_upstream_ip_hash_module.o \
        objs/src/http/modules/ngx_http_upstream_least_conn_module.o \
        objs/src/http/modules/ngx_http_upstream_keepalive_module.o \
        objs/src/http/modules/ngx_http_upstream_zone_module.o \
        objs/src/stream/ngx_stream.o \
        objs/src/stream/ngx_stream_variables.o \
        objs/src/stream/ngx_stream_script.o \
        objs/src/stream/ngx_stream_handler.o \
        objs/src/stream/ngx_stream_core_module.o \
        objs/src/stream/ngx_stream_proxy_module.o \
        objs/src/stream/ngx_stream_upstream.o \
        objs/src/stream/ngx_stream_upstream_round_robin.o \
        objs/src/stream/ngx_stream_ssl_module.o \
        objs/src/stream/ngx_stream_limit_conn_module.o \
        objs/src/stream/ngx_stream_access_module.o \
        objs/src/stream/ngx_stream_map_module.o \
        objs/src/stream/ngx_stream_return_module.o \
        objs/src/stream/ngx_stream_upstream_hash_module.o \
        objs/src/stream/ngx_stream_upstream_least_conn_module.o \
        objs/src/stream/ngx_stream_upstream_zone_module.o \
        objs/addon/src/ndk.o \
        objs/addon/src/ngx_http_iconv_module.o \
        objs/addon/src/ngx_http_echo_module.o \
        objs/addon/src/ngx_http_echo_util.o \
        objs/addon/src/ngx_http_echo_timer.o \
        objs/addon/src/ngx_http_echo_var.o \
        objs/addon/src/ngx_http_echo_handler.o \
        objs/addon/src/ngx_http_echo_filter.o \
        objs/addon/src/ngx_http_echo_sleep.o \
        objs/addon/src/ngx_http_echo_location.o \
        objs/addon/src/ngx_http_echo_echo.o \
        objs/addon/src/ngx_http_echo_request_info.o \
        objs/addon/src/ngx_http_echo_subrequest.o \
        objs/addon/src/ngx_http_echo_foreach.o \
        objs/addon/src/ngx_http_xss_filter_module.o \
        objs/addon/src/ngx_http_xss_util.o \
        objs/addon/src/ngx_coolkit_handlers.o \
        objs/addon/src/ngx_coolkit_module.o \
        objs/addon/src/ngx_coolkit_variables.o \
        objs/addon/src/ngx_http_set_base32.o \
        objs/addon/src/ngx_http_set_default_value.o \
        objs/addon/src/ngx_http_set_hashed_upstream.o \
        objs/addon/src/ngx_http_set_quote_sql.o \
        objs/addon/src/ngx_http_set_quote_json.o \
        objs/addon/src/ngx_http_set_unescape_uri.o \
        objs/addon/src/ngx_http_set_misc_module.o \
        objs/addon/src/ngx_http_set_escape_uri.o \
        objs/addon/src/ngx_http_set_hash.o \
        objs/addon/src/ngx_http_set_local_today.o \
        objs/addon/src/ngx_http_set_hex.o \
        objs/addon/src/ngx_http_set_base64.o \
        objs/addon/src/ngx_http_set_random.o \
        objs/addon/src/ngx_http_set_secure_random.o \
        objs/addon/src/ngx_http_set_rotate.o \
        objs/addon/src/ngx_http_set_hmac.o \
        objs/addon/src/ngx_http_form_input_module.o \
        objs/addon/src/ngx_http_encrypted_session_module.o \
        objs/addon/src/ngx_http_encrypted_session_cipher.o \
        objs/addon/src/ngx_http_srcache_filter_module.o \
        objs/addon/src/ngx_http_srcache_util.o \
        objs/addon/src/ngx_http_srcache_var.o \
        objs/addon/src/ngx_http_srcache_store.o \
        objs/addon/src/ngx_http_srcache_fetch.o \
        objs/addon/src/ngx_http_srcache_headers.o \
        objs/addon/src/ngx_http_lua_script.o \
        objs/addon/src/ngx_http_lua_log.o \
        objs/addon/src/ngx_http_lua_subrequest.o \
        objs/addon/src/ngx_http_lua_ndk.o \
        objs/addon/src/ngx_http_lua_control.o \
        objs/addon/src/ngx_http_lua_time.o \
        objs/addon/src/ngx_http_lua_misc.o \
        objs/addon/src/ngx_http_lua_variable.o \
        objs/addon/src/ngx_http_lua_string.o \
        objs/addon/src/ngx_http_lua_output.o \
        objs/addon/src/ngx_http_lua_headers.o \
        objs/addon/src/ngx_http_lua_req_body.o \
        objs/addon/src/ngx_http_lua_uri.o \
        objs/addon/src/ngx_http_lua_args.o \
        objs/addon/src/ngx_http_lua_ctx.o \
        objs/addon/src/ngx_http_lua_regex.o \
        objs/addon/src/ngx_http_lua_module.o \
        objs/addon/src/ngx_http_lua_headers_out.o \
        objs/addon/src/ngx_http_lua_headers_in.o \
        objs/addon/src/ngx_http_lua_directive.o \
        objs/addon/src/ngx_http_lua_consts.o \
        objs/addon/src/ngx_http_lua_exception.o \
        objs/addon/src/ngx_http_lua_util.o \
        objs/addon/src/ngx_http_lua_cache.o \
        objs/addon/src/ngx_http_lua_contentby.o \
        objs/addon/src/ngx_http_lua_rewriteby.o \
        objs/addon/src/ngx_http_lua_accessby.o \
        objs/addon/src/ngx_http_lua_setby.o \
        objs/addon/src/ngx_http_lua_capturefilter.o \
        objs/addon/src/ngx_http_lua_clfactory.o \
        objs/addon/src/ngx_http_lua_pcrefix.o \
        objs/addon/src/ngx_http_lua_headerfilterby.o \
        objs/addon/src/ngx_http_lua_shdict.o \
        objs/addon/src/ngx_http_lua_socket_tcp.o \
        objs/addon/src/ngx_http_lua_api.o \
        objs/addon/src/ngx_http_lua_logby.o \
        objs/addon/src/ngx_http_lua_sleep.o \
        objs/addon/src/ngx_http_lua_semaphore.o \
        objs/addon/src/ngx_http_lua_coroutine.o \
        objs/addon/src/ngx_http_lua_bodyfilterby.o \
        objs/addon/src/ngx_http_lua_initby.o \
        objs/addon/src/ngx_http_lua_initworkerby.o \
        objs/addon/src/ngx_http_lua_socket_udp.o \
        objs/addon/src/ngx_http_lua_req_method.o \
        objs/addon/src/ngx_http_lua_phase.o \
        objs/addon/src/ngx_http_lua_uthread.o \
        objs/addon/src/ngx_http_lua_timer.o \
        objs/addon/src/ngx_http_lua_config.o \
        objs/addon/src/ngx_http_lua_worker.o \
        objs/addon/src/ngx_http_lua_ssl_certby.o \
        objs/addon/src/ngx_http_lua_ssl_ocsp.o \
        objs/addon/src/ngx_http_lua_lex.o \
        objs/addon/src/ngx_http_lua_balancer.o \
        objs/addon/src/ngx_http_lua_ssl_session_storeby.o \
        objs/addon/src/ngx_http_lua_ssl_session_fetchby.o \
        objs/addon/src/ngx_http_lua_ssl.o \
        objs/addon/src/ngx_http_lua_upstream_module.o \
        objs/addon/src/ngx_http_headers_more_filter_module.o \
        objs/addon/src/ngx_http_headers_more_headers_out.o \
        objs/addon/src/ngx_http_headers_more_headers_in.o \
        objs/addon/src/ngx_http_headers_more_util.o \
        objs/addon/src/ngx_http_array_var_module.o \
        objs/addon/src/ngx_http_array_var_util.o \
        objs/addon/src/ngx_http_memc_module.o \
        objs/addon/src/ngx_http_memc_request.o \
        objs/addon/src/ngx_http_memc_response.o \
        objs/addon/src/ngx_http_memc_util.o \
        objs/addon/src/ngx_http_memc_handler.o \
        objs/addon/redis-nginx-module-0.3.7/ngx_http_redis_module.o \
        objs/addon/src/ngx_http_rds_json_filter_module.o \
        objs/addon/src/ngx_http_rds_json_processor.o \
        objs/addon/src/ngx_http_rds_json_util.o \
        objs/addon/src/ngx_http_rds_json_output.o \
        objs/addon/src/ngx_http_rds_json_handler.o \
        objs/addon/src/ngx_http_rds_csv_filter_module.o \
        objs/addon/src/ngx_http_rds_csv_processor.o \
        objs/addon/src/ngx_http_rds_csv_util.o \
        objs/addon/src/ngx_http_rds_csv_output.o \
        objs/addon/src/ngx_http_srcache_filter_module.o \
        objs/addon/src/ngx_http_srcache_util.o \
        objs/addon/src/ngx_http_srcache_var.o \
        objs/addon/src/ngx_http_srcache_store.o \
        objs/addon/src/ngx_http_srcache_fetch.o \
        objs/addon/src/ngx_http_srcache_headers.o \
        objs/ngx_modules.o \
        -L/nas/software/openresty-1.11.2.4/build/luajit-root/usr/local/openresty/luajit/lib -Wl,-rpath,/usr/local/openresty/luajit/lib -Wl,-E -ldl -lpthread -lpthread -lcrypt -L/nas/software/openresty-1.11.2.4/build/luajit-root/usr/local/openresty/luajit/lib -lluajit-5.1 -lm -ldl /nas/software/openresty-1.11.2.4/bundle/pcre-8.39/.libs/libpcre.a /nas/software/openresty-1.11.2.4/bundle/openssl-1.0.2j/.openssl/lib/libssl.a /nas/software/openresty-1.11.2.4/bundle/openssl-1.0.2j/.openssl/lib/libcrypto.a -ldl -lz -lGeoIP \        
-Wl,-E
objs/addon/src/ngx_http_srcache_filter_module.o:(.data+0x0): multiple definition of `ngx_http_srcache_filter_module'
objs/addon/src/ngx_http_srcache_filter_module.o:(.data+0x0): first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_discard_bufs':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:135: multiple definition of `ngx_http_srcache_discard_bufs'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:135: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_cmp_int':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:1269: multiple definition of `ngx_http_srcache_cmp_int'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:1269: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_hide_headers_hash':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:1127: multiple definition of `ngx_http_srcache_hide_headers_hash'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:1127: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_post_request_at_head':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:365: multiple definition of `ngx_http_srcache_post_request_at_head'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:365: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_store_response_header':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:848: multiple definition of `ngx_http_srcache_store_response_header'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:848: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_add_copy_chain':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:306: multiple definition of `ngx_http_srcache_add_copy_chain'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:306: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_process_header':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:697: multiple definition of `ngx_http_srcache_process_header'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:697: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_process_status_line':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:648: multiple definition of `ngx_http_srcache_process_status_line'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:648: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_request_no_cache':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:479: multiple definition of `ngx_http_srcache_request_no_cache'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:479: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_response_no_cache':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:548: multiple definition of `ngx_http_srcache_response_no_cache'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:548: first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_adjust_subrequest':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:255: multiple definition of `ngx_http_srcache_adjust_subrequest'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:255: first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x0): multiple definition of `ngx_http_srcache_content_length_header_key'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x0): first defined here
objs/addon/src/ngx_http_srcache_util.o: In function `ngx_http_srcache_parse_method_name':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:147: multiple definition of `ngx_http_srcache_parse_method_name'
objs/addon/src/ngx_http_srcache_util.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_util.c:147: first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0xd0): multiple definition of `ngx_http_srcache_propfind_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0xd0): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0xe0): multiple definition of `ngx_http_srcache_proppatch_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0xe0): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x10): multiple definition of `ngx_http_srcache_get_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x10): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x30): multiple definition of `ngx_http_srcache_post_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x30): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x80): multiple definition of `ngx_http_srcache_mkcol_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x80): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0xa0): multiple definition of `ngx_http_srcache_delete_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0xa0): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0xc0): multiple definition of `ngx_http_srcache_options_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0xc0): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x20): multiple definition of `ngx_http_srcache_put_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x20): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0xb0): multiple definition of `ngx_http_srcache_unlock_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0xb0): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x90): multiple definition of `ngx_http_srcache_trace_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x90): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x40): multiple definition of `ngx_http_srcache_head_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x40): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x50): multiple definition of `ngx_http_srcache_copy_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x50): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x60): multiple definition of `ngx_http_srcache_move_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x60): first defined here
objs/addon/src/ngx_http_srcache_util.o:(.data+0x70): multiple definition of `ngx_http_srcache_lock_method'
objs/addon/src/ngx_http_srcache_util.o:(.data+0x70): first defined here
objs/addon/src/ngx_http_srcache_var.o: In function `ngx_http_srcache_add_variables':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_var.c:164: multiple definition of `ngx_http_srcache_add_variables'
objs/addon/src/ngx_http_srcache_var.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_var.c:164: first defined here
objs/addon/src/ngx_http_srcache_store.o: In function `ngx_http_srcache_filter_init':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_store.c:664: multiple definition of `ngx_http_srcache_filter_init'
objs/addon/src/ngx_http_srcache_store.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_store.c:664: first defined here
objs/addon/src/ngx_http_srcache_fetch.o: In function `ngx_http_srcache_fetch_post_subrequest':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_fetch.c:285: multiple definition of `ngx_http_srcache_fetch_post_subrequest'
objs/addon/src/ngx_http_srcache_fetch.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_fetch.c:285: first defined here
objs/addon/src/ngx_http_srcache_fetch.o: In function `ngx_http_srcache_access_handler':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_fetch.c:26: multiple definition of `ngx_http_srcache_access_handler'
objs/addon/src/ngx_http_srcache_fetch.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_fetch.c:26: first defined here
objs/addon/src/ngx_http_srcache_headers.o: In function `ngx_http_srcache_process_header_line':
/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_headers.c:111: multiple definition of `ngx_http_srcache_process_header_line'
objs/addon/src/ngx_http_srcache_headers.o:/nas/software/openresty-1.11.2.4/bundle/srcache-nginx-module-0.31/src/ngx_http_srcache_headers.c:111: first defined here
objs/addon/src/ngx_http_srcache_headers.o:(.data+0x0): multiple definition of `ngx_http_srcache_headers_in'
objs/addon/src/ngx_http_srcache_headers.o:(.data+0x0): first defined here
collect2: ld returned 1 exit status
gmake[2]: *** [objs/nginx] Error 1
gmake[2]: Leaving directory `/nas/software/openresty-1.11.2.4/build/nginx-1.11.2'
gmake[1]: *** [build] Error 2
gmake[1]: Leaving directory `/nas/software/openresty-1.11.2.4/build/nginx-1.11.2'
gmake: *** [all] Error 2
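
A guess at the cause, not confirmed: this OpenResty release already bundles srcache-nginx-module, so the extra --add-module line compiles and links the same sources a second time (they appear twice in the object list above), which would explain the duplicate symbols. A minimal sketch of the change I would try:

./configure \
    --prefix=/usr/local/openresty \
    ... all of the options above except the srcache --add-module line ... \
    --with-debug
# removed: --add-module=./bundle/srcache-nginx-module-0.31
#          (the copy of the module bundled with OpenResty is already built by default)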

Question: Can you include a Redis password/AUTH?

I'm using lua-nginx-module as a reverse proxy.

I have a master Redis that a slave reads from; this is for speed.

I would much prefer to write to the master. Is there a way I can include an AUTH password in the redis connection?

srcache can't work with the nginx slice option; responses are not stored

Hello,
Sorry, I can't get the responses stored in memcached.
I need to slice the response from the upstream with slice 1m;

server {
listen front.network;
server_name cdn.cdntest.com;

     location = /memc {
         internal;
         memc_connect_timeout 100ms;
         memc_send_timeout 100ms;
         memc_read_timeout 100ms;
         memc_ignore_client_abort on;
         set $memc_key $query_string;
         set $memc_exptime 3600;

         memc_pass ram1;
     }


     location / {
         slice  1m;

         set $key $uri$is_args$args$slice_range;
         #set $key $uri$args$slice_range;
         srcache_fetch GET /memc $key;
         srcache_store PUT /memc $key;

         srcache_store_ranges on;
         srcache_store_statuses 200 206 301 302;
         srcache_ignore_content_encoding on;

         proxy_set_header  Range $slice_range;
         proxy_pass http://www.cdntest.com;
     }
 }

But in error.log I got:
srcache_store: skipped because response body truncated: 149616567 > 1048576 while sending to client

Potential Issue with Range Requests

Response headers for pdf file request -> https://gist.github.com/3239727

basic config for location with srcache:

 #set $key $uri$args;
 #srcache_fetch GET /memc $key;
 #srcache_store PUT /memc $key;
 #srcache_response_cache_control off;

Specifically, Google Chrome displays an error, but other browsers appear to work intermittently.

srcache_fetch GET with redis2_query and hget

Hi There,

I'm trying to use srcache_fetch with Redis2 module via redis2_query and not with the old HTTP REDIS module.
I have 2 issues and they might be related.

  1. When I'm using the redis2 module instead of HTTP REDIS behind the srcache_fetch directive, I never get a good response.

Working Example:

srcache_fetch GET /redis $key;

location = /redis {
    internal;
    set $redis_key $args;
    redis_pass redisbackend_read;
}

Non-Working Example:

srcache_fetch GET /redis $key;

location = /redis {
    internal;
    redis2_query get $args;
    redis2_pass redisbackend_read;
}

My upstream backend for both examples is:

upstream redisbackend_read {
    server 127.0.0.1:6379;
    keepalive 1024;
}

I can see that both queries reach Redis (I can see them in MONITOR), but when using redis2 it seems the response is not valid or something, because I am sent to the backend as if the key could not be found.

In the working example there are no errors in the log file, but in the non-working example with redis2 I can see the following:
2016/02/09 15:25:54 [error] 25052#0: *1 srcache_fetch: cache sent invalid status line while sending to client, client: 192.168.56.2, server: _, request: "GET /api/adserver/tag?AV_PUBLISHERID=55b88d4a181f465b3e8b4567&AV_CHANNELID=55f030e7181f46b9228b4572 HTTP/1.1", subrequest: "/redis", upstream: "redis2://127.0.0.1:6379", host: "aniview"
2016/02/09 15:25:54 [error] 25052#0: *1 srcache_fetch: cache sent truncated response body while sending to client, client: 192.168.56.2, server: _, request: "GET /api/adserver/tag?AV_PUBLISHERID=55b88d4a181f465b3e8b4567&AV_CHANNELID=55f030e7181f46b9228b4572 HTTP/1.1", subrequest: "/redis", upstream: "redis2://127.0.0.1:6379", host: "aniview"

Basically I'm trying to use HGET against Redis, but since even a regular GET doesn't work with the redis2 module, I think my issues with HGET will be addressed once we understand what is happening here.

NGINX -V output:
nginx version: nginx/1.8.0
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --add-module=/usr/src/ngx_http_geoip2_module-1.0 --add-module=/usr/src/nginx-ua-parse-module --with-http_ssl_module --with-http_realip_module --with-http_sub_module --with-http_dav_module --with-http_gunzip_module --with-http_gzip_static_module --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --add-module=/usr/src/ngx_http_lower_upper_case --add-module=/usr/src/ngx_http_redis-0.3.7 --add-module=/usr/src/lua-nginx-module-0.9.20 --add-module=/usr/src/redis2-nginx-module-0.12 --add-module=/usr/src/ngx_devel_kit-0.2.19 --add-module=/usr/src/headers-more-nginx-module-0.261 --add-module=/usr/src/set-misc-nginx-module-0.28 --add-module=/usr/src/echo-nginx-module-0.57 --add-module=/usr/src/array-var-nginx-module-0.04 --add-module=/usr/src/strftime-nginx-module --add-module=/usr/src/srcache-nginx-module-0.29

More information from redis for both examples (the one that works and the one that does not):
I'm setting the value this way:
"set" "zc:k:db6cf5e91987f48d521e85474750b2ca" "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nX-Powered-By: PHP/5.5.32\r\n\r\n"

Any help and investigation would be really appreciated.
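
In case it is useful while this is investigated, the workaround I am sketching for the HGET case is to implement the fetch location with lua-resty-redis instead of redis_pass/redis2_pass, so the subrequest body is exactly the stored raw HTTP response (untested; the hash name, field, and timeouts are made up for illustration, and it assumes lua-resty-redis is installed):

location = /redis {
    internal;

    content_by_lua_block {
        local redis = require "resty.redis"
        local red = redis:new()
        red:set_timeout(100)  -- ms

        local ok, err = red:connect("127.0.0.1", 6379)
        if not ok then
            return ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
        end

        -- look the key up in a hash; a cache miss must end in a non-200 status
        local res, err = red:hget("pagecache", ngx.var.args)
        if not res or res == ngx.null then
            return ngx.exit(ngx.HTTP_NOT_FOUND)
        end

        red:set_keepalive(10000, 100)
        ngx.print(res)
    }
}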

srcache_store subrequest failed

Hi,
I am currently using openresty 1.7.7.1. The logic is as follows: the cache key is derived from the request URI; I first check whether this key already has a corresponding response in the memcached cluster. If not, the request is forwarded to backend_api via proxy_pass, and the backend_api result is returned to the client as the response and, at the same time, stored in the memcached cluster. That way, the next time the same request comes in, the response can be fetched directly from memcached.
  But now, after running in production for a while, I get the following error message:
   srcache_store subrequest failed: rc=502 status=0 while sending to client, client: xxx.xxx.xxx.xxx, server: xxx.com, request: "GET /test?code=W7WSoSBZOIM HTTP/1.1", subrequest: "/memc", host: "xxx.com"

The nginx configuration is as follows:
upstream memc_backend {
    hash $arg_key consistent;
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
    server 127.0.0.1:11213;
    server 127.0.0.1:11214;
    keepalive 1024;
}

server {
    location = /memc {
        internal;
        memc_connect_timeout 100ms;
        memc_send_timeout 100ms;
        memc_read_timeout 100ms;
        memc_ignore_client_abort on;
        set $memc_key $arg_key;
        set $memc_exptime $arg_exptime;
        memc_pass memc_backend;
    }

    location = /test {
        set_by_lua_file $key conf/lua/test/k_get_cache_key.lua;
        if ($http_is_update_cache = "Yes") {
            srcache_fetch DELETE /memc key=$key;
        }
        set $exptime 3600;
        srcache_fetch GET /memc key=$key;
        srcache_store PUT /memc key=$key&exptime=$exptime;
        proxy_pass http://backend_api;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $x_remote_addr;
        proxy_set_header Host $proxy_host;
        proxy_set_header Range $http_range;
        proxy_set_header X-NginX-Proxy true;
    }
}
Could you help me take a look at the cause? I have been investigating for a while and still have no clue.

srcache_fetch subrequest returns 405 error code.

Hi, we're running into an issue where I always get the response from the upstream, not from the redis cache.

With some debug info, I found that my srcache_fetch subrequest always returns a 405 error code.

I'm using the POST method with my request and I hit the problem, but I get the correct response from the redis cache with the GET method.

Here is my nginx configuration.

upstream redis_pools {
    server 192.168.56.110:6379;
    keepalive 1024;
}

upstream search_tds_backend {
    server 172.26.1.118:80 max_fails=3;
    server 172.26.1.120:80 max_fails=3;
}

server {
    listen       80 default_server;
    server_name  localhost;

    redis2_connect_timeout  100ms;
    redis2_send_timeout     100ms;
    redis2_read_timeout     100ms;
    redis_connect_timeout   100ms;
    redis_send_timeout      100ms;
    redis_read_timeout      100ms;

    srcache_methods POST GET;

    location = /redis_cache_get {
        internal;
        set_unescape_uri $key $arg_key;
        set_md5 $key;
        set $redis_key $key;
        redis_gzip_flag 1;
        add_header X-Cache-From "has-cached";
        redis_pass redis_pools;
    }

    location = /redis_cache_set {
        internal;
        set_unescape_uri $exptime $arg_exptime;
        set_unescape_uri $key $arg_key;
        set_md5 $key;
        redis2_query set $key $echo_request_body;
        redis2_query expire $key $exptime;
        redis2_pass redis_pools;
    }

    location / {
        default_type application/json;

        if ($request_method = GET) {
            srcache_fetch GET /redis_cache_get key=$escaped_key&exptime=3600;
        }

        if ($request_method = POST) {
            srcache_fetch POST /redis_cache_get key=$escaped_key&exptime=3600;
        }

        set $key $uri;
        set_escape_uri $escaped_key $key;
        srcache_store POST /redis_cache_set key=$escaped_key&exptime=3600;

        proxy_pass http://search_tds_backend;
        proxy_set_header  Host "se.beta.hq.hiiir";
        add_header X-Cache-From "no-cached";
    }
}

And this is my debug info : http://pastebin.com/hbbbNuj5

Can you confirm whether this is a srcache_fetch bug, or just a configuration mistake or misuse on my side?
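For reference, here is the variant of the front-end location I'm about to test. My assumptions (not verified) are that the fetch subrequest can always be issued as GET, even when the main request is a POST, since srcache_methods already allows caching POST requests, and that $escaped_key has to be set before it is used:

    location / {
        default_type application/json;

        set $key $uri;
        set_escape_uri $escaped_key $key;

        # always issue the fetch subrequest as GET, regardless of the main request method
        srcache_fetch GET /redis_cache_get key=$escaped_key;
        srcache_store POST /redis_cache_set key=$escaped_key&exptime=3600;

        proxy_pass http://search_tds_backend;
        proxy_set_header Host "se.beta.hq.hiiir";
        add_header X-Cache-From "no-cached";
    }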

Thank you.

compute a $storeKey for srcache_store after upstream response ?

Hi,
I'm trying to implement HTTP Vary with srcache.
A typical upstream response will contain a Vary: Accept-Language field,
header_filter_by_lua_block could then

  1. compute a $storeKey by concatenation of key/value of request fields listed in that Vary header
  2. store url => Vary field in a shared dict
    (Upon later request, set_by_lua_block could compute a $requestKey by looking up
    in the shared dict to get the list of Vary headers for the requested url.)

Question: is it possible to make srcache_store use $storeKey rather than the key that was computed before the upstream response arrived?
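To make the idea concrete, here is the fetch-side sketch I have in mind (assuming a lua_shared_dict called vary_dict and an upstream that sends something like Vary: Accept-Language; the names and sizes are made up):

    # in the http block: shared storage for "uri => Vary header"
    lua_shared_dict vary_dict 10m;

    location / {
        # compute the cache key from the URI plus the request headers this URI is known to vary on
        set_by_lua_block $request_key {
            local vary = ngx.shared.vary_dict:get(ngx.var.uri) or ""
            local parts = { ngx.var.uri }
            for name in vary:gmatch("[^,%s]+") do
                -- e.g. "Accept-Language" -> ngx.var.http_accept_language
                parts[#parts + 1] = ngx.var["http_" .. name:lower():gsub("-", "_")] or ""
            end
            return ngx.md5(table.concat(parts, "|"))
        }

        srcache_fetch GET /memc $request_key;
        srcache_store PUT /memc $request_key;

        # remember which headers this URI varies on, for later requests
        header_filter_by_lua_block {
            local vary = ngx.header["Vary"]
            if type(vary) == "table" then
                vary = table.concat(vary, ", ")
            end
            if vary then
                ngx.shared.vary_dict:set(ngx.var.uri, vary)
            end
        }

        proxy_pass http://backend;
    }

The store key here is still the one computed before the response arrives, so a URI is only cached under the right varied key from the second request onward, which is exactly why I'm asking whether srcache_store can take a key computed later.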

srcache_fetch: $key is not set correctly after access_by_lua subrequest

Hi,

I have the configuration below:

set $mykey test;
access_by_lua 'local res = ngx.location.capture("/my_test") if res.status == 200 then ngx.var.mykey = res.body end';

set $key $mykey;
srcache_fetch GET /memc $key;
srcache_store PUT /memc $key;

Where /my_test sub-request returns test2 with status code 200.

I expect the $key passed to memc to be set to test2, but it is always set to test.

I have read that access_by_lua runs at the end of the access phase, while srcache_fetch runs in the post-access phase.

Could you explain this behavior?
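For reference, here is the variant I'm going to test next. A sketch, assuming the problem is that set $key $mykey; runs in the rewrite phase, before access_by_lua has updated $mykey, while srcache_fetch expands its key argument later, at fetch time:

    set $mykey test;

    access_by_lua '
        -- overwrite $mykey with the body of the /my_test subrequest when it succeeds
        local res = ngx.location.capture("/my_test")
        if res.status == 200 then
            ngx.var.mykey = res.body
        end
    ';

    # use $mykey directly instead of copying it into $key during the rewrite phase
    srcache_fetch GET /memc $mykey;
    srcache_store PUT /memc $mykey;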

P.S. Compliments for your amazing modules!!!

Thanks!

proxy pass blank

I will try and produce exact replication instructions, but the basics of what happened are.

HTTP proxy backend; the backend is inaccessible due to 100% packet loss, and srcache caches an empty page as a result (where it should have BYPASS'ed).

I haven't tracked down the cause yet, or loaded this into a VM (once I work out how to replicate it).

Could anything in the configuration cause this behavior (I can't see any option that would)? This was on a production server, so I will have to copy the config off and work backwards.

Please add srcache_store_exp in TODO :)

Thanks for the nice module! :)

I would like to request a new feature; maybe you will implement it, or maybe somebody else will.
Please add it to the TODO :)

Sometimes there is a need to set up a different $memc_exptime for different locations. The problem is that $memc_exptime is defined in the location handling the memc module's subrequest, while the srcache locations live elsewhere. I cannot use the 'set' directive: I tested it, and variables set in the parent location are not propagated into the subrequest.

Maybe I should define two or more locations like '/memc' (with different $memc_exptime values), as in your examples, and use a different one for each caller location...

But I think it would be nicer to have something like a srcache_store_exp directive (analogous to srcache_store_skip, for example) that sets $memc_exptime for /memc.

But maybe that is not possible, because /memc is a separate subrequest?
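To make the request concrete, here is the kind of workaround I have in mind, based on the README examples: the expiration is passed as a query argument of the subrequest, so a single /memc location can serve locations with different lifetimes (a sketch; the addresses and location names are made up):

    location = /memc {
        internal;
        set_unescape_uri $memc_key $arg_key;
        set_unescape_uri $memc_exptime $arg_exptime;
        memc_pass 127.0.0.1:11211;
    }

    location /short-lived {
        set_escape_uri $escaped_key $uri;
        srcache_fetch GET /memc key=$escaped_key;
        srcache_store PUT /memc key=$escaped_key&exptime=60;
        proxy_pass http://backend;
    }

    location /long-lived {
        set_escape_uri $escaped_key $uri;
        srcache_fetch GET /memc key=$escaped_key;
        srcache_store PUT /memc key=$escaped_key&exptime=86400;
        proxy_pass http://backend;
    }

Still, a srcache_store_exp directive would read more nicely than threading exptime through query arguments.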

Thanks

srcache_store not working with Redis for large files

Hi,
I am running nginx-1.4.5 with the HttpSRCache module and using redis as the backend.

srcache-module version: v0.26
ngx_redis2 version: v0.10
echo-nginx version: v0.50
ngx_set_misc version: v0.24

my nginx.conf (snippet) is:
location / {
    set $key $uri;
    set_escape_uri $escaped_key $key;

    srcache_fetch GET /redis2-get $key;
    srcache_store PUT /redis2-set key=$escaped_key&exptime=3600;

    proxy_pass http://backend;
}

location = /redis2-get {
    internal;

    redis2_query get $arg_key;
    redis2_pass <redis_server_ip>:6379;
}

location = /redis2-set {
    internal;

    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;
    set_md5 $key;

    proxy_set_header  Accept-Encoding  "";

    redis2_query set $key $echo_request_body;
    redis2_query expire $key $exptime;
    redis2_pass <redis_server_ip>:6379;
}

I am seeing that when the response is small (currently anything < 1K), it gets stored on the redis server.

However, when I fetch a bigger response (currently I see issues with anything > 100K), everything seems fine, i.e. I can see nginx doing a 'set' on the redis server, but nothing gets stored in redis.

Is there any size limitation? Are there any logs I can look at to debug this issue? Right now, I see nothing in the nginx or the redis logs.
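In case it's relevant, here is the variant of the store location I'm going to try next. My assumption (unverified) is that the large response body no longer fits into the subrequest's in-memory body buffer, so $echo_request_body ends up empty and redis stores nothing:

    location = /redis2-set {
        internal;

        # keep the whole cached response body in memory so $echo_request_body can see it (assumption)
        client_body_buffer_size 2m;

        set_unescape_uri $exptime $arg_exptime;
        set_unescape_uri $key $arg_key;
        set_md5 $key;

        redis2_query set $key $echo_request_body;
        redis2_query expire $key $exptime;
        redis2_pass <redis_server_ip>:6379;
    }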

Is there any known issue? Thanks in advance.
-anirudh

Confirmation of method

OK, firstly I'm sorry if this is documented somewhere; I couldn't find it. It's a question regarding the way srcache_store works, which may turn into a feature request.

Basically I run a cross-datacenter slave system. PUT requests require writing to a master server which can be up to 100ms of latency away. Our system depends on serving requests with low latency.

Does srcache_store, while waiting on a response, block the request from being sent to the client?

cache files

Hey, how can I store a file's content from the server? For example, I have some image files on the server, and when I request image/file.png it goes through the srcache directives; if it is a miss, I want to store the file's content under the redis key.
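To make the question concrete, this is roughly what I have in mind (a sketch; /redis and /redis2 stand for fetch/store locations like the ones in the README, the root path is made up, and I'm not sure srcache plays well with the static file handler, hence the question):

    location ~ \.(png|jpg|gif)$ {
        root /var/www/static;   # hypothetical directory holding the image files

        set $key $uri;
        set_escape_uri $escaped_key $key;

        # on a MISS the file would be served from disk and its body stored under the redis key
        srcache_fetch GET /redis $key;
        srcache_store PUT /redis2 key=$escaped_key&exptime=3600;
    }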

thanks

Supporting Subrequests

Just ran into this while using ngx.location.capture.

"For subrequests, we explicitly disallow the use of this module because it's not working (yet)."

Are you planning to implement this soon? Is a bounty needed?

Thanks.

fetch_status / store_status in lua

I am trying to retrieve the status of $srcache_fetch_status and $srcache_store_status in Lua. No matter which phase I try, the output is always BYPASS,BYPASS ($srcache_fetch_status,$srcache_store_status).

Currently I am trying in log_by_lua, since I figured it would be the most likely place to have these set; I have also tried content_by_lua in a post_action and the rewrite phase.

Is this a bug?

To replicate:

ngx.log(ngx.ERR, ngx.var.srcache_fetch_status .. "," .. ngx.var.srcache_store_status)

srcache_fetch stucked while processing some requests

Hello, I'm running into a very strange problem with the "srcache_fetch" directive and some kinds of SOAP requests.

When I send POST requests from SoapUI or another app with a User-Agent like "HTTP/Apache (java)" to OpenResty, some requests are processed fine: a GET request is sent to memc_pass, the cache is stored, the request goes to proxy_pass, and the response is sent to the client.
But other requests get stuck after the response from the memcached server and wait until the client closes the connection on its own timeout (which can be much longer than memc_read_timeout).

BTW, when I send a SOAP request with the same body via curl, I can't reproduce the problem: all requests are processed fine.

openresty version: 1.11.2.5
nginx version: nginx/1.10.2

This is an anonymized part of my nginx.conf:

	location = /memc {
        internal;
        memc_connect_timeout 100ms;
        memc_send_timeout 100ms;
        memc_read_timeout 100ms;
        memc_ignore_client_abort off;

        set $memc_key $query_string;
        set $memc_exptime 3600;


        memc_pass memcached-ip-addr:memcached-port;
    }

   location /memc-stats {
       add_header Content-Type text/plain;
       set $memc_cmd stats;
       memc_pass memcached-ip-addr:memcached-port;
   }


    location /location-name  {

        access_log logs/access.log main;
        set $responsebody "0";
        set $reqbody "0";
        set $key "0";

        lua_need_request_body on;
        client_max_body_size 50M;

        rewrite_by_lua '
            -- build the cache key from the request body (POST) or the query string (GET)
            local method = ngx.var.request_method
            local data
            if method == "POST" then
                ngx.req.read_body()
                data = ngx.req.get_body_data()
            elseif method == "GET" then
                data = ngx.var.query_string
            end
            ngx.var.reqbody = data or ""
            ngx.var.key = ngx.md5(ngx.var.reqbody)
        ';

        srcache_request_cache_control off;
        srcache_response_cache_control off;
        srcache_ignore_content_encoding on;
        srcache_store_private on;

        srcache_fetch GET /memc $key;
        srcache_store_statuses 200 201 301 302;


        srcache_store PUT /memc $key;
        srcache_methods GET POST;


        proxy_pass http://proxy-pass-addr;

 
        proxy_buffering         off;
        proxy_connect_timeout 5s;
        proxy_send_timeout 5s;
        proxy_read_timeout 30s;

    }

Might be an error in the redis example

Hi

Should the line redis_pass 127.0.0.1:6379; be redis2_pass 127.0.0.1:6379; instead?

We've been testing with the first variant, but we're not sure if it's optimal.
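For context, here is how we currently read the README example: the fetch location uses the ngx_http_redis module (redis_pass), while the store location uses ngx_redis2 (redis2_pass). A sketch of the pair, in case we have misread it:

    # fetch side: ngx_http_redis (redis_pass)
    location = /redis {
        internal;
        set_md5 $redis_key $args;
        redis_pass 127.0.0.1:6379;
    }

    # store side: ngx_redis2 (redis2_pass)
    location = /redis2 {
        internal;
        set_unescape_uri $exptime $arg_exptime;
        set_unescape_uri $key $arg_key;
        set_md5 $key;
        redis2_query set $key $echo_request_body;
        redis2_query expire $key $exptime;
        redis2_pass 127.0.0.1:6379;
    }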

Redis cache not working

Hi,

I'm following your redis caching example, but it is not working reliably for me.
I'm using openresty/1.7.2.1.

This is my config; I omitted some values because they contain our company data.

   location /redis {

        internal;
        set_md5  $redis_key $args;
        redis_pass  redis-endpoint.com:6379;

    }
    location /redis2 {

        internal;
        set_unescape_uri  $exptime $arg_exptime;
        set_unescape_uri  $key $arg_key;
        set_md5  $key;
        redis2_query  set $key $echo_request_body;
        redis2_query  expire $key $exptime;
        redis2_pass  redis-endpoint.com:6379;

    }
    location ~ ^/ {         
        add_header XXXX;
        proxy_set_header  XXXX;
        set  $key $host$request_uri;
        default_type  text/css;
        srcache_response_cache_control  off;
        srcache_ignore_content_encoding  on;
        set_escape_uri  $escaped_key $key;
        srcache_fetch  GET /redis $key;
        srcache_store  PUT /redis2 key=$escaped_key&exptime=1500;
        proxy_pass  http://upstream;
        proxy_set_header  Host                           $host;
        proxy_read_timeout  300s;
        proxy_set_header  User-Agent                     $http_user_agent;
        proxy_pass_header  Set-Cookie;
        proxy_pass_header  X-Track;
        proxy_pass_header  P3P;
        add_header  Cache-Control no-cache;
        add_header  X-Cache-Status $srcache_fetch_status;
        access_log  /var/log/nginx/access.log access-log;

    }

Intermittently, when I send a request to the "srcached" location, I see two messages like these in the error log:

2014/08/19 18:36:24 [error] 11188#0: *603 srcache_fetch: cache sent invalid status line while sending to client, client: xx.xx.xx.xx, server: , request: "GET / HTTP/1.1", subrequest: "/redis", upstream: "http://xx.xx.xx.xx:80/redis?www.hostname.com./param=value", host: "www.hostname.com"

2014/08/19 18:36:24 [error] 11188#0: *603 srcache_fetch: cache sent truncated response body while sending to client, client: xx.xx.xx.xx, server: , request: "GET / HTTP/1.1", subrequest: "/redis", upstream: "http://xx.xx.xx.xx:80/redis?www.hostname.com./param=value", host: "www.hostname.com"

Can you help me?

Thanks in advance!

srcache module execution order

Hello, I'd like to ask a question:
What is the execution order of srcache_fetch, srcache_store, and proxy_pass (or memc_pass / redis_pass) when srcache_fetch hits and when it misses? I've read through your README and I'm still unsure.
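To make the question concrete, here is my current understanding based on the README (it may well be wrong, which is why I'm asking):

    location /api {
        # 1. srcache_fetch runs near the end of the access phase:
        #    on a HIT the cached response is returned directly and the content handler below is skipped.
        srcache_fetch GET /memc $uri;

        # 2. on a MISS the content handler (proxy_pass, or memc_pass / redis_pass, etc.) runs as usual.
        proxy_pass http://backend;

        # 3. srcache_store then runs as an output filter over that response,
        #    issuing the store subrequest while the response is sent to the client.
        srcache_store PUT /memc $uri;
    }

What I'm mainly unsure about is whether, on a MISS, the store subrequest can delay the response to the client.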

1.5.x issues

I am not sure where to start with this issue (i.e., which module is at fault). Since the most common error has srcache_fetch in it, I am assuming it starts there.

Since upgrading to 1.5.6 we have been seeing many problems -
2013/11/28 01:53:33 [error] 25107#0: *421 srcache_fetch: cache sent truncated status line or headers while sending to client, client: 108.162.219.227, server: www.highway.cf, request: "GET /shop/img/chat.png HTTP/1.1", subrequest: "/___redis__read", upstream: "redis://X:6380",

and occasionally

2013/11/28 01:54:48 [error] 25107#0: *1127 redis sent invalid trailer while sending to client, client: 14.203.10.166, server: , request: "GET /images/logo.png HTTP/1.1", subrequest: "/___redis__read", upstream: "redis://X:6380", host:

Modules installed -
redis (0.3.6)
latest git - redis2, misc, srcache, echo
patches - none

Related config:
srcache_default_expire 60m;
srcache_store_statuses 200;
srcache_store_max_size 10m;
srcache_request_cache_control off;
set_escape_uri $escaped_key "ID$request_method|$scheme://*$request_uri";
srcache_fetch GET /___redis__read key=$escaped_key;
srcache_store PUT /___redis__write key=$escaped_key&exptime=$srcache_expire;

location = /___redis__read {
    internal;
    set_unescape_uri $redis_key $arg_key;
    redis_pass redis_proxy;
}

location = /___redis__write {
    internal;
    set_unescape_uri $exptime $arg_exptime;
    set_unescape_uri $key $arg_key;
    redis2_query set $key $echo_request_body;
    redis2_query expire $key $exptime;
    redis2_pass redis_proxy;
}

We have had no previous issues with this config on 1.4.x (with upstream truncation patch).

Ideas?
