taro-mall's Introduction

This project is no longer maintained.

If you find it useful, please give it a star. Thanks for your support 🙏

The architecture of a project from design through to the finished product, covering design / front end / admin console / back end / k8s cluster architecture (under continuous development...)

rxjava-apis repository: https://github.com/apersonw/rxjava-apis.git (stable release published to npm)

rxjava repository: https://github.com/apersonw/rxjava.git (stable release published to Maven)

1. Project Usage Notes

Server side:

  • kubernetes 1.14.2: see #8 for installation steps
  • Service discovery uses k8s itself and configuration uses ConfigMap; see Spring-cloud-kubernetes for details
  • Each module is standalone; a module started on its own cannot call the interfaces of the other modules
  • To start a single module, make sure mongodb, rabbitmq and redis are installed locally, otherwise startup will fail with connection errors
  • Each module contains a build.sh; edit it to publish the image to your own registry, then update the module's deployment.yaml in k8s to deploy to the cluster

Client side:

  • nodejs: v10.15.2

2. Starting Modules Individually

  1. Client module (for React Native, check out the rn branch separately)
# run
# H5 dev build (use yarn dev:weapp for the WeChat mini program build)
$ cd client && yarn dev:h5
  2. Admin console module
# run
$ cd manager && yarn start
  3. Microservice RESTful API module
# run: mongo and redis must be running locally
# module startup order: config (configuration center) -> center (registry) -> other modules
$ cd services/xxx  # then start the Java project
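
To bring the whole stack up together instead, see run.sh and docker-compose.yml at the repo root (described in the directory layout below).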

3. Directory Layout

.
├── client                    #client app, served at 0.0.0.0:81
│   ├── Dockerfile
│   ├── docker
│   │   └── nginx.conf        #nginx config file
│   ├── package.json
│   ├── src                   #project source
├── design                    #design assets and slices
│   ├── assets
│   ├── index.html
│   ├── links
│   └── preview
├── docker-compose.yml        #project docker-compose file
├── manager                   #admin console, served at 0.0.0.0:82
│   ├── Dockerfile
│   ├── docker
│   │   └── nginx.conf        #nginx config file
│   ├── package.json
│   ├── public
│   ├── src
├── readmeImg                 #images referenced by this readme
│   ├── category.jpg
│   └── index.jpg
├── run.sh                    #project startup script
├── rxjava-api-core           #api request core package
└── services                  #microservices group
    ├── service-goods         #goods microservice
    │   ├── pom.xml
    │   └── src
    └── service-user          #user microservice
        ├── pom.xml
        └── src

Note (after starting the whole stack):
1. Database address: 0.0.0.0:27018
2. Registry address: 0.0.0.0:8761 (username and password are both admin)

4. Architecture Notes

  1. Design:

  • Designed in Sketch, with the exported slices published to the design folder
  2. Data source:

  • Data is scraped with a Python Scrapy crawler; join the group chat to get a copy
  3. Front end:

  • Client module: based on Taro + Dva
  • Admin console module: based on Umi
  4. Microservice RESTful API module:

  5. Deployment:

  • All modules are deployed with Docker

5. Transaction Flow Diagram

6. Screenshots

1. Home page

2. Category page

7. Back-end Module Notes

  • Platform: a service that scenarios depend on, such as the user service or order service
  • Scenario: a terminal-facing service that no other service depends on


taro-mall's Issues

Setting up a MongoDB replica set

1. Create a docker-compose.yml file

version: '3.7'
services:
  master-mongo:
    image: mongo
    container_name: master-mongo
    ports:
    - "27017:27017"
    volumes:
    - ./data/master:/data/db
    command: mongod --dbpath /data/db --replSet testSet --oplogSize 128
  secondary-mongo:
    image: mongo
    container_name: secondary-mongo
    ports:
    - "27018:27017"
    volumes:
    - ./data/secondary:/data/db
    command: mongod --dbpath /data/db --replSet testSet --oplogSize 128
  arbiter-mongo:
    image: mongo
    container_name: arbiter-mongo
    ports:
    - "27019:27017"
    volumes:
    - ./data/arbiter:/data/db
    command: mongod --dbpath /data/db --replSet testSet --oplogSize 128
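
Note: all three mongod containers must join the same replica set name (--replSet testSet here), otherwise the initiation in the next step will fail.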

2. Open a bash shell on the master node

#1. Run the mongo command to enter the mongo shell
#2. Enter the replica set config below (10.22.33.44 is the host machine's IP; priority 2 makes the first member the preferred primary)
config={
  _id:"testSet",
  members:[
    {_id:0,host:"10.22.33.44:27017","priority": 2},
    {_id:1,host:"10.22.33.44:27018","priority": 1},
    {_id:2,host:"10.22.33.44:27019","priority": 1}
  ]
}
#3. Initialize from the config; on success this returns ok
rs.initiate(config)
#4. Check the replica set status
rs.status()

3. Register a reactive transaction manager in an @Configuration class

@Bean
ReactiveMongoTransactionManager reactiveTransactionManager(ReactiveMongoDatabaseFactory reactiveMongoDatabaseFactory) {
    return new ReactiveMongoTransactionManager(reactiveMongoDatabaseFactory);
}

4. Then add the annotation wherever a transaction is needed

@Transactional
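
As a minimal sketch of how this looks in practice (OrderService, Order, and OrderLog are hypothetical names for illustration, not classes from this repo), a reactive service method can then wrap several writes in one transaction:

import org.springframework.data.mongodb.core.ReactiveMongoOperations;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import reactor.core.publisher.Mono;

@Service
public class OrderService {

    private final ReactiveMongoOperations mongo;

    public OrderService(ReactiveMongoOperations mongo) {
        this.mongo = mongo;
    }

    //both inserts commit or roll back together; this requires the
    //replica set configured above, since MongoDB transactions do not
    //work on a standalone mongod
    @Transactional
    public Mono<Void> placeOrder(Order order, OrderLog log) {
        return mongo.insert(order)
                .then(mongo.insert(log))
                .then();
    }
}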

Several ways to round to two decimal places

//1. The clumsy way: slice the string (note: this truncates, it does not round)
function get(){
    var s = 22.127456 + "";
    var str = s.substring(0, s.indexOf(".") + 3);
    alert(str);
}
//2. A regular expression works fairly well (this also truncates to two decimals)
function get2(){
    var a = "23.456322";
    var aNew;
    var re = /([0-9]+\.[0-9]{2})[0-9]*/;
    aNew = a.replace(re, "$1");
    alert(aNew);
}
//3. The smarter way: scale, round, and scale back
var num = 22.127456;
alert(Math.round(num * 100) / 100);
//4. For fans of newer APIs; requires IE5.5+
var num2 = 22.127456;
alert(num2.toFixed(2));
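
A caveat on method 4: toFixed returns a string, not a number, and binary floating point makes its edge cases surprising; for example (1.005).toFixed(2) yields "1.00", because 1.005 is actually stored as a value slightly below 1.005.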

Distributed tracing

1. Server side
1. Start a Zipkin server in Docker
docker run -d -p 9411:9411 openzipkin/zipkin
2. Client side
1. Add the dependencies to the Maven pom

<!-- 分布式链路跟踪 -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

2. Add the configuration to application.yml

spring:
  zipkin:
    base-url: http://host:port/
http://host:port/ is the Zipkin server's address and port
Open that address in a browser to view request timings across the microservices
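
Note: Sleuth samples only a fraction of requests by default (10% in Sleuth 2.x); setting spring.sleuth.sampler.probability: 1.0 in application.yml traces every request, which is useful during development.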

Maven cannot find local module dependencies when packaging

This happens when a child project references the parent POM but the parent has never been installed from its own directory. Whenever one submodule depends on a sibling submodule, run mvn install at least once from the parent POM directory.

RabbitMQ integration

1. Add the dependency

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>

2. Add the configuration

spring:
  rabbitmq:
    host: 127.0.0.1
    port: 5672
    username: admin
    password: admin
    listener:
      simple:
        concurrency: 10 #consumer concurrency
        max-concurrency: 10 #maximum concurrency
        prefetch: 1 #unacked messages prefetched per consumer
        default-requeue-rejected: true #requeue rejected messages by default
        auto-startup: true #start listeners automatically
    template:
      retry:
        enabled: true #enable send retries
        initial-interval: 1000 #initial retry interval (ms)
        max-attempts: 3 #maximum attempts
        max-interval: 10000 #maximum retry interval (ms)
        multiplier: 1.0 #backoff multiplier

3. The three exchange modes
1. Direct exchange mode

  • Create the message queue
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;


@Configuration
public class MQConfig {

    public static final String QUEUE_NAME = "queue";

    @Bean
    public Queue queue(){
        //first arg: queue name; second arg: whether the queue is durable
        return new Queue(QUEUE_NAME,true);
    }
}

  • Create the message sender

import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class MQSender {
    @Autowired
    private  AmqpTemplate amqpTemplate;

    public void send(String msg){
        amqpTemplate.convertAndSend(MQConfig.QUEUE_NAME,msg);
        System.out.println("send message:"+msg);
    }
}
  • Create the message receiver
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Service;

@Service
public class MQReceiver {

    @RabbitListener(queues = MQConfig.QUEUE_NAME)
    public void receiver(String message){
        System.out.println("receiveMessage:"+message);
    }
}
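
To exercise the direct mode end to end, here is a minimal sketch of a startup trigger (this demo class is an illustration, not part of the repo); once the application starts, MQReceiver should print the message:

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MQDemo {

    //sends one test message as soon as the application context is up
    @Bean
    CommandLineRunner mqDemoRunner(MQSender sender) {
        return args -> sender.send("hello direct exchange");
    }
}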

2. Topic exchange mode

  • Create the message queues
    /**
     * topic mode
     */
    @Bean
    public Queue topicQueue1(){
        return new Queue("topic.queue1",true);//first arg: queue name; second arg: durable flag
    }

    @Bean
    public Queue topicQueue2(){
        return new Queue("topic.queue2",true);//first arg: queue name; second arg: durable flag
    }

    @Bean
    public TopicExchange topicExchange(){
        return new TopicExchange("topicExchange");
    }

    @Bean
    public Binding topicBinding(){
        return BindingBuilder.bind(topicQueue1()).to(topicExchange()).with("topic.key1");
    }

    @Bean
    public Binding topicBinding2(){
        return BindingBuilder.bind(topicQueue2()).to(topicExchange()).with("topic.#");
    }
    /**
     * Flow: we create two queues named topic.queue1 and topic.queue2, then create an exchange named topicExchange, and finally bind both queues to the exchange with routing rules; "#" matches everything after the prefix
     */
  • Create the message sender
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class MQSender {
    @Autowired
    private  AmqpTemplate amqpTemplate;

    public void sendTopic(String msg){
        amqpTemplate.convertAndSend("topicExchange","topic.key1",msg+"1");//第一个参数代表交换机名 第二个代表满足匹配规则的表达式  第三个消息
        amqpTemplate.convertAndSend("topicExchange","topic.key2",msg+"2");
        System.out.println("send message:"+msg);
    }
}
/**
 * The binding rules decide delivery: "topic.key1" matches only "topic.key1", while "topic.#" matches every key starting with "topic."; so the first message is delivered to both topic.queue1 and topic.queue2, and the second only to topic.queue2
 */
  • Create the message receivers
    //TOPIC_QUEUE_NAME1 and TOPIC_QUEUE_NAME2 are MQConfig constants holding "topic.queue1" and "topic.queue2"
    @RabbitListener(queues = MQConfig.TOPIC_QUEUE_NAME1)
    public void receiverTopic1(String message){
        System.out.println("receive topic queue1 message:"+message);
    }

    @RabbitListener(queues = MQConfig.TOPIC_QUEUE_NAME2)
    public void receiverTopic2(String message){
        System.out.println("receive topic queue2 message:"+message);
    }

3. Fanout exchange mode

  • Create the exchange and bindings (reusing the topic queues)
    /**
     * fanout mode
     */
    @Bean
    public FanoutExchange fanoutExchange(){
        //FANOUT_EXCHANGE is an MQConfig constant holding the exchange name "fanoutExchange"
        return new FanoutExchange(FANOUT_EXCHANGE);
    }

    @Bean
    public Binding fanoutBinding1(){
        return BindingBuilder.bind(topicQueue1()).to(fanoutExchange());
    }

    @Bean
    public Binding fanoutBinding2(){
        return BindingBuilder.bind(topicQueue2()).to(fanoutExchange());
    }
  • Create the message sender
    public void sendFanout(String msg){
        //args: exchange name, routing key (empty, ignored by fanout exchanges), message
        amqpTemplate.convertAndSend("fanoutExchange","",msg);
        System.out.println("send fanout message:"+msg);
    }
  • Create the message receivers
    @RabbitListener(queues = MQConfig.TOPIC_QUEUE_NAME1)
    public void receiverTopic1(String message){
        System.out.println("receive queue1 message:"+message);
    }

    @RabbitListener(queues = MQConfig.TOPIC_QUEUE_NAME2)
    public void receiverTopic2(String message){
        System.out.println("receive queue2 message:"+message);
    }
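
Note that the fanout bindings above reuse topicQueue1 and topicQueue2, so a single sendFanout call is delivered to both queues and both receivers print it; the routing key plays no part in fanout delivery.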

Using a div to place an image as a background

<div style="position:relative"><!--父div定位修改相对-->
  <!--盒子内其他的元素可写于此处,且每个div的z-index都要比图片盒子的高-->
  <div style="position:absolute;left:0;top:0"><img/></div><!--图片的盒子定位修改为绝对定位-->
</div>

Note: with relative positioning, the element keeps its original space in the flow whether or not it is offset, so moving it can make it overlap other boxes.

Installing minikube on CentOS 7.x (from mirrors in China)

Install Docker

1. Install the repo helper tools

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

2. Add the repository address

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

3. Install

sudo yum install docker-ce docker-ce-cli containerd.io

4. Start the Docker service

sudo systemctl start docker

Uninstall Docker

1. Remove the docker-ce package

sudo yum remove docker-ce

2. Delete images, data volumes, etc.

sudo rm -rf /var/lib/docker

docker-compose command auto-completion

sudo curl -L https://raw.githubusercontent.com/docker/compose/1.24.0/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose

Download kubectl from the Aliyun mirror

curl -L https://code.aliyun.com/khs1994-docker/kubectl-cn-mirror/raw/1.14.0/kubectl-`uname -s`-`uname -m` > kubectl-`uname -s`-`uname -m`
chmod +x kubectl-`uname -s`-`uname -m`
./kubectl-`uname -s`-`uname -m` version
sudo mv kubectl-`uname -s`-`uname -m` /usr/local/bin/kubectl

Install VirtualBox

yum update
reboot
yum install -y kernel-devel kernel-headers gcc make perl
yum -y install wget
wget https://www.virtualbox.org/download/oracle_vbox.asc
rpm --import oracle_vbox.asc
wget http://download.virtualbox.org/virtualbox/rpm/el/virtualbox.repo -O /etc/yum.repos.d/virtualbox.repo
yum install -y VirtualBox-5.2
systemctl status vboxdrv

Install minikube

curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v1.1.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
# Minikube also supports --vm-driver=none, which runs the Kubernetes components directly on the host and requires Docker to be installed locally. With the none driver before version 0.27, running minikube delete removes the /data directory, so beware (see the issue description). The none driver also runs an insecure API Server, which is a security risk; it is not recommended on a personal workstation.
minikube start --vm-driver=none

Fixing WebStorm path aliases so click-through navigation works

  1. Create a file named webstorm.config.js and put it in the config directory
  2. Under Preferences | Languages & Frameworks | JavaScript | Webpack, select webstorm.config.js

'use strict';
const path = require('path');

module.exports = {
  context: path.resolve(__dirname, './'),
  resolve: {
    extensions: ['.js', '.vue', '.json','.ts'],
    alias: {
      '@/assets': path.resolve(__dirname, '..', 'src/assets'),
      '@/components': path.resolve(__dirname, '..', 'src/components'),
      '@/containers': path.resolve(__dirname, '..', 'src/containers'),
      '@/utils': path.resolve(__dirname, '..', 'src/utils'),
      '@/package': path.resolve(__dirname, '..', 'package.json'),
      '@/project': path.resolve(__dirname, '..', 'project.config.json'),
    },
  },
};

Installing a Kubernetes cluster offline with yum

1. Install kubelet, kubeadm and kubectl

#On a server with internet access
#1. Install yumdownloader
yum install yum-utils -y
#Set up the yum repos
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum clean all && yum makecache

2. Fetch the k8s packages and their dependencies (the repotrack command can also download the complete set of rpms)

mkdir /tmp/k8s
yumdownloader --resolve --destdir /tmp/k8s libxml2-python.x86_64 0:2.9.1-6.el7.5 python-chardet.noarch 0:2.2.1-3.el7 python-kitchen.noarch 0:1.1.1-5.el7 libxml2.x86_64 0:2.9.1-6.el7.5
yumdownloader --resolve --destdir /tmp/k8s kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 --disableexcludes=kubernetes
yumdownloader --resolve --destdir /tmp/k8s createrepo
tar zcf k8s.tar.gz /tmp/k8s

#Note: repotrack can also be used to download the rpms for offline use

3. Upload k8s.tar.gz to /tmp on the offline server

#On the offline server
#1. Unpack the archive
tar zxf /tmp/k8s.tar.gz

#2. Build the local repo
cd /tmp/k8s
#if libxml2 conflicts, use the upgrade command rpm -Uvh
rpm -Uvh libxml2-2.9.1-6.el7.5.x86_64.rpm
rpm -ivh libxml2-python-2.9.1-6.el7.5.x86_64.rpm
rpm -ivh deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh python-deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh createrepo-0.9.9-28.el7.noarch.rpm

rpm -ivh libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
rpm -ivh libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
rpm -ivh libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
rpm -ivh conntrack-tools-1.4.4-7.el7.x86_64.rpm
cd /tmp
createrepo k8s

#tip: command to refresh the repo after adding packages
createrepo -v --update k8s

#Disable swap
swapoff -a
# prevent the swap partition from auto-mounting at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#3. Create the yum repo file
vi /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=file:///tmp/k8s
gpgcheck=0
enabled=1
#4. Install k8s, with the base CentOS repo disabled
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

2. Install the master

# run only on the master node
# replace x.x.x.x with the master node's internal IP
# export only affects the current shell session; if you continue the install in a new shell, re-run these export commands
export MASTER_IP=10.253.144.192
# replace apiserver.demo with the dnsName you want
export APISERVER_NAME=k8s-master1.host
# the subnet for Kubernetes pods; it is created by kubernetes after install and does not need to exist on your physical network beforehand
export POD_SUBNET=10.100.0.1/16
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts


cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.4
imageRepository: registry.aliyuncs.com/k8sxio
controlPlaneEndpoint: "${APISERVER_NAME}:6443"
networking:
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "${POD_SUBNET}"
  dnsDomain: "cluster.local"
EOF
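
Note: these notes stop at writing kubeadm-config.yaml; the standard next step, implied here but not shown, is to initialize the control plane with kubeadm init --config=kubeadm-config.yaml.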

3. Join additional masters

kubeadm join k8s-master1.host:6443 --token r96ug5.p4seuoqxj9xlu46y \
--discovery-token-ca-cert-hash sha256:430454b1fc3da4d4ad01b13e9b9d8a7bf2a1eb85da4b431e3fd0cf1672178357 \
--control-plane --certificate-key d017dd100df9e187ccf3ee48f9950e59e41bfd55570a3ce12de2afef1fea84cb

4. Join worker nodes

# run only on the master node to obtain the join command parameters
kubeadm token create --print-join-command

# set the hostname
hostnamectl set-hostname k8s-worker1.host

# set up hostname resolution
echo "127.0.0.1   $(hostname)" >> /etc/hosts

# run the following only on the worker node
# replace x.x.x.x with the master node's internal IP
export MASTER_IP=10.253.144.192
# replace apiserver.demo with the APISERVER_NAME used when initializing the master
export APISERVER_NAME=k8s-master1.host
echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
#enable the kubelet service
systemctl enable kubelet.service

# run on the master again to print the full join command, then execute the printed kubeadm join ... line on the worker
kubeadm token create --print-join-command

GitLab Helm installation log: fully automated deployment of projects to the k8s cluster

OS: Mac
k8s cluster: docker-for-desktop
1. Docker and k8s configuration

  1. CPUs: 6
  2. Memory: 16 GiB
  3. Swap: 2.0 GiB
  4. Proxy: Manual proxy configuration; enter your shadowsocks address, usually 127.0.0.1:1081
  5. Check Enable kubernetes

Note: the proxy is mandatory, otherwise the images cannot be pulled. This is the recommended configuration; the minimum is 2 cores and 4 GB. The k8s images required are shown in the screenshot below:
(screenshot: k8s images required)

2. Helm configuration

  1. brew install kubernetes-helm
  2. Run the helm install command:
    helm upgrade --install gitlab gitlab/gitlab \
    --timeout 600 \
    --set global.hosts.https=false \ #disable https
    --set global.hosts.domain=example.com \ #your domain
    --set global.ingress.tls.enabled=false \ #do not enable tls
    --set certmanager-issuer.email=[email protected] \
    --set global.edition=ce #install the ce edition; ee is the enterprise edition, choose as needed
  3. Once the postgres image has been pulled and the container is running, bash into it and create the gitlab database

Note: if images keep failing to pull, check that your proxy is working correctly; 99% of the time that is the cause. The gitlab-runner registration address must not be 127.0.0.1.

Note: after a successful install, fetch the root account password with: kubectl get secret gitlab-gitlab-initial-root-password -ojsonpath='{.data.password}' | base64 --decode ; echo

Shadowsocks command-line PAC setup for getting Docker through the firewall

#install the required software (only upgrades packages, not the system kernel)
yum upgrade -y
yum install python-pip
pip install shadowsocks
yum install privoxy -y
pip install --user gfwlist2privoxy

#edit the shadowsocks config file
cat>/etc/shadowsocks.conf<<EOF
{
"server":"your overseas server IP",
"server_port":your port,
"local_address": "127.0.0.1",
"local_port":1080,
"password":"your password",
"timeout":300,
"method":"your encryption method",
"fast_open": false,
"workers": 1
}
EOF

#configure the Docker daemon proxy environment variables
mkdir /etc/systemd/system/docker.service.d
cat>/etc/systemd/system/docker.service.d/http-proxy.conf<<EOF
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8118/"
Environment="HTTPS_PROXY=http://127.0.0.1:8118/"
Environment="NO_PROXY=127.0.0.1,localhost,0.0.0.0"
EOF

#configure shell proxy environment variables
cat>>~/.bash_profile<<EOF
proxy="http://127.0.0.1:8118"
export https_proxy=$proxy
export http_proxy=$proxy
EOF

#fetch gfwlist and generate the privoxy action file
cd /tmp
wget https://raw.githubusercontent.com/gfwlist/gfwlist/master/gfwlist.txt
~/.local/bin/gfwlist2privoxy -i gfwlist.txt -f gfwlist.action -p 127.0.0.1:1080 -t socks5
cp gfwlist.action /etc/privoxy/
cat>>/etc/privoxy/config<<EOF
actionsfile gfwlist.action
EOF

#start the services
sslocal -c /etc/shadowsocks.conf -d start
service privoxy start
cd ~
source .bash_profile

systemctl daemon-reload
systemctl restart docker
